Competitions – Eufisky – The lost book

Problems of the Miklós Schweitzer Memorial Competition. 1. AoPS forum 2. Hungarian version 3. English version 4. Other problems

Suppose that $f: \mathbb{R}^+ \to \mathbb{R}^+$ is a continuous function such that for all positive real numbers $x,y$ the following is true: $$(f(x)-f(y)) \left ( f \left ( \frac{x+y}{2} \right ) - f ( \sqrt{xy} ) \right )=0.$$ Is it true that the only solution to this is the constant function?

Terence Tao's solution: Yes. If $f$ were not constant, then (since ${\bf R}^+$ is connected) it could not be locally constant, thus there exists $x_0 \in {\bf R}^+$ such that $f$ is not constant in any neighbourhood of $x_0$. By rescaling (replacing $f(x)$ with $f(x_0 x)$) we may assume without loss of generality that $x_0=1$. For any $y \in {\bf R}^+$, there thus exists $x$ arbitrarily close to $1$ for which $f(x) \neq f(y)$, hence $f((x+y)/2) = f(\sqrt{xy})$. By continuity, this implies that $f((1+y)/2) = f(\sqrt{y})$ for all $y \in {\bf R}^+$. Making the substitution $z := (1+y)/2$, we conclude that $f(z) = f(g(z))$ for all $z \in {\bf R}^+$, where $g(z) := \sqrt{2z-1}$. The function $g$ has the fixed point $z=1$ as an attractor, so on iteration and by using the continuity of $f$ we conclude that $f(z)=f(1)$ for all $z \in {\bf R}^+$, so $f$ is indeed constant. Source: here.

(1950) Let $a>0$ and $d>0$, and define $$ f(x)=\frac{1}{a}+\frac{x}{a(a+d)}+\cdots+\frac{x^n}{a(a+d)\cdots(a+nd)}+\cdots$$ Give a closed form for $f(x)$; that is, evaluate \[\sum\limits_{n = 0}^\infty {\frac{{{x^n}}}{{\prod\limits_{k = 0}^n {\left( {a + kd} \right)} }}} .\]

Solution. First, $$\prod_{k=0}^n{\frac{1}{a+kd}}=\frac{\Gamma \left( \frac{a}{d} \right)}{d^{n+1}\Gamma \left( \frac{a}{d}+n+1 \right)},$$ and since $$\gamma \left( s,x \right) =\sum_{k=0}^{\infty}{\frac{x^se^{-x}x^k}{s\left( s+1 \right) ...\left( s+k \right)}}=x^s\,\Gamma \left( s \right) \,e^{-x}\sum_{k=0}^{\infty}{\frac{x^k}{\Gamma \left( s+k+1 \right)}},$$ we have \begin{align*}\sum_{n=0}^{\infty}{\frac{x^n}{\prod\limits_{k=0}^n{\left( a+kd \right)}}}&=\frac{\Gamma \left( \frac{a}{d} \right)}{d}\sum_{n=0}^{\infty}{\frac{\left( x/d \right) ^n}{\Gamma \left( \frac{a}{d}+n+1 \right)}}\\&=\frac{\Gamma \left( \frac{a}{d} \right)}{d}\gamma \left( \frac{a}{d},\frac{x}{d} \right) \left( \frac{d}{x} \right) ^{a/d}\frac{e^{x/d}}{\Gamma \left( \frac{a}{d} \right)}=\left( \frac{d}{x} \right) ^{a/d}\frac{e^{x/d}}{d}\gamma \left( \frac{a}{d},\frac{x}{d} \right) ,\end{align*} where $\displaystyle\Gamma(s,x) = \int_x^{\infty} t^{s-1}\,e^{-t}\,{\rm d}t$ is the upper incomplete gamma function and $\displaystyle\gamma(s,x) = \int_0^x t^{s-1}\,e^{-t}\,{\rm d}t$ is the lower incomplete gamma function. See here. Alternatively, $g(x) = x^a f(x^d)$ satisfies $g'(x) = x^{a-1} + x^{d-1} g(x)$; solve the associated differential equation and conclude.

Let $a\in (0,\pi)$ and let $n$ be a positive integer. Prove that $$\int_0^{\pi}{\frac{\cos \left( nx \right) -\cos \left( na \right)}{\cos x-\cos a}dx}=\pi \frac{\sin \left( na \right)}{\sin a}.$$

Evaluate $$\int_1^{\frac{\sqrt{5}+1}{2}}{\left( \frac{\arctan x}{\arctan x-x} \right) ^2dx},$$ $$\int_0^1{\frac{\arctan x}{x\sqrt{1-x^2}}dx}.$$

Let $n$ be a positive integer. Prove that, for $0<x<\frac\pi{n+1}$, $$\sin{x}-\frac{\sin{2x}}{2}+\cdots+(-1)^{n+1}\frac{\sin{nx}}{n}-\frac{x}{2}$$ is positive if $n$ is odd and negative if $n$ is even.
\begin{align*}f_n(x) &= \sin{x} - \frac {\sin{2x}}{2} + \cdots + ( - 1)^{n + 1}\frac {\sin{nx}}{n} - \frac {x}{2},\\f_n'(x) &= - \mbox{Re}\left(\sum_{k = 1}^{n}z^k\right) - \frac12,\qquad z := -e^{ix}.\end{align*} After some simplifications we get $$ f_n'(x) = \frac {( - 1)^{n + 1}}{2}\left((1 - \cos(x))\frac {\sin((n + 1)x)}{\sin(x)} + \cos((n + 1)x)\right)$$ and $$ f_n''(x) = \frac {( - 1)^{n}}{2}\frac {(n + 1)\sin(nx) + n\sin((n + 1)x)}{1 + \cos(x)}.$$ The formula for $ f_n''$ shows that $ ( - 1)^n f_n$ is convex for $ 0 < x < \frac {\pi}{n + 1}$. Since $ f_n(0) = 0$ and $ f_n'(0) = \frac {( - 1)^{n + 1}}{2}$, we are done once we show that $ ( - 1)^{n + 1}f_n(\frac {\pi}{n + 1}) > 0$. We have to distinguish between two different, but very similar, cases, namely $ n$ odd and $ n$ even. Let us restrict to the case where $ n$ is even and prove $ f_{2n}(\frac {\pi}{2n + 1}) < 0$. \begin{align*}f_{2n}\left( \frac{\pi}{2n+1} \right) &=\sum_{k=1}^{2n}{\left( -1 \right)}^{k+1}\frac{\sin \left( \frac{k\pi}{2n+1} \right)}{k}-\frac{\pi}{2\left( 2n+1 \right)}\\&=\frac{\pi}{2n+1}\left( \sum_{k=1}^n{\frac{\sin \left( \frac{\left( 2k-1 \right) \pi}{2n+1} \right)}{\frac{\left( 2k-1 \right) \pi}{2n+1}}}-\sum_{k=1}^n{\frac{\sin \left( \frac{2k\pi}{2n+1} \right)}{\frac{2k\pi}{2n+1}}} \right) -\frac{\pi}{2\left( 2n+1 \right)}.\end{align*} The function $ x \mapsto \frac {\sin(x)}{x}$ is decreasing on $ [0,\pi]$, thus both sums lie between $ a$ and $ a + \frac {2\pi}{2n + 1}$, where $ a = \int_0^{\pi}\frac {\sin(x)}{x}\,dx$. Thus $$ f_{2n}\left(\frac {\pi}{2n + 1}\right) < \frac {\pi}{2n + 1}\cdot\frac {2\pi}{2n + 1} - \frac {\pi}{2(2n + 1)} < 0.$$

2015 S.-T. Yau College Student Mathematics Contest, Analysis (individual)

Let $f_n\in L^2(\mathbf R)$ be a sequence of measurable functions on the line with $f_n\rightarrow f$ almost everywhere. Let $||f_n||_{L^2}\rightarrow||f||_{L^2}$; prove that $||f_n-f||_{L^2}\rightarrow 0$.

Proof (Weingarten). $L^2(\mathbf R)$ is a Hilbert space and $\|f_n\|_{L^2}\to\|f\|_{L^2}$ as $n\to\infty$, so we only need to prove that $f_n\to f$ weakly in $L^2(\mathbf R)$, that is, for each $g\in L^2(\mathbf R)$, \[\lim_{n\to\infty}\int_{\mathbf R}f_ng=\int_{\mathbf R} fg.\] For each $\epsilon>0$, there exists $R>0$ such that \[\int_{|x|>R}|g|^2<\epsilon^2,\] and by the absolute continuity of the integral of $g$, there exists a positive $\delta$ such that for any Lebesgue measurable subset $E$ of $\mathbf R$ with $m(E)<\delta$, \[\int_E|g|^2<\epsilon^2.\] By Egoroff's theorem, there exists a subset $E_\delta$ of $(-R,R)$ with $m((-R,R)\setminus E_\delta)<\delta$ such that the convergence $\lim\limits_{n\to\infty}f_n=f$ is uniform on $E_\delta$, so there exists $N\in\mathbf Z_+$ such that \[(\forall n>N,x\in E_\delta),(|f_n-f|<\epsilon/\sqrt{2R}).\] Set $M=\|g\|_{L^2}+\|f\|_{L^2}+\sup\limits_n\|f_n\|_{L^2}$; hence for all $n>N$ we have the estimate \begin{align*}\int_{\mathbf R}|f_n-f|\cdot|g|&=\left(\int_{E_\delta}+\int_{(-R,R)\setminus E_\delta}+\int_{|x|>R}\right)|f_n-f|\cdot|g|\\&\leq\sqrt{\int_{E_\delta}|f_n-f|^2\cdot\int_{E_\delta}|g|^2}+\sqrt{\int_{(-R,R)\setminus E_\delta}|f_n-f|^2\cdot\int_{(-R,R)\setminus E_\delta}|g|^2}\\&+\sqrt{\int_{|x|>R}|f_n-f|^2\cdot\int_{|x|>R}|g|^2}\\&\leq M\epsilon+2\sqrt{2}M\epsilon.\end{align*} The proof is finished.

Let $f$ be a continuous function on $[a,b]$ and define $M_n=\int ^b_a f(x)x^n{\rm d}x$. Suppose that $M_n=0$ for all $n$; show that $f(x)=0$ for all $x$.
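One standard route to the last problem, sketched here via the Weierstrass approximation theorem: since every moment $M_n$ vanishes, linearity gives $\int_a^b f(x)p(x)\,{\rm d}x=0$ for every polynomial $p$. Choose polynomials $p_k\to f$ uniformly on $[a,b]$; then \[\int_a^b f^2\,{\rm d}x=\lim_{k\to\infty}\int_a^b f\,p_k\,{\rm d}x=0,\] and since $f^2$ is continuous and nonnegative, $f\equiv 0$ on $[a,b]$.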
Determine all entire functions $f$ that satisfy the inequality $$|f(z)|\leq|z|^2|{\rm Im}(z)|^2$$ for $z$ sufficiently large. Describe all holomorphic functions on the unit disk $D=\{z:|z|\leq 1\}$ which map the boundary of the disk into the boundary of the disk. Let $T:H_1\rightarrow H_2$ and $Q:H_2\rightarrow H_1$ be bounded linear operators between Hilbert spaces $H_1,\ H_2$, and suppose $QT={\rm Id}-S_1$ and $TQ={\rm Id}-S_2$, where $S_1$ and $S_2$ are compact operators. Prove that ${\rm Ker}\,T=\{v\in H_1:Tv=0\}$ and ${\rm Coker}\,T=H_2/\overline{{\rm Im}\,T}$, where ${\rm Im}\,T=\{Tv\in H_2:v\in H_1\}$, are finite dimensional, and that ${\rm Im}\,T$ is closed in $H_2$. Let $H_1$ be the Sobolev space on the unit interval $[0,1]$, i.e. the Hilbert space consisting of functions $f\in L^2([0,1])$ such that $$||f||_1^2=\sum^\infty_{n=-\infty}(1+n^2)|\hat f(n)|^2<\infty,$$ where $$\hat f(n)=\frac 1 {2\pi}\int_0^1f(x)e^{-2\pi inx}{\rm d}x$$ are the Fourier coefficients of $f$. Show that there exists a constant $C>0$ such that $$||f||_{L^\infty}\leq C||f||_1$$ for all $f\in H_1$, where $||\cdot||_{L^\infty}$ stands for the usual supremum norm. (Hint: Use Fourier series.)
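Following the hint for the last problem, a brief sketch: by the Cauchy–Schwarz inequality, \[\sum_{n=-\infty}^{\infty}|\hat f(n)|\leq\left(\sum_{n=-\infty}^{\infty}\frac{1}{1+n^2}\right)^{1/2}\left(\sum_{n=-\infty}^{\infty}(1+n^2)|\hat f(n)|^2\right)^{1/2}=C_0\,||f||_1,\] so the Fourier series of $f$ converges absolutely and uniformly, and bounding $|f(x)|$ pointwise by this sum (up to the constant fixed by the normalization chosen for $\hat f(n)$) yields $||f||_{L^\infty}\leq C||f||_1$.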
An automated framework for NMR chemical shift calculations of small organic molecules Yasemin Yesiltepe, Jamie R. Nuñez, Sean M. Colby, Dennis G. Thomas, Mark I. Borkum, Patrick N. Reardon, Nancy M. Washton, Thomas O. Metz, Justin G. Teeguarden, Niranjan Govind & Ryan S. Renslow (ORCID: orcid.org/0000-0002-3969-5570) When using nuclear magnetic resonance (NMR) to assist in chemical identification in complex samples, researchers commonly rely on databases for chemical shift spectra. However, building such libraries experimentally typically depends on authentic standards. Considering complex biological samples, such as blood and soil, the entirety of NMR spectra required for all possible compounds would be infeasible to ascertain due to limitations of available standards and experimental processing time. As an alternative, we introduce the in silico Chemical Library Engine (ISiCLE) NMR chemical shift module to accurately and automatically calculate NMR chemical shifts of small organic molecules through use of quantum chemical calculations. ISiCLE performs density functional theory (DFT)-based calculations for predicting chemical properties—specifically NMR chemical shifts in this manuscript—via the open source, high-performance computational chemistry software, NWChem. ISiCLE calculates the NMR chemical shifts of sets of molecules using any available combination of DFT method, solvent, and NMR-active nuclei, using both user-selected reference compounds and/or linear regression methods. Calculated NMR chemical shifts are provided to the user for each molecule, along with comparisons with respect to a number of metrics commonly used in the literature. Here, we demonstrate ISiCLE using a set of 312 molecules, ranging in size up to 90 carbon atoms. For each, calculations of NMR chemical shifts have been performed with 8 different levels of DFT theory, and with solvation effects using the implicit solvent Conductor-like Screening Model. The DFT method dependence of the calculated chemical shifts has been systematically investigated through benchmarking and subsequently compared to experimental data available in the literature. Furthermore, ISiCLE has been applied to a set of 80 methylcyclohexane conformers, combined via Boltzmann weighting and compared to experimental values. We demonstrate that our protocol shows promise in the automation of chemical shift calculations and, ultimately, the expansion of chemical shift libraries. Metabolomics is being increasingly applied in biomedical and environmental studies, despite the technical challenges facing comprehensive and unambiguous identification of detected metabolites [1,2,3]. The capability to routinely measure and identify even a modicum of biologically important molecules within all of chemical space—greater than 10^60 compounds [4]—remains a grand challenge in biology. The prevention and treatment of metabolic diseases, determining the interactions between plant and soil microbial communities, and uncovering the building blocks that led to abiogenesis will all strongly depend on confidently identifying small molecules, and thus understanding the mechanisms involved in the complex processes of metabolic networks [5,6,7]. The current gold standard for chemical identification requires matching chemical features to those measured from an authentic chemical standard. However, authentic standards are not available for the vast majority of molecules.
For example, only 17% of compounds found in the Human Metabolome Database (HMDB) and less than 1% of compounds found in exposure chemical databases like the U.S. Environmental Protection Agency (EPA) Distributed Structure-Searchable Toxicity (DSSTox) Database [8] can be purchased in pure form [9, 10]. Although analytical techniques like nuclear magnetic resonance (NMR) spectroscopy [11,12,13] and mass spectrometry (MS) [14,15,16] have been applied for the identification of metabolites and to build libraries [17,18,19,20,21], determining the complete composition of entire metabolomes is still non-trivial for both technical and economic reasons. In this regard, libraries constructed of experimentally obtained data are too limited, expensive, and slow to build, even for libraries with thousands of metabolites [22,23,24,25]. The most practical approach to expanding reference libraries for comprehensive identification of compounds detected in metabolomics studies is through in silico calculation of molecular attributes. Molecular properties that can be both accurately predicted computationally and consistently measured experimentally may be used in "standards free" metabolomics identification approaches. The metabolomics community has made many advances in calculations of measurable chemical attributes, such as chromatographic retention time [26, 27], tandem mass spectra [28,29,30], ion mobility collision cross section [31, 32], and NMR chemical shifts [33]. Recently, high throughput computation of chemical properties has been demonstrated using machine learning approaches [34,35,36,37]. These tools are a good resource for the metabolomics community; however, machine learning methods are limited by the size and scope of the initial training set, and thus ultimately limited by the number of authentic chemical standards available for purchase. In contrast, structure-based approaches, utilizing first principles of quantum chemical calculations, leverage our understanding of the underlying chemistry and physics to directly predict chemical properties of any chemically valid molecule. Thus, quantum chemical calculations enable us to overcome the reliance on authentic chemical standards in metabolomics. In this study, we focus on expanding the utility of density functional theory (DFT), a widely used electronic structure approach, which has been applied to predict NMR chemical shifts [38,39,40,41]. DFT enables examination of molecular conformers [42,43,44,45] and allows custom solvent conditions [46,47,48]. Ultimately, computational modeling can be used in the rapid identification and study of thousands of metabolites, culminating in in silico metabolome libraries of multiple chemical properties. Furthermore, the same tools that can be used to aid identification of small molecules in complex samples can also be used for structure confirmation and correction. For example, we recently used the tool described in this manuscript to help correct the misidentification of the isoflavonoid wrightiadione, reassigning the structure to its isobaric isostere, the alkaloid tryptanthrin [49]. Metabolomics researchers unfamiliar with DFT or similar calculations may find quantum chemical calculations complicated or challenging to apply quickly, and thus avoid these techniques.
To this end, and to help bring DFT calculations to large sets of small organic molecules relevant to the mainstream metabolomics community, we have developed a Python-based workflow and analysis package, ISiCLE (in silico Chemical Library Engine). Its NMR chemical shift module employs DFT methods through use of NWChem [50], a high-performance quantum chemistry software package developed at Pacific Northwest National Laboratory (PNNL). The module automates calculations of NMR chemical shifts, including solvent effects via the COnductor-like Screening Model (COSMO) [51], for user-specified NMR-active nuclei of a given set of molecules and multiple DFT methods. ISiCLE also calculates the corresponding errors if experimental values are available. In this paper, we describe ISiCLE's NMR module, provide a working tutorial example, demonstrate its use through the calculation of chemical shifts for a large set of small molecules, and, finally, show how ISiCLE can be applied to rapidly calculate chemical shifts of arrays of Boltzmann-weighted conformers to yield high accuracy chemical shift calculations. In silico Chemical Library Engine (ISiCLE)—NMR module ISiCLE is a Python module that provides straightforward automation of DFT using NWChem, an open source, high-performance computational quantum chemistry package, developed at Pacific Northwest National Laboratory (PNNL), for geometry optimization and chemical shift and solvent effect calculations. Figure 1 shows a schematic representation of ISiCLE. For typical use, ISiCLE requires only a list of molecules and a list of desired levels of DFT theory from the user. For more advanced use cases, users may adjust NWChem parameters by modifying the provided .nw template file. Schematic representation of inputs and outputs of the ISiCLE NMR module Here, we describe each step of a typical ISiCLE run (see Fig. 2 for a general workflow for using the ISiCLE NMR module). The step-by-step conceptual workflow for the ISiCLE NMR module. Conformer generation with Boltzmann weighting is optional and will be automated in subsequent versions. Please see github.com/pnnl/isicle for the latest versions. To start, users must prepare File A, containing a list of molecules, and File B, containing a list of DFT combinations, both of which are required to be in Excel format (.xls or .xlsx). File A must contain all input molecules either as (i) International Union of Pure and Applied Chemistry (IUPAC) International Chemical Identifier (InChI) strings [52, 53] or (ii) XYZ files, free-format text files containing the XYZ coordinates of the atoms. In subsequent versions, alternative file formats will be supported, such as TSV for inputs and outputs. Once prepared, the user runs ISiCLE. First, ISiCLE opens File A for the input molecules. OpenBabel, an open-source chemical informatics toolbox available with Python wrappers [54, 55], is called to generate geometry files. For InChI inputs, OpenBabel generates .xyz files for each molecule, unless .xyz files are provided, and converts InChI to InChIKey for naming files (otherwise, the base names of XYZ files are used for naming subsequent files). Next, OpenBabel applies the Merck molecular force field (MMFF94) [56] to generate a rough three-dimensional (3D) structure for each molecule, resulting in associated .mol files. ISiCLE then prepares NWChem input files based on the specified DFT methods, solvents, shielding parameters, and corresponding task directives given in the user-prepared File B.
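As an illustration of the conversion step just described, the following minimal sketch uses OpenBabel's pybel bindings to turn an InChI string into a rough MMFF94 three-dimensional geometry and to write .xyz and .mol files named by InChIKey. The function name, file naming, and the methanol example are illustrative assumptions; this is not ISiCLE's internal code.

```python
# Minimal sketch of the InChI -> 3D geometry step described above, using
# OpenBabel's pybel bindings (OpenBabel >= 3.0; older installs use `import pybel`).
from openbabel import pybel

def inchi_to_xyz(inchi, out_prefix="./"):
    mol = pybel.readstring("inchi", inchi)      # parse the InChI string
    inchikey = mol.write("inchikey").strip()    # InChIKey, used here for file naming
    mol.make3D(forcefield="mmff94", steps=50)   # rough 3D structure via MMFF94
    mol.write("xyz", out_prefix + inchikey + ".xyz", overwrite=True)
    mol.write("mol", out_prefix + inchikey + ".mol", overwrite=True)
    return inchikey

if __name__ == "__main__":
    # methanol as a quick test case
    print(inchi_to_xyz("InChI=1S/CH4O/c1-2/h2H,1H3"))
```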
Finally, ISiCLE submits the appropriate files to, if relevant, a remote NWChem installation (typically on a non-local, networked, high-performance computer), and then retrieves the output files once the calculations are complete. Additional information and further details about ISiCLE are provided in Additional file 1 (S1). Note that future versions of ISiCLE will automatically generate conformers of a given molecule, as part of the seamless pipeline. For each molecule, ISiCLE generates MDL Molfiles (.mol) [57] that contain isotropic shieldings and NMR chemical shifts. ISiCLE exports isotropic shieldings for each molecule and appends them to a MDL Molfile in the same atomic order as the original XYZ files. Then, ISiCLE converts isotropic shieldings to NMR chemical shifts by subtracting the isotropic shielding constants for the specified nuclei of the molecule of interest from those of a reference compound computed at the same level of theory (Eq. 1). For this manuscript, tetramethylsilane (TMS) is used as a reference compound. The experimental chemical shifts of TMS are assigned a value of zero, thus the calculation of NMR chemical shifts needs only isotropic shieldings of TMS [58,59,60,61]. Any molecule can be used as reference in ISiCLE as long as it has the specified nuclei and its experimental (or calculated) chemical shifts are supplied. How a user inputs experimental data is explained in detail in the supplemental tutorial. The equation for calculating chemical shifts from isotropic shieldings is: $$ \delta_{i} = \sigma_{ref} - \sigma_{i} + \delta_{ref} $$ where \( \delta_{i} \) and \( \delta_{ref} \) are the chemical shifts of atom i (of the molecule of interest) and the reference molecule, respectively. \( \sigma_{i} \) and \( \sigma_{ref} \) are the isotropic shielding constants of atom i and the reference molecule, respectively. ISiCLE also calculates errors in NMR chemical shifts if experimental data is provided in the MDL Molfiles in the required format, as explained in the tutorial. The errors are quantified in terms of mean absolute error (MAE) (Eq. 2), corrected mean absolute error (CMAE) (Eq. 3), root mean square error (RMSE) (Eq. 4), and maximum absolute error (MAXAE) (Eq. 5). $$ MAE = \frac{{\mathop \sum \nolimits_{i = 1}^{N} \left| {\delta_{exp} - \delta_{calc} } \right|}}{N} $$ $$ CMAE = \frac{{\mathop \sum \nolimits_{i = 1}^{N} \left| {\delta_{exp} - (\delta_{calc} - intercept)/slope} \right|}}{N} $$ $$ RMSE = \sqrt {\frac{{\mathop \sum \nolimits_{i = 1}^{N} \left( {\delta_{exp} - \delta_{calc} } \right)^{2} }}{N}} $$ $$ \max\limits_{i = 1,2, \ldots ,N} \left| {\delta_{exp} - \delta_{calc} } \right| $$ where N is the total number of chemical shifts, and \( \delta_{calc} \) and \( \delta_{exp} \) are the lists of calculated and experimental chemical shifts, respectively. Empirical scaling of isotropic shieldings or NMR chemical shifts is the most common approach to remove systematic errors. If experimental data is provided, ISiCLE uses two optional approaches for its linear regression method, where slope and intercept values are derived from (i) regression of computed NMR chemical shifts versus experimental NMR chemical shifts using (Eq. 6), and/or (ii) regression of computed isotropic shieldings versus experimental NMR chemical shifts using (Eq. 7). $$ \delta_{exp} = \frac{{intercept - \delta_{calc} }}{ - slope} $$ $$ \delta_{exp} = \frac{{intercept - \sigma_{calc} }}{ - slope} $$ where \( \sigma_{calc} \) is the list of computed isotropic shielding constants of the molecules.
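The error metrics of Eqs. 2–5 and the empirical scaling of Eq. 6 can be reproduced with a few lines of NumPy. The sketch below is illustrative and is not the ISiCLE implementation itself; the example shift values at the end are hypothetical.

```python
# Illustrative NumPy sketch of the error metrics in Eqs. 2-5 and the scaling of Eq. 6.
import numpy as np

def shift_errors(delta_exp, delta_calc):
    delta_exp = np.asarray(delta_exp, dtype=float)
    delta_calc = np.asarray(delta_calc, dtype=float)
    # Eq. 6: linear regression of computed shifts against experimental shifts
    slope, intercept = np.polyfit(delta_exp, delta_calc, 1)
    scaled = (delta_calc - intercept) / slope          # empirically scaled shifts
    resid = delta_exp - delta_calc
    return {
        "MAE": np.mean(np.abs(resid)),                 # Eq. 2
        "CMAE": np.mean(np.abs(delta_exp - scaled)),   # Eq. 3
        "RMSE": np.sqrt(np.mean(resid ** 2)),          # Eq. 4
        "MAXAE": np.max(np.abs(resid)),                # Eq. 5
        "slope": slope,
        "intercept": intercept,
    }

# Hypothetical 13C shifts (ppm) for three nuclei: experimental vs. computed
print(shift_errors([128.4, 77.2, 21.5], [130.1, 79.0, 22.3]))
```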
Alternatively, if the user does not provide experimental NMR chemical shifts, ISiCLE can scale NMR chemical shifts using provided intercept and slope values. The scaled NMR chemical shifts are appended to MDL Molfiles. A detailed description of InChIs and InChIKeys, and why they were chosen, can be found in the Additional file 1 (S2). Similarly, justification for the use of MDL Molfiles is explained in Additional file 1 (S2). In the next version, ISiCLE will be compatible with other file formats, such as the NMReDATA [62] format that was recently designed for reporting NMR data. To help ease the use of our data, we provide NMReDATA files for the demonstration set in the Additional file 2. Furthermore, installation details for OpenBabel and other required Python packages are provided in the tutorial (see Additional file 2). The Windows-based tutorial provides step-by-step instructions for running ISiCLE for the first time, including information for installation of packages, properly preparing input files, running a calculation, and obtaining output files. The tutorial includes example molecules with anticipated output files for use as a practice set and for benchmarking purposes. It is designed to guide users of ISiCLE and NWChem in the use of the input files and scripts, demonstrated using three small molecules: methanol, methyl-isothiocyanate, and nitromethane. Calculation time may vary (depending on network speed, local computational power, etc.), but it is expected to take less than 10 min. Demonstration set For an initial demonstration of ISiCLE, we have compiled a molecule set of 312 compounds from previous studies: Alver [63], Asiri et al. [64], Bally and Rablen [65], Bagno et al. [66], Borkowski et al. [67], Coruh et al. [68], Fulmer et al. [69], Hill et al. [70], Izgi et al. [71], Karabacak et al. [72], Krishnakumar et al. [73,74,75], Kwan and Liu [45], Li et al. [76], Lomas [77], Osmialowski et al. [78], Parlak et al. [79], Perez et al. [80], Rablen et al. [81], Sarotti and Pellegrinet [82, 83], Sebastian et al. [84], Seca et al. [52], Senyel et al. [85, 86], Sridevi et al. [87], Tormena and da Silva [88], Vijaya and Sankaran [89], Watts et al. [53], Wiitala et al. [90, 91], Willoughby et al. [92], and Yang et al. [93]. We aimed to cover a broad chemical space and distribution of sizes. Our criteria also included the existence of all 1H and/or 13C NMR experimental data in chloroform solvent, referenced to TMS at room temperature, for comparisons. Note that the NMR spectra of each molecule set were not recorded at the same magnetic field strengths. A summary of the demonstration set compounds is given in Table 1. Detailed information about the individual sets is given in Additional file 1 (S3). Table 1 Demonstration set sources and details As a first demonstration of ISiCLE, a benchmark study was performed with 8 different DFT methods for the calculation of 1H and 13C NMR chemical shifts in chloroform. Each compound was optimized with the Becke three-parameter Lee–Yang–Parr (B3LYP) hybrid functional [94,95,96] and the 6-31G(d) split-valence basis set [97]. This level of theory for geometry optimization was chosen because of its broad application in the literature for organic molecules [98, 99]. Isotropic magnetic shielding constants were calculated with 4 different functionals, BLYP [94, 95], B3LYP [97,98,99], B35LYP, and BHLYP [100].
DFT methods were selected with different Hartree–Fock (HF) ratios: BLYP (0% HF), B3LYP (20% HF), B35LYP (35% HF), and BHLYP (50% HF). Each method was tested with 2 different correlation-consistent Dunning basis sets (double-zeta cc-pVDZ [101] or triple-zeta cc-pVTZ [101]). All basis sets were obtained from the Environmental Molecular Sciences Laboratory (EMSL) Basis Set Exchange [102,103,104]. For each optimized geometry, 1H and 13C NMR chemical shifts were computed relative to TMS using the Gauge Including Atomic Orbitals (GIAO) formalism [105]. Chloroform solvation effects were simulated using COSMO. For a second demonstration of ISiCLE, NMR chemical shift and frequency calculations (with subsequent Boltzmann weighting) were performed for two sets of axial and equatorial conformers of methylcyclohexane (40 conformers each). We performed in vacuo molecular dynamics (MD) simulations, using the sander MD software program from AmberTools (version 14) [106], to generate 80 conformers of the methylcyclohexane compound. These conformers were generated in four stages. First, the initial geometries of axial and equatorial conformers were taken from the study of Willoughby et al. [92]. Second, a short energy minimization run was performed to relax the initial structure and to remove any non-physical atom contacts. Third, a short 50 ps MD run was performed (in 0.5 fs time steps) to heat the structure from 0 to 300 K, without non-bonded cutoffs. In the fourth step, we performed 8 simulated annealing cycles, where each cycle was run for 1600 ps in 1 fs MD steps with the following temperature profile: heating from 300 to 600 K (0–300 ps), equilibration at 600 K (300–800 ps), cooling from 600 to 300 K (800–1100 ps), and equilibration at 300 K (1100–1600 ps). Ten conformers from the equilibration stage at 300 K of each simulated annealing cycle were randomly selected to obtain the 80 conformers. After the conformers were obtained, M06-2X/6-31+G(d,p) was used for the geometry optimization and frequency calculations, and B3LYP/6-311+G(2d,p) for the calculation of NMR chemical shifts. Relative free energies of the conformations and Boltzmann-weighted NMR chemical shifts were compared to those found in the literature [92, 107,108,109]. All results shown in this manuscript were generated using the Cascade high-performance computer (1440 compute nodes, 23,040 Intel Xeon E5-2670 processor cores, 195,840 Intel Xeon Phi 5110P coprocessor cores, and 128 GB memory per compute node [110]), in EMSL (a U.S. national scientific user facility) located at PNNL. Cascade is available for external users through a free, competitive proposal process. ISiCLE can utilize local clusters or high-performance computing resources available to the user. NWChem is freely available and can be downloaded from the website [111, 112]. NMR chemical shift calculations have been used successfully to identify new molecules, determine metabolite identities, and eliminate structural misassignments [59, 113]. In the last two decades, many research groups have performed benchmark DFT studies on the accuracy of optimized molecular geometries [92, 114,115,116], functionals [117, 118], basis sets [88, 119], and solvation models [90, 120, 121] for NMR chemical shifts [60, 122,123,124].
Each group uses a molecule set focusing on a unique chemical class [78, 125,126,127,128,129], and several groups have recommended different exchange–correlation (XC) energy functionals with a different basis set for a particular condition or suited to specific chemical functionalities and properties [70, 77, 130,131,132,133]. The prevailing opinion is that reliable isotropic NMR chemical shifts strongly depend on accurate calculations of molecular geometries and, to an extent, on the inclusion of HF exchange in the selected DFT methods [134, 135]. On the other hand, increasing the size of the basis set does not increase the accuracy beyond a point [136, 137]. The ISiCLE software can be installed locally. As seen in Fig. 1, it requires only two input files, prepared in Excel: a sequence of InChI or XYZ molecule geometry files, and a sequence of DFT methods of the user's choice. Preparation of NWChem "run files," 3D molecule geometry files, and/or Linux/Unix shell script "drivers" is not required. As output, ISiCLE reports the isotropic shieldings calculated by NWChem and the chemical shifts calculated with respect to a reference molecule and/or via a user-specified linear regression technique. ISiCLE is a promising tool contributing to standards-free metabolomics, which depends on the ability to calculate properties for thousands of molecules and their associated conformers. Application 1: chemical shift calculations for a demonstration set of molecules To test ISiCLE, we generated a set of 312 molecules. This set is large relative to other metabolomic molecule sets found in the literature, which in our literature survey averaged 34 molecules (Table 1). Our molecule set ranges from small- to large-sized molecules (number of carbon atoms ranging from 1 to 90), and experimental 13C and 1H NMR data in chloroform were available for each of them. Our set also spans a wide array of chemical classes, including acetylides, alkaloids, benzenoids, hydrocarbons, lipids, organohalogens, and organic nitrogen and oxygen compounds. ISiCLE was used to successfully perform DFT calculations for this set under chloroform solvation using eight different levels of DFT theory (4 different functionals and 2 basis sets for 13C and 1H). A total of 2494 carbon nuclei and 3127 hydrogen nuclei were calculated for all 312 molecules of the demonstration set and compared with experimental data. Deviation bars indicating MAE and MAXAE are plotted for each method in Fig. 3. For both 13C and 1H NMR chemical shifts, the MAE of each method with cc-pVTZ is higher than that with cc-pVDZ. For 13C, the MAE of each method with cc-pVTZ (7–10 ppm) is higher than that with cc-pVDZ (5–6 ppm). MAEs of methods with the larger basis set deviate more than those with the smaller basis set. The smallest deviations are observed for B3LYP and B35LYP, both in MAE and MAXAE results. The same situation is observed for 1H NMR chemical shifts as well: the MAE of each method with cc-pVTZ (~ 0.35 ppm) is higher than that with cc-pVDZ (~ 0.30 ppm). In contrast to 13C NMR chemical shifts, 1H NMR chemical shifts are better predicted with methods using larger basis sets (cc-pVTZ). Although the error differences among the methods may be too small to confidently identify the outperforming method, B3LYP/cc-pVDZ is the most successful combination in the calculation of 13C and 1H NMR chemical shifts for our application shown here. Mean absolute errors (MAE) and maximum absolute errors (MAXAE) of chemical shifts for the demonstration set.
The grey bars represent MAE, the black bars represent MAXAE. For all methods, geometries are optimized at B3LYP/6-31G(d) in chloroform. Figure 4 shows computational costs of DFT combinations for the demonstration set. We found that the smaller basis set (cc-pVDZ) in the calculation of both 13C and 1H NMR chemical shifts was an acceptable compromise between accuracy and computational performance, compared with the larger cc-pVTZ basis. This finding is similar to a recent benchmark study [138] that showed B3LYP/cc-pVDZ is a reliable combination, balancing accuracy with computational cost in 13C chemical shift calculations. The larger basis set (cc-pVTZ) took 2–3 times longer to complete than cc-pVDZ (in terms of total CPU time). The computational times of the isotropic shielding and chemical shift calculations for this demonstration set are given in the file DemonstrationSet_CPUtimes.xlsx in Additional file 2. Computational costs of DFT methods performed for the demonstration set. Each bar is for two DFT methods with basis sets of cc-pVDZ and cc-pVTZ. The grey bars represent CPU times for the methods with cc-pVDZ and the black bars represent those with cc-pVTZ. Effect of scaling by linear regression We performed the most general approach to error reduction, empirical scaling. Our molecule set has 1554 and 1830 experimental 13C and 1H NMR chemical shifts, respectively. This provides confidence for applying linear regression effectively, as it reduces the possibility of overfitting. Empirical scaling was applied to the data obtained with the best combination, B3LYP/cc-pVDZ, using two different relationships: computed shifts versus experimental chemical shifts (Eq. 6), and computed isotropic shieldings versus experimental shifts (Eq. 7). Once the empirical scaling was applied, the accuracy for 13C chemical shifts and 1H chemical shifts improved by 0.7 and 0.11 ppm, respectively. The slopes of the fits to our computed NMR chemical shifts and shieldings deviate from unity (desired slope = 1) by 0.02 for both 13C and 1H NMR chemical shifts. Linear fits with correlation coefficients of 0.99 (Fig. 5a, b) and 0.93 (Fig. 5c, d) for 13C and 1H NMR chemical shifts, respectively, were observed, which also shows that B3LYP/cc-pVDZ is able to produce data free from random error. Results of linear regression applied to the 13C and 1H NMR chemical shifts obtained by other DFT methods are given in the Additional file 1 (S5). Linear correlation plots of a 13C and c 1H isotropic shielding values, and b 13C and d 1H NMR chemical shifts versus experimental NMR chemical shifts. Chemical shifts are calculated using the GIAO/B3LYP/cc-pVDZ//B3LYP/6-31G(d) level of theory for the demonstration set in CDCl3 (312 molecules (1554 carbons and 1830 hydrogens)). R2 indicates the correlation coefficient. Detailed look at 13C NMR chemical shifts The carbon (13C) magnetic shieldings and chemical shifts derived from the various DFT methods are highly correlated with the experimental values, as shown by a correlation coefficient of 0.99 (Fig. 5). The inclusion of a scaling factor enhances the performance of theoretical calculations with B3LYP/cc-pVDZ//B3LYP/6-31G(d) and decreases the MAE in 13C NMR chemical shifts for this set by approximately 13%. There has been a trend toward using multiple references, such that each molecule is referenced to a molecule with similar properties to improve the accuracy of NMR chemical shifts [123, 134, 139]. Sarotti et al.
examined the influence of the reference compound used in the 13C [83] and 1H [82] NMR chemical shift calculations over a set of organic compounds, all of which were included in our calculations. They recommended the use of benzene and methanol as reference standards in the calculations of chemical shifts of sp-/sp2- and sp3- hybridized carbon atoms, respectively, instead of TMS for all types of carbon atoms [140]. Prompted by the discussion in the study of Grimblat et al. [141] about the distribution of the errors observed in sp2- and sp3- carbons, we determined the distributions of the chemical shifts of sp2- (933 carbons) and sp3- (745 carbons) hybridized carbons (Fig. 6). The sp2- and sp3- derived series of carbons show two separate chemical shift distributions and two separate error distributions over a much larger variety of compounds than Grimblat et al. For our demonstration set, the errors between calculated and experimental sp2- and sp3- chemical shifts both more closely resemble a Student's t-distribution [58, 142] than a normal distribution [138, 143]. The correlation coefficients of the errors of sp2- and sp3- carbons are 0.93 and 0.78, and 0.98 and 0.95 for Student's t-distribution and normal distribution, respectively. Chemical shifts of sp2- and sp3- hybridized carbon atoms. a Chemical shifts, b associated errors. Chemical shifts were calculated using the B3LYP/cc-pVDZ//B3LYP/6-31G(d) level of theory in CDCl3. Furthermore, we examined the bonded neighbors of each carbon and hydrogen in detail in Fig. 7. For carbon shifts, error was measured for carbon (n = 1709), chlorine (n = 149), fluorine (n = 8), hydrogen (n = 1161), nitrogen (n = 199), oxygen (n = 251) and sulfur (n = 20) attachments. The largest deviations occur in carbon–chlorine and carbon–sulfur attachments, with MAEs of 11.2 and 5.8 ppm and MAXAEs of 39.7 and 16.5 ppm, respectively. The study by Li et al. [76], which used a set of chlorinated carbons, reports the same conclusion: calculation accuracy decreases as the size of the basis set used increases, but improvement was obtained after linear regression corrections for B3LYP/6-31+G(d,p) with a slope of 0.98. Other than chlorine and sulfur, carbon–carbon and carbon–hydrogen attachments also make the 13C NMR chemical shift DFT calculations deviate significantly from experimental values, with MAEs of 4.7 and 3.9 ppm and MAXAEs of 51.6 and 34.9 ppm, respectively. Carbon was found in rings in 70% of the cases, and these carbons show a MAE of 4.6 ppm. Also, the MAE of the 13C NMR chemical shift is 3.9 ppm for carbons bonded to a hydrogen atom but reaches 9.7 ppm in all other cases. Chemical shift prediction errors for different functional groups. a 13C NMR chemical shifts, b 1H NMR chemical shifts. All molecules are from the demonstration set and are calculated using the GIAO/B3LYP/cc-pVDZ//B3LYP/6-31G(d) level of theory in chloroform. Oxygen and nitrogen attachments to carbon led to 13C NMR chemical shifts with MAEs up to 3.1 and 4.2 ppm and MAXAEs of 32.5 and 23.6 ppm, respectively. Interestingly, the C–O attachments found in ring form (half of the total) had chemical shifts with a MAE of 2.8 ppm, compared to the chemical shifts of C–O attachments not found in a ring, which had a relatively higher MAE of 3.2 ppm, a percent difference of 14.1%. NMR chemical shifts of C–N attachments, present in a ring or not, show close MAEs of 4.19 and 4.26 ppm, respectively, a percent difference of 1.6%.
This is to be expected, since C–O attachments are expected to show some deviation in chemical shift due to the polarization of the electron distribution caused by the high electronegativity of oxygen, while nitrogen atoms have a lower electronegativity, leading to a lower deviation in C–N chemical shifts. The correlation plot (Fig. 5a, b) shows a linear pattern with only minor deviations of the predicted 13C shieldings or 13C chemical shifts from the fitted line. As observed in previous studies [130, 144], a slope within the range of 0.95–1.05 is an indicator of a reliable method; this is supported here by the correlation coefficient of 0.99. However, when placed in subgroups of different attachment types, distant outliers are observed, with some more than 15 ppm away (Fig. 7a). Most outliers are observed in the C–C and C–H attachments. C–Cl and C–N chemical shifts have a high occurrence of outliers, which may be due to chemical properties of chlorine and nitrogen, such as their remarkably close first ionization energies. We suspected that some cases of high calculation errors could be due to the consideration of only a single conformer. For the highest accuracy, proper conformational sampling must be considered, as demonstrated below in the "Application 2: Boltzmann-weighted NMR chemical shifts of methylcyclohexane" section. Detailed look at 1H NMR chemical shifts Proton (1H) chemical shifts are significantly affected by intermolecular interactions, particularly in aqueous states, especially compared to 13C chemical shifts. Agreement with experimental values improves when empirical linear scaling is performed for 1H chemical shifts. GIAO/B3LYP/cc-pVDZ//B3LYP/6-31G(d) yields scaled 1H chemical shifts in chloroform solution having a MAE of 0.30 ppm in comparison with solution experimental values. The 1H chemical shifts in the range of 10–17 ppm show the largest deviations, some higher than 5 ppm. In Fig. 7b, error bars are shown for 1H chemical shifts when the hydrogen attaches to carbon (n = 1793), nitrogen (n = 17), and oxygen (n = 41). Oxygen-bound hydrogen nuclei have the largest errors (up to 10 ppm), which is to be expected due to the high electronegativity of oxygen atoms, as discussed in the previous section. They are followed by the less electronegative nitrogen-bound hydrogen atoms, with an MAE of 0.71 ppm and a MAXAE of 2.25 ppm. About 95% of the 1H NMR chemical shifts calculated for this set are from H–C attachments. These chemical shifts had a MAE of 0.27 ppm and a MAXAE of 4.41 ppm. The high occurrence of outliers could be evidence of how sensitive 1H NMR chemical shifts are to intermolecular interactions. H–O attachments are highly sensitive (to concentration, solvent, temperature, etc.), and it is non-trivial to determine the NMR chemical shift value of such protons experimentally, as well as to predict them using a single, "catch all" DFT method, which explains the relatively low correlation coefficient of 0.93 (Fig. 5c–d). For future studies, we may need to consider the use of different DFT methods, including the use of explicit solvation, particularly in the calculation of 1H NMR chemical shifts in the presence of H–O attachments. Application of empirical scaling to functional groups is given in detail in the Additional file 1 (see S7 and S8). As a final assessment of the data collected with the demonstration set, we evaluated the stability and accuracy of the linear regression approach using cross-validation [145].
Cross-validation is a technique mostly used in prediction problems to evaluate how well a given model generalizes to an independent set of data. Specifically, we performed Monte Carlo cross-validation [146, 147]. The procedure for applying the Monte Carlo cross-validation method is explained in Additional file 1 (S9). We observed that the estimated linear model parameters (i.e. slope and intercept) from the training set do not differ from those of the entire set. Therefore, the predictive linear model is stable and can be accurately estimated, and the subsets of 13C and 1H NMR chemical shifts generalize well to the groups that are not represented in the training fold. Application 2: Boltzmann-weighted NMR chemical shifts of methylcyclohexane Metabolites are experimentally interrogated using solution-state NMR, where the observed signal arises from the combined signals of the conformers present. NMR chemical shift calculations are routinely carried out on a single dominant conformer. However, it is well known that metabolites do not comprise a single conformer in solution and are instead found in a collection of various conformers [148], and the accuracy of NMR chemical shifts heavily depends on molecular geometries and the consideration of conformations [46]. It has been shown that for the highest accuracy NMR chemical shift calculations, consideration of conformers is critical, even for relatively small molecules [149]. As a second demonstration of ISiCLE, a conformational analysis based on DFT was performed on a set of 80 conformers of methylcyclohexane using a Boltzmann distribution technique. Boltzmann weighting determines the fractional population of each conformer based on its energy level [92]. High-throughput and straightforward DFT-based NMR chemical shift calculations of all 80 methylcyclohexane conformers were performed by ISiCLE and then compared to experimental values. It has been shown by Willoughby et al. [92] that the effects of molecular flexibility on NMR chemical shifts can be captured by Boltzmann weighting analysis, as demonstrated with methylcyclohexane (Fig. 8). Methylcyclohexane is a well-studied small molecule [150,151,152,153,154]. It is flexible, composed of a single methyl group attached to a six-membered ring, and known to exist as an assembly of two chair conformers. Chair–chair interconversion between these two distinct conformations is rapid, and the equilibrium is dominated by the equatorial conformation. We weighted 40 axial and 40 equatorial conformers in chloroform and obtained a relative free energy of 1.99 kcal/mol with NWChem, similar to calculations using Gaussian [155] by Willoughby et al. [92], and similar to experimental (1.73 kcal/mol [156], 1.93 kcal/mol [108]) and computed (2.15–2.31 kcal/mol [107] and 1.68–2.48 kcal/mol [109]) values. We compared the Boltzmann-weighted 1H and 13C chemical shifts (a ratio of 3% axial to 97% equatorial) to experimental values reported by Willoughby et al. [92]. The Boltzmann-weighted, scaled MAE was 0.017 ppm for 1H chemical shifts (\( \delta_{exp} = 1.00 \times \delta_{comp} + \sim0.00 \)), similar to the value of 0.018 ppm reported in the study of Willoughby et al. Also, the MAE for 13C chemical shifts was 4.4 ppm and decreased to 0.8 ppm when the chemical shifts were scaled (\( \delta_{exp} = 0.99 \times \delta_{comp} + 0.13 \)) (Fig. 9). Further details can be found in the Additional file 1 (S10).
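For illustration, the Boltzmann-weighting step described above can be sketched in a few lines: fractional populations are computed from relative free energies and then used to average per-conformer chemical shifts. The two-state example reuses the 1.99 kcal/mol relative free energy quoted above; the function names and array layout are otherwise illustrative assumptions, not ISiCLE's internal code.

```python
# Sketch of Boltzmann weighting: populations from relative free energies,
# then population-weighted (conformer-averaged) chemical shifts.
import numpy as np

R_KCAL = 0.0019872041  # gas constant, kcal/(mol*K)

def boltzmann_weights(rel_free_energies_kcal, temperature=298.15):
    g = np.asarray(rel_free_energies_kcal, dtype=float)
    w = np.exp(-(g - g.min()) / (R_KCAL * temperature))
    return w / w.sum()

def weighted_shifts(shifts_per_conformer, weights):
    # shifts_per_conformer: (n_conformers, n_nuclei) array of chemical shifts (ppm)
    return np.average(np.asarray(shifts_per_conformer, dtype=float), axis=0, weights=weights)

# Two-state example: equatorial at 0.0 kcal/mol, axial 1.99 kcal/mol higher
w = boltzmann_weights([0.0, 1.99])
print(w)  # approximately [0.97, 0.03], matching the equatorial:axial ratio quoted above
```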
The a equatorial and b axial structures of methylcyclohexane. Experimental and scaled chemical shifts (ppm) of methylcyclohexane. We introduce the first release of ISiCLE, which predicts NMR chemical shifts of any given set of molecules relevant to metabolomics for a given set of DFT techniques. ISiCLE calculates the unscaled or scaled NMR chemical shifts (depending on the user's choice of DFT method) and writes the data to appended MDL Molfiles. It also quantifies the error in calculated NMR chemical shifts if the user provides experimental values. The functionality of ISiCLE is demonstrated on a molecule set consisting of 312 molecules, with experimental chemical shifts reported in chloroform solvent. 1H and 13C NMR chemical shifts were calculated using 8 different levels of DFT (BLYP, B3LYP, B35LYP, and BHLYP, each with cc-pVDZ and cc-pVTZ), referenced to TMS in chloroform, with initial geometry optimizations carried out at B3LYP/6-31G(d) for all molecules. The optimal combination for this set was found to be B3LYP/cc-pVDZ//B3LYP/6-31G(d), with mean absolute errors of 0.33 and 3.93 ppm for proton and carbon chemical shifts, respectively. We show that DFT calculations followed by linear scaling do in fact provide an analytically useful degree of accuracy and reliability. Finally, we used ISiCLE for the calculation of NMR chemical shifts of 80 Boltzmann-weighted conformers of methylcyclohexane and compared our results with earlier studies in the literature. ISiCLE is a promising automated framework for accurate NMR chemical shift calculations of small organic molecules. Through this tool, we hope to expand chemical shift libraries, without the need for chemical standards run in the laboratory, which could lead to significantly more identifiable metabolites. Future work includes wrapping individual steps of the ISiCLE NMR module into a formal workflow management system such as Snakemake, to include better fault tolerance, modularization, and improved data provenance. Furthermore, additional chemical properties will be included, such as ion mobility collision cross section and infrared spectra. Finally, ISiCLE will be adapted to run seamlessly on cloud computing resources such as Amazon AWS, Microsoft Azure, and Google Cloud. ISiCLE is a promising tool contributing to standards-free metabolomics, which depends on the ability to calculate properties for thousands of molecules and their associated conformers. Fiehn O (2002) Metabolomics—the link between genotypes and phenotypes. Plant Mol Biol 48(1–2):155–171 Sumner LW, Mendes P, Dixon RA (2003) Plant metabolomics: large-scale phytochemistry in the functional genomics era. Phytochemistry 62(6):817–836 Bino RJ et al (2004) Potential of metabolomics as a functional genomics tool. Trends Plant Sci 9(9):418–425 Dobson CM (2004) Chemical space and biology. Nature 432(7019):824–828 Nicholson JK et al (2002) Metabonomics: a platform for studying drug toxicity and gene function. Nat Rev Drug Discov 1(2):153–161 Berendsen RL, Pieterse CM, Bakker PA (2012) The rhizosphere microbiome and plant health. Trends Plant Sci 17(8):478–486 Griffin JL, Bollard ME (2004) Metabonomics: its potential as a tool in toxicology for safety assessment and data integration. Curr Drug Metab 5(5):389–398 Richard AM, Gold LS, Nicklaus MC (2006) Chemical structure indexing of toxicity data on the internet: moving toward a flat world. Curr Opin Drug Discov Dev 9(3):314–325 Daviss B (2005) Growing pains for metabolomics.
Scientist 19(8):25–28 Nicholson JK, Wilson ID (2003) Understanding 'global' systems biology: metabonomics and the continuum of metabolism. Nat Rev Drug Discov 2(8):668–676 Beckonert O et al (2007) Metabolic profiling, metabolomic and metabonomic procedures for NMR spectroscopy of urine, plasma, serum and tissue extracts. Nat Protoc 2(11):2692–2703 Nicholson JK, Lindon JC, Holmes E (1999) 'Metabonomics': understanding the metabolic responses of living systems to pathophysiological stimuli via multivariate statistical analysis of biological NMR spectroscopic data. Xenobiotica 29(11):1181–1189 Nicholson JK et al (1995) 750 MHz 1H and 1H-13C NMR spectroscopy of human blood plasma. Anal Chem 67(5):793–811 Soga T et al (2003) Quantitative metabolome analysis using capillary electrophoresis mass spectrometry. J Proteome Res 2(5):488–494 Smith CA et al (2006) XCMS: processing mass spectrometry data for metabolite profiling using nonlinear peak alignment, matching, and identification. Anal Chem 78(3):779–787 Dettmer K, Aronov PA, Hammock BD (2007) Mass spectrometry-based metabolomics. Mass Spectrom Rev 26(1):51–78 Smith CA et al (2005) METLIN: a metabolite mass spectral database. Ther Drug Monit 27(6):747–751 Wishart DS et al (2013) HMDB 3.0—the human metabolome database in 2013. Nucleic Acids Res 41(Database issue):D801-7 Ulrich EL et al (2008) BioMagResBank. Nucleic Acids Res 36(Database issue):D402-8 Pence HE, Williams A (2010) ChemSpider: an online chemical information resource. J Chem Educ 87(11):1123–1124 Tautenhahn R et al (2012) XCMS Online: a web-based platform to process untargeted metabolomic data. Anal Chem 84(11):5035–5039 Little JL et al (2012) Identification of "known unknowns" utilizing accurate mass data and ChemSpider. J Am Soc Mass Spectrom 23(1):179–185 Little JL, Cleven CD, Brown SD (2011) Identification of "known unknowns" utilizing accurate mass data and chemical abstracts service databases. J Am Soc Mass Spectrom 22(2):348–359 Patti GJ et al (2013) A view from above: cloud plots to visualize global metabolomic data. Anal Chem 85(2):798–804 Wishart DS et al (2009) HMDB: a knowledgebase for the human metabolome. Nucleic Acids Res 37(Database issue):D603-10 Randazzo GM et al (2017) Enhanced metabolite annotation via dynamic retention time prediction: steroidogenesis alterations as a case study. J Chromatogr, B: Anal Technol Biomed Life Sci 1071:11–18 Vinaixa M et al (2016) Mass spectral databases for LC/MS- and GC/MS-based metabolomics: state of the field and future prospects. TrAC Trends Anal Chem 78:23–35 Bocker S (2017) Searching molecular structure databases using tandem MS data: are we there yet? Curr Opin Chem Biol 36:1–6 Allen F, Greiner R, Wishart D (2015) Competitive fragmentation modeling of ESI-MS/MS spectra for putative metabolite identification. Metabolomics 11(1):98–110 Wolf S et al (2010) In silico fragmentation for computer assisted identification of metabolite mass spectra. BMC Bioinform 11:148 Zheng XY et al (2017) Structural elucidation of cis/trans dicaffeoylquinic acid photoisomerization using ion mobility spectrometry–mass spectrometry. J Phys Chem Lett 8(7):1381–1388 Metz TO et al (2017) Integrating ion mobility spectrometry into mass spectrometry-based exposome measurements: what can it add and how far can it go? Bioanalysis 9(1):81–98 Graham TR et al (2016) Precursor ion–ion aggregation in the Brust–Schiffrin synthesis of alkanethiol nanoparticles.
Experimental Investigation on the Thermal Performance of a Heat Pipe-based Cooling System
Améni Driss | Samah Maalej | Isra Chouat | Mohamed Chaker Zaghdoudi*
Laboratoire Matériaux, Mesures et Applications (MMA), Institut National des Sciences Appliquées et de Technologie (INSAT), University of Carthage, Centre Urbain Nord, BP N° 676 – 1080 Cédex, Tunis, Tunisia
An experimental study is carried out in order to determine the thermal performance of a water-cooled heat pipe cooling system. An experimental rig is designed, fabricated and fully instrumented to test the cooling system prototype. The results show that the maximum heat transport capacity of the heat pipe increases with the water-cooling temperature, whereas its overall thermal resistance decreases. Correlations for heat transfer in the evaporator and condenser sections are proposed. A model is also developed in order to determine the capillary limit as well as the heat transfer in the heat pipe. The model predicts the experimental capillary limit within -1.7 % and +7.9 %, underestimates the heat pipe overall thermal resistance within -17.8 % and -9.7 %, and overestimates the evaporator temperature within 4.4 % and 9.5 %.
capillary pumping, electronics cooling, heat pipes, grooves
Power conversion modules include semiconductor components that ensure the control of the energy transfer. Components such as IGBTs enable the converter to operate at higher switching frequencies. Because of the high commutated power levels and high operating frequencies, and since power modules tend to become more and more compact, the power densities dissipated by such components are very high, which reduces their lifetime and can lead to their failure. Hence, it is necessary to use effective cooling systems capable of evacuating the generated heat and maintaining the junction temperature of the electronic component at values allowing safe operation. An efficient classic cooling system is composed of cold plates that ensure the cooling underneath the IGBT by fluid circulation. This solution presents drawbacks since the electrical operation can fail due to fluid leakage. A solution to this problem is to remove the heat by a heat pipe system placed underneath the IGBT. The heat is rejected to a cold plate, which is cooled by liquid circulation (Figure 1). This solution can be as efficient as the classic one if the heat pipe thermal resistance is very low. The thermal performances of heat pipes depend on several parameters, among which we can distinguish: (i) the geometrical characteristics, (ii) the operating conditions such as the heat input power, the heat sink temperature, and the external field forces (gravity, accelerations, vibrations, magnetohydrodynamic, and electrohydrodynamic), (iii) the characteristics of the capillary structures (grooves, sintered powder, screen meshes, metal wires or a mixture of them), and (iv) the working fluid. Grooved cylindrical heat pipes are commonly used in standard electronics cooling applications because their thermal resistances are lower than those of heat pipes with other capillary structures; however, their thermal performances can be altered under special operating conditions, especially those involving gravity and acceleration.
Previous studies on grooved cylindrical heat pipes have reported thermal performances in either steady-state or transient regime [1-13]. Particular attention has been paid to transient tests since it is important to identify the thermal behavior of heat pipes under such conditions, as electronic components operate most of the time in transient conditions [9, 10].
Figure 1. Heat pipe-based cooling system
Although the experimental studies dealing with steady-state conditions are numerous, they do not propose generalized laws to identify the heat transfer in the evaporation and condensation zones. As a generalized correlation is important for the calculation of the evaporator and condenser thermal resistances, which are useful in theoretical models for the prediction of the heat pipe thermal behavior in steady-state or transient regimes, this study addresses this issue. Hence, in the first part of this work, an experimental study is carried out to determine the thermal performance of a heat pipe-based cooling system. Since the heat pipe plays an important role in such systems, tests are carried out on a water-filled copper cylindrical heat pipe with helical, trapezoidal capillary grooves. A test rig is developed in order to determine the thermal performance of the heat pipe, which is positioned horizontally, for different heat sink temperatures. In the second part of this study, a theoretical model is developed to determine the capillary limit and the heat transfer within the heat pipe for various operating conditions. Finally, a comparison between the simulated results and those obtained experimentally is carried out.
2. Experimental Study
2.1 Description of the heat pipe
A water-filled cylindrical copper heat pipe is used (Figure 2). The capillary structure is composed of 75 helical grooves with a trapezoidal cross-section (Figure 3). The main geometrical characteristics of the heat pipe are listed in Table 1.
Figure 2. Cross-section of the heat pipe showing the grooves
Figure 3. Geometrical characteristics of the grooves
Table 1. Geometrical characteristics of the heat pipe: heat pipe length, Lt; evaporator length, Lev; condenser length, Lc; outer diameter, Do; wall thickness, tw; number of grooves, Ng; groove depth, Dg; groove width at the bottom of the groove, Wgb; groove width at the top of the groove, Wgt; angle between the groove and the heat pipe axis, α (Figure 2); angle β (Figure 2)
2.2 Test rig and experimental procedures
An experimental set-up was built in order to determine the thermal performance of the cooling system for different positions at various heat input powers, Q, and heat sink temperatures, Ths. It is composed of two aluminum blocks. The first block is equipped with four cylindrical electrical heaters dissipating 250 W each. This heating block (60 × 60 × 60 mm³) plays the role of the heat source (Figure 4a). The second block (cold plate), which has the same dimensions as the heating block, is cooled by water circulation ensured by means of a pump (Figure 4b). The temperature of the water at the inlet of the cooling block is controlled by a refrigerated circulation bath. The heating and cooling blocks are mounted on a rotating support in order to study the thermal performance of the system as a function of the orientation (Figure 5). A thermal paste is used to improve the thermal contact between the heat pipe and the aluminum blocks.
A HP 34970 data acquisition unit is used in order to monitor and record all the temperatures (Figure 6). Ten T-type thermocouples are placed along the heat pipe in order to measure the temperature distribution and its evolution in time (Figure 7). Figure 4. Sketches of the heating and cooling blocks Figure 5. View of the experimental set-up (without thermal insulation) Figure 6. Test rig arrangement Figure 7. Thermocouple location along the heat pipe The experimental procedures consist of positioning the cooling system in the proper orientation. Then, the heat sink temperature is fixed by adjusting the water temperature at the cooling block inlet. Then, the power is adjusted to the desired value and the system can reach the steady-state regime. The temperature readings from all thermocouples are recorded. After the steady-state regime is reached, the power to the evaporator is turned off. This cycle of experiments is repeated with higher input heat powers until the maximum heat power (capillary limit) is reached. This is characterized by a sudden and steady rise of the evaporator temperature. 2.3 Data reduction and uncertainty analysis The overall thermal resistance of the heat pipe is determined by the following equation ${{R}_{th}}\text{ }=\text{ }{{\text{R}}_{\text{thev}}}\text{ }+\text{ }{{\text{R}}_{\text{thad}}}\text{ }+\text{ }{{\text{R}}_{\text{thc}}}\text{ }=\text{ }{\left( {{{\text{\bar{T}}}}_{\text{ev}}}-{{{\bar{T}}}_{c}} \right)}/{Q}\;=\frac{\Delta {{{\bar{T}}}_{hp}}}{Q}$ (1) Rthev, Rthad, and Rthc are the thermal resistances of the evaporator, adiabatic and condenser zones, respectively, and Q is the heat input power. ${{\bar{T}}_{ev}}$ and $ {{\bar{T}}_{c}}$ are the average wall temperature of the evaporator and the condenser, respectively. $\Delta {{\bar{T}}_{hp}}$is the temperature difference ${{\bar{T}}_{ev}}- {{\bar{T}}_{c}}$. The evaporator and condenser thermal resistances are calculated according to ${{R}_{thev}}\text{ }=\text{ }\frac{{{{\text{\bar{T}}}}_{\text{ev}}}-{{{\bar{T}}}_{ad}}}{Q}\text{ }={{\text{R}}_{\text{thwev}}}+{{R}_{thevap}}=\frac{{{t}_{w}}}{{{\lambda }_{w}}\text{ }{{\text{A}}_{\text{ev}}}}+\frac{1}{{{h}_{ev}}\text{ }{{\text{A}}_{\text{ev}}}}$ (2) ${{R}_{thc}}\text{ }=\text{ }\frac{{{{\text{\bar{T}}}}_{\text{ad}}}-{{{\bar{T}}}_{c}}}{Q}\text{ }={{\text{R}}_{\text{thwc}}}+{{R}_{thcond}}=\frac{{{t}_{w}}}{{{\lambda }_{w}}\text{ }{{\text{A}}_{\text{c}}}}+\frac{1}{{{h}_{c}}\text{ }{{\text{A}}_{\text{c}}}}$ (3) Rthwev and Rthwc are the thermal resistances due to thermal conduction through the evaporator and the condenser walls, respectively. tw and lw are the thickness and the thermal conductivity of the wall, respectively. Aev and Ac are the inner evaporator and condenser areas, respectively. In Eqns. (2) and (3), the conductive thermal resistances of the heat pipe wall are calculated by assuming that the wall thickness is negligible when compared to the heat pipe diameter. Hence, under this assumption, the expression of the thermal resistance for a cylindrical wall is similar to that for a flat one. From Eqns. 
(2) and (3), the heat transfer coefficients for the evaporation and condensation phenomena are calculated according to the following expressions ${{h}_{ev}}\text{ }=\text{ }\frac{\text{1}}{\frac{\left( {{{\text{\bar{T}}}}_{\text{ev}}}\text{ - }{{{\text{\bar{T}}}}_{\text{ad}}} \right)}{{{\text{q}}_{\text{ev}}}}-\frac{{{t}_{w}}}{{{\lambda }_{w}}}}=\frac{1}{\frac{\Delta {{{\bar{T}}}_{ev}}}{{{q}_{ev}}}-\frac{{{t}_{w}}}{{{\lambda }_{w}}}}$ (4) ${{h}_{c}}\text{ }=\text{ }\frac{\text{1}}{\frac{\left( {{{\text{\bar{T}}}}_{\text{ad}}}\text{ - }{{{\text{\bar{T}}}}_{\text{c}}} \right)}{{{\text{q}}_{\text{c}}}}-\frac{{{t}_{w}}}{{{\lambda }_{w}}}}=\frac{1}{\frac{\Delta {{{\bar{T}}}_{c}}}{{{q}_{c}}}-\frac{{{t}_{w}}}{{{\lambda }_{w}}}}$ (5) qev and qc are the heat fluxes calculated on the basis the evaporator and condenser heat transfer areas, Aev and Ac.$\Delta {{\bar{T}}_{ev}}$.and $\Delta {{\bar{T}}_{c}}$ are the temperature differences ${{\bar{T}}_{ev}}- {{\bar{T}}_{ad}}$ and ${{\bar{T}}_{ad}}- {{\bar{T}}_{c}}$, respectively. The uncertainty for the thermal resistance, URth, is given by the root sum square of the uncertainties of the bias contribution to the uncertainty of Rth, BRth, and the precision contribution to the uncertainty of Rth, PRth, according to [14] ${{U}_{{{R}_{th}}}}={{({{B}^{2}}_{{{R}_{th}}}+{{P}^{2}}_{{{R}_{th}}})}^{1/2}}$ (6) The precision and bias limits contributions can be determined separately in terms of the sensitivity coefficients of the thermal resistance, Rth, according to the following expressions [14] $P_{Rth}^{2}={{\left( \frac{\partial {{R}_{th}}}{\partial {{{\bar{T}}}_{ev}}} \right)}^{2}}P_{{{{\bar{T}}}_{ev}}}^{2}+{{\left( \frac{\partial {{R}_{th}}}{\partial {{{\bar{T}}}_{c}}} \right)}^{2}}P_{{{{\bar{T}}}_{c}}}^{2}+{{\left( \frac{\partial {{R}_{th}}}{\partial Q} \right)}^{2}}P_{Q}^{2}$ (7) $B_{Rth}^{2}={{\left( \frac{\partial {{R}_{th}}}{\partial {{{\bar{T}}}_{ev}}} \right)}^{2}}B_{{{{\bar{T}}}_{ev}}}^{2}+{{\left( \frac{\partial {{R}_{th}}}{\partial {{{\bar{T}}}_{c}}} \right)}^{2}}B_{{{{\bar{T}}}_{c}}}^{2}+{{\left( \frac{\partial {{R}_{th}}}{\partial Q} \right)}^{2}}B_{Q}^{2}+\left( \frac{\partial {{R}_{th}}}{\partial {{{\bar{T}}}_{ev}}} \right)\times \left( \frac{\partial {{R}_{th}}}{\partial {{{\bar{T}}}_{c}}} \right)B_{{{{\bar{T}}}_{ev}}}^{'}B_{{{{\bar{T}}}_{c}}}^{'}$ (8) where B'Tc and B'Tev are the portions of BTc and BTev, that arise from identical error source and they are therefore presumed to be perfectly correlated. By considering the expression of Rth, the precision and bias limits contributions can be expressed as ${{\left( \frac{{{P}_{{{R}_{th}}}}}{{{R}_{th}}} \right)}^{2}}={{\left( \frac{{{P}_{{{{\bar{T}}}_{ev}}}}}{\Delta {{{\bar{T}}}_{hp}}} \right)}^{2}}+{{\left( \frac{{{P}_{{{{\bar{T}}}_{c}}}}}{\Delta {{{\bar{T}}}_{hp}}} \right)}^{2}}+{{\left( \frac{{{P}_{Q}}}{Q} \right)}^{2}}$ (9) ${{\left( \frac{{{B}_{{{R}_{th}}}}}{{{R}_{th}}} \right)}^{2}}={{\left( \frac{{{B}_{{{{\bar{T}}}_{ev}}}}}{\Delta {{{\bar{T}}}_{hp}}} \right)}^{2}}+{{\left( \frac{{{B}_{{{{\bar{T}}}_{c}}}}}{\Delta {{{\bar{T}}}_{hp}}} \right)}^{2}}+{{\left( \frac{{{B}_{Q}}}{Q} \right)}^{2}}-2\left( \frac{B_{{{{\bar{T}}}_{ev}}}^{'}}{\Delta {{{\bar{T}}}_{hp}}} \right)\left( \frac{B_{{{{\bar{T}}}_{c}}}^{'}}{\Delta {{{\bar{T}}}_{hp}}} \right)$ (10) If we suppose that the bias errors in the different temperatures are totally correlated, the last term on the right side of Eq. 
(10) would cancel the first and the second terms, and the bias limit in thermal resistance measurements, Rth, is simplified as follows [14] ${{\left( \frac{{{B}_{{{R}_{th}}}}}{{{R}_{th}}} \right)}^{2}}={{\left( \frac{{{B}_{Q}}}{Q} \right)}^{2}}$ (11) We suppose that the precision errors for the temperatures are also totally correlated. Hence, Eq. (9) becomes ${{\left( \frac{{{P}_{{{R}_{th}}}}}{{{R}_{th}}} \right)}^{2}}=2{{\left( \frac{{{P}_{{\bar{T}}}}}{\Delta {{{\bar{T}}}_{hp}}} \right)}^{2}}+{{\left( \frac{{{P}_{Q}}}{Q} \right)}^{2}}$ (12) From Eq. (6), the precision for the thermal resistance can be calculated according to $\frac{{{U}_{{{R}_{th}}}}}{{{R}_{th}}}=\sqrt{2{{\left( \frac{{{P}_{{\bar{T}}}}}{\Delta {{{\bar{T}}}_{hp}}} \right)}^{2}}+{{\left( \frac{{{P}_{Q}}}{Q} \right)}^{2}}+{{\left( \frac{{{B}_{Q}}}{Q} \right)}^{2}}}$ (13) The values PQ/Q and BQ/Q are equal to 1 % and the values of the precision limit of the temperature are equal to 0.5 °C, in our uncertainty estimation. The same reasoning can be followed for the determination of the relative uncertainty for calculating the thermal resistances of evaporation and condensation, Rthev and Rthc, and the following expressions are obtained $\frac{{{U}_{{{R}_{thev}}}}}{{{R}_{thev}}}=\sqrt{2{{\left( \frac{{{P}_{{\bar{T}}}}}{\Delta {{{\bar{T}}}_{ev}}} \right)}^{2}}+{{\left( \frac{{{P}_{Q}}}{Q} \right)}^{2}}+{{\left( \frac{{{B}_{Q}}}{Q} \right)}^{2}}}$ (14) $\frac{{{U}_{{{R}_{thc}}}}}{{{R}_{thc}}}=\sqrt{2{{\left( \frac{{{P}_{{\bar{T}}}}}{\Delta {{{\bar{T}}}_{c}}} \right)}^{2}}+{{\left( \frac{{{P}_{Q}}}{Q} \right)}^{2}}+{{\left( \frac{{{B}_{Q}}}{Q} \right)}^{2}}}$ (15) The relative uncertainties for calculating the heat transfer coefficients, hev and hc, are expressed as $\frac{{{U}_{{{h}_{ev}}}}}{{{h}_{ev}}}=\sqrt{2{{\left( \frac{{{P}_{{\bar{T}}}}}{\Delta {{{\bar{T}}}_{ev}}} \right)}^{2}}+{{\left( \frac{{{P}_{Q}}}{Q} \right)}^{2}}+{{\left( \frac{{{B}_{Q}}}{Q} \right)}^{2}}+{{\left( \frac{{{P}_{{{A}_{ev}}}}}{{{A}_{ev}}} \right)}^{2}}}$ (16) $\frac{{{U}_{{{h}_{c}}}}}{{{h}_{c}}}=\sqrt{2{{\left( \frac{{{P}_{{\bar{T}}}}}{\Delta {{{\bar{T}}}_{c}}} \right)}^{2}}+{{\left( \frac{{{P}_{Q}}}{Q} \right)}^{2}}+{{\left( \frac{{{B}_{Q}}}{Q} \right)}^{2}}+{{\left( \frac{{{P}_{{{A}_{c}}}}}{{{A}_{c}}} \right)}^{2}}}$ (17) PAev/Aev and PAc/Ac are the precisions of the determination of the evaporator and condenser areas, Aev and Ac. Table 2. Relative uncertainties for the calculation of Rth URth/Rth Ths= 10 °C Ths= 35°C Table 2 shows the values of the relative uncertainties for the calculation of the heat pipe thermal resistance as a function of the heat input power, for different heat sink temperatures, when the heat pipe is oriented horizontally. For a given heat sink temperature, the uncertainty for the thermal resistance decreases rapidly as the heat input power increases. For a given heat input power, the uncertainty for the thermal resistance increases with the heat sink temperature. Tables 3 and 4 list the relative uncertainties for the calculations of Rthev and Rthc. As for Rth, for a given heat sink temperature, the relative uncertainty decreases as the heat input increases; however, it increases with the heat sink temperature. These results can be explained by the fact that the temperature differences $\Delta {{\bar{T}}_{ev}}$ and $\Delta {{\bar{T}}_{c}}$ are small at low heat input powers and high heat sink temperatures. This contributes to the decrease of the precision of the thermal resistance measurements. 
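This sensitivity can be checked directly from Eq. (13). The minimal Python sketch below evaluates the relative uncertainty for a few illustrative temperature differences (hypothetical values, not measured data), assuming the precision and bias limits adopted above (P_T = 0.5 °C and P_Q/Q = B_Q/Q = 1 %); the function name is introduced here only for illustration:

import math

def rel_uncertainty_rth(delta_t_hp, p_t=0.5, p_q_rel=0.01, b_q_rel=0.01):
    # Relative uncertainty U_Rth/Rth from Eq. (13):
    # delta_t_hp : evaporator-condenser temperature difference (K)
    # p_t        : precision limit of one averaged temperature (K)
    # p_q_rel    : relative precision limit of the heat input power, P_Q/Q
    # b_q_rel    : relative bias limit of the heat input power, B_Q/Q
    return math.sqrt(2.0 * (p_t / delta_t_hp) ** 2 + p_q_rel ** 2 + b_q_rel ** 2)

# Purely illustrative temperature differences:
for dt in (1.0, 5.0, 20.0):
    print(f"dT_hp = {dt:5.1f} K  ->  U_Rth/Rth = {rel_uncertainty_rth(dt):.1%}")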
Hence, in this case, the relative uncertainties are high. Table 3. Relative uncertainties for the calculation of Rthev URthev/Rthev Table 4. Relative uncertainties for the calculation of Rthc URthc/Rthc 3. Experimental Results and Analysis The axial temperature distribution along the heat pipe, for different input powers are illustrated in Figure 8. The heat pipe is horizontally oriented and the heat sink temperature, Ths, is set to 10 °C, 25 °C, 35 °C, and 45 °C, respectively. For a given power, we distinguish three types of the axial wall temperature evolution. Along the zone of evaporation, the wall temperature is the highest and remains nearly constant. From the evaporator to the adiabatic zone, the axial temperature decreases and remains practically constant along the adiabatic zone. From the adiabatic zone to the condenser, the axial temperature also decreases and stabilizes in a constant value along the condensation zone. The temperature gradient along the heat pipe illustrates the ability of the heat pipe to transfer heat in different areas. The axial distribution of the temperature depends on the heat input power and on the heat sink temperature, Ths. Note that for input powers exceeding 100 W, the evaporator temperature is no longer constant. Similarly, there is a very significant increase in the evaporator temperature indicating that the capillary limit is exceeded, and the evaporator starts to dry out. The profile of the evaporator temperature becomes parabolic indicating that the heat transfer is carried out by thermal conduction. In order to highlight the effectiveness of the heat pipe for this cooling solution, we have carried out experiments on the same cooling system including a cylindrical copper rod instead of a heat pipe. The axial wall temperature distributions along the copper rod are depicted in Figure 9 for a heat sink temperature Ths = 25 °C. As it can be noticed, the evaporator temperatures are higher than those obtained with a heat pipe. Indeed, for a heat input power, Q = 30 W, the evaporator temperature is nearly 150 °C which is obtained for the copper rod-based cooling system; against nearly 34 °C which is obtained with the heat pipe-based cooling system. Figure 8. Axial wall temperature variations for different heat sink temperatures: (a) 10 °C, (b) 25 °C, (c) 35 °C, (d) 45 °C The variations of the evaporator, adiabatic and condenser temperatures are depicted in Figure 10. It is observed that evaporator temperature increases sharply for heat input powers higher than 100 W and 120 W when operating at heat sink temperatures equal to 10 °C and 25 °C, respectively (Figure 10a). This indicates that dry-out occurs at the evaporator section. For heat sink temperatures equal to 35 °C and 45 °C, the evaporator temperature increases monotonously without a sharp increase. The adiabatic and condenser temperatures increase monotonously with both the heat input power and the heat sink temperature. It can be noticed that the condenser temperature is higher than the heat sink temperature because of the thermal resistance between the condenser wall and the cooling water. Figure 9. Axial wall temperature variations for a copper rod-based cooling system (Ths = 25 °C) Figure 10. 
Variations of the (a) evaporator, (b) adiabatic, and (c) condenser temperatures as a function of the heat input power for different heat sink temperatures The variations of the overall thermal resistance of the heat pipe with the heat input power for different heat sink temperatures are plotted in Figure 11. For a given heat sink temperature, the heat pipe thermal resistance decreases rapidly to a minimum value as the heat input power increases. This minimum value corresponds to the capillary limit. It corresponds to the maximum heat input power that the heat pipe can transport before dry-out in the evaporator starts. The capillary limit depends on the heat sink temperature. Indeed, for Ths = 10 °C, the maximum heat flux rate that the heat pipe can transport is approximately 80 W. However, the capillary limit is equal nearly to 100 W, 110 W, and 120 W for Ths = 25 °C, Ths = 35 °C, and Ths = 45 °C, respectively. This shows that the heat transfer is enhanced when the heat sink temperature increases. This is also demonstrated by the decrease of the heat pipe thermal resistance. For heat input powers exceeding the capillary limit, the thermal resistance starts to increase showing heat transfer degradation. This is due mainly to the beginning of the dry-out in the evaporator and flooding in the condenser. Figure 11. Overall thermal resistance variations vs. heat input, for different heat sink temperatures: (a) 10 °C, (b) 25 °C, (c) 35 °C, and (d) 45 °C Figure 12 illustrates the variations of the evaporator and condenser thermal resistances as a function of the heat input power, for different heat sink temperatures. They are calculated by Eqns. (2) and (3). For a given heat sink temperature, the evaporator thermal resistance increases with the heat input (Figure 12a). The degradation of the evaporation process is caused by the fact that the evaporator becomes starved of liquid since the capillary pumping pressure becomes insufficient to overcome the liquid and vapor pressure losses when the heat input increases. For given heat input, the evaporator thermal resistance decreases as the heat sink temperature increases. This is because augmenting the heat sink temperature causes an increase of the saturation temperature and pressure. Hence, the evaporation process is enhanced. The condenser thermal resistance decreases with the heat input power (Figure 12b). Indeed, increasing the heat input power causes an augmentation of the liquid mass flow rate along the condenser, and the condensation process is enhanced. These results clearly indicate that the heat pipe is correctly filled, and the flooding zone can be considered as negligible. For a given heat input power, the condenser thermal resistance decreases when the heat sink temperature increases. Hence, the condensation process is enhanced. Figure 12. Variations of the evaporator and condenser thermal resistances as a function of the heat input power, for different heat sink temperatures: (a) evaporator thermal resistance, (b) condenser thermal resistance In order to highlight the effectiveness of the heat pipe heat transfer capacity, we have proceeded in calculating the ratio of its effective thermal conductivity by that of a copper rod (l = 380 W/m.K). Figure 13 shows the variations of this ratio as a function of the heat input power for different heat sink temperatures. The curves present maxima corresponding to the capillary limit. 
A maximum effective thermal conductivity of up to 21 times that of copper is obtained for a heat sink temperature equal to 45 °C. For a heat sink temperature equal to 10 °C, the effective thermal conductivities are lower than those obtained for Ths = 45 °C, and the maximum effective thermal conductivity is nearly 11 times that of copper.
Figure 13. Variations of the ratio of the heat pipe effective thermal conductivity to that of a copper rod as a function of the heat input power for different heat sink temperatures
4. Heat Transfer Correlations
In order to quantify the heat transfer mechanisms in the evaporator and condenser zones, the experimental data are processed into dimensionless numbers so as to obtain heat transfer laws. The dimensionless analysis is carried out based on the Vaschy-Buckingham theorem (or π theorem) [15]. The heat transfer coefficients in the evaporator and condenser zones are calculated according to Eqs. (4) and (5). The following dimensionless numbers are evidenced from the π analysis:
(i) the Reynolds number, which is defined by
$Re = \frac{Q}{\mu_l\,\pi\,D_o\,\Delta h_v}$ (18)
$\mu_l$ is the liquid dynamic viscosity, and $\Delta h_v$ is the latent heat of vaporization. Do is the heat pipe outer diameter, and Q is the heat input power.
(ii) the Prandtl number
$Pr = \frac{\mu_l\,c_{pl}}{\lambda_l}$ (19)
cpl is the liquid specific heat, and $\lambda_l$ is the liquid thermal conductivity.
(iii) the Nusselt number
$Nu = \frac{h\,L}{\lambda_l}$ (20)
h is the heat transfer coefficient for evaporation or condensation, and L is a reference length, which is expressed as
For evaporation: $L_{(ev)} = \sqrt{\frac{\sigma}{\left(\rho_l - \rho_v\right)\,g}}$ (21)
For condensation: $L_{(c)} = \left(\frac{\nu_l^2}{g}\right)^{1/3}$ (22)
$\sigma$ is the liquid surface tension. $\rho_l$ and $\rho_v$ are the liquid and vapor densities, respectively. $\nu_l$ is the liquid kinematic viscosity, and g is the gravitational acceleration.
(iv) the modified Jakob number
$Ja^* = \frac{\rho_l}{\rho_v}\,\frac{c_{pl}\,T_{sat}}{\Delta h_v}$ (23)
Tsat is the saturation temperature.
(v) the Kutateladze number
$K_p = \frac{P_{sat}\,L_{(ev)}}{\sigma}$ (24)
Hence, the heat transfer coefficients can be calculated by the following correlation
$Nu = A\,Re^{m_1}\,Pr^{m_2}\,{Ja^*}^{m_3}\,K_p^{m_4}$ (25)
A, m1, m2, m3, and m4 are constants, which are determined from the experimental results. For the evaporation heat transfer, the dimensionless numbers are determined by calculating the liquid physical properties at the saturation temperature and the vapor physical properties at the film temperature (Tf = (Tsat + Tw)/2). For the condensation heat transfer, the liquid and vapor physical properties are determined by considering the film and saturation temperatures, respectively. The constants of Eq. (25) are determined by a linear regression analysis for the evaporation and the condensation phenomena.
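Before performing this regression, each measured operating point must be reduced to the dimensionless groups of Eqs. (18)-(24). A minimal Python sketch of this reduction for the evaporator side is given below; the helper name, the geometry figures and the saturated-water properties (taken at about 40 °C) used in the illustrative call are assumptions for illustration, not the data of the present tests:

import math

g = 9.81  # gravitational acceleration, m/s^2

def evaporator_groups(Q, dT_ev, A_ev, D_o, t_w, lam_w, props):
    # Reduce one measured point to the dimensionless groups of Eqs. (18)-(24).
    # Q: heat input power (W), dT_ev: evaporator-adiabatic temperature difference (K),
    # A_ev: inner evaporator area (m^2), D_o: outer diameter (m),
    # t_w, lam_w: wall thickness (m) and wall thermal conductivity (W/m.K),
    # props: saturated liquid/vapor properties in SI units, with T_sat in K.
    q_ev = Q / A_ev                                    # evaporator heat flux
    h_ev = 1.0 / (dT_ev / q_ev - t_w / lam_w)          # Eq. (4)
    L_ev = math.sqrt(props["sigma"] / ((props["rho_l"] - props["rho_v"]) * g))  # Eq. (21)
    return {
        "Re": Q / (props["mu_l"] * math.pi * D_o * props["dh_v"]),                                # Eq. (18)
        "Pr": props["mu_l"] * props["cp_l"] / props["lam_l"],                                     # Eq. (19)
        "Nu": h_ev * L_ev / props["lam_l"],                                                       # Eq. (20)
        "Ja*": props["rho_l"] / props["rho_v"] * props["cp_l"] * props["T_sat"] / props["dh_v"],  # Eq. (23)
        "Kp": props["P_sat"] * L_ev / props["sigma"],                                             # Eq. (24)
    }

# Illustrative call (approximate saturated-water properties at ~40 °C, hypothetical geometry):
water_40C = dict(mu_l=6.5e-4, rho_l=992.0, rho_v=0.051, cp_l=4179.0,
                 lam_l=0.63, sigma=0.0696, dh_v=2.41e6, T_sat=313.15, P_sat=7.38e3)
print(evaporator_groups(Q=60.0, dT_ev=4.0, A_ev=2.0e-3, D_o=0.01,
                        t_w=5.0e-4, lam_w=380.0, props=water_40C))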
The experimental results are well correlated when considering A = 339.3, m1 = -0.978, m2 = -0.968, m3 = 0.205, and m4 = 1.586, for the evaporation phenomenon, and A = 10.1, m1 = 0.384, m2 = - 1.738, m3 = - 1.099, and m4 = 0, for the condensation phenomenon. The variations of the calculated Nusselt number as a function of the Nusselt number obtained experimentally are depicted in Figure 14. As it can be seen, the experimental Nusselt number for the heat transfer by evaporation and condensation are well correlated by Eq. (25). The deviations from the experimental results are± 35% and ± 30 % for the evaporation and condensation heat transfer, respectively. The validity of Eq. (25) is insured for the dimensionless numbers ranging in the intervals listed in Table 6. Table 6. Range of the dimensionless numbers in Eq. (25) 1 ≤ Re ≤ 16 0.24≤ Re ≤ 3.6 2.7 ≤Pr≤ 6.6 1 ≤Kp≤ 16 1.4 ≤Kp≤ 7.5 127 ≤Ja* ≤ 11,628 Figure 14. Variations of the Nusselt number obtained from the experimental results and that calculated by Eq. (25): (a) case of the evaporation, and (b) case of the condensation 5. Modeling the Capillary Limit and the Heat Transfer Within the Heat Pipe 5.1 Modeling of the capillary limit The proper operation of the heat pipe is insured when the capillary pumping, DPc, is capable to overcome the pressure losses in the liquid and vapor phases, DPl and DPv, as well as the hydrostatic pressure, DPg, according to $\Delta {{P}_{c}}\ge \text{ }\Delta {{{P}}_{{l}}}+\Delta {{P}_{v}}+\Delta {{P}_{g}}$ (26) The driving capillary pressure can be expressed by $\Delta {{P}_{c}}{ = 2 }\sigma \text{ cos}\left( \theta \right)\text{ }\left( \frac{1}{{{\text{r}}_{\text{ce}}}}-\frac{1}{{{r}_{cc}}} \right)\text{ }$ (27) s is the surface tension and q is the contact angle. rce and rcc are the minimum and the maximum capillary radii in the evaporator and condenser sections, respectively. They are given by the following expressions ${{r}_{ce}}=\frac{{{D}_{g}}}{1+\sin \left( \beta +\theta \right)}$ (28) ${{r}_{cc}}=\frac{{{D}_{g}}\tan \left( \beta \right)+0.5\text{ }{{W}_{gb}}}{\cos \left( \beta +\theta \right)}$ (29) The pressure losses in the liquid and vapor phases are written as [16] $\Delta {{P}_{v}}\text{ = }{{\text{F}}_{_{\text{v}}}}{{\text{L}}_{\text{eff}}}\text{ Q }$ (30) $\Delta {{P}_{l}}\text{ = }{{\text{F}}_{_{\text{l}}}}{{\text{L}}_{\text{eff}}}\text{ Q }$ (31) Leff is the effective length which is equal to La + 0.5 (Le + Lc) where Le, La, and Lc are the lengths of the evaporation, adiabatic and condensation zones, respectively. Fv and Fl are the friction coefficients in the vapor and liquid phases, and Q is the heat input power. The axial and radial hydrostatic pressures are expressed as follows [16] $\Delta {{P}_{g,axial}}\text{ = }{{\rho }_{\text{l}}}\text{ g }{{\text{L}}_{\text{t}}}\text{ sin }\left( \psi \right)$ (32) $\Delta {{P}_{g,radial}}\text{ = }{{\rho }_{\text{l}}}\text{ g }{{\text{D}}_{\text{v}}}\text{ cos }\left( \psi \right)$ (33) y is the tilt angle with the respect to the horizontal, and Dv is the vapor diameter (Dv = Do – 2 Dg). Hence, referring to Eq(26), the capillary limit can be expressed as ${{Q}_{\max }}\text{ }=\text{ }\frac{\left( \Delta {{\text{P}}_{\text{c}}}-\Delta {{P}_{g,radial}}\text{ }\pm \Delta {{P}_{g,axial}} \right)}{\left( {{\text{F}}_{\text{l}}}+{{F}_{v}} \right)\text{ }{{\text{L}}_{\text{eff}}}}$ (34) The sign (+) in the numerator of Eq. 
(34) corresponds to the thermosyphon position, for which the condenser is above the evaporator, and the sign (-) corresponds to the anti-gravity position, for which the evaporator is elevated above the condenser.
The vapor friction coefficient, $F_v$, is expressed as [16]
$F_v = \frac{\mu_v}{K_v\,\bar{A}_v\,\rho_v\,\Delta h_v}$ (35)
$\bar{A}_v$ is the mean vapor cross-section, and $K_v$ is the permeability for the vapor flow [16]
$K_v = \frac{D_{hv}^2}{2\,P_{ov}}$ with $D_{hv} = \frac{4\,\bar{A}_v}{p_v}$ (36)
$D_{hv}$ is the hydraulic diameter of the vapor phase, and $P_{ov}$ is the Poiseuille number, which is equal to 16 since the vapor flow cross-section is assumed to be circular. $p_v$ is the perimeter wetted by the vapor phase. The liquid friction coefficient, $F_l$, is given by [16]
$F_l = \frac{\mu_l}{K_g\,\bar{A}_l\,\rho_l\,\Delta h_v}$ (37)
$N_g$ is the number of grooves. $\bar{A}_l$ is the mean liquid cross-section, and $K_g$ is the groove permeability, which is given by the following relation [16]
$K_g = \frac{D_{hl}^2\,\phi_g}{2\,P_{ol}}$ with $D_{hl} = \frac{4\,\bar{A}_l}{p_l}$ (38)
$D_{hl}$ is the hydraulic diameter of the liquid phase, and $p_l$ is the perimeter wetted by the liquid. $\phi_g$ is the groove porosity, which is given by
$\phi_g = \frac{0.5\,W_{gb} + D_g/\tan\left(\pi/2 - \beta\right)}{W_{gt}}$ (39)
The Poiseuille number for the liquid flow, $P_{ol}$, is calculated as follows [17]:
if $A_s = D_g/W_{gb} < 1.5$
$P_{ol} = y_o + a\,\exp\left(-b\,A_s\right) + c\,A_s$
$\left\{\begin{array}{l} y_o = 6.391\,\theta^{0.121} \\ a = 137 - 5.008\,\theta + 0.07312\,\theta^{2} - 0.0003808\,\theta^{3} \\ b = 4.901 + 0.01448\,\theta \\ c = -0.8141 + 0.141\,\theta - 2.762\times 10^{-3}\,\theta^{2} - 1.758\times 10^{-5}\,\theta^{3} \end{array}\right.$ (40)
if $A_s = D_g/W_{gb} > 1.5$
$P_{ol} = a\,\exp\left(-0.5\left(\log\left(A_s/x_o\right)/b\right)^{2}\right)$
$\left\{\begin{array}{l} a = 11.23\,\theta^{0.09313} \\ b = 2.406\,\theta^{0.01303} \\ x_o = 19.29\,\theta^{-0.3836} \end{array}\right.$ (41)
The cross-sectional areas of the liquid and vapor phases depend on the curvature radius of the meniscus. In Eqns. (35)-(38), the mean values of Al and Av are considered. These values are determined by integrating the local cross-sectional areas along the heat pipe, assuming a linear variation of the curvature radius between the value taken in the evaporator section (rce) and that taken in the condenser section (rcc).
Hence, ${{ {\bar{A}}}_{\text{l}}}$and ${{ {\bar{A}}}_{\text{v}}}$are expressed as ${{\bar{A}}_{l}}={{a}_{l}}-{{b}_{l}}\text{ }\left( r_{ce}^{2}+{{r}_{ce}}\text{ }{{r}_{cc}}+r_{cc}^{2} \right)$ (42) ${{\bar{A}}_{v}}={{a}_{v}}+{{b}_{v}}\text{ }\left( r_{ce}^{2}+{{r}_{ce}}\text{ }{{r}_{cc}}+r_{cc}^{2} \right)$ (43) ${{a}_{l}}={{N}_{g}}\left( \left( {{W}_{gb}}+{{D}_{g}}\text{ tan}\left( \beta \right) \right)\text{ }{{D}_{g}} \right)$ (44) ${{b}_{l}}={{b}_{v}}={{N}_{g}}\frac{\left( \varphi -\cos \left( \varphi \right)\sin \left( \varphi \right) \right)}{3}$ (45) ${{a}_{v}}=\frac{\pi \text{ }D_{v}^{2}}{4}$ (46) $\varphi \text{ }=\text{ }\frac{\pi }{\text{2}}-\left( \beta +\theta \right)$ (47) The perimeters pv and pl are expressed by ${{p}_{l}}={{N}_{g}}\left( \frac{2\text{ }{{\text{D}}_{\text{g}}}}{\text{cos}\left( \beta \right)}+{{W}_{gb}} \right)$ (48) ${{p}_{v}}=\pi \text{ }{{D}_{v}}$ (49) 5.2 Modeling of the heat transfer The heat exchanges in a heat pipe are of various natures and they can be represented by the thermal resistance network shown in Figure 15. We can distinguish the thermal resistances, R1 and R7, due to the radial conduction through the evaporator and the condenser walls, the thermal resistance, R8, due to the axial conduction along the heat pipe wall, the thermal resistances, R2 and R6, due to the evaporation and condensation, the thermal resistances R3 and R5 due to the heat exchanges by phase change at the liquid-vapor interfaces, and the thermal resistance R4 due to the exchanges by convection between the vapor and the heat pipe wall. Figure 15. Thermal resistance network representing the heat exchanges in the heat pipe By supposing that the thermal resistance, R8, due to the axial conduction along the heat pipe is high, the heat pipe overall thermal resistance, Rtht can be expressed by ${{R}_{tht}}\text{ }=\text{ }\sum\limits_{\text{i}=\text{1}}^{\text{7}}{{{\text{R}}_{\text{i}}}}$ (50) The thermal resistances R3 and R5 are expressed by [16] $R_{3}^{{}}=R_{5}^{{}}=\frac{{{T}_{sat}}\sqrt{2\pi \text{ }r\text{ }{{T}_{sat}}}}{{{\rho }_{l}}\text{ }\Delta h_{v}^{2}}\frac{(2-{{a}_{c}})}{{{a}_{c}}}$ (51) ac is the accommodation coefficient and r is the gas constant. The thermal resistance R4 is given by [16] ${{R}_{4}}\text{ }=\text{ }\frac{{{\text{T}}_{\text{sat}}}\text{ }\Delta {{\text{P}}_{\text{v}}}}{{{\rho }_{\text{v}}}\text{ }\Delta {{h}_{v}}\text{ Q}}$ (52) The wall thermal resistances, R1,7, are given by ${{R}_{1,7}}\text{ }=\text{ }\frac{\text{1}}{\text{2 }\pi \text{ }\lambda {{\text{ }}_{\text{w }}}\ell }\text{ ln}\left( \frac{{{\text{D}}_{\text{o}}}}{{{\text{D}}_{\text{i}}}} \right)$ (53) Do et Di are the outer and inner diameters, respectively. lw is the wall thermal conductivity. R1and R7 are calculated by taking $\ell \text{ }=\text{ }{{\text{L}}_{\text{e}}}$ and $\ell \text{ }=\text{ }{{\text{L}}_{\text{c}}}$, respectively. R2 and R6 are calculated according to ${{R}_{2}}\text{ }=\text{ }\frac{\text{1}}{{{\text{h}}_{\text{ev}}}\text{ }{{\text{A}}_{\text{ev}}}}$ (54) ${{R}_{6}}\text{ }=\text{ }\frac{\text{1}}{{{\text{h}}_{\text{c}}}\text{ }{{\text{A}}_{\text{c}}}}$ (55) hev and hc are the heat transfer coefficient of evaporation and condensation, respectively. They are determined from Eq. (25). Aev and Ac are the evaporator and condensation heat transfer areas, respectively. 
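The relations of Sections 5.1 and 5.2 can be assembled into a compact routine. The Python sketch below implements the capillary limit of Eq. (34) and the series resistance network of Eqs. (50)-(55); it assumes that the capillary pressure, the hydrostatic terms and the friction coefficients have already been evaluated from Eqs. (27)-(33) and (35)-(41), that hev and hc come from the correlation of Eq. (25), that temperatures are expressed in kelvin, and that r is taken as the specific gas constant of water vapor (about 461.5 J/kg.K); the function names are illustrative. The iterative procedure of Section 5.3 below simply calls such routines until the saturation temperature converges.

import math

R_WATER_VAPOR = 461.5  # specific gas constant of water vapor, J/(kg.K)

def capillary_limit(dP_c, dP_g_radial, dP_g_axial, F_l, F_v, L_eff, thermosyphon=True):
    # Maximum transportable heat input power, Eq. (34).
    # dP_c: capillary pressure (Pa); dP_g_radial, dP_g_axial: hydrostatic terms (Pa);
    # F_l, F_v: liquid/vapor friction coefficients (Pa/(W.m)); L_eff: effective length (m).
    sign = 1.0 if thermosyphon else -1.0
    return (dP_c - dP_g_radial + sign * dP_g_axial) / ((F_l + F_v) * L_eff)

def overall_resistance(T_sat, dP_v, Q, h_ev, h_c, A_ev, A_c,
                       D_o, D_i, L_e, L_c, lam_w, rho_l, rho_v, dh_v, a_c=1.0):
    # Series thermal resistance R1 + ... + R7 of Figure 15, Eqs. (50)-(55), with T_sat in K.
    R1 = math.log(D_o / D_i) / (2.0 * math.pi * lam_w * L_e)    # Eq. (53), evaporator wall
    R7 = math.log(D_o / D_i) / (2.0 * math.pi * lam_w * L_c)    # Eq. (53), condenser wall
    R2 = 1.0 / (h_ev * A_ev)                                    # Eq. (54), evaporation
    R6 = 1.0 / (h_c * A_c)                                      # Eq. (55), condensation
    R3 = R5 = (T_sat * math.sqrt(2.0 * math.pi * R_WATER_VAPOR * T_sat)
               / (rho_l * dh_v ** 2)) * (2.0 - a_c) / a_c       # Eq. (51), liquid-vapor interfaces
    R4 = T_sat * dP_v / (rho_v * dh_v * Q)                      # Eq. (52), vapor flow
    return R1 + R2 + R3 + R4 + R5 + R6 + R7                     # Eq. (50)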
5.3 Calculation procedure The calculation procedure is as follows: Fixing and calculating the main geometrical characteristics of the heat pipe (overall length, lengths of the different zones, effective length, outer diameter, thickness, inner diameter at the top of the grooves, and the inner diameter at the bottom of the grooves), Fixing the main geometrical characteristics of the grooves (groove depth, Dg, width at the groove top, width at the groove bottom, angles α and b), Fixing the tilt angle, ψ, and the contact angle, θ, Fixing an arbitrary saturation temperature, Tsatinitial, Fixing the external wall temperature of the condenser (boundary condition), Calculating the thermophysical properties at the saturation and film temperatures, Calculating the capillary pressure according to Eq. (27), Calculating the liquid and vapor pressure losses according to Eqns. (30) and (31), Calculation the axial and radial hydrostatic pressures according to Eqns. (32)-(33), Calculating the capillary limit according to Eq. (34), Calculating the thermal resistances according to Eqns. (50)-(55), Calculating the new saturation temperature according to the following expression ${{T}_{satcalculated}}=\text{ }{{\text{T}}_{\text{wc}}}+\left( {{R}_{5}}+{{R}_{6}}+{{R}_{7}} \right)\text{ }{{\text{Q}}_{\text{max}}}$ (56) Comparing Tsatcalculated to Tsatinitial and steps 4-13 are repeated until a convergence on Tsat is insured, Calculating the evaporator wall temperature according to the following equation ${{T}_{ev}}=\text{ }{{\text{T}}_{\text{sat}}}+\left( {{R}_{1}}+{{R}_{2}}+{{R}_{3}} \right)\text{ }{{\text{Q}}_{\text{max}}}$ (57) Editing all the results: capillary pressure, pressure losses, capillary limit, thermal resistances, saturation temperature, and evaporator wall temperature. 5.4 Comparison between the model results and the experimental data The variations of the capillary limit, Qmax, obtained experimentally and that obtained theoretically from the model are depicted in Figure 16. The capillary limit increases with the heat sink temperature. This is mainly due to the decrease of the liquid pressure drop with the temperature (the vapor pressure drop is very negligible compared to the liquid one). The relative discrepancy between the experimental data and the calculated ones ranges between -1.7 % and +7.9 % indicating a very good agreement if we take into consideration the uncertainty on the experimental results. Note that this agreement depends on the value of the contact angle that is considered in the model. The value of the contact angle, which gives the best agreement, is 40 °. The variations of the overall heat pipe thermal resistance, Rtht, corresponding to the capillary limit, Qmax, are plotted in Figure 17. A good agreement is obtained between the experimental results and those determined from the model. Hence, the relative discrepancy between the experimental results and those issued from the model ranges between -1.7 % and + 7.9 %. Figure 16. Variations of Qmax as a function of Ths The variations of the evaporator wall temperature, Twev, are depicted in Figure 18. Twev increases with Ths and a good agreement is obtained between the experimental results and those calculated from the model. The relative discrepancy ranges between +4.4 % and 9.5 % indicating that the model overestimates slightly the experimental results. Figure 17. Variations of Rtht as a function of Ths Figure 18. 
In this study, a copper-water cylindrical heat pipe was manufactured and tested in order to determine its thermal performance for different heat sink temperatures and heat input powers. For a given heat sink temperature, it is demonstrated that the evaporation process is altered when the heat input power increases. However, the condensation process is enhanced when the heat input power increases. Furthermore, both the evaporation and condensation processes are enhanced when the heat sink temperature increases, whatever the heat input power. Heat transfer laws are proposed based on a dimensionless analysis of the evaporation and condensation phenomena. A model is developed in order to determine the capillary limit as well as the heat transfer within the heat pipe. The model can predict the experimental results within -1.7 % and +7.9 % when estimating the capillary limit, underestimates the heat pipe overall thermal resistance within -17.8 % and -9.7 %, and overestimates the evaporator temperature within 4.4 % and 9.5 %.

parameter defined in Eqns. (40) and (41) constant in Eq. (25) accommodation coefficient inner area of the condenser section, m² inner area of the evaporator section, m² parameter defined by Eq. (44) $\bar{A}_{v}$ mean vapor cross-section, m² $\bar{A}_{l}$ mean liquid cross-section, m² parameter defined by Eqns. (40) and (41) bias limit contribution specific heat, J.kg-1.K-1 outer diameter, m groove depth, m hydraulic diameter of the liquid phase, m hydraulic diameter of the vapor phase, m vapor diameter, m friction factor, Pa/W.m gravitational acceleration, m/s² heat transfer coefficient of condensation, W/m².K heat transfer coefficient of evaporation, W/m².K Ja* modified Jakob number permeability, m² Kutateladze number reference length, m condenser length, m evaporator length, m effective length, m overall length of the heat pipe, m m1, m2, m3, m4 constants in Eq. (25) number of grooves Nusselt number perimeter, m precision limit contribution Poiseuille number saturation pressure, Pa heat input power, W heat flux at the condenser section, W/m² qev heat flux at the evaporator section, W/m² gas constant, J/kg.K thermal resistance due to thermal conduction through the evaporator wall, K/W thermal resistance due to evaporation, K/W liquid-vapor interfacial thermal resistance, K/W thermal resistance due to the exchanges between the vapor and the heat pipe wall, K/W thermal resistance due to condensation, K/W thermal resistance due to thermal conduction through the condenser wall, K/W thermal resistance due to thermal conduction along the heat pipe, K/W capillary radius, m Reynolds number heat pipe overall thermal resistance, K/W Rtha thermal resistance of the adiabatic zone, K/W Rthc condenser thermal resistance, K/W Rthcond thermal resistance of condensation, K/W Rthev evaporator thermal resistance, K/W Rthevap thermal resistance of evaporation, K/W Rthwallc thermal resistance of the condenser wall, K/W Rthwallev thermal resistance of the evaporator wall, K/W groove spacing, m $\bar{T}_{ad}$ average temperature of the adiabatic zone, °C $\bar{T}_{c}$ average temperature of the condenser, °C $\bar{T}_{ev}$ average temperature of the evaporator, °C film temperature, °C heat sink temperature, °C wall temperature, °C Tsat saturation temperature, °C wall thickness, m width at the base of the groove, m width at the top of the groove, m constant defined by Eqns.
(40) and (41) Greek symbols angle defined in Figure 2, ° $\Delta {{{h}}_{{v}}}$ latent heat of vaporization, J/kg $\Delta {{{P}}_{{c}}}$ capillary pressure, Pa $\Delta {{{P}}_{{g}}}$ hydrostatic pressure, Pa $\Delta {{{P}}_{{l}}}$ liquid pressure loss, Pa $\Delta {{{P}}_{{v}}}$ vapor pressure loss, Pa $\Delta {{{\bar{T}}}_{{c}}}$ average temperature difference between the condenser and the adiabatic sections, K $\Delta {{{\bar{T}}}_{{ev}}}$ average temperature difference between the evaporator and adiabatic sections, K $\Delta {{{\bar{T}}}_{{hp}}}$ average temperature difference between the evaporator and condenser sections, K contact angle, ° thermal conductivity, W/m.K dynamic viscosity, kg. m-1.s-1 kinematic viscosity, m/s² density, kg/m3 surface tension, N/m condenser, condensation evaporator, evaporation evap groove, hydrostatic groove bottom groove top [1] Tathgir RG, Singh G. (1983). Experimental study of a grooved heat pipe at moderate temperature range. Journal of Institution of Engineers (India): Mechanical Engineering division 64(3): 116-119. [2] Schlitt R. (1995). Performance characteristics of recently developed high-performance heat pipes. Heat Transfer Engineering 16(1): 44-52. https://doi.org/10.1080/01457639508939844 [3] Lataoui Z, Romestant C, Bertin Y, Jemni A, Petit D. (2008). Experimental investigation on the thermal behavior and performance of an axially grooved heat pipe. International Journal of Heat and Technology 26(2): 155-162. [4] Zhang C, Shi M, Wu J. (2008). Flow and heat transfer characteristics of heat with axial "Ω" shaped grooves. Journal of Chemical Industry and Engineering (China) 59(3): 544-550. [5] Zhu WF, Chen YP, Zhang CB, Shi M. (2009). Flowing and heat transfer characteristics of heat pipe with axially shallow-tailed microgrooves. Journal of Astronautics 30(6): 2380-2386. [6] Yao F, Chen YP, Zhang CB, Shi MH. (2011). Startup characteristics of heat pipe with axially "Ω" shaped grooves. Journal of Engineering Thermophysics 32(12): 2117-2119. [7] Yang KM, Wang NH, Jiang CH, Cheng L. (2012). An investigation of the thermal performance of a novel axial grooved heat pipe. Advanced Materials Research 580: 223-226. https://doi.org/10.4028/www.scientific.net/amr.580.223 [8] Yang KM, Wang NH, Jiang CH, Cheng L. (2012). Study on heat transfer characteristics of heat pipe with axial "Ω" shaped microgrooves. Advances Materials Research 580: 297-300. https://doi.org/10.4028/www.scientific.net/amr.580.297 [9] Bertoldo Junior J, Vlassov VV, Genaro G, Tuden Viera Gedes U. (2015). Dynamic test method to determine the capillary limit of axially grooved heat pipes. Experimental Thermal Sciences 60: 290-298. https://doi.org/10.1016/j.expthermflusci.2014.10.002 [10] Driss A, Maalej S, Zaghdoudi MC. (2016). Experimentation and modeling of the steady-state and transient thermal performances of a helicoidally grooved cylindrical heat pipe. Microelectronics Reliability 62: 102-112. https://doi.org/10.1016/j.microrel.2016.03.022 [11] Yang K, Mao Y, Cong Z, Zhang X. (2017). Experimental research of novel aluminum-ammonia heat pipes. Procedia Engineering 205: 3923-3930. https://doi.org/10.1016/j.proeng.2017.10.032 [12] Pis'mennyi EN, Khayrnasov SM, Rassamakin BM. (2018). Heat transfer in the evaporation zone of aluminum grooved heat pipes. Int. J. Heat and Mass Transfer 127: 80-88. https://doi.org/10.1016/j.ijheatmasstransfer.2018.07.154 [13] Ömür C, Bilge Uygur A, Howz I, GürgüçIsik H, Ayan S, Konar M. (2018). 
Incorporating of manufacturing constraints into an algorithm for the determination of maximum heat transport capacity of extruded axially grooved heat pipes. International Journal of Thermal Sciences 123: 181-190. https://doi.org/10.1016/j.ijthermalsci.2017.09.016 [14] Kline SJ, McClintock FA. (1953). The description of uncertainties in single sample experiments. Mechanical Engineering, ASME 75: 3-8. [15] Mansouri J, Maalej S, Sassi MBH, Zaghdoudi MC. (2011). Experimental study on the thermal performance of enhanced flat miniature heat pipes. International Review of Mechanical Engineering 5(1): 196-208. [16] Faghri A. (1995). Heat pipe science and technology. 1st Edition. Taylor and Francis. [17] Kim SJ, Seo JK, Do KH. (2003). Analytical and experimental investigation of the operational characteristics and the thermal optimization of a miniature heat pipe with a grooved wick structure. Int. J. Heat and Mass Transfer 46: 2051-2063. https://doi.org/10.1016/S0017-9310(02)00504-5
Real-time brain computer interface using imaginary movements Ahmad El-Madani1, Helge BD Sorensen1, Troels W. Kjær1, Carsten E. Thomsen1 & Sadasivan Puthusserypady1 Brain Computer Interface (BCI) is the method of transforming mental thoughts and imagination into actions. A real-time BCI system can improve the quality of life of patients with severe neuromuscular disorders by enabling them to communicate with the outside world. In this paper, the implementation of a 2-class real-time BCI system based on the event related desynchronization (ERD) of the sensorimotor rhythms (SMR) is described. Off-line measurements were conducted on 12 healthy test subjects with 3 different feedback systems (cross, basket and bars). From the collected electroencephalogram (EEG) data, the optimum frequency bands for each of the subjects were determined first through an exhaustive search on 325 bandpass filters. The features were then extracted for the left and right hand imaginary movements using the Common Spatial Pattern (CSP) method. Subsequently, a Bayes linear classifier (BLC) was developed and used for signal classification. These three subject-specific settings were preserved for the on-line experiments with the same feedback systems. Six of the 12 subjects were qualified for the on-line experiments based on their high off-line classification accuracies (CAs > 75 %). The overall mean on-line accuracy was found to be 80%. The subject-specific settings applied on the feedback systems have resulted in the development of a successful real-time BCI system with high accuracies. Brain Computer Interface (BCI) - the method of transforming mental thoughts and imagination into actions has been a very interesting and challenging research topic of neuroscience in recent years. The primary reason for such high interest is that it helps to improve the quality of life of patients with severe neuromuscular disorders. BCI based systems enable such patients to communicate with the outside world even without the output channels of peripheral nerves and muscles [1–5]. They can be used for the purpose of communication (e.g. spelling device) [6, 7], interaction with external devices (e.g. controlling a wheelchair) [8, 9], rehabilitation [10, 11] and/or for monitoring the mental states [12, 13]. Various types of non-invasive BCI systems have been developed using different types of brain signals (electroencephalograms (EEGs)). Commonly used EEG signals include the event-related P300 potentials [14–18], steady-state visual evoked potentials (SSVEPs) [19–23], and the motor imagery (MI)-related rhythms [24–30]. In these, the P300 potentials and SSVEPs are evoked by external stimuli and the MI rhythms are voluntarily modulated by the subjects. It has been well studied that the imagination of movements of left and right hands results in the event-related desynchronizing (ERD) of the sensory motor rhythms (SMR) in the contralateral sensorimotor areas and event related synchronization (ERS) on the ipsilateral side [31, 32]. The corresponding distinguishable features in the EEG signals can be used to design MI-based BCI systems [24–30]. Many BCI studies have reported good offline results with high accuracies. However, a BCI-system becomes interesting when it is able to work in real time. In this paper, a real-time MI-based BCI system was developed in which the imagination of the movements of left and right hands were tested resulting in a system with 2-class output [33–35]. 
Figure 1 illustrates the schematic of the BCI system which has been developed in our laboratory at the Technical University of Denmark (DTU), named as the DTU-BCI scheme from here on. Using this set-up, offline experiments were first conducted on 12 test subjects with the imaginary left/right hand movements as a calibration session to: (i) Determine the optimal frequencies that give the best discrimination between the two classes, (ii) Create a feature extraction filter that maximizes the distance between the two class features, and (iii) Train a classifier for online measurements. In the online measurements, first the online data are filtered with the optimal bandpass filters (obtained from the offline analysis), then the features are extracted using the feature extraction procedure from the offline analysis, and finally the feature vector is classified using the trained classifier. The DTU-BCI scheme In the DTU-BCI set-up shown in Fig. 1, 28 EEG surface electrodes placed on (and around) the motor cortex has been used [7, 36, 37]. Furthermore, EMG electrodes were placed on both the arm wrists during the offline measurements to verify the passivity of the arm muscles. Twelve healthy test-subjects (seven males and five females at an average age of 23±2.6 years) took part in this study. None of them had previous history of neurological diseases or disorders that may influence the experiments. Each participant went through an Edinburgh Handedness test [38]. The handedness test showed that six males and all females were right handed, and only one male was left-handed. Each test-subject was given instruction about the measurement procedures and protocols before the first session. All subjects received remuneration for their participation. Several studies have shown that the ERD signals can be localized at the sensorimotor cortex [39]. However, the SMR waves in the EEG are generally weak and it is impossible to classify the raw EEG directly [1, 40]. Therefore, EEG data were processed in order to extract the relevant features of the SMR which are distinguishable to be used as different control signals in a BCI set-up. Feature extraction We used the Common Spatial Patterns (CSP) algorithm to extract the features from the collected EEG as it has been shown to be very efficient in extracting the features with 2-class BCI systems based on movement imagination [7, 34]. In our approach, the CSP filter was found from the labeled offline data − a large matrix (V) of dimension N×T, where N(=28) is the number of channels and T is the number of samples in each channel (depends on the window length). The data matrix contains 160 mixed trials, 80 trials of right hand imaginary movements (r−trials) and 80 trials of left hand imagery movements (l−trials). Let V r and V l be the r−labeled and l−labeled trials, respectively. The corresponding covariance matrices, Σ r and Σ l , were estimated to calculate Σ c , the composite spatial covariance matrix of the data. $$ {\Sigma}_{c} = \overline{{\Sigma}}_{r} + \overline{{\Sigma}}_{l}, $$ ((1)) where the bar represents the averages. Using the eigenvector and eigenvalues of Σ c , we transformed the data into the eigenvector space: $$ \textbf{Y}_{r} = \textbf{B}_{c}^{T} \textbf{V}_{r} ~~~~ \text{and} ~~~~\textbf{Y}_{l} = \textbf{B}_{c}^{T} \textbf{V}_{l} $$ The next step was the whitening-transformation of the data. 
Using, \(\textbf {W} = \lambda _{c}^{-\frac {1}{2}}\textbf {B}_{c}^{T}\), both \(\overline {{\Sigma }}_{r}\) and \(\overline {{\Sigma }}_{l}\) were whitened as: $$ \textbf{S}_{r} = \textbf{W} \overline{{\Sigma}}_{r} \textbf{W}^{T}~~~~ \text{and} ~~~~\textbf{S}_{l} = \textbf{W} \overline{{\Sigma}}_{l} \textbf{W}^{T}. $$ Note that both S r and S l share the same eigenvectors [41], and hence they can be expressed as: $$ \textbf{S}_{r} = \textbf{B} \lambda_{r} \textbf{B}^{T}~~~\text{and} ~~~\textbf{S}_{l} = \textbf{B} \lambda_{l} \textbf{B}^{T}~~~\text{where} ~~~\lambda_{r}+\lambda_{l} = \textbf{I}. $$ The fact that λ r +λ l =I leads to an important and beneficial condition. This implies that the eigenvector with the largest eigenvalue in S r has the smallest eigenvalue in S l . In order to reduce the dimension, the m (=3 in our case) largest and m smallest eigenvalues with their corresponding eigenvectors were extracted and used for the data transformation. Using W, the final CSP filter was: $$ \textbf{F}_{csp} =\textbf{B}_{m}\textbf{W}. $$ The raw data matrix V was then projected onto the CSP space as follows: $$ \textbf{Z}=\textbf{F}_{csp}\textbf{V}. $$ Thus, the dimension was reduced from 28×T to 2 m×T. The variances along 2m rows in Z were calculated and normalized in order to extract the features (x). $$ \textbf{x} = \left(\begin{array}{c} x_{1} \\ x_{2} \\ \vdots \\ x_{2m} \end{array}\right); ~~~\text{where} ~~~x_{i} = \log\left(\frac{var(Z_{i})}{\sum_{k=1}^{2m}var(Z_{k})}\right) $$ The feature vector (x) extracted from the EEG data need to be classified as l or r. In this work, the Bayes linear classifier (BLC) is used which is known to be optimal when the attributes are independent given the class [42]. It is one of the simplest classifiers and is based on minimizing the classification error probability [41, 43]. The classifier calculates the conditional probabilities P(w l |x) and P(w r |x), and the class with the largest probability given x is chosen. Assuming that both classes are Gaussian [40], and have equal covariance matrices, i.e., Σ l =Σ r =Σ, we obtain: $$\begin{array}{@{}rcl@{}} -0.5\textbf{x}_{l}^{T}{\Sigma}^{-1}\textbf{x}_{l}+c > -0.5\textbf{x}_{r}^{T}{\Sigma}^{-1}\textbf{x}_{r}+c & \Rightarrow & l \end{array} $$ $$\begin{array}{@{}rcl@{}} -0.5\textbf{x}_{l}^{T}{\Sigma}^{-1}\textbf{x}_{l}+c < -0.5\textbf{x}_{r}^{T}{\Sigma}^{-1}\textbf{x}_{r}+c & \Rightarrow & r \end{array} $$ where x l =x−μ l and x r =x−μ r . If we designate \(D_{l} ={\mu _{l}^{T}}{\Sigma }^{-1}\textbf {x}-0.5{\mu _{l}^{T}}{\Sigma }^{-1}\mu _{l} \) and \(D_{r}={\mu _{r}^{T}}{\Sigma }^{-1}\textbf {x}-0.5{\mu _{r}^{T}}{\Sigma }^{-1}\mu _{r}\), then it implies that, $$\begin{array}{@{}rcl@{}} D_{r} &> &D_{l} \Rightarrow r \end{array} $$ $$\begin{array}{@{}rcl@{}} D_{r} &< & D_{l} \Rightarrow l \end{array} $$ In order to perform an online classification using the equations above, the covariance matrix Σ and the two class means μ l and μ r were to be known. These were calculated from the offline data using 3×5-fold cross-validation. In the online BCI, no cross-validation processes were applied. Instead, data from a sliding window was first bandpass filtered and then CSP-filtered using parameters extracted from the offline measurements. Features of each online data segments were classified using the BLC. Each classification was then fed in to a voting system that gave the final classification accuracy (CA) after e.g., 8 data segments. 
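A compact NumPy sketch of this offline pipeline (the CSP filter of Eqs. (1)-(6), the log-variance features of Eq. (7), and the statistics needed by the BLC) is given below. The function names are ours; the trace normalisation of the spatial covariances and the symmetric eigendecomposition used for whitening are common implementation choices that the text does not spell out, so they should be read as assumptions.

```python
import numpy as np

def csp_filter(trials_l, trials_r, m=3):
    """CSP spatial filter from labelled offline trials (Eqs. (1)-(5)).
    trials_* : lists of (channels x samples) arrays. Returns a (2m x channels) filter."""
    def cov(V):
        C = V @ V.T
        return C / np.trace(C)                      # normalised spatial covariance
    S_l = np.mean([cov(V) for V in trials_l], axis=0)
    S_r = np.mean([cov(V) for V in trials_r], axis=0)
    lam_c, B_c = np.linalg.eigh(S_l + S_r)          # composite covariance, Eq. (1)
    W = np.diag(lam_c ** -0.5) @ B_c.T              # whitening transform
    lam_r, B = np.linalg.eigh(W @ S_r @ W.T)        # shared eigenvectors, Eqs. (3)-(4)
    order = np.argsort(lam_r)
    keep = np.concatenate((order[:m], order[-m:]))  # m smallest + m largest eigenvalues
    return B[:, keep].T @ W                         # F_csp, Eq. (5)

def log_var_features(F_csp, V):
    """Log-variance feature vector of Eq. (7) for one trial V (channels x samples)."""
    Z = F_csp @ V                                   # projection, Eq. (6)
    var = Z.var(axis=1)
    return np.log(var / var.sum())

def train_blc(X_l, X_r):
    """Class means and (inverse) pooled covariance used by the Bayes linear classifier."""
    mu_l, mu_r = X_l.mean(axis=0), X_r.mean(axis=0)
    Sigma = np.cov(np.vstack((X_l - mu_l, X_r - mu_r)).T)
    return mu_l, mu_r, np.linalg.inv(Sigma)
```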
The following parameters were used in the voting system: segment length range of 0.5−1.5 sec, the window overlapping of 90 and 95 %, and the number of segments of 6−15. All test parameters were individually selected using a graphical user interface. Though the estimated parameters from offline analysis has been used in the online measurements, the data from the online measurements could show considerable variation [44]. These variations may be due to non-stationeries caused by the small changes in electrode positions, drying conductive gel or electrodes with high impedances, brain plasticity, especially after several sessions, or variations in the cognitive state of the user, e.g. motivation, attention etc. This classifier uses the standard classification rules from Eqs. (10) and (11). The rules can be expressed as: $$\begin{array}{@{}rcl@{}} D_{r} - D_{l} & > & 0 \Rightarrow r \end{array} $$ $$\begin{array}{@{}rcl@{}} D_{r} - D_{l} & < & 0 \Rightarrow l \end{array} $$ Since the data are assumed to be Gaussian distributed, the expression (D r −D l ) is also Gaussian. To illustrate the decision criteria in BLC, Fig. 2 shows the probability density function (pdf) of (D r −D l ) when performing the left and right hand movement imagination. Probability density functions of during left (red dotted curve) and right (blue solid curve) hand imagery using NBLC. The threshold is equal to 0 The red dotted line in the figure represents the pdf of (D r −D l ) when performing imaginary left hand movements and the blue solid line represents the pdf of (D r −D l ) when performing imaginary right hand movements. The vertical green line is the decision threshold (=0). It is worth noting, that even though the subject-specific CSP filter maximizes the separation between the two classes, there will always be an overlap (due to the nature of the distribution). And because of the variations mentioned earlier, the means of (D r −D l ) for left and right imaginary movements are unlikely to be equidistant to the threshold. Therefore, this classifier may result in lower classification accuracy in online BCI. Feedback systems To ensure that the output of the online signal processing is utilized to its full, a proper feedback paradigm should be designed. A feedback system receives the control signal consisting of commands (left, right, no classifications), and convert them to a visualized event on the screen. We used three different feedback systems: Cross feedback, Basket feedback, and Bar feedback (Fig. 3). Feedback systems used: a Cross, b Basket and c Bars Cross feedback This consists of a blue cross in the middle of the screen, and two grey bars (the left goal and the right goal), on each side of the screen (Fig. 3 a). The objective is to move the cross to the left or the right goal, by performing imaginary left or right hand tapping, respectively. At the beginning of each trial, one of the two goals gets red and becomes the target goal. The cross moves one step to the side that corresponds to the online command. In other words, if the final classification decision is left (l), the cross moves one step to the left. If right (r), then the cross moves one step to the right. The cross stays in the same position if the command is "no classification". Each session consist of ten pseudo-random trials, five left and five right. After each trial, a 5-s pause is given. Once the cross has reached one of the goals, the system saves the selection and ends the trial. 
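To make the online decision chain concrete, the sketch below combines the decision rule of Eqs. (12)-(13) with a simple voting step and the Cross feedback update (one step per final classification, ten steps to a goal). The majority-vote rule is our assumption; the paper only states that the per-segment classifications are fed into a voting system.

```python
def blc_decision(x, mu_l, mu_r, Sigma_inv):
    """D_r - D_l of Eqs. (12)-(13); positive means right, negative means left."""
    D_r = mu_r @ Sigma_inv @ x - 0.5 * mu_r @ Sigma_inv @ mu_r
    D_l = mu_l @ Sigma_inv @ x - 0.5 * mu_l @ Sigma_inv @ mu_l
    return D_r - D_l

def vote(decisions, min_votes=8):
    """Majority vote over the per-segment decisions of one window block.
    Returns 'right', 'left' or None (no classification)."""
    if len(decisions) < min_votes:
        return None
    rights = sum(d > 0 for d in decisions)
    lefts = len(decisions) - rights
    if rights == lefts:
        return None
    return 'right' if rights > lefts else 'left'

def update_cross(position, command, goal_distance=10):
    """Cross feedback: one step per final classification; trial ends at a goal."""
    if command == 'right':
        position += 1
    elif command == 'left':
        position -= 1
    # 'no classification' (None) leaves the cross where it is
    done = abs(position) >= goal_distance
    return position, done
```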
Basket feedback This system consists of a blue ball at the top of the screen, and two goals at the bottom (Fig. 3 b). When the trial begins, the ball starts falling with a constant speed. The objective is to move the ball, by means of imaginary hand movements, to the target goal (the one with red color). The ball speed can be adjusted before running the measurement. This system also had the same trial construction as the cross paradigm. Each test-subject went through this paradigm several times, and the ball speed has been varied for some of the subjects. Table 4 lists the mean trial durations (MTDs) and CAs of the subjects performing the Basket Feedback. Bar feedback There is no object to move in this feedback system. The feedback consists of two bars, one on each side of the screen. These bars are empty, and are filled gradually after each final classification (Fig. 3 c). The trial begins with an arrow appearing on the middle, instructing the test subject, which imaginary movement to perform, i.e. which bar to fill up. If the arrow is pointing to the right, then the test subject has to imagine right hand movement in order to fill up the right bar, and vice versa. It takes 10 steps to fill a bar up. Once one of the bars is filled, a selection is made. Therefore, a trial has a minimum duration of ten steps and maximum duration of 19 successful steps. The step duration is the time between two final classifications. Results and discussions The most important experiments and the results are presented in this section. The results are based on about 100 h of measurements and many hours of measurements preparation. Offline BCI The preliminary offline measurements were carried out to deselect BCI illiterates and to calculate the filters and other parameters for the test subjects [45, 46]. Each offline calibration measurement consisted of 80 left and 80 right trials in a pseudo-random order. ERD plots One way of visualizing the offline data is to generate ERD plots (showing the distribution of power) of the data from C3 and C4 electrode locations. The data from these locations are divided into segments corresponding to left and right hand movement trials. The trial averaged power is then calculated and the corresponding ERD plots for subject 9 is shown in Fig. 4. ERD plots of C3 and C4 during left and right hand imagery movements from subject 9's offline data. These plots illustrate the spectral power change versus time during imagery compared to the reference interval. Notice the power decease in C4 during left hand imagery (a), and the power decrease in C3 during right hand imagery (b) The ERD plots show larger power in C4 than in C3 during left hand movement imagination. On the other hand, C3 has large power than C4 during right hand movement imagination. Recall that C3 and C4 are located on the left and right sensorimotor cortex hand areas, respectively. Recall also, that the left sensorimotor cortex is responsible for the contralateral (right) body movements, and vice versa. The ERD plots confirm this phenomenon during movement imagination. It is worth noting, that the power is not uniformly distributed along the frequency axis. Figure 4 shows that the most significant power changes occur at 5–12 Hz. Another active frequency band is 18–23 Hz. When comparing across the subjects, we do not find precisely same patterns regarding active frequency bands. This finding confirms our objective to select subject-specific frequency bands in online BCI. 
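The ERD maps above can be reproduced, in outline, by averaging the spectral power over trials and expressing it relative to a reference interval. The paper does not state the exact time-frequency method, so the spectrogram parameters and the reference window in this sketch are assumptions.

```python
import numpy as np
from scipy.signal import spectrogram

def erd_map(trials, fs, ref_window=(0.0, 1.0)):
    """Trial-averaged ERD map for one channel (e.g. C3 or C4).
    trials : array (n_trials x n_samples); fs : integer sampling rate in Hz;
    ref_window : reference interval in seconds relative to trial start.
    Returns frequencies, times and ERD in percent (negative values = ERD)."""
    powers = []
    for x in trials:
        f, t, Sxx = spectrogram(x, fs=fs, nperseg=fs, noverlap=fs // 2)
        powers.append(Sxx)
    P = np.mean(powers, axis=0)                       # trial-averaged power
    ref = (t >= ref_window[0]) & (t < ref_window[1])
    P_ref = P[:, ref].mean(axis=1, keepdims=True)     # reference-interval power
    erd = 100.0 * (P - P_ref) / P_ref                 # relative power change
    return f, t, erd
```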
Optimal offline results

The offline data were analyzed using a total of 325 bandpass filters. For each pass-band, a CSP filter was calculated and applied on the data. Finally, the filtered data were classified and the CA was found. This exhaustive analysis was performed on a computer cluster via the DTU Newton server. The optimal frequencies and accuracies of each test-subject are listed in Table 1.

Table 1 The offline results of all test-subjects. The optimal accuracies listed in the right column are found by analyzing the offline data using different bandpass filters. The optimal frequency ranges listed in the middle column are the bandpass filters that result in optimal classification with the optimal accuracy. While recording the EEG data, the electrode impedance was kept under 5 kΩ

It can be seen that subject 9 has the highest offline CA (99.38 %). Subjects 10 and 12 also have high CAs. It is important to note that the optimal frequency ranges are different across the subjects. Only two subjects (1 and 9) have optimal results with frequencies roughly in the α-band. The remaining subjects have optimal frequencies either in the β-band or in a combined α−β-band. Based on the offline results, the subjects were divided into different categories (Table 2).

Table 2 Subject categories according to the offline analysis on CAs

This shows that half of the subjects had accuracies below 75 %. Since a well-functioning online BCI requires relatively high accuracies, only the subjects with accuracies above 75 % (six subjects: 2, 4, 7, 9, 10 and 12) have been considered for online measurements.

Online BCI

The Cross feedback system was tested 15 times on average for each of the six test subjects in order to reach the CAs and MTDs shown in Table 3. Considering the CAs, it was found that all the test subjects except subject 2 could control the cross easily. Also, it is worth noting that in terms of the MTDs, two of the standard deviations are extremely high (in subjects 2 and 7). By studying the individual sessions of subject 7, it was found that the first session (MTD of 52 s) differed significantly from the rest of the sessions (9.89±3.37 seconds). Regarding subject 2, the huge standard deviation is realistic, since low accuracy generally leads to frequent misclassifications and thereby results in prolonged trials. The MTDs of this paradigm are generally high.

Table 3 Results from the online sessions (subject 10 did not participate in Bar feedback sessions due to personal reasons). In Basket feedback, the ball speed is predefined, and the subjects have no influence on the trial time. Therefore the standard deviations of the MTDs for this case are omitted

For the Basket feedback, Table 3 clearly shows that all subjects, except subject 2, have CAs between 80 and 90 %. Subject 2 struggled to control the ball to move to the target side each time. The other subjects experienced the ease of moving the ball in the correct direction. The small standard deviation values indicate that this feedback is more stable than the Cross feedback. However, there are huge inter-subject variations in the MTDs. While subject 7 had a MTD of 7.1 s, subjects 2 and 10 both had MTDs around 19 s. This huge difference is mainly due to different ball speeds. Table 4 lists the average ball speeds for each subject (inter-session and not intra-session ball speed variation). Based on these results, it has been proved that using the basket paradigm, 90 % CA and around 8-s MTD is achievable.
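The exhaustive pass-band search described at the start of this subsection can be sketched as follows. The 4-30 Hz grid, the Butterworth filter and the helper `cross_validated_accuracy` (standing in for the CSP + BLC cross-validation described earlier) are assumptions of ours; the grid shown does not necessarily reproduce the paper's 325 candidate bands.

```python
from itertools import combinations
from scipy.signal import butter, filtfilt

def band_search(trials_l, trials_r, fs, cross_validated_accuracy, lo=4, hi=30):
    """Exhaustive search over candidate pass-bands: filter, fit CSP, extract
    features and score the classifier by cross-validation for each band."""
    best_band, best_acc = None, 0.0
    for f_lo, f_hi in combinations(range(lo, hi + 1), 2):
        b, a = butter(4, [f_lo, f_hi], btype='bandpass', fs=fs)
        Vl = [filtfilt(b, a, V, axis=1) for V in trials_l]
        Vr = [filtfilt(b, a, V, axis=1) for V in trials_r]
        acc = cross_validated_accuracy(Vl, Vr)   # placeholder: CSP + BLC, 3x5-fold CV
        if acc > best_acc:
            best_band, best_acc = (f_lo, f_hi), acc
    return best_band, best_acc
```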
Table 4 Average ball speed for each subject

The Bar feedback system was tested on all online subjects, except subject 10 (who could not participate due to personal reasons). In Table 3, the mean accuracies and trial durations for the Bar feedback are given. Subjects 4, 7, 9, and 12 accomplished the sessions with a mean CA between 77.50 and 100.00 %. Subject 2's mean CA was higher than for the other two feedback systems, but still significantly lower than that of the other subjects. By observing the MTDs, it is found that the times are substantially lower than the trial durations for the two other feedback systems.

Online results analysis

Inter-subject analysis

The online CAs along with the offline CAs for the 6 test subjects are illustrated in Fig. 5. It can be clearly seen that although subject 9 had the highest offline CA, the online results were the worst. Subject 7, on the other hand, had the highest grand average online CA (94.17 %). It is worth noting that only subject 7 improved the CA from offline to online. Subject 2 gave poor results compared with the other five subjects, both in terms of CA and MTD. By studying the metadata of the subjects, we found that subject 2 was the only left-handed subject among the six subjects. However, it is premature to conclude that left-handed people perform worse than right-handed ones from one isolated case. It has been reported that BCI control does not work for 15–30 % of subjects [45], and therefore it could be that subject 2 belongs to this group of subjects.

Comparative illustration of the accuracies obtained in offline, cross, basket and bar for all test-subjects

Inter-feedback analysis

From the results in Table 3, we can see that: (i) although all three feedback systems resulted in more or less similar CAs, the Cross feedback showed significantly large deviations; (ii) the MTDs differ significantly from each other, being 18.31, 12.35 and 8.08 s, respectively, for the Cross, Basket and Bar feedbacks; and (iii) the Cross feedback had the highest standard deviation of the mean trial duration compared to the two other paradigms. These findings indicate that the Cross feedback is not as stable as the other two paradigms. It is, however, possible to refine this paradigm, e.g. by reducing the distance to the goals in order to reduce the MTD and at the same time improve the stability.

The learning effect

The learning effect was investigated by allowing one of the subjects (subject 4) to try the cross feedback on three consecutive weeks (Mondays); the CAs are plotted in Fig. 6. A small but clear increase in the accuracy can be seen on the second and third measurement days. The red linear trend-line also confirms this improvement. Notice that not only was the accuracy improved each time, but the standard deviation was also reduced.

The accuracy progression of subject 4 as a function of time. The blue dots are the mean accuracies on each day, and the red line is the linear trend-line

Trial duration vs. accuracy

Results from the online measurements showed that when the CA was low, reaching the target became a difficult task (e.g. in Cross feedback). Furthermore, the MTD became prolonged, since the wrong classifications had to be corrected. However, the reverse relation is also valid: a prolonged MTD leads to exhausting the user, leading to reduced imagery performance and thus a reduced CA. Therefore, CA and MTD affect each other. In Fig. 7, the mean CAs are plotted against their corresponding MTDs. The trend-line confirms that a longer MTD is related to a lower CA, and vice versa.
Mean accuracies for all subjects plotted against trial duration. The red linear trend-line shows the relation between accuracy and trial duration Feedback improvements During the online measurements with the three feedback systems, few implementation errors were registered, and potential refinements were suggested. A general problem was detected in all three paradigms. At the beginning of each trial (except the first trial), the first few segments from the sliding window contained data from the previous trial due to overlapping. Therefore these 'old' data could affect the first few classifications of the new trial and in the worst case affect the first final classification of each trial (recall the voting process). Another potential improvement concerning all three feedback systems is to illustrate the stepwise moves (of the cross, ball and the bar fill) as continuous moves. Even though this change is only visual, it may minimize confusion, and prevent unconscious step-synchronized body movements. Finally the last common improvement is to instruct the subjects about the feedback paradigms (Cross and Basket: red target, Bars: direction arrow) at least few seconds before each trial. In the current paradigms, it was introduced concurrently with the beginning of the trial. Thus, the subjects spent up to couple of seconds to react on these commands while the segments from the sliding window were classified, possibly wrongly. In the following three subsections, specific improvements and changes of the paradigms are suggested. Improvements of cross Recall that there were ten steps to each target in the Cross feedback. The reason for implementing such a large distance to the targets was to ensure that there was enough space to correct wrong classifications. The analysis of the online measurements showed, that the Cross paradigm resulted in prolonged and varied MTDs. Since all online recordings have been saved, each online session was analyzed in order to investigate the movement behavior of the cross. In other words, the CA and MTD of each session were recalculated after reducing the target distance gradually down to one step. Figure 8 illustrates the mean CAs of each test subject as a function of number of steps to the targets. It shows that the CAs do not change significantly when reducing the target distance by few steps. Accuracy as a function of target distance for all six subjects If we assume, that the maximum tolerable accuracy reduction is 10 %, then three or four steps to the targets would be enough. According to this analysis, if the target distance is reduced to four steps, the corresponding percentage change of the mean accuracies for the test subjects are calculated and is tabulated in Table 5. Note that some of the CAs increase when the target distance is reduced. It may be attributed to the fact that some subjects become exhausted when passing the final steps (due to prolonged imagery). And since fatigue may reduce the imagery performance, this could lead to faulty classifications. Table 5 Analysis of results for target distance equal to four steps The time reduction is also considered in this analysis. Figure 9 illustrates the estimated MTDs for each subject as a function of target distance. A huge time reduction is achieved if the distance is reduced. For instance, if the distance was four steps instead of ten, the averaged MTD would be 5.42 s. It is worth noting, that the time reduction curves in Fig. 9 were exponentially decreasing with decreased target distance. 
This finding confirms the fact that large distance to the targets leads to prolonged imagery, which in turn leads to fatigue and thus bad performance. Table 5 summarizes the analytical results for target distance equal to four steps. The trial duration as a function of target distance Improvements of basket Since Basket feedback has limited MTD, reducing the target distance (number of steps to the side borders in this case) will have less effect. However, the large number of online measurements using Basket indicates that a distance of 20 steps is too long. Therefore, a distance reduction may result in a reduction in MTD. Improvements of bars Some subjects experienced, that the command arrow was thin and unclear. Another problem occurred, when the filling difference between the left and right bars were small. For instance, if both left and right bars were filled up with nine steps at the end of the trial, then the last step decides the class of the trial. Because the figure clears when the decision is made, the user will not be able to detect the decision. This problem can be solved by viewing the decision in the beginning of the 5-second pause. EMG contribution in online BCI Besides visually inspecting the subjects during the measurements, the EMG recorded during the offline measurements were used to ensure that the ERDs of the SMR were due to movement imagination rather than due to real muscle movements. The spectrogram of the EMG data did not show a significant power change that could indicate that the subject was making a real hand flexion. It was found that the EMG analysis was in accordance with the visual inspection, which showed that EMG activity was negligible for all subjects. During the online measurements, EMG was not measured but was only visually inspected. However, real and imaginary movements do not result in same EEG spatial patterns. Therefore during online BCI, a CSP filter that is calculated from offline data with minimal EMG activity will not result in optimal feature extraction if real movements were performed. Consequently, real movements may probably lead to bad classification. This hypothesis was tested during few online sessions: the subject was told to perform real hand movements instead of imaginary movements. Many of the resulted classifications were incorrect, and the feedback showed a more or less random path of the cross. This paper has focused on the challenges of developing a real-time BCI system using the desynchronization phenomenon of the SMR. The first part was to conduct offline calibration measurements to determine the optimal subject-specific parameters to use in the online part. Offline data were processed using CSP to extract the relevant features. BLC was trained using the labeled features from each data. Twelve test-subjects participated in the offline measurements and six of them qualified to participate in the online measurements. Three online feedback paradigms were designed and used (cross, basket, and bars) in this work. While all three paradigms resulted in similar CAs, the results of cross indicated instability. This was reflected by prolonged trial time, large standard deviation of the trial times, and the large deviation of the CAs. The overall online CA was 80 %. It was found, by studying possible improvements of cross, that reducing the target distance from ten steps to four steps resulted in 70 % reduction of MTD. This improvement will only reduce the CA by 2 %. Dornhege G, Millan J, Hinterberger T, McFarland DJ, Müller KR. 
Toward brain-computer interfacing. Cambridge: MIT Press; 2007. Wolpaw JR, Birbaumer N, Heetderks WJ, McFarland DJ, Peckham PH, Schalk G, et al.Brain-computer interface technology: a review of the first international meeting. IEEE Trans Neural Syst Rehabil Eng. 2000; 8(2):164–73. Birbaumer N, Ghanayim N, Hinterberger T, Iversen I, Kotchoubey B, Kübler A, et al.A spelling device for the paralysed. Nature. 1999; 398:297–8. Kübler A, Birbaumer N. Brain-computer interfaces and communication in paralysis: extinction of goal directed thinking in completely paralysed patients?Clin Neurophysiol. 2008; 119:2658–66. Kübler A. Brain-computer interfacing: science fiction has come true. Brain. 2013; 136:2001–4. Sellers EW, Donchin E. A P300-based brain-computer interface: initial tests by ALS patients.Clin Neurophysiol. 2006; 117:538–48. Wolpaw JR, Birbaumer N, Pfurtscheller G, Mcfarland DJ, Vaughan TM. Brain-computer interfaces for communication and control. Clin Neurophysiol. 2002; 6:767–91. Del R Millan JJ, Galan F, Vanhooydonck D, Lew E, Philips J, Nuttin M. Asynchronous non-invasive brain-actuated control of an intelligent wheelchair. In: Conf. Proc. of the 31st IEEE Eng. Med. Biol. Soc. USA. Minnesota: Hilton Minneapolis: 2009. p. 3361–4. Mohebbi A, Engelsholm SK, Puthusserypady S, Kjaer TW, Thomsen CE, Sorensen HBD. A brain computer interface for robust wheelchair control application based on pseudorandom code modulated visual evoked potential. In: Proc. of the 37th Intl. Conf. of the IEEE Eng. Med. Biol. Soc. Milan, Italy: 2015. Ali A, Puthusserypady S. A 3D learning playground for potential attention training in ADHD: A brain computer interface approach. In: Proc. of the 37th Intl. Conf. of the IEEE Eng. Med. Biol. Soc. Milan, Italy: 2015. Daly JJ, Wolpaw JR. Brain-computer interfaces in neurological rehabilitation. Lancet Neurol. 2008; 7:1032–43. Blankertz B, Tangermann M, Vidaurre C, Fazli S, Sannelli C, Haufe S, et al.The Berlin brain-computer interface: non-medical uses of BCI technology. Front Neuroscience. 2010; 4(Article 198):1–17. Müller KR, Tangermann M, Dornhege G, Krauledat M, Curio G, Blankertz B. Machine learning for real-time single-trial EEG-analysis: from brain-computer interfacing to mental state monitoring. Jl Neurosci Methods. 2007; 167(1):82–90. Serby H, Yom-Tov E, Inbar GF. An improved P300-based brain-computer interface. IEEE Trans Neural Syst Rehabil Eng. 2005; 13(1):89–98. Lugo ZR, Rodriguez J, Lechner A, Ortner R, Gantner IS, Laureys S, et al.A vibrotactile p300-based brain-computer interface for consciousness detection and communication. Clin EEG Neurosci. 2014; 45(1):14–21. Piccione F, Giorgi F, Tonin P, Priftis K, Giove S, Silvoni S, et al.P300-based brain computer interface: reliability and performance in healthy and paralysed participants. Clin Neurophysiol. 2006; 117(3):531–7. Farwell LA, Donchin E. Talking off the top of your head: toward a mental prosthesis utilizing event-related brain potentials. Electroenceph Clin Neurophysiol. 1988; 70(6):510–23. Bayliss JD. Use of the evoked potential P3 component for control in a virtual apartment. IEEE Trans Neural Syst Rehabil Eng. 2003; 11(2):113–6. Vilic A, Kjaer TW, Thomsen CE, Puthusserypady S, Sorensen HBD. DTU BCI speller: an SSVEP-based spelling system with dictionary support. In: Proc. of the 35th IEEE Eng. Med. Biol. Soc. Osaka, Japan: 2013. p. 2212–5. Ortner R, Allison B, Korisek G, Gaggl H, Pfurtscheller G. An SSVEP BCI to control a hand orthosis for persons with Tetraplegia. 
IEEE Trans Neural Syst Rehabil Eng. 2011; 19(1):1–5. Leow R, Ibrahim F, Moghavvemi M. Development of a steady state visual evoked potential (SSVEP)-based brain computer interface (BCI) system. In: Intl. Conf. on Intell. and Adv. Syst. Kuala Lumpur: 2007. p. 321–4. Liavas AP, Moustakides GV, Henning G, Psarakis EZ, Husar P. A periodogram-based method for the detection of steady-state visually evoked potentials. IEEE Trans Biomed Eng. 1998; 45(2):242–8. Cheng M, Gao X, Gao S, Xu D. Design and implementation of a brain-computer interface with high transfer rates. IEEE Trans Biomed Eng. 2002; 49(10):1181–6. Wolpaw JR, McFarland DJ. Control of a two-dimensional movement signal by a noninvasive brain-computer interface in humans. Proc Nat Acad Sci. 2004; 17:849–54. McFarland DJ, Wolpaw JR. Sensorimotor rhythm-based braincomputer interface (BCI): feature selection by regression improves performance. IEEE Trans Neural Syst Rehabil Eng. 2005; 13(3):372–9. Yue J, Zhou Z, Jiang J, Liu Y, Hu D. Balancing a simulated inverted pendulum through motor imagery: an EEG-based real-time control paradigm. Neurosci Lett. 2012; 524(2):95–100. Friedricha E, Schererb R, Neuper C. Long-term evaluation of a 4-class imagery-based brain-computer interface. Clin. Neurophysiol. 2013; 124(5):916–27. Hazrati M, Erfanian A. An online EEG-based brain-computer interface for controlling hand grasp using an adaptive probabilistic neural network. Med Eng Phys. 2010; 32(7):730–9. Faller J, Vidaurre C, Solis-Escalante T, Neuper C, Scherer R. Auto-calibration and recurrent adaptation: towards a plug and play online ERD-BCI. IEEE Trans Neural Syst Rehabil Eng. 2012; 20(3):313–9. Iversen IH, Ghanayim N, Kübler A, Neumann N, Birbaumer N, Kaiser J. A brain-computer interface tool to assess cognitive functions in completely paralyzed patients with amyotrophic lateral sclerosis. Clin Neurophysiol. 2008; 119(10):2214–23. Decety J. The neurophysiological basis of motor imagery. Behav Brain Res. 1996; 77(1-2):45–52. Pfurtscheller G, Lopes DSF. Event-related EEG/MEG synchronization and desynchronization: basic principles. Clin Neurophysiol. 1999; 110(11):1842–1457. Royer A, He B. Goal selection versus process control in a brain-computer interface based on sensorimotor rhythms. Jl Neural Eng. 2009;6(1). doi:10.1088/1741--2560/6/1/016005. Blankertz B, Losch F, Krauledat M, Dornhege G, Curio G Müller KR. The Berlin brain-computer interface: accurate performance from first-session in BCI-Naïve Subjects. IEEE Trans on Biomed Eng. 2008; 55(10):2452–62. Vuckovic A, Sepulveda F. Quantification and visualisation of differences between two motor tasks based on energy density maps for brain-computer interface applications. Clin Neurophysiol. 2008; 119(2):446–58. Kübler A, Müller KR. An introduction to brain-computer interfacing. Cambridge: MIT Press, p. 2007. El-Madani A. Introduction to brain computer interface. Study report. Denmark: Dep. Elec. Eng., DTU, Lyngby;2009. Oldfield RC. The assessment and analysis of handedness: the Edinburgh inventory. Neuropsychologia. 1971; 9(1):97–113. Müller-Gerking J, Pfurtscheller G, Flyvbjerg H. Designing optimal spatial filters for single-trial EEG classification in a movement task. Clin Neurophysiol. 1999; 110(5):787–98. Blankertz B, Tomioka R, Lemm S, Kawanabe M, Müller KR. Optimizing spatial filters for robust EEG single-trial analysis. IEEE Sig Process Mag. 2008; 25(1):41–56. Fukunaga K. Random vectors and their properties. In: Introduction to statistical pattern recognition. 
New York: Morgan Kaufmann Publishers: 1990. p. 10–35. Pedro D, Michael P. On the optimality of the simple Bayesian classifier under zero-one loss. Mach Learn. 1997; 29:103–30. Jonsson M. Brain computer interface. MSc. Thesis. Denmark: Dep. Elec. Eng., DTU; 2008. Blumberg J, Rickert J, Waldert S, Schulze-Bonhage A, Aertsen A, Mehring C. 2007. Adaptive classification for brain computer interfaces, Vol. 1. France: Med. Biol. Soc. Conf. Lyon. Dickhaus T, Sannelli C, Müller KR, Curio G, Blankertz B. Predicting BCI performance to study BCI illiteracy. Bio. Med. Centr. Neuroscience. 2009; 10(Suppl 1):84. doi:10.1186/1471-2202-10-S1-P84. Vidaurre C, Blankertz B. Towards a cure for BCI illiteracy. Brain Topogr. 2010; 23(2):194–8.

Department of Electrical Engineering, Technical University of Denmark, Lyngby, 2800 Kgs., Denmark

Ahmad El-Madani, Helge BD Sorensen, Troels W. Kjær, Carsten E. Thomsen & Sadasivan Puthusserypady

Correspondence to Sadasivan Puthusserypady.

AE carried out all the simulation studies and helped with preparing the manuscript. All the remaining authors contributed to the preparation of the manuscript and contributed equally to the work. All authors read and approved the final manuscript.

This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.

El-Madani, A., Sorensen, H.B., Kjær, T.W. et al. Real-time brain computer interface using imaginary movements. EPJ Nonlinear Biomed Phys 3, 9 (2015) doi:10.1140/epjnbp/s40366-015-0024-2

Brain computer interfaces (BCI) Movement imagery (MI) Event-related desynchronization (ERD) Bayes linear classifier (BLC)
LaTeX code for Papers

\documentclass[a4paper,11pt]{article}
\usepackage{ulem}
\usepackage{a4wide}
\usepackage[dvipsnames,svgnames]{xcolor}
\usepackage[pdftex]{graphicx}
% commands generated by html2latex
\begin{document}
\hypertarget{Congress_2013}{}
\section{Congress 2013}
On June 5, 2013 the History and Archives group will be presenting a paper at Congress in Victoria, Canada: ``Click, Whir, Zing, ZOT! You've Got a Date!: The Early Use of Computers on a University Campus''\hypertarget{Abstract}{}
\subsubsection{\textit{Abstract}}
How were computers discussed by a university public? Computers made their way onto the campuses of large post-secondary institutions beginning in the late 1940s1. This paper looks at how computers and computing were discussed through the student and university newspapers of the University of Alberta. A combination of content analysis and close reading methods teased out three themes that will be discussed in the paper: \\1. How computers became research and funding priorities in the University. How a breadth of faculties including Arts and Education were experimenting with computing early on. \\2. How the administrative use of computers contributed to growing anxieties on campus, specifically about privacy and data collection. \\3. How the computer was used for entertainment purposes like dating and match-making. \\Though the University of Alberta had a Computing Centre beginning in 1957, the first mention of the word ``computer'' in the Gateway student newspaper appears in a 1959 article describing ongoing campus construction projects, one of which includes a designated space for the University's new digital computer, also referred to as the ``electronic brain''2. Originally used for research purposes in math, science, and engineering, the computer quickly became a useful tool in other faculties including Arts, Education, and Agriculture. By 1965 the University of Alberta was on its third computer, the IBM 7040, and it was being used around the clock. [Image: Users of Computing Facility for Research] In addition to research, the computer was used for a number of administrative purposes on campus as well. From the assembly of the telephone directory4 to accounting services5, computers assisted with day-to-day activities, though not always with the appreciation of staff and students. In the late 1960s anxieties over the use of computers in society and the potential threat to privacy grew, as catchphrases like ``Do not Fold, Spindle or Mutilate'' indicate.6 At Sir George Williams University in Montreal, students protesting discriminatory treatment by a faculty member occupied and then set fire to the University's Computer Centre, causing \$2 million in damage7. The protestors' choice to take over this building is symbolic of the importance of the computer for universities at this time, as well as students' distrust and contempt for these machines. Nevertheless, in the student newspaper a number of articles at the same time were dedicated to computer dating and matchmaking. Regardless of the apparent suspicion of computers, university students always found time for love. The project team's ultimate goal is to understand public perceptions of computing (and humanities computing) in Canada, but to do so we must first understand the history of the discourse around computing on campus. \\Bibliography \\Lubar, Steven. ``'Do Not Fold, Spindle or Mutilate': A Cultural History of the Punch Card.''
Journal of American Culture, Vol 15, issue 4 (Winter 1992), pages 43-55. DOI: 10.1111/j.1542-734X.1992.1504\_43.x \\Oke, David. ``Quarter-million deficit for SU.'' Gateway, October 5, 1976, Page 3. \\Scott, D.B. ``The Computer Centre.'' Folio, December 15, 1965, Page 1-2. \\Unknown. ``\$2 million damage at Sir George Williams as frustrated students burn, smash comp centre.'' Gateway, February 13, 1969, Page 3. \\Unknown. ``New Telephone Directory.'' Gateway, October 25, 1963, Page 4. \\Unknown. ``Three Modern Buildings with Better Facilities Nearing Completion.'' Gateway, November 6, 1959, Page 9. \\\hypertarget{Paper}{}
\subsubsection{\textit{Paper}}
In the early years of computing, how were computers discussed by a university public? A 1966 article in the University of Alberta student newspaper \textit{The Gateway} begins with: \begin{description}``Attention, love-starved students. Tired of sitting home on Friday nights reading `Gulliver's Travels'? Bored with lonely carrels in Cameron Library? Throw away your books, your solitude, and your inhibitions. Cupid Computer, the scientific approach to dating is presently being introduced at the U of A''. \end{description} After filling out an 80-question survey that was then run through a computer to determine compatibility, Cupid Computer participants were promised the chance to find ``ideal'' dates. Some questions posed relate to: sex, race, religion, smoking and drinking behaviour, for males whether or not they have access to a car and for females whether or not this matters, how attractive one's friends find them, how political one is, one's position on skinny dipping, and whether or not one would try LSD. Originating out of London, Ontario, from the company Computronics, the program spread to campuses across Canada, brought to campus by university students themselves, and was hailed as ``Canada's foremost IBM Dating Service'' (Coryphaeus, 1966-67, v7, no09, pg 3-4). Computers made their way onto the campuses of large post-secondary institutions beginning in the late 1940s (Scott, ``The Computing Centre'') and, being the first to deal with the coming of the computer, campuses offer a unique perspective. By looking at these early discussions of computers, by way of three separate publications from the same campus, we can tease out an early social history of the arrival of computers in Canadian society through the lens of post-secondary education. This study looks at early references to computers in three University of Alberta publications: \textit{The Gateway}, \textit{New Trail}, and \textit{Folio}. \textit{The Gateway} is the newspaper of the Students' Union and has been in circulation since 1910. \textit{New Trail} is the publication of the University of Alberta Alumni Association. It began in 1920 under the name \textit{The Trail}, eventually becoming \textit{New Trail} in 1942. In 1964 it was absorbed by \textit{Folio}, the newspaper of the Public Relations Office that is also known as the ``University of Alberta Staff Bulletin''. The two publications remained a joint venture until 1982 when New Trail split off to again serve as the Alumni Association publication. All three of these publications continue to exist today and for our purposes here provide a unique look at the way computers were introduced on a university campus: from the student perspective, on behalf of Alumni, and from the University of Alberta itself.
\begin{itemize} \item Describe our methodology; \item Present an historical account of the arrival of computers on the University of Alberta campus; \item Identify the themes that appear, such as research and funding priorities, growing anxiety around privacy and data collection, rebellion on campus, and the playful attitude of students as seen in the discussions around computer dating and matchmaking. \end{itemize} \\\textit{Methodology} \\Our methodology was a combination of content analysis and close reading. Peel's Prairie Provinces is a resource of the University of Alberta Libraries; an online bibliography of books, newspaper issues, and other materials related to the development of the Prairies, as well as a searchable full-text collection of many of these items. Using this website we searched \textit{The Gateway}, \textit{New Trail} and \textit{Folio} for all references to computer and downloaded for reading and coding all the relevant articles. The content analysis rubric was developed iteratively and coded for things such as: \begin{itemize} \item Title, Author and Year; \item Type of reference (i.e. news, classified, advertisement or opinion piece); \item The presence of photos or illustrations; \item Category of application (Science, Commerce, Industry, Government, the Arts and Humanities, the Library, or Education); \item Gender of named people; \item Discourse features such as the computer being described as a `brain'; \item Departments or faculties mentioned; \item Hype or anxiety present in the reference; and \item Types of computers mentioned where applicable. \end{itemize} \\The content analysis produced both a history and a set of themes. \\\textit{History} \\The first reference to a digital computer comes in 1957 from \textit{New Trail}. The article ``Electronic Brain Aids University Research'' describes a direct-line teletype communication system with FERUT, a high-speed digital electronic computer from the University of Toronto's computation centre; according to the article, ``Ambitious problems in the fields of physics, mathematics, engineering, statistics, etc., can now be solved on the campus in a matter of minutes''. (\textit{New Trail}, Vol. 15, No. 1) \\Shortly following the success of this teletype link with FERUT in 1957, the University of Alberta became the third university in Canada with a computing facility, preceded by the University of Toronto in 1948 and the University of British Columbia also in 1957. The Royal McBee LGP 30 was primarily used for numerical calculations and after its arrival was quickly being used around the clock. Soon the LGP 30 was replaced with an IBM 1620 and an IBM 7040 after that. The improvement in the speed of calculations was measured in terms of electric desk calculators and human operators; the calculation speed of the LGP 30 was roughly equivalent to 500 human operators. With the purchase of the IBM 1620 in 1961 this speed increased 20-fold, and then another 60-fold in 1964 with the addition of the IBM 7040. The increased speed combined with the accuracy of calculations by the digital computer meant that research that was ``previously unthinkable'' could be done. (Scott, Folio, ``The Computer Centre Part One,'' Vol 2, No. 8, 1965)\href{/index.php/File:RoyalMcBee.jpg}{ \includegraphics{/AnnokiUploadAuth.php/thumb/9/91/RoyalMcBee.jpg/200px-RoyalMcBee.jpg}} \\\textit{Themes} \\\textbf{Research and Funding} \\By the time of the arrival of the IBM 7040 on campus, the Computer Centre was being used by over 30 different departments from nine different faculties.
Though the highest rate of use was from departments located in the Faculty of Science, researchers also included members of the Arts and Education communities.

[Image: Departments.png]

Though we know that each of these faculties was making use of the computer for their research, the discussions around the type of research are mostly confined to the use of computers in medical research. Two articles from 1967, in \textit{Folio} ("Computer to analyze electrocardiograms", \textit{Folio}, 1967) and \textit{New Trail} ("Electronic heart watching", \textit{New Trail}, 1967) respectively, discuss a three-year study in which computers are used to interpret the electrocardiograms of healthy patients. One 1970 article discusses the use of computers in nuclear medicine to measure blood flow through a patient's heart ("Immunology and Transplantation", \textit{New Trail}, February 1970) and another describes the "great capacity and power" of the central computer at the U of A ("Computers in Medicine", \textit{New Trail}, February 1970), which is being used for health service utilization, assessing treatments, and as a scientific investigatory tool, studying cells and simulating body functions. In addition to the use of the central computer, the Faculty of Medicine at this time also has several smaller computers of its own for local experiments. There is only limited mention of computers used in Arts research. The first mention is in a 1963 \textit{Gateway} article announcing a talk by an Associate Professor of Computer Science titled "Application of Computers to Behavioural Science Research". (Unknown, 1963, Sociology Club)

The introduction of computers to the university public is presented through the lens of medical research.

\textbf{Antagonism}

One of the most powerful themes we found coming out of the student newspaper, the \textit{Gateway}, is the antagonistic attitude of the paper's contributors. The 1966 article "the computer wins again" describes examination scheduling done by computer: "The examination schedule in use this year was efficient because more than 35,000 examination papers were written in slightly more than one week. But an electric chair is efficient too." The author goes on to elaborate on the difficulties multiple exams create for students but posits the need to "beat the computer" rather than prepare for the exam. The author finishes with two suggestions: either holding exams over a two-week period or an option that s/he considers impossible: "build a small touch of mercy into the stainless steel soul of a certain university computer". (the computer wins again, 17 January 1966, Gateway) Another 1966 article describes semi-computerized registration at the U of A, and the author states: "Since we are doomed to become slaves of bureaucracy and the computer anyway, we might just as well go whole hog and have the machine work out the gory details which are just messed up by the human elements involved" (unknown, all hail the machine, September 28, 1966, 4, Gateway). Similar in sentiment is another 1966 article describing the author's take on the university system in general: overpopulation, impersonal computer programming, lack of a well-balanced education, and depersonalization caused by the computer. He finishes with the statement "Where all this leads us, I won't attempt to answer. Perhaps the ultimate cynic would look forward to the day when cybernetics will be able to replace us all."
(Walker, a way of life, 26 January 1966, Gateway)

Computer job placement programs also appear around this time, similar to the aforementioned computer dating services. BIB - Biographical Inventory Blank - is a questionnaire designed to determine which areas of work a student is best suited for after it is fed through a computer. Later \textit{Gateway} pieces continue in the same antagonistic vein:
\begin{itemize}
\item Jackson, 1972, point - Re: dehumanizing nature of the U of A's registration procedure and pre-registration report. The author states the U of A is not a pioneer in computer programming.
\item Das, 1974, 1984? - Describes technological change, good and potentially bad: crime surveillance - police state; artificial insemination - motherhood in a test tube; computer dating (ideal mate) - gimmicky arranged marriages.
\item Fisk, 1975, The evils of university "education" - Describes the "evil tendencies" of university education in Canada: anti-personal (computerized registration, payment), moral failures.
\item Krause, 1979, Lock up your data - Security concerns re: the university computer system. Issues regarding legal protection against computer theft.
\item Parker, 1979, New feudalism dawning - Speculation on the evolution of society into a feudal state. Predicts education will decrease while technology increases.
\end{itemize}

\textbf{Rebellion}

One of the most interesting themes we see develop is the lack of discussion of campus rebellion in both \textit{Folio} and \textit{New Trail}. The 1960s were a time of major social upheaval, and 1968 is widely remembered as a year of protest worldwide, with a number of movements supported by college and university students and in many cases actually occurring on campuses. At Sir George Williams University (SGW) in Montreal the Computer Centre became a focal point for exactly this type of student rebellion. In 1969 disgruntled students who charged a professor of biology with racism led a protest that ultimately took over the university's computer centre. It was reported in \textit{The Gateway} that over 300 students held the centre for over five days before the centre was set on fire and destroyed, incurring \$2 million in damages. Other articles in \textit{The Gateway} go on to describe the aftermath of the burning of the computer centre; in addition to the arrest and prosecution of many of those involved, the response of university administrators, including the arrival of codes of student behaviour on campuses and, on Ontario campuses, the introduction of a working paper 'Order on Campus', is detailed. Though \textit{New Trail} and \textit{Folio} are not necessarily focused on external campus events, it is noteworthy that there is no mention of the University of Alberta administrators' response in reference to the SGW incident.

\textit{Computer Dating}

We end with computer dating, a theme that combines a set of anxieties with the playful attitude of students (the dating services were brought to campus by students themselves). The arrival of Cupid Computer on campus in 1966 was not the first attempt at match-making at the U of A. The year prior it was excitedly reported that a computer would be used to select dates for the Wauneita dance hosted annually by a women's fraternity. Sadly the necessary instructions did not arrive on time and "computerized romance proved to be a failure"
(Unknown, 1965, Computerized romance proves to be a failure)

IBM keeps it in the family \href{http://peel.library.ualberta.ca/newspapers/GAT/1966/12/09/16/Ar01600.html}{[1]}
New results on controllability of fractional evolution systems with order $ \alpha\in (1,2) $

Yong Zhou (1,2) and Jia Wei He (1)

1. Faculty of Mathematics and Computational Science, Xiangtan University, Hunan 411105, China
2. Faculty of Information Technology, Macau University of Science and Technology, Macau 999078, China

* Corresponding author: Yong Zhou

Received April 2020; Revised April 2020; Published September 2021; Early access June 2020

Evolution Equations & Control Theory, September 2021, 10(3): 491-509. doi: 10.3934/eect.2020077

This paper addresses some interesting results of mild solutions to fractional evolution systems with order $ \alpha\in (1,2) $ in Banach spaces as well as the controllability problem. Firstly, we deduce a new representation of solution operators and give a new concept of mild solutions for the objective equations by the Laplace transform and Mainardi's Wright-type function, and then we proceed to establish a new compact result of the solution operators when the sine family is compact. Secondly, the controllability results of mild solutions are obtained. Finally, an example is presented to illustrate the main results.

Keywords: Fractional derivative, controllability, mild solutions, Mainardi's Wright-type function.

Mathematics Subject Classification: 26A33, 34A08, 34K35, 35R11.

Citation: Yong Zhou, Jia Wei He. New results on controllability of fractional evolution systems with order $ \alpha\in (1,2) $. Evolution Equations & Control Theory, 2021, 10 (3): 491-509. doi: 10.3934/eect.2020077
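For context on the terminology in the abstract, the display below recalls the usual series definition of Mainardi's Wright-type function. This is standard background rather than a formula taken from the paper, and the particular index the authors use for order $\alpha\in(1,2)$ (often $\alpha/2$) may differ from the generic $\beta$ written here:
$$M_\beta(z) \;=\; \sum_{n=0}^{\infty} \frac{(-z)^n}{n!\,\Gamma\bigl(-\beta n + 1 - \beta\bigr)}, \qquad 0 < \beta < 1.$$
It is a particular case of the Wright function and arises when the solution operators of fractional evolution equations are represented through the Laplace transform.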
Graduate Texts in Mathematics

Smooth Manifolds and Observables

Authors: Nestruev, Jet

- Motivates the algebraic study of smooth manifolds by looking at them from the point of view of physics, in particular by using the observability principle
- Bridges between ideas from physics, geometry, and algebra, offering new perspectives to students and researchers in each field
- Incorporates a wide array of exercises and detailed illustrations, allowing readers to better familiarize themselves with the material
- Includes ten new chapters in the second edition, as well as additional exercises, examples, and illustrations

This textbook demonstrates how differential calculus, smooth manifolds, and commutative algebra constitute a unified whole, despite having arisen at different times and under different circumstances. Motivating this synthesis is the mathematical formalization of the process of observation from classical physics. A broad audience will appreciate this unique approach for the insight it gives into the underlying connections between geometry, physics, and commutative algebra.

The main objective of this book is to explain how differential calculus is a natural part of commutative algebra. This is achieved by studying the corresponding algebras of smooth functions that result in a general construction of the differential calculus on various categories of modules over the given commutative algebra. It is shown in detail that the ordinary differential calculus and differential geometry on smooth manifolds turns out to be precisely the particular case that corresponds to the category of geometric modules over smooth algebras. This approach opens the way to numerous applications, ranging from delicate questions of algebraic geometry to the theory of elementary particles.

Smooth Manifolds and Observables is intended for advanced undergraduates, graduate students, and researchers in mathematics and physics. This second edition adds ten new chapters to further develop the notion of differential calculus over commutative algebras, showing it to be a generalization of the differential calculus on smooth manifolds. Applications to diverse areas, such as symplectic manifolds, de Rham cohomology, and Poisson brackets are explored. Additional examples of the basic functors of the theory are presented alongside numerous new exercises, providing readers with many more opportunities to practice these concepts.

Jet Nestruev is a collective of authors, who originally convened for a seminar run by Alexandre Vinogradov at the Mechanics and Mathematics Department of Moscow State University in 1969.
In the present edition, Jet Nestruev consists of Alexander Astashov (Senior Researcher at the State Research Institute of Aviation Systems), Alexandre Vinogradov (Professor of Mathematics at Salerno University), Mikhail Vinogradov (Diffiety Institute), and Alexey Sossinsky (Professor at the Independent University of Moscow).

Table of contents (21 chapters): Cutoff and Other Special Smooth Functions on $\mathbb{R}^n$; Algebras and Points; Smooth Manifolds (Algebraic Definition); Charts and Atlases; Smooth Maps; Equivalence of Coordinate and Algebraic Definitions; Points, Spectra and Ghosts; Differential Calculus as Part of Commutative Algebra; Symbols and the Hamiltonian Formalism; Smooth Bundles; Vector Bundles and Projective Modules; Differential 1-forms and Jets; Functors of the Differential Calculus and their Representations; Cosymbols, Tensors, and Smoothness; Spencer Complexes and Differential Forms; The (Co)Chain Complexes Coming from the Spencer Sequence; Differential Forms: Classical and Algebraic Approach; Cohomology; Differential Operators over Graded Algebras.

Springer Nature Switzerland AG. XVIII, 433 pages, 88 b/w illustrations.
TMF, 2012, Volume 172, Number 1, Pages 40–63 (Mi tmf6903)

Classical double, $R$-operators, and negative flows of integrable hierarchies

B. A. Dubrovin (a,b), T. V. Skrypnik (a,c,d)

a Lomonosov Moscow State University, Moscow, Russia
b International School for Advanced Studies, Trieste, Italy
c Bogolyubov Institute for Theoretical Physics, Kiev, Ukraine
d Universita di Milano Bicocca, Milan, Italy

Abstract: Using the classical double $\mathcal G$ of a Lie algebra $\mathfrak g$ equipped with the classical $R$-operator, we define two sets of functions commuting with respect to the initial Lie–Poisson bracket on $\mathfrak g^*$ and its extensions. We consider examples of Lie algebras $\mathfrak g$ with the "Adler–Kostant–Symes" $R$-operators and the two corresponding sets of mutually commuting functions in detail. Using the constructed commutative Hamiltonian flows on different extensions of $\mathfrak g$, we obtain zero-curvature equations with $\mathfrak g$-valued $U$–$V$ pairs. The so-called negative flows of soliton hierarchies are among such equations. We illustrate the proposed approach with examples of two-dimensional Abelian and non-Abelian Toda field equations.

Keywords: classical $R$-operator, integrable hierarchy

DOI: https://doi.org/10.4213/tmf6903

Received: 28.04.2011; Revised: 13.11.2011

Citation: B. A. Dubrovin, T. V. Skrypnik, "Classical double, $R$-operators, and negative flows of integrable hierarchies", TMF, 172:1 (2012), 40–63; Theoret. and Math. Phys., 172:1 (2012), 911–931

This publication is cited in:
Dobrogowska A., "R-Matrix, Lax Pair, and Multiparameter Decompositions of Lie Algebras", J. Math. Phys., 56:11 (2015), 113508
Skrypnyk T., "Reduction in Soliton Hierarchies and Special Points of Classical R-Matrices", J. Geom. Phys., 130 (2018), 260–287
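For readers unfamiliar with the notation in the abstract, the following recalls the standard definition of an $R$-bracket and the Adler–Kostant–Symes example it refers to; this is textbook background rather than material taken from the paper, and the normalization may differ from the authors' conventions. Given a linear operator $R:\mathfrak g\to\mathfrak g$ satisfying the (modified) classical Yang–Baxter equation, the formula
$$[X,Y]_R \;=\; [RX,\,Y] + [X,\,RY]$$
defines a second Lie bracket on $\mathfrak g$ and hence a second Lie–Poisson bracket on $\mathfrak g^*$. In the Adler–Kostant–Symes case, $\mathfrak g = \mathfrak g_+ \oplus \mathfrak g_-$ is a vector-space direct sum of two subalgebras with projections $P_\pm$, and $R = \tfrac12\,(P_+ - P_-)$; Casimir functions of the original bracket then Poisson-commute with respect to the $R$-bracket, which is one standard source of the commuting families mentioned in the abstract.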
A bioinspired analogous nerve towards artificial intelligence

Xinqin Liao, Weitao Song, Xiangyu Zhang, Chaoqun Yan, Tianliang Li, Hongliang Ren, Cunzhi Liu, Yongtian Wang & Yuanjin Zheng

Nature Communications, volume 11, Article number: 268 (2020). Subjects: Sensors and biosensors.

A bionic artificial device commonly integrates various distributed functional units to mimic the functions of a biological sensory neural system, bringing intricate interconnections, a complicated structure, and interference in signal transmission. Here we show an all-in-one bionic artificial nerve based on a separate electrical double-layers structure that integrates the functions of perception, recognition, and transmission. The bionic artificial nerve features flexibility, rapid response (<21 ms), high robustness, excellent durability (>10,000 tests), personalized cutability, and no energy consumption when no mechanical stimulation is being applied. The response signals are highly regionally differentiated for the mechanical stimulations, which enables the bionic artificial nerve to mimic the spatiotemporally dynamic logic of a biological neural network. Multifunctional touch interactions demonstrate the enormous potential of the bionic artificial nerve for human-machine hybrid perceptual enhancement. By incorporating the spatiotemporal resolution function and algorithmic analysis, we hope that bionic artificial nerves will promote further development of sophisticated neuroprosthetics and intelligent robotics.

Sensory neurons within the skin provide an easy and intuitive interface to convert external stimulation into a physiological signal for touch recognition and brain learning1. Crucially, the development of artificial sensory neural systems helps to restore the touch perception of disabled persons2,3,4,5, construct a hybrid bioelectronic reflex arc to actuate muscles6, and build interactive feedback for robotic manipulation7,8. Notable advances have been made in bionic sensors9,10,11,12,13,14, signal cables15,16,17, and synaptic transistors18,19,20,21,22, which are the essential and main functional elements of the artificial sensory neural system. Alternatively, a highly sensitive tactile sensor based on a giant magneto-impedance material with a rational structural design was functionalized in a low-pressure regime23. To mimic the signal transport function of axons, an ionic cable was proposed to transmit signals at high speed over a long distance24. For building a neuromorphic system, a laterally coupled indium-zinc-oxide-based artificial synaptic transistor was successfully fabricated to demonstrate spatiotemporal signal processing25. Although great achievements have been made, most reported bionic sensory devices were integrated with discrete elements, which require intricate interconnections and a complicated structure, and may induce interference in signal transmission. Furthermore, a bionic sensory device should consume little or no energy, so as to work like a biological synapse when there is no touch26. Pursuing the softness that endows a bionic sensory device with mechanical compliance like that of a natural nerve still faces challenges. Here we propose an all-in-one bionic sensory transmission nerve to achieve the above goals. To demonstrate this concept, we fabricated an artificial perception and transmission nerve (APT nerve).
The APT nerve was designed by adopting a separate electrical double-layers structure. Our design principle was based on electric contact theory27, which was the key to detect external mechanical stimulation such as finger or object touch. Based on the proposed structure, the APT nerve featured no energy consumption except that mechanical stimulation occurs. The biological nerve fibers feature a long strip morphology and are flexible in biological tissue28. Inspired by the biological morphology, the architecture of the APT nerve was initially designed as a long strip based on the separate electrical double-layers structure. Due to the universality of the design principle, in addition to the linear strip shape, the architecture of the APT nerve could be designed into a wide variety of prototype devices as needed, including L-shaped, S-shaped and square devices. This architecture enabled the APT nerve to further recognize the location of external mechanical stimulation without requiring the integration of multiple sensing units. Moreover, the extended strip-shaped architecture realized the APT nerve integrating the functions of mechano-perception and signal transmission. In order to achieve the characteristic of flexibility, a fiber-based paper was selected as the substrate to fabricate the APT nerve. Previous studies showed that a conductive film, which consisted of plentiful graphite slices, could be formed by pencil drawing on the paper-based substrate29. Overlaps or cracks were generated among the graphite slices by bending the paper-based substrate inward or outward so that the resistance of the conductive film would decrease or increase correspondingly. In our design, the APT nerve consisted of two graphite-based conductive films, which were face-to-face taped together and served as the active layers. Bending the APT nerve would make the upper and bottom active layers bent oppositely so that the changes in the resistance of the two active layers were reversed. As a result, the overall resistance of the APT nerve would be relatively stable and thus, the APT nerve could operate in bending state. In addition, the proposed APT nerve based on the bionic architecture also featured high stability, rapid response, and even cuttability. As a proof-of-concept, we demonstrated that the APT nerve was capable of not only converting mechanical stimulation into an electrical signal but also recognizing the location of the mechanical stimulation. After logical analysis, the mechanosensitive signal could be transmitted and used for subsequent interactive control, including playing music, controlling positioning in the two-dimensional plane, and free handling rotation, which confirmed the huge potentials of the APT nerve for smart prosthetics and socially intelligent robotics. Furthermore, we explored the spatiotemporal resolution function of the APT nerve, which could be incorporated with the algorithmic analysis of artificial intelligence that would genuinely promote neuroprosthetics and neurorobotics into sophistication. We believe that the APT nerve can also be fabricated by using different functional materials that will allow the APT nerve to become stretchable, transparent, and multifunctional as needed in the future. Biological inspiration and design of the APT nerve Biologically, the external mechanical stimulation is converted into receptor potentials by mechanoreceptors30, such as Merkel disk (MD) that detects static force at slow adaptation rate (Fig. 1a). 
The receptor potentials are encoded by synapses, which are formed between the multiple afferent neurons and the interneuron in the spinal cord, and then awaken the interneurons31. Subsequently, the encoded postsynaptic potentials are transmitted to the cortex in turn through the interneurons to recognize the location of the external mechanical stimulation32. Thus, the proposed APT nerve should have the four essential functions, including detecting the mechanical stimulation, consuming low or no energy unless mechanical stimulation happens, transmitting mechanosensitive signal, and recognizing the location of mechanical stimulation. Fig. 1: Biological sensory neurons and an artificial perception and transmission nerve. a Schematic of the function of biological sensory neurons. Mechanical stimulations are converted into receptor potentials by mechanoreceptors. The receptor potentials induce postsynaptic potentials by synapses. Postsynaptic potentials are transmitted to the cortex for information processing through the interneurons. b Architecture and working mechanism of an artificial perception and transmission nerve (APT nerve). Mechanical stimulation applied on the APT nerve is directly converted into mechanosensitive signals for information processing. Top left: Working mechanism of the APT nerve. Top right: Equivalent circuit. Bottom right: Structure of the APT nerve in right section view. To demonstrate this concept, the proposed APT nerve was designed to emulate the functions of sensory neurons (Fig. 1b). We used a conductive graphite film, which was prepared by a carbon pencil, and a piece of graph paper to serve as the active layer and substrate, respectively, and to construct the prototype of the APT nerve (Supplementary Fig. 1). Two parts that paper-based substrate combined with the conductive graphite film were face-to-face put together with a thin spacer, which ensured that the upper and bottom active layers were separated when no external mechanical stimulation was applied on the APT nerve (Seeing the inset image at the bottom right corner of Fig. 1b). Note that the conductive graphite film was drawn on the paper-based substrate, which was compatible with flexibility and had the potential for optionally preparing different shapes of the APT nerve. In our design, the upper and bottom active layers were in the noncontact state, so that no current would flow through the APT nerve, which made sure the APT nerve did not consume energy when no mechanical stimulation was applied on it. Among various types of sensing mechanisms, the electric contact sensing mechanism was adopted for the APT nerve. When a mechanical stimulation, such as a finger touch, was applied on the APT nerve, the upper and bottom active layers contacted together at the location of the mechanical stimulation. Subsequently, the electric contact caused the mechanosensitive signal to flow from the upper active layer to the bottom active layer. The simplified equivalent circuit was shown at the top right corner of Fig. 1b. Here, the APT nerve was equivalent to a device consisting of numerous resistor units, and the upper and bottom resistors were disconnected by switches. When there was an external mechanical stimulation, the corresponding switch would be closed so that the APT nerve had a corresponding response resistance. 
The response resistance ($R_{total}$) could be expressed as the following equation: $$R_{total} = \sum_{i = 1}^{n} \left( R_{ui} + R_{bi} \right)$$ where $R_{ui}$ and $R_{bi}$ were the resistances of the virtual resistor units of the upper and bottom active layers, respectively, and $n$ indexed the unit at the location of the mechanical stimulation. Thus, the magnitude of the response resistance was determined by the location of the mechanical stimulation. As the location moved away from the electrodes, the response resistance would increase. By analyzing the magnitude of the response resistance, the APT nerve could recognize the location of the mechanical stimulation. The proposed APT nerve was limited to transmitting and recognizing one location at a time, which was analogous to the function of the interneuron in the spinal cord. Performance and characteristic of the APT nerve To characterize the performance, the influence of the thickness of the spacer on the APT nerve was evaluated (Fig. 2a). Mechanical stimulation was applied to the APT nerve with a blade every 5 mm from the beginning of the electrodes. The result showed that the relationship between the response resistance and the location of the mechanical stimulation was nearly linear. The phenomenon in which the response resistance slightly increased with the increase of the thickness of the spacer was due to incomplete electric contact between the upper and bottom active layers (Supplementary Fig. 2). Furthermore, the performance of the APT nerve with different parameters of the active layer was tested (Fig. 2b). Notably, no matter how the width of the active layers was changed, the linear relationship between the response resistance and the location of the mechanical stimulation was still maintained. When the mechanical stimulation was applied at the same location of the APT nerve with different widths of the active layer, the change in the response resistance was mainly caused by the fact that the resistance of the active layer was modulated by the width. On the other hand, the response resistance of the APT nerve was substantially fixed to the location of the mechanical stimulation when the width of the active layer and the thickness of the spacer were fixed (Fig. 2c). To normalize the size of the APT nerve, the thickness of the spacer and the width of the active layer were fixed at 0.12 mm and 5 mm, respectively, unless otherwise noted. The extension of the active length brought a wider range of response resistance, enabling a broader range of detection and recognition of mechanical stimulation, which meant more possibilities and diversity for interactive applications based on artificial intelligence. Fig. 2: Performance and characteristic of the APT nerve. a Influence of the thickness of the spacer on the response resistance of the APT nerve with the active width of 5 mm. Relationship between the location of mechanical stimulation and the response resistance of the APT nerve with different active b width and c length when the spacer was 0.12 mm. d Diagram of one type of mechanical stimulation applied to the APT nerve. Top left: Schematic of a single synapse. Δt is the interval time between adjacent mechanical stimulations. e Change in the response resistance of the APT nerve triggered by the mechanical stimulation at different intervals (5.6 s and 2.7 s). The mechanical stimulation was applied at the location of 5 cm of the APT nerve. f Diagram of mechanical stimulations applied at different locations (3 cm and 6 cm) of the APT nerve.
Top left: Schematic of spatiotemporally dynamical stimulations of two synapses. g Spatiotemporally dynamical response of the APT nerve to the mechanical stimulations. The interval was 2.7 s. Most devices would lose their functionality completely after some local damage. Due to the unique architecture of the APT nerve, even if a part of the device was sheared off, the rest was still able to sense mechanical stimulation, transmit signals, and recognize the location of the mechanical stimulation (Supplementary Fig. 3). This was because the response resistance of the APT nerve was only affected by the active length between the location of the mechanical stimulation and the electrodes. Loss of some terminal portion of the APT nerve did not change the connection between the electrodes and the remaining conductive graphite films. This characteristic was similar to that of the biological sensory nerve, in which partial loss of the end does not cause a complete loss of neural perception33, and it was beneficial for reducing subsequent maintenance or frequent replacement of the device. In addition, since the final step of the device fabrication was to pack the two paper-based components face-to-face together with a thin transparent tape, the tape could effectively isolate the paper substrate from contact with outside water. Thus, the as-fabricated APT nerve featured waterproofness, and its response remained reproducible after storage in a humid place (Supplementary Fig. 4). An electrical or chemical signal propagating along the axon from one neuron to another needs to pass through a biological synapse (Fig. 2d). Stimulation below the detection threshold does not cause the generation of a nerve signal for cortical sensation32. Analogously, as the sensing mechanism was based on electric contact, there would also be a detection threshold for the APT nerve. Thus, the response threshold, i.e., the pressure at which the APT nerve generated a stable response signal, was tested (Supplementary Fig. 5). The result showed that the APT nerve could stably respond to mechanical stimulation larger than ~5 kPa. Above this threshold, the response of the APT nerve was mainly influenced by the location of the mechanical stimulation. The change in the response of the APT nerve over time under mechanical stimulation was compared to the plasticity of the biological synapse, which is an essential basis for learning. Short-term mechanical stimulations applied at intervals were adopted to study the responses of the APT nerve (Fig. 2e). In order to eliminate a potential pressure deviation from the loading equipment, a loading pressure larger than 5 kPa (as an example, about 10 kPa) was applied to the APT nerve for the test. It could be found that the APT nerve responded fast (<21 ms) to the mechanical stimulation and the mechanosensitive signal disappeared quickly (<21 ms) when the mechanical stimulation stopped (Supplementary Fig. 6). The regular fluctuations of the response were similar to the excitation and inhibition of a biological synapse, which is also activated by external stimulation and then returns to its original state after the stimulation is removed34,35. The results from multiple tests with different intervals (2.7 s and 5.6 s) between the mechanical stimulations indicated that the APT nerve was highly robust. Biologically, the spatiotemporally dynamic logic of a neural network is formed by dynamically transmitting postsynaptic potentials from multiple presynapses at intervals (Fig. 2f).
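To connect the resistance-location relation introduced earlier with the spatiotemporally resolved touches demonstrated next, the short sketch below simulates the equivalent-circuit picture of the APT nerve: the device is treated as a chain of identical virtual resistor units, and a touch at unit n closes the n-th switch so that the measured resistance is the series sum of the upper- and bottom-layer units between the electrodes and the contact point. The number of units, the per-unit resistances, and the helper functions are illustrative assumptions rather than parameters reported for the fabricated device.

```python
# Minimal sketch (assumed parameters) of the APT nerve equivalent circuit:
# a touch at unit n yields R_total = sum_{i=1}^{n} (R_u_i + R_b_i).

N_UNITS = 9                 # assumed number of virtual resistor units (e.g. one per cm)
R_UPPER_PER_UNIT = 1.0e3    # assumed resistance of one upper-layer unit, ohms
R_BOTTOM_PER_UNIT = 1.0e3   # assumed resistance of one bottom-layer unit, ohms

def response_resistance(touch_unit: int) -> float:
    """Total resistance seen at the electrodes when unit `touch_unit` is pressed."""
    if not 1 <= touch_unit <= N_UNITS:
        raise ValueError("touch outside the active length")
    return sum(R_UPPER_PER_UNIT + R_BOTTOM_PER_UNIT for _ in range(touch_unit))

def locate_touch(measured_resistance: float) -> int:
    """Invert the (here exactly linear) resistance-location relationship."""
    unit_resistance = R_UPPER_PER_UNIT + R_BOTTOM_PER_UNIT
    return round(measured_resistance / unit_resistance)

if __name__ == "__main__":
    # Spatiotemporally distinct touches (e.g. at units 3 and 6) give clearly different
    # resistance levels, which is what lets a single channel encode location over time.
    for unit in (3, 6):
        r = response_resistance(unit)
        print(f"touch at unit {unit}: R_total = {r:.0f} ohm -> decoded unit {locate_touch(r)}")
```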
Since the response resistance was highly regionally differentiated with respect to the location of mechanical stimulation, the APT nerve could also serve as a geometrically hierarchical sensing device to mimic the spatiotemporally dynamic logic of the neural network. For the demonstration, short-term mechanical stimulations were applied to different locations (3 cm and 6 cm) of the APT nerve at an interval of 2.7 s. Fast and stable changes in the resistance with different amplitudes were generated in response to the spatiotemporally dynamic mechanical stimulation (Fig. 2g). The response of the APT nerve was analogous to that of neural potentials triggered from different presynapses when spatiotemporally dynamic stimulations were applied to different mechanoreceptors. Thus, a basic spatiotemporally dynamic logic was demonstrated by using the APT nerve. Note that this feature makes the APT nerve potentially useful for time-enhanced ciphers, in which the code changes over time to prevent a hacker from obtaining the correct code sequence and gaining access to confidential information (Supplementary Fig. 7). Another important consideration for the use of the APT nerve was the stability of the response to mechanical stimulation. The response resistance of the APT nerve under mechanical stimulation at different locations for a durability test of >10,000 cycles was shown in Supplementary Fig. 8. For realistic applications, follow-up protections could be provided to make a functional nerve device work durably, but they would extend the preparation process, require more materials, increase the costs, and may limit the portability of the device. The result showed that the APT nerve without such cumbersome protections possessed high repeatability and durability. Furthermore, it indicated that the as-prepared APT nerve based on the separate electrical double-layers structure provided a prototype that could forgo follow-up protections while still supporting long-term, reliable mechanosensitive interactions towards realistic applications. Multifunctional touch interaction of the APT nerve Next, as a proof-of-concept, the APT nerve was used to implement multifunctional touch interaction (Fig. 3). A driver circuit was designed to allow the APT nerve to communicate with a computer. Figure 3a shows the circuit schematic to drive the APT nerve. The first stage was a linear transform from resistance into voltage so that it could be further converted into the digital domain by an analog-to-digital converter (ADC). This resistance-to-voltage converter was composed of an operational amplifier with a feedback resistor network. The output voltage ($V_{out}$) was linearly related to the response resistance ($R_{total}$), as shown in the following formula: $$V_{out} = \left( 1 + R_{total}/R_{fb} \right) \times V_{ref}$$ Fig. 3: Multifunctional touch interaction of the APT nerve. a Circuit schematic to drive the APT nerve for applications. b A linear APT nerve used for playing music. The tonic solfa of Do, Re, Mi, Fa, Sol, La, and Si would be produced by touching corresponding segments of the APT nerve. c Response resistance of the linear APT nerve when touching different segments. d Change in the voltage producing the corresponding tonic solfa. e An L-shaped APT nerve used for controlling the position of a chess piece in a two-dimensional plane. The horizontal branch controlled the horizontal movement of the chess piece and the other branch controlled the movement of the chess piece on the vertical axis.
Bottom: Change in the voltage when touching different segments. f Demonstration of handling the rotation of an earth model by using a flexible APT nerve with a square shape. Each perception area was 10 × 10 mm2. Bottom: Touching different perception areas resulting in the changes in the voltage. where Rfb and Vref were the feedback resistor and the reference voltage, respectively. The analog output of the driver circuit was digitalized and synchronized by a microcontroller development board (Arduino Leonardo), which integrated a low-power microcontroller chip (ATmega32u4) as its central processor. The microcontroller development board had total number of 12 internal on-chip analog-to-digital convertors (ADC), and one of them was connected to the analog output voltage as its input sensing signal. Hence, the analog voltage was digitalized into the range of 0–1023 by such 10-bit ADC in a sample frequency of 100 Hz. There was a look up table built inside the microcontroller so as to map the digital value to the location of the mechanical stimulation. Furthermore, there was a built-in universal-serial-bus (USB) interface on this development board, which enabled microcontroller to communicate with personal computers (PC). Based on the digital value of the output voltage, this development board could send corresponding pre-defined commands to the PC side. Since there would be customized applications developed on the computer side, actions would be performed as the responses based on the received command. It should be noted that during the test of finger touch, the contact area between the fingertip and the APT nerve may change somewhat, resulting in a slight change in the contact area of the upper and bottom active layers. However, the influence of this change on the response resistance was limited. Each response resistance was still stable even with some negligible fluctuations. It could be directly converted into voltage, which was recognized by the microcontroller. Thus, this process did not need additional denoising. On the other hand, when the finger touch was removed from the APT nerve, the response resistance quickly disappeared. Accordingly, the output voltage would return to the original state. Therefore, baseline tracking processing was not required to prevent baseline drift. Figure 3b described a linear APT nerve with the active length of 140 mm that was virtually divided into seven segments, namely L1, L2, L3, L4, L5, L6, and L7, corresponding to the tonic solfa of Do, Re, Mi, Fa, Sol, La, and Si. Each perception segment was 5 × 20 mm2. When a finger was touching a certain segment, the response resistance of the linear APT nerve was quickly generated (Fig. 3c) and maintained. Accordingly, the output voltage of the driver circuit was changed (Fig. 3d). It could be found that these changes in the output voltage were different from each other so that the microcontroller could easily identify them and made the virtual piano produce the corresponding sounds. Vivid playing music by a finger touching on the linear APT nerve was shown in Supplementary Movie 1. Due to the scalability of the preparation process and the universality of the design principle, in addition to the linear strip shape, the architecture of the APT nerve could be designed to be L-shaped. The second demonstration was that the L-shaped APT nerve was designed and fabricated to control positioning in a two-dimensional plane (Fig. 3e and Supplementary Movie 2). 
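Before turning to the details of the L-shaped demonstration, the sketch below walks through the signal chain just described in software: the response resistance sets $V_{out} = (1 + R_{total}/R_{fb}) \times V_{ref}$, the 10-bit ADC digitizes it into a 0–1023 code, and a look-up-table-style mapping recovers the touched segment (here, a note of the virtual piano). The component values ($R_{fb}$, $V_{ref}$, the ADC full-scale voltage, and the per-segment resistance step) are illustrative assumptions, not the values used in the actual driver circuit.

```python
# Minimal software walk-through (assumed component values) of the driver chain:
# R_total -> V_out = (1 + R_total/R_fb) * V_ref -> 10-bit ADC code -> segment -> note.

R_FB = 10.0e3                # assumed feedback resistor, ohms
V_REF = 0.5                  # assumed reference voltage, volts
ADC_VREF = 5.0               # assumed ADC full-scale voltage (e.g. a 5 V board), volts
ADC_BITS = 10                # 10-bit ADC, codes 0..1023
SEGMENT_RESISTANCE = 10.0e3  # assumed R_total step per touched segment, ohms

NOTES = ["Do", "Re", "Mi", "Fa", "Sol", "La", "Si"]  # seven segments L1..L7

def output_voltage(r_total: float) -> float:
    return (1.0 + r_total / R_FB) * V_REF

def adc_code(voltage: float) -> int:
    code = int(voltage / ADC_VREF * (2 ** ADC_BITS - 1))
    return max(0, min(2 ** ADC_BITS - 1, code))

def code_to_note(code: int) -> str:
    """Look-up-table style mapping from an ADC code back to a segment/note."""
    voltage = code / (2 ** ADC_BITS - 1) * ADC_VREF
    r_total = (voltage / V_REF - 1.0) * R_FB
    segment = round(r_total / SEGMENT_RESISTANCE)   # 1..7 for L1..L7
    segment = max(1, min(len(NOTES), segment))
    return NOTES[segment - 1]

if __name__ == "__main__":
    for segment in range(1, 8):
        r = segment * SEGMENT_RESISTANCE
        code = adc_code(output_voltage(r))
        print(f"segment L{segment}: ADC code {code:4d} -> note {code_to_note(code)}")
```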
The detailed parameter of the L-shaped APT nerve was shown in Supplementary Fig. 9a. The horizontal branch of the L-shaped APT nerve was virtually divided into seven segments, namely X1, X2, X3, X4, X5, X6, and X7, and would control the horizontal movement of a chess piece, which was created and could move in a 7 × 7 grid on PC. Similarly, the other seven segments from the vertical branch, naming as Y1, Y2, Y3, Y4, Y5, Y6, and Y7, were used to control the movement of the chess piece on the vertical axis. Based on this, the movement of the chess piece in any position in the two-dimensional plane would be realized by touching the corresponding segment of the L-shaped APT nerve. For example, when a finger touched X5, Yd, X7, and Yg in sequence, the response resistance of the L-shaped APT nerve would change the output voltage of the driver circuit (bottom of Fig. 3e). According to different voltages, the microcontroller would send the machine-recognized codes that moved the chess piece from P1a to P7g through P5a, P5d, and P7d. It should be noted that the L-shaped APT nerve only needed one channel so that contributed to simplify the signal processing and avoid the potential problems of the integration of numerous sensing elements, such as intricate interconnections, complicated structure, and interference in signal transmission. Alternatively, an S-shaped APT nerve could be also designed and would be effective for the positioning control in the two-dimensional plane (Supplementary Fig. 9b). Furthermore, in order to demonstrate the proposed APT nerve still worked well under bending state, we designed and fabricated a flexible APT nerve with a square shape and built a free-rotating earth model (Fig. 3f). Each perception area of the flexible APT nerve was 10 × 10 mm2 (Supplementary Fig. 9c) and named as A1, A2, A3, and A4. Touching different perception areas would cause the change in the output voltage of the driver circuit (bottom of Fig. 3f). Thereupon, the earth model could be rotated up, down, left, and right by touching the corresponding perception area of A2, A4, A1, and A3. The response resistance of the flexible APT nerve was generally stable by finger touch no matter it was in the flat state or the bending state (Supplementary Fig. 10). Supplementary Movie 3 shows that we fit the flexible APT nerve on a bottle with a radius of ~2.7 cm to handle the rotation of the earth model. This result demonstrated the proposed APT nerve featured flexibility that was similar to the biological nerve. Although the above three demonstrations were limited to one finger touch, the results indicated the method provided here and the proposed sensing mechanism could make the APT nerve available to be designed and fabricated into a variety of architectures as needed, which provided essential prototypes for building future all-in-one APT neural networks. In addition, considering the nonlinear relationship between the biological sensory neuron and the external mechanical stimulation, the linear relationship between the input mechanical stimulation and the output signal from the APT nerve could be converted to be nonlinear as needed by transduction circuits or taking use of microcontroller in a software manner (Supplementary Note 1). The APT nerve for perceptual learning The human brain learns and responds to different stimulation by processing and analyzing the signals from the biological sensory nerves. 
Like the functions of biological sensory nerves, the proposed APT nerve had the potential to target human-machine interaction based on artificial intelligence, such as user identification, feedback control of prostheses, and intelligent action of a robot. Figure 4a illustrates a basic process flow combined with algorithmic analysis of artificial intelligence. The mechanical stimulations from different users' touches were first converted to digital signals by the APT nerve and driver circuit. Through a logical operation, the features of the mechanosensitive signals would be extracted and then put into an artificial neural network to decide whether or not to respond. Fig. 4: The APT nerve for perceptual learning. a Schematic diagram of the machine learning based on the APT nerve, including recognition of mechanical stimulation, signal acquisition, feature extraction, and decision by the neural network. b Mechanosensitive signals from the APT nerve corresponding to different input information. The touching characteristic parameters included the touching location (L), holding time (H), and latency interval (I), which characterize different mechanical stimulations within a set time (10 s). c Three-factor feature map based on the touching characteristic parameters extracted from b. As a simple demonstration of the functionality, the APT nerve with the active length of 180 mm was virtually and evenly divided into nine perception segments. In this way, different perception segments stimulated by a finger could be distinguished by the magnitude of the response signals. The above studies have shown that the APT nerve possesses a spatiotemporal resolution function like that of a biological nerve. Thus, we could set three touching characteristic parameters, including touching location (L), holding time (H), and latency interval (I), to characterize different mechanical stimulations of finger touch within a set time (10 s). For example, when we successively touched the third, sixth, and eighth perception segments, the mechanosensitive signal from the APT nerve was generated quickly and showed the corresponding magnitude (Fig. 4b upper). Different mechanosensitive signals would also be generated to reflect the personal input information from other users through the APT nerve (Fig. 4b lower). Subsequently, the touching characteristic parameters were extracted from the mechanosensitive signals by using logical analysis. The results indicated that these different inputs could be characterized by the three-factor feature map (Fig. 4c). Therefore, through many rounds of sampling and analysis, we could build a touching feature database. After that, the k-nearest neighbors algorithm36 was used for the classification. The main idea of this algorithm was that if the majority of the touching characteristic parameters from an unknown input were most similar to those from several known inputs, the unknown input and those known inputs belonged to the same category. In detail, after multiple inputs from the authorized users were detected, a database of the input characteristics from the authorized users would be built. Thereafter, when a new input from an unknown user was detected, the characteristic parameters of this input would be compared with the database. If the similarity of input characteristics between them reached 90% (system-defined) or more, the system could judge that the unknown input was from an authorized user; otherwise, it was from an unauthorized user. The more data collected, the more accurate the result would be.
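As a concrete illustration of the classification step just described, the sketch below applies a k-nearest-neighbors style vote to hypothetical (L, H, I) touch features: a small database of feature vectors from an authorized user is built, and a new input is accepted when the majority of its most similar database entries exceed a similarity threshold that echoes the 90% criterion. The feature values, the similarity definition, and k are invented for the example and do not come from the paper's dataset.

```python
# Minimal k-nearest-neighbors sketch over hypothetical (L, H, I) touch features:
# L = touched segment index, H = holding time (s), I = latency interval (s).
import math

K = 3                       # assumed number of neighbours
SIMILARITY_THRESHOLD = 0.9  # echoes the 90% criterion; the similarity metric is invented

# Hypothetical database of feature vectors recorded from an authorized user.
authorized_db = [
    (3, 1.0, 0.5), (6, 0.8, 0.7), (8, 1.2, 0.4),
    (3, 1.1, 0.6), (6, 0.9, 0.6), (8, 1.0, 0.5),
]

def similarity(a, b) -> float:
    """Invented similarity score in (0, 1]: closer feature vectors score higher."""
    return 1.0 / (1.0 + math.dist(a, b))

def is_authorized(sample) -> bool:
    """Accept when the majority of the K most similar database entries
    exceed the similarity threshold (a k-nearest-neighbors style vote)."""
    scores = sorted((similarity(sample, ref) for ref in authorized_db), reverse=True)[:K]
    votes = sum(1 for s in scores if s >= SIMILARITY_THRESHOLD)
    return votes > K // 2

if __name__ == "__main__":
    print(is_authorized((3, 1.05, 0.55)))  # resembles stored touches -> True
    print(is_authorized((1, 3.00, 2.00)))  # very different touch rhythm -> False
```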
The backend procedures would use this result to provide appropriate feedback for the interaction. In this work, we have introduced an all-in-one bionic sensory nerve, called the APT nerve, and demonstrated its integrated functions of detecting mechanical stimulation, consuming no energy when no mechanical stimulation happens, recognizing the location of mechanical stimulation, and transmitting mechanosensitive signals, thereby mimicking the main functions of the biological sensory neural system. Although the artificial sensory neural system is still in its infancy, by exploring the performance and characteristics of the proposed APT nerve, we have demonstrated its remarkable multifunctional touch interactions, including playing music, controlling positioning in the two-dimensional plane, and freely handling rotation. The as-desired APT nerve was based on the separate electrical double-layers structure that addressed the critical challenges of intricate interconnections, complicated structure, and interference in signal transmission. The stable and high performance of the APT nerve ensured that the analysis and processing of data could be performed without additional denoising and baseline tracking, which would greatly simplify logic circuit design and fabrication processes. Due to its non-pixelated sensing characteristic, the APT nerve could be virtually divided into multiple perception segments according to the needs of interactive applications towards socially intelligent robotics. Furthermore, the feature of spatiotemporally dynamic logic gave the APT nerve potential for perceptual learning, which would genuinely help neuroprosthetics and robotics advance in sophistication. Ultra-robustness and excellent durability over >10,000 cyclical tests enabled the APT nerve to operate with highly stable and reliable mechanosensitive interactions and to avoid frequent maintenance. The separate electrical double-layers structure ensured that the APT nerve consumed no energy when there was no mechanical stimulation, which enabled long-term operation. The APT nerve could operate perfectly in the flat and bending states. High flexibility and unique cuttability allowed the APT nerve to be integrated comfortably with the human body as needed. Geometrically hierarchical sensing, rapid response (<21 ms), and high variability in shape endowed the APT nerve with great potential for multifunctional touch interactions. We believe this APT nerve provides an aspirational and significant alternative for constructing artificial sensory neural systems to advance socially intelligent robotics and smart prosthetics. The method provided here makes the APT nerve amenable to large-scale fabrication and allows it to be designed into a wide variety of prototype devices. The flexibility of the APT nerve was attributed to the fiber-based paper substrate. Because of the water-absorbent characteristic of the substrate, a thin transparent tape used to fabricate the device would make the APT nerve waterproof so that it could interact with sweaty fingers. The as-prepared APT nerve could operate perfectly in both flat and bending states due to the opposite changes in the resistance of the two active layers when bent. 
In order to realize stretchability of the APT nerve for a wider range of future applications, organic materials, such as poly(3,4-ethylenedioxythiophene):poly(styrenesulfonate)37, nitrile butadiene rubber coated with carbon nanotubes and silver38, and poly(styrene-block-butadiene-block-styrene) absorbed with silver nanoparticles39, may be feasible alternatives due to their stretchability and conductivity, although further exploration and research are needed to achieve stable signal output of the APT nerve during stretching. Analogous to the function of the interneuron in the spinal cord, the APT nerve, based on the separate electrical double-layers structure, was limited to effectively recognizing one mechanical stimulation at a time. Nevertheless, it is worth further consideration to provide a feasible strategy that keeps the device's structure extremely simple and highly effective while enabling the APT nerve to sense multiple mechanical stimulations and recognize their locations. Besides, there is some noise, namely random or unpredictable fluctuations and disturbances, in the biological neural system, even without any mechanical stimulation40. Even so, random disturbances of signals in the proposed artificial sensory neural system may cause the microcontroller to misjudge and issue wrong commands. Currently, we adopted the separate electrical double-layers structure to reduce noise so that the system could reliably perform the actions as intended. Continuous exploration will be conducted to bring the bionic devices towards realistic applications. In the future, the combination of different functional materials for fabricating devices will enable the APT nerve to possess versatile sensing capabilities, such as temperature, humidity, and even light, towards human-machine hybrid enhanced intelligence. Fabrication of the APT nerve Firstly, two pieces of adhesive tape (Scotch Magic™ Tape 810#, 3M Inc.) were pasted on a piece of graph paper (No. 704, QIANCAILE) to form a specified hollow pattern. Typically, the hollow area was 5 × 140 mm2. An 8B pencil (No. 6841, Deli Company) was then used to draw on the hollow area of the paper substrate. Drawing was repeated several times to fully fill the hollow area with graphite. After peeling off one layer of the tape-based pattern, a well-regulated, as-designed graphite-based film was formed and served as the conductive path. Subsequently, new adhesive tapes were attached on both sides of the graphite strip along the long axis, which served as the spacer. The typical thickness of the spacer was 0.12 mm. The conductive line was led out at one end of the long axis of the conductive path by silver paint (SPI-PAINT, Structure Probe, Inc.). The above steps produced one basic part of the APT nerve. The other part of the APT nerve was prepared by the same steps, but with a pattern mirror-symmetric to that of the previous part. Finally, the as-prepared parts were put together face-to-face with the adhesive tape and assembled into the APT nerve. The adhesive tape used here also provided the waterproofness of the APT nerve. Signal processing circuit During the finger-touch test, a response resistance would be generated by touching a part of the APT nerve, whose active layer was 5 × 140 mm2 and whose spacer thickness was 0.12 mm. 
After initial evaluation, the minimum response resistance, generated by touching the portion closest to the electrodes, was 0.5 kΩ. Conversely, when the finger touched the portion farthest from the electrodes, the maximum response resistance of 15 kΩ was generated. Thus, the other response resistances, generated by touching different portions of the APT nerve, would fall between 0.5 and 15 kΩ, and each of them was stable without observable fluctuation. In order to keep the output voltage within 5 V, the feedback resistor Rfb was selected as 5 kΩ and the reference voltage Vref was configured as 1 V. The output voltage (Vout) swing in this circuit was 2.9 V, which linearly mapped the response resistance of the proposed APT nerve. The amplifier (AMP) model we used was the LM741 from Texas Instruments. In particular, for the L-shaped APT nerve, as the active length became longer, the maximum response resistance increased up to 33 kΩ, so the feedback resistor was selected as 10 kΩ. Characterization and measurement Scanning electron microscopy (JEOL, JSM 6360) was employed to observe the surface morphology of the graph paper and the graphite-based conductive film. Digital multimeters (UNI-T UT39B and Agilent 34461A) were used to detect and record the response resistance of the APT nerve. A customized actuator (Beijing Times Brilliant Electric Technology Co., Ltd.) applied dynamic pressure on the APT nerve for the tests. The applied force was calibrated by a standard force sensor (Bengbu Sensors System Engineering Co., Ltd., JHBM-7). The thickness of the spacer was measured with a vernier caliper (HANS.w, HS1044A). The data that support the findings of this study are available from the corresponding author upon reasonable request. Keysers, C., Kaas, J. H. & Gazzola, V. Somatosensation in social perception. Nat. Rev. Neurosci. 11, 417–428 (2010). Tee, B. C. et al. A skin-inspired organic digital mechanoreceptor. Science 350, 313–316 (2015). Bouton, C. E. et al. Restoring cortical control of functional movement in a human with quadriplegia. Nature 533, 247–250 (2016). George, J. A. et al. Biomimetic sensory feedback through peripheral nerve stimulation improves dexterous use of a bionic hand. Sci. Robot. 4, eaax2352 (2019). Salminger, S. et al. Long-term implant of intramuscular sensors and nerve transfers for wireless control of robotic arms in above-elbow amputees. Sci. Robot. 4, eaaw6306 (2019). Kim, Y. et al. A bioinspired flexible organic artificial afferent nerve. Science 360, 998–1003 (2018). Hochberg, L. R. et al. Reach and grasp by people with tetraplegia using a neurally controlled robotic arm. Nature 485, 372–375 (2012). Graczyk, E. L. et al. The neural basis of perceived intensity in natural and artificial touch. Sci. Transl. Med. 8, 362ra142 (2016). Ravi, S. K. et al. Photosynthetic bioelectronic sensors for touch perception, UV-detection, and nanopower generation: toward self-powered E-skins. Adv. Mater. 30, e1802290 (2018). He, J. et al. A universal high accuracy wearable pulse monitoring system via high sensitivity and large linearity graphene pressure sensor. Nano Energy 59, 422–433 (2019). Yao, S., Swetha, P. & Zhu, Y. Nanomaterial-enabled wearable sensors for healthcare. Adv. Healthcare Mater. 7, 1700889 (2018). Liao, X. et al. A highly stretchable ZnO@fiber-based multifunctional nanosensor for strain/temperature/UV detection. Adv. Funct. Mater. 26, 3074–3081 (2016). Pan, L. et al. 
Mechano-regulated metal-organic framework nanofilm for ultrasensitive and anti-jamming strain sensing. Nat. Commun. 9, 3813 (2018). Guo, H. Y. et al. A highly sensitive, self-powered triboelectric auditory sensor for social robotics and hearing aids. Sci. Robot. 3, eaat2516 (2018). Liu, Z. F. et al. Hierarchically buckled sheath-core fibers for superelastic electronics, sensors, and muscles. Science 349, 400–404 (2015). Onoe, H. et al. Metre-long cell-laden microfibres exhibit tissue morphologies and functions. Nat. Mater. 12, 584–590 (2013). Fan, Y. J. et al. Highly robust, transparent, and breathable epidermal electrode. ACS Nano 12, 9326–9332 (2018). Wang, W. et al. Surface diffusion-limited lifetime of silver and copper nanofilaments in resistive switching devices. Nat. Commun. 10, 81 (2019). Wang, Z. et al. High-density NAND-like spin transfer torque memory with spin orbit torque erase operation. IEEE Electron Device Lett. 39, 343–346 (2018). Qian, K. et al. Direct observation of indium conductive filaments in transparent, flexible, and transferable resistive switching memory. ACS Nano 11, 1712–1718 (2017). Shim, H. et al. Stretchable elastic synaptic transistors for neurologically integrated soft engineering systems. Sci. Adv. 5, eaax4961 (2019). Lee, Y. et al. Stretchable organic optoelectronic sensorimotor synapse. Sci. Adv. 4, eaat7387 (2018). Wu, Y. et al. A skin-inspired tactile sensor for smart prosthetics. Sci. Robot. 3, eaat0429 (2018). Yang, C. H. et al. Ionic cable. Extreme Mech. Lett. 3, 59–65 (2015). Zhu, L. Q., Wan, C. J., Guo, L. Q., Shi, Y. & Wan, Q. Artificial synapse network on inorganic proton conductor for neuromorphic systems. Nat. Commun. 5, 3158 (2014). Harris, JuliaJ., Jolivet, R. & Attwell, D. Synaptic Energy Use and Supply. Neuron 75, 762–777 (2012). Holm, R. & Holm, E. A. Electric contacts: theory and application. (Springer-Verlag Berlin Heidelberg, New York, 1967). Wedeen, V. J. et al. The geometric structure of the brain fiber pathways. Science 335, 1628–1634 (2012). Liao, X. et al. Flexible and highly sensitive strain sensors fabricated by pencil drawn for wearable monitor. Adv. Funct. Mater. 25, 2395–2401 (2015). Chortos, A., Liu, J. & Bao, Z. Pursuing prosthetic electronic skin. Nat. Mater. 15, 937–950 (2016). Ho, V. M., Lee, J. A. & Martin, K. C. The cell biology of synaptic plasticity. Science 334, 623–628 (2011). Flesher, S. N. et al. Intracortical microstimulation of human somatosensory cortex. Sci. Transl. Med. 8, 361ra141 (2016). Bareyre, F. M. et al. The injured spinal cord spontaneously forms a new intraspinal circuit in adult rats. Nat. Neurosci. 7, 269–277 (2004). Zucker, R. S. & Regehr, W. G. Short-term synaptic plasticity. Annu. Rev. Physiol. 64, 355–405 (2002). Bargmann, C. I. & Marder, E. From the connectome to brain function. Nat. Methods 10, 483–490 (2013). Cover, T. & Hart, P. Nearest neighbor pattern classification. IEEE Trans on Inf. Theory 13, 21–27 (1967). Fan, X. et al. PEDOT:PSS for flexible and stretchable electronics: modifications, strategies, and applications. Adv. Sci. 6, 1900813 (2019). Chun, K. Y. et al. Highly conductive, printable and stretchable composite films of carbon nanotubes and silver. Nat. Nanotechnol. 5, 853–857 (2010). Park, M. et al. Highly stretchable electric circuits from a composite material of silver nanoparticles and elastomeric fibres. Nat. Nanotechnol. 7, 803–809 (2012). Faisal, A. A., Selen, L. P. & Wolpert, D. M. Noise in the nervous system. Nat. Rev. Neurosci. 9, 292–303 (2008). 
This work was partially supported by the Program of Nanoantenna Spatial Light Modulators for Next Generation Display Technology (No. A18A7b0058), the National Science Fund for Distinguished Young Scholars (No. 81825024), and National Natural Science Foundation of China (No. 61661146002). These authors contributed equally: Xinqin Liao, Weitao Song, Xiangyu Zhang. School of Electrical and Electronic Engineering, Nanyang Technological University, 50 Nanyang Avenue, Singapore, 639798, Singapore Xinqin Liao, Weitao Song, Xiangyu Zhang & Yuanjin Zheng Department of Acupuncture and Moxibustion, Dongzhimen Hospital, Beijing University of Chinese Medicine, Hai Yun Cang on the 5th Zip, Beijing, 100700, China Chaoqun Yan & Cunzhi Liu Department of Biomedical Engineering, National University of Singapore, 21 Lower Kent Ridge Road, Singapore, 119077, Singapore Tianliang Li & Hongliang Ren Beijing Engineering Research Centre of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, 5 Zhongguancun South Street, Beijing, 100081, China Yongtian Wang AICFVE of Beijing Film Academy, 4 Xitucheng Road, Beijing, 100088, China X.L. and Y.Z. conceived the idea. X.L., W.S. and X.Z. designed and carried out the experiments. C.Y., T.L., H.R., C.L. and Y.W. assisted with experiment operations. X.L., C.L., Y.W. and Y.Z. analyzed data. X.L., X.Z., and Y.Z. wrote the paper. All authors discussed the results and commented on the manuscript. Correspondence to Xinqin Liao or Yuanjin Zheng. Peer review information Nature Communications thanks the anonymous reviewers for their contribution to the peer review of this work. Liao, X., Song, W., Zhang, X. et al. A bioinspired analogous nerve towards artificial intelligence. Nat Commun 11, 268 (2020). https://doi.org/10.1038/s41467-019-14214-x
Practical implementation of Libor Market Model I am trying to implement a project about the BGM model, suggested in the book "The Concepts and Practice of mathematical finance" by Mark Joshi. My question is related to the forward volatility structure, particularly about the covariance matrix. First of all, the book assumes that forward $f_j$ has volatility $$ K_j \left(\left(a + b(t_j - t)\right)e^{-c(t_j-t)} + d\right) $$ for $t < t_j$ and $0$ otherwise. Also, the instantaneous correlation between the forward rates $f_i$ and $f_j$ is defined as $e^{-\beta|t_i - t_j|}$. Now, I need to write a method that "computes the covariance matrix for the time-step". I get confused with the fact that there are "simultaneous" forward rates at each time step that have to be simulated, which brings me to the following two questions: Concerning the dimensions of the covariance matrix, I see that they depend on the number of time periods (and not on the time step size), but how long are the time periods? Is there a convention? How do the elements of the covariance matrix come into play? This might sound a bit stupid, but when pricing a swaption, we can have the following discretization of the logarithm of the forward rate: $$ \ln F_k^{\Delta t}(t + \Delta t) = \ln F_k^{\Delta t}(t) + \sigma_k(t)\sum_{j = \alpha + 1}^k \frac{\rho_{k,j}\,\tau_j\,\sigma_j(t)\,F_j^{\Delta t}(t)}{1 + \tau_j F_j^{\Delta t}(t)} \Delta t - \frac{\sigma_k(t)^2}{2}\Delta t + \sigma_k(t)\left(Z_k(t + \Delta t) - Z_k(t)\right)$$ I do not see how the covariance matrix helps in a Monte Carlo simulation. I might be confusing concepts, so some help would be welcome. fixed-income monte-carlo covariance-matrix forward-rate lmm Adam Old question, but I am interested. Can you tell me what $K_{j}$ represents? Is it a distribution? – user9078057 Oct 9 '20 at 13:45 For a swap, we have a sequence of resetting and payment dates. The number of forward rates corresponds to the number of payment dates. For example, let us assume that we have $n$ payment dates $t_1, \ldots, t_n$, where $0< t_1 < \cdots < t_n$. Then there are $n$ forward rates. During the simulation, for time steps prior to $t_1$, there exist $n$ "simultaneous" forward rates, corresponding to payment dates $t_1, \ldots, t_n$, while for time steps between $t_1$ and $t_2$, there exist $n-1$ "simultaneous" forward rates, corresponding to payment dates $t_2, \ldots, t_n$. For time steps between $t_{n-1}$ and $t_n$, there is only a single forward rate that corresponds to the last payment date $t_n$. Because of the existence of these "simultaneous" forward rates, except for the time steps between $t_{n-1}$ and $t_n$, the Cholesky decomposition of the correlation matrix between the driving Brownian motions of the existing forward rate dynamics is needed. That is how the covariance matrix comes into play in the Monte Carlo simulation. Gordon There are two things that might be confusing you: the time step in the time dimension and the time steps along the forward curve. The first is given by a time t from today until a certain day in the future; this dt is usually the next reset date. The other is tau, representing a tenor for the forward curve maturing tau days ahead. Dtau could vary depending on how you built your term structure of interest rates. Regarding the covariance, it is an essential feature of the model because it enters the drift condition that makes the model arbitrage-free. It is not exclusive to swaptions but applies to a generic LIBOR Market Model simulation. 
Tulio Carnelossi Thanks for your answer. Unfortunately it is still not clear to me how long the tenor has to be, or how exactly the covariance matrix impacts the calculations when pricing any product, most significantly a swaption. For example, in the above equation for the evolution of the forward log price, I do not see how the covariance affects the calculations. – Adam Jan 17 '15 at 15:23 It's not a matter of how it affects the calculations; it's a matter of arbitrage conditions. Take a look at Peter Jaeckel's slides on his website. There is a very clear part explaining the drift condition. – Tulio Carnelossi Jan 17 '15 at 15:26
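As a concrete supplement to the answers above, the sketch below assembles the integrated covariance matrix for one simulation step from the volatility and correlation parametrization quoted in the question, and takes its Cholesky factor to produce the correlated Gaussian increments used in the log-Euler scheme. All parameter values are placeholders and the trapezoidal quadrature is only illustrative; it is not code from the book.

```python
import numpy as np

# Hypothetical parameters of the abcd volatility and the correlation decay.
a, b, c, d, beta = 0.02, 0.3, 1.2, 0.1, 0.05
reset_times = np.array([1.0, 1.5, 2.0, 2.5, 3.0])   # t_j of the live forwards
K = np.ones_like(reset_times)                        # per-forward scaling K_j

def sigma(j, t):
    """Instantaneous volatility of forward j at time t (zero after its reset)."""
    tau = reset_times[j] - t
    return K[j] * ((a + b * tau) * np.exp(-c * tau) + d) if tau > 0 else 0.0

def step_covariance(t0, t1, n_quad=201):
    """Integrated covariance over [t0, t1]: rho_ij * int sigma_i(u) sigma_j(u) du."""
    u = np.linspace(t0, t1, n_quad)
    n = len(reset_times)
    cov = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            rho = np.exp(-beta * abs(reset_times[i] - reset_times[j]))
            cov[i, j] = rho * np.trapz([sigma(i, x) * sigma(j, x) for x in u], u)
    return cov

C = step_covariance(0.0, 0.25)                       # one quarterly step
L = np.linalg.cholesky(C + 1e-12 * np.eye(len(C)))   # correlated increments: L @ standard normals
print(np.round(C, 6))
```

The Cholesky factor L is what correlates the Brownian increments Z_k in the discretization quoted in the question, and the matrix dimension shrinks as forwards expire, exactly as the first answer describes.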
WOTSwana: A Generalized Sleeve Construction for Multiple Proofs of Ownership David Chaum, Mario Larangeira, Mario Yaksetig The $\mathcal{S}_{leeve}$ construction proposed by Chaum et al. (ACNS'21) introduces an extra security layer for digital wallets by allowing users to generate a "backup key" securely nested inside the secret key of a signature scheme, i.e., ECDSA. The "backup key", which is secret, can be used to issue a "proof of ownership", i.e., only the real owner of this secret key can generate a single proof, which is based on the WOTS+ signature scheme. The authors of $\mathcal{S}_{leeve}$ proposed the formal technique for a single proof of ownership, and only informally outlined a construction to generalize it to multiple proofs. This work identifies that their proposed construction presents drawbacks, e.g., varying signature size and signing/verification computation complexity, and the limitation to a linear construction. Therefore, we introduce WOTSwana, a generalization of $\mathcal{S}_{leeve}$, which is, more concretely, an extra security layer that generates multiple proofs of ownership, and we put forth a thorough formalization of two constructions: (1) one given by a linear concatenation of numerous WOTS+ private/public keys, and (2) a construction based on a tree-like structure, i.e., an underlying Merkle tree whose leaves are WOTS+ private/public key pairs. Furthermore, we present the security analysis for multiple proofs of ownership, showcasing that this work addresses the aforementioned drawbacks of the original construction. In particular, we extend the original security definition for $\mathcal{S}_{leeve}$. Finally, we illustrate an alternative application of our construction by discussing the creation of an encrypted group chat messaging application.
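To make the tree-based idea sketched in the abstract a little more tangible, here is a toy illustration (emphatically not the actual WOTSwana construction): each leaf is the hash of a simplified WOTS-style public key obtained by iterating a hash chain, and the leaves are combined into a Merkle root. Checksum chains, addressing, bitmasks, and all security-relevant details of real WOTS+ are deliberately omitted.

```python
import hashlib, os

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def chain(x: bytes, steps: int) -> bytes:
    """Iterate the hash function 'steps' times (simplified WOTS chain)."""
    for _ in range(steps):
        x = H(x)
    return x

def toy_wots_keypair(n_chains: int = 32, w: int = 16):
    """Secret key = random chain seeds; public key = ends of the hash chains."""
    sk = [os.urandom(32) for _ in range(n_chains)]
    pk = [chain(s, w - 1) for s in sk]
    return sk, H(b"".join(pk))          # compress the public key into one leaf

def merkle_root(leaves):
    level = list(leaves)
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])     # duplicate the last node on odd levels
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

keypairs = [toy_wots_keypair() for _ in range(8)]   # eight proof-of-ownership slots
root = merkle_root([leaf for _, leaf in keypairs])
print(root.hex())
```

The appeal of the tree over the linear concatenation is that a proof only needs one leaf key pair plus a logarithmic-size authentication path to the root, so proof size and verification cost do not grow with the number of slots.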
Symmetry-based indicators of band topology in the 230 space groups Hoi Chun Po1,2, Ashvin Vishwanath1,2 & Haruki Watanabe3 Nature Communications volume 8, Article number: 50 (2017) Cite this article Electronic properties and materials Topological matter An Erratum to this article was published on 10 October 2017 This article has been updated The interplay between symmetry and topology leads to a rich variety of electronic topological phases, protecting states such as the topological insulators and Dirac semimetals. Previous results, like the Fu-Kane parity criterion for inversion-symmetric topological insulators, demonstrate that symmetry labels can sometimes unambiguously indicate underlying band topology. Here we develop a systematic approach to expose all such symmetry-based indicators of band topology in all the 230 space groups. This is achieved by first developing an efficient way to represent band structures in terms of elementary basis states, and then isolating the topological ones by removing the subset of atomic insulators, defined by the existence of localized symmetric Wannier functions. Aside from encompassing all earlier results on such indicators, including in particular the notion of filling-enforced quantum band insulators, our theory identifies symmetry settings with previously hidden forms of band topology, and can be applied to the search for topological materials. The discovery of topological insulators (TIs) has reinvigorated the well-established theory of electronic band structures (BSs)1, 2. Exploration along this dimension has led to an ever-growing arsenal of topological materials, which include, for instance, topological (crystalline) insulators3,4,5, quantum anomalous Hall insulators6 and Weyl and Dirac semimetals7. Such materials possess unprecedented physical properties, like quantized response and gapless surface states, that are robust against all symmetry-preserving perturbations as long as a band picture remains valid1, 2, 7. Soon after these developments, it was realized that symmetries of energy bands, a thoroughly studied aspect of band theory, is also profoundly intertwined with topology. This is exemplified by the celebrated Fu-Kane criterion for inversion-symmetric materials, which demarcates TIs from trivial insulators using only their parity eigenvalues8. This criterion, when applicable, greatly simplifies the topological analysis of real materials, and underpins the theoretical prediction and subsequent experimental verification of many TIs8,9,10,11,12. It is of fundamental interest to obtain results akin to the Fu-Kane criterion in other symmetry settings. Early generalizations in systems without time-reversal (TR) symmetry in two-dimensional (2D) constrained the Chern number (C). The eigenvalues of an n-fold rotation were found to determine C modulo n 13,14,15. This is characteristic of a symmetry-based indicator of topology—when the indicator is nonvanishing, band topology is guaranteed, but certain topological phases (i.e., C a multiple of n in this context) may be invisible to the indicator. In three-dimensional (3D) systems, it was also recognized that spatial inversion alone can protect nontrivial phases. A feature here is that these phases do not host protected surface states, since inversion symmetry is broken at the surface, but they do represent distinct phases of matter. 
For example, they possess nontrivial Berry phase structure in the Brillouin zone, which leads to robust entanglement signatures13, 14, 16, 17 and, in some cases, quantized responses13, 14, 18. Interestingly, in the absence of TR invariance the inversion eigenvalues can also protect Weyl semimetals13, 14, which informed early work on materials candidates19. Hence, these symmetry-based indicators are relevant both to the search for nontrivial insulating phases, and also to the study of topological semimetals. It is also important to note that the goal here is to identify signatures of band topology in the symmetry transformations of the state, which is distinct from the full classification of topological phases. An important open problem is to extend these powerful symmetry indicators for band topology to all space groups (SGs). Earlier studies have emphasized the topological perspective, which typically rely on constructions that are specifically tailored to particular band topology of interest8, 15, 20, 21. While some general mathematical frameworks have been developed22,23,24, obtaining a full list of concrete results from such an approach faces an inherent challenge stemming from the sheer multitude of physically relevant symmetry settings—there are 230 SGs in 3D, and each of them is further enriched by the presence or absence of both spin–orbit coupling and TR symmetry. A complementary, symmetry-focused perspective leverages the existing exhaustive results on band symmetries25, 26 to simplify the analysis. Previous work along these lines has covered restricted cases13, 14, 27, 28. For instance, in ref. 28, which focuses on systems in the wallpaper groups without any additional symmetry, such an approach was adopted to help develop a more physical understanding of the mathematical treatment of ref. 22. However, the notion of nontriviality is a relative concept in these approaches. While such formulation is well-suited for the study of phase transitions between different systems in the same symmetry setting, it does not always indicate the presence of underlying band topology. As an extreme example, such classifications generally regard atomic insulators (AIs) with different electron fillings as distinct phases, although all the underlying BSs are topologically trivial. Here, we adopt a symmetry-based approach that focuses on probing the underlying band topology. At the crux of our analysis is the observation that topological BSs arise whenever there is a mismatch between momentum-space and real-space solutions to symmetry constraints29, 30. To quantitatively expose such mismatches, we first develop a mathematical framework to efficiently analyze all possible BSs consistent with any symmetry setting, and then discuss how to identify the subset of BSs arising from AIs, which are formed by localizing electrons to definite orbitals in real space. The mentioned mismatch then follows naturally as the quotient between the allowed BSs and those arising from real-space specification. We compute this quotient for all 230 SGs with or without spin–orbit coupling and/or TR symmetry. Using these results, we highlight symmetry settings suitable for finding topological materials, including both insulators and semimetals. In particular, we will point out that, in the presence of inversion symmetry, stacking two strong 3D TIs will not simply result in a trivial phase, despite all the \({{\Bbb Z}_2}\) indices have been trivialized. 
Instead, it is shown to produce a quantum band insulator (QBI)30, which can be diagnosed through its robust gapless entanglement spectrum. Overview of strategy and results Our major goal is to systematically quantify the mismatch between momentum-space and real-space solutions to symmetry constraints in free-electron problems30. While AIs, which by definition possess localized symmetric Wannier orbitals, can be understood from a real-space picture with electrons occupying definite positions as if they were classical particles, topological BSs (that are intrinsic to dimensions greater than one) do not admit such a description. Whenever there is an obstruction to such a real-space reinterpretation for a band insulator, the insulating behavior can only be understood through the quantum interference of electrons, and we refer to such systems as QBIs. While all topological phases such as Chern insulators, weak and strong \({{\Bbb Z}_2}\) TIs and topological crystalline insulators with protected surface states in d > 1 are QBIs, more generally, QBIs may not have nontrivial surface states when the protecting symmetries are not compatible with any surface termination. Nonetheless, they represent distinct phases of matter and showcase nontrivial Berry phase in the Brillouin zone31, robust entanglement signatures13, 14, 16, 30, and sometimes quantized responses13, 14, 18. Building on this insight, we develop an efficient strategy for identifying topological materials indicated by symmetries. We will first outline a simple framework to organize the set of all possible BSs using only their symmetry labels. By extending the ideas in refs 13, 28 and allowing for both addition (stacking) as well as formal subtractions of bands, we show that BSs can be conveniently represented in terms of a special type of Abelian group, which is simply called a lattice in mathematical nomenclature. Next, to isolate topological BSs we quotient out those that can arise from a Wannier description. Since such band topology is uncovered from the symmetry representations of the bands, we will refer to it as being represented-enforced. In this work, we present the results of this computation for all of the 230 SGs, 80 layer groups, and 75 rod groups, covering all cases with or without TR symmetry and spin–orbit coupling. Our scheme automatically encompasses all previous results concerning symmetry indicators of band topology, including in particular the Fu-Kane criterion, the relation between Chern numbers and rotation eigenvalues, and the inversion-protected nontrivial phases. We will utilize the results and identify representation-enforced QBIs (reQBIs). We will also discuss a more constrained approach where one first specifies the microscopic lattice degrees of freedom. This is relevant to materials where a hierarchy of energy scales isolates a group of atomic orbitals. We find examples where these constraints lead to semimetallic behavior, despite band insulators at the same filling are symmetry-allowed. We will refer to these as lattice-enforced semimetals (leSMs) and give a concrete tight-binding example of them. Generalizations of these approaches should aid in the discovery of experimentally relevant topological semimetals and insulators. Finally, we make two remarks. First, we ignore electron–electron interactions. 
Second, while our approach is applicable in any dimension, in the special case of one-dimensional (1D) problems even topological phases are smoothly connected to AIs32, and therefore are regarded as trivial within our framework. These states, and their descendants in higher dimensions, are collectively known as frozen-polarization insulators13, and will be absent from our discussion on topological phases. BSs form an Abelian group Here, we argue that the possible set of BSs symmetric under an SG \({\cal G}\) can be naturally identified as the group \({{\Bbb Z}^{{d_{{\rm{BS}}}}}} \equiv {\Bbb Z} \times {\Bbb Z} \times \ldots \times {\Bbb Z}\), where d BS is a positive integer that depends on both \({\cal G}\) and the spin of the particles (Fig. 1). We will first set aside TR symmetry, and later discuss how it can be easily incorporated into the same framework. The discussion in this section follows immediately from well-established results concerning band symmetries25, and the same set of results was recently utilized in ref. 28 to discuss an alternative way to understand the more formal classification in ref. 22. Although there is some overlap between the discussion here and that in ref. 28, we will focus on a different aspect of the narration: instead of being solely concerned with the values of d BS, we will be more concerned with utilizing this framework to extract other physical information about the systems. Symmetry-based indicators of band topology. a Symmetry labeling of bands in a 1D inversion-symmetric example. k 0 = 0, π are high-symmetry momenta, where the bands are either even (+) or odd (−) under inversion symmetry (orange diamonds). From a symmetry perspective, a target set of bands (purple and boxed) separated from all others by band gaps can be labelled by the multiplicities of the two possible symmetry representations, which we denote by the integers \(n_{{k_0}}^ \pm \). Note that such labeling is insensitive to the detailed energetics within the set. In addition, the set is also characterized by the number of bands involved, which we denote by ν. Altogether, the set is characterized by five integers, which are further subjected to the constraints \(\nu = n_0^ + + n_0^ - = n_\pi ^ + + n_\pi ^ - \). b Symmetry labels like those described in a can be similarly defined for systems symmetric under any of the 230 space groups in three dimensions. Using such labels, one can reinterpret the set of band structures as an Abelian group. This is schematically demonstrated through the two labels ν and n α, which organize the set of all possible band structures into a two-dimensional lattice. Note that the dimensionality of this lattice is given by the number of independent symmetry labels, and is a property of the symmetry setting at hand. Organized this way, the band structures corresponding to atomic insulators, which are trivial by our definition, will generally occupy a sublattice. Any band structure that does not fall within this sublattice necessarily possesses nontrivial band topology We begin by reviewing some basic notions using a simple example. Consider free electrons in a 1D, inversion-symmetric crystal. The energy bands E m(k) are naturally labeled by the band index m and the crystal momentum k ∈ (−π, π). Since inversion P flips k ↔ −k, the Bloch Hamiltonian H(k) is symmetric under PH(k)P −1 = H(−k), which implies E m(k) = E m(−k), and the wavefunctions are similarly related. 
The two momenta k 0 = 0 and π are special as they satisfy P(k 0) = k 0 (up to a reciprocal lattice vector). As such, the symmetry constraint imposed by P becomes a local constraint at k 0, which implies the wavefunctions ψ m(k 0) (generically) furnish irreducible representations (irreps) of P: \(\psi _{\rm{m}}^\dag \left( {{k_0}} \right)P{\psi _{\rm{m}}}\left( {{k_0}} \right) = {\zeta _{\rm{m}}}\left( {{k_0}} \right)\), with ζ m(k 0) = ±1. The parities ζ m(k 0) = ±1 can be regarded as local (in momentum space) symmetry labels for the energy band E m(k), and such labels can be readily lifted to a global one assigned to any set of bands separated from others by a band gap. We will refer to such sets of bands as BSs, although, as we will explain, caution has to be taken when this notion is used in higher dimensions. Insofar as symmetries are concerned, we can label the BS by its filling, ν, together with the four non-negative integers, \(n_{{k_0}}^ \pm \), corresponding to the multiplicity of the irrep ± at k 0 (Fig. 1a). Generally, such labels are not independent, since the assumption of a band gap, together with the continuity of the energy bands, casts global symmetry constraints on the symmetry labels. These constraints are known as compatibility relations. For our 1D problem at hand, there are only two of them, which arise from the filling condition: \(\nu = n_0^ + + n_0^ - = n_\pi ^ + + n_\pi ^ - \). Consequently, the BS is fully specified by three non-negative integers, which we can choose to be \(n_0^ + \), \(n_\pi ^ + \), and ν. This discussion to this point is similar to that of ref. 28, but we now depart from the combinatorics point of view of that work. Instead, similar to ref. 13 we develop a mathematical framework to efficiently characterize energy bands in terms of their symmetry transformation properties, and then show that it provides a powerful tool for analyzing general BSs. To begin, we first note that any BS in this 1D inversion-symmetry problem can be represented by a five-component "vector" \({\bf{n}} \equiv \left( {n_0^ + ,n_0^ - ,n_\pi ^ + ,n_\pi ^ - ,\nu } \right) \in {\Bbb Z}_{ \ge 0}^5\), where \({{\Bbb Z}_{ \ge 0}}\) denotes the set of non-negative integers. In addition, n is subjected to the two compatibility relations. We can arrange these relations into a system of linear equations and denote them by a 2 × 5 matrix \({\cal C}\). The admissible BSs then satisfy \({\cal C}{\bf{n}} = {\rm{0}}\), and hence \({\rm{ker}}\,{\cal C}\), the solution space of \({\cal C}\), naturally enters the discussion. For the current problem, \({\rm{ker}}\,{\cal C}\) is 3D, which echoes with the claim that the BS is specified by three non-negative integers. At this point, it is natural to make a mathematical abstraction and lift the physical condition of non-negativity. We define $$\begin{array}{*{20}{l}}\\ \left\{ {{\rm{BS}}} \right\}{ \equiv {\rm{ker}}\,{\cal C} \cap {{\Bbb Z}^D},} \hfill \\ \end{array}$$ where for the 1D problem at hand we have D = 5. The main advantage of this abstraction is that, unlike \({\Bbb Z}_{ \ge 0}^D\), \({{\Bbb Z}^D}\) is an Abelian group, which greatly simplifies our forthcoming analysis. In particular, {BS} so defined can be identified with \({{\Bbb Z}^{{d_{{\rm{BS}}}}}}\), where d BS = 3 is the dimension of the solution space \({\rm{ker}}\,{\cal C}\). Physically, the addition in \({{\Bbb Z}^{{d_{{\rm{BS}}}}}}\) corresponds to the stacking of energy bands. Next, we generalize the discussion to any SG \({\cal G}\) in three dimensions. 
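Before moving to three dimensions, the bookkeeping for the 1D inversion example above can be made concrete. The following is only an illustrative sketch (using SymPy, with the label ordering n = (n_0^+, n_0^-, n_π^+, n_π^-, ν)), not code from the paper: it writes the two compatibility relations as the matrix C and computes a basis of ker C.

```python
from sympy import Matrix

# Compatibility relations  nu = n_0^+ + n_0^-  and  nu = n_pi^+ + n_pi^-,
# written as C n = 0 with n = (n_0^+, n_0^-, n_pi^+, n_pi^-, nu).
C = Matrix([[1, 1, 0, 0, -1],
            [0, 0, 1, 1, -1]])

basis = C.nullspace()       # basis vectors of ker C (integer-valued in this example)
print(len(basis))           # d_BS = 3, as stated in the text
for v in basis:
    print(list(v))
```

Intersecting the rational kernel with the integer vectors and clearing denominators (unnecessary here) yields the generators b_i of {BS}.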
We call a momentum k a high-symmetry momentum if there is any \(g \in {\cal G}\) other than the lattice translations such that g(k) = k (up to a reciprocal lattice vector). We define a BS as a set of energy bands isolated from all others by band gaps above and below at all high-symmetry momenta. Note that, in 3D, the phrase "all high-symmetry momenta" includes all high-symmetry points, lines, and planes. The discussion for the 1D example carries through, except that one has to consider a much larger zoo of irreps and compatibility relations25 (see Methods and Supplementary Notes 1 and 2 for a detailed discussion). While Eq. 5 follows readily from definitions, it has interesting physical implications. As a group, \({{\Bbb Z}^{{d_{{\rm{BS}}}}}}\) is generated by d BS independent generators. In the additive notation, natural for an Abelian group, we can write the generators as {b i : i = 1, …, d BS}, and for any given BS we can expand it similarly to elements in a vector space $$\begin{array}{*{20}{l}}\\ {{\rm{BS}} = \mathop {\sum}\limits_{i = 1}^{{d_{{\rm{BS}}}}} {{m_i}{{\bf{b}}_i}} ,} \hfill \\ \end{array}$$ where \({m_i} \in {\Bbb Z}\) are uniquely determined once the basis is fixed. Therefore, full knowledge of {BS} is obtained once the d BS generators b i are found. So far, we have not addressed the effect of TR symmetry, which, being anti-unitary, does not lead to new irreps when it is incorporated25. Instead, TR symmetry could force a certain irrep to become paired with either itself or another, giving rise to additional constraints on n. Nonetheless, these constraints can be readily incorporated into the definition of \({\cal C}\), and therefore do not affect our mathematical formulation (Methods). AIs and mismatch classification While we have provided a systematic framework to probe the structure of {BS}, much insight can be gleaned from a study of AIs. AIs correspond to band insulators constructed by first specifying a symmetric set of lattice points in real space, and then fully occupying a set of orbitals on each of the lattice sites. The possible set of AIs can be easily read off from tabulated data of SGs26, 33 (Supplementary Note 2). In addition, once the real-space degrees of freedom are specified, one can compute the corresponding element in {BS}. As stacking two AIs leads to another AI, we see that {AI} ≤ {BS} as groups. Any subgroup of \({{\Bbb Z}^{{d_{{\rm{BS}}}}}}\) is again a free, finitely generated Abelian group, and therefore we conclude $$\left\{ {{\rm{AI}}} \right\} \simeq {{\Bbb Z}^{{d_{{\rm{AI}}}}}} \equiv \left\{ {\mathop {\sum}\limits_{i = 1}^{d_{\rm{AI}}} {{m_i}{{\bf{a}}_i}\,} {\rm{:}}\,{m_i} \in {\Bbb Z}} \right\},$$ where we denote by {a i } a complete set of basis for {AI}. Once {BS} and {AI} are separately computed, it is straightforward to evaluate the quotient group (Supplementary Note 3) $$\begin{array}{*{20}{l}}\\ {{X_{{\rm{BS}}}} \equiv \frac{{\left\{ {{\rm{BS}}} \right\}}}{{\left\{ {{\rm{AI}}} \right\}}}.} \hfill \\ \end{array}$$ Physically, an entry in X BS corresponds to an infinite class of BSs that, while distinct as elements of {BS}, only differ from each other by the stacking of an AI. By definition, the entire subgroup {AI} collapses into the trivial element of X BS. Conversely, any nontrivial element of X BS corresponds to BSs that cannot be interpreted as AIs, and therefore X BS serves as a symmetry indicator of topological BSs. 
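Once generators of {AI} are expressed in some basis of {BS}, the quotient in Eq. 4 can be read off from the Smith normal form of the resulting integer matrix. The sketch below assumes SymPy's smith_normal_form (in sympy.matrices.normalforms) is available and uses a purely hypothetical 3 × 3 generator matrix; it illustrates the mechanics only and does not reproduce any entry of Tables 3 and 4.

```python
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

# Columns: three hypothetical AI generators a_i written in a basis {b_1, b_2, b_3}
# of {BS}.  Then X_BS = Z^3 / (integer span of the columns).
M = Matrix([[2, 0, 0],
            [2, 4, 0],
            [0, 4, 1]])

D = smith_normal_form(M, domain=ZZ)
torsion = [abs(D[i, i]) for i in range(min(D.shape)) if abs(D[i, i]) > 1]
print(torsion)   # [2, 4]  ->  X_BS isomorphic to Z_2 x Z_4 in this toy example
```

Diagonal entries equal to 1 contribute nothing, zero entries would signal a free factor (absent when d_AI = d_BS, as the paper finds in all settings), and the remaining entries give the finite cyclic factors of X_BS.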
One can further show that every element of X BS can be realized by a physical BS (Methods), and therefore X BS indeed corresponds to indicators of band topology in physical systems. Following the described recipe, we compute {AI}, {BS}, and X BS for all 230 3D SGs in the four symmetry settings mentioned. Results for spinful fermions with TR symmetry, relevant for real materials with or without spin–orbit coupling and no magnetic order, are tabulated in Tables 1–4. The results for other symmetry settings and dimensions (Methods) are presented in Supplementary Tables 5–20. Table 1 Characterization of band structures for systems with time-reversal symmetry and significant spin–orbit coupling Table 2 Characterization of band structures for systems with time-reversal symmetry and negligible spin–orbit coupling Table 3 Symmetry-based indicators of band topology for systems with time-reversal symmetry and significant spin–orbit coupling Table 4 Symmetry-based indicators of band topology for systems with time-reversal symmetry and negligible spin–orbit coupling An interesting observation from this exhaustive computation is the following: for all the symmetry settings considered, we found d BS = d AI, and therefore X BS is always a finite Abelian group. Equivalently, when only symmetry labels are used in the diagnosis, a BS is nontrivial precisely when it can only be understood as a fraction of an AI. In addition, d BS = d AI implies that a complete set of basis for {BS} can be found by studying combinations of AIs, similar to Eq. 3 but with a generalization of the expansion coefficients \({m_i} \in {\Bbb Z}\) to \({q_i} \in {\Bbb Q}\), subjected to the constraint that the sum remains integer-valued. Although the full set of compatibility relations is needed in our computation establishing d BS = d AI, using our results the basis of {BS} can be readily computed directly from {AI} (Supplementary Note 3). Since {BS} can be easily found this way, we will refrain from providing a lengthy list of all the bases we found. To illustrate the ideas more concretely, we discuss a simple example concerning non-TR-symmetric spinless fermions symmetric under SG 106. In this setting, d BS = d AI = 3, and a 1, one of the three generators of {AI}, has the property that all irreps appear an even number of times, while the other two generators contain some odd entries. Now consider b 1≡a 1/2, which is still integer-valued. Clearly, by linearity b 1 satisfies all symmetry constraints, and therefore b 1∈{BS}. However, \({{\rm{b}}_1} \notin \left\{ {{\rm{AI}}} \right\} \equiv \left\{ {\mathop {\sum}\nolimits_{i = 1}^3 {{m_i}{{\bf{a}}_i}\!:\,{m_i} \in {\Bbb Z}} } \right\}\), and therefore b 1 corresponds to a quantum BS, and indeed it is a representative for the nontrivial element of \({X_{{\rm{BS}}}}{\rm{ = }}{{\Bbb Z}_2}\). In addition, if we consider a tight-binding model with a representation content corresponding to a 1, the decomposition a 1 = b 1 + b 1 implies that it is possible to open a band gap at all high-symmetry momenta at half filling, and thereby realizing the quantum BS b 1. It turns out that, in fact, b 1 corresponds to a filling-enforced QBI (feQBI)30. We will elaborate further on this point in the Supplementary Note 4. Before we move on to concrete applications of our results, we pause to clarify some subtleties in the exposition. Recall that the notion of BS is defined using the presence of band gaps at all high-symmetry momenta. 
Generally, however, there can be gaplessness in the interior of the Brillouin zone that coexist with our definition of BS. While in some cases such gaplessness is accidental in nature, in the sense that it can be annihilated without affecting the BS, in some more interesting cases it is enforced by the specification of the symmetry content. This was pointed out in refs 13, 14 for inversion-symmetric systems without TR symmetry, where certain assignments of the parity eigenvalues ensure the presence of Weyl points at some generic momenta. When a nontrivial element in X BS can be insulating, we refer to it as as a reQBI; when it is necessarily gapless, we call it a representation-enforced semimetal (reSM). We caution that X BS will naturally include both reQBIs and reSMs, although some symmetry settings naturally forbid the notion of reSMs. In fact, one can show that their individual diagnoses are related by X SM = X BS/X BI (Supplementary Note 5). Hence, given an entry of X BS one has to further decide whether it corresponds to a reSM or a reQBI. In Supplementary Note 5, we provide general arguments on the existence of reSMs for systems with significant spin–orbit coupling. In addition, we also note that, while every BS belonging to a nontrivial class of X BS is necessarily nontrivial, some systems in the trivial class can also be topological. By definition, the representation content of a BS belonging to the trivial class of X BS can be constructed by stacking of AIs. However, if the stacking necessarily involves negative coefficients, the BS cannot be attained from stacking physical AIs, and therefore is still topologically nontrivial. Some of the feQBIs discussed in ref. 30 also fall into this category. Alternatively, when the topological nature of a system is undetectable using only symmetry labels, say for the tenfold-way phases in the absence of any spatial symmetries beyond the lattice translations, the system belongs to the trivial element of X BS despite it is topological. The general relation between X BS and the conventional tenfold-way classification depends on the symmetry setting at hand, and its understanding is an important open question (Methods). Quantum Band Insulators in conventional settings Having derived a general theory for finding symmetry-based indicators of band topology, we now turn to applications of the results. As a first application, we utilize the results in Table 3 to look for reQBIs that are not diagnosed by previously available topological invariants. In particular, we will focus on a result concerning one of the most well-studied symmetry setting: materials with significant spin–orbit coupling symmetric under TR, lattice translations and inversion (SG 2). As shown in Table 3, \({X_{{\rm{BS}}}} = {\left( {{{\Bbb Z}_2}} \right)^3} \times {{\Bbb Z}_4}\) for SG 2. Using the Fu-Kane criterion8, one can verify that the strong and weak TIs, respectively, serve as the generators of the \({{\Bbb Z}_4}\) and \({{\Bbb Z}_2}\) factors. This identification, however, fails to account for the nontrivial nature of the doubled strong TI, which being a nontrivial element in \({{\Bbb Z}_4}\) corresponds to a reQBI. It is also not covered in the earlier lines of work focusing on inversion-symmetric insulators13, 14, 16. This reQBI possesses a trivial magnetoelectric response (θ = 0) and is not expected to have protected surface states. 
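As a concrete aside to the parity analysis above, the following minimal sketch tallies the standard Fu-Kane indices from the parity eigenvalues of the occupied Kramers pairs at the eight TRIM. The parity data are placeholders, and the finer Z4 counting that separates the doubled strong TI from an atomic insulator is not reproduced here.

```python
import numpy as np
from itertools import product

# One parity (+1/-1) per occupied Kramers pair at each of the eight TRIM,
# labelled by (n1, n2, n3) with k = pi * (n1, n2, n3).  Placeholder data:
# two occupied Kramers pairs, with a band inversion at Gamma.
parities = {trim: [+1, -1] for trim in product((0, 1), repeat=3)}
parities[(0, 0, 0)] = [-1, -1]

delta = {trim: int(np.prod(p)) for trim, p in parities.items()}

# Strong index: (-1)^nu0 equals the product of delta over all eight TRIM.
nu0 = int(np.prod(list(delta.values())) == -1)

# Weak indices: product of delta over the four TRIM in the plane n_i = 1.
weak = [int(np.prod([d for t, d in delta.items() if t[i] == 1]) == -1)
        for i in range(3)]
print(nu0, weak)   # 1 [0, 0, 0]: a strong TI for this placeholder data
```

Nonetheless, as the text explains next, the doubled strong TI has nu0 = 0 and all weak indices trivial, so its nontriviality only shows up in the finer, X_BS-level counting.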
Nonetheless, the nontrivial nature of the reQBI can be seen from its entanglement spectrum, which exhibits protected gaplessness related to the parity eigenvalues of the filled bands (Fig. 2a). In the present context, we define the entanglement spectrum as the collection of single-particle entanglement energies arising from a spatial cut, which contains an inversion center and is perpendicular to a crystalline axis. Refs 13, 14, 16 showed that the entanglement spectrum of TR and inversion symmetric insulators generally have protected Dirac cones at the TR invariant momenta of the surface Brillouin zone. These Dirac cones carry effective integer charges under inversion symmetry, and as a result they are symmetry-protected. The doubled strong TI phase has twice the number of Dirac cones as the regular strong TI (Fig. 2b). Examples of topological band structures. a–c A representation-enforced quantum band insulator of spinful electrons with time-reversal and inversion symmetries, dubbed the "doubled strong TI". a Using the Fu-Kane parity criterion8, the strong and weak \({{\Bbb Z}_2}\) indices can be computed from the the parities of the occupied bands, which we indicate by ± at the eight time-reversal invariant momenta. Shown are the parities of one state from each Kramers pair for a doubled strong TI with four filled bands. b The entanglement spectrum at a spatial cut, parallel to the x–y plane and containing an inversion center, features two Dirac cones at Γ13, 14, 16. Such Dirac cones are known to possess integer-valued charges under the inversion symmetry, and we denote the positively charged and negatively charged cones, respectively, by blue and red. c Inversion-symmetric atomic insulators feature entanglement surface Dirac cones in general, but their presence depends on the arbitrary choice of the cut. We find that the possible Dirac-cone arrangement arising from atomic insulators can only be a linear combination of four basic configurations, illustrated as a sum with the integral weights m i . The arrangement in b cannot be reconciled with those in c, confirming the nontriviality of the doubled strong TI. d, e Example of a lattice-enforced semimetal for spinful electrons with time-reversal symmetry. d We consider a site (red sphere) under a local environment (beige) symmetric under the point group T, and suppose the relevant local energy levels form the four-dimensional irreducible representation, which is half-filled (boxed). e When the red site sits at the highest-symmetry position of space group 219, the specified local energy levels and filling gives rise to a half-filled eight-band model (each band shown is doubly degenerate). Such (semi-)metallic behavior is dictated by the specification of the microscopic degrees of freedom in this model Yet, one must use caution in interpreting the nontrivial nature of such entanglement, since inversion-symmetric AIs also have protected entanglement surface states whenever the center of mass of an electronic wavefunction is pinned to the entanglement cut. The presence of these entanglement signatures, however, is dependent on the arbitrary choice of the location of the cut, and therefore is not as robust as the other topological characterizations. In contrast, since we have already quotient out all AIs in the definition of X BS, the reQBI at hand must have a more topological origin. This is verified from the pictorial argument in Fig. 
2a–c, where we contrast the entanglement spectrum of the doubled strong TI with those that can arise from AIs. Importantly, we see that the total Dirac-cone charge of an AI is always 0 mod 4, whereas the doubled strong TI has a charge of 2 mod 4. This implies that the entanglement gaplessness is independent of the arbitrary choice of the cut, and in fact shows that the bulk computation of X BS can be reproduced by considering the entanglement spectrum for this symmetry setting. Note that, if TR is broken, Kramers paring will be lifted and the irrep content of this reQBI becomes achievable with an AI. This suggests that the reQBI at hand is protected by the combination of TR and inversion symmetry. It is an interesting open question to study whether or not this reQBI has any associated quantized physical response13. We note that, since the strong TI is compatible with any additional spatial symmetry, the argument above is applicable to any centrosymmetric SGs. Indeed, as can be seen from Table 3, all of them have |X BS| ≥ 4, consistent with our claim. Therefore, the doubled strong TI phase could be realizable in a large number of materials classes. Finally, we remark that the same X BS is found for SG 2 in all the other symmetry settings, although their physical interpretations are different. In particular, the generators of \({{\Bbb Z}_4} < {X_{{\rm{BS}}}}\) correspond to a reSM in the other settings. This observation also shows that the doubled strong TI phase remains nontrivial in the absence of spin–orbit coupling. Lattice-enforced semimetals As another application of our results, we demonstrate how the structure of {BS} exposes constraints on the possible phases of a system arising from the specification of the microscopic degrees. We will in particular focus on the study of semimetals, but a similar analysis can be performed in the study of, say, reQBIs. As a warm-up, recall the physics of (spinless) graphene, where specifying the honeycomb lattice dictates that the irrep at the K point is necessarily 2D, and therefore the system is guaranteed to be gapless at half filling. Using the structure of {BS} we described, this line of reasoning can be efficiently generalized to an arbitrary symmetry setting: any specification of the lattice degrees of freedom corresponds to an element A ∈ {AI}, and one simply asks if it is possible to write A = B v + B c, where B v,c ∈ {BS} satisfies the physical non-negative condition, such that B v corresponds to a BS with a specified filling ν. Whenever the answer is no, the system is guaranteed to be (semi-)metallic. We refer to any such system as a leSM. Note that a stronger form of symmetry-enforced gaplessness can originate simply from the electron filling, and such systems were dubbed as filling-enforced semimetal (feSMs)34, 35. We will exclude feSMs from the definition of leSMs, i.e., we only call a system a leSM if the filling ν is compatible with some band insulators in the same symmetry setting, but is nonetheless gapless because of the additional lattice constraints. A preliminary analysis reveals that leSMs abound, especially for spinless systems with TR symmetry. This is in fact anticipated from the earlier discussions in refs 36,37,38. Instead, we will turn our attention to TR-invariant systems with significant spin–orbit coupling, which lies beyond the scope of these earlier studies and oftentimes leads to interesting physics1, 7, 30. A systematic survey of them will be the focus of another study. 
Here, we present a proof-of-concept leSM example we found, which arises in systems symmetric under SG 219 (F \(\bar 4\)3c). We will only sketch the key features of the model; interested readers are referred to the Methods section for details of the analysis. We consider a lattice with two sites in each primitive unit cell, where each site has a local environment corresponding to the cubic point group T (Fig. 2d). We suppose the relevant on-site degrees of freedom transform under the four-dimensional irreducible co-representation of T under TR symmetry (ref. 25), and that the system is at half filling, i.e., the filling is ν = 4 electrons per primitive unit cell. Although the local orbitals are partially filled, generically a band gap becomes permissible once electron hopping is incorporated. Naively, for the present problem this may appear to be the likely scenario, since the momentum-space irreps all have dimensions ≤ 4 (ref. 25) and band insulators are known to be possible at this filling (refs 34, 35). However, using our framework one can prove that no BS is possible for this system at ν = 4, implying that there is irremovable lattice-enforced gaplessness at some high-symmetry line. This is indeed verified in Fig. 2e, where we plot the BS obtained from an example tight-binding model (Methods).

In this work, we present a simple mathematical framework for efficiently analyzing BSs as entities defined globally over the Brillouin zone. We further utilize this result to systematically quantify the mismatch between the momentum-space and real-space descriptions of free electron phases, obtaining a plethora of symmetry settings for which topological materials are possible. Our results concern a fundamental aspect of the ubiquitous band theory. For electronic problems, we demonstrated the power of our approach by discussing three particular applications, predicting both QBIs and semimetals (see also Supplementary Note 4). We highlight four interesting future directions below: first, to incorporate the tenfold-way classification into our symmetry-based diagnosis of topological materials (ref. 28); second, to discover quantized physical responses unique to the phases we predicted (refs 13, 14, 18); third, to extend the results to magnetic SGs (ref. 25); and lastly, to screen materials databases for topological materials relying on fast diagnosis invoking only symmetry labels (ref. 39). More broadly, we expect our analysis to shed light on other fields of study, most notably photonics and phononics, where the interplay between topology, symmetry, and BSs is of interest. Note added: Recently, ref. 40 appeared, which has some overlap with the present work, in that it also identifies topological band insulators by contrasting them with AIs. However, the present work differs from ref. 40 in important ways in the formulation of the problem and the mathematical approach adopted.

Glossary of abbreviations For brevity, we have introduced several abbreviations in the text. For the readers' convenience, we provide a glossary of the less-standard ones here. AI (atomic insulator): band insulators possessing localized symmetric Wannier functions. BS (band structure): a set of energy bands separated from all others by band gaps above and below at all high-symmetry momenta. fe (filling-enforced): referring to attributes that follow from the electron filling of the system. le (lattice-enforced): referring to attributes that follow from the specification of the microscopic degrees of freedom in the lattice.
QBI (quantum band insulators): band insulators, with or without protected surface states, that do not admit any atomic limit provided the protecting symmetries are preserved. re (representation-enforced): referring to attributes that follow from knowledge on the symmetry representations of the energy bands. SG (space group): any one of the 230 spatial symmetry groups of crystals in three dimensions. SM (semimetals): filled bands with gap closings that are stable to infinitesimal perturbations. TI (topological insulator): \({{\Bbb Z}_2}\) TIs in two or three dimensions for spin–orbit-coupled system with TR symmetry (note that we use this phrase in a restricted sense in this work). Three-dimensional BSs In the main text, we have illustrated the definition and interpretation of {BS} using a simple 1D example. In the following we summarize the key generalizations required to address 3D systems. A more detailed discussion is presented in Supplementary Notes 1 and 2. Similar to the 1D example, in the general 3D setting a collection of integers, corresponding to the multiplicities of the irreps in the BS, is assigned to each high-symmetry momentum. By the gap condition imposed in the definition of a BS, these integers are invariant along high-symmetry lines. In addition, any pair of symmetry-related momenta will share the same labels. Altogether, we see that the symmetry content of a BS, together with the number of bands ν, is similarly specified by a finite number of integers, which we call D. Therefore, these symmetry labels can be identified as elements of the group \({{\Bbb Z}^D}\), where group addition corresponds to the physical operation of stacking BSs. As discussed, however, these integers are again subjected to the compatibility relations, which arise whenever a high-symmetry momentum is continuously connected to another with a lower symmetry. By continuity, the symmetry content of the BS at the lower-symmetry momentum is fully specified by that of the higher-symmetry one, giving rise to linear constraints we denote collectively by the matrix \({\cal C}\). The group {BS} is then defined as in Eq. 1, and again we find $$\begin{array}{*{20}{l}}\\ {\left\{ {{\rm{BS}}} \right\} \equiv {\rm{ker}}\,{\cal C} \cap {{\Bbb Z}^D} \simeq {{\Bbb Z}^{{d_{{\rm{BS}}}}}},} \hfill \\ \end{array}$$ where as before \({d_{{\rm{BS}}}} = {\rm{dim}}\,\ker \,{\cal C}\). Note that this result has a simple geometric interpretation: from the definition Eq. 1, we can picture \({\rm{ker}}\,{\cal C}\) as a d BS-dimensional hyperplane slicing through the hypercubic lattice \({{\Bbb Z}^D}\) embedded in \({{\Bbb R}^D}\) (Supplementary Note 3). This gives rise to the sublattice \({{\Bbb Z}^{{d_{{\rm{BS}}}}}}\) (Fig. 1b). Effect of TR symmetry Being anti-unitary, TR alone does not modify the irreps. However, under the action of TR an irrep can be paired to either a distinct copy of itself, or to another irrep25. The constraints arising from both cases can be readily incorporated into the definition of \({\cal C}\): when under TR an irrep α at k is paired with a different irrep β at k′, where k′ = k or k′ = −k, we simply add to \({\cal C}\) an additional compatibility relation \(n_{\bf{k}}^\alpha {\rm{ = }}n_{{\bf{k}}\prime }^\beta \); when α is paired with itself, we demand α to be an even integer, which can be achieved by redefining \(\tilde n_{\bf{k}}^\alpha \equiv n_{\bf{k}}^\alpha {\rm{/}}2\) and a corresponding rewriting of C in terms of \({\tilde {\bf n}}\). 
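To make the kernel construction concrete, the following is a minimal computational sketch (sympy assumed). The 2 × 4 matrix below is a made-up toy compatibility matrix, not the \({\cal C}\) of any actual space group; the sketch reads off d BS and tests whether a candidate set of symmetry labels solves all compatibility relations. Extracting a genuine Z-basis of \({\rm{ker}}\,{\cal C} \cap {{\Bbb Z}^D}\) would additionally require an integer normal form (e.g., the Smith normal form), which is omitted here.

```python
import sympy as sp

# Hypothetical toy compatibility matrix: rows are compatibility relations,
# columns index the D integer symmetry labels of a band structure.
C = sp.Matrix([
    [1, -1,  0,  0],
    [0,  1, -1, -1],
])

D = C.cols
d_BS = D - C.rank()            # dim ker C, so {BS} is isomorphic to Z^{d_BS}
print("d_BS =", d_BS)          # 2 for this toy matrix

# Rational basis of ker C (enough to read off d_BS; an integer Smith or Hermite
# normal form would be needed for a true Z-basis of ker C intersected with Z^D).
for v in C.nullspace():
    print(v.T)

# Check whether a candidate label vector satisfies all compatibility relations.
n = sp.Matrix([2, 2, 1, 1])
print("compatible:", C * n == sp.zeros(C.rows, 1))
```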
Note that, although TR is not included in our definition of high-symmetry momenta, we will always take Kramers degeneracy in spin–orbit-coupled systems into account. Physical aspects of the mathematical treatment While we have shown in the main text that {BS} is a well-defined mathematical entity and identified its general structure, it remains to connect it to the study of physical BSs. Here, we first argue that as long as B ∈ {BS} satisfies the physical condition of non-negativity, namely all entries of B are non-negative integers, then B corresponds to a physically realizable BS. Next, we will show that all entries of X BS have physical representatives. Below, we will only sketch the arguments involved. Interested readers are referred to Supplementary Note 3 for a more elaborated discussion. Recall that in motivating the definition Eq. 1, in order to obtain a group structure we have lifted the physical condition that all irreps must appear a non-negative number of times. This implies that any physical BS must correspond to elements in the subset \({\left\{ {{\rm{BS}}} \right\}_{\rm{P}}} \equiv {\rm{ker}}\,{\cal C} \cap {\Bbb Z}_{ \ge 0}^D \subset \left\{ {{\rm{BS}}} \right\}\). However, one should question whether all elements in {BS}P indeed correspond to some physical BSs. This can be reasoned by noting that as {BS} is defined as the solution of all compatibility relations, all necessary band crossings and degeneracies have been taken into account. Therefore, by adjusting the energetics of a sufficiently general physical model one can realize any element of {BS}P, up to accidental degeneracies that can be removed by symmetry-preserving perturbations. Next, we argue that all entries in X BS have physical representatives. Suppose an element of X BS is represented by a B ∈ {BS}, which does not satisfy the physical condition of non-negativity. Using a small technical corollary we discuss in the Supplementary Note 2, one can show that the representation content of any B can be rectified by stacking with some A ∈ {AI}, i.e., B + A ∈ {BS}P. Since B + A belongs to the same class as B in X BS, we arrive at a physical representative of the same element of X BS. Extension to other symmetry settings Results for TR-symmetric systems in any of the 230 SGs are presented in the main text, and the corresponding ones for systems without TR symmetry are presented in Supplementary Tables 5–8. Here, we remark that the corresponding results for quasi-1D and 2D systems, described, respectively, by rod and layer group symmetries26, 41, can be readily obtained (Supplementary Note 3). The results are presented in Supplementary Tables 9–20. In particular, we found \({X_{{\rm{BS}}}} = {{\Bbb Z}_1}\), the trivial group, for all quasi-1D systems. This is consistent with the picture that topological BSs in 1D can be understood as frozen polarization states, which are AIs and hence trivial in our definition. Relation to K-theory-based classifications As discussed, band topology identified within the K-theory framework may not be detectable using only symmetry labels. As an example, consider a 2D system with only lattice translation symmetries. For such systems, the K-theory classification of band insulators in refs 22, 24, 28 gives \({{\Bbb Z}^2}\), where the two factors correspond, respectively, to the electron filling (i.e., number of bands) and the Chern number. 
In contrast, within our approach we find \(\{ {\rm{BS}}\} = \{ {\rm{AI}}\} = {\Bbb Z}\), since in this setting the only symmetry label is the filling, which cannot detect the Chern number of the bands. Furthermore, as there exists an AI for any filling ν, we find \({X_{{\rm{BS}}}} = {{\Bbb Z}_1}\), the trivial group. However, in some other cases using symmetry labels alone one can also detect the tenfold-way phases, as in cases where the Fu-Kane parity criterion applies8. As a related problem, one can readily study how a centrosymmetric SG constrains the possible weak TI phases using our results in Table 3. This is related to the number of factors in X BS, i.e., the number of independent generators N g. As one such factor is reserved for the strong TI, the SG is compatible with at most N g − 1 independent weak TI phases. While this has been pointed out for certain cases in the literature42, our approach automatically encapsulates some of these result in a simple manner. Example of leSMs Here, we provide details on the leSM example discussed in the main text. We consider a TR-symmetric system in SG 219 with significant spin–orbit coupling. We will establish that for a particular lattice specification, a semimetallic behavior is unavoidable at a filling ν = 4, although band insulators are generally possible at this filling for the present symmetry setting34, 35. This arises from the fact that, given the available symmetry irreps specified by the lattice, corresponding to an element A ∈ {AI}, there is no way to satisfy all the compatibility relations at the filling ν = 4, i.e., A ≠ B v + B c for any non-zero B v, B c ∈ {BS} satisfying the physical condition of non-negativity. We consider a lattice in Wyckoff position a, which contains two sites at r 1 ≡ (0, 0, 0) and r 2 ≡ (1/2, 0, 0) in the unit cell. The two sites are related by a glide symmetry, and the site-symmetry group for each site is given by the point group T (i.e., the orientation-preserving symmetries of a tetrahedron, also known as the chiral tetrahedral symmetry group). We suppose the physically relevant degrees of freedom arise from the three p x,y,z orbitals on each site, which together with electron spin leads to a six-dimensional local Hilbert space. We will let L and S, respectively, denote the orbital and spin angular momentum operators in the single-particle basis. As described in the main text, we consider a TR-symmetric system with a strong crystal-field splitting: $$\begin{array}{*{20}{l}}\\ {{H_\Delta } = \Delta \mathop {\sum}\limits_{{\bf{r}}:{\rm{allsites}}} {{\bf{c}}_{\bf{r}}^\dag \left( {{\bf{L}} \cdot {\bf{S}}} \right){{\bf{c}}_{\bf{r}}}} ,} \hfill \\ \end{array}$$ where c r represents the six-dimensional (column) vector corresponding to the internal degrees of freedom. One can verify that when Δ > 0, H Δ splits the local energy levels to a total spin-1/2 doublet lying below the total spin-3/2 multiplet. While we have chosen H Δ to conserve the total spin L + S for convenience, such conservation is not required by the local symmetry, which is described by the point group T < SO(3). Therefore, the total spin quantum numbers are not a priori good quantum numbers for the problem at hand. However, one can verify that the spinful, TR symmetric irreps of T coincide with the total spin decomposition described above25, and hence insofar as symmetries are concerned H Δ is a sufficiently generic crystal-field Hamiltonian. 
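The level ordering claimed for H Δ can be verified directly by diagonalizing L · S on the six-dimensional local Hilbert space. The short numerical check below is not part of the paper's analysis; it assumes numpy, sets ħ = 1 and Δ = 1 purely for illustration, and confirms a twofold eigenvalue −Δ (the total spin-1/2 doublet) lying below a fourfold eigenvalue +Δ/2 (the total spin-3/2 multiplet) when Δ > 0.

```python
import numpy as np

# Orbital angular momentum l = 1 in the |m = 1, 0, -1> basis (hbar = 1).
Lz = np.diag([1.0, 0.0, -1.0]).astype(complex)
Lp = np.sqrt(2.0) * np.array([[0, 1, 0], [0, 0, 1], [0, 0, 0]], dtype=complex)  # raising operator L+
Lx = (Lp + Lp.conj().T) / 2
Ly = (Lp - Lp.conj().T) / 2j

# Spin-1/2 operators.
Sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
Sy = 0.5 * np.array([[0, -1j], [1j, 0]], dtype=complex)
Sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)

# L . S acting on the six-dimensional space (orbital tensor spin).
LdotS = sum(np.kron(L, S) for L, S in [(Lx, Sx), (Ly, Sy), (Lz, Sz)])

Delta = 1.0                     # illustrative value; any Delta > 0 gives the same ordering
H_Delta = Delta * LdotS
print(np.round(np.linalg.eigvalsh(H_Delta), 6))
# Expected output: [-1. -1.  0.5  0.5  0.5  0.5] (times Delta),
# i.e. the j = 1/2 doublet lies below the j = 3/2 quadruplet for Delta > 0.
```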
We also note that, if TR symmetry is broken, the fourfold degenerate states originating from the total spin-3/2 states can be further split. As discussed in the main text, we are interested in the systems arising from half filling the fourfold degenerate local energy levels. To this end, we assume Δ is the dominant energy scale in the problem, which implies the low-lying doubly degenerate states can be decoupled from the description of the system as long as they are fully filled. This leaves behind the fourfold degenerate energy levels, which we assume are half-filled. As there are two symmetry-related sites in each unit cell, these considerations altogether imply that the BS around the Fermi energy is described by an effective eight-band tight-binding model at filling ν = 4. Next, we consider a nearest-neighbor hopping term $$\begin{array}{*{20}{l}}\\ {{H_{t,\lambda }} = \mathop {\sum}\limits_{g \in {\cal G}} {g\left( {{\bf{c}}_{{{\bf{r}}_1}}^\dag \left( {t + \lambda \,{{\hat {\bf x}}} \cdot ({\bf{L}} \times {\bf{S}})} \right){{\bf{c}}_{{{\bf{r}}_2}}}} \right){g^\dag } + {\rm{h}}{\rm{.c}}.,} } \hfill \\ \end{array}$$ where h.c. denotes Hermitian conjugate, and the notation \(\mathop {\sum}\nolimits_{g \in {\cal G}} {g\left( \ldots \right){g^\dag }} \) denotes all the terms generated by transforming the terms in the parenthesis by the symmetry elements of the SG \({\cal G}\). The BS of the full Hamiltonian H = H Δ + H t,λ is shown in Fig. 2e, with parameters (t/Δ, λ/Δ) = (0.01, 0.05). Note that we have only shown the eight bands near the Fermi energy; four fully filled bands arising from the doubly degenerate local orbitals are separated in energy by \({\cal O}(\Delta )\). As our computation dictates, the lattice specification gives rise to energy bands that are necessarily gapless along the high-symmetry lines at filling ν = 4. Interestingly, note that the lattice-enforced gaplessness is of a more subtle flavor: unlike spinless graphene, where the the gaplessness is enforced by the dimensions of the irreps involved, here all the irreps have dimensions ≤4, and therefore the impossibility of finding a BS at ν = 4 is reflected in the connectivity of the energy bands. In closing, we remark that the notion of leSM is not as robust as the other notions we introduced in this work, say feSM or reQBI. Specifically, the (semi-)metallic behavior of the system is protected by the specification of the microscopic degrees of freedom, which is only sensible assuming a certain knowledge about the energetics of the problem. Under stacking of a trivial phase, say when we incorporate into the description a set of fully filled bands corresponding to an AI, the enforced gaplessness may become unstable, as these apparently inert degrees of freedom can also supply the representations needed to open a gap at the targeted filling. This can be readily seen from the example above: if we switch the sign of Δ, the same electron filling will now correspond to the full filling of the fourfold degenerate multiplet on each site, which leads to an AI. Such instability should be contrasted with, say, the notion of reQBIs, which by definition remains nontrivial as long as the extra degrees of freedom we introduce are in the trivial class, i.e., correspond to AIs. All relevant data are available from the authors upon reasonable request. A correction to this article has been published and is linked from the HTML version of this article. Franz, M. & Molenkamp, L. (eds). 
Topological Insulators, Contemporary Concepts of Condensed Matter Science Vol. 6 (Elsevier, 2013). Bernevig, B. A. & Hughes, T. L. Topological Insulators and Topological Superconductors. (Princeton University Press, 2013). Hasan, M. Z. & Kane, C. L. Colloquium: topological insulators. Rev. Mod. Phys. 82, 3045–3067 (2010). Fu, L. Topological crystalline insulators. Phys. Rev. Lett. 106, 106802 (2011). Hsieh, T. H. et al. Topological crystalline insulators in the SnTe material class. Nat. Commun. 3, 982 (2012). Chang, C. Z. et al. Experimental observation of the quantum anomalous hall effect in a magnetic topological insulator. Science 340, 167–170 (2013). Bansil, A., Lin, H. & Das, T. Colloquium: topological band theory. Rev. Mod. Phys. 88, 021004 (2016). Fu, L. & Kane, C. L. Topological insulators with inversion symmetry. Phys. Rev. B 76, 045302 (2007). Bernevig, B. A., Hughes, T. L. & Zhang, S. C. Quantum spin hall effect and topological phase transition in HgTe quantum wells. Science 314, 1757–1761 (2006). König, M. et al. Quantum spin hall insulator state in HgTe quantum wells. Science 318, 766–770 (2007). Hsieh, D. et al. A topological Dirac insulator in a quantum spin hall phase. Nature 452, 970–974 (2008). Zhang, H. et al. Topological insulators in Bi2Se3, Bi2Te3 and Sb2Te3 with a single Dirac cone on the surface. Nat. Phys. 5, 438–442 (2009). Turner, A. M., Zhang, Y., Mong, R. S. K. & Vishwanath, A. Quantized response and topology of magnetic insulators with inversion symmetry. Phys. Rev. B 85, 165120 (2012). Hughes, T. L., Prodan, E. & Bernevig, B. A. Inversion-symmetric topological insulators. Phys. Rev. B 83, 245132 (2011). Fang, C., Gilbert, M. J. & Bernevig, B. A. Bulk topological invariants in noninteracting point group symmetric insulators. Phys. Rev. B 86, 115112 (2012). Turner, A. M., Zhang, Y. & Vishwanath, A. Entanglement and inversion symmetry in topological insulators. Phys. Rev. B 82, 241102 (2010). Taherinejad, M., Garrity, K. F. & Vanderbilt, D. Wannier center sheets in topological insulators. Phys. Rev. B 89, 115102 (2014). Lu, Y. M. & Lee, D. H. Inversion symmetry protected topological insulators and superconductors. Preprint at https://arxiv.org/abs/1403.5558 (2014). Wan, X., Turner, A. M., Vishwanath, A. & Savrasov, S. Y. Topological semimetal and Fermi-arc surface states in the electronic structure of pyrochlore iridates. Phys. Rev. B 83, 205101 (2011). Shiozaki, K., Sato, M. & Gomi, K. Z 2 topology in nonsymmorphic crystalline insulators: Möbius twist in surface states. Phys. Rev. B 91, 155120 (2015). Shiozaki, K., Sato, M. & Gomi, K. Topology of nonsymmorphic crystalline insulators and superconductors. Phys. Rev. B 93, 195413 (2016). Freed, D. S. & Moore, G. W. Twisted equivariant matter. Annales Henri Poincaré 14, 1927–2023 (2013). Gomi, K. Twists on the torus equivariant under the 2-dimensional crystallographic point groups. SIGMA 13, 014 (2017). Shiozaki, K., Sato, M. & Gomi, K. Topological crystalline materials - general formulation, module structure, and wallpaper groups. Preprint at https://arxiv.org/abs/1701.08725 (2017). Bradley, C. J. & Cracknell, A. P. The Mathematical Theory of Symmetry in Solids: Representation Theory for Point Groups and Space Groups (Oxford University Press, 1972). Aroyo, M. I., Kirov, A., Capillas, C., Perez-Mato, J. M. & Wondratschek, H. Bilbao Crystallographic Server II: Representations of crystallographic point groups and space groups. Acta Crystallogr. Sect. A 62, 115–128 (2006). Slager, R.-J., Mesaros, A., Juričić, V. 
& Zaanen, J. The space group classification of topological band-insulators. Nat. Phys. 9, 98–102 (2013). Kruthoff, J., de Boer, J., van Wezel, J., Kane, C. L. & Slager, R. J. Topological classification of crystalline insulators through band structure combinatorics. Preprint at https://arxiv.org/abs/1612.02007 (2016). Soluyanov, A. A. & Vanderbilt, D. Wannier representation of \({{\Bbb Z}_2}\) topological insulators. Phys. Rev. B 83, 035108 (2011). Po, H. C., Watanabe, H., Zaletel, M. P. & Vishwanath, A. Filling-enforced quantum band insulators in spin-orbit coupled crystals. Sci. Adv. 2, e1501782 (2016). Alexandradinata, A. & Bernevig, B. A. Berry-phase description of topological crystalline insulators. Phys. Rev. B 93, 205104 (2016). Read, N. Compactly-supported Wannier functions and algebraic K-theory. Phys. Rev. B 95, 115309 (2017). Hahn, T. (ed). International Tables for Crystallography. 5th edn, Vol. A (Springer, 2006). Watanabe, H., Po, H. C., Vishwanath, A. & Zaletel, M. P. Filling constraints for spin-orbit coupled insulators in symmorphic and nonsymmorphic crystals. Proc. Natl Acad. Sci. USA 112, 14551–14556 (2015). Watanabe, H., Po, H. C., Zaletel, M. P. & Vishwanath, A. Filling-enforced gaplessness in band structures of the 230 space groups. Phys. Rev. Lett. 117, 096404 (2016). Michel, L. & Zak, J. Connectivity of energy bands in crystals. Phys. Rev. B 59, 5998–6001 (1999). Michel, L. & Zak, J. Elementary energy bands in crystalline solids. Eur. Phys. Lett 50, 519–525 (2000). Michel, L. & Zak, J. Elementary energy bands in crystals are connected. Phys. Rep 341, 377–395 (2001). Chen, R., Po, H. C., Neaton, J. B. & Vishwanath, A. Topological materials discovery using electron filling constraints. Preprint at https://arxiv.org/abs/1611.06860 (2016). Bradlyn, B. et al. Topological quantum chemistry. Preprint at https://arxiv.org/abs/1703.02050 (2017). Kopský, V. & Litvin, D. B. (eds). International Tables for Crystallography. 2nd edn, Vol. E (Wiley, 2010). Varjas, D., de Juan, F. & Lu, Y. M., Space group constraints on weak indices in topological insulators. Preprint at https://arxiv.org/abs/1603.04450 (2016). We thank C.-M. Jian, A. Turner and M. Zaletel for insightful discussions and collaborations on earlier works. We also thank C. Fang for useful discussions. A.V. and H.C.P. were supported by NSF DMR-1411343. A.V. acknowledges support from a Simons Investigator Award. H.W. acknowledges support from JSPS KAKENHI Grant Number JP17K17678. Department of Physics, University of California, Berkeley, CA, 94720, USA Hoi Chun Po & Ashvin Vishwanath Department of Physics, Harvard University, Cambridge, MA, 02138, USA Department of Applied Physics, University of Tokyo, Tokyo, 113-8656, Japan Haruki Watanabe Search for Hoi Chun Po in: Search for Ashvin Vishwanath in: Search for Haruki Watanabe in: All authors contributed to all aspects of this work. Correspondence to Ashvin Vishwanath. Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. Change History: A correction to this article has been published and is linked from the HTML version of this article. An erratum to this article is available at https://doi.org/10.1038/s41467-017-00724-z. Peer Review File Po, H.C., Vishwanath, A. & Watanabe, H. Symmetry-based indicators of band topology in the 230 space groups. 
Nat. Commun. 8, 50 (2017). doi:10.1038/s41467-017-00133-2
Probabilistic Methods in Geometry, Topology and Spectral Theory
Yaiza Canzani, University of North Carolina, Chapel Hill, NC, Linan Chen, McGill University, Montreal, Quebec, Canada and Dmitry Jakobson, McGill University, Montreal, Quebec, Canada, Editors
Publication: Contemporary Mathematics
Publication Year: 2019; Volume 739
ISBNs: 978-1-4704-4145-6 (print); 978-1-4704-5599-6 (online)
DOI: https://doi.org/10.1090/conm/739
This volume contains the proceedings of the CRM Workshops on Probabilistic Methods in Spectral Geometry and PDE, held from August 22–26, 2016 and Probabilistic Methods in Topology, held from November 14–18, 2016 at the Centre de Recherches Mathématiques, Université de Montréal, Montréal, Quebec, Canada. Probabilistic methods have played an increasingly important role in many areas of mathematics, from the study of random groups and random simplicial complexes in topology, to the theory of random Schrödinger operators in mathematical physics. The workshop on Probabilistic Methods in Spectral Geometry and PDE brought together some of the leading researchers in quantum chaos, semi-classical theory, ergodic theory and dynamical systems, partial differential equations, probability, random matrix theory, mathematical physics, conformal field theory, and random graph theory. Its emphasis was on the use of ideas and methods from probability in different areas, such as quantum chaos (study of spectra and eigenstates of chaotic systems at high energy); geometry of random metrics and related problems in quantum gravity; solutions of partial differential equations with random initial conditions. The workshop Probabilistic Methods in Topology brought together researchers working on random simplicial complexes and geometry of spaces of triangulations (with connections to manifold learning); topological statistics, and geometric probability; theory of random groups and their properties; random knots; and other problems. This volume covers recent developments in several active research areas at the interface of Probability, Semiclassical Analysis, Mathematical Physics, Theory of Automorphic Forms and Graph Theory. Graduate students and research mathematicians interested in probability theory and its applications to various areas of mathematics.
Table of contents:
Linan Chen and Na Shu – A geometric treatment of log-correlated Gaussian free fields
Suresh Eswarathasan – Tangent nodal sets for random spherical harmonics
Joel Friedman – Formal Zeta function expansions and the frequency of Ramanujan graphs
Dmitry Jakobson, Tomas Langsetmo, Igor Rivin and Lise Turner – Rank and Bollobás-Riordan polynomials: Coefficient measures and zeros
V. Konakov, S. Menozzi and S. Molchanov – The Brownian motion on $\operatorname {Aff}(\mathbb {R})$ and quasi-local theorems
Niko Laaksonen – Quantum limits of Eisenstein series in $\mathbb {H}^{3}$
Fabricio Macià and Gabriel Rivière – Observability and quantum limits for the Schrödinger equation on $\mathbb {S}^d$
Maurizia Rossi – Random nodal lengths and Wiener chaos
Lior Silberman and Akshay Venkatesh – Entropy bounds and quantum unique ergodicity for Hecke eigenfunctions on division algebras
Multiplicity of solutions for impulsive differential equation on the half-line via variational methods Huiwen Chen1,2, Zhimin He1 & Jianli Li3 Boundary Value Problems volume 2016, Article number: 14 (2016) Cite this article In this paper, the existence of solutions for a second-order impulsive differential equation with two parameters on the half-line is investigated. Applying variational methods, we give some new criteria to guarantee that the impulsive problem has at least one classical solution, three classical solutions and infinitely many classical solutions, respectively. Some recent results are extended and significantly improved. Two examples are presented to demonstrate the application of our main results. In this paper, we consider the following boundary value problem with impulses: $$ \begin{aligned} &{-}u^{\prime\prime}(t)+cu(t)=\lambda g \bigl(t,u(t) \bigr), \quad \mbox{a.e. } t\in[0,+\infty), \\ &\Delta u^{\prime}(t_{j})=I_{j} \bigl(u(t_{j}) \bigr), \quad j=1,2,\ldots,p, \\ &u^{\prime}\bigl(0^{+} \bigr)=h \bigl(u(0) \bigr), \quad u^{\prime}(+ \infty)=0, \end{aligned} $$ where c and λ are two positive parameters, \(0=t_{0}< t_{1}<\cdots <t_{p}<+\infty\), \(\Delta u^{\prime}(t_{j})=u^{\prime}(t_{j}^{+})-u^{\prime}(t_{j}^{-}) =\lim_{t\rightarrow t_{j}^{+}}u^{\prime}(t)-\lim_{t\rightarrow t_{j}^{-}}u^{\prime}(t)\), \(u^{\prime}(0^{+})=\lim_{t\rightarrow0^{+}}u^{\prime}(t)\), and \(u^{\prime}(+\infty)=\lim_{t\rightarrow+\infty}u^{\prime}(t)\), \(h,I_{j}\in C(\mathbb{R}, \mathbb{R})\), and \(g\in C([0,+\infty)\times \mathbb{R}, \mathbb{R})\). Boundary value problems (BVPs) on the half-line occur in many applications; see [1–3]. Due to its significance, many researchers have studied BVPs for differential equations on the half-line, we refer the reader to [4–11]. On the other hand, impulsive differential equations have been widely applied in biology, control theory, industrial robotics, medicine, population dynamics and so on; see [12–17]. Due to its significance, a lot of work has been done in the theory of impulsive differential equations, we refer the reader to [18–24]. Some classical approaches and tools have been used to investigate BVPs for impulsive differential equations. These classical approaches and tools include the method of upper and lower solutions [23, 25], fixed point theorems [26] and topological degree theory [27, 28]. Recently, some researchers have used variational methods to investigate the existence and multiplicity of solutions for impulsive BVPs on the finite intervals [29–37]. However, as far as we know, with the exception of [38, 39], the study of solutions of impulsive BVPs on the infinite intervals via variational methods has received considerably less attention. More precisely, in [38, 39], the authors studied the following BVP: $$ \begin{aligned} &{-}u^{\prime\prime}(t)+u(t)=\lambda g \bigl(t,u(t) \bigr), \quad \mbox{a.e. } t\in[0,+\infty), \\ &\Delta u^{\prime}(t_{j})=I_{j} \bigl(u(t_{j}) \bigr), \quad j=1,2,\ldots,p, \\ &u^{\prime}\bigl(0^{+} \bigr)=h \bigl(u(0) \bigr), \quad u^{\prime}(+ \infty)=0, \end{aligned} $$ where λ is a positive parameter, \(h,I_{j}\in C(\mathbb{R}, \mathbb{R})\) and \(g\in C([0,+\infty)\times\mathbb{R}, \mathbb{R})\). They obtained the existence and multiplicity of solutions for (1.2) via variational methods. Obviously, problem (1.1) is a generalization of problem (1.2). In fact, problem (1.2) follows from problem (1.1) by letting \(c=1\). 
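As an aside, solutions of a truncated version of problem (1.1) can be explored numerically by piecewise shooting: integrate the equation between the impulse instants, apply the jump \(\Delta u^{\prime}(t_{j})=I_{j}(u(t_{j}))\) at each \(t_{j}\), impose \(u^{\prime}(0^{+})=h(u(0))\), and adjust \(u(0)\) so that \(u^{\prime}(T)\approx0\) on a large interval \([0,T]\) standing in for \([0,+\infty)\). The sketch below is a rough illustration only: the data c, λ, \(t_{j}\), \(I_{j}\), h, g and T are hypothetical choices, not the functions of the paper's examples, and the existence results in this paper are obtained variationally, not numerically.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# Hypothetical data, for illustration only (not the paper's Example 4.1 or 4.2).
c, lam, T = 1.0, 1.0, 10.0                 # T truncates [0, +infinity)
t_imp = [1.0]                              # impulse instants t_j
I_funcs = [lambda u: 0.1 * u]              # impulse functions I_j
h = lambda u: u                            # boundary function h
g = lambda t, u: np.exp(-t) * (1.0 + np.cbrt(u))

def rhs(t, y):
    # -u'' + c u = lam * g(t, u)  rewritten as  u'' = c u - lam * g(t, u)
    return [y[1], c * y[0] - lam * g(t, y[0])]

def shoot(u0):
    """Return u'(T) for the piecewise trajectory starting from u(0) = u0."""
    y = np.array([u0, h(u0)])              # impose u'(0+) = h(u(0))
    t_left = 0.0
    for t_right, Ij in list(zip(t_imp, I_funcs)) + [(T, None)]:
        sol = solve_ivp(rhs, (t_left, t_right), y, rtol=1e-9, atol=1e-11)
        y = sol.y[:, -1].copy()
        if Ij is not None:
            y[1] += Ij(y[0])               # derivative jump at the impulse instant
        t_left = t_right
    return y[1]

u0 = brentq(shoot, -2.0, 2.0)              # solve u'(T) = 0 for the shooting parameter u(0)
print("approximate initial value u(0) =", u0)
```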
Motivated by the above facts, in this paper, we will improve and generalize some results in [38, 39]. In this paper, we need the following conditions.

(A1): \(h(u)\), \(I_{j}(u)\) are nondecreasing, and \(h(u)u\geq 0\), \(I_{j}(u)u\geq0\) for any \(u\in\mathbb{R}\).

(A2): \(h(u)u\geq0\), \(I_{j}(u)u\geq0\) for any \(u\in\mathbb {R}\) (\(j=1,2,\ldots,p\)) and there exist constants \(L,L_{j}\geq0\) such that
$$\bigl|h(u)-h(v) \bigr|\leq L|u-v|, \quad \bigl|I_{j}(u)-I_{j}(v) \bigr|\leq L_{j}|u-v|\quad \mbox{for any } u,v\in\mathbb{R}, $$
where L, \(L_{j}\) satisfy \(L+\sum_{j=1}^{p}L_{j}<\frac{1}{\beta^{2}}\), β will be given in (2.2).

(A3): There exist \(d,q>0\) such that
$$\frac{d^{2}}{\beta^{2}}< \frac{(1+c)q^{2}}{2}+2\sum_{j=1}^{p} \int _{0}^{qe^{-t_{j}}}I_{j}(s)\,ds+2 \int_{0}^{q}h(s)\,ds $$
and
$$\alpha_{1}:=\frac{2\beta^{2}\int_{0}^{+\infty}\max_{|\xi|\leq d}G(t,\xi)\,dt}{d^{2}}< \alpha_{2}:=\frac{\int_{0}^{+\infty }G(t,qe^{-t})\,dt}{\frac{1+c}{4}q^{2}+\sum_{j=1}^{p}\int _{0}^{qe^{-t_{j}}}I_{j}(s)\,ds+\int_{0}^{q}h(s)\,ds}, $$
where \(G(t,u)=\int_{0}^{u}g(t,s)\,ds\), β will be given in (2.2).

Let \(|\cdot|_{k}\) denote the usual norm on \(L^{k}[0,+\infty)\). Now, we state our main results.

Theorem 1.1 Assume that (A1) (or (A2)), (A3) hold and the following conditions are satisfied.

(A4): There exist a positive constant \(\alpha\in(1, 2)\) and \(a_{1}, a_{2}, a_{3}\in L^{1}[0,+\infty)\) such that
$$\bigl|g(t,u) \bigr|\leq a_{1}(t)|u|+a_{2}(t)|u|^{\alpha-1}+a_{3}(t) $$
for a.e. \(t\in[0,+\infty)\) and all \(u\in\mathbb{R}\).

(A5): There exists a constant m satisfying
$$\frac{(1+c)m^{2}}{2}+2\sum_{j=1}^{p} \int _{0}^{me^{-t_{j}}}I_{j}(s)\,ds+2 \int_{0}^{m}h(s)\,ds\leq\frac{d^{2}}{\beta^{2}} $$
such that
$$|a_{1}|_{1}\leq\frac{\int_{0}^{+\infty}G(t,me^{-t})\,dt}{d^{2}}. $$

Then, for each \(\lambda\in\,]\frac{1}{\alpha_{2}},\frac{1}{\alpha _{1}}[\), problem (1.1) has at least three classical solutions.

Remark 1.1 In (H2) of [38], \(l>1\) is needed; see (2.5) of [38]. Obviously, Theorem 1.1 generalizes Theorem 3.1 in [38]. Furthermore, the function
$$ g(t,u)=\left \{ \textstyle\begin{array}{@{}l@{\quad}l} \sqrt{\beta}e^{-t}, & u\leq\beta, \\ e^{-t} (\frac{u}{100}+600u^{\frac{1}{2}}-599\sqrt{\beta}-\frac {\beta}{100} ), & u>\beta, \end{array}\displaystyle \right . $$
does not satisfy (H2) in [38], while it satisfies (A4), and there are indeed many functions h and \(I_{j}\) not satisfying (H1) in [38], while they satisfy (A2), for example, \(h(u)=-\theta_{1}u\) and \(I_{j}(u)=\theta_{2}u(1+\sin u)\), where \(0<\theta_{1}<\frac{1}{2\beta^{2}}\) and \(0<\theta_{2}<\frac{1}{4p\beta^{2}}\).

Theorem 1.2 Assume that the following conditions are satisfied.

(A6): There exist positive constants \(c_{3}\), \(1<\sigma<2\), and \(c_{1},c_{2},c_{4},c_{5},c_{6}\in L^{1}[0,+\infty)\) such that
$$\bigl|G(t,u) \bigr|\leq c_{1}(t)|u|^{2}+c_{2}(t) \bigl(|u|^{\sigma}+c_{3} \bigr), \qquad \bigl|g(t,u) \bigr|\leq c_{4}(t)|u|+c_{5}(t)|u|^{\sigma-1}+c_{6}(t) $$

(A7): \(h(u)u\geq0\), \(I_{j}(u)u\geq0\) for any \(u\in\mathbb {R}\) (\(j=1,2,\ldots,p\)).

Then, for each \(\lambda\in\,]0,\frac{1}{2\beta^{2}|c_{1}|_{1}}[\), problem (1.1) has at least one classical solution.

Remark 1.2 Let \(c=1\); it is clear that Theorem 1.2 improves Theorem 3.1 in [39]. In fact, there are many functions not satisfying the condition \(\mathrm{(S2)}\) in [39], while they satisfy (A6), for example, the function \(g(t,u)=e^{-t}(u+u^{\frac{1}{2}})\).
Theorem 1.3 Assume that the following conditions are satisfied.

(A8): There exist constants \(c^{\prime}, c_{j}^{\prime}>0\) and \(\delta,\delta_{j}\in(0,1)\) such that
$$\bigl|I_{j}(u) \bigr|\leq c_{j}^{\prime}|u|^{\delta_{j}}, \qquad \bigl|h(u) \bigr|\leq c^{\prime}|u|^{\delta} \quad\textit{for any } u\in \mathbb{R}. $$

(A9): There exist \(k_{1},k_{2}\in L^{1}[0,+\infty)\) and \(\gamma _{1}\in(0,1)\) such that
$$g(t,u)\leq k_{1}(t)|u|^{\gamma_{1}}+k_{2}(t), \quad \textit{for a.e. } t\in[0,+\infty) \textit{ and all } u\in\mathbb{R}. $$

(A10): There exist an open set \(J\subset[0,+\infty)\) and constants \(T,\eta>0\) and \(\gamma_{2}\in(1,2)\) with \(\gamma_{2}<\min\{\min_{1\leq j\leq p}\{\delta_{j}\},\delta\}+1\) such that
$$G(t,u)\geq\eta|u|^{\gamma_{2}}, \quad\forall(t,u)\in J\times\mathbb {R}, |u| \leq T. $$

Furthermore, suppose that \(g(t,u)\), \(I_{j}(u)\), and \(h(u)\) are odd about u. Then problem (1.1) has infinitely many classical solutions for \(\lambda>0\).

Remark 1.3 By (S3) and (3.3) in [39], one has \(d\in L^{\frac{2}{2-\alpha}}([0,+\infty),[0,+\infty))\) (in (S3)). Let \(c=1\); it is clear that Theorem 1.1 generalizes Theorem 3.2 in [39]. Furthermore, there are many functions g, h, and \(I_{j}\) satisfying our Theorem 1.3 and not satisfying Theorem 3.2 in [39]. For example, let \(I_{j}(u)=-u^{\frac{3}{5}}\), \(h(u)=-u^{\frac{3}{5}}\), and \(g(t,u)= (\frac{1}{(1+t^{2})^{2}}-\frac{1}{(1+t)^{2}} )u^{\frac{1}{3}}\).

The remainder of this paper is organized as follows. In Section 2, we present some preliminaries. In Section 3, we give the proof of Theorems 1.1-1.3. Finally, two examples are presented to illustrate the main results.

In order to prove Theorem 1.1, we will need the following critical points theorem.

Theorem 2.1 ([40, 41]) Let X be a reflexive real Banach space, let \(\Phi:X\rightarrow\mathbb{R}\) be a sequentially weakly lower semicontinuous, coercive and continuously Gâteaux differentiable functional whose Gâteaux derivative admits a continuous inverse on \(X^{\ast}\), and let \(\Psi:X\rightarrow \mathbb{R}\) be a sequentially weakly upper semicontinuous and continuously Gâteaux differentiable functional whose Gâteaux derivative is compact. Assume that there exist \(r\in\mathbb{R}\) and \(x_{0},x_{1}\in X\), with \(\Phi(x_{0})< r<\Phi(x_{1})\) and \(\Psi(x_{0})=0\) such that

(i) \(\sup_{\Phi(x)\leq r}\Psi(x)<(r-\Phi(x_{0}))\frac {\Psi(x_{1})}{\Phi(x_{1})-\Phi(x_{0})}\),

(ii) for each \(\lambda\in\Lambda_{r}:=\,]\frac{\Phi (x_{1})-\Phi(x_{0})}{\Psi(x_{1})},\frac{r-\Phi(x_{0})}{\sup_{\Phi(x)\leq r}\Psi(x)}[\), the functional \(\Phi-\lambda\Psi\) is coercive.

Then for each \(\lambda\in\Lambda_{r}\), the functional \(\Phi-\lambda \Psi\) has at least three distinct critical points in X.

In order to prove Theorem 1.3, we will need the following definitions and theorems.

Let X be a Banach space, \(\varphi\in C^{1}(X,\mathbb{R})\) and \(e\in \mathbb{R}\). Let
$$\begin{aligned}& \Sigma:= \bigl\{ J\subset X-\{0\}:J \mbox{ is closed in } X \mbox{ and symmetric with respect to } 0 \bigr\} , \\& K_{e}:= \bigl\{ u\in X:\varphi(u)=e,\varphi^{\prime}(u)=0 \bigr\} , \quad \varphi^{e}:= \bigl\{ u\in X:\varphi(u)\leq e \bigr\} . \end{aligned}$$

For \(A\in\Sigma\), we say the genus of A is n (denoted by \(\gamma(A)=n\)) if there is an odd \(f\in C(A,\mathbb{R}^{n}\setminus\{0\})\) and n is the smallest integer with this property.

Suppose that X is a Banach space and \(\varphi\in C^{1}(X,\mathbb{R})\).
If any sequence \(\{u_{n}\}\subset X\) for which \(\varphi(u_{n})\) is bounded and \(\varphi^{\prime}(u_{n})\rightarrow0\) as \(n \rightarrow\infty\) possesses a convergent subsequence in X, we say that φ satisfies the Palais-Smale condition. Let φ be an even \(C^{1}\) functional on X and satisfy the Palais-Smale condition. For any \(n\in\mathbb{N}\), set $$\Sigma_{n}:= \bigl\{ A\in\Sigma:\gamma(A)\geq n \bigr\} ,\qquad d_{n}:=\inf_{A\in\Sigma_{n}}\sup_{u\in A} \varphi(u). $$ If \(\Sigma_{n}\neq\emptyset\) and \(d_{n}\in\mathbb{R}\), then \(d_{n}\) is a critical value of φ. If there exists \(k_{0}\in\mathbb{N}\) such that $$d_{n}=d_{n+1}=\cdots=d_{n+k_{0}}=e\in\mathbb{R}, $$ and \(e\neq\varphi(0)\), then \(\gamma(K_{e})\geq k_{0}+1\). Let us recall some basic concepts. Set $$E= \bigl\{ u:[0,+\infty)\rightarrow\mathbb{R}\mid u \mbox{ is absolutely continuous}, u^{\prime}\in L^{2}[0,+\infty) \bigr\} . $$ Denote the Sobolev space by $$X:= \biggl\{ u\in E\Bigm| \int_{0}^{+\infty} \bigl( \bigl|u^{\prime }(t) \bigr|^{2}+ \bigl|u(t) \bigr|^{2} \bigr)\,dt< \infty \biggr\} , $$ with the norm $$ \|u\|_{X}= \biggl( \int_{0}^{+\infty} \bigl( \bigl|u^{\prime}(t) \bigr|^{2}+c \bigl|u(t) \bigr|^{2} \bigr)\,dt \biggr)^{\frac{1}{2}}, $$ this norm is equivalent to the usual norm. Hence, X is a reflexive Banach space. Let \(C:=\{u\in C[0,+\infty)\mid\sup_{t\in[0,+\infty )}|u(t)|<+\infty\}\), with the norm \(\|u\|_{C}=\sup_{t\in[0,+\infty )}|u(t)|\). Then C is a Banach space. In addition, X is continuously embedded in C, thus, there exists a constant \(\beta>0\) such that $$ \|u\|_{C}\leq\beta\|u\|_{X} \quad\mbox{for any } u\in X. $$ Suppose that \(u\in C[0,+\infty)\). Moreover, assume that for every \(j=0,1,2,\ldots,p-1\), \(u_{j}=u|_{(t_{j},t_{j+1})}\) satisfy \(u_{j}\in C^{2}(t_{j},t_{j+1})\) and \(u_{p}=u|_{(t_{p},+\infty)}\in C^{2}(t_{p},+\infty)\). We say u is a classical solution of problem (1.1) if it satisfies the equation in problem (1.1) a.e. on \([0,+\infty )\), the limits \(u^{\prime}(0^{+})\), \(u^{\prime}(+\infty)\), \(u^{\prime }(t_{j}^{+})\), \(u^{\prime}(t_{j}^{-})\) (\(j=1,2,\ldots,p\)) exist, and the impulsive conditions and boundary conditions in problem (1.1) hold. For every \(u\in X\), put $$ \Phi(u)=\frac{1}{2}\|u\|_{X}^{2} + \sum_{j=1}^{p} \int_{0}^{u(t_{j})}I_{j}(s)\,ds + \int_{0}^{u(0)}h(s)\,ds $$ $$ \Psi(u)= \int_{0}^{+\infty}G(t,u)\,dt. $$ It is clear that Φ is Gâteaux differentiable at any \(u\in X\) and $$ \bigl\langle \Phi^{\prime}(u),v \bigr\rangle = \int_{0}^{+\infty} \bigl[u^{\prime}(t) v^{\prime}(t)+cu(t)v(t) \bigr]\,dt +\sum_{j=1}^{p}I_{j} \bigl(u(t_{j}) \bigr)v(t_{j})+h \bigl(u(0) \bigr)v(0) $$ for any \(v\in X\). Obviously, \(\Phi^{\prime}:X\rightarrow X^{\ast}\) is continuous. Clearly, \(\Psi:X\rightarrow\mathbb{R}\) is continuously Gâteaux differentiable functional at any \(u\in X\) and $$ \bigl\langle \Psi^{\prime}(u),v \bigr\rangle = \int_{0}^{+\infty}g \bigl(t,u(t) \bigr)v(t)\,dt $$ for any \(v\in X\). If \(u\in X\) is a critical point of \(\Phi-\lambda\Psi\), then u is a classical solution of problem (1.1). The proof is similar to that of [38], and we omit it here. □ Assume that \(\mathrm{(A_{2})}\) are satisfied, then Φ is sequentially weakly lower semicontinuous, coercive and its derivative admits a continuous inverse on \(X^{\ast}\). Let \(\{u_{n}\}\subset X\), \(u_{n}\rightharpoonup u\) in X, we see that \(\{u_{n}\}\) converges uniformly to u on \([0,M]\) for any \(M\in(0,+\infty)\) and \(\liminf_{n\rightarrow\infty}\|u_{n}\| _{X}\geq\|u\|_{X}\). 
Thus $$\begin{aligned} \liminf_{n\rightarrow\infty}\Phi(u_{n}) =&\liminf _{n\rightarrow\infty} \Biggl(\frac{1}{2}\|u_{n} \|_{X}^{2} +\sum_{j=1}^{p} \int_{0}^{u_{n}(t_{j})}I_{j}(s)\,ds+ \int _{0}^{u_{n}(0)}h(s)\,ds \Biggr) \\ \geq& \frac{1}{2}\|u\|_{X}^{2}+\sum _{j=1}^{p} \int _{0}^{u(t_{j})}I_{j}(s)\,ds+ \int_{0}^{u(0)}h(s)\,ds=\Phi(u). \end{aligned}$$ So Φ is sequentially weakly lower semicontinuous. Furthermore, in view of (2.3) and \(\mathrm{(A_{2})}\), we have $$\begin{aligned} \Phi(u) =&\frac{1}{2}\|u\|_{X}^{2}+\sum _{j=1}^{p} \int _{0}^{u(t_{j})}I_{j}(s)\,ds+ \int_{0}^{u(0)}h(s)\,ds \geq\frac{1}{2}\|u\|_{X}^{2}. \end{aligned}$$ Thus, Φ is coercive. Next we will show that \(\Phi^{\prime}\) admits a continuous inverse on \(X^{\ast}\). For each \(u\in X\backslash\{0\}\), by (2.5) and (A2), we have $$\begin{aligned} \bigl\langle \Phi^{\prime}(u),u \bigr\rangle =& \int_{0}^{+\infty} \bigl[ \bigl|u^{\prime}(t) \bigr|^{2}+c \bigl|u(t) \bigr|^{2} \bigr]\,dt+\sum _{j=1}^{p}I_{j} \bigl(u(t_{j}) \bigr)u(t_{j})+h \bigl(u(0) \bigr)u(0)\geq\|u \|_{X}^{2}. \end{aligned}$$ So \(\lim_{\|u\|_{X}\rightarrow+\infty}\langle\Phi^{\prime }(u),u\rangle/\|u\|_{X}=+\infty\), that is, \(\Phi^{\prime}\) is coercive. For any \(u, v\in X\), in view of (A2) and (2.2), we have $$\begin{aligned} \bigl\langle \Phi^{\prime}(u)-\Phi^{\prime}(v),u-v \bigr\rangle =& \|u-v\| _{X}^{2}+\sum_{j=1}^{p} \bigl[I_{j} \bigl(u(t_{j}) \bigr)-I_{j} \bigl(v(t_{j}) \bigr) \bigr] \bigl(u(t_{j})-v(t_{j}) \bigr) \\ &{}+ \bigl(h \bigl(u(0) \bigr)-h \bigl(v(0) \bigr) \bigr) \bigl(u(0)-v(0) \bigr) \\ \geq& \Biggl[1-\beta^{2} \Biggl(L+\sum_{j=1}^{p}L_{j} \Biggr) \Biggr]\|u-v\|_{X}^{2}. \end{aligned}$$ Since \(L+\sum_{j=1}^{p}L_{j}<\frac{1}{\beta^{2}}\), so \(\Phi^{\prime}\) is uniformly monotone. By [44], Theorem 26.A(d), we see that \(\Phi^{\prime}\) admits a continuous inverse on \(X^{\ast}\). □ Assume that \(\mathrm{(A_{1})}\) holds, then Φ is sequentially weakly lower semicontinuous, coercive and its derivative admits a continuous inverse on \(X^{\ast}\). The proof is similar to the proof of Lemma 2.2, and we omit it here. □ Suppose that (A4) is satisfied. If \(u_{n}\rightharpoonup u\) in E, then \(g(t,u_{n})\rightarrow g(t,u)\) in \(L^{1}[0,+\infty)\). Assume that \(u_{n}\rightharpoonup u\). In view of (A4) and (2.2), we have $$\begin{aligned} \bigl|g(t,u_{n})-g(t,u) \bigr| \leq& \bigl(a_{1}(t)|u_{n}|+a_{2}(t)|u_{n}|^{\alpha-1}+a_{3}(t) \bigr)+ \bigl(a_{1}(t)|u|+a_{2}(t)|u|^{\alpha-1}+a_{3}(t) \bigr) \\ \leq& \bigl(\|u_{n}\|_{C}+\|u\|_{C} \bigr)a_{1}(t)+ \bigl(\|u_{n}\|_{C}^{\alpha-1}+\|u \|_{C}^{\alpha -1} \bigr)a_{2}(t)+2a_{3}(t) \\ \leq&\beta \bigl(\|u_{n}\|_{X}+\|u\|_{X} \bigr)a_{1}(t)+ \beta^{\alpha-1} \bigl(\|u_{n}\| _{X}^{\alpha-1}+\|u \|_{X}^{\alpha-1} \bigr)a_{2}(t)+2a_{3}(t). \end{aligned}$$ Applying the Lebesgue dominated convergence theorem, the lemma is proved. □ The functional Ψ is a sequentially weakly upper semicontinuous and its derivative is compact. Let \(\{u_{n}\}\subset X\), \(u_{n}\rightharpoonup u\) in X, we see that \(\{u_{n}\}\) converges uniformly to u on \([0,M]\) for any \(M\in(0,+\infty)\). It follows from the reverse Fatou lemma that $$\begin{aligned} \limsup_{n\rightarrow+\infty}\Psi(u_{n}) =&\limsup _{n\rightarrow +\infty}\lim_{M\rightarrow+\infty} \int_{0}^{M}G(t,u_{n})\,dt \\ \leq&\lim_{M\rightarrow+\infty} \int_{0}^{M}\limsup_{n\rightarrow +\infty}G(t,u_{n}) \,dt \\ =& \int_{0}^{+\infty}G(t,u)\,dt=\Psi(u). \end{aligned}$$ So Ψ is sequentially weakly upper semicontinuous. 
Next we will show that \(\Psi^{\prime}\) is compact. Let \(\{u_{n}\} \subset X\), \(u_{n}\rightharpoonup u\) in X. By Lemma 2.4, we get $$\begin{aligned} \bigl\| \Psi^{\prime}(u_{n})-\Psi^{\prime}(u) \bigr\| _{X^{\ast}} =&\sup_{\|v\| _{X}=1} \bigl\| \bigl(\Psi^{\prime}(u_{n})- \Psi^{\prime}(u) \bigr)v \bigr\| \\ =&\sup_{\|v\|_{X}=1} \biggl\vert \int_{0}^{+\infty } \bigl(g(t,u_{n})-g(t,u) \bigr)v\,dt \biggr\vert \\ \leq&\|v\|_{C}\sup_{\|v\|_{X}=1} \int_{0}^{+\infty } \bigl|g(t,u_{n})-g(t,u) \bigr|\,dt \\ \leq&\beta \int_{0}^{+\infty} \bigl|g(t,u_{n})-g(t,u) \bigr|\,dt \rightarrow0 \end{aligned}$$ as \(k\rightarrow\infty\), for any \(u\in X\). Thus, \(\Psi^{\prime}\) is strongly continuous on X, which implies that \(\Psi^{\prime}\) is a compact operator by [44], Proposition 26.2. □ Proof of Theorems 1.1-1.3 Now we give the proof of Theorem 1.1. By Lemma 2.3, Φ is a sequentially weakly lower semicontinuous, continuously Gâteaux derivative and coercive functional whose Gâteaux derivative admits a continuous inverse on \(X^{\ast}\). By Lemma 2.5, Ψ is a sequentially weakly upper semicontinuous and continuously Gâteaux differentiable functional whose Gâteaux derivative is compact. Let \(r=\frac{d^{2}}{2\beta^{2}}\), \(u_{0}(t)=0\), \(u_{1}(t)=qe^{-t}\) for any \(t\in[0,+\infty)\), one has \(u_{0},u_{1}\in X\), \(\Phi(u_{0})=\Psi(u_{0})=0\), \(\Phi(u_{1})=\frac{(1+c)q^{2}}{4}+\sum_{j=1}^{p}\int _{0}^{qe^{-t_{j}}}I_{j}(s)\,ds+\int_{0}^{q}h(s)\,ds\), \(\Psi(u_{1})=\int_{0}^{+\infty}G(t,qe^{-t})\,dt\). Therefore, we get $$ \bigl(r-\Phi(u_{0}) \bigr)\frac{\Psi(u_{1})}{\Phi(u_{1})-\Phi(u_{0})}= \frac {d^{2}}{\beta^{2}}\frac{\int_{0}^{+\infty}G(t,qe^{-t})\,dt}{ \frac{(1+c)q^{2}}{2}+2\sum_{j=1}^{p}\int _{0}^{qe^{-t_{j}}}I_{j}(s)\,ds+2\int_{0}^{q}h(s)\,ds}, $$ and by (A3), we obtain \(\Phi(u_{0})< r<\Phi(u_{1})\). On the other hand, for any \(u\in X\) such that \(\Phi(u)\leq r\), we have \(\|u\|_{X}\leq(2r)^{\frac{1}{2}}\). Owing to (2.2), one has \(\|u\|_{C}\leq\beta\|u\|_{X}\leq\beta(2r)^{\frac{1}{2}}=d\). Therefore, $$ \sup_{\Phi(x)\leq r}\Psi(x)\leq \int_{0}^{+\infty}\max_{|\xi|\leq d}G(t,\xi) \,dt. $$ By (3.1), (3.2), and (A3), condition (i) in Theorem 2.1 is satisfied. For any \(u\in X\), in view of (A1), (A4), and (2.2), we obtain $$\begin{aligned} \Phi(u)-\lambda\Psi(u) =&\frac{1}{2}\|u\|_{X}^{2}+ \sum_{j=1}^{p} \int_{0}^{u(t_{j})}I_{j}(s)\,ds+ \int_{0}^{u(0)}h(s)\,ds-\lambda \int _{0}^{+\infty}G \bigl(t,u(t) \bigr)\,dt \\ \geq& \biggl(\frac{1}{2}-\lambda\beta^{2}|a_{1}|_{1} \biggr)\|u\| _{X}^{2}-\lambda|a_{2}|_{1} \beta^{\alpha}\|u\|_{X}^{\alpha}-\lambda\beta |a_{3}|_{1}\|u\|_{X}. \end{aligned}$$ In view of \(\mathrm{(A_{5})}\), we get $$ \frac{r-\Phi(u_{0})}{\sup_{\Phi(u)\leq r}\Psi(u)}=\frac{d^{2}}{2\beta ^{2}\sup_{\Phi(x)\leq r}\Psi(x)}\leq \frac{d^{2}}{2\beta^{2}\int_{0}^{+\infty}G(t,me^{-t})\,dt}\leq \frac {1}{2\beta^{2}|a_{1}|_{1}}. $$ Then, for any \(\lambda\in\,]0,\frac{1}{2\beta^{2}|a_{1}|_{1}}[\) (with the conventions \(\frac{1}{0}=+\infty\)), we get \(\lim_{\|u\|_{X}\rightarrow +\infty}(\Phi(u)-\lambda\Psi(u))=+\infty\). So condition \(\mathrm{(ii)}\) in Theorem 2.1 is satisfied. Hence, by Theorem 2.1, for each \(\lambda\in\,]\frac{1}{\alpha_{2}},\frac{1}{\alpha_{1}}[\), the functional \(\Phi-\lambda\Psi\) has at three distinct critical points in X. That is, for each \(\lambda\in\,]\frac{1}{\alpha_{2}},\frac{1}{\alpha_{1}}[\), problem (1.1) has at least three classical solutions. □ First of all, we will show that \(\Phi-\lambda\Psi \) is weakly lower semicontinuous. 
Let \(\{u_{n}\}\subset X\), \(u_{n}\rightharpoonup u\) in X, we see that \(\{u_{n}\}\) converges uniformly to u on \([0,M]\) with \(M\in(0,+\infty)\) an arbitrary constant and \(\liminf_{n\rightarrow\infty}\|u_{n}\|_{X}\geq\|u\|_{X}\). By Lemma 2.4, we have $$\begin{aligned} \liminf_{n\rightarrow\infty} \bigl(\Phi(u_{n})-\lambda\Psi (u_{n}) \bigr) \geq&\liminf_{n\rightarrow\infty} \Biggl( \frac {1}{2}\|u_{n}\|_{X}^{2} +\sum _{j=1}^{p} \int_{0}^{u_{n}(t_{j})}I_{j}(s)\,ds+ \int _{0}^{u_{n}(0)}h(s)\,ds \Biggr) \\ &{}-\lambda\limsup_{n\rightarrow\infty}\Psi(u_{n}) \\ \geq&\frac{1}{2}\|u\|_{X}^{2}+\sum _{j=1}^{p} \int _{0}^{u(t_{j})}I_{j}(s)\,ds+ \int_{0}^{u(0)}h(s)\,ds \\ &{}-\lambda\limsup_{n\rightarrow+\infty}\Psi(u_{n}) \\ \geq&\frac{1}{2}\|u\|_{X}^{2}+\sum _{j=1}^{p} \int _{0}^{u(t_{j})}I_{j}(s)\,ds+ \int_{0}^{u(0)}h(s)\,ds \\ &{}-\lambda \int_{0}^{+\infty}G(t,u)\,dt \\ =&\Phi(u)-\lambda\Psi(u). \end{aligned}$$ Then \(\Phi-\lambda\Psi\) is sequentially weakly lower semicontinuous. Second, we will show that \(\Phi-\lambda\Psi\) is coercive. By (A6), (A7), and (2.2), we obtain $$\begin{aligned} \Phi(u)-\lambda\Psi(u) =&\frac{1}{2}\|u\|_{X}^{2}+ \sum_{j=1}^{p} \int_{0}^{u(t_{j})}I_{j}(s)\,ds+ \int_{0}^{u(0)}h(s)\,ds-\lambda \int _{0}^{+\infty}G \bigl(t,u(t) \bigr)\,dt \\ \geq& \biggl(\frac{1}{2}-\lambda\beta^{2}|c_{1}|_{1} \biggr)\|u\| _{X}^{2}-\lambda|c_{2}|_{1} \bigl(\beta^{\sigma}\|u\|_{X}^{\sigma}+c_{3} \bigr), \end{aligned}$$ for any \(u\in X\). Since \(0<\sigma<2\), for any \(\lambda\in\,]0,\frac{1}{2\beta^{2}|c_{1}|_{1}}[\) (with the conventions \(\frac{1}{0}=+\infty\)), we obtain \(\lim_{\|u\|\rightarrow\infty}(\Phi(u)-\lambda\Psi(u)) = +\infty\), that is, \(\Phi-\lambda\Psi\) is coercive. Hence, \(\Phi-\lambda\Psi\) has a minimum (Theorem 1.1 of [45]), which is a critical point of \(\Phi-\lambda\Psi\). Thus, for each \(\lambda\in\,]0,\frac{1}{2\beta^{2}|c_{1}|_{1}}[\), problem (1.1) has at least one classical solution. □ Let \(\varphi=\Phi-\lambda\Psi\). Obviously, \(\varphi\in C^{1}(X,\mathbb{R})\). In the following, we first show that φ is bounded from below. By (A8), (A9), and (2.2), we have $$\begin{aligned} \varphi(u) =& \frac{1}{2}\|u\|_{X}^{2}+ \sum_{j=1}^{p} \int _{0}^{u(t_{j})}I_{j}(s)\,ds+ \int_{0}^{u(0)}h(s)\,ds - \lambda \int _{0}^{+\infty}G \bigl(t,u(t) \bigr)\,dt \\ \geq& \frac{1}{2}\|u\|_{X}^{2}- \sum _{j=1}^{p}c_{j}^{\prime}\beta^{\delta_{j}+1}\|u\|_{X}^{\delta_{j}+1}-c^{\prime}\beta^{\delta+1}\|u\|_{X}^{\delta+1} \\ &{}- \lambda \int_{0}^{+\infty} \bigl(k_{1}(t)|u|^{\gamma _{1}+1}+k_{2}(t)|u| \bigr)\,dt \\ \geq& \frac{1}{2}\|u\|_{X}^{2}- \sum _{j=1}^{p}c_{j}^{\prime}\beta^{\delta_{j}+1}\|u\|_{X}^{\delta_{j}+1}-c^{\prime}\beta^{\delta+1}\|u\|_{X}^{\delta+1} \\ &{}-\lambda\beta^{\gamma_{1}+1}|k_{1}|_{1}\|u \|_{X}^{\gamma_{1}+1}-\lambda \beta|k_{2}|_{1}\|u \|_{X}. \end{aligned}$$ Since \(\delta_{j},\delta\in(0,1)\) and \(\gamma_{1}\in(0,1)\), (3.4) implies that \(\varphi(u)\rightarrow\infty\) as \(\| u\| _{X}\rightarrow\infty\). Consequently, φ is bounded from below. Next, we prove that φ satisfies the Palais-Smale condition. Suppose that \(\{u_{n}\}\subset X\) such that \(\{\varphi(u_{n})\}\) be a bounded sequence and \(\varphi^{\prime}(u_{n})\rightarrow0\) as \(n\rightarrow \infty\), it follows from (3.4) that \(\{u_{n}\}\) is bounded in X. From the reflexivity of X, we may extract a weakly convergent subsequence, which, for simplicity, we call \(\{u_{n}\}\), \(u_{n}\rightharpoonup u\) in X. Next we will prove that \(u_{n}\rightarrow u\) in X. 
By (2.5) and (2.6), we have $$\begin{aligned} \bigl(\varphi^{\prime}(u_{n})- \varphi^{\prime}(u) \bigr) (u_{n}-u) =&\|u_{n}-u\| _{X}^{2}+ \bigl[h(u_{n}(0)-h \bigl(u(0) \bigr) \bigr] \bigl(u_{n}(0)-u(0) \bigr) \\ &{}+ \sum_{j=1}^{p} \bigl(I_{j} \bigl(u_{n}(t_{j}) \bigr)-I_{j} \bigl(u(t_{j}) \bigr) \bigr) \bigl(u_{n}(t_{j})-u(t_{j}) \bigr) \\ &{}- \lambda \int_{0}^{+\infty} \bigl(g \bigl(t,u_{n}(t) \bigr)-g \bigl(t,u(t) \bigr) \bigr) \bigl(u_{n}(t)-u(t) \bigr)\,dt. \end{aligned}$$ Obviously, $$ \bigl(\varphi^{\prime}(u_{n})- \varphi^{\prime}(u) \bigr) (u_{n}-u)\rightarrow0. $$ We claim that if \(u_{k}\rightharpoonup u\) in E, then \(g(t,u_{k})\rightarrow g(t,u)\) in \(L^{1}[0,+\infty)\). The proof is similar to that of Lemma 2.4, and we omit it here. By (2.2), we obtain $$\begin{aligned} & \int_{0}^{+\infty} \bigl(g \bigl(t,u_{n}(t) \bigr)-g \bigl(t,u(t) \bigr) \bigr) \bigl(u_{n}(t)-u(t) \bigr)\,dt \\ &\quad\leq \bigl(\|u_{n}\|_{C}+\|u\|_{C} \bigr) \int_{0}^{+\infty } \bigl|g \bigl(t,u_{n}(t) \bigr)-g \bigl(t,u(t) \bigr) \bigr|\,dt \\ &\quad\leq\beta \bigl(\|u_{n}\|_{X}+\|u\|_{X} \bigr) \int_{0}^{+\infty } \bigl|g \bigl(t,u_{n}(t) \bigr)-g \bigl(t,u(t) \bigr) \bigr|\,dt\rightarrow0 \end{aligned}$$ as \(n\rightarrow\infty\). Since \(u_{n}\rightharpoonup u\) in X, for any \(M>0\), we get \(u_{n}\rightarrow u\) in \(C[0,M]\). So $$ \begin{aligned} &\sum_{j=1}^{p} \bigl(I_{j} \bigl(u_{n}(t_{j}) \bigr)-I_{j} \bigl(u(t_{j}) \bigr) \bigr) \bigl(u_{n}(t_{j})-u(t_{j}) \bigr) \rightarrow0, \\ & \bigl[h \bigl(u_{n}(0) \bigr)-h \bigl(u(0) \bigr) \bigr] \bigl(u_{n}(0)-u(0) \bigr)\rightarrow0. \end{aligned} $$ In view of (3.5)-(3.8), we obtain \(\|u_{n}-u\| _{X}\rightarrow0\) as \(n\rightarrow\infty\). Then φ satisfies the Palais-Smale condition. It is easy to see that φ is even and \(\varphi(0)=0\). In order to apply Theorem 2.2, we prove now that $$ \mbox{for each } n\in\mathbb{N}, \mbox{ there exists } \varepsilon>0 \mbox{ such that } \gamma \bigl(\varphi^{-\varepsilon } \bigr)\geq n. $$ For each \(n\in\mathbb{N}\), we take n disjoint open sets \(B_{i}\) such that $$\bigcup_{i=1}^{n}B_{i}\subset J. $$ For \(i=1,2,\ldots,n\), let \(u_{i}\in(W_{0}^{1,2}(B_{i})\cap X)\) and \(\| u_{i}\|_{X}=1\), and $$E_{n}= \operatorname{span}\{u_{1},u_{2}, \ldots,u_{n}\}, \qquad J_{n}= \bigl\{ u\in E_{n}:\|u \|_{X}=1 \bigr\} . $$ For any \(u\in E_{n}\), there exist \(\lambda_{i}\in\mathbb{R}\), \(i=1,2,\ldots,n\), such that $$ u(t)=\sum_{i=1}^{n} \lambda_{i}u_{i}(t) \quad\mbox{for } t\in[0,+\infty). $$ $$ |u|_{\gamma_{2}}= \biggl( \int_{0}^{+\infty} \bigl|u(t) \bigr|^{\gamma_{2}}\,dt \biggr)^{\frac{1}{\gamma_{2}}}= \Biggl(\sum_{i=1}^{n}| \lambda _{i}|^{\gamma_{2}} \int_{B_{i}} \bigl|u_{i}(t) \bigr|^{\gamma_{2}}\,dt \Biggr)^{\frac{1}{\gamma_{2}}} $$ $$\begin{aligned} \|u\|_{X}^{2} =& \int_{0}^{+\infty} \bigl( \bigl|u_{i}^{\prime}(t) \bigr|^{2}+c \bigl|u(t) \bigr|^{2} \bigr)\,dt \\ =& \sum_{i=1}^{n}\lambda_{i}^{2} \int_{B_{i}} \bigl( \bigl|u_{i}^{\prime}(t) \bigr|^{2}+c \bigl|u_{i}(t) \bigr|^{2} \bigr)\,dt \\ =& \sum_{i=1}^{n}\lambda_{i}^{2} \int_{0}^{+\infty} \bigl( \bigl|u_{i}^{\prime}(t) \bigr|^{2}+c \bigl|u_{i}(t) \bigr|^{2} \bigr)\,dt \\ =& \sum_{i=1}^{n}\lambda_{i}^{2} \|u_{i}\|_{X}^{2} \\ =& \sum_{i=1}^{n}\lambda_{i}^{2}. \end{aligned}$$ Since all norms of any finite dimensional normed space are equivalent, so there exists \(M_{0}>0\) such that $$ M_{0}\|u\|_{X}\leq|u|_{\gamma_{2}} \quad \mbox{for } u\in E_{n}. 
$$ In view of (A8), (A10), (2.2), (3.11), (3.12), and (3.13), we get $$\begin{aligned} \varphi(\rho u) =& \frac{\rho^{2}}{2}\|u\|_{X}^{2}+ \sum_{j=1}^{p} \int_{0}^{\rho u(t_{j})}I_{j}(s)\,ds+ \int_{0}^{\rho u(0)}h(s)\,ds - \lambda \int_{0}^{+\infty}G \bigl(t,\rho u(t) \bigr)\,dt \\ =& \frac{\rho^{2}}{2}\|u\|_{X}^{2}+\sum _{j=1}^{p} \int _{0}^{\rho u(t_{j})}I_{j}(s)\,ds+ \int_{0}^{\rho u(0)}h(s)\,ds \\ &{}- \lambda\sum _{i=1}^{n} \int_{B_{i}}G \bigl(t,\rho u(t) \bigr)\,dt \\ \leq& \frac{\rho^{2}}{2}\|u\|_{X}^{2}+ \sum _{j=1}^{p}c_{j}^{\prime}(\rho \beta)^{\delta_{j}+1}\|u\|_{X}^{\delta_{j}+1} +c^{\prime}(\rho \beta)^{\delta+1}\|u\|_{X}^{\delta+1} \\ &{}-\lambda\eta\rho^{\gamma_{2}} \sum_{i=1}^{n}| \lambda _{i}|^{\gamma_{2}} \int_{B_{i}} \bigl|u_{i}(t) \bigr|^{\gamma_{2}}\,dt \\ =& \frac{\rho^{2}}{2}\|u\|_{X}^{2}+ \sum _{j=1}^{p}c_{j}^{\prime}(\rho \beta)^{\delta_{j}+1}\|u\|_{X}^{\delta_{j}+1} +c^{\prime}(\rho \beta)^{\delta+1}\|u\|_{X}^{\delta+1}-\lambda\eta \rho^{\gamma_{2}}|u|_{\gamma_{2}}^{\gamma_{2}} \\ \leq& \frac{\rho^{2}}{2}\|u\|_{X}^{2}+ \sum _{j=1}^{p}c_{j}^{\prime}(\rho \beta)^{\delta_{j}+1}\|u\|_{X}^{\delta_{j}+1} +c^{\prime}(\rho \beta)^{\delta+1}\|u\|_{X}^{\delta+1}-\lambda\eta (M_{0}\rho)^{\gamma_{2}}\|u\|_{X}^{\gamma_{2}} \\ =& \frac{\rho^{2}}{2}+ \sum_{j=1}^{p}c_{j}^{\prime}(\rho\beta)^{\delta_{j}+1} +c^{\prime}(\rho\beta)^{\delta+1}-\lambda \eta(M_{0}\rho)^{\gamma_{2}}, \end{aligned}$$ for \(\forall u\in J_{n}\), \(0<\rho\leq\frac{T}{\beta}\). Since \(\gamma_{2}\in(1,2)\) with \(\gamma_{2}<\min\{\min_{1\leq j\leq p}\{\delta_{j}\},\delta\}+1\), there exist \(\varepsilon>0\) and \(\delta>0\) such that $$ \varphi(\delta u)< -\varepsilon\quad\mbox{for } u\in J_{n}. $$ $$J_{n}^{\delta}=\{\delta u:u\in J_{n}\}, \qquad\Omega= \Biggl\{ (\lambda _{1},\lambda_{2},\ldots,\lambda_{n}) \in\mathbb{R}:\sum_{l=1}^{n}\lambda _{l}^{2}< \delta^{2} \Biggr\} , $$ then it follows from (3.13) that $$\varphi(u)< -\varepsilon\quad\mbox{for } u\in J_{n}^{\delta}. $$ Together with the fact that \(\varphi\in C^{1}(X,\mathbb{R})\) and is even, it implies that $$ J_{n}^{\delta}\subset\varphi^{-\varepsilon}\in \Sigma. $$ By virtue of (3.10) and (3.12), there exists an odd homeomorphism mapping \(f\in C(J_{n}^{\delta},\partial\Omega )\). By some properties of the genus (see 3∘ of Propositions 7.5 and 7.7 in [42]), one has $$ \gamma \bigl(\varphi^{-\varepsilon} \bigr)\geq\gamma \bigl(J_{n}^{\delta}\bigr)=n, $$ so the proof of (3.9) follows. Let $$d_{n}:=\inf_{J\in\Sigma_{n}}\sup_{u\in J} \varphi(u). $$ It follows from (3.17) and the fact that φ is bounded from below on X that \(-\infty< d_{n}\leq-\varepsilon<0\), that is, for any \(n\in\mathbb{N}\), \(d_{n}\) is a real negative number. By Theorem 2.2, φ has infinitely many critical points, and so problem (1.1) has infinitely many solutions. □ In order to illustrate our results, we give two examples. Consider the following problem: $$ \begin{aligned} &{-}u^{\prime\prime}(t)+u(t)=\lambda g \bigl(t,u(t) \bigr), \quad \mbox{a.e. } t\in[0,+\infty), \\ &\Delta u^{\prime}(t_{j})=I_{j} \bigl(u(t_{j}) \bigr), \quad j=1, \\ &u^{\prime}\bigl(0^{+} \bigr)=h \bigl(u(0) \bigr), \qquad u^{\prime}(+\infty)=0, \end{aligned} $$ where \(h(u)=u\), \(I_{j}(u)=u\). Compared to problem (1.1), \(c=1\). It is clear that (A1) is satisfied. β is defined in (2.2). When β lies in different intervals, we can choose different g satisfies the conditions. So we only consider one case. 
If \(\beta <\frac{\sqrt{10}}{12}\), we take $$g(t,u)=\left \{ \textstyle\begin{array}{@{}l@{\quad}l} \sqrt{\beta}e^{-t}, & u\leq\beta, \\ e^{-t} (\frac{u}{100}+600u^{\frac{1}{2}}-599\sqrt{\beta}-\frac {\beta}{100} ), & u>\beta. \end{array}\displaystyle \right . $$ $$G(t,u)=\left \{ \textstyle\begin{array}{@{}l@{\quad}l} \sqrt{\beta}e^{-t}u, & u\leq\beta, \\ e^{-t} [\frac{u^{2}}{200}+400u^{\frac{3}{2}}- (599\sqrt {\beta}+\frac{\beta}{100} )u+200\beta^{\frac{3}{2}}+\frac {\beta^{2}}{200} ], & u>\beta. \end{array}\displaystyle \right . $$ Take \(t_{1}=\ln\sqrt{2}\), \(a_{1}(t)=\frac{e^{-t}}{200}\), \(\alpha=\frac{3}{2}\), \(a_{2}(t)=400e^{-t}\), \(a_{3}=\frac{1}{\sqrt{\beta}}\), \(b_{3}(t)=\frac{e^{-t}}{100}\), \(b_{4}(t)=600e^{-t}\), \(b_{5}(t)=\sqrt{\beta}e^{-t}\), and choose constants \(d,q>0\) and m satisfying \(6\beta^{2}q< d<\min\{\beta, \frac{\sqrt{10}}{2}\beta q\}\) and \(\frac{5}{2}m^{2}=\frac{d^{2}}{\beta^{2}}\). A simple calculation shows that (A3), (A4), and (A5) are satisfied. Applying Theorem 1.1, then, for each \(\lambda\in\, ]\frac{1}{\alpha_{2}},\frac{1}{\alpha_{1}}[\), problem (4.1) has at least three classical solutions. $$ \begin{aligned} &{-}u^{\prime\prime}(t)+u(t)=\lambda g \bigl(t,u(t)\bigr), \quad \mbox{a.e. } t\in[0,+\infty), \\ &\Delta u^{\prime}(t_{j})=I_{j}\bigl(u(t_{j}) \bigr), \quad j=1, \\ &u^{\prime}\bigl(0^{+}\bigr)=h\bigl(u(0)\bigr), \qquad u^{\prime}(+ \infty)=0, \end{aligned} $$ where \(\lambda>0\), \(I_{j}(u)=-u^{\frac{3}{5}}\), \(h(u)=-u^{\frac{3}{5}}\), and \(g(t,u)= (\frac{1}{(1+t^{2})^{2}}-\frac{1}{(1+t)^{2}} )u^{\frac{1}{3}}\). Compared to problem (1.1), \(c=1\). By simple calculations, all conditions in Theorem 1.3 are satisfied. Applying Theorem 1.3, then (4.2) has infinitely many classical solutions. Aronson, D, Crandall, MG, Peletier, LA: Stabilization of solutions of a degenerate nonlinear diffusion problem. Nonlinear Anal. 6, 1001-1022 (1982) Kawano, N, Yanagida, E, Yotsutani, S: Structure theorems for positive radial solutions to \(\Delta u+K(|x|)u^{p}=0\) in \(\mathbb{R}^{n}\). Funkc. Ekvacioj 36, 557-579 (1993) Iffland, G: Positive solutions of a problem Emden-Fowler type with a type free boundary. SIAM J. Math. Anal. 18, 283-292 (1987) Agarwal, RP, O'Regan, D: Infinite Interval Problems for Differential, Difference and Integral Equations. Kluwer Academic, Dordrecht (2001) Book MATH Google Scholar Chen, S, Zhang, T: Singular boundary value problems on a half-line. J. Math. Anal. Appl. 195, 449-468 (1995) Gomes, JM, Sanchez, L: A variational approach to some boundary value problems in the half-line. Z. Angew. Math. Phys. 56, 192-209 (2005) Lian, H, Ge, W: Calculus of variations for a boundary value problem of differential system on the half line. Comput. Math. Appl. 58, 58-64 (2009) Liu, X: Solutions of impulsive boundary value problems on the half-line. J. Math. Anal. Appl. 222, 411-430 (1998) Liu, Y: Existence and unboundedness of positive solutions for singular boundary value problems on half-line. Appl. Math. Comput. 144, 543-556 (2003) Yan, B: Multiple unbounded solutions of boundary value problems for second-order differential equations on half-line. Nonlinear Anal. 51, 1031-1044 (2002) Zima, M: On positive solutions of boundary value problems on the half-line. J. Math. Anal. Appl. 259, 127-136 (2001) Gao, S, Chen, L, Nieto, JJ, Torres, A: Analysis of a delayed epidemic model with pulse vaccination and saturation incidence. Vaccine 24, 6037-6045 (2006) He, Z: Impulsive state feedback control of a predator-prey system with group defense. 
Nonlinear Dyn. 79, 2699-2714 (2015) Liu, X, Willms, AR: Impulsive controllability of linear dynamical systems with applications to maneuvers of spacecraft. Math. Probl. Eng. 2, 277-299 (1996) Article MATH Google Scholar Nenov, S: Impulsive controllability and optimization problems in population dynamics. Nonlinear Anal. 36, 881-890 (1999) D'Onofrio, A: On pulse vaccination strategy in the SIR epidemic model with vertical transmission. Appl. Math. Lett. 18, 729-732 (2005) Xiao, Q, Dai, B: Dynamics of an impulsive predator-prey logistic population model with state-dependent. Appl. Math. Comput. 259, 220-230 (2015) Ahmad, B, Nieto, JJ: Existence and approximation of solutions for a class of nonlinear impulsive functional differential equations with anti-periodic boundary conditions. Nonlinear Anal. 69, 3291-3298 (2008) Chen, D, Dai, B: Periodic solution of second order impulsive delay differential systems via variational method. Appl. Math. Lett. 38, 61-66 (2014) Hernandez, E, Henriquez, HR, McKibben, MA: Existence results for abstract impulsive second-order neutral functional differential equations. Nonlinear Anal. 70, 2736-2751 (2009) Lakshmikantham, V, Bainov, DD, Simeonov, PS: Theory of Impulsive Differential Equations. World Scientific, Singapore (1989) Liang, RX: Existence of solutions for impulsive Dirichlet problems with the parameter inequality reverse. Math. Methods Appl. Sci. 36, 1929-1939 (2013) Luo, Z, Nieto, JJ: New results for the periodic boundary value problem for impulsive integro-differential equations. Nonlinear Anal. 70, 2248-2260 (2009) Samoilenko, AM, Perestyuk, NA: Impulsive Differential Equations. World Scientific, Singapore (1995) He, Z, He, X: Monotone iterative technique for impulsive integro-differential equations with periodic boundary conditions. Comput. Math. Appl. 48, 73-84 (2004) Li, J, Shen, J: Positive solutions for three-point boundary value problems for second-order impulsive differential equations on infinite intervals. J. Comput. Appl. Math. 235, 2372-2379 (2011) Liang, RX, Liu, ZM: Nagumo type existence results of Sturm-Liouville BVP for impulsive differential equations. Nonlinear Anal. 74, 6676-6685 (2011) Qian, D, Li, X: Periodic solutions for ordinary differential equations with sublinear impulsive effects. J. Math. Anal. Appl. 303, 288-303 (2005) Chen, H, Li, J: Variational approach to impulsive differential equations with Dirichlet boundary conditions. Bound. Value Probl. 2010, Article ID 325415 (2010) Chen, H, He, Z: New results for perturbed Hamiltonian systems with impulses. Appl. Math. Comput. 218, 9489-9497 (2012) Chen, H, He, Z: Variational approach to some damped Dirichlet problems with impulses. Math. Methods Appl. Sci. 36, 2564-2575 (2013) Chen, P, Tang, X: New existence and multiplicity of solutions for some Dirichlet problems with impulsive effects. Math. Comput. Model. 55, 723-739 (2012) Nieto, JJ, O'Regan, D: Variational approach to impulsive differential equations. Nonlinear Anal., Real World Appl. 10, 680-690 (2009) Sun, J, Chen, H, Yang, L: The existence and multiplicity of solutions for an impulsive differential equation with two parameters via a variational method. Nonlinear Anal. 73, 440-449 (2010) Tian, Y, Ge, W: Applications of variational methods to boundary-value problem for impulsive differential equations. Proc. Edinb. Math. Soc. 51, 509-527 (2008) Zhang, D, Dai, B: Existence of solutions for nonlinear impulsive differential equations with Dirichlet boundary conditions. Math. Comput. Model. 
53, 1154-1161 (2011) Zhang, L, Ge, W: Solvability of a kind of Sturm-Liouville boundary value problems with impulses via variational methods. Acta Appl. Math. 110, 1237-1248 (2010) Chen, H, Sun, J: An application of variational method to second-order impulsive differential equation on the half-line. Appl. Math. Comput. 217, 1863-1869 (2010) Dai, B, Zhang, D: The existence and multiplicity of solutions for second-order impulsive differential equation on the half-line. Results Math. 63, 135-149 (2013) Bonanno, G, Marano, SA: On the structure of the critical set of non-differentiable functions with a weak compactness condition. Appl. Anal. 89, 1-10 (2010) Bonanno, G, Riccobono, G: Multiplicity results for Sturm-Liouville boundary value problems. Appl. Math. Comput. 210, 294-297 (2009) Rabinowitz, PH: Minimax Methods in Critical Point Theory with Applications to Differential Equations. CBMS Reg. Conf. Ser. in Math., vol. 65. Am. Math. Soc., Providence (1986) Salvatore, A: Homoclinic orbits for a special class of nonautonomous Hamiltonian systems. Nonlinear Anal. 30, 4849-4857 (1997) Zeidler, E: Nonlinear Functional Analysis and Its Applications, vol. 2. Springer, Berlin (1990) Mawhin, J, Willem, M: Critical Point Theory and Hamiltonian Systems. Springer, Berlin (1989) The first author was supported by the Doctor Priming Fund Project of University of South China (2014XQD13) and the Tianyuan Fund for Mathematics of NSFC (No. 11526111). The third author was supported by the Scientific Research Fund of Hunan Provincial Education Department (No. 14A098). School of Mathematics and Statistics, Central South University, Changsha, Hunan, 410083, People's Republic of China Huiwen Chen & Zhimin He School of Mathematics and Physics, University of South China, Hengyang, Hunan, 421001, People's Republic of China Huiwen Chen Department of Mathematics, Hunan Normal University, Changsha, Hunan, 410081, People's Republic of China Jianli Li Zhimin He Correspondence to Zhimin He. All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript. Chen, H., He, Z. & Li, J. Multiplicity of solutions for impulsive differential equation on the half-line via variational methods. Bound Value Probl 2016, 14 (2016). https://doi.org/10.1186/s13661-016-0524-8 58E30 impulsive differential equation variational methods critical points half-line Special Collection in the honor of the Life and Research of Weigao Ge
What is the average P/E ratio in the financial sector? By Melissa Horton
The price-to-earnings ratio (P/E) is one of the most common metrics used by investors to analyze whether investment in a company is worthwhile. The P/E ratio formula is:
$$\text{P/E Ratio} = \frac{\text{Price Per Share}}{\text{Earnings Per Share}}$$
Investors interpret the price-to-earnings ratio as the price they are willing to pay for each dollar of a company's earnings. For example, if a company's stock was trading at $50 per share and had earnings of $5 per share, its P/E ratio would be 10.
Average P/E Ratio for the Financial Services Industry
The financial services industry makes up a sizable portion of the U.S. gross domestic product (GDP). For this reason, it has been heavily watched by investors for years as an indicator of the overall health of the economy. Companies that operate within the industry include those focused on brokerage operations, conventional banking, asset management, as well as debt and credit services. Since the financial services industry plays an important role in the overall performance of the markets, investors should be concerned with the average P/E ratio of this sector.
As of August 2018, the average P/E ratio of the financial services industry is 14.26. This metric includes the sector averages of specific financial service categories, including banks with a P/E ratio of 13.51, capital markets with a P/E ratio of 18.83, and insurance with a P/E ratio of 14.64. A smaller sector in the broader financial services category, thrifts and mortgage finance, has the highest P/E at 32.17, while the lowest at this time is the mortgage REITs sector at 7.11.
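As a quick illustration (mine, not part of the original article), the following Python snippet computes the ratio for the $50/$5 example above and lists the August 2018 sector averages quoted in the text:

def pe_ratio(price_per_share, earnings_per_share):
    """Price-to-earnings ratio: the price paid per dollar of earnings."""
    return price_per_share / earnings_per_share

# Worked example from the article: $50 share price, $5 earnings per share.
print(pe_ratio(50, 5))  # 10.0

# Sector averages quoted above (as of August 2018).
sector_pe = {
    "financial services (overall)": 14.26,
    "banks": 13.51,
    "capital markets": 18.83,
    "insurance": 14.64,
    "thrifts and mortgage finance": 32.17,
    "mortgage REITs": 7.11,
}
for sector, pe in sorted(sector_pe.items(), key=lambda kv: kv[1]):
    print(f"{sector}: {pe}")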
CFP: SoTFoM II 'Competing Foundations?', 12-13 January 2015, London. The focus of this conference is on different approaches to the foundations of mathematics. The interaction between set-theoretic and category-theoretic foundations has had significant philosophical impact, and represents a shift in attitudes towards the philosophy of mathematics. This conference will bring together leading scholars in these areas to showcase contemporary philosophical research on different approaches to the foundations of mathematics. To accomplish this, the conference has the following general aims and objectives. First, to bring to a wider philosophical audience the different approaches that one can take to the foundations of mathematics. Second, to elucidate the pressing issues of meaning and truth that turn on these different approaches. And third, to address philosophical questions concerning the need for a foundation of mathematics, and whether or not either of these approaches can provide the necessary foundation. Date and Venue: 12-13 January 2015 - Senate House, University of London. Confirmed Speakers: Sy David Friedman (Kurt Gödel Research Center, Vienna), Victoria Gitman (CUNY), James Ladyman (Bristol), Toby Meadows (Aberdeen). Call for Papers: We welcome submissions from scholars (in particular, young scholars, i.e. early career researchers or post-graduate students) on any area of the foundations of mathematics (broadly construed). Particularly desired are submissions that address the role of and compare different foundational approaches. Applicants should prepare an extended abstract (maximum 1'500 words) for blind review, and send it to sotfom [at] gmail [dot] com, with subject `SOTFOM II Submission'. Submission Deadline: 15 October 2014 Notification of Acceptance: Early November 2014 Scientific Committee: Philip Welch (University of Bristol), Sy-David Friedman (Kurt Gödel Research Center), Ian Rumfitt (University of Birmigham), John Wigglesworth (London School of Economics), Claudio Ternullo (Kurt Gödel Research Center), Neil Barton (Birkbeck College), Chris Scambler (Birkbeck College), Jonathan Payne (Institute of Philosophy), Andrea Sereni (Università Vita-Salute S. Raffaele), Giorgio Venturi (Université de Paris VII, "Denis Diderot" - Scuola Normale Superiore) Organisers: Sy-David Friedman (Kurt Gödel Research Center), John Wigglesworth (London School of Economics), Claudio Ternullo (Kurt Gödel Research Center), Neil Barton (Birkbeck College), Carolin Antos-Kuby (Kurt Gödel Research Center) Conference Website: sotfom [dot] wordpress [dot] com Further Inquiries: please contact Carolin Antos-Kuby (carolin [dot] antos-kuby [at] univie [dot] ac [dot] at) Neil Barton (bartonna [at] gmail [dot] com) Claudio Ternullo (ternulc7 [at] univie [dot] ac [dot] at) John Wigglesworth (jmwigglesworth [at] gmail [dot] com) The conference is generously supported by the Mind Association, the Institute of Philosophy, and Birkbeck College. Published by Richard Pettigrew at 7:27 am No comments: What's the big deal with consistency? (Cross-posted at NewAPPS) It is no news to anyone that the concept of consistency is a hotly debated topic in philosophy of logic and epistemology (as well as elsewhere). 
Indeed, a number of philosophers throughout history have defended the view that consistency, in particular in the form of the principle of non-contradiction (PNC), is the most fundamental principle governing human rationality – so much so that rational debate about PNC itself wouldn't even be possible, as famously stated by David Lewis. It is also the presumed privileged status of consistency that seems to motivate the philosophical obsession with paradoxes across time; to be caught entertaining inconsistent beliefs/concepts is really bad, so blocking the emergence of paradoxes is top-priority. Moreover, in classical as well as other logical systems, inconsistency entails triviality, and that of course amounts to complete disaster. Since the advent of dialetheism, and in particular under the powerful assaults of karateka Graham Priest, PNC has been under pressure. Priest is right to point out that there are very few arguments in favor of the principle of non-contradiction in the history of philosophy, and many of them are in fact rather unconvincing. According to him, this holds in particular of Aristotle's elenctic argument in Metaphysics gamma. (I agree with him that the argument there does not go through, but we disagree on its exact structure. At any rate, it is worth noticing that, unlike David Lewis, Aristotle did think it was possible to debate with the opponent of PNC about PNC itself.) But despite the best efforts of dialetheists, the principle of non-contradiction and consistency are still widely viewed as cornerstones of the very concept of rationality. However, in the spirit of my genealogical approach to philosophical issues, I believe that an important question to be asked is: What's the big deal with consistency in the first place? What does it do for us? Why do we want consistency so badly to start with? When and why did we start thinking that consistency was a good norm to be had for rational discourse? And this of course takes me back to the Greeks, and in particular the Greeks before Aristotle. Variations of PNC can be found stated in a few authors before Aristotle, Plato in particular, but also Gorgias (I owe these passages to Benoît Castelnerac; emphasis mine in both): You have accused me in the indictment we have heard of two most contradictory things, wisdom and madness, things which cannot exist in the same man. When you claim that I am artful and clever and resourceful, you are accusing me of wisdom, while when you claim that I betrayed Greece, you accused me of madness. For it is madness to attempt actions which are impossible, disadvantageous and disgraceful, the results of which would be such as to harm one's friends, benefit one's enemies and render one's own life contemptible and precarious. And yet how can one have confidence in a man who in the course of the same speech to the same audience makes the most contradictory assertions about the same subjects? (Gorgias, Defence of Palamedes) You cannot be believed, Meletus, even, I think, by yourself. The man appears to me, men of Athens, highly insolent and uncontrolled. He seems to have made his deposition out of insolence, violence and youthful zeal. He is like one who composed a riddle and is trying it out: "Will the wise Socrates realize that I am jesting and contradicting myself, or shall I deceive him and others?" I think he contradicts himself in the affidavit, as if he said: "Socrates is guilty of not believing in gods but believing in gods", and surely that is the part of a jester. 
Examine with me, gentlemen, how he appears to contradict himself, and you, Meletus, answer us. (Plato, Apology 26e- 27b) What is particularly important for my purposes here is that these are dialectical contexts of debate; indeed, it seems that originally, PNC was to a great extent a dialectical principle. To lure the opponent into granting contradictory claims, and exposing him/her as such, is the very goal of dialectical disputations; granting contradictory claims would entail the opponent being discredited as a credible interlocutor. In this sense, consistency would be a derived norm for discourse: the ultimate goal of discourse is persuasion; now, to be able to persuade one must be credible; a person who makes inconsistent claims is not credible, and thus not persuasive. As argued in a recent draft paper by my post-doc Matthew Duncombe, this general principle applies also to discursive thinking for Plato, not only for situations of debates with actual opponents. Indeed, Plato's model of discursive thinking (dianoia) is of an internal dialogue with an imaginary opponent, as it were (as to be found in the Theaetetus and the Philebus). Here too, consistency will be related to persuasion: the agent herself will not be persuaded to hold beliefs which turn out to be contradictory, but realizing that they are contradictory may well come about only as a result of the process of discursive thinking (much as in the case of the actual refutations performed by Socrates on his opponents). Now, as also argued by Matt in his paper, the status of consistency and PNC for Aristotle is very different: PNC is grounded ontologically, and then generalizes to doxastic as well as dialogical/discursive cases (although one of the main arguments offered by Aristotle in favor of PNC is essentially dialectical in nature, namely the so-called elenctic argument). But because Aristotle postulates the ontological version of PNC -- a thing a cannot both be F and not be F at the same time, in the same way -- it is difficult to see how a fruitful debate can be had between him and the modern dialethists, who maintain precisely that such a thing is after all possible in reality. Instead, I find Plato's motivation for adopting something like PNC much more plausible, and philosophically interesting in that it provides an answer to the genealogical questions I stated earlier on. What consistency does for us is to serve the ultimate goal of persuasion: an inconsistent discourse is prima facie implausible (or less plausible). And so, the idea that the importance of consistency is subsumed to another, more primitive dialogical norm (the norm of persuasion) somehow deflates the degree of importance typically attributed to consistency in the philosophical literature, as a norm an sich. Besides dialetheists, other contemporary philosophical theories might benefit from the short 'genealogy of consistency' I've just outlined. I am now thinking in particular of work done in formal epistemology by e.g. Branden Fitelson, Kenny Easwaran (e.g. here), among others, contrasting the significance of consistency vs. accuracy. It seems to me that much of what is going on there is also a deflation of the significance of consistency as a norm for rational thought; their conclusion is thus quite similar to the one of the historically-inspired analysis I've presented here, namely: consistency is over-rated. Published by Catarina at 1:19 pm 26 comments: Servus, New York! 
Invitation to the MCMP Workshop "Bridges" (2 and 3 Sept, 2014) MCMP Workshop "Bridges 2014" New York City, 2 and 3 Sept, 2014 www.lmu.de/bridges2014 The Munich Center for Mathematical Philosophy (MCMP) cordially invites you to "Bridges 2014" in the German House, New York City, on 2 and 3 September, 2014. The 2-day trans-continental meeting in mathematical philosophy will focus on inter-theoretical relations thereby connecting form and content of this philosophical exchange. The workshop will be accompanied by an open-to-public evening event with Stephan Hartmann and Branden Fitelson on 2 September, 2014 (6:30 pm). Lucas Champollion (NYU) David Chalmers (NYU) Branden Fitelson (Rutgers) Alvin I. Goldman (Rutgers) Stephan Hartmann (MCMP/LMU) Hannes Leitgeb (MCMP/LMU) Kristina Liefke (MCMP/LMU) Sebastian Lutz (MCMP/LMU) Tim Maudlin (NYU) Thomas Meier (MCMP/LMU) Roland Poellinger (MCMP/LMU) Michael Strevens (NYU) Idea and Motivation We use theories to explain, to predict and to instruct, to talk about our world and order the objects therein. Different theories deliberately emphasize different aspects of an object, purposefully utilize different formal methods, and necessarily confine their attention to a distinct field of interest. The desire to enlarge knowledge by combining two theories presents a research community with the task of building bridges between the structures and theoretical entities on both sides. Especially if no background theory is available as yet, this becomes a question of principle and of philosophical groundwork: If there are any – what are the inter-theoretical relations to look like? Will a unified theory possibly adjudicate between monist and dualist positions? Under what circumstances will partial translations suffice? Can the ontological status of inter-theoretical relations inform us about inter-object relations in the world? Our spectrum of interest includes: reduction and emergence, mechanistic links between causal theories, belief vs. probability, mind and brain, relations between formal and informal accounts in the special sciences, cognition and the outer world. Program and Registration Due to security regulations at the German House registering is required (separately for workshop and evening event). Details on how to register and the full schedule can be found on the official website: Published by Unknown at 10:20 am No comments: Extending a theory with the theory of mereological fusions "Arithmetic with fusions" (draft) is a joint paper with my graduate student Thomas Schindler (MCMP). The abstract is: In this article, the relationship between second-order comprehension and unrestricted mereological fusion (over atoms) is clarified. An extension $\mathsf{PAF}$ of Peano arithmetic with a new binary mereological notion of ``fusion'', and a scheme of unrestricted fusion, is introduced. It is shown that $\mathsf{PAF}$ interprets full second-order arithmetic, $Z_2$. Roughly this shows: First-order arithmetic + mereology = second-order arithmetic. This implies that adding the theory of mereological fusions can be a very powerful, non-conservative, addition to a theory, perhaps casting doubt on the philosophical idea that once you have some objects, then having their fusion also is somehow "redundant". The additional fusions can in some cases behave like additional "infinite objects"; positing their existence allows one to prove more about the original objects. Published by Jeffrey Ketland at 12:20 pm 9 comments: L. A. 
Paul on transformative experience and decision theory II In the first part of this post, I considered the challenge to decision theory from what L. A. Paul calls epistemically transformative experiences. In this post, I'd like to turn to another challenge to standard decision theory that Paul considers. This is the challenge from what she calls personally transformative experiences. Unlike an epistemically transformative experience, a personally transformative experience need not teach you anything new, but it does change you in another way that is relevant to decision theory---it leads you to change your utility function. To see why this is a problem for standard decision theory, consider my presentation of naive, non-causal, non-evidential decision theory in the previous post. Published by Richard Pettigrew at 8:26 am 3 comments: Is the human referee becoming expendable in mathematics? Mathematics has been much in the news recently, especially with the announcement of the latest four Fields medalists (I am particularly pleased to see the first woman, and the first Latin-American, receiving the highest recognition in mathematics). But there was another remarkable recent event in the world of mathematics: Thomas Hales has announced the completion of the formalization of his proof of the Kepler conjecture. The conjecture: "what is the best way to stack a collection of spherical objects, such as a display of oranges for sale? In 1611 Johannes Kepler suggested that a pyramid arrangement was the most efficient, but couldn't prove it." (New Scientist) The official announcement goes as follows: We are pleased to announce the completion of the Flyspeck project, which has constructed a formal proof of the Kepler conjecture. The Kepler conjecture asserts that no packing of congruent balls in Euclidean 3-space has density greater than the face-centered cubic packing. It is the oldest problem in discrete geometry. The proof of the Kepler conjecture was first obtained by Ferguson and Hales in 1998. The proof relies on about 300 pages of text and on a large number of computer calculations. The formalization project covers both the text portion of the proof and the computer calculations. The blueprint for the project appears in the book "Dense Sphere Packings," published by Cambridge University Press. The formal proof takes the same general approach as the original proof, with modifications in the geometric partition of space that have been suggested by Marchal. So far, nothing very new, philosophically speaking. Computer-assisted proofs (both at the level of formulation and at the level of verification) have attracted the interest of a number of philosophers in recent times (here's a recent paper by John Symons and Jack Horner, and here is an older paper by Mark McEvoy, which I commented on at a conference back in 2005; there are many other papers on this topic by philosophers). More generally, the question of the extent to which mathematical reasoning can be purely 'mechanical' remains a lively topic of philosophical discussion (here's a 1994 paper by Wilfried Sieg on this topic that I like a lot). Moreover, this particular proof of the Kepler conjecture does not add anything substantially new (philosophically) to the practice of computer-verifying proofs (while being quite a feat mathematically!). 
It is rather something Hales said to the New Scientist that caught my attention (against the background of the 4 years and 12 referees it took to human-check the proof for errors): "This technology cuts the mathematical referees out of the verification process," says Hales. "Their opinion about the correctness of the proof no longer matters." Now, I'm with Hales that 'software intensive mathematics' (to borrow Symons and Horner's terminology) is of great help to offload some of the more tedious parts of mathematical practice such as proof-checking. But there are a number of reasons that suggest to me that Hales' 'optimism' is a bit excessive, in particular with respect to the allegedly expendable role of the human referee (broadly construed) in mathematical practice, even if only for the verification process. Indeed, and as I've been arguing in a number of posts, proof-checking is a major aspect of mathematical practice, basically corresponding to the role I attribute to the fictitious character 'opponent' in my dialogical conception of proof (see here). The main point is the issue of epistemic trust and objectivity: to be valid, a proof has to be 'replicable' by anyone with the relevant level of competence. This is why probabilistic proofs are still thought to be philosophically suspicious (as argued for example by Kenny Easwaran in terms of the notion of 'transferability'). And so, automated proof-checking will most likely never replace completely human proof-checking, if nothing else because the automated proof-checkers themselves must be kept 'in check' (lame pun, I know). (Though I am happy to grant that the role of opponent can be at least partially played by computers, and that our degree of certainty in the correctness of Hales' proof has been increased by its computer-verification.) Moreover, mathematics remains a human activity, and mathematical proofs essentially involve epistemic and pragmatic notions such as explanation and persuasion, which cannot be taken over by purely automated proof-checking. (Which does not mean that the burden of verification cannot be at least partially transferred to automata!) In effect, a good proof is not only one that shows that the conclusion is true, but also why the conclusion is true, and this explanatory component is not obviously captured by automata. In other words, a proof may be deemed correct by computer-checking, and yet fail to be persuasive in the sense of having true explanatory value. (Recall that Smale's proof of the possibility of sphere eversion was viewed with a certain amount of suspicion until models of actual processes of eversion were discovered.) Finally, turning an 'ordinary' mathematical proof* into something that can be computer-checked is itself a highly theoretical, non-trivial, and essentially informal endeavor that must itself receive a 'seal of approval' from the mathematical community. While mathematicians hardly ever disagree on whether a given proof is or is not valid once it is properly scrutinized, there can be (and has been, as once vividly described to me by Jesse Alama) substantive disagreement on whether a given formalized version of a proof is indeed an adequate formalization of that particular proof. (This is also related to thorny issues in the metaphysics of proofs, e.g. criteria of individuation for proofs, which I will leave aside for now.) 
A particular informal proof can only be said to have been computer-verified if the formal counterpart in question really is (deemed to be) sufficiently similar to the original proof. (Again, the formalized proof may have the same conclusion as the original informal proof, in which case we may agree that the theorem they both purport to prove is true, but this is no guarantee that the original informal proof itself is valid. There are many invalid proofs of true statements.) Now, evaluating whether a particular informal proof is accurately rendered in a given formalized form is not a task that can be delegated to a computer (precisely because one of the relata of the comparison is itself an informal construct), and for this task the human referee remains indispensable. And so, I conclude that, pace Hales, the human mathematical referee is not going to be completely cut out of the verification process any time soon. Nevertheless, it is a welcome (though not entirely new) development that computers can increasingly share the burden of some of the more tedious aspects of mathematical practice: it's a matter of teamwork rather than the total replacement of a particular approach to proof-verification by another (which may well be what Hales meant in the first place). * In some research programs, mathematical proofs are written directly in computer-verifiable form, such as in the newly created research program of homotopy type-theory. Published by Catarina at 9:12 am 1 comment:
Bohemian gravity
Tim Blais, a McGill University physics student, made this really great a cappella version of "Bohemian Rhapsody", called "Bohemian Gravity", with physics lyrics explaining superstring theory, like "Manifolds must be Kahler!" (lyrics here). Another article on this. Published by Jeffrey Ketland at 6:06 pm No comments:
L. A. Paul on transformative experience and decision theory I
I have never eaten Vegemite---should I try it? I currently have no children---should I apply to adopt a child? In each case, one might imagine, whichever choice I make, I can make it rationally by appealing to the principles of decision theory. Not according to L. A. Paul. In her rich and fascinating new book, Transformative Experience, Paul issues two challenges to orthodox decision theory---they are based upon examples such as these. (In this post and the next, I'd like to try out some ideas concerning Paul's challenges to orthodox decision theory. The idea is that some of them will make it into my contribution to the Philosophy and Phenomenological Research book symposium on Transformative Experience.) Published by Richard Pettigrew at 1:47 pm 11 comments:
Worlds Without Domain
An article "Worlds Without Domain" arguing against the idea that possible worlds have domains. The abstract is: "A modal analogue to the "hole argument" in the foundations of spacetime is given against the conception of possible worlds having their own special domains". Published by Jeffrey Ketland at 11:44 am No comments:
What is the significance of non-commutative geometry in mathematics?
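Returning to the Flyspeck discussion above, and since "computer-verifiable form" can sound abstract: here is a deliberately trivial, purely illustrative example of a machine-checked proof, written in the Lean proof assistant (Flyspeck itself was carried out in HOL Light, with parts in Isabelle, and its statements are vastly more involved). The checker either accepts the proof term or rejects the file; no human referee's opinion enters at this stage, which is the sense of "verification" at issue in the post.

-- Commutativity of addition on the natural numbers, discharged by a lemma
-- from Lean's own library; the kernel re-checks the whole chain of definitions.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b

-- A concrete instance settled by pure computation.
example : 2 + 3 = 3 + 2 := rfl

Whether a formalized statement like this adequately renders the informal claim one actually cares about is, as argued above, still a human judgement.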
In Space, Nobody Picks Up Your Trash: NASA Recycling in Space Award Winners
Better solutions for trash collection and recycling are important not just here on Earth, but also in space. And to get fresh, innovative ideas from a wide creative pool, NASA, in partnership with NineSigma, has turned to idea crowdsourcing through a competition challenge. The challenge, Recycling in Space: Waste Handling in a Microgravity Environment, which ended in January 2019, accepted proposals from the public "for technologies and systems that will, in a microgravity environment, store & transfer logistical mission waste to a thermal processing unit for decomposition. The technology will improve the environmental footprint of future human spacecraft." Paul Hintze, a chemist with NASA's Kennedy Space Center Exploration Research and Technology Programs and a judge for the competition, said, "The challenge produced ideas that were innovative and that we had not yet considered. I look forward to further investigating these ideas and hope they will contribute to our human spaceflight missions." The challenge was, ahem, launched on October 18, 2018. And on April 1, 2019, NASA and NineSigma announced the winners. The top prize isn't, alas, a round-trip ticket to the moon, or even a space suit (unlike in Robert Heinlein's classic science fiction novel, Have Space Suit, Will Travel). But, in addition to the cash prizes, winners and other contestants have the satisfaction of knowing they are contributing to more successful space missions.
Space Litter: You Can't Just Throw Stuff Down the Trash Chute
As Mary Roach makes clear in her informative and entertaining book, Packing For Mars: The Curious Science of Life in the Void, dealing with life's ongoing chores and concerns requires new solutions in the absence of things we take for granted, like gravity and air. For example, disposing of trash — or recycling or repurposing it. In space proper, trash collection — whether of dead satellites, discarded pieces, or other detritus from human endeavors — or simply gathering up cosmic clutter, isn't just good housekeeping. It's a safety concern. After all, nobody likes a meteor falling on their city — or even nearby. (For a science fiction look at how these events play out, try Mary Robinette Kowal's two-booker, The Calculating Stars and The Fated Sky, Neal Stephenson's Seveneves [or my quasi-poetic summary on File770.com — scroll/search down to (18)], or Larry Niven and Jerry Pournelle's Lucifer's Hammer.) In space, even a grain of grit colliding with a satellite or vehicle can be a problem — if it's traveling fast enough. And, according to NASA, more than 500,000 pieces of "space junk" orbit Earth "at speeds up to 17,500 mph." Elon Musk sent a red Tesla Roadster off into deep space. We are filling our route off the planet with junk and even a relatively small piece of this space debris could damage spacecraft or one of many satellites in geosynchronous orbit. For a different perspective, watch a few episodes of Quark, the 1977 science fiction sitcom set on a United Galaxy Sanitation Patrol Cruiser (aka, a space trash collection truck). And if you're a gamer, Mark Crowe's 1989 Space Quest III: The Pirates of Pestulon apparently includes Garbage Freighters.
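To put a number on the "grain of grit" worry, here is a rough back-of-the-envelope estimate (mine, not NASA's; the 1-gram mass is an assumption, and only the 17,500 mph figure comes from the article):

# Kinetic energy of a small piece of orbital debris.
mass_kg = 0.001                                # an assumed 1-gram "grain of grit"
speed_mph = 17_500                             # debris speed quoted by NASA above
speed_m_per_s = speed_mph * 1609.344 / 3600    # ~7,823 m/s

kinetic_energy_kj = 0.5 * mass_kg * speed_m_per_s ** 2 / 1000
print(f"{speed_m_per_s:.0f} m/s, about {kinetic_energy_kj:.0f} kJ")
# Roughly 30 kJ (about the energy released by 7 g of TNT), assuming the full
# orbital speed is the closing speed; this is why even tiny debris matters.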
Trash in space is a concern inside crewed vehicles and space stations, but the lack of gravity — and in some cases, atmosphere — makes proper trash-handling different and harder than down here on planet Earth. Depending on the mission, you may not want to simply bag that trash up until you get to wherever you're going to dispose of it, either. In a closed system, "garbage" represents a valuable source of materials for reuse, repurposing, or recycling — a point that James S.A. Corey's science fiction book series, The Expanse, makes throughout. (It's also a great TV series — originally on SyFy.com, now via Amazon Prime — check it out!)
How Big Is the Space Litter Problem?
According to NineSigma, "For a mission lasting 1 year, a team of four astronauts would generate approximately 2,500 kilograms of waste." Astronaut logistical waste can, says NineSigma, contain a variety of products, including:
Fabrics (from discarded clothing)
Human waste
Hygienic wipes
Low- and high-density plastics
Paper
The big challenge in trash collection, recycling, and repurposing in microgravity environments is moving waste along the processing path — without relying on gravity to provide "down" force. The aim of this challenge, says NineSigma, was "to identify receptacle and feeder mechanisms suitable for a microgravity environment that can deliver mission waste for decomposition." And, according to NASA, "The purpose of the challenge is to engage the public to develop methods of processing and feeding trash into a high-temperature reactor. This will help NASA's Advanced Exploration Systems and space technology programs develop trash-to-gas technology that can recycle waste into useful gases."
And the Winners Are …
According to NASA and NineSigma's April 1 announcement, "The NASA Tournament Lab (NTL) crowdsourcing challenge received submissions from participants around the world. A panel of judges evaluated the solutions and selected one first place and two second place winners." The Recycling in Space challenge winners:
First place ($10,000): Waste Pre-Processing Unit — Aurelian Zabciu, Romania
Second place ($2,500): Microgravity Waste Management System — Derek McFall, United States
Second place ($2,500): Trash-Gun (T-Gun) — Ayman Ragab Ahmed Hamdallah, Egypt
According to NASA, "The three winners brought a variety of approaches to the table for the challenge. Zabciu's submission proposed incorporating space savings features and camera-actuated ejectors to move trash through the system, before bringing it to another mechanism to complete the feed into the reactor. McFall's submission indicated it would use a hopper for solid waste and managed air streams for liquids and gaseous waste. Hamdallah proposed using air jets to compress the trash and cycle it through the system instead of gravity." Mary Robinette Kowal, a three-time Hugo Award winning science fiction author whose recent novels, The Calculating Stars and The Fated Sky, include a lot of space mission planning and action, says, "As space missions get longer in duration — and farther from Earth — recycling and repurposing will be even more important." I asked Kowal in her capacity as both a science fiction writer and reader if she had any observations or suggestions for the creators inventing new space tech — and the people who will be using it on-site. "There is a difference between policy and the way people actually live," says Kowal. "For long-duration missions, you have to look at the latter."
"One way to get some real-world insights is by looking at communities like Iceland and other island nations where people have a fixed set of resources to draw upon." "The big thing for me," says Kowal, "is that whatever procedures and policies that the planners come up with, it will be something that works fine in the long term … but people often forget the human part of these equations, how policies flex based on actual lived conditions. For example, you'll have the space equivalent of the 'junk drawer' — which the International Space Station already has one of." "If you can break things down into their constituent parts and materials, and have 3D printers," she suggests, "that gives you more flexibility. And you will also have people repurposing existing materials for completely new, unanticipated uses, whether it's to fix things, build new devices, or make art."
Image by Free-Photos from Pixabay
You, Too, Can Be Part of Space Challenges & Citizen Science
These recycling technologies could prove useful not just on future space missions, but on planetary surfaces, including here on Earth. For tech entrepreneurs and students of all ages, challenges like these offer opportunities to create, compete, and gain visibility — and to engage in space programs and other citizen science activities. See the NASA Tournament Lab to participate in open NTL challenges. And visit NASA Solve! for information challenges, citizen science activities, and prize competitions that help develop NASA-mission-related problems. Odds are you won't win a trip to the moon or even a space suit … but ya never know, it could happen.
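As a small aside (my arithmetic, not the article's), the NineSigma waste figure quoted earlier is easier to picture on a per-person basis:

# NineSigma figure quoted above: ~2,500 kg of waste for 4 astronauts over 1 year.
total_waste_kg = 2500
crew = 4
days = 365

per_person_per_day_kg = total_waste_kg / crew / days
print(f"about {per_person_per_day_kg:.2f} kg per astronaut per day")  # ~1.71 kg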
Medical & Biological Engineering & Computing July 2017 , Volume 55, Issue 7, pp 1109–1122 | Cite as Multi-parametric study of temperature and thermal damage of tumor exposed to high-frequency nanosecond-pulsed electric fields based on finite element simulation Yan Mi Shaoqin Rui Chengxiang Li Chenguo Yao Jin Xu Changhao Bian Xuefeng Tang Special Issue - Original Article First Online: 16 November 2016 High-frequency nanosecond-pulsed electric fields were recently introduced for tumor or abnormal tissue ablation to solve some problems of conventional electroporation. However, it is necessary to study the thermal effects of high-field-intensity nanosecond pulses inside tissues. The multi-parametric analysis performed here is based on a finite element model of liver tissue with a tumor that has been punctured by a pair of needle electrodes. The pulse voltage used in this study ranges from 1 to 4 kV, the pulse width ranges from 50 to 500 ns, and the repetition frequency is between 100 kHz and 1 MHz. The total pulse length is 100 μs, and the pulse burst repetition frequency is 1 Hz. Blood flow and metabolic heat generation have also been considered. Results indicate that the maximum instantaneous temperature at 100 µs can reach 49 °C, with a maximum instantaneous temperature at 1 s of 40 °C, and will not cause thermal damage during single pulse bursts. By parameter fitting, we can obtain maximum instantaneous temperature at 100 µs and 1 s for any parameter values. However, higher temperatures will be achieved and may cause thermal damage when multiple pulse bursts are applied. These results provide theoretical basis of pulse parameter selection for future experimental researches. Multi-parameters Temperature Thermal damage Tumor High-frequency nanosecond-pulsed electric fields Finite element method Electroporation, as an intrinsically nonthermal phenomenon, is reversible when electric fields are used up to a specific level, but becomes irreversible at higher field levels. An irreversible electroporation (IRE) treatment includes electrode placement within the target region and delivery of a series of electric pulses of microsecond-scale single pulse duration with a low frequency. These microsecond-long high-voltage pulses can not only cause IRE on a cell membrane and then changes in the cell function, but can also induce biomedical effects such as apoptotic effects, anti-angiogenic effects and immune responses [35, 38]. Ultimately, IRE can achieve the goal of tumor ablation. IRE has recently also been considered as a nonthermal treatment modality to destroy tumors [14, 47, 55]. The most significant advantage of IRE is that it only affects the cell membrane while keeping the extracellular matrix (ECM) around the targeted cells intact by reducing Joule heating [47]. However, statistics from clinical trials show that muscle contraction appears during the pulsed electric field process and the patients suffer from muscle contraction discomfort during the treatment [7, 24]. When the width of the applied electric field pulse is reduced to the ns level, the electric field strength increases to the MV/m level, and the biological effects induced by nanosecond-pulsed electric fields (nsPEFs) are different from those of the aforementioned IRE. While no apparent irreversible electroporation phenomenon occurs on the cell membrane, a series of functional changes occur inside the cell, and then, apoptosis is induced [50, 51]. 
However, because of the high intensity of the pulsed electric fields that are applied to the electrodes, the treatment may cause surface discharges on the targeted tissue and skin burns. To combine the advantages of both microsecond pulsed electric field (μsPEF) and nsPEF treatments, we introduced a high-frequency nsPEF protocol for treatment of tumors. Studies have shown that when a high-repetition-rate nsPEF is applied, the number of pulses makes a greater contribution to the killing effects than the field strength and the pulse width. In fact, an increase in the electric field pulse repetition frequency can inhibit patient muscle contraction [1, 10, 37, 39, 46, 57]. Therefore, we forecast that a high-repetition-frequency nsPEF increased to the 100 kHz level will effectively restrain patient muscle contraction. In addition, when the field strength is reduced to less than the breakdown field strength of air (10 kV/cm level), it will also effectively solve the problem of skin burns caused by the electrode discharge during nsPEF treatment. Consequently, the protocols that are proposed in this study can solve the problems of μsPEF and nsPEF treatments in cancer therapy, but also, through a synergistic effect, simultaneously perform the tasks and enhance the effects of inducing tumor cell necrosis and apoptosis. In addition, the high-frequency pulses can produce a more uniform electric field distribution to prevent tumor recurrence [2, 5]. Thus, this protocol is expected to provide a better outcome from cancer treatments. Finally, it should be noted that high intensity pulsed electric fields will cause Joule heating, which should be avoided in electroporation applications, because temperature control is important even in IRE treatments. Lackovic et al. simulated the temperature distribution of a liver with needle electrodes during and after eight 100 μs, 1500 V/cm pulses and eight 50 ms, 250 V/cm pulses, with a repetition frequency of 1 Hz. The simulation results show that the Joule heating depends on the conductance of the tissue and the pulse parameters [32]. They also found that when the repetition rate increased from 1 Hz to 1 kHz, it could cause the tissue temperature to increase, but still by less than 3 °C [34]. Davalos et al. [15] elaborated on the determination of the temperature distribution and how to assess the thermal effects. They also investigated the temperature distribution and the thermal damage in the brain based on numerical models. The temperature was measured at the same time [21]. Thus, it is essential to pay greater attention to the thermal effects when tissue is exposed to high-frequency nsPEF treatment with a field strength that is greater than 1 kV/cm but less than 10 kV/cm. However, recent studies with regard to the temperature increase aspects of thermal damage are mainly concerned with the thermal effect under a given pulse parameter, or are simply research on the influence of a single parameter on tissue heating [12, 15, 21, 31, 32, 34, 41]. Therefore, in this study, we provide a multi-parameter analysis method to determine the relationship between the thermal effects and the pulse parameters (e.g., pulse width, pulse amplitude, repetition rate) and then to predict the temperature increase and the thermal damage. The results of this work can provide theoretical guidance for parameter selection in future tumor treatments using high-frequency nsPEFs. 
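Before the model details in Sect. 2, it may help to make the pulse protocol concrete. The short Python sketch below is not part of the paper's COMSOL workflow; it simply enumerates the parameter grid given in the abstract (1 to 4 kV, 50 to 500 ns, 100 kHz to 1 MHz, 100 µs total pulse time per burst) and reports how many pulses fit into one burst and the resulting on-time, which is what scales the Joule heating per burst at a fixed voltage:

from itertools import product

# Parameter grid from the abstract; voltage only scales the deposited energy
# (for a fixed conductivity it enters through V^2), so it is not tabulated here.
burst_length_us = 100.0
widths_ns = [50, 100, 250, 500]
freqs_kHz = [100, 250, 500, 1000]

for width_ns, freq_kHz in product(widths_ns, freqs_kHz):
    period_us = 1e3 / freq_kHz                    # pulse period in microseconds
    n_pulses = int(round(burst_length_us / period_us))
    on_time_us = n_pulses * width_ns * 1e-3       # total high-level time per burst
    print(f"{width_ns:>3} ns @ {freq_kHz:>4} kHz: "
          f"{n_pulses:>3} pulses per burst, on-time {on_time_us:5.1f} us "
          f"({on_time_us / burst_length_us:.0%} duty)")

On this accounting, the 50 ns, 1 MHz setting mentioned in Sect. 2.2 (5 µs of on-time per burst) deposits an order of magnitude less energy per burst than the 500 ns, 1 MHz setting (50 µs), at the same voltage.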
2.1 Finite element model
This study was based on a finite element model built in the finite element analysis software COMSOL Multiphysics, which was used to compute the coupled electrothermal problem. The tumor model adopted a spherical geometry and the normal tissue around the tumor was represented by a cylinder, with its size as shown in Fig. 1. The liver diameter is 10 cm, and its height is 10 cm. A pair of needle electrodes was used for hepatic tumor ablation. To maximize tumor ablation while reducing the damage to normal tissue around the tumor, the electrode needles were inserted directly into the tumor. The distance between the electrodes and the depth of penetration were all based on our previous optimization of a simulation for a tumor with a diameter of 1 cm [56]. The needle diameter is 1 mm, while the distance between electrodes is 5.4 mm and the insertion depth of the electrodes is 6 mm. The purpose of the optimization was to make the best use of the electric fields and maximize the ratio of the tumor volume ablated by the electric fields to the normal tissue ablation volume. As shown in Fig. 1b, a free tetrahedral mesh was used. The smallest element size for the electrodes and the tumor is 0.4 mm, while the smallest element size for the liver is 1.8 mm. The number of degrees of freedom is 405,643.
Fig. 1 Geometrical model of tissue and electrodes. Liver diameter: 10 cm; liver height: 10 cm; tumor diameter: 1 cm; needle diameter: 1 mm; interelectrode distance: 5.4 mm; electrode insertion depth: 6 mm. a Geometrical model; b meshing model
2.2 Parameter model
We introduced a type of high-frequency nanosecond pulse, illustrated in Fig. 2. The electric field strength ranges from 1 to 10 kV/cm, the pulse width ranges from 50 to 500 ns, and the repetition rate ranges from 100 kHz to 1 MHz. One of the characteristics of this pulse protocol is that the total pulse time is 100 µs no matter what the pulse width or repetition rate is. For example, when the pulse width is 50 ns and the pulse repetition rate is 1 MHz, the total high-level duration is 5 µs. For pulse bursts with a 1 Hz repetition frequency, we run simulations for 1 s, which corresponds to one pulse burst.
Fig. 2 Schematic representation of pulse trains used in the simulations; pulse voltage: 1, 2, 3, 4 kV; pulse width: 50, 100, 250, 500 ns; pulse frequency: 100, 250, 500 kHz, 1 MHz; and repetition frequency of pulse bursts: 1 Hz
The electrical properties and thermal properties of the tissue (rat liver) and the electrodes (stainless steel) were taken from the literature [3, 40, 52] and are listed in Table 1. The initial electrical conductivity of the rat liver that we used in this study was 0.067 S/m, and the conductivity of the tumor was 0.135 S/m [48]. It had previously been proved that tissue electrical conductivity increases because of electroporation during application of high-voltage pulses [52]. To analyze the temperature rise in the tissue, we ran simulations using a simplified model of the electrical conductivity. It was shown that when the tissue was electroporated, the electrical conductivity of the liver increased to 0.241 S/m and that of the tumor to 0.426 S/m [48]. The multiple of the increase in conductivity was in accord with the results of measurements by other researchers [40]. The reason for this simplification was that the parameters studied in this paper were already numerous enough. The purpose of the analysis was to study the relationship between the pulse parameters and the thermal effects.
The field-strength threshold used to decide whether or not the tissue was electroporated was 800 V/cm [4, 14], which is considered to be the threshold for irreversible electroporation. IRE pulse protocols typically consist of several 100 µs pulses delivered at a frequency of 1 Hz; because the total pulse time in this study is also 100 µs, it is reasonable to use 800 V/cm as the threshold. Table 1 Electrical and thermal properties of the tissue and electrodes (column headings): mass density ρ (kg/m³); heat capacity Cp (J kg⁻¹ K⁻¹); thermal conductivity k (W m⁻¹ K⁻¹); electrical conductivity σ (S/m), with 0.067 S/m as the initial value for liver; blood perfusion ωb (1/s); metabolic heat Qm (W/m³). With regard to the blood perfusion and metabolic heat generation in biological heat transfer, the blood density and heat capacity were ρb = 1000 kg/m³ and cb = 4200 J/(kg K), respectively [28, 36]. The values of the blood perfusion and the metabolic heat for the liver and the tumor are also listed in Table 1; both are larger for the tumor than for the liver because of the specific characteristics of tumor tissue [16, 28, 36]. The temperature coefficient of the electrical conductivity was 1.5% per °C. Finally, the initial temperature and the arterial blood temperature were both 37 °C. 2.3 Computing method The electric potential distribution within the tissue was obtained by transient solution of the following: $$- \nabla \cdot (\sigma \nabla \varphi ) = 0 ,$$ where φ is the electric potential and σ is the tissue conductivity. Heat transfer in the tissue can be modeled using the bioheat equation proposed by Pennes [43]: $$\rho c\frac{\partial T}{\partial t} = \nabla \cdot (k\nabla T) + \rho_{\text{b}} \omega_{\text{b}} c_{\text{b}} (T_{\text{b}} - T) + Q_{\text{m}} + Q$$ Here, T is the temperature, t is the time, ρ, c and k are the density, heat capacity and thermal conductivity of the tissue, ωb is the blood perfusion, ρb and cb are the density and heat capacity of blood, Tb is the temperature of the arterial blood, Qm is the metabolic heat, and Q is the Joule heating caused by the electric field: $$E = - \nabla \varphi ,$$ $$J = \sigma E ,$$ $$Q = J \cdot E = \sigma |\nabla \varphi |^{2} ,$$ where E is the electric field and J is the current density. Solving the Pennes equation coupled to the electric field calculation in this way gives the temperature of the biological tissue. The electrical boundary condition at one electrode–tissue interface was set to φ = φ(t), where φ(t) is the time-varying applied voltage, and the other electrode–tissue interface was set to φ = 0. The remaining boundaries were treated as electrical insulation, described by \(\frac{{{\text{d}}\varphi }}{{{\text{d}}n}} = 0\). The outer surface of the liver tissue was treated as thermally insulating. Thermal damage is a process that depends on both the temperature and the time, and occurs when the tissue temperature is elevated over an extended period. The literature indicates that damage can occur at temperatures as low as 42 °C if the exposure is long enough, while 73.4 °C is regarded as the target temperature for instantaneous thermal damage in liver tissue [8, 34, 42, 53, 54]. Some researchers also hold that temperatures above 43–45 °C lead to protein denaturation and destruction of the cell structure, which eventually leads to cell necrosis.
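As a rough plausibility check on the Joule heating source term defined above, the following sketch (Python) estimates the adiabatic temperature rise over one burst, ΔT ≈ σE²·t_on/(ρc), using the electroporated tumor conductivity quoted in the text. The local field strength near the electrodes and the tissue density and heat capacity used here are assumed typical values for illustration, not numbers taken from the paper, and perfusion and conduction losses are neglected.

def adiabatic_delta_T(sigma, e_field, t_on, rho=1050.0, c=3600.0):
    """Adiabatic temperature rise (degC) from Joule heating Q = sigma * E^2.

    sigma   : electrical conductivity (S/m)
    e_field : local electric field (V/m)
    t_on    : total field-on time (s)
    rho, c  : assumed tissue density (kg/m^3) and heat capacity (J/(kg K));
              typical soft-tissue values, not values from Table 1.
    """
    q = sigma * e_field ** 2          # volumetric heating (W/m^3)
    return q * t_on / (rho * c)

# Electroporated tumor conductivity from the text; 50 us of on-time corresponds to
# 500 ns pulses at 1 MHz within one 100 us burst. A local field of ~15 kV/cm near
# the electrode surface is an assumption for illustration only.
print(adiabatic_delta_T(sigma=0.426, e_field=15e5, t_on=50e-6))  # roughly 12-13 degC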
If the tissue temperature increases only transiently and stays below 45–50 °C, the effects may be negligible in terms of thermal injury [20, 33]. This is largely in line with the accepted view that proteins tend to denature once the temperature increase exceeds 8 °C [2]. Consequently, in this study we sought parameters that keep the temperature below 44 °C. At the same time, the well-known Arrhenius first-order kinetic model was used to evaluate the thermal damage to the tissue. The thermal damage Ω accumulated over time t is given by [17]: $$\varOmega (t) = A\int\limits_{0}^{t} {\exp ( - E/RT){\text{d}}t} ,$$ where A (1/s) is the pre-exponential factor, E (J/mol) is the activation energy, R (= 8.314 J/(mol K)) is the universal gas constant and T (K) is the absolute temperature. The damage processes and their parameters are listed in Table 2. The parameters used in this computation are a pre-exponential factor A of 7.39e39 (1/s) and an activation energy E of 2.577e5 (J/mol), which represent protein coagulation [23]. Table 2 Parameters of the damage processes (activation energy E (J/mol), pre-exponential factor A (1/s)): microvascular blood flow stasis, A = 1.98e106; cell death, A = 2.984e80; protein coagulation, A = 7.39e39, E = 2.577e5. The corresponding probability of cell kill due to thermal damage is $$P\,(\% ) = 100(1 - \exp ( - \varOmega )) .$$ In terms of finite element modeling of the thermal damage, Ω = 1 corresponds to a 63% probability of cell death, while Ω = 4.6 represents a 99% probability of cell death due to thermal effects; a value of 0.53 is used as the threshold for thermal damage [21]. The computations use parameter scanning and transient solutions. Because the elapsed pulse time is very short (the total pulse length is 100 μs), particular attention was paid to the control of the time steps in the variable-step solver: we used time steps of 10 ns during the first 100 μs and then extended the time step to 1 ms up to a total time of 1 s. 3.1 Simulation results for temperature and thermal damage Different applied pulse voltages produced different temperature rises in the tumor. The pulse voltage used in this analysis ranged from 1 to 4 kV, the pulse width from 50 to 500 ns and the frequency from 100 kHz to 1 MHz. The total pulse length was 100 μs, while the total simulation ran for 1 s. The electric field distribution in the tumor for an applied voltage of 4000 V is shown in Fig. 3a. Because the field of a needle electrode is non-uniform, the field strength is highest at the electrode–tissue interface. For the same 4000 V pulse voltage, the region covered by fields above 800 V/cm is illustrated in Fig. 3b, which shows that in this situation the whole tumor is electroporated. In this way, we can obtain not only the temperature increase due to the pulsed electric fields but also the maximum instantaneous temperature at 1 s, which reflects the heat dissipation process of the tissue. The distributions of the maximum instantaneous temperature and the thermal damage at 1 s in the tumor are shown in Fig. 3c, d. The maximum instantaneous temperature at 1 s reaches 40.4 °C in the tumor near the electrodes, and the main temperature increase is concentrated in the tissue regions near and between the electrodes. The tissue in this region is electroporated, so its electrical conductivity increases, as does its temperature.
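The thermal damage values quoted in the following paragraphs are small, and the sketch below (Python) shows how such values arise from the Arrhenius integral defined above, using the protein-coagulation parameters quoted in the text. The piecewise-constant temperature history in the example (a brief plateau near the peak 100 µs temperature, followed by a cooler remainder of the 1 s window) is an illustrative assumption, not the simulated temperature curve from the paper.

import math

A = 7.39e39      # pre-exponential factor for protein coagulation (1/s)
E = 2.577e5      # activation energy (J/mol)
R = 8.314        # universal gas constant (J/(mol K))

def arrhenius_damage(segments):
    """Omega for a piecewise-constant history given as (temperature degC, duration s) pairs."""
    return sum(A * math.exp(-E / (R * (T + 273.15))) * dt for T, dt in segments)

def cell_kill_percent(omega):
    return 100.0 * (1.0 - math.exp(-omega))

# Illustrative history: ~49 degC during the 100 us burst, then ~40 degC for the rest of 1 s.
history = [(49.26, 100e-6), (40.4, 1.0 - 100e-6)]
omega = arrhenius_damage(history)
print(omega, cell_kill_percent(omega))   # Omega on the order of 1e-3, P well below 1%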
Figure 3d indicates that the thermal damage distribution is similar to that of the temperature, and that the maximum thermal damage is only 0.0016 at the end of the 1 s simulation. The shapes of the temperature and thermal damage distributions also agree with the results reported by other researchers [21]. Spatial distribution (xy cross section, z = 50 mm) of electric field intensity (a), coverage of 800 V/cm (b), temperature (c) and thermal damage (d) in the tumor at the time point of 1 s when the applied voltage is 4000 V, the pulse width is 500 ns and the repetition rate is 1 MHz Figure 4 shows that the maximum temperature increases approximately linearly with time during the pulses and decreases exponentially once the pulses stop. From Fig. 4b, which shows a detailed view of Fig. 4a, we can see that the temperature actually increases stepwise over time. Figure 4c, d shows that the maximum thermal damage changes nonlinearly with time within the first 100 μs and then increases linearly. Figure 4e shows the change in the percentage of cell kill due to thermal damage (P) with time, and Fig. 4f shows an enlargement of Fig. 4e. These curves closely resemble the thermal damage curves; the value of P (%) is simply about 100 times the value of Ω, since P ≈ 100Ω when Ω is small. Changes in temperature, thermal damage and percentage of cell kill due to thermal damage with time: a, c and e show the temperature, thermal damage and percentage-of-cell-kill curves, respectively, when the pulse width is 500 ns and the repetition rate is 1 MHz; b, d and f show enlarged versions of (a), (c) and (e), respectively, focusing on the rise time. The different curves correspond to different pulse voltages: 1000, 2000, 3000 and 4000 V The largest maximum instantaneous temperature at 100 µs and the largest maximum instantaneous thermal damage at 1 s obtained are 49.26 °C and 0.0016, respectively. 3.2 Relationship between thermal effects and pulse parameters The graphs are all drawn from the simulation results by interpolation. Figure 5 displays the relationship between the temperature and the pulse parameters. More specifically, Fig. 5a, b shows the maximum instantaneous temperature at 100 µs and the maximum instantaneous temperature at 1 s, respectively, when the pulse width is 500 ns, while Fig. 5c, d corresponds to a frequency of 1 MHz. From these figures, we conclude that the temperature depends linearly on the pulse width and on the frequency, and that its dependence on the voltage follows a square law. Similarly, Fig. 6a illustrates the relationship among the maximum thermal damage in the tumor, the pulse voltage and the repetition frequency when the pulse width is 500 ns, and Fig. 6b shows the relationship among the thermal damage, the applied voltage and the pulse width when the repetition frequency is 1 MHz.
Relationship among tumor temperature, pulse voltage and frequency when the pulse width is 500 ns: a maximum instantaneous temperature at 100 µs and b maximum instantaneous temperature at 1 s; when the repetition frequency is 1 MHz, (c) and (d) show the relationships among tumor temperature, pulse voltage and pulse width Relationship among thermal damage, pulse voltage, pulse width and frequency: a when the pulse width is 500 ns; b when the frequency is 1 MHz 3.3 Determination of pulse parameters without causing thermal damage The data were also processed to determine the upper limits on the parameters that keep the temperature below 44 °C. The results are displayed in Figs. 7 and 8. Because our simulations run for only 1 s, the temperature does not become very high, and only the maximum instantaneous temperature at 100 µs can reach 44 °C; the following analyses are therefore based on this time point. In Fig. 7, a two-dimensional parameter analysis gives the range of parameters that ensure that the temperature does not exceed 44 °C: for a fixed pulse width or a fixed frequency, the parameter combinations below and to the left of the curve are the acceptable ones. The temperature exceeds 44 °C only for the largest pulse width (500 ns) combined with high repetition rates approaching 1 MHz and high voltages. For example, Fig. 7a shows that when the pulse width is 500 ns and the voltage is 4000 V, the repetition rate must not exceed about 600 kHz to keep the temperature below 44 °C; similarly, when the repetition rate is 1 MHz and the voltage is 4000 V, the pulse width must not exceed about 300 ns. We also adopted a multi-parameter analysis to evaluate the ranges of the three parameters together for more specific results. Figure 8a, b shows two views of the three-dimensional curved surface whose axes are the three pulse parameters: voltage, pulse width and frequency. Figure 8a shows the thin, sheet-like three-dimensional surface itself, and Fig. 8b shows the same surface viewed from above its top right. The region below the curved surface, toward the origin, corresponds to temperatures of less than 44 °C, and the ranges of the three axes indicate which combinations of pulse parameters can be used together. The results of this investigation, shown in Figs. 7 and 8, can provide theoretical guidance for parameter selection in practical experiments. Temperature contours for 44 °C under different voltages, pulse widths and repetition frequencies: a when the pulse width is 500 ns; b when the frequency is 1 MHz Curved surface for 44 °C under different voltages, pulse widths and repetition frequencies By calculating the electric field coupled with the thermal field in the finite element simulations, the temperature and thermal damage profiles were obtained. On this basis, the data were analyzed to produce these figures and to pave the way for the subsequent data fitting and estimation. 4.1 Prediction of temperature and thermal damage under high-frequency nanosecond pulse bursts This study provides insight into the thermal behavior of tissue when short, square-wave electroporation pulses are applied. These protocols differ from traditional IRE pulses and also from the pulse bursts of conventional nsPEF treatment: they are high-frequency nanosecond pulses, but the total pulse length is 100 μs.
This kind of pulse can be regarded as using a microsecond pulse envelope to modulate nanosecond pulses, thereby overcoming the shortcomings of the two separate pulse protocols. However, it is noteworthy that these pulses may cause thermal damage to the tissue because of their high field strengths. This effect is important when planning treatment protocols in the vicinity of sensitive structures such as blood vessels and nerves. Therefore, it is necessary to study the thermal effects under these high-frequency nanosecond-pulsed conditions. Based on the simulation results, we obtain the maximum instantaneous temperature at 100 µs and the maximum thermal damage at 1 s in the tumor under different voltages, pulse widths and repetition frequencies. The largest values that can be reached, when the injected energy is at its maximum, are 49.26 °C and 0.0016, respectively. This suggests that thermal damage will not be caused within a single pulse burst. Because the relationships between the temperature and the pulse parameters were analyzed above, the maximum instantaneous temperature at 100 µs and the maximum instantaneous temperature at 1 s for the tumor can be fitted by the following formulas: $$T_{\text{m}} \approx T_{0} + (1.5 \times 10^{ - 12} p_{\text{w}} fV^{2} )N,\;\left( {N\,{\text{is not too large}}} \right)$$ $$T_{\text{f}} \approx T_{0} + (4.8 \times 10^{ - 13} p_{\text{w}} fV^{2} )N,\;\left( {N\,{\text{is not too large}}} \right)$$ where T m (°C) and T f (°C) are the maximum instantaneous temperatures at 100 µs and at 1 s in the tumor, respectively, T 0 (= 37 °C) is the initial temperature, p w (ns) is the pulse width, f (kHz) is the repetition frequency, V (V) is the voltage applied to the electrodes and N is the number of pulse bursts. In this simulation, we ran only one pulse burst; however, the temperature after multiple pulse bursts can be roughly estimated by scaling the rise with the factor N. The temperature prediction curve is illustrated in Fig. 9, where the x-axis is the number of pulse bursts and the y-axis is the temperature in the tumor; the curve corresponds to a pulse voltage of 4000 V, a pulse width of 500 ns and a repetition rate of 1 MHz. From Fig. 9, we can see that when the number of pulse bursts increases to 8, the temperature reaches about 75 °C, which may cause instantaneous thermal damage in the tumor. With this method, we can estimate the temperature rise under multiple pulse bursts. In reality, however, the temperature rise produced by each successive burst is not the same: as the tissue temperature continues to rise, the cooling process also becomes more pronounced. If we assume that the temperature rise is the same after each pulse burst, we therefore obtain an upper bound on the maximum temperature. Because the thermal damage is associated with the time integral of the temperature, the temperature can reach levels that cause thermal damage when several bursts of pulses are applied; this demonstrates the cumulative effect of the temperature and is related to the area enclosed below the curve. Consequently, we can roughly estimate the temperature increase in the tumor and choose parameters that do not cause thermal damage. It should also be noted that, when the treatment outcome is taken into consideration, more bursts of pulses may need to be applied.
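As a worked illustration of the fitted formulas above, the sketch below (Python; the function names are ours) evaluates T_m and T_f for the strongest setting in the study and inverts the T_m expression to recover the roughly 600 kHz and roughly 300 ns limits quoted for the 44 °C criterion in Sect. 3.3. The inversion simply rearranges the fitted expression and assumes the fit remains valid up to the 44 °C boundary.

T0 = 37.0          # initial temperature (degC)
KM = 1.5e-12       # fitted coefficient for the peak temperature at 100 us
KF = 4.8e-13       # fitted coefficient for the temperature at 1 s

def t_max_100us(pw_ns, f_khz, volts, n_bursts=1):
    return T0 + KM * pw_ns * f_khz * volts**2 * n_bursts

def t_max_1s(pw_ns, f_khz, volts, n_bursts=1):
    return T0 + KF * pw_ns * f_khz * volts**2 * n_bursts

# Strongest setting in the study: 500 ns, 1 MHz (=1000 kHz), 4000 V, one burst.
print(t_max_100us(500, 1000, 4000))   # ~49 degC, close to the simulated 49.26 degC
print(t_max_1s(500, 1000, 4000))      # ~40.8 degC, close to the simulated 40.4 degC

# Inverting T_m <= 44 degC for the two examples in Sect. 3.3:
f_limit_khz = (44.0 - T0) / (KM * 500 * 4000**2)    # at 500 ns and 4000 V
pw_limit_ns = (44.0 - T0) / (KM * 1000 * 4000**2)   # at 1 MHz and 4000 V
print(f_limit_khz, pw_limit_ns)       # ~583 kHz and ~292 ns, i.e. roughly 600 kHz / 300 ns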
In addition, when performing an analysis of the thermal effects for parameter selection, a portion of the region immediately adjacent to the electrodes should be excluded, because hot spots always exist there. However, it remains to be determined whether such an exclusion is sufficient to meet the clinical requirements. Temperature prediction curve 4.2 Limitations of the simulations Because this simulation aims to study the relationship between the thermal effects and the pulse parameters, we have drawn a number of conclusions from the results. However, there are also some limitations to our simulations. First, this paper studies the influence of multiple parameters (voltage, pulse width and frequency) of high-frequency nanosecond pulses on the thermal effects, but only four values of each parameter were considered. Even so, the resulting 64 parameter combinations are sufficient to reveal the underlying trends, and more values would greatly increase the computational effort. Unlike other studies in the literature, in which only a few parameters are studied [6, 19–24, 31], this analysis was designed from a multi-parameter perspective to determine rules for fitting and estimating these parameters. Second, some measurements have been performed to study the nonlinear increase in tissue conductivity during IRE and nsPEF therapies when the tissues are exposed to sufficiently high electric fields [22, 26, 40, 45, 52]. However, few studies have examined how the tissue conductivity changes when the tissue is subjected to high-frequency composite pulses. Bhonsle et al. [6] measured the conductivity before and after application of high-frequency bipolar pulses, but the protocols in this simulation use high-frequency unipolar pulses, for which the conductivity changes remain unclear. To simplify the calculations, we therefore used a simple model of the conductivity changes: the initial electrical conductivity of the rat liver was 0.067 S/m and that of the tumor was 0.135 S/m, and when the tissue was electroporated, the conductivity of the liver increased to 0.241 S/m and that of the tumor to 0.426 S/m. Neal et al. used an equivalent circuit of a cell to analyze the bioimpedance behavior, in which a variable resistance represents the macroscopic behavior of tissue under pulsed electric fields. When sufficiently strong fields are applied, the variable resistance is short-circuited and the overall resistance becomes a function of only the intra- and extracellular resistances; the same effect is produced when the pulse frequency is high enough to short-circuit the capacitive component of the cell membrane [52]. The behavior of a single cell can be scaled up to represent that of a larger tissue sample [18, 19]. In this way, a method can be obtained for estimating, in further work, the increase in electrical conductivity that occurs when electroporation and the high-frequency components of the pulses act synergistically. Third, the threshold field for electroporation used in this study was 800 V/cm [4, 14]. However, as the pulse frequency increases, the permeabilization threshold also increases [25, 49], and different protocols may raise the threshold by different amounts. We cannot simply assign a higher field threshold for high-frequency nsPEF treatment without a great deal of preparatory experimental research.
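The simplified conductivity treatment used in this study (a fixed 800 V/cm threshold switching the conductivity between its baseline and electroporated values) can be stated compactly. The snippet below (Python) does so using only the values quoted in the text; treating the switch as a sharp step rather than a smooth, field-dependent transition mirrors the simplification adopted here.

E_THRESHOLD = 800 * 100.0   # 800 V/cm expressed in V/m

# (baseline, electroporated) conductivities in S/m, as quoted in the text
SIGMA = {"liver": (0.067, 0.241), "tumor": (0.135, 0.426)}

def conductivity(tissue, e_field_v_per_m):
    """Two-state conductivity: baseline below the IRE threshold, elevated above it."""
    low, high = SIGMA[tissue]
    return high if e_field_v_per_m >= E_THRESHOLD else low

print(conductivity("tumor", 1.0e5))   # 1 kV/cm -> 0.426 (electroporated)
print(conductivity("liver", 5.0e4))   # 0.5 kV/cm -> 0.067 (baseline)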
This article is intended to provide a simulation method for studying the thermal effects, and it is therefore acceptable to use the threshold field for irreversible electroporation. Finally, this study of temperature and thermal damage has been performed on the basis of numerical simulations and thus lacks experimental verification. Despite this, the study is useful from the perspective of using multiple parameters to investigate the relationship between the thermal effects and the pulse parameters (voltage, pulse width and repetition frequency) under application of high-frequency nanosecond composite pulses. 4.3 Future work It is important to obtain accurate values of the changes in conductivity in order to calculate the electric field distribution and to predict both the treatment outcomes of high-frequency pulses and the thermal effects. Many studies have measured the increase in tissue conductivity during electroporation-based protocols [9, 11, 26, 27, 29, 45], and the feasibility of using electrical impedance tomography [11, 13] and magnetic resonance electrical impedance tomography [29, 30] to monitor the electric field distribution has also been suggested. Our next step is to measure the conductivity of tissue subjected to high-frequency nanosecond pulses and to verify the effect of the high-frequency components on the electrical conductivity. Temperature measurement is also vital for verifying the accuracy of the models by comparing experimental results with the theoretical calculations. Garcia et al. used a fiber-optic temperature sensor to measure the temperature inside the tissue [12, 22]. A thermocouple was used by Pliquett et al. [44] for bulk temperature measurements, while temperature-sensitive liquid crystals have been used to measure the surface temperature; a thermal camera can also capture surface temperatures [6]. Locally, the protocol proposed in this study can be viewed as high-frequency nanosecond pulses, but as a whole it also has the characteristics of microsecond pulses. Further research is necessary to assess the treatment outcomes, including the mechanism at work when a tumor is exposed to such pulses. In general, the application of IRE pulses creates pores in the cell membrane ranging from several nanometers to several tens of nanometers in size. The pores of a few nanometers reseal, while the pores of tens of nanometers continue to expand to several hundred nanometers or even micrometers and become irreversible. Low-frequency nsPEF, by contrast, also produces pores of a few nanometers, but these are reversible. When high-frequency nsPEF is applied, small nanopores are therefore expected to form at the beginning and then, because the total pulse time is 100 μs, to expand as in IRE. We assume that nsPEFs produce nanopores in the cell membrane, which promote irreversible electroporation of the membrane by the μsPEF component; once the outer membrane has been compromised, electroporation of the organelle membranes to induce apoptosis is also facilitated. One hypothesis is that nsPEFs combined with μsPEFs act on both the inner and outer membranes, inducing tumor cell necrosis and apoptosis through a direct killing effect and slower indirect regulation, but numerous experiments are still required to verify this hypothesis. In this study, we have presented a type of pulse protocol for electroporation-based therapies.
The pulse voltage used is in the range from 1 to 4 kV, the pulse width ranges from 50 to 500 ns, and the repetition frequency ranges from 100 kHz to 1 MHz. The total pulse length is 100 μs, and the repetition rate of the pulse bursts is 1 Hz. To analyze the thermal effect on the tumor, simulation models were developed based on finite element methods. The simulation results indicate that the maximum instantaneous temperature at 100 µs reaches up to 49.26 °C, while the maximum instantaneous temperature and the maximum instantaneous thermal damage at 1 s reach 40.4 °C and 0.0016, respectively, during a single pulse burst. Through multi-parameter analysis, we obtain rules for how the pulse parameters affect the temperature and the thermal damage. By parameter fitting, the maximum instantaneous temperatures at 100 µs and at 1 s can be calculated for any parameter values after a single pulse burst or after multiple pulse bursts. In addition, parameter-based estimation indicates that higher temperatures, which may cause thermal damage, are likely to be reached when several bursts of pulses are applied. The results of the temperature and thermal damage calculations performed for different high-frequency nsPEF parameters can provide a theoretical basis for parameter selection in experimental research. This study was funded by the National Natural Science Foundation of China (51477022, 51321063), the Natural Science Foundation Project of CQ CSTC (cstc2014jcyjjq90001) and the Fundamental Research Funds for the Central Universities (No. 106112015CDJZR158804).
Arena CB, Sano MB, Rossmeisl JH, Caldwell JL, Garcia PA, Rylander MN, Davalos RV (2011) High-frequency irreversible electroporation (H-FIRE) for non-thermal ablation without muscle contraction. Biomed Eng Online 10:102
Arena CB, Sano MB, Rylander MN, Davalos RV (2011) Theoretical considerations of tissue electroporation with high frequency bipolar pulses. IEEE Trans Biomed Eng 58(5):1474–1482
Arena CB, Szot CS, Garcia PA (2012) A three-dimensional in vitro tumor platform for modeling therapeutic irreversible electroporation. Biophys J 103(9):2033–2042
Bertacchini C, Margotti PM, Bergamini E et al (2007) Design of an irreversible electroporation system for clinical use. Technol Cancer Res Treat 6(4):313–320
Bhonsle SP, Arena CB, Davalos RV (2015) A feasibility study to mitigate tissue-tumor heterogeneity using high frequency bipolar electroporation pulses. In: 6th European conference of the International Federation for Medical and Biological Engineering, vol 45. Springer International Publishing, pp 565–568
Bhonsle SP, Arena CB, Sweeney DC, Davalos RV (2015) Mitigation of impedance changes due to electroporation therapy using bursts of high-frequency bipolar pulses. Biomed Eng Online 14(Suppl 3):S3
Breton M, Mir LM (2012) Microsecond and nanosecond electric pulses in cancer treatments. Bioelectromagnetics 33:106–123
Chang IA, Nguyen UD (2004) Thermal modeling of lesion growth with radiofrequency ablation devices. Biomed Eng Online 3:27
Cima LF, Mir LM (2004) Macroscopic characterization of cell electroporation in biological tissue based on electrical measurements.
Appl Phys 85:4520–4522
Daskalov I, Mudrov N, Peycheva E (1999) Exploring new instrumentation parameters for electrochemotherapy—Attacking tumors with bursts of biphasic pulses instead of single pulses. IEEE Eng Med Biol 18(1):62–66
Davalos RV, Rubinsky B, Otten DM (2002) A feasibility study for electrical impedance tomography as a means to monitor tissue electroporation for molecular medicine. IEEE Trans Biomed Eng 49(4):400–403
Davalos RV, Rubinsky B, Mir LM (2003) Theoretical analysis of the thermal effects during in vivo tissue electroporation. Bioelectrochemistry 61(1–2):99–107
Davalos RV, Otten DM, Mir LM, Rubinsky B (2004) Electrical impedance tomography for imaging tissue electroporation. IEEE Trans Biomed Eng 51(5):761–767
Davalos RV, Mir LM, Rubinsky B (2005) Tissue ablation with irreversible electroporation. Ann Biomed Eng 33(2):223–231
Davalos RV, Garcia PA, Edd JF (2010) Thermal aspects of irreversible electroporation. In: Irreversible electroporation. Springer, Berlin, pp 123–154
Deng ZS, Jing L (2004) Mathematical modeling of temperature mapping over skin surface and its implementation in thermal disease diagnostics. Comput Biol Med 34(6):495–521
Diller KR, Valvano JW, Pearce JA (2000) The CRC handbook of thermal engineering. CRC Press LLC, Boca Raton
Esser AT, Smith KC, Gowrishankar TR, Weaver JC (2007) Towards solid tumor treatment by irreversible electroporation: intrinsic redistribution of fields and currents in tissue. Technol Cancer Res Treat 6:261–273
Fricke H (1924) A mathematical treatment of the electric conductivity and capacity of disperse systems: I. The electric conductivity of a suspension of homogeneous spheroids. Phys Rev 24:575
Garcia PA, Rossmeisl JH, Neal RE II, Ellis TL, Olson JD, Henao-Guerrero N et al (2010) Intracranial nonthermal irreversible electroporation: in vivo analysis. J Membr Biol 236(1):127–136
Garcia PA, Rossmeisl JH Jr, Neal RE 2nd, Ellis TL, Davalos RV (2011) A parametric study delineating irreversible electroporation from thermal damage based on a minimally invasive intracranial procedure. Biomed Eng Online 10:34
Garcia PA, Neal RE, Sano MB, Robertson JL, Davalos RV (2011) An experimental investigation of temperature changes during electroporation. In: General assembly and scientific symposium, 2011 XXXth URSI. IEEE, Istanbul, Turkey, pp 1–4
Garcia PA, Davalos RV, Miklavcic D (2014) A numerical investigation of the electric and thermal cell kill distributions in electroporation-based therapies in tissue. PLoS ONE 8(9):1–12
Golberg A, Yarmush ML (2013) Nonthermal irreversible electroporation: fundamentals, applications, and challenges. IEEE Trans Biomed Eng 60(3):707–714
Ibey BL, Xiao S, Schoenbach KH, Murphy MR, Pakhomov AG (2009) Plasma membrane permeabilization by 60- and 600-ns electric pulses is determined by the absorbed dose. Bioelectromagnetics 30(2):92–99
Ivorra A, Rubinsky B (2007) In vivo electrical impedance measurements during and after electroporation of rat liver.
Bioelectrochemistry 70(2):287–295
Ivorra A, Al-Sakere B, Rubinsky B, Mir LM (2009) In vivo electrical conductivity measurements during and after tumor electroporation: conductivity changes reflect the treatment outcome. Phys Med Biol 54(19):5949–5963
Jamil M, Ng EYK (2013) To optimize the efficacy of bioheat transfer in capacitive hyperthermia: a physical perspective. J Therm Biol 38(5):272–279
Kranjc M, Bajd F, Sersa I, Miklavcic D (2011) Magnetic resonance electrical impedance tomography for monitoring electric field distribution during tissue electroporation. IEEE Trans Med Imaging 30(10):1771–1778
Kranjc M, Bajd F, Sersa I, Woo EJ, Miklavcic D (2012) Ex vivo and in silico feasibility study of monitoring electric field distribution in tissue during electroporation based treatments. PLoS ONE 7(9):e45737
Kurata K, Nomura S, Takamatsu H (2014) Three-dimensional analysis of irreversible electroporation: estimation of thermal and non-thermal damage. Int J Heat Mass Transf 72(5):66–74
Lackovic I, Magjarevic R, Miklavcic D (2007) Analysis of tissue heating during electroporation based therapy: a 3D FEM model for a pair of needle electrodes. In: 11th Mediterranean conference on medical and biomedical engineering and computing 2007. Springer, Berlin, Heidelberg, pp 631–634
Lackovic I, Magjarevic R, Miklavcic D (2009) Three-dimensional finite-element analysis of Joule heating in electrochemotherapy and in vivo gene electrotransfer. IEEE Trans Dielectr Electr Insul 16(5):1338–1347
Lackovic I, Magjarevic R, Miklavcic D (2009) A multiphysics model for studying the influence of pulse repetition frequency on tissue heating during electrochemotherapy. In: 4th European conference of the International Federation for Medical and Biological Engineering. Springer, Berlin, Heidelberg, pp 2609–2613
Luo X (2007) An experimental study of energy controllable steep pulse in the treatment of rat with subcutaneous transplantive tumor. J Biomed Eng 24(3):492–495
Lv YG, Deng ZS, Liu J (2005) 3-D numerical study on the induced heating effects of embedded micro/nanoparticles on human body subject to external medical electromagnetic field. IEEE Trans Nanobiosci 4(4):284–294
Marty M et al (2006) Electrochemotherapy—An easy, highly effective and safe treatment of cutaneous and subcutaneous metastases: results of ESOPE study. Eur J Cancer Suppl 4(11):3–13
Mi Y (2007) Effect of steep pulsed electric fields on the immune response of tumor-bearing Wistar mice. J Biomed Eng 24(2):253–256
Miklavčič D, Pucihar G, Pavlovec M, Ribarič S, Mali M, Maček-Lebar A, Petkovšek M, Nastran J, Kranjc S, Čemažar M, Serša G (2005) The effect of high frequency electric pulses on muscle contractions and antitumor efficiency in vivo for a potential use in clinical electrochemotherapy. Bioelectrochemistry 65:121–128
Neal RE II, Garcia PA, Robertson JL, Davalos RV (2012) Experimental characterization and numerical modeling of tissue electrical conductivity during pulsed electric fields for irreversible electroporation treatment planning.
IEEE Trans Biomed Eng 59:1076–1085
Pavšelj N, Miklavčič D (2011) Resistive heating and electropermeabilization of skin tissue during in vivo electroporation: a coupled nonlinear finite element model. Int J Heat Mass Transf 54:2294–2302
Pearce JA (2009) Relationship between Arrhenius models of thermal damage and the CEM 43 thermal dose. In: Proceedings of SPIE, Energy-based treatment of tissue and assessment V, vol 7181, p 718104. doi:10.1117/12.807999
Pennes HH (1998) Analysis of tissue and arterial blood temperatures in the resting human forearm. J Appl Physiol 85:5–34
Pliquett U, Nuccitelli R (2014) Measurement and simulation of Joule heating during treatment of B-16 melanoma tumors in mice with nanosecond pulsed electric fields. Bioelectrochemistry 100:62–68
Pliquett U, Schoenbach K (2009) Changes in electrical impedance of biological matter due to the application of ultrashort high voltage pulses. IEEE Trans Dielectr Electr Insul 16:1273
Pucihar G, Mir LM, Miklavčič D (2002) The effect of pulse repetition frequency on the uptake into electropermeabilized cells in vitro with possible applications in electrochemotherapy. Bioelectrochemistry 57:167–172
Rubinsky B (2007) Irreversible electroporation in medicine. Technol Cancer Res Treat 6(4):255–260
Sahakian AV, Al-Angari HM, Adeyanju OO (2012) Electrode activation sequencing employing conductivity changes in irreversible electroporation tissue ablation. IEEE Trans Biomed Eng 59(3):604–607
Sano MB, Arena CB, DeWitt MR, Saur D, Davalos RV (2014) In vitro bipolar nano- and microsecond electro-pulse bursts for irreversible electroporation therapies. Bioelectrochemistry 100:69–79
Schoenbach KH, Beebe SJ, Buescher ES (2001) Intracellular effect of ultrashort electrical pulses. Bioelectromagnetics 22(6):440–448
Schoenbach KH, Hargrave B, Joshi RP, Kolb JF, Nuccitelli R, Osgood C, Pakhomov A, Stacey M, Swanson RJ, White JA, Xiao S, Zhang J, Beebe SJ, Blackmore PF, Buescher ES (2007) Bioelectric effects of intense nanosecond pulses. IEEE Trans Dielectr Electr Insul 14(5):1088–1109
Sel D, Cukjati D, Batiuskaite D, Slivnik T, Mir LM et al (2005) Sequential finite element model of tissue electropermeabilization. IEEE Trans Biomed Eng 52:816–827
Shafiee H, Garcia PA, Davalos RV (2009) A preliminary study to delineate irreversible electroporation from thermal damage using the Arrhenius equation. J Biomech Eng 131:074509
Thomsen S, Pearce JA (2011) Thermal damage and rate processes in biologic tissues. In: Welch AJ, van Gemert MJC (eds) Optical-thermal response of laser irradiated tissue, 2nd edn. Springer Science+Business Media B.V., Berlin, pp 487–549
Thomson KR, Cheung W, Ellis SJ, Park D, Kavnoudias H, Loader-Oliver D, Roberts S, Evans P, Ball C, Haydon A (2011) Investigation of the safety of irreversible electroporation in humans. J Vasc Interv Radiol 22(5):611–621
Yao CG, Zhao YJ, Dong SL, Chen R, Liao RJ (2015) Optimization of the treatment planning for the tumor ablation of irreversible electroporation based on genetic algorithm.
High Voltage Engineering (in press)
Zupanic A, Ribaric S, Miklavcic D (2007) Increasing the repetition frequency of electric pulse delivery reduces unpleasant sensations that occur in electrochemotherapy. Neoplasma 54(3):246–250
Author affiliations: 1. State Key Laboratory of Power Transmission Equipment & System Security and New Technology, School of Electrical Engineering, Chongqing University, Shapingba District, Chongqing, China; 2. The State Grid Tianjin Power Maintenance Company, Hebei District, Tianjin, China
Mi Y, Rui S, Li C et al (2017) Med Biol Eng Comput 55:1109. https://doi.org/10.1007/s11517-016-1589-3. First online: 16 November 2016. International Federation for Medical and Biological Engineering
CommonCrawl
Programmeringsolympiadens final 2019 - open
You have been captured by an evil giant. You are both in an $N \times M$ cave consisting of all points with integer coordinates $(x, y)$ such that $0 \le x < N, 0 \le y < M$. The giant plans to eat you, so you must escape before it's too late! The giant is standing with his feet at two different points in the cave. To escape, you plan to put a bar of gold at a third point in the cave. The giant will then bend down to pick the bar up. If the positions of the giant's feet and the gold bar together form an obtuse triangle, the giant will lose his balance and fall down. In this case, you will get an opportunity to run. Write a program that, given the size of the cave, the coordinates of the giant's right foot $(x_1, y_1)$ and the coordinates of the giant's left foot $(x_2, y_2)$, finds a point with integer coordinates at which to place the bar of gold, so that the three points form a non-degenerate[1] obtuse triangle.
The first line of the input contains two integers $N$ and $M$ ($1\leq N, M \leq 10^9$), the size of the cave. The second line contains four integers $x_1$, $y_1$, $x_2$ and $y_2$ ($0\leq x_1, x_2 < N$, $0\leq y_1, y_2 < M$), the coordinates of the giant's two feet. These two points are always different.
Print two integers $x_3, y_3$ ($0\leq x_3 < N$, $0\leq y_3 < M$) on the same line, so that the point with these coordinates, together with the two points in the input, forms a non-degenerate obtuse triangle. The input is constructed so that there is always at least one such point.
Your solution will be tested on a set of test case groups. To get the points for a group, you need to pass all the test cases in the group. Group 1 (30 points): $1\leq N \leq 1000$ and $1\leq M \leq 1000$. Group 2: $1000\leq N\leq 10^9$ and $1000\leq M \leq 10^9$. Group 3: $x_1 \neq x_2$ and $y_1 \neq y_2$. Group 4: no further constraints.
Explanation of sample 1: the points $(1,1)$, $(3,4)$ and $(1,2)$ form an obtuse triangle, with the obtuse angle at $(1,2)$, and $(1,2)$ lies within the cave. The point $(1,4)$ would not have been a correct answer, since it would form a right triangle rather than an obtuse triangle. The point $(5,7)$ would not have been a correct answer, since it would form a degenerate triangle, and $(5,7)$ also lies outside the cave.
0 0 0 999999999
[1] A triangle is non-degenerate if its three vertices do not lie on the same line, https://en.wikipedia.org/wiki/Degeneracy_(mathematics)
Problem ID: pofinal19.jatten Language: en, sv Author: Erik Amirell Eklöf Source: Programmeringsolympiadens final 2019
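Since the statement above gives no reference solution, here is a small illustrative sketch (Python, reading from standard input) of one way to attack the problem: enumerate candidate lattice points in a small box around each foot, clipped to the cave, and test each candidate with exact integer arithmetic (an obtuse angle corresponds to a strictly negative dot product at some vertex, and non-degeneracy to a nonzero cross product). The assumption that a valid point, when one exists, always lies in these small neighborhoods is ours and is not proven here; any point the sketch prints is nevertheless guaranteed to form a non-degenerate obtuse triangle.

import sys

def solve(n, m, x1, y1, x2, y2):
    a, b = (x1, y1), (x2, y2)

    def obtuse_nondegenerate(c):
        pts = (a, b, c)
        # Non-degenerate: cross product of (b - a) and (c - a) must be nonzero.
        cross = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
        if cross == 0:
            return False
        # Obtuse: at some vertex the two edge vectors have a strictly negative dot product.
        for i in range(3):
            p, q, r = pts[i], pts[(i + 1) % 3], pts[(i + 2) % 3]
            if (q[0] - p[0]) * (r[0] - p[0]) + (q[1] - p[1]) * (r[1] - p[1]) < 0:
                return True
        return False

    # Candidate points: a 5x5 box around each foot, clipped to the cave.
    for cx, cy in (a, b):
        for dx in range(-2, 3):
            for dy in range(-2, 3):
                c = (cx + dx, cy + dy)
                if 0 <= c[0] < n and 0 <= c[1] < m and c not in (a, b):
                    if obtuse_nondegenerate(c):
                        return c
    return None  # not expected for valid inputs under the assumption above

data = sys.stdin.read().split()
n, m, x1, y1, x2, y2 = map(int, data[:6])
ans = solve(n, m, x1, y1, x2, y2)
if ans:
    print(ans[0], ans[1])

The search examines at most 50 candidates, so it runs in constant time regardless of $N$ and $M$.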
CommonCrawl
You are here: Home ∼ Weekly Papers on Quantum Foundations (18) Weekly Papers on Quantum Foundations (18) Published by editor on April 30, 2016 This is a list of this week's papers on quantum foundations published in various journals or uploaded to preprint servers such as arxiv.org and PhilSci Archive. The Reality of Casimir Friction. (arXiv:1508.00626v2 [quant-ph] UPDATED) quant-ph updates on arXiv.org on 2016-4-29 12:38am GMT Authors: K. A. Milton, J. S. Høye, I. Brevik For more than 35 years theorists have studied quantum or Casimir friction, which occurs when two smooth bodies move transversely to each other, experiencing a frictional dissipative force due to quantum electromagnetic fluctuations, which break time-reversal symmetry. These forces are typically very small, unless the bodies are nearly touching, and consequently such effects have never been observed, although lateral Casimir forces have been seen for corrugated surfaces. Partly because of the lack of contact with phenomena, theoretical predictions for the frictional force between parallel plates, or between a polarizable atom and a metallic plate, have varied widely. Here we review the history of these calculations, show that theoretical consensus is emerging, and offer some hope that it might be possible to experimentally confirm this phenomenon of dissipative quantum electrodynamics. Entanglement Conservation, ER=EPR, and a New Classical Area Theorem for Wormholes. (arXiv:1604.08217v1 [hep-th]) Authors: Grant Remmen, Ning Bao, Jason Pollack We consider the question of entanglement conservation in the context of the ER=EPR correspondence equating quantum entanglement with wormholes. In quantum mechanics, the entanglement between a system and its complement is conserved under unitary operations that act independently on each; ER=EPR suggests that an analogous statement should hold for wormholes. We accordingly prove a new area theorem in general relativity: for a collection of dynamical wormholes and black holes in a spacetime satisfying the null curvature condition, the maximin area for a subset of the horizons (giving the largest area attained by the minimal cross section of the multi-wormhole throat separating the subset from its complement) is invariant under classical time evolution along the outermost apparent horizons. The evolution can be completely general, including horizon mergers and the addition of classical matter satisfying the null energy condition. This theorem is the gravitational dual of entanglement conservation and thus constitutes an explicit characterization of the ER=EPR duality in the classical limit. A Modest View of Bell's Theorem. (arXiv:1604.08529v1 [physics.hist-ph]) In the 80 years since the seminal Einstein, Podolsky, and Rosen (EPR) paper, physicists and philosophers have mused about the `spooky action at a distance' aspect of quantum mechanics that so bothered Einstein. In his formal analysis of EPR-type entangled quantum states, Bell (1964) concluded that any hidden variable theory designed to reproduce the predictions of quantum mechanics must necessarily be nonlocal and allow superluminal interactions. This doesn't immediately imply that nonlocality is a characteristic feature of quantum mechanics let alone a fundamental property of nature; however, many physicists and philosophers of science do harbor this belief. 
Experts in the field often use the term `nonlocality' to designate particular non-classical aspects of quantum entanglement and do not confuse the term with superluminal interactions. However, many physicists seem to take the term more literally. I endeavor to disabuse the latter of this notion by emphasizing that the correlations of Bell-type entanglement are a result of ordinary quantum superposition with no need to introduce nonlocality. The conclusion of the EPR paper wasn't that quantum mechanics is nonlocal but rather that it is an incomplete description of reality. For different reasons, many physicists, including me, agree with Einstein that quantum mechanics is necessarily an incomplete description of reality. Probabilistic Foundations of Contextuality. (arXiv:1604.08412v1 [quant-ph]) Authors: Ehtibar Dzhafarov, Janne Kujala Contextuality is usually defined as absence of a joint distribution for a set of measurements (random variables) with known joint distributions of some of its subsets. However, if these subsets of measurements are not disjoint, contextuality is mathematically impossible even if one generally allows (as one must) for random variables not to be jointly distributed. To avoid contradictions one has to adopt the Contextuality-by-Default approach: measurements made in different contexts are always distinct and stochastically unrelated to each other. Contextuality is reformulated then in terms of the (im)possibility of imposing on all the measurements in a system a joint distribution of a particular kind: such that any measurements of one and the same property made in different contexts satisfy a specified property, $\mathcal{C}$. In the traditional analysis of contextuality $\mathcal{C}$ means "are equal to each other with probability 1". However, if the system of measurements violates the "no-disturbance principle", due to signaling or experimental biases, then the meaning of $\mathcal{C}$ has to be generalized, and the proposed generalization is "are equal to each other with maximal possible probability" (applied to any set of measurements of one and the same property). This approach is illustrated on arbitrary systems of binary measurements, including most of quantum systems of traditional interest in contextuality studies (irrespective of whether "no-disturbance" principle holds in them). Quantum Cognition Beyond Hilbert Space I: Fundamentals. (arXiv:1604.08268v1 [cs.AI]) Authors: Diederik Aerts, Lyneth Beltran, Massimiliano Sassoli de Bianchi, Sandro Sozzo, Tomas Veloz The formalism of quantum theory in Hilbert space has been applied with success to the modeling and explanation of several cognitive phenomena, whereas traditional cognitive approaches were problematical. However, this 'quantum cognition paradigm' was recently challenged by its proven impossibility to simultaneously model 'question order effects' and 'response replicability'. In Part I of this paper we describe sequential dichotomic measurements within an operational and realistic framework for human cognition elaborated by ourselves, and represent them in a quantum-like 'extended Bloch representation' where the Born rule of quantum probability does not necessarily hold. In Part II we apply this mathematical framework to successfully model question order effects, response replicability and unpacking effects, thus opening the way toward quantum cognition beyond Hilbert space. Quantum cognition beyond Hilbert space II: Applications. 
(arXiv:1604.08270v1 [cs.AI]) The research on human cognition has recently benefited from the use of the mathematical formalism of quantum theory in Hilbert space. However, cognitive situations exist which indicate that the Hilbert space structure, and the associated Born rule, would be insufficient to provide a satisfactory modeling of the collected data, so that one needs to go beyond Hilbert space. In Part I of this paper we follow this direction and present a general tension-reduction (GTR) model, in the ambit of an operational and realistic framework for human cognition. In this Part II we apply this non-Hilbertian quantum-like model to faithfully reproduce the probabilities of the 'Clinton/Gore' and 'Rose/Jackson' experiments on question order effects. We also explain why the GTR-model is needed if one wants to deal, in a fully consistent way, with response replicability and unpacking effects. Optimal processes for probabilistic work extraction beyond the second law. (arXiv:1604.08094v1 [quant-ph]) on 2016-4-28 2:07am GMT Authors: Vasco Cavina, Andrea Mari, Vittorio Giovannetti According to the second law of thermodynamics, for every transformation performed on a system which is in contact with an environment of fixed temperature, the extracted work is bounded by the decrease of the free energy of the system. However, in a single realization of a generic process, the extracted work is subject to statistical fluctuations which may allow for probabilistic violations of the previous bound. We are interested in enhancing this effect, i.e. we look for thermodynamic processes that maximize the probability of extracting work above a given arbitrary threshold. For any process obeying the Jarzynski identity, we determine an upper bound for the work extraction probability that depends also on the minimum amount of work that we are willing to extract in case of failure, or on the average work we wish to extract from the system. Then we show that this bound can be saturated within the thermodynamic formalism of quantum discrete processes composed by sequences of unitary quenches and complete thermalizations. We explicitly determine the optimal protocol which is given by two quasi-static isothermal transformations separated by a finite unitary quench. Locally Causal and Deterministic Interpretations of Quantum Mechanics: Parallel Lives and Cosmic Inflation. (arXiv:1604.07874v1 [quant-ph]) Authors: Mordecai Waegell Several locally deterministic interpretations of quantum mechanics are presented and reviewed. The fundamental differences between these interpretations are made transparent by explicitly showing what information is carried locally by each physical system in an idealized experimental test of Bell's theorem. This also shows how each of these models can be locally causal and deterministic. First, a model is presented which avoids Bell's arguments through the assumption that space-time inflated from an initial singularity, which encapsulates the entire past light cone of every event in the universe. From this assumption, it is shown how quantum mechanics can produce locally consistent reality by choosing one of many possible futures at the time of the singularity. Secondly, we review and expand the Parallel Lives interpretation of Brassard and Raymond-Robichaud, which maintains local causality and determinism by abandoning the strictest notion of realism. 
Finally, the two ideas are combined, resulting in a parallel lives model in which lives branch apart earlier, under the assumption of a single unified interaction history. The physical content of weak values within each model is discussed, along with related philosophical issues concerning free will. Quantum Shannon Theory. (arXiv:1604.07450v1 [quant-ph] CROSS LISTED) hep-th updates on arXiv.org Authors: John Preskill This is the 10th and final chapter of my book on Quantum Information, based on the course I have been teaching at Caltech since 1997. An early version of this chapter (originally Chapter 5) has been available on the course website since 1998, but this version is substantially revised and expanded. The level of detail is uneven, as I've aimed to provide a gentle introduction, but I've also tried to avoid statements that are incorrect or obscure. Generally speaking, I chose to include topics that are both useful to know and relatively easy to explain; I had to leave out a lot of good stuff, but on the other hand the chapter is already quite long. This is a working draft of Chapter 10, which I will continue to update. See the URL on the title page for further updates and drafts of other chapters, and please send me an email if you notice errors. Eventually, the complete book will be published by Cambridge University Press. Quantum Statistical Mechanical Derivation of the Second Law of Thermodynamics: A Hybrid Setting Approach PRL: General Physics: Statistical and Quantum Mechanics, Quantum Information, etc. on 2016-4-27 2:00pm GMT Author(s): Hal Tasaki Based on quantum statistical mechanics and microscopic quantum dynamics, we prove Planck's and Kelvin's principles for macroscopic systems in a general and realistic setting. We consider a hybrid quantum system that consists of the thermodynamic system, which is initially in thermal equilibrium, and… [Phys. Rev. Lett. 116, 170402] Published Wed Apr 27, 2016 Towards better understanding of QBism. (arXiv:1604.07766v1 [quant-ph]) Authors: Andrei Khrennikov Recently I posted a paper entitled "External observer reflections on QBism". As any external observable, I was not able to reflect some features of QBism properly. Therefore comments which I received from one of its creators, C. Fuchs, are very valuable – to understand better the views of QBists. Some of QBism features are very delicate and to extract them from articles of QBists is not a simple task. Therefore I hope that the second portion of my reflection on QBism (or better to say my reflections on Fuchs' reflections on my reflections) might be interesting and useful for other experts in quantum foundations and quantum information theory (especially by taking into account my previous aggressively anti-QBism position). In the present paper I correct some of my previously posted critical comments on QBism. At the same time better understanding of QBists views on some problems leads to improvement and strengthening of other critical comments. Bounding quantum gravity inspired decoherence using atom interferometry. (arXiv:1604.07810v1 [quant-ph]) Authors: Jiří Minář, Pavel Sekatski, Nicolas Sangouard Hypothetical models have been proposed in which explicit collapse mechanisms prevent the superposition principle to hold at large scales. In particular, the model introduced by Ellis and co-workers [Phys. Lett.
B ${\bf 221}$, 113 (1989)] suggests that quantum gravity might be responsible for the collapse of the wavefunction of massive objects in spatial superpositions. We here consider a recent experiment reporting on interferometry with atoms delocalized over half a meter for timescale of a second [Nature ${\bf 528}$, 530 (2015)] and show that the corresponding data strongly bound quantum gravity induced decoherence and rule it out in the parameter regime considered originally. Bypassing the Groenewold–van Hove Obstruction: A Physically Meaningful Quantization for R2n. (arXiv:1604.07754v1 [math-ph]) Authors: Maurice de Gosson There are known obstructions to a full geometric quantization of R2n, the most known being the Groenewold-van Hove no-go result. We show, following a suggestion of S. Kauffmann, that it is possible to construct a unique quantization procedure by weakening the usual requirement that commutators should correspond to Poisson brackets. The weaker requirement consists in demanding that this correspondence should only hold for Hamiltonian functions of the type T(p)+V(q). This reformulation leads to a non-injective quantization of all tempered distributions on R2n which, when restricted to polynomials, is the rule proposed by the physicists Born and Jordan in the early days of quantum mechanics. Popescu-Rohrlich correlations imply efficient instantaneous nonlocal quantum computation. (arXiv:1512.04930v2 [quant-ph] UPDATED) Authors: Anne Broadbent In instantaneous nonlocal quantum computation, two parties cooperate in order to perform a quantum computation on their joint inputs, while being restricted to a single round of simultaneous communication. Previous results showed that instantaneous nonlocal quantum computation is possible, at the cost of an exponential amount of prior shared entanglement (in the size of the input). Here, we show that a linear amount of entanglement suffices, (in the size of the computation), as long as the parties share nonlocal correlations as given by the Popescu-Rohlich box. This means that communication is not required for efficient instantaneous nonlocal quantum computation. Exploiting the well-known relation to position-based cryptography, our result also implies the impossibility of secure position-based cryptography against adversaries with non-signalling correlations. Furthermore, our construction establishes a quantum analogue of the classical communication complexity collapse under non-signalling correlations. Trading coherence and entropy by a quantum Maxwell demon. (arXiv:1604.07557v1 [quant-ph]) Authors: A. V. Lebedev, D. Oehri, G. B. Lesovik, G. Blatter The Second Law of Thermodynamics states that the entropy of a closed system is non-decreasing. Discussing the Second Law in the quantum world poses new challenges and provides new opportunities, involving fundamental quantum-information-theoretic questions and novel quantum-engineered devices. In quantum mechanics, systems with an evolution described by a so-called unital quantum channel evolve with a non-decreasing entropy. Here, we seek the opposite, a system described by a non-unital and, furthermore, energy-conserving channel that describes a system whose entropy decreases with time. We propose a setup involving a mesoscopic four-lead scatterer augmented by a micro-environment in the form of a spin that realizes this goal. Within this non-unital and energy-conserving quantum channel, the micro-environment acts with two non-commuting operations on the system in an autonomous way. 
We find, that the process corresponds to a partial exchange or swap between the system and environment quantum states, with the system's entropy decreasing if the environment's state is more pure. This entropy-decreasing process is naturally expressed through the action of a quantum Maxwell demon and we propose a quantum-thermodynamic engine with four qubits that extracts work from a single heat reservoir when provided with a reservoir of pure qubits. The special feature of this engine, which derives from the energy-conservation in the non-unital quantum channel, is its separation into two cycles, a working cycle and an entropy cycle, allowing to run this engine with no local waste heat. Quantum circuit dynamics via path integrals: Is there a classical action for discrete-time paths?. (arXiv:1604.07452v1 [quant-ph]) Authors: Mark D. Penney, Dax Enshan Koh, Robert W. Spekkens It is straightforward to give a sum-over-paths expression for the transition amplitudes of a quantum circuit as long as the gates in the circuit are balanced, where to be balanced is to have all nonzero transition amplitudes of equal magnitude. Here we consider the question of whether, for such circuits, the relative phases of different discrete-time paths through the configuration space can be defined in terms of a classical action, as they are for continuous-time paths. We show how to do so for certain kinds of quantum circuits, namely, Clifford circuits where the elementary systems are continuous-variable systems or discrete systems of odd-prime dimension. These types of circuit are distinguished by having phase-space representations that serve to define their classical counterparts. For discrete systems, the phase-space coordinates are also discrete variables. We show that for each gate in the generating set, one can associate a symplectomorphism on the phase-space and to each of these one can associate a generating function, defined on two copies of the configuration space. For discrete systems, the latter association is achieved using tools from algebraic geometry. Finally, we show that if the action functional for a discrete-time path through a sequence of gates is defined using the sum of the corresponding generating functions, then it yields the correct relative phases for the path-sum expression. These results are likely to be relevant for quantizing physical theories where time is fundamentally discrete, characterizing the classical limit of discrete-time quantum dynamics, and proving complexity results for quantum circuits. Single-world interpretations of quantum theory cannot be self-consistent. (arXiv:1604.07422v1 [quant-ph]) Authors: Daniela Frauchiger, Renato Renner According to quantum theory, a measurement may have multiple possible outcomes. Single-world interpretations assert that, nevertheless, only one of them "really" occurs. Here we propose a gedankenexperiment where quantum theory is applied to model an experimenter who herself uses quantum theory. We find that, in such a scenario, no single-world interpretation can be logically consistent. This conclusion extends to deterministic hidden-variable theories, such as Bohmian mechanics, for they impose a single-world interpretation. Classical approximations of relativistic quantum physics. 
(arXiv:1604.07654v1 [quant-ph]) Authors: Glenn Eric Johnson A correspondence of classical to quantum physics studied by Schrödinger and Ehrenfest applies without the necessity of technical conjecture that classical observables are associated with Hermitian Hilbert space operators. This correspondence provides appropriate nonrelativistic classical interpretations to realizations of relativistic quantum physics that are incompatible with the canonical formalism. Using this correspondence, Newtonian mechanics for a $1/r$ potential provides approximations for the dynamics of nonrelativistic classical particle states within unconstrained quantum field theory (UQFT). Replacing the Singlet Spinor of the EPR-B Experiment in the Configuration Space with Two Single-Particle Spinors in Physical Space Latest Results for Foundations of Physics Recently, for spinless non-relativistic particles, Norsen (Found Phys 40:1858–1884, 2010) and Norsen et al. (Synthese 192:3125–3151, 2015) show that in the de Broglie–Bohm interpretation it is possible to replace the wave function in the configuration space by single-particle wave functions in physical space. In this paper, we show that this replacement of the wave function in the configuration space by single-particle functions in 3D space is also possible for particles with spin, in particular for the particles of the EPR-B experiment, the Bohm version of the Einstein–Podolsky–Rosen experiment. Fine Structure Constant: Theme With Variations. (arXiv:1604.07092v1 [gr-qc]) gr-qc updates on arXiv.org Authors: V. B. Bezerra, M. S. Cunha, C. R. Muniz, M. O. Tahim, H. S. Vieira In this paper, we study the spatial variation of the fine structure constant $\alpha$ due to the presence of a static and spherically symmetric gravitational source. The procedure consists of calculating the solution including the energy eigenvalues of a massive scalar field around that source, considering the weak-field regime, which yields the gravitational analog of the atomic Bohr levels. From this result, we obtain several values for the effective $\alpha$ by considering some scenarios of semi-classical and quantum gravities. Constraints on the parameters of the involved theories are calculated from astrophysical observations of the white dwarf emission spectra. Such constraints are compared with those obtained in the literature. Emergence, causation and storytelling: condensed matter physics and the limitations of the human mind. (arXiv:1604.06845v1 [physics.hist-ph]) physics.hist-ph updates on arXiv.org The physics of matter in the condensed state is concerned with problems in which the number of constituent particles is vastly greater than can be easily comprehended. The inherent physical limitations of the human mind are fundamental and restrict the way in which we can interact with and learn about the universe. This presents challenges for developing scientific explanations that are met by emergent narratives, concepts and arguments that have a non-trivial relationship to the underlying microphysics. By examining examples within condensed matter physics, and also from cellular automata, I show how such emergent narratives efficiently describe elements of reality. A strict epistemic approach to physics. (arXiv:1601.00680v2 [quant-ph] UPDATED) Authors: Per Östborn The general view is that all fundamental physical laws should be formulated within the framework given by quantum mechanics (QM). In a sense, QM therefore has the character of a metaphysical theory.
Consequently, if it is possible to derive QM from more basic principles, these principles should be of general, philosophical nature. Here, we derive the formalism of QM from well-motivated epistemic principles. A key assumption is that a physical theory that relies on entities or distinctions that are unknowable in principle gives rise to wrong predictions. First, an epistemic formalism is developed, using concepts like knowledge and potential knowledge, identifying a physical state $S$ with the potential knowledge of the physical world. It is demonstrated that QM emerges from this formalism. However, Hilbert spaces, wave functions and probabilities are defined in certain well-defined observational contexts only. This means that the epistemic formalism is broader than QM. In the fundamental layer of description, the physical state $S$ is a subset of a state space $\mathcal{S}=\{Z\}$, such that $S$ always contains many elements $Z$. These elements correspond to unattainable states of complete knowledge of the world. The evolution of $S$ cannot be determined in terms of the individual evolution of the elements $Z$, unlike the evolution of an ensemble in classical phase space. The evolution of $S$ is described in terms of sequential time $n\in \mathbf{\mathbb{N}}$, which is updated according to $n\rightarrow n+1$ each time an event occurs, each time potential knowledge changes. Sequential time $n$ can be separated from relational time $t$, which describes distances between events in space-time. There is an entire space-time associated with each $n$, in which $t$ represents the knowledge at sequential time $n$ about the temporal relations between present and past events. Quantum Measurement, Complexity and Discrete Physics. (arXiv:quant-ph/0310033v2 UPDATED) Authors: Martin Leckey This paper presents a new modified quantum mechanics, Critical Complexity Quantum Mechanics, which includes a new account of wavefunction collapse. This modified quantum mechanics is shown to arise naturally from a fully discrete physics, where all physical quantities are discrete rather than continuous. I compare this theory with the spontaneous collapse theories of Ghirardi, Rimini, Weber and Pearle and discuss some implications of the theory for a realist view of the quantum realm. Logical inference approach to relativistic quantum mechanics: derivation of the Klein-Gordon equation. (arXiv:1604.07265v1 [quant-ph]) Authors: H. C. Donker, M. I. Katsnelson, H. De Raedt, K. Michielsen The logical inference approach to quantum theory, proposed earlier [Ann. Phys. 347 (2014) 45-73], is considered in a relativistic setting. It is shown that the Klein-Gordon equation for a massive, charged, and spinless particle derives from the combination of the requirements that the space-time data collected by probing the particle is obtained from the most robust experiment and that on average, the classical relativistic equation of motion of a particle holds. Quantum features of natural cellular automata. (arXiv:1604.06652v1 [quant-ph]) on 2016-4-25 12:06pm GMT Authors: Hans-Thomas Elze Cellular automata can show well known features of quantum mechanics, such as a linear rule according to which they evolve and which resembles a discretized version of the Schroedinger equation. This includes corresponding conservation laws. The class of "natural" Hamiltonian cellular automata is based exclusively on integer-valued variables and couplings and their dynamics derives from an Action Principle. 
They can be mapped reversibly to continuum models by applying Sampling Theory. Thus, "deformed" quantum mechanical models with a finite discreteness scale $l$ are obtained, which for $l\rightarrow 0$ reproduce familiar continuum results. We have recently demonstrated that such automata can form "multipartite" systems consistently with the tensor product structures of nonrelativistic many-body quantum mechanics, while interacting and maintaining the linear evolution. Consequently, the Superposition Principle fully applies for such primitive discrete deterministic automata and their composites and can produce the essential quantum effects of interference and entanglement. Relativistic collapse dynamics and black hole information loss. (arXiv:1604.06537v1 [gr-qc]) Authors: Daniel Bedingham, Sujoy K. Modak, Daniel Sudarsky We study a proposal for the resolution of the black hole information puzzle within the context of modified versions of quantum theory involving spontaneous reduction of the quantum state. The theories of this kind, which were developed in order to address the so-called "measurement problem" in quantum theory, have, in the past, been framed in a non-relativistic setting and in that form they were previously applied to the black hole information problem. Here, and for the first time, we show, in a simple toy model, a treatment of the problem within a fully relativistic setting. We also discuss the issues that the present analysis leaves as open problems to be dealt with in future refinements of the present approach. Spacetime Quanta?: Real Discrete Spectrum of a Quantum Spacetime Four-Volume Operator in Unimodular Loop Quantum Cosmology. (arXiv:1604.06584v1 [gr-qc]) Authors: Joseph Bunao This study considers the operator $\hat{T}$ corresponding to the classical spacetime four-volume $T$ of a finite patch of spacetime in the context of Unimodular Loop Quantum Cosmology for the homogeneous and isotropic model with flat spatial sections and without matter sources. Since $T$ is canonically conjugate to the cosmological "constant" $\Lambda$, the operator $\hat{T}$ is constructed by solving its canonical commutation relation with $\hat{\Lambda}$ – the operator corresponding to $\Lambda$. This is done by expanding $\hat{T}$ in terms of Bender-Dunne-like basis operators $\hat{T}_{m,n}$ and solving for the expansion coefficients. This conjugacy, along with the action of $\hat{T}$ on definite volume states reducing to $T$, allows us to interpret that $\hat{T}$ is indeed a quantum spacetime four-volume operator. The eigenstates $\Phi_{\tau}$ are calculated and, considering $\tau\in\mathbb{R}$, we find that the $\Phi_{\tau}$'s are normalizable, suggesting that the real line $\mathbb{R}$ is in the discrete spectrum of $\hat{T}$. The real spacetime four-volume $\tau$ is then discrete or quantized. Is General Relativity a (partial) Return of Aristotelian Physics? (arXiv:1604.06491v1 [physics.hist-ph]) Aristotle split physics at the sphere of the moon; above this sphere there is no change except eternal spherical motion, below are two different kinds of motion: natural motion (without specific cause) and enforced motion. In the modern view, motion is caused by gravity and by other forces. The split at the sphere of the moon was definitively overcome through the observation of a supernova and several comets by Tycho Brahe. The second distinction was eradicated by Isaac Newton, who showed that gravitational motion was caused by a force proportional to the inverse square of the distance.
By the theory of General Relativity, Albert Einstein showed that there is no gravitational force; rather, motion under gravity (i.e. Aristotle's "natural motion") is caused by the curved geometry of spacetime. In this way, the Aristotelian distinction between natural motion and enforced motion has come back in the form of two great theories: General Relativity and Quantum Field Theory, which are today incompatible. To find a way out of this dilemma is the challenge of modern physics. Free Will – A road less travelled in quantum information. (arXiv:1604.06489v1 [physics.hist-ph]) Conway and Kochen's Free Will Theorem is examined as an important foundational element in a new area of activity in computer science – developing protocols for quantum computing. Irreversibility in physics stemming from unpredictable symbol-handling agents. (arXiv:1604.06771v1 [quant-ph]) Authors: John M. Myers, F. Hadi Madjid The basic equations of physics involve a time variable t and are invariant under the transformation $t \to -t$. This invariance at first sight appears to impose time reversibility as a principle of physics, in conflict with thermodynamics. But equations written on the blackboard are not the whole story in physics. In prior work we sharpened a distinction obscured in today's theoretical physics, the distinction between obtaining evidence from experiments on the laboratory bench and explaining that evidence in mathematical symbols on the blackboard. The sharp distinction rests on a proof within the mathematics of quantum theory that no amount of evidence, represented in quantum theory in terms of probabilities, can uniquely determine its explanation in terms of wave functions and linear operators. Building on the proof we show here a role in physics for unpredictable symbol-handling agents acting both at the blackboard and at the workbench, communicating back and forth by means of transmitted symbols. Because of their unpredictability, symbol-handling agents introduce a heretofore overlooked source of irreversibility into physics, even when the equations they write on the blackboard are invariant under $t \to -t$. Widening the scope of descriptions admissible to physics to include the agents and the symbols that link theory to experiments opens up a new source of time-irreversibility in physics. Uncertain for A Century: Quantum Mechanics and The Dilemma of Interpretation. (arXiv:1604.06488v1 [physics.hist-ph]) Quantum Mechanics, the physical theory describing the microworld, represents one of science's greatest triumphs. It lies at the root of all modern digital technologies and offers unparalleled correspondence between prediction and experiments. Remarkably, however, after more than 100 years it is still unclear what quantum mechanics means in terms of basic philosophical questions about the nature of reality. While there are many interpretations of the mathematical machinery of quantum physics, there remains no experimental means to distinguish between most of them. In this contribution (based on a discussion at the NYAS), I wish to consider the ways in which the enduring lack of an agreed-upon interpretation of quantum physics influences a number of critical philosophical debates about physics and reality. I briefly review two problems affected by quantum interpretations: the meaning of the term 'Universe' and the nature of consciousness. In what follows I am explicitly not advocating for any particular quantum interpretation.
Instead, I am interested in how the explicit inability of modern physics to experimentally distinguish between interpretations with wildly divergent ontological/epistemological implications plays into discussions of physics and its description of the world. arXiv:1604.02836 Symmetry and the Relativity of States and Observables in Quantum Mechanics Authors: Leon Loveridge, Paul Busch, Takayuki Miyadera Kazuya Okamura and Masanao Ozawa, Measurement Theory in Local Quantum Physics, Journal of Mathematical Physics 57 (1), 015209/1-015209/29 (2016). [Special Issue: Operator Algebras and Quantum Information Theory] http://dx.doi.org/10.1063/1.4935407 Approximating relational observables by absolute quantities: A quantum accuracy-size trade-off Authors: Takayuki Miyadera, Leon Loveridge, Paul Busch Journal-ref: J. Phys. A: Math. Theor. 49 185301 (2016)
Unless specific geographical region requirements apply, the base URL of Portfolio Optimizer is: https://api.portfoliooptimizer.io/. The current version number of Portfolio Optimizer is v1. Portfolio Optimizer can be used either as an anonymous user (no authentication information required; strict but reasonable API limits applied) or as an authenticated user (API key required in the HTTP headers; higher API limits applied). Let be: $n$, the number of assets $T$, the number of time periods $P_{t,i} \in \mathbb{R}^{+,*}$, the price of the asset $i$ at the time $t$, $i=1..n$, $t=1..T$ The arithmetic return $r_{t+1,i}$ of the asset $i$, $i=1..n$, over the period from the time $t$ to the time $t+1$, $t=1..T-1$, is defined as $$r_{t+1,i} = \frac{P_{t+1,i} - P_{t,i}}{P_{t,i}}$$ $r_1,...,r_T$, the arithmetic returns of the asset over each time period The average arithmetic return $\overline{r}$ of the asset over the $T$ time periods is defined as the arithmetic average of the arithmetic returns $r_1,...,r_T$, that is $$ \overline{r} = \frac{1}{T} \sum_{t=1}^{T} r_t $$ $r_i = (r_{1,i},...,r_{T,i}) \in \mathbb{R}^{T}$, the arithmetic return of the asset $i$, $i=1..n$ over each time period $t=1..T$ $\overline{r} = \left ( \overline{r_1}, ..., \overline{r_n} \right ) \in \mathbb{R}^n$, the average arithmetic return of the assets $1..n$ over the $T$ time periods The asset covariance matrix $\Sigma \in \mathcal{M}(\mathbb{R}^{n \times n})$ is defined by: $$\Sigma_{i,j} = \frac{1}{T} \sum_{k=1}^T (r_{k,i} - \overline{r_i}) (r_{k,j} - \overline{r_j}), i=1..n, j=1..n$$ Alternatively, let be: $C \in \mathcal{M}(\mathbb{R}^{n \times n})$, the asset correlation matrix $\sigma_1,...,\sigma_n$, the asset standard deviations (i.e., volatilities) $\sigma_1^2,...,\sigma_n^2$, the asset variances The asset covariance matrix $\Sigma \in \mathcal{M}(\mathbb{R}^{n \times n})$ is defined by: $$\Sigma_{i,j} = \sigma_i \sigma_j C_{i,j}, i=1..n, j=1..n$$ $\overline{r} = \left ( \overline{r_1}, ..., \overline{r_n} \right ) \in \mathbb{R}^n$, the average arithmetic return of the assets $1..n$ over the $T$ time periods $\lambda \in ]0,1[$ the decay factor The exponentially weighted asset covariance matrix $\Sigma \in \mathcal{M}(\mathbb{R}^{n \times n})$ is defined by: $$\Sigma_{i,j} = \frac{1 - \lambda}{1 - \lambda^{T}} \sum_{k=0}^{T-1} \lambda^{k} (r_{T-k,i} - \overline{r_i}) (r_{T-k,j} - \overline{r_j}), i=1..n, j=1..n$$ The decay factor $\lambda$ determines the weights applied to the returns, as well as the effective number of time periods used in computing the covariance matrix The decay factor $\lambda$ can also be defined in terms of the half-life $\tau$, which is the time taken by the weights to decay by $\frac{1}{2}$, through the relationship $$\tau = -\frac{\ln 2}{\ln \lambda} \Leftrightarrow \lambda = \left ( \frac{1}{2} \right )^{\frac{1}{\tau}} $$ Let $\Sigma \in \mathcal{M}(\mathbb{R}^{n \times n})$ be a matrix. $\Sigma$ is an asset covariance matrix if and only if: $\Sigma$ is symmetric, i.e. $\Sigma {}^t = \Sigma$ $\Sigma$ is positive semi-definite, i.e.
$x {}^t \Sigma x \geqslant 0, \forall x \in\mathbb{R}^n$ The asset correlation matrix $C \in \mathcal{M}(\mathbb{R}^{n \times n})$ is defined by: $$C_{i,j} = \frac{1}{T} \sum_{k=1}^T \frac{(r_{k,i} - \overline{r_i}) (r_{k,j} - \overline{r_j})}{\sigma_i \sigma_j}, i=1..n, j=1..n$$ $\Sigma \in \mathcal{M}(\mathbb{R}^{n \times n})$, the asset covariance matrix The asset correlation matrix $C \in \mathcal{M}(\mathbb{R}^{n \times n})$ is defined by: $$C_{i,j} = \frac{\Sigma_{i,j}}{\sigma_i \sigma_j}, i=1..n, j=1..n$$ Let $C \in \mathcal{M}(\mathbb{R}^{n \times n})$ be a matrix. $C$ is an asset correlation matrix if and only if: $C$ is symmetric, i.e. $C {}^t = C $ $C$ is unit diagonal, i.e. $C_{i,i} = 1, i=1..n $ $C$ is positive semi-definite, i.e. $x {}^t C x \geqslant 0, \forall x \in\mathbb{R}^n$ Let $n$ be the number of assets and $A \in \mathcal{M} \left( \mathbb{R}^{n \times n} \right)$ be an approximate asset correlation matrix (i.e., a matrix with no specific requirements) $\delta \in [0,1]$ $S_n^\delta = \{ X \in \mathcal{M} \left( \mathbb{R}^{n \times n} \right)$ such that $X {}^t = X$ and $\lambda_{min}(X) \geq \delta \}$ $\mathcal{N}$ the optional (so, possibly empty) index set of the fixed off-diagonal elements of the approximate correlation matrix $A$ $\mathcal{E}_n = \{ X \in \mathcal{M} \left( \mathbb{R}^{n \times n} \right)$ such that $X {}^t = X$ and $x_{ii} = 1, i = 1,...,n$ and $x_{ij}=a_{ij}$ for $(i,j) \in \mathcal{N} \}$ The nearest correlation matrix $C$ to the matrix $A$ is the solution of the problem: $$ C = \operatorname{argmin} \left\Vert X - A \right\Vert_F \text{ s.t. } X \in S_n^\delta \cap \mathcal{E}_n $$ The algorithm used internally to solve the optimization problem above is an alternating projection algorithm, similar to the algorithm described in the reference, with $\delta$ taken of order $10^{-4}$ to ensure that the computed correlation matrix $C$ is positive definite. If the set $\mathcal{N}$ is not empty, the optimization problem above might not have any solution, which will typically manifest by a response time out of this endpoint. 
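To make the nearest correlation matrix computation above concrete, here is a minimal Python/NumPy sketch of a Dykstra-corrected alternating projections scheme in the spirit of the referenced algorithm. It is an illustration only: it handles the eigenvalue floor $\delta$ but not the optional set $\mathcal{N}$ of fixed off-diagonal elements, and the function name and defaults are ours, not part of the Portfolio Optimizer API.

```python
import numpy as np

def nearest_correlation_matrix(A, delta=1e-4, max_iter=500, tol=1e-9):
    """Approximate nearest correlation matrix to A in Frobenius norm,
    with all eigenvalues >= delta, via Dykstra-corrected alternating
    projections (illustrative sketch, no fixed off-diagonal support)."""
    A = (A + A.T) / 2.0              # symmetrize the input
    Y = A.copy()
    dS = np.zeros_like(A)            # Dykstra correction term
    for _ in range(max_iter):
        R = Y - dS
        # Projection onto {X symmetric : lambda_min(X) >= delta}
        vals, vecs = np.linalg.eigh(R)
        X = (vecs * np.maximum(vals, delta)) @ vecs.T
        dS = X - R
        Y_prev = Y
        Y = X.copy()
        np.fill_diagonal(Y, 1.0)     # Projection onto the unit-diagonal set
        if np.linalg.norm(Y - Y_prev, "fro") < tol:
            break
    return Y

# Example: repair a slightly inconsistent 3x3 "correlation" matrix
A = np.array([[1.0, 0.9, 0.7],
              [0.9, 1.0, 0.3],
              [0.7, 0.3, 1.0]])
C = nearest_correlation_matrix(A)
print(np.round(C, 4), np.linalg.eigvalsh(C).min())
```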
Additionally, let be: $n_p$, the number of portfolios to simulate $w_{p} \in [0,1]^{n}$, the vector of the initial portfolio weights of the $p$-th portfolio to simulate, $p=1..n_p$, with $\sum_{i=1}^{n} w_{p,i} = 1$ $V_{t, p} \in \mathbb{R}^{+,*}$, the value of the $p$-th portfolio to simulate at the time $t$, $p=1..n_p$, $t=1..T$ Then, for $p=1..n_p$ and $t=1..T$ : $$ V_{t, p} = V_{1, p} \sum_{i=1}^{n} w_{p,i} \frac{P_{t,i}}{P_{1,i}}$$ By convention, $V_{1, p} = 100, p = 1..n_p$ $w_{p} \in [0,1]^{n}$, the vector of the fixed portfolio weights of the $p$-th portfolio to simulate, $p=1..n_p$, with $\sum_{i=1}^{n} w_{p,i} = 1$ Then, for $p=1..n_p$ and $t=2..T$ : $$ V_{t, p} = V_{t-1, p} \sum_{i=1}^{n} w_{p,i} \frac{P_{t,i}}{P_{t-1,i}}$$ Then, for $p=1..n_p$ and $t=2..T $: $$ V_{t, p} = V_{t-1, p} \sum_{i=1}^{n} w_{t-1,p,i} \frac{P_{t,i}}{P_{t-1,i}} $$ with $w_{t, p} \in [0,1]^{n}$ the vector of the $p$-th portfolio weights at the time $t$, $p=1..n_p$, $t=1..T$, generated at random and satisfying: $$ \begin{cases} 0 \leqslant w_{t,p,i} \leqslant 1, i = 1..n \newline \sum_{i=1}^{n} w_{t,p,i} = 1 \end{cases} $$ $\mu \in \mathbb{R}^{n}$, the vector of the asset arithmetic returns $n_p$, the number of portfolios $w_p \in [0,1]^{n}$, the vector of portfolio weights of the $p$-th portfolio, $p=1..n_p$ The arithmetic return of the $p$-th portfolio, $p=1..n_p$, is defined as: $$ \mu {}^t w_p $$ $T_p$, the number of time periods of the $p$-th portfolio, $p=1..n_p$ $V_{t, p} \in \mathbb{R}^{+,*}$, the value of the $p$-th portfolio at the time $t$, $p=1..n_p$, $t=1..T_p$ The arithmetic return of the $p$-th portfolio, $p=1..n_p$, is defined as:$$ \frac{V_{T_p,p} - V_{1,p}}{V_{1,p}}$$ The volatility $\sigma_p$ of the $p$-th portfolio, $p=1..n_p$, is defined as: $$ \sigma_p = \sqrt{ w_p {}^t \Sigma w_p} $$ $r_p = (r_{p,1},...,r_{p,T_p-1}) \in \mathbb{R}^{T_p-1}$, the arithmetic returns of the portfolio $p$ associated to the $T_p$ time periods, $p=1..n_p$ The volatility $\sigma_p$ of the $p$-th portfolio, $p=1..n_p$, is defined as the standard deviation of the arithmetic returns $r_p$: $$ \sigma_p = \sqrt{\frac{\sum_{t=1}^{T_p-1} (r_{p,t} - \overline{r_p})^2 }{T_p-1}} $$ With $\overline{r_p}$ the average arithmetic return of the $p$-th portfolio. In line with the second reference, the volatility is defined through the standard deviation and not through the sample standard deviation. 
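For illustration, the two deterministic simulation schemes defined above (buy-and-hold with the initial weights versus constant rebalancing to fixed weights) can be sketched in a few lines of NumPy; the price data below is invented and the function names are ours, not part of the API.

```python
import numpy as np

def simulate_buy_and_hold(P, w0, V1=100.0):
    """V_t = V_1 * sum_i w0_i * P_{t,i} / P_{1,i} (initial-weights scheme)."""
    P = np.asarray(P, dtype=float)            # shape (T, n)
    return V1 * (P / P[0]) @ np.asarray(w0)

def simulate_fixed_weights(P, w, V1=100.0):
    """V_t = V_{t-1} * sum_i w_i * P_{t,i} / P_{t-1,i} (rebalanced scheme)."""
    P = np.asarray(P, dtype=float)
    gross = (P[1:] / P[:-1]) @ np.asarray(w)  # per-period portfolio gross returns
    return V1 * np.concatenate(([1.0], np.cumprod(gross)))

# Hypothetical prices for n = 2 assets over T = 4 time periods
P = [[10.0, 20.0],
     [11.0, 19.0],
     [12.0, 21.0],
     [11.5, 22.0]]
w = [0.6, 0.4]
print(simulate_buy_and_hold(P, w))
print(simulate_fixed_weights(P, w))
```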
$r_f \in \mathbb{R}$, the value of the risk free rate $w \in [0,1]^{n}$, the vector of portfolio weights The Sharpe ratio $SR$ of the portfolio is defined as: $$ SR = \frac{ \mu{}^t w - r_f}{\sqrt{ w {}^t \Sigma w }} $$ $V_{t} \in \mathbb{R}^{+,*}$, the value of the portfolio at the time $t$, $t=1..T$ $\overline{r}$, the average arithmetic return of the portfolio $\sigma_p$, the volatility of the portfolio The Sharpe ratio $SR$ of the portfolio is defined as: $$ SR = \frac{\overline{r} - r_f}{\sigma_p} $$ $\sigma = (\sigma_1,...,\sigma_n)$ the vector of the assets volatilities The diversification ratio $DR$ of the portfolio is defined as: $$ DR = \frac{ \sigma{}^t w}{\sqrt{ w {}^t \Sigma w }} $$ The diversification ratio $DR$ of the portfolio is defined as: $$ DR = \overline{DR} \; \rho_{P, \overline{P}} $$, with $\overline{DR}$, the diversification ratio of the long-short most diversified portfolio computed over the universe of the $n$ assets using the covariance matrix of the assets arithmetic returns $\overline{P}_{t,i} \in \mathbb{R}^{+,*}$, the value of the long-short most diversified portfolio at the time $t$, $t=1..T$ $\rho_{P, \overline{P}}$, the correlation between the arithmetic returns of the portfolio and the arithmetic returns of the long-short most diversified portfolio The second formulation makes it possible to compute the (realized) diversification ratio of a portfolio with unknown composition in terms of asset weights, but in this case, it might happen that the diversification ratio of the portfolio is lower than 1. If so, this either means that the portfolio is not long-only or that the portfolio is invested in other assets than the assets for which prices are provided. The second formulation makes the assumption that the portfolio and the long-short most diversified portfolio are constantly rebalanced to maintain their respective assets weights. The return contribution of the $i$-th asset to the return of the $p$-th portfolio, $i=1..n$ and $p=1..n_p$, is defined as: $$ w_{p,i} \mu_i $$ $n_k$, the optional number of groups of assets $\mathcal{N}_1,...,\mathcal{N}_{n_k}$ the optional $n_k$ groups of assets The return contribution of the group of assets $\mathcal{N}_k$ to the return of the $p$-th portfolio, $k=1..n_k$ and $p=1..n_p$, is defined as: $$ \sum_{j \in \mathcal{N}_k} w_{p,j} \mu_j $$ Return contribution analysis is also known as absolute return attribution analysis, because there is no reference to a benchmark In contribution analysis, a group of assets is also known as a segment, and is usually made of assets sharing common characteristics such as the asset class, the country, the industrial sector, etc.
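The weight-based formulas above for the Sharpe ratio, the diversification ratio and the return contributions translate directly into NumPy. The sketch below uses invented figures and our own variable names, and assumes $\mu$, $\Sigma$ and $r_f$ are all expressed for the same return frequency.

```python
import numpy as np

mu    = np.array([0.05, 0.08, 0.12])      # asset arithmetic returns
sigma = np.array([0.10, 0.15, 0.25])      # asset volatilities
C     = np.array([[1.0, 0.3, 0.2],
                  [0.3, 1.0, 0.5],
                  [0.2, 0.5, 1.0]])       # asset correlation matrix
Sigma = np.outer(sigma, sigma) * C        # Sigma_ij = sigma_i sigma_j C_ij
w     = np.array([0.5, 0.3, 0.2])
rf    = 0.02

port_vol    = np.sqrt(w @ Sigma @ w)
sharpe      = (mu @ w - rf) / port_vol    # SR = (mu'w - rf) / sqrt(w'Sigma w)
div_ratio   = (sigma @ w) / port_vol      # DR = sigma'w / sqrt(w'Sigma w)
ret_contrib = w * mu                      # w_i * mu_i, sums to the portfolio return mu'w
print(sharpe, div_ratio, ret_contrib, ret_contrib.sum())
```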
The risk contribution of the $i$-th asset to the risk of the $p$-th portfolio, $i=1..n$ and $p=1..n_p$, is defined as: $$ w_{p,i} \frac{(\Sigma w_p)_i}{\sqrt{w_p {}^t \Sigma w_p}} $$ The risk contribution of the group of assets $\mathcal{N}_k$ to the risk of the $p$-th portfolio, $k=1..n_k$ and $p=1..n_p$, is defined as: $$ \sum_{j \in \mathcal{N}_k} w_{p,j} \frac{(\Sigma w_p)_j}{\sqrt{w_p {}^t \Sigma w_p}} $$ The risk is defined in terms of standard deviation of the returns (i.e., volatility) $l \in [0,1]^{n} $, the optional minimum asset weights constraints $u \in [0,1]^{n} $, the optional maximum asset weights constraints $w_{min} \in [0,1]$, the optional minimum portfolio exposure $w_{max} \in [0,1]$, the optional maximum portfolio exposure $k$, the optional number of groups of assets $G \in \mathcal{M}(\mathbb{R}^{k \times n})$, the optional assets groups matrix defining $k$ group(s) of assets $u_g \in \mathbb{R}^{k}$, the optional maximum assets groups weights constraints The continuous mean-variance efficient frontier is the (infinite) set of portfolios whose weights $w \in [0,1]^{n}$ satisfy: $$ w = \operatorname{argmin} \frac{1}{2} w {}^t \Sigma w - \lambda \mu {}^t w \newline \textrm{s.t. } \begin{cases} l \leqslant w \leqslant u \newline Gw \leqslant u_g \newline w_{min} \leqslant \sum_{i=1}^{n} w_i \leqslant w_{max} \end{cases} $$ with $\lambda$ a parameter varying in $[0, +\infty[$. $n_p$, the number of portfolios to compute on the mean-variance efficient frontier The discretized mean-variance efficient frontier is the (finite) set of $n_p$ portfolios belonging to the continuous mean-variance efficient frontier with equally spaced arithmetic returns. The parameter $1/\lambda$ is usually called the risk aversion parameter When no group weights constraints are present, and if numerically possible, the algorithm used internally to solve the linearly constrained quadratic optimization problem above is the Critical Line Method from Harry Markowitz The continuous mean-variance minimum variance frontier is the (infinite) set of portfolios whose weights $w \in [0,1]^{n}$ satisfy: $$ w = \operatorname{argmin} \frac{1}{2} w {}^t \Sigma w - \lambda \mu {}^t w \newline \textrm{s.t. } \begin{cases} l \leqslant w \leqslant u \newline Gw \leqslant u_g \newline w_{min} \leqslant \sum_{i=1}^{n} w_i \leqslant w_{max} \end{cases} $$ with $\lambda$ a parameter varying in $]-\infty, +\infty[$. $n_p$, the number of portfolios to compute on the mean-variance minimum variance frontier The discretized mean-variance minimum variance frontier is the (finite) set of $n_p$ portfolios belonging to the continuous mean-variance minimum variance frontier with equally spaced arithmetic returns. $n_p$, the number of portfolios to construct The $n_p$ vectors of portfolio weights $w_p \in [0,1]^{n}$, $p=1..n_p$, are generated at random and satisfy: $$ \begin{cases} l \leqslant w_p \leqslant u \newline w_{min} \leqslant \sum_{i=1}^{n} w_{p,i} \leqslant w_{max} \end{cases} $$ $X \in \mathcal{R}^{n \times T}$, the matrix of the assets returns $r_b \in \mathbb{R}^{T}$, the returns of the benchmark The mimicking portfolio weights $w \in [0,1]^{n}$ satisfy: $$ w = \operatorname{argmin} \frac{1}{T} \lVert w {}^t X - r_b \rVert_2^2 \newline \textrm{s.t.
} \begin{cases} l \leqslant w \leqslant u \newline Gw \leqslant u_g \newline w_{min} \leqslant \sum_{i=1}^{n} w_i \leqslant w_{max} \end{cases} $$ The performance measure that is being minimized above is called the empirical tracking error The statistical technique used to construct the mimicking portfolio is called returns-based style analysis The equal-weighted portfolio weights $w \in [0,1]^{n}$ satisfy: $$ w_i = \frac{1}{n}, i=1..n$$ $\sigma_1^2,...,\sigma_n^2$, the assets variances The inverse variance-weighted portfolio weights $w \in [0,1]^{n}$ satisfy: $$ w_i = \frac{1/\sigma_i^2}{\sum_{j=1}^{n} 1/\sigma_j^2}, i=1..n$$ $\sigma_1,...,\sigma_n$, the assets volatilities The inverse volatility-weighted portfolio weights $w \in [0,1]^{n}$ satisfy: $$ w_i = \frac{1/\sigma_i}{\sum_{j=1}^{n} 1/\sigma_j}, i=1..n$$ The inverse volatility-weighted portfolio is also known as the naive-risk parity portfolio $mktcap_1,...,mktcap_n$ the assets market capitalizations The market capitalization-weighted portfolio weights $w \in [0,1]^{n}$ satisfy: $$ w_i = \frac{mktcap_i}{\sum_{j=1}^{n} mktcap_j}, i=1..n$$ The equal Sharpe ratio contributions portfolio weights $w \in [0,1]^{n}$ satisfy: $\forall i,j$ such that $\mu_i - r_f > 0$ and $\mu_j - r_f > 0$ $$ w_i \frac{\mu_i - r_f}{\sqrt{ w {}^t \Sigma w}} = w_j \frac{\mu_j - r_f}{\sqrt{ w {}^t \Sigma w}} $$ $\forall i$ such that $\mu_i - r_f \leq 0$ $$ w_i = 0 $$ The maximum return portfolio weights $w^* \in [0,1]^{n}$ satisfy: $$ w^* = \operatorname{argmax} w {}^t \mu \newline \textrm{s.t. } \begin{cases} l \leqslant w \leqslant u \newline Gw \leqslant u_g \newline w_{min} \leqslant \sum_{i=1}^{n} w_i \leqslant w_{max} \end{cases} $$ If some assets have identical returns, the maximum return portfolio will usually not be unique If some assets have identical returns, the maximum return portfolio will usually not be mean-variance efficient To enforce mean-variance efficiency, the covariance matrix of the assets $\Sigma \in \mathcal{M}(\mathbb{R}^{n \times n})$ must be provided The minimum variance portfolio weights $w^* \in [0,1]^{n}$ satisfy: $$ w^* = \operatorname{argmin} w {}^t \Sigma w \newline \textrm{s.t. } \begin{cases} l \leqslant w \leqslant u \newline Gw \leqslant u_g \newline w_{min} \leqslant \sum_{i=1}^{n} w_i \leqslant w_{max} \end{cases} $$ If the asset covariance matrix is not positive definite, the minimum variance portfolio will usually not be unique If the asset covariance matrix is not positive definite, the minimum variance portfolio will usually not be mean-variance efficient To enforce mean-variance efficiency, the arithmetic returns of the assets $\mu \in \mathbb{R}^{n}$ must be provided The equal risk contributions portfolio weights $w^* \in [0,1]^{n}$ satisfy: $$ w^* = \operatorname{argmin} \sqrt{ w {}^t \Sigma w} - \frac{\lambda}{n} \sum_{i=1}^{n} \ln(w_i) \newline \textrm{s.t. }l \leqslant w \leqslant u $$ with $\lambda \in \mathbb{R}^{+,*}$ a parameter to be determined such that $\sum_{i=1}^{n} w_i = 1$. Such a $\lambda$ might not exist, in which case the optimization problem is not feasible and the vector $w$ is undefined The algorithm used internally to solve the optimization problem above is a cyclical coordinate descent algorithm, similar to the algorithm described in the reference The maximum decorrelation portfolio weights $w^* \in [0,1]^{n}$ satisfy: $$ w^* = \operatorname{argmax} 1 - w {}^t C w \newline \textrm{s.t. 
} \begin{cases} l \leqslant w \leqslant u \newline Gw \leqslant u_g \newline w_{min} \leqslant \sum_{i=1}^{n} w_i \leqslant w_{max} \end{cases} $$ The maximum Sharpe ratio portfolio weights $w^* \in [0,1]^{n}$ satisfy: $$ w^* = \operatorname{argmax} \frac{w {}^t \mu - r_f }{\sqrt{ w {}^t \Sigma w}} \newline \textrm{s.t. } \begin{cases} l \leqslant w \leqslant u \newline Gw \leqslant u_g \newline w_{min} \leqslant \sum_{i=1}^{n} w_i \leqslant w_{max} \end{cases} $$ The value of the risk free rate $r_f$ is usually either taken as the interest rate on a riskless asset like cash or as the interest rate on borrowings The maximum Sharpe ratio portfolio is mean-variance efficient The minimum correlation portfolio is a portfolio where the assets are weighted proportionally to their average correlation with each other. In more details: The correlation matrix of the assets $C$ is converted to an adjusted correlation matrix $C'$ that does not have negative values, penalizing high correlation and rewarding low correlation The assets that act as portfolio diversifiers are then initially weighted more heavily than the others, using the adjusted correlation matrix $C'$ The initial weights are then normalized by the assets inverse volatilities $1/\sigma_i, i=1..n$, to ensure that each asset contributes to the same level of portfolio risk The most diversified portfolio weights $w^* \in [0,1]^{n}$ satisfy: $$ w^* = \operatorname{argmax} \frac{w {}^t \sigma }{\sqrt{ w {}^t \Sigma w}} \newline \textrm{s.t. } \begin{cases} l \leqslant w \leqslant u \newline Gw \leqslant u_g \newline w_{min} \leqslant \sum_{i=1}^{n} w_i \leqslant w_{max} \end{cases} $$ A mean-variance efficient portfolio is a portfolio whose weights $w^* \in [0,1]^{n}$ satisfy: $$ \exists \lambda \in [0, +\infty[, w^* = \operatorname{argmin} \frac{1}{2} w {}^t \Sigma w - \lambda \mu {}^t w \newline \textrm{s.t. } \begin{cases} l \leqslant w \leqslant u \newline Gw \leqslant u_g \newline w_{min} \leqslant \sum_{i=1}^{n} w_i \leqslant w_{max} \end{cases} $$ The parameter $\lambda$ is usually called the risk tolerance parameter, and is determined by an additional constraint on the portfolio: A return constraint $r_c$, in which case $\lambda$, if it exists, is determined such that the portfolio has a return equal to $r_c$ A volatility constraint $v_c \geq 0$, in which case $\lambda$, if it exists, is determined such that the portfolio has a volatility equal to $v_c$ A risk tolerance constraint $\lambda_c \geq 0$, in which case $\lambda$ always exist and is equal to $\lambda_c$ $w_t \in [0,1]^{n}$, the desired portfolio weights, with $\sum_{i=1}^{n} w_{t,i} = 1$ $TV$, the desired portfolio monetary value $P_1,...,P_n$, the prices of the assets $1,...,n$ $nl_1,...,nl_n$, the number of shares by which to purchase the assets $1,...,n$ $ml_1,...,ml_n$, the minimum number of shares to purchase for the assets $1,...,n$ $mv_1,...,mv_n$, the minimum monetary amount to purchase for the assets $1,...,n$ The investable portfolio weights $w \in [0,1]^{n}$ closest to the desired portfolio weights $w_t$ satisfy: $$ w = \operatorname{argmin} \frac{1}{2} \lVert w - w_t \rVert_2^2 \newline \textrm{s.t. 
} \begin{cases} \sum_{i=1}^{n} w_i \leq 1 \newline w_i = \frac{k_i nl_i P_i}{TV}, k_i \in \mathbb{N}, i=1,...,n \newline k_i \neq 0 \implies k_i nl_i \geq ml_i, i=1,...,n \newline k_i \neq 0 \implies k_i nl_i P_i \geq mv_i, i=1,...,n \end{cases} $$ Unfortunately, the above optimization problem is computationally intractable due to the integer constraints, so that it is only possible to compute an approximate solution. In case the desired portfolio weights $w_t$ do not satisfy $\sum_{i=1}^{n} w_{t,i} = 1$, the optimization problem above is reformulated to try to best accommodate the situation In case at least one group of assets is present, the investable portfolio weights $w \in [0,1]^{n}$ satisfy a more complex optimization problem than above, additionally involving: An assets groups matrix that identifies the membership of each asset within each assets group The desired portfolio groups weights The desired portfolio maximum groups weights In case at least one group of assets is present and the desired asset weights, assets groups weights and maximum assets groups weights are incompatible, the optimization problem above is reformulated to try to best accommodate the situation $r_b = (r_{b,1},...,r_{b,T}) \in \mathbb{R}^{T}$, the returns of the benchmark $r_p = (r_{p,1},...,r_{p,T}) \in \mathbb{R}^{T}$, the returns of the portfolio $p$, $p=1..n_p$ The tracking error of the $p$-th portfolio, $p=1..n_p$, is defined as the volatility of the difference of the returns of the portfolio and of the returns of the benchmark over the $T$ time periods: $$ \sqrt{\frac{\sum_{t=1}^{T} (r_{b,t} - r_{p,t})^2 }{T}} $$ The tracking error is sometimes defined differently, for example as the absolute difference in returns between a portfolio and a benchmark. The definition above corresponds to the most commonly used definition. $m$, the number of factors $X \in \mathcal{R}^{m \times T}$, the matrix of the factors returns The returns $R_{res, i} \in \mathcal{R}^{T}$ of the residualized factor $i \in {1..m}$ are defined as: $$ R_{res, i} {}^t = X_i - \alpha - \beta {}^t X_{-i} $$ where: $X_i$ represents the row $i$ of the matrix $X$ $X_{-i}$ represents the matrix $X$ after removing the row $i$ $(\alpha, \beta)$ is the unique solution of minimum euclidean norm of the linear least squares problem $$ \operatorname{argmin_{(\alpha \in \mathcal{R}, \beta \in \mathcal{R}^{m})}} \lVert X_i - \alpha - \beta {}^t X_{-i} \rVert_2^2 $$ The exposures $\beta_p \in \mathcal{R}^{m}$ of the $p$-th portfolio to the $m$ factors, $p=1..n_p$, are defined as the unique solution of minimum euclidean norm of the linear least squares problem: $$ \operatorname{argmin_{(\alpha_p \in \mathcal{R}, \beta_p \in \mathcal{R}^{m})}} \lVert r_p {}^t - \alpha_p - \beta_p {}^t X \rVert_2^2 $$ $\alpha_p$ represents the portion of the portfolio $p$ returns that cannot be attributed to the portfolio exposure to the $m$ factors $\beta_{p}$ represents the magnitude of the portfolio $p$ exposure to the $m$ factors $r_f = (r_{f,1},...,r_{f,T})\in \mathbb{R}^{T}$, the risk free returns The Jensen's alpha $\alpha_p \in \mathbb{R}$ of the $p$-th portfolio, $p=1..n_p$, is defined as the intercept of the regression equation in the Capital Asset Pricing Model: $$ r_{p,t} - r_{f,t} = \alpha_p + \beta_p (r_{b,t} - r_{f,t}) + \epsilon_t $$, with $t=1..T$ The Jensen's alpha $\alpha_p$ of the $p$-th portfolio corresponds to the excess return of the $p$-th portfolio adjusted for its systematic risk. 
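Jensen's alpha defined above and the beta defined next are the intercept and slope of the same CAPM regression, which can be estimated by ordinary least squares; the sketch below also computes the tracking error defined earlier, following the $1/T$ convention of the formula above. The return series are invented for the example and the variable names are ours.

```python
import numpy as np

r_p = np.array([0.020, -0.010, 0.015, 0.030, -0.005])  # portfolio returns
r_b = np.array([0.015, -0.012, 0.010, 0.025, -0.002])  # benchmark returns
r_f = np.full_like(r_p, 0.001)                          # risk free returns

# Tracking error: sqrt( (1/T) * sum_t (r_{b,t} - r_{p,t})^2 )
tracking_error = np.sqrt(np.mean((r_b - r_p) ** 2))

# CAPM regression: r_p - r_f = alpha + beta * (r_b - r_f) + eps
y = r_p - r_f
x = r_b - r_f
X = np.column_stack([np.ones_like(x), x])
(alpha, beta), *_ = np.linalg.lstsq(X, y, rcond=None)
print(tracking_error, alpha, beta)
```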
The beta $\beta_p \in \mathbb{R}$ of the $p$-th portfolio, $p=1..n_p$, is defined as the slope of the regression equation in the Capital Asset Pricing Model: $$ r_{p,t} - r_{f,t} = \alpha_p + \beta_p (r_{b,t} - r_{f,t}) + \epsilon_t $$, with $t=1..T$ The beta $\beta_p$ of the $p$-th portfolio corresponds to the systematic risk of the $p$-th portfolio. $C_T \in \mathcal{M}(\mathbb{R}^{n \times n})$ a target correlation matrix $\lambda \in [0,1]$ a real number The correlation matrix $C_S$ defined by $$ C_S = (1-\lambda) C + \lambda C_T $$ corresponds to the (linear) shrinkage of the matrix $C$ toward the matrix $C_T$ with parameter $\lambda$. The matrices $C$ and $C_T$ are not required to be positive semi-definite, as the linear shrinkage operator is defined for any $n$ by $n$ matrix, but in this case, the matrix $C_S$ might not be positive semi-definite. The parameter $\lambda$ is usually called the shrinkage factor, or the shrinkage intensity, or the shrinkage constant. This endpoint provides 3 predefined target equicorrelation matrices $C_T$: The equicorrelation matrix made of 1, representing the maximum correlation matrix $$ \begin{bmatrix} 1 & 1 & ... & 1 \\ 1 & 1 & ... & 1 \\ ... & ... & ... & ... \\ 1 & 1 & ... & 1 \end{bmatrix} $$ The equicorrelation matrix made of 0, representing the minimum non-negative correlation matrix $$ \begin{bmatrix} 1 & 0 & ... & 0 \\ 0 & 1 & ... & 0 \\ ... & ... & ... & ... \\ 0 & 0 & ... & 1 \end{bmatrix} $$ The equicorrelation matrix made of $-\frac{1}{n-1}$, representing the minimum negative correlation matrix $$ \begin{bmatrix} 1 & -\frac{1}{n-1} & ... & -\frac{1}{n-1} \\ -\frac{1}{n-1} & 1 & ... & -\frac{1}{n-1} \\ ... & ... & ... & ... \\ -\frac{1}{n-1} & -\frac{1}{n-1} & ... & 1 \end{bmatrix} $$ The hierarchical risk parity portfolio is a portfolio blending graph theory and machine learning techniques where similar assets are first grouped together thanks to a hierarchical clustering algorithm and asset weights are then computed through a recursive top-down bisection of the resulting hierarchical tree. The matrix $\Sigma$ is not required to be invertible, that is, positive definite. There are 4 possible choices for the hierarchical clustering algorithm, influencing the way the assets are grouped together: Single linkage (default) Average linkage Complete linkage Ward's linkage There are 2 possible choices for the order to impose on the hierarchical clustering tree leaves, also influencing the way the assets are grouped together: Leaf ordering similar to the R function hclust (default) Leaf ordering that minimizes the distance between adjacent leaves, as described in Ziv Bar-Joseph, David K. Gifford, Tommi S. Jaakkola, Fast optimal leaf ordering for hierarchical clustering, Bioinformatics, Volume 17, Issue suppl_1, June 2001, Pages S22–S29 The management of minimum and maximum asset weights constraints is a proprietary adaptation of the method described in the second reference. The general idea is that constraints are enforced at the lowest possible level of the hierarchical tree. The hierarchical clustering-based risk parity portfolio is a portfolio building on the hierarchical risk parity portfolio, where similar assets are first grouped together thanks to an early stopped hierarchical clustering algorithm and asset weights are then computed through a recursive top-down division into two parts of the resulting hierarchical tree.
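Before the clustering-based variant just introduced is described further below, here is a compact, illustrative Python sketch of the plain hierarchical risk parity recipe summarized above: single-linkage clustering on a correlation-based distance, quasi-diagonalization, then recursive bisection with inverse-variance allocation. It deliberately ignores the proprietary constraint handling and the alternative linkages and leaf orderings, and the function name and sample data are ours.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, leaves_list
from scipy.spatial.distance import squareform

def hrp_weights(Sigma):
    """Illustrative hierarchical risk parity weights from a covariance matrix."""
    vol = np.sqrt(np.diag(Sigma))
    corr = Sigma / np.outer(vol, vol)
    # Correlation-based distance, then single-linkage hierarchical clustering
    dist = np.sqrt(np.clip((1.0 - corr) / 2.0, 0.0, 1.0))
    Z = linkage(squareform(dist, checks=False), method="single")
    order = list(leaves_list(Z))              # quasi-diagonalization of the assets

    def cluster_variance(items):
        sub = Sigma[np.ix_(items, items)]
        ivp = 1.0 / np.diag(sub)
        ivp /= ivp.sum()                      # inverse-variance weights within the cluster
        return float(ivp @ sub @ ivp)

    w = np.ones(Sigma.shape[0])
    stack = [order]
    while stack:                              # recursive top-down bisection
        items = stack.pop()
        if len(items) < 2:
            continue
        left, right = items[: len(items) // 2], items[len(items) // 2 :]
        v_l, v_r = cluster_variance(left), cluster_variance(right)
        alpha = 1.0 - v_l / (v_l + v_r)       # allocate more weight to the lower-risk half
        w[left] *= alpha
        w[right] *= 1.0 - alpha
        stack += [left, right]
    return w / w.sum()

Sigma = np.array([[0.0400, 0.0180, 0.0060],
                  [0.0180, 0.0900, 0.0120],
                  [0.0060, 0.0120, 0.0625]])
print(np.round(hrp_weights(Sigma), 4))
```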
Early stopping the hierarchical clustering algorithm produces a hierarchical tree cut at a certain height, with assets partitioned into clusters. The number of such clusters can either be provided or can be automatically computed thanks to the gap statistic method using as the null reference distribution the uniform distribution over the set of positive definite correlation matrices. Single linkage Ward's linkage (default) There are 3 possible choices for the within cluster allocation method and for the across cluster allocation method: Equal weighting (default) Inverse volatility Inverse variance Using Equal weighting for both cluster allocation methods corresponds to the Hierarchical Clustering-Based Asset Allocation (HCAA) of Thomas Raffinot. The management of minimum and maximum asset weights constraints is a proprietary adaptation of the method described in the fourth reference. The general idea is that constraints are enforced at the lowest possible level of the hierarchical tree. A random correlation matrix is a matrix $C \in \mathcal{M}(\mathbb{R}^{n \times n})$ generated uniformly at random over the space of positive definite correlation matrices, which is defined as $$ \mathcal{E}_n = \{ C \in \mathcal{M}(\mathbb{R}^{n \times n}) : C {}^t = C, C_{i,i} = 1, i=1..n, x {}^t C x > 0, \forall x \in\mathbb{R}^n \} $$ This endpoint uses a computationally more efficient algorithm than the one described in the reference. $n \ge 2$, the number of assets $\lambda_1 \ge \lambda_2 \ge ... \ge \lambda_n \ge 0$ the eigenvalues of the matrix $\Sigma$ $\rho_1 \ge \rho_2 \ge ... \ge \rho_n \ge 0$ the standardized eigenvalues of the matrix $\Sigma$ defined by $\rho_i = \frac{\lambda_i}{\sum_{i=1}^{n} \lambda_i}$, $i=1..n$ The effective rank of the matrix $\Sigma$ is defined as $$ \textrm{erank}(\Sigma) = e^{- \sum_{i=1}^{n} \rho_i \ln(\rho_i)} $$ In case of a null standardized eigenvalue, the convention taken is $0 ln(0) = 0$. $\lambda_1 \ge \lambda_2 \ge ... \ge \lambda_n \ge 0$ the eigenvalues of the matrix $C$ $\rho_1 \ge \rho_2 \ge ... \ge \rho_n \ge 0$ the standardized eigenvalues of the matrix $C$ defined by $\rho_i = \frac{\lambda_i}{\sum_{i=1}^{n} \lambda_i}$, $i=1..n$ The effective rank of the matrix $C$ is defined as $$ \textrm{erank}(C) = e^{- \sum_{i=1}^{n} \rho_i \ln(\rho_i)} $$ The average arithmetic return $\overline{r_p}$ of the $p$-th portfolio, $p=1..n_p$, is defined as the arithmetic average of the arithmetic returns $r_p$: $$ \overline{r_p} = \frac{\sum_{t=1}^{T_p-1} r_{p,t}}{T_p-1} $$ The Ulcer Index $UI_p$ of the $p$-th portfolio, $p=1..n_p$, is defined as: $$ UI_p = \sqrt{\frac{\sum_{t=1}^{T_p} \left(100 * \left(\frac{V_{t, p}}{\max_{t'=1..t} V_{t', p}} - 1\right)\right)^2 }{T_p}} $$ $\overline{r_p}$, the average arithmetic return of the $p$-th portfolio, $p=1..n_p$ $UI_p$, the Ulcer Index of the $p$-th portfolio, $p=1..n_p$ The Ulcer Performance Index $UPI_p$ of the $p$-th portfolio, $p=1..n_p$, is defined as: $$ UPI_p = \frac {\overline{r_p} - r_f}{UI_p} $$ The Ulcer Performance Index is also called the Martin Index, the Martin Ratio or the Return-to-Ulcer Ratio $y_t = \left ( y_{t,1}, ..., y_{t,n} \right ) \in \mathbb{R}^n$ the vector of the $n$ assets uncompounded cumulative return up to the time period $t$, defined by $y_{t,i} = \sum_{k=1}^{t} r_{k,i}$, $i=1..n$, $t=1..T$ The maximum Ulcer Performance Index portfolio weights $w^* \in [0,1]^{n}$ satisfy: $$ w^* = \operatorname{argmax} \frac{w {}^t \overline{r} - r_f }{UI(w)} \newline \textrm{s.t. 
} \begin{cases} l \leqslant w \leqslant u \newline Gw \leqslant u_g \newline w_{min} \leqslant \sum_{i=1}^{n} w_i \leqslant w_{max} \end{cases} $$ With $UI(w)$ the Ulcer Index of the portfolio with returns $\langle w , y_t \rangle$, $t=1..T$, that is: $$\text{UI}(w) =\sqrt{\frac{1}{T}\sum_{k=1}^{T} \left ( \max_{j = 1..k} \left ( \langle w , y_j \rangle \right ) - \langle w , y_k \rangle \right ) ^2}$$ The minimum Ulcer Index portfolio weights $w^* \in [0,1]^{n}$ satisfy: $$ w^* = \operatorname{argmin} UI(w) \newline \textrm{s.t. } \begin{cases} l \leqslant w \leqslant u \newline Gw \leqslant u_g \newline w_{min} \leqslant \sum_{i=1}^{n} w_i \leqslant w_{max} \end{cases} $$ $l_p = -r_p \in \mathbb{R}^{T_p-1}$, the $p$-th portfolio losses, $p=1..n_p$ $\alpha \in ]0,1[$, the confidence level The conditional value at risk at a confidence level $\alpha$, $CVaR_{\alpha}$, of the $p$-th portfolio, $p=1..n_p$, is the average of the distribution of the $p$-th portfolio losses $l_p$ over all the losses larger than the value at risk with confidence level $\alpha$. The conditional value at risk is also known as the expected shortfall or the expected tail loss. The conditional value at risk at a confidence level $\alpha$ answers to the following question: What is the expected loss incurred in the $\alpha$% worst cases of the portfolio? Typical values for $\alpha$ are 0.01 (= 1%) or 0.05 (= 5%). In the literature, the confidence level is also sometimes defined as $1 - \alpha$, in which case typical values for $1 - \alpha$ are 0.95 (= 95%) or 0.99 (= 99%). The value at risk at a confidence level $\alpha$, $VaR_{\alpha}$, of the $p$-th portfolio, $p=1..n_p$, is the $\alpha$-quantile of the distribution of the $p$-th portfolio losses $l_p$. The value at risk at a confidence level $\alpha$ is equivalently defined as the maximum portfolio loss that will not be exceeded with probability $1-\alpha$. The value at risk at a confidence level $\alpha$ answers to the following question: What is the minimum loss incurred in the $\alpha$% worst cases of the portfolio? In the literature, the confidence level is sometimes defined as $1 - \alpha$, in which case typical values for $1 - \alpha$ are 0.95 (= 95%) or 0.99 (= 99%). The diversified maximum return portfolio weights $w^* \in [0,1]^{n}$ satisfy: $$ w^* = \operatorname{argmin} \sum_{i=1}^{n} w_i^2 \newline \textrm{s.t. } \begin{cases} \sqrt{ w {}^t \Sigma w} \leqslant \sigma^* (1 + \delta_{\sigma}) \newline \mu^* (1 - \delta_{\mu}) \leqslant w {}^t \mu \newline l \leqslant w \leqslant u \newline Gw \leqslant u_g \newline w_{min} \leqslant \sum_{i=1}^{n} w_i \leqslant w_{max} \end{cases} $$ $\mu^*$ is the average arithmetic return of the maximum return portfolio $\delta_{\mu} \in \mathbb{R}^{+}$ is the relative tolerance over the maximum return portfolio average arithmetic return $\mu^*$ If applicable, $\sigma^*$ is the volatility of the maximum return portfolio If applicable, $\delta_{\sigma} \in \mathbb{R}^{+}$ is the relative tolerance over the maximum return portfolio volatility $\sigma^*$ The diversified maximum Sharpe ratio portfolio weights $w^* \in [0,1]^{n}$ satisfy: $$ w^* = \operatorname{argmin} \sum_{i=1}^{n} w_i^2 \newline \textrm{s.t. 
} \begin{cases} \sqrt{ w {}^t \Sigma w} \leqslant \sigma^* (1 + \delta_{\sigma}) \newline \mu^* (1 - \delta_{\mu}) \leqslant w {}^t \mu \newline l \leqslant w \leqslant u \newline Gw \leqslant u_g \newline w_{min} \leqslant \sum_{i=1}^{n} w_i \leqslant w_{max} \end{cases} $$ $\mu^*$ is the average arithmetic return of the maximum Sharpe ratio portfolio $\delta_{\mu} \in \mathbb{R}^{+}$ is the relative tolerance over the maximum Sharpe ratio portfolio average arithmetic return $\mu^*$ $\sigma^*$ is the volatility of the maximum Sharpe ratio portfolio $\delta_{\sigma} \in \mathbb{R}^{+}$ is the relative tolerance over the maximum Sharpe ratio portfolio volatility $\sigma^*$ The diversified minimum variance portfolio weights $w^* \in [0,1]^{n}$ satisfy: $$ w^* = \operatorname{argmin} \sum_{i=1}^{n} w_i^2 \newline \textrm{s.t. } \begin{cases} \sqrt{ w {}^t \Sigma w} \leqslant \sigma^* (1 + \delta_{\sigma}) \newline \mu^* (1 - \delta_{\mu}) \leqslant w {}^t \mu \newline l \leqslant w \leqslant u \newline Gw \leqslant u_g \newline w_{min} \leqslant \sum_{i=1}^{n} w_i \leqslant w_{max} \end{cases} $$ $\sigma^*$ is the volatility of the minimum variance portfolio $\delta_{\sigma} \in \mathbb{R}^{+}$ is the relative tolerance over the minimum variance portfolio volatility $\sigma^*$ If applicable, $\mu^*$ is the average arithmetic return of the minimum variance portfolio If applicable, $\delta_{\mu} \in \mathbb{R}^{+}$ is the relative tolerance over the minimum variance portfolio average arithmetic return $\mu^*$ The weights $w^* \in [0,1]^{n}$ of a diversified mean-variance efficient portfolio satisfy: $$ w^* = \operatorname{argmin} \sum_{i=1}^{n} w_i^2 \newline \textrm{s.t. } \begin{cases} \sqrt{ w {}^t \Sigma w} \leqslant \sigma^* (1 + \delta_{\sigma}) \newline \mu^* (1 - \delta_{\mu}) \leqslant w {}^t \mu \newline l \leqslant w \leqslant u \newline Gw \leqslant u_g \newline w_{min} \leqslant \sum_{i=1}^{n} w_i \leqslant w_{max} \end{cases} $$ $\mu^*$ is the average arithmetic return of a mean-variance efficient portfolio $\delta_{\mu} \in \mathbb{R}^{+}$ is the relative tolerance over the mean-variance efficient portfolio average arithmetic return $\mu^*$ $\sigma^*$ is the volatility of a mean-variance efficient portfolio $\delta_{\sigma} \in \mathbb{R}^{+}$ is the relative tolerance over the mean-variance efficient portfolio volatility $\sigma^*$ $\mu_{T'} \in \mathbb{R}^{n}$, the vector of the average asset returns over a historical reference period $T'$ $\Sigma_{T'} \in \mathcal{M}(\mathbb{R}^{n \times n})$, the asset covariance matrix over a historical reference period $T'$ $r_T \in \mathbb{R}^{n}$, the vector of the asset returns over a period $T \ne T'$ The turbulence index $d_T$ of the assets over the period $T$ is defined as: $$ d_T = \frac{1}{n} (r_T - \mu_{T'}) {}^t \Sigma{_{T'}}^{-1} (r_T - \mu_{T'}) $$ The turbulence index $d_T$ represents a statistical measure of financial turbulence based on the Mahalanobis distance, c.f. the first reference. The turbulence index $d_T$ is normalized by the number of assets $n$ so that its expected value is equal to 1, c.f. the second reference. The asset covariance matrix $\Sigma_{T'}$ is supposed to be invertible, that is, positive definite. $E_1,...,E_n$, the eigenvectors of $\Sigma$ ordered such that $\sigma_{E_1}^2 \geq ... 
\geq \sigma_{E_n}^2$, with $\sigma_{E_i}^2$ the variance of the eigenvector $E_i$, $i=1..n$ $1\leq N \leq n$, the number of eigenvectors $E_1,...,E_N$ to retain in the computation of the absorption ratio The absorption ratio $AR$ of the assets is defined as: $$ AR = \frac{\sum_{i=1}^N \sigma_{E_i}^2}{\sum_{i=1}^n \sigma_{E_i}^2} $$ The absorption ratio $AR$ is an indicator of financial risk, representing the fraction of the total variance of the assets explained (or absorbed, hence its name) by a finite set of eigenvectors, c.f. the first reference. The denominator of the absorption ratio $AR$ is also equal to $\sum_{i=1}^n \sigma_{A_i}^2$, with $\sigma_{A_i}^2$ the variance of the $i$-th asset, $i=1..n$, which is its usual definition. The subset resampling-based maximum return portfolio weights $w^* \in [0,1]^{n}$ are computed through the following procedure: Determine the number of assets $n_S$ to include in each subset, with $2 \le n_S \le n$ Determine the number of subsets to generate $n_B$, with $1 \le n_B$ For $b = 1..n_B$ do Generate uniformly at random without replacement a subset of $n_S$ assets from the original set of $n$ assets Compute the weights $w_b^*$ of the maximum return portfolio associated to the generated subset of $n_S$ assets, taking into account the applicable constraints Aggregate the $n_B$ portfolio weights $w_1^*,..,w_{n_B}^*$ by averaging them through the formula: $$ w^* = \frac{1}{n_B} \sum_{b=1}^{n_B} w_b^* $$ The subset resampling method as described above is actually the random subspace method, an ensemble learning technique, applied to mean-variance portfolio optimization. It is possible to generate all the subsets of the original set of $n$ assets containing $n_B$ assets; in this case, the subset generation procedure becomes non-random. In case too many subset optimization problems are infeasible, typically due to weight constraints, an error is returned by this endpoint; a threshold of 5% is enforced. It is possible to aggregate the $n_B$ portfolio $w_1^*,..,w_{n_B}^*$ by using a robust location estimator - the geometric median - instead of the average; this procedure is described in Peter Bühlmann, Bagging, Subagging and Bragging for Improving some Prediction Algorithms. 
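As an illustration of the subset resampling procedure just described, the sketch below applies it to the maximum return portfolio in the simplest setting (long-only, fully invested, no additional constraints), where the maximum return portfolio of each subset concentrates on the subset's highest-return asset and the averaging step is what produces diversification. The function name and inputs are ours and the constraint handling of the actual endpoint is not reproduced.

```python
import numpy as np

def subset_resampled_max_return(mu, n_S, n_B, rng=None):
    """Illustrative subset-resampled maximum return portfolio: in each random
    subset, the long-only fully invested maximum return portfolio puts all
    weight on the subset's highest-return asset; the n_B subset portfolios
    are then averaged."""
    rng = np.random.default_rng(rng)
    mu = np.asarray(mu, dtype=float)
    n = mu.size
    w = np.zeros(n)
    for _ in range(n_B):
        subset = rng.choice(n, size=n_S, replace=False)   # random subset of n_S assets
        w_b = np.zeros(n)
        w_b[subset[np.argmax(mu[subset])]] = 1.0           # max return portfolio on the subset
        w += w_b
    return w / n_B                                         # average of the subset portfolios

mu = [0.04, 0.06, 0.05, 0.07, 0.03]
print(subset_resampled_max_return(mu, n_S=2, n_B=10_000, rng=0))
```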
The subset resampling-based maximum Sharpe ratio portfolio weights $w^* \in [0,1]^{n}$ are computed through the same procedure, with the per-subset step replaced by:
Compute the weights $w_b^*$ of the maximum Sharpe ratio portfolio associated to the generated subset of $n_S$ assets, taking into account the applicable constraints
The subset resampling-based minimum variance portfolio weights $w^* \in [0,1]^{n}$ are computed through the same procedure, with the per-subset step replaced by:
Compute the weights $w_b^*$ of the minimum variance portfolio associated to the generated subset of $n_S$ assets, taking into account the applicable constraints
Given, in addition, a return constraint $r_c$, a volatility constraint $v_c \geq 0$ or a risk tolerance constraint $\lambda_c \geq 0$, the weights $w^* \in [0,1]^{n}$ of a subset resampling-based mean-variance efficient portfolio are computed through the following procedure:
Determine the number of assets $n_S$ to include in each random subset of assets, with $2 \le n_S \le n$
Determine the number of random subsets of assets $n_B$ to generate, with $1 \le n_B$
Compute the weights $w_b^*$ of a mean-variance efficient portfolio associated to each generated subset of $n_S$ assets, taking into account the applicable constraints
Combine the $n_B$ portfolio weights $w_1^*,..,w_{n_B}^*$ by averaging them through the same formula as above
In case too many subset optimization problems are infeasible, typically due to return, volatility or weight constraints, an error is returned by this endpoint; a threshold of 5% is enforced.
It is possible to combine the $n_B$ portfolio weights $w_1^*,..,w_{n_B}^*$ by using a robust location estimator - the geometric median - instead of the average; this procedure is described in Peter Bühlmann, Bagging, Subagging and Bragging for Improving some Prediction Algorithms.
$\mathcal{B}$, the index set of a selected group of assets whose correlations are to be altered
The lower bounds $L \in \mathcal{M}(\mathbb{R}^{n \times n})$ and the upper bounds $U \in \mathcal{M}(\mathbb{R}^{n \times n})$ of the asset correlation matrix $C$ associated to the selected group of assets are correlation matrices containing respectively the lowest and the highest possible values among which the correlations of the selected group of assets can linearly vary together, while keeping the correlations between all the other assets fixed and while ensuring that the resulting matrix remains a valid correlation matrix.
It is usually desired to alter the correlations of a selected group of assets in order to perform stress tests.
$\mathring{l} \in \mathcal{M}(\mathbb{R}^{n \times n})$, a matrix representing an invertible linear transformation such that $\mathring{l} \Sigma \mathring{l} {}^t$ is a diagonal matrix
$w_p \in \mathbb{R}^{n}$, the vector of portfolio weights of the $p$-th portfolio, $p=1..n_p$
The effective number of bets of the $p$-th portfolio, $p=1..n_p$, is defined as $$ \mathcal{N}_{Ent,p} = e^{- \sum_{i=1}^{n} {d_p}_i \ln({d_p}_i)} $$, where $d_p \in [0,1]^{n}$ is the diversification distribution of the $p$-th portfolio, defined as $$d_p = \frac{ \left( \left( \mathring{l} {}^t \right)^{-1} w_p \right) \circ \left( \mathring{l} \Sigma w_p \right) }{w_p {}^t \Sigma w_p}$$
The matrix $\mathring{l}$ is called a decorrelating torsion matrix.
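A minimal Python sketch of the effective number of bets computation above, using the principal components of $\Sigma$ as the decorrelating torsion matrix (one of the two possible choices listed just below); the covariance matrix and portfolio weights are hypothetical:

```python
import numpy as np

def effective_number_of_bets(w, sigma):
    """Effective number of bets with the principal components torsion matrix."""
    w = np.asarray(w, dtype=float)
    # Eigendecomposition sigma = V diag(lam) V', so l = V' satisfies l sigma l' diagonal
    lam, V = np.linalg.eigh(sigma)
    l = V.T
    # Diversification distribution d = ((l')^{-1} w) o (l sigma w) / (w' sigma w)
    d = (np.linalg.inv(l.T) @ w) * (l @ sigma @ w) / (w @ sigma @ w)
    d = np.clip(d, 1e-16, None)            # guard against log(0) when a bet has zero exposure
    return float(np.exp(-np.sum(d * np.log(d))))

sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])     # hypothetical covariance matrix
w = np.array([1/3, 1/3, 1/3])              # hypothetical portfolio weights
print(effective_number_of_bets(w, sigma))
```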
There are 2 possible choices for the computation of the matrix $\mathring{l}$:
The minimum torsion transformation described in the reference, leading to the effective number of minimum torsion bets (default)
The matrix of the principal components of the matrix $\Sigma$, leading to the effective number of principal components bets
$\mathcal{H}_1,...,\mathcal{H}_{n}$, a theoretical hierarchical classification of the assets
$C \in \mathcal{M}(\mathbb{R}^{n \times n})$, an empirical asset correlation matrix
The theory-implied correlation matrix associated with a theoretical hierarchical classification of a universe of assets - like the MSCI Global Industry Classification Standard for stocks - and an empirical asset correlation matrix is computed thanks to a machine learning technique: similar assets are first grouped together by a hierarchical clustering algorithm constrained to match the hierarchical classification of the assets, and theory-implied asset correlations are then derived from the resulting hierarchical tree.
The empirical asset correlation matrix $C$ does not need to be positive semi-definite.
The hierarchical clustering algorithm uses average linkage by default.
The computed theory-implied asset correlation matrix is not guaranteed to be positive semi-definite.
$C_R \in \mathcal{M}(\mathbb{R}^{n \times n})$, a reference correlation matrix
The Euclidean distance $\mathcal{d}_F$ between the matrices $C$ and $C_R$ is defined as: $$ \mathcal{d}_F \left( C, C_R \right) = \left\Vert C - C_R \right\Vert_F $$
The correlation matrix distance $\mathcal{d}_{corr}$ between the matrices $C$ and $C_R$ is defined as: $$ \mathcal{d}_{corr}\left( C, C_R \right) = 1 - \frac{\langle C, C_R \rangle}{\left\Vert C \right\Vert_F \left\Vert C_R \right\Vert_F } $$
The Bures distance $\mathcal{d}_{Bures}$ between the matrices $C$ and $C_R$ is defined as: $$ \mathcal{d}_{Bures}^2\left( C, C_R \right) = \mathrm{tr} \left( C \right) + \mathrm{tr} \left( C_R \right) - 2 \, \mathrm{tr} \left( \left( C^{\frac{1}{2}} C_R C^{\frac{1}{2}} \right)^{\frac{1}{2}} \right) $$
The matrices $C$ and $C_R$ are not required to be positive semi-definite for the distances $\mathcal{d}_F$ and $\mathcal{d}_{corr}$, as these distances are defined for any $n$ by $n$ matrix.
$r_{t,i}, i=1..n, t=1..T$, the return (arithmetic,...) of the asset $i$ over each time period $t$
A bootstrap simulation of the original $n$ asset returns over $T'$ time periods is defined as the sampling with replacement of $T'$ cross-sectional returns from the returns $r_{t,i}, i=1..n, t=1..T$, using one of the bootstrap methods described in the references.
There are 3 possible choices for the bootstrap method:
IID bootstrap
Circular block bootstrap
Stationary block bootstrap (default)
The IID bootstrap is theoretically applicable to independent data only, and does not require the selection of any additional parameter.
The circular block bootstrap is theoretically applicable to dependent data, and requires the selection of an integer block length $b \geq 2$.
The stationary block bootstrap is theoretically applicable to dependent data, and requires the selection of an average block length $\bar{b} \geq 1$, corresponding to the inverse of the probability of the geometric distribution used internally by the bootstrap method.
The default value for the parameter $b$ of the circular block bootstrap is the same as in the R package tseries, that is, the integer part of $3.15 n^{\frac{1}{3}}$.
The default value for the parameter $\bar{b}$ of the stationary block bootstrap is the same as in the R package tseries, that is, $\frac{1}{3.15 n^{\frac{1}{3}}}$.
$V_t \in \mathbb{R}^{+,*}$, the value of the portfolio at time $t$, $t=1..T+1$
$r_t \in \mathbb{R}, t=1..T$, the arithmetic returns of the portfolio over each time period
$1 - \alpha \in ]0,1[$, the confidence level, in percentage
Then, if $SR^* \in \mathbb{R}$ is a benchmark Sharpe ratio, the minimum track record length $MinTRL(SR^*)$ of the portfolio is defined as the (floating point) number of arithmetic returns $T^*$ that are required to ensure that the probabilistic Sharpe ratio of the portfolio $PSR(SR^*)$ is greater than or equal to $(1 - \alpha)$%, that is $$ MinTRL(SR^*) = T^* \textrm{ such that } PSR(SR^*) \geq 1 - \alpha $$
Alternatively, if $B_t \in \mathbb{R}^{+,*}$ is the value of a benchmark at time $t$, $t=1..T+1$, the minimum track record length $MinTRL(B)$ of the portfolio is defined as the (floating point) number of arithmetic returns $T^*$ that are required to ensure that $PSR(B)$ is greater than or equal to $(1 - \alpha)$%, that is $$ MinTRL(B) = T^* \textrm{ such that } PSR(B) \geq 1 - \alpha $$
The minimum track record length is not guaranteed to exist.
The minimum track record length might be less than $T$, which means that the current number of observed arithmetic returns is already sufficient to ensure that the probabilistic Sharpe ratio of the portfolio is greater than or equal to $(1 - \alpha)$%.
$\kappa \in \mathbb{R}$, the skewness of the arithmetic returns $r_1,...,r_T$
$\gamma \in \mathbb{R}$, the kurtosis of the arithmetic returns $r_1,...,r_T$
$SR \in \mathbb{R}$, the Sharpe ratio of the portfolio
Then, if $SR^* \in \mathbb{R}$ is a benchmark Sharpe ratio, the probabilistic Sharpe ratio $PSR(SR^*)$ of the portfolio is defined as the probability that $SR$, considered as a statistical estimator subject to estimation error, is greater than or equal to $SR^*$, with formula $$ PSR(SR^*) = \Phi\left( \frac{SR - SR^*}{ \sqrt{\frac{1 - \kappa SR + (\gamma - 1) \frac{SR^2}{4}}{T}} } \right ) $$
Alternatively, if $B_t \in \mathbb{R}^{+,*}$ is the value of a benchmark at time $t$, $t=1..T+1$, the probabilistic Sharpe ratio $PSR(B)$ of the portfolio is defined as the probability that $SR$, considered as a statistical estimator subject to estimation error, is greater than or equal to the Sharpe ratio of the benchmark $SR_B \in \mathbb{R}$, also considered as a statistical estimator subject to estimation error, with formula $$ PSR(B) = \Phi\left( \frac{SR - SR_B}{ \sqrt{\frac{1 - \kappa SR + (\gamma - 1) \frac{SR^2}{4} + 1 - \kappa_B SR_B + (\gamma_B - 1) \frac{SR_B^2}{4} + ...}{T} } } \right ) $$, where $\kappa_B \in \mathbb{R}$ is the skewness of the arithmetic returns of the benchmark, $\gamma_B \in \mathbb{R}$ is the kurtosis of the arithmetic returns of the benchmark, and $...$ depends on the multivariate central moments of the arithmetic returns of the portfolio and of the benchmark, as described in the first reference.
In both cases, $\Phi$ is the cumulative distribution function of the standard normal distribution.
$V_t \in \mathbb{R}^{+,*}, t=1..T+1$, the value of the portfolio at time $t$
The Sharpe ratio adjusted for small sample bias of the portfolio is defined as $ \frac{SR}{\left( 1 + \frac{1}{4} \frac{\gamma - 1}{T} \right)} $.
A confidence interval at a confidence level $(1 - \alpha)$% for the Sharpe ratio of a portfolio, considered as a statistical estimator subject to estimation error, is a real interval whose values are not statistically significantly different from $SR$ at a confidence level $(1 - \alpha)$%.
There are 3 possible choices for the type of confidence interval:
A two-sided confidence interval (default), with formula $$ \left[ SR - z_{1-\frac{\alpha}{2}} SE(SR), SR + z_{1-\frac{\alpha}{2}} SE(SR) \right] $$
An upper one-sided confidence interval, with formula $$ \left] -\infty, SR + z_{1-\alpha} SE(SR) \right] $$
A lower one-sided confidence interval, with formula $$ \left[SR - z_{1-\alpha} SE(SR) , +\infty \right[ $$
where $SE(SR)$ is defined by $$ SE(SR) = \sqrt{\frac{1 - \kappa SR + (\gamma - 1) \frac{SR^2}{4}}{T}} $$
$C \in \mathcal{M}(\mathbb{R}^{n \times n})$, an empirical asset correlation matrix, determined using $T$ observations per asset
$q$, the aspect ratio of $C$, defined by $q = \frac{n}{T}$
The denoised asset correlation matrix is computed by altering the empirical asset correlation matrix $C$ using one of the methods described in the references.
There is 1 possible choice for the denoising method:
The eigenvalues clipping method, which consists in first finding the Marchenko-Pastur eigenvalue density that best matches the eigenvalue density of $C$, and then in replacing all the eigenvalues of $C$ below the upper edge of the Marchenko-Pastur eigenvalue density (these eigenvalues are considered to represent noise) by their average.
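A minimal Python sketch of the eigenvalues clipping idea above. For simplicity it uses the theoretical Marchenko-Pastur upper edge $(1+\sqrt{q})^2$ of a pure-noise correlation matrix instead of fitting the Marchenko-Pastur density to the empirical eigenvalue density, so it is only an approximation of the method described here, and the input data are randomly generated placeholders:

```python
import numpy as np

def clip_eigenvalues(corr, q):
    """Average the eigenvalues of a correlation matrix below the Marchenko-Pastur upper edge."""
    lam, V = np.linalg.eigh(corr)              # eigenvalues in ascending order
    lam_max = (1.0 + np.sqrt(q)) ** 2          # MP upper edge for a pure-noise correlation matrix
    noise = lam < lam_max
    if noise.any():
        lam = lam.copy()
        lam[noise] = lam[noise].mean()         # replace noise-related eigenvalues by their average
    denoised = V @ np.diag(lam) @ V.T
    d = np.sqrt(np.diag(denoised))
    return denoised / np.outer(d, d)           # rescale to a unit-diagonal correlation matrix

# Hypothetical example: n = 50 assets, T = 200 observations, so q = 50 / 200
rng = np.random.default_rng(0)
returns = rng.standard_normal((200, 50))
corr = np.corrcoef(returns, rowvar=False)
print(clip_eigenvalues(corr, q=50 / 200).shape)
```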
$\mathcal{N} \subset \mathcal{M}(\mathbb{R}^{n \times n})$, the set of equicorrelation matrices
The informativeness of $C$ is defined as its normalized distance - belonging to the interval $[0,1]$ - to the set of equicorrelation matrices $\mathcal{N}$, c.f. the first reference.
There are 3 possible choices for the distance metric, with 3 associated values for the informativeness:
The Euclidean distance, with $$ \textrm{informativeness}(C) = \frac{1}{n} \min_{N \in \mathcal{N}} \mathcal{d}_F \left( C, N \right) $$
The correlation matrix distance, with $$ \textrm{informativeness}(C) = \min_{N \in \mathcal{N}} \mathcal{d}_{corr} \left( C, N \right) $$
The Bures distance, with $$ \textrm{informativeness}(C) = \frac{1}{2n} \min_{N \in \mathcal{N}} \mathcal{d}_{Bures}^2 \left( C, N \right) $$
The matrix $C$ is not required to be positive semi-definite for the Euclidean and correlation matrix distances.
The equal volatility-weighted portfolio weights $w \in [0,1]^{n}$ satisfy: $$ w_i = \frac{\sigma_i}{\sum_{j=1}^{n} \sigma_j}, i=1..n$$
The correlation spectrum of the portfolio is defined as the vector $\rho(w) \in [-1,1]^{n}$ with components $$ \rho(w)_i = \frac{ \left( \Sigma w \right)_i }{\sigma_p \sigma_i} $$
Equivalently, in terms of returns, the correlation spectrum of the portfolio is the vector $\rho(w) \in [-1,1]^{n}$ with components $$ \rho(w)_i = \rho_{p,i} $$, with $\rho_{p,i} \in [-1,1]$ the correlation of the arithmetic returns of the portfolio with the arithmetic returns of the asset $i, i=1..n$.
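A minimal Python sketch of the covariance-based correlation spectrum formula above, on a hypothetical 3-asset portfolio:

```python
import numpy as np

def correlation_spectrum(w, sigma):
    """Correlation spectrum rho(w)_i = (sigma w)_i / (sigma_p * sigma_i)."""
    w = np.asarray(w, dtype=float)
    sigma_p = np.sqrt(w @ sigma @ w)           # portfolio volatility
    asset_vols = np.sqrt(np.diag(sigma))       # individual asset volatilities
    return (sigma @ w) / (sigma_p * asset_vols)

sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])         # hypothetical covariance matrix
w = np.array([0.5, 0.3, 0.2])                  # hypothetical portfolio weights
print(correlation_spectrum(w, sigma))          # each component lies in [-1, 1]
```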
Simio and Simulation - Modeling, Analysis, Applications - 6th Edition
Chapter 4 First Model
The primary goal of this chapter is to introduce the simulation model-building process using Simio. Hand-in-hand with simulation-model building goes the statistical analysis of simulation output results, so as we build our models we'll also exercise and analyze them to see how to make valid inferences about the system being modeled. The chapter will first build a complete Simio model and introduce the concepts of model verification, experimentation, and statistical analysis of simulation output data. Although the basic model-building and analysis processes themselves aren't specific to Simio, we'll focus on Simio as an implementation vehicle. The initial model used in this chapter is very simple, and except for run length is basically the same as Model 3-4 done manually in Section 3.3.1 and Model 3-5 done in a spreadsheet in Section 3.3.2. This model's familiarity and simplicity will allow us to focus on the process and the fundamental Simio concepts, rather than on the model. We'll then make some easy modifications to the initial model to demonstrate additional Simio concepts. Then, in subsequent chapters we'll successively extend the model to incorporate additional Simio features and simulation-modeling techniques to support more comprehensive systems. The system we'll model is a simple single-server queueing system with arrival rate \(\lambda=48\) entities/hour and service rate \(\mu=60\) entities/hour (Figure 4.1).
Figure 4.1: Example single-server queueing system.
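For reference, the long-run (steady-state) values given by the standard M/M/1 formulas of Chapter 2 for these parameters can be computed in a few lines of Python; these are the same quantities that appear later in the Queueing column of Table 4.2:

```python
# Steady-state M/M/1 metrics for lambda = 48/hour and mu = 60/hour
lam, mu = 48.0, 60.0
rho = lam / mu                 # server utilization              = 0.800
L = rho / (1 - rho)            # expected number in system       = 4.000
Lq = rho ** 2 / (1 - rho)      # expected number in queue        = 3.200
W = 1 / (mu - lam)             # expected time in system (hours) ~ 0.083
Wq = rho / (mu - lam)          # expected time in queue (hours)  ~ 0.067
print(rho, L, Lq, W, Wq)
```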
This system could represent a machine in a manufacturing system, a teller at a bank, a cashier at a fast-food restaurant, or a triage nurse at an emergency room, among many other settings. For our purposes, it really doesn't matter what is being modeled — at least for the time being. Initially, assume that the arrival process is Poisson (i.e., the interarrival times are exponentially distributed and independent of each other), the service times are exponential and independent (of each other and of the interarrival times), the queue has infinite capacity, and the queue discipline will be first-in first-out (FIFO). Our interest is in the typical queueing-related metrics such as the number of entities in the queue (both average and maximum), the time an entity spends in the queue (again, average and maximum), utilization of the server, etc. If our interest is in long-run or steady-state behavior, this system is easily analyzed using standard queueing-analysis methods (as described in Chapter 2), but our interest here is in modeling this system using Simio. This chapter actually describes two alternative methods to model the queueing system using Simio. The first method uses the Facility Window and Simio objects from the Standard Library (Section 4.2). The second method uses Simio Processes (Section 4.3) to construct the model at a lower level, which is sometimes needed to model things properly or in more detail. These two methods are not completely separate — the Standard Library objects are actually built using Processes. The pre-built Standard-Library objects generally provide a higher-level, more natural interface for model building, and combine animation with the basic functionality of the objects. Custom-constructed Processes provide a lower-level interface to Simio and are typically used for models requiring special functionality or faster execution. In Simio, you also have access to the Processes that comprise the Standard Library objects, but that's a topic for a future chapter. The chapter starts with a tour around the Simio window and user interface in Section 4.1. As mentioned above, Section 4.2 guides you through how to build a model of the system in the Facility Window using the Standard Library objects. We then experiment a little with this model, as well as introduce the important concepts of statistically independent replications, warm-up, steady-state vs. terminating simulations, and verify that our model is correct. Section 4.3 re-builds the first model with Simio Processes rather than objects. Section 4.4 adds context to the initial model and modifies the interarrival and service-time distributions. Sections 4.5 and 4.6 show how to use innovative approaches enabled by Simio for effective statistical analysis of simulation output data. Section 4.8 describes the basic Simio animation features and adds animation to the models. As your models start to get more interesting, you will start finding unexpected behavior, so we will end this chapter with Section 4.9 describing the basic procedure to find and fix model problems. Though the systems being modeled in this chapter are quite simple, after going through this material you should be well on your way to understanding not only how to build models in Simio, but also how to use them. Before we start building Simio models, we'll take a quick tour in this section through Simio's user interface to introduce what's available and how to navigate to various modeling components.
When you first load Simio you'll see either a new Simio model — the default behavior — or the most recent model that you had previously opened if you have the Load most recent project at startup checkbox checked on the File page. Figure 4.2 shows the default initial view of a new Simio model. Although you may have a natural inclination to start model building immediately, we encourage you to take time to explore the interface and the Simio-related resources provided through the Support ribbon (described below). These resources can save you an enormous amount of time. Figure 4.2: Facility window in the new model. Ribbons are the innovative interface components first introduced with Microsoft \(^{TM}\) Office 2007 to replace the older style of menus and toolbars. Ribbons help you quickly complete tasks through a combination of intuitive organization and automatic adjustment of contents. Commands are organized into logical groups, which are collected together under tabs. Each tab relates to a type of activity, such as running a model or drawing symbols. Tabs are automatically displayed or brought to the front based on the context of what you're doing. For example, when you're working with a symbol, the Symbols tab becomes prominent. Note that which specific ribbons are displayed depends on where you are in the project (i.e., what items are selected in the various components of the interface). The Simio Support ribbon (see Figure 4.3) includes many of the resources available to learn and get the most out of Simio, as well as how to contact the Simio people with ideas, questions, or problems. Additional information is available through the link to Simio Technical Support (http://www.simio.com/resources/technical-support/) where you will find a description of the technical-support policies and links to the Simio User Forum and other Simio-related groups. Simio version and license information is also available on the Support ribbon. This information is important whenever you contact Support. Figure 4.3: Simio Support ribbon. Simio includes comprehensive help available at the touch of the F1 key or the ? icon in the upper right of the Simio window. If you prefer a printable version, you'll find a link to the Simio Reference Guide (a .pdf file). The help and reference guides provide an indexed searchable resource describing basic and advanced Simio features. For additional training opportunities you'll also find links to training videos and other on-line resources. The Support ribbon also has direct links to open example projects and SimBits (covered below), and to access Simio-related books, release and compatibility notes, and the Simio user forum. In addition to the ribbon tabs near the top of the window, if you have a Simio project open, you'll see a second set of tabs just below the ribbon. These are the project model tabs used to select between multiple windows that are associated with the active model or experiment. The windows that are available depend on the object class of the selected model, but generally include Facility, Processes, Definitions, Data, and Results. If you are using an RPS Simio license, you will also see the Planning tab. Each of these will be discussed in detail later, but initially you'll spend most of your time in the Facility Window where the majority of model development, testing, and interactive runs are done. Simio object libraries are collections of object definitions, typically related to a common modeling domain or theme.
Here we give a brief introduction to Libraries; additional details about objects, libraries, models, and the relationships between them are provided in a later section. Libraries are shown on the left side of the Facility Window. In the standard Simio installation, the Standard Library, the Flow Library, and the Extras Library are attached by default and the Project Library is an integral part of the project. The Standard, Flow, and Extras libraries can be opened by clicking on their respective names at the bottom of the libraries window (only one can be open at a time). The Project Library remains open and can be expanded/condensed by clicking and dragging on the .... separator. Other libraries can be added using the Load Library button on the Project Home ribbon. The Standard Object Library on the left side of the Facility Window is a general-purpose set of objects that comes standard with Simio. Each of these objects represents a physical object, device, or item that you might find if you looked around a facility being modeled. In many cases you'll build most of your model by dragging objects from the Standard Library and dropping them into your Facility Window. Table 4.1 lists the objects in the Simio Standard Library.
Table 4.1: Simio Standard Library objects.
Source: Generates entities of a specified type and arrival pattern.
Sink: Destroys entities that have completed processing in the model.
Server: Represents a capacitated process such as a machine or service operation.
Combiner: Combines multiple entities together with a parent entity (e.g., a pallet).
Separator: Splits a batched group of entities or makes copies of a single entity.
Resource: A generic object that can be seized and released by other objects.
Vehicle: A transporter that can follow a fixed route or perform on-demand pickups/dropoffs.
Worker: Models activities associated with people. Can be used as a moveable object or a transporter and can follow a shift schedule.
BasicNode: Models a simple intersection between multiple links.
TransferNode: Models a complex intersection for changing destination and travel mode.
Connector: A simple zero-time travel link between two nodes.
Path: A link over which entities may independently move at their own speeds.
TimePath: A link that has a specified travel time for all entities.
Conveyor: A link that models both accumulating and non-accumulating conveyor devices.
The Project Library includes the objects defined in the current project. As such, any new object definitions created in a project will appear in the Project Library for that project. Objects in the Project Library are defined/updated via the Navigation Window (described below) and they are used (placed in the Facility Window) via the Project Library. In order to simplify modeling, the Project Library is pre-populated with a ModelEntity object. The Flow Library includes a set of objects for modeling flow processing systems and the Extras Library includes a set of material handling and warehouse-related objects. Refer to the Simio Help for more information on the use of these libraries. Other domain-specific libraries are available on the Simio User Forum and can be accessed using the Shared Items button on the Support ribbon. The methods for building your own objects and libraries will be discussed in Chapter 11. The Properties Window on the lower right side displays the properties (characteristics) of any object or item currently selected.
For example, if a Server has been placed in the Facility Window, when it's selected you'll be able to display and change its properties in the Properties Window (see Figure 4.4). The gray bars indicate categories or groupings of similar properties. By default the most commonly changed categories are expanded so you can see all the properties. The less commonly changed categories are collapsed by default, but you can expand them by clicking on the + sign to the left. If you change a property value it will be displayed in bold and its category will be expanded to make it easy to discern changes from default values. To return a property to its default value, right click on the property name and select Reset. Figure 4.4: Properties for the Server object. A Simio project consists of one or more models or objects, as well as other components like symbols and experiments. You can navigate between the components using the Navigation Window on the upper right. Each time you select a new component you'll see the tabs and ribbons change accordingly. For example, if you select ModelEntity in the navigation window, you'll see a slightly different set of project model tabs available, and you might select the Definitions tab to add a state to that entity object. Then select the original Model object in the navigation window to continue editing the main model. If you get confused about what object you're working with, look to the title bar at the top of the navigation window or the highlighted bar within the navigation window. Note that objects appearing in a library and in the Navigation Window are often confusing to new users — just remember that objects are placed from the library and are defined/edited from the Navigation Window. Additional details are provided in Section 5.1.1.1. One feature you'll surely want to exploit is the SimBits collection. SimBits are small, well-documented models that illustrate a modeling concept or explain how to solve a common problem. The full documentation for each can be found in an accompanying automatically loaded .pdf file, as well as in the on-line help. Although they can be loaded directly from the Open menu item (replacing the currently open model), perhaps the best way to find a helpful SimBit is to look for the SimBit button on the Support ribbon. On the target page for this button you will find a categorized list of all of the SimBits with a filtering mechanism that lets you quickly find and load SimBits of interest (in this case, loading into a second copy of Simio, preserving your current workspace). SimBits are a helpful way to learn about new modeling techniques, objects, and constructs. The above discussions refer to the default window positions, but some window positions are easily changed. Many design-time and experimentation windows and tabs (for example the Process window or individual data table tabs) can be changed from their default positions by either right-clicking or dragging. While dragging, you'll see two sets of arrows called layout targets appear: a set near the center of the window and a set near the outside of the window. For example, Figure 4.5 illustrates the layout targets just after you start dragging the tab for a table. Dropping the table tab onto any of the arrows will cause the table to be displayed in a new window at that location. Figure 4.5: Dragging a tabbed window to a new display location. You can arrange the windows into vertical and horizontal tab groups by right clicking any tab and selecting the appropriate option.
You can also drag some windows (Search, Watch, Trace, Errors, and object Consoles) outside of the Simio application, even to another monitor, to take full advantage of your screen real estate. If you ever regret your custom arrangement of the windows or you lose a window (that is, it should be displayed but you can't find it), use the Reset button on the Project Home ribbon to restore the default window configuration. In this section we'll build the basic model described above in Simio, and also do some experimentation and analysis with it, as follows: Section 4.2.1 takes you through how to build the model in Simio, in what's called the Facility Window using the Standard Library, run it (once), and look through the results. Next, in Section 4.2.2 we'll use it to do some initial informal experimentation with the system to compare it to what standard queueing theory would predict. Section 4.2.3 introduces the notions of statistically replicating and analyzing the simulation output results, and how Simio helps you do that. In Section 4.2.4 we'll talk about what might be roughly described as long-run vs. short-run simulations, and how you might need to warm up your model if you're interested in how things behave in the long run. Section 4.2.5 revisits some of the same questions raised in Section 4.2.2, specifically trying to verify that our model is correct, but now we are armed with better tools like warm-up and statistical analysis of simulation output data. All of our discussion here is for a situation when we have only one scenario (system configuration) of interest; we'll discuss the more common goal of comparing alternative scenarios in Sections 5.5 and 9.1.1, and will introduce some additional statistical tools in those sections for such goals. Using Standard Library objects is the most common method for building Simio models. These pre-built objects will be sufficient for many common types of models. Figure 4.6 shows the completed model of our queueing system using Simio's Facility Window (note that the Facility tab is highlighted in the Project Model Tabs area). We'll describe how to construct this model step by step in the following paragraphs. Figure 4.6: Completed Simio model (Facility Window) of the single-server queueing system — Model 4-1. The queueing model includes entities, an entity-arrival process, a service process, and a departure process. In the Simio Facility Window, these processes can be modeled using the Source, Server, and Sink objects. To get started with the model, start the Simio application and, if necessary, create a new model by clicking on the New item in the File page (accessible from the File ribbon). Once the default new model is open, make sure that the Facility Window is open by clicking on the Facility tab, and that the Standard Library is visible by clicking on the Standard Library section heading in the Libraries bar on the left; Figure 4.2 illustrates this. First, add a ModelEntity object by clicking on the ModelEntity object in the ProjectLibrary panel, then drag and drop it onto the Facility Window (actually, we're dragging and dropping an instance of it since the object definition stays in the ProjectLibrary panel). Next, click on the Source object in the Standard Library, then drag and drop it into the Facility Window. Similarly, click, drag, and drop an instance of each of the Server and Sink objects onto the Facility Window. The next step is to connect the Source, Server, and Sink objects in our model. 
For this example, we'll use the standard Connector object to transfer entities between nodes in zero simulation time. To use this object, click on the Connector object in the Standard Library. After selecting the Connector, the cursor changes to a set of cross hairs. With the new cursor, click on the Output Node of the Source object (on its right side) and then click on the Input Node of the Server object (on its left side). This tells Simio that entities flow (instantly, i.e., in zero simulated time) out of the Source object and into the Server object. Follow the same process to add a connector from the Output Node of the Server object to the Input Node of the Sink object. Figure 4.7 shows the model with the connector in place between the Source and Server objects. Figure 4.7: Model 4-1 with the Source and Server objects linked by a Connector object. By the way, now would be a good time to save your model ("save early, save often," is a good motto for every simulationist). We chose the name Model_04_01.spfx (spfx is the default file-name extension for Simio project files), following the naming convention for our example files given in Section 3.2; all our completed example files are available on the book's website, as described in the Preface. Before we continue constructing our model, we need to mention that the Standard Library objects include several default queues. These queues are represented by the horizontal green lines in Figure 4.7. Simio uses queues where entities potentially wait — i.e., remain in the same logical place in the model for some period of simulated time. Note that, technically, tokens rather than entities wait in Simio queues, but we'll discuss this issue in more detail in Chapter 5 and for now it's easier to think of entities waiting in the queues since that is what you see in the animation. Model 4-1 includes the following queues:
Source1 OutputBuffer.Contents — Used to store entities waiting to move out of the Source object.
Server1 InputBuffer.Contents — Used to store entities waiting to enter the Server object.
Server1 Processing.Contents — Used to store entities currently being processed by the Server object.
Server1 OutputBuffer.Contents — Used to store entities waiting to exit the Server object.
Sink1 InputBuffer.Contents — Used to store entities waiting to enter the Sink object.
In our simple single-server queueing system in Figure 4.1, we show only a single queue and this queue corresponds to the InputBuffer.Contents queue for the Server1 object. The Processing.Contents queue for the Server1 object stores the entity that's being processed at any point in simulated time. The other queues in the Simio model are not used in our simple model (actually, the entities simply move through these queues instantly, in zero simulated time). Now that the basic structure of the model is complete, we'll add the model parameters to the objects. For our simple model, we need to specify probability distributions governing the interarrival times and service times for the arriving entities. The Source object creates arriving entities according to a specified arrival process. We'd like a Poisson arrival process at rate \(\lambda = 48\) entities per hour, so we'll specify that the entity interarrival times are exponentially distributed with a mean of 1.25 minutes (a mean time between entities of \(60/48 = 1.25\) minutes corresponds to a rate of 48 entities/hour). In the formal Simio object model, the interarrival time is a property of the Source object.
Object properties are set and edited in the Properties Window — select the Source object (click on the object) and the Properties Window will be displayed on the right panel (see Figure 4.8). Figure 4.8: Setting the interarrival-time distribution for the Source object. The Source object's interarrival-time distribution is set by assigning the Interarrival Time property to Random.Exponential(1.25) and the Units property to Minutes; click on the arrow just to the left of Interarrival Time to expose the Units property and use the pull-down on the right to select Minutes. This tells Simio that each time an entity is created, it needs to sample a random value from an exponential distribution with mean \(1.25\), and to create the next entity that far into the future for an arrival rate of \(\lambda = 60 \times (1/1.25) = 48\) entities/hour, as desired. The random-variate functions available via the keyword Random are discussed further in Section 4.4. The Time Offset property (usually set to 0) determines when the initial entity is created. The other properties associated with the Arrival Logic can be left at their defaults for now. With these parameters, entities are created repeatedly for the duration of the simulation run. The default object name (Source1, for the first source object) can be changed by either double-clicking on the name tag below the object with the object selected, or through the Name property in the General properties section. Or, like most items in Simio, you can rename by using the F2 key. Note that the General section also includes a Description property for the object, which can be quite useful for model documentation. You should get into the habit of including a meaningful description for each model object because whatever you enter there will be displayed in a tool tip popup note when you hover the mouse over that object. In order to complete the queueing logic for our model, we need to set up the service process for the Server object. The Processing Time property of the Server object is used to specify the processing times for entities. This property should be set to Random.Exponential(1) with the Units property being Minutes. Make sure that you adjust the Processing Time property rather than the Process Type property (this should remain at its default value of Specific Time; the other options for processing type will be discussed in Section 10.3). The final step for our initial model is to tell Simio to run the model for 10 hours. To do this, click on the Run ribbon/tab, then in the Ending Type pull-down, select the Run Length option and enter 10 Hours. Before running our initial model, we'll set the running speed for the model. The Speed Factor is used to explicitly control the speed of the interactive execution of the model. Changing the Speed Factor to 50 (just type it into the Speed Factor field in the Run ribbon) will speed up the run to a speed that's more visually appealing for this particular model. The optimal Speed Factor for an interactive run will depend on the model and object parameters and the individual preferences, as well as the speed of your computer, so you should definitely experiment with the Speed Factor for each model (technically, the Speed Factor is the amount of simulation time, in tenths of a second, between each animation frame). At this point, we can actually run the model by clicking on the Run icon in the upper left of the ribbon. The model is now running in interactive mode.
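Outside of Simio, the same queueing logic can be sketched in a few lines of Python using the Lindley recursion for a single-server FIFO queue. This is not how Simio works internally and it does not reproduce Simio's output statistics exactly; it is just a compact analog of Model 4-1 (exponential interarrivals with mean 1.25 minutes, exponential service with mean 1 minute, a 10-hour run), shown under those assumptions:

```python
import random

random.seed(1)                                     # fixed seed: re-running gives identical results
MEAN_IAT, MEAN_ST, RUN_LENGTH = 1.25, 1.0, 600.0   # minutes (10-hour run)

t = 0.0                 # arrival clock
wq_prev = s_prev = 0.0  # queue wait and service time of the previous entity
total_wq = total_ts = 0.0
n = 0

while True:
    iat = random.expovariate(1.0 / MEAN_IAT)       # exponential interarrival, mean 1.25 min
    t += iat
    if t > RUN_LENGTH:
        break
    s = random.expovariate(1.0 / MEAN_ST)          # exponential service, mean 1 min
    # Lindley recursion: this entity waits for whatever delay its predecessor leaves behind
    wq = max(0.0, wq_prev + s_prev - iat)
    total_wq += wq
    total_ts += wq + s
    wq_prev, s_prev = wq, s
    n += 1

print(n, "entities; avg time in queue:", total_wq / n,
      "min; avg time in system:", total_ts / n, "min")
```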
As the model runs, the simulated time is displayed in the footer section of the application, along with the percentage complete. Using the default Speed Factor, simulation time will advance fairly slowly, but this can be changed as the model runs. When the simulation time reaches 10 hours (the run length that we set), the model run will automatically pause. In Interactive Mode, the model results can be viewed at any time by stopping or pausing the model and clicking on the Results tab on the tab bar. Run the model until it reaches 10 hours and view the current results. Simio provides several different ways to view the basic model results. Pivot Grid and Reports (the top two options are on the left panel — click on the corresponding icon to switch between the views) are the most common. Figure 4.9 shows the Pivot Grid for Model 4-1 paused at time 10 hours. Figure 4.9: Pivot Grid report for the interactive run of Model 4-1. Note that as mentioned in the Preface, Simio uses an agile development process with frequent minor updates and occasional major updates. It's thus possible that the output values you get when you run our examples interactively may not always exactly match the output numbers we're showing here, which we got as we wrote the book. This could, as noted at the end of Section 1.4, be due to small variations across releases in low-level behavior, such as the order in which simultaneous events are processed. Regardless of the reason for these differences, their existence just emphasizes the need to do proper statistical design and analysis of simulation experiments, and not just run it once to get "the answer," a point that we'll make repeatedly throughout this book. The Pivot Grid format is extremely flexible and provides a very quick method to find specific results. If you're not accustomed to this type of report, it can look a bit overwhelming at first, but you'll quickly learn to appreciate it as you begin to work with it. The Pivot Grid results can also be easily exported to a CSV (comma-separated values) text file, which can be imported into Excel and other applications. Each row in the default Pivot Grid includes an output value identified by its Object Type, Object Name, Data Source, Category, Data Item, and Statistic. So, in Figure 4.9, the Average (Statistic) value for the TimeInSystem (Data Item) of the DefaultEntity (Object Name) of the ModelEntity type (Object Type) is \(0.0613\) hours (0.0613 hours \(\times\) 60 minutes/hour = 3.6780 minutes). Note that units for the Pivot Grid times, lengths, and rates can be set using the Time Units, Length Units, and Rate Units items in the Pivot Grid ribbon; if you switch to Minutes the Average TimeInSystem is 3.6753, so our hand-calculated value of 3.6780 minutes has a little round-off error in it. Further, the TimeInSystem data item belongs to the FlowTime Category and, since the value is based on entities (dynamic objects), the Data Source is the set of Dynamic Objects. If you're looking at this on your computer (as you should be!), scrolling through the Pivot Grid reveals a lot of output performance measures even from a small model like this. For instance, just three rows below the Average TimeInSystem of 0.0613 hours, we see under the Throughput Category that a total of 470 entities were created (i.e., entered the model through the Source object), and in the next row that 470 entities were destroyed (i.e., exited the model through the Sink object).
Though not always true, in this particular run of this model all of the 470 entities that arrived also exited during the 10 hours, so that at the end of the simulation there were no entities present. You can confirm this by looking at the animation when it's paused at the 10-hour end time. (Change the run time to, say, 9 hours, and then 11 hours, to see that things don't always end up this way, both by looking at the final animation as well as the NumberCreated and NumberDestroyed in the Throughput Category of the Pivot Grid). So in our 10-hour run, the output value of 0.0613 hours for average time in system is just the simple average of these 470 entities' individual times in system. While you're playing around with the simulation run length, try changing it to 8 minutes and compare some of the Pivot Grid results with what we got from the manual simulation in Section 3.3.1 given in Figure 3.8. Now we can confess that those "magical" interarrival and service times for that manual simulation were generated in this Simio run, and we recorded them via the Model Trace capability. The Pivot Grid supports three basic types of data manipulation:
Grouping: Dragging column headings to different relative locations will change the grouping of the data.
Sorting: Clicking on an individual column heading will cause the data to be sorted based on that column.
Filtering: Hovering the mouse over the upper right corner of a column heading will expose a funnel-shaped icon. Clicking on this icon will bring up a dialog that supports data filtering. If a filter is applied to any column, the funnel icon is displayed (no mouse hover required).
Filtering the data allows you quickly to view the specific data in which you're interested regardless of the amount of data included in the output. Pivot Grids also allow the user to store multiple views of the filtered, sorted, and grouped Pivot Grids. Views can be quite useful if you are monitoring a specific set of performance metrics. The Simio documentation section on Pivot Grids includes much more detail about how to use these specific capabilities. The Pivot Grid format is extremely useful for finding information when the output includes many rows. The Reports format gives the interactive run results in a formatted, detailed report suitable for printing, exporting to other file formats, or emailing (the formatting, printing, and exporting options are available from the Print Preview tab on the ribbon). Figure 4.10 shows the Reports format with the Print Preview tab open on the ribbon. Scrolling down to the TimeInSystem - Average (Hours) heading on the left will show a Value of 0.06126, the same (up to roundoff) as we saw for this output performance measure in the Pivot Grid in Figure 4.9. Figure 4.10: Standard report view for Model 4-1. Now that we have our first Simio model completed, we'll do some initial, informal experimenting and analysis with it to understand the queueing system it models. As we mentioned earlier, the long-run, steady-state performance of our system can be determined analytically using queueing analysis (see Chapter 2 for details). Note that for any but the simplest models, this type of exact analysis will not be possible (this is why we use simulation, in fact). Table 4.2 gives the steady-state queueing results and the simulation results taken from the Pivot Grid in Figure 4.9. Table 4.2: Comparison of the queueing analysis and initial model results for the first model.
Metric | Queueing | Model
Utilization (\(\rho\)) | \(0.800\) | \(0.830\)
Number in system (\(L\)) | \(4.000\) | \(2.879\)
Number in queue (\(L_q\)) | \(3.200\) | \(2.049\)
Time in system (\(W\)) | \(0.083\) | \(0.061\)
Time in queue (\(W_q\)) | \(0.067\) | \(0.044\)
You'll immediately notice that the numbers in the Queueing column are not equal to the numbers in the Model column, as we might expect. Before discussing the possible reasons for the differences, we first need to discuss one more important and sometimes-concerning issue. If you return to the Facility Window (click on the Facility tab just below the ribbon), reset the model (click on the Reset icon in the Run ribbon), re-run the model, allow it to run until it pauses at time 10 hours, and view the Pivot Grid, you'll notice that the results are identical to those from the previous run (displayed in Figure 4.9). If you repeat the process again and again, you'll always get the same output values. To most people new to simulation, and as mentioned in Section 3.1.3, this seems a bit odd given that we're supposed to be using random values for the entity interarrival and service times in the model. This illustrates the following critical points about computer simulation:
The random numbers used are not truly random in the sense of being unpredictable, as mentioned in Section 3.1.3 and discussed in Section 6.3 — instead they are pseudo-random, which, in our context, means that the precise sequence of generated numbers is deterministic (among other things).
Through the random-variate-generation process discussed in Section 6.4, some simulation software can control the pseudo-random number generation and we can exploit this control to our advantage.
The concept that the "supposedly random numbers" are actually predictable can initially cause great angst for new simulationists (that's what you're now becoming). However, for simulation, this predictability is a good thing. Not only does it make grading simulation homework easier (important to the authors), but (more seriously) it's also useful during model debugging. For example, when you make a change in the model that should have a predictable effect on the simulation output, it's very convenient to be able to use the same "random" inputs for the same purposes in the simulation, so that any changes (or lack thereof) in output can be directly attributable to the model changes, rather than to different random numbers. As you get further into modeling, you'll find yourself spending significant time debugging your models so this behavior will prove useful to you (see Section 4.9 for detailed coverage of the debugging process and Simio's debugging tools). In addition, this predictability can be used to reduce the required simulation run time through a variety of techniques called variance reduction, which are discussed in general simulation texts (such as (Banks et al. 2005) or (Law 2015)). Simio's default behavior is to use the same sequence of random variates (draws or observations on model-driving inputs like interarrival and service times) each time a model is run. As a result, running, resetting, and re-running a model will yield identical results unless the model is explicitly coded to behave otherwise. Now we can return to the question of why our initial simulation results are not equal to our queueing results in Table 4.2. There are three possible explanations for this mismatch:
1. Our Simio model is wrong, i.e., we have an error somewhere in the model itself.
2. Our expectation is wrong, i.e., our assumption that the simulation results should match the queueing results is wrong.
3. Sampling error, i.e., the simulation model results match the expectation in a probabilistic sense, but we either haven't run the model long enough, or for enough replications (separate independent runs starting from the same state but using separate random numbers), or are interpreting the results incorrectly.

In fact, if the results are not equal when comparing simulation results to our expectation, it's always one or more of these possibilities, regardless of the model. In our case, we'll see that our expectation is wrong, and that we have not run the model long enough. Remember, the queueing-theory results are for long-run steady-state, i.e., after the system/model has run for an essentially infinite amount of time. But we ran for only 10 hours, which for this model is evidently not sufficiently close to infinity. Nor have we made enough replications (items 2 and 3 from above). Developing expectations, comparing the expectations to the simulation-model results, and iterating until these converge is a very important component of model verification and validation (we'll return to this topic in Section 4.2.5). As just suggested, a replication is a run of the model with a fixed set of starting and ending conditions using a specific and separate, non-overlapping sequence of input random numbers and random variates (the exponentially distributed interarrival and service times in our case). For the time being, assume that the starting and ending conditions are dictated by the starting and ending simulation time (although as we'll see later, there are other possible kinds of starting and ending conditions). So, starting our model empty and idle, and running it for 10 hours, constitutes a replication. Resetting and re-running the model constitutes running the same replication again, using the same input random numbers and thus random variates, so obviously yielding the same results (as demonstrated above). In order to run a different replication, we need a different, separate, non-overlapping set of input random numbers and random variates. Fortunately, Simio handles this process for us transparently, but we can't run multiple replications in Interactive mode. Instead, we have to create and run a Simio Experiment. Simio Experiments allow us to run our model for a user-specified number of replications, where Simio guarantees that the generated random variates are such that the replications are statistically independent from one another, since the underlying random numbers do not overlap from one replication to the next. This guarantee of independence is critical for the required statistical analysis we'll do. To set up an experiment, go to the Project Home ribbon and click on the New Experiment icon. Simio will create a new experiment and switch to the Experiment Design view as shown in Figure 4.11 after we changed both Replications Required near the top, and Default Replications on the right, to 5 from their default values of 10. Figure 4.11: Initial experiment design for running five replications of a model. To run the experiment, select the row corresponding to Scenario1 (the default name) and click the Run icon (the one with two white right arrows in it in the Experiment window, not the one with one right arrow in it in the Model window). After Simio runs the five replications, select the Pivot Grid report (shown in Figure 4.12).
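To make the pseudo-random-number/replication idea concrete outside of Simio, here is a minimal Python sketch (our own illustration, not Simio's actual stream mechanism; the seeds, sample size, and toy summary statistic are arbitrary choices). Re-using a seed reproduces a run exactly, while giving each replication its own stream yields separate, independent observations.

```python
import random
import statistics

def run_one_replication(seed, n_customers=480, mean_service_min=1.0):
    """Toy stand-in for one replication: everything random in the "run" is drawn
    from a dedicated, seeded pseudo-random stream, and a single within-replication
    summary value (here, simply an average) is returned."""
    rng = random.Random(seed)
    service_times = [rng.expovariate(1.0 / mean_service_min) for _ in range(n_customers)]
    return statistics.mean(service_times)

# Resetting and re-running the "same replication" (same seed) reproduces exactly
# the same output -- the pseudo-random inputs are deterministic.
assert run_one_replication(seed=12345) == run_one_replication(seed=12345)

# Five replications, each driven by its own separate stream (approximated here by
# distinct seeds), give five independent observations of the output response.
reps = [run_one_replication(seed=1000 + r) for r in range(1, 6)]
print(reps)
```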
Figure 4.12: Experiment Pivot Grid for the five replications of the model.

Compared to the Pivot Grid we saw while running in Interactive Mode (Figure 4.9), we see the additional results columns for Minimum, Maximum, and Half Width (of 95% confidence intervals on the expected value, with the Confidence Level being editable in the Experiment Design Properties), reflecting the fact that we now have five independent observations of each output statistic. To understand what these cross-replication output statistics are, focus on the entity TimeInSystem values in rows 3-5. For example:

- The 0.0762 for the Average of Average (Hours) TimeInSystem (yes, we meant to say "Average" twice there) is the average of five numbers, each of which is a within-replication average time in system (and the first of those five numbers is 0.0613 from the single-replication Pivot Grid in Figure 4.9). The 95% confidence interval \(0.0762 \pm 0.0395\), or \([0.0367, 0.1157]\) (in hours), contains, with 95% confidence, the expected within-replication average time in system, which you can think of as the result of making an infinite number of replications (not just five) of this model, each of duration 10 hours, and averaging all those within-replication average times in system. Another interpretation of what this confidence interval covers is the expected value of the probability distribution of the simulation output random variable representing the within-replication average time in system. (More discussion of output confidence intervals appears below in the discussion of Table 4.4.)
- Still in the Average (Hours) row, 0.1306 is the maximum of these five average within-replication times in system, instead of their average. In other words, across the five replications, the largest of the five Average TimeInSystem values was 0.1306, so it is the maximum average. Average maximum, anyone?
- In the next row down for Maximum (Hours), the 0.2888 on the left is the average of five numbers, each of which is the maximum individual-entity time in system within that replication. And the 95% confidence interval \(0.2888 \pm 0.1601\) is trying to cover the expected maximum time in system, i.e., the maximum time in system averaged over an infinite number of replications rather than just five. Maybe more meaningful as a really bad worst-case time in system, though, would be the 0.5096 hour, being the maximum of the five within-replication maximum times in system.

Table 4.3 gives the queueing metrics for each of the five replications of the model, as well as the sample mean (Avg) and sample standard deviation (StDev) across the five replications for each metric. To access these individual-replication output values, click on the Export Details icon in the Pivot Grid ribbon; click Export Summaries to get cross-replication results like means and standard deviations, as shown in the Pivot Grid itself. The exported data file is in CSV format, which can be read by a variety of applications, such as Excel. The first thing to notice in this table is that the values can vary significantly between replications (\(L\) and \(L_q\), in particular). This variation is specifically why we cannot draw inferences from the results of a single replication. Table 4.3: Five replications of data for the first model.
| Metric being estimated | Rep 1 | Rep 2 | Rep 3 | Rep 4 | Rep 5 | Avg | StDev |
|---|---|---|---|---|---|---|---|
| Utilization (\(\rho\)) | \(0.830\) | \(0.763\) | \(0.789\) | \(0.769\) | \(0.785\) | \(0.787\) | \(0.026\) |
| Number in system (\(L\)) | \(2.879\) | \(2.296\) | \(3.477\) | \(2.900\) | \(6.744\) | \(3.659\) | \(1.774\) |
| Number in queue (\(L_q\)) | \(2.049\) | \(1.532\) | \(2.688\) | \(2.131\) | \(5.959\) | \(2.872\) | \(1.774\) |
| Time in system (\(W\)) | \(0.061\) | \(0.049\) | \(0.075\) | \(0.065\) | \(0.131\) | \(0.076\) | \(0.032\) |
| Time in queue (\(W_q\)) | \(0.044\) | \(0.033\) | \(0.058\) | \(0.048\) | \(0.115\) | \(0.059\) | \(0.032\) |

Since our model inputs (entity interarrival and service times) are random, the simulation-output performance metrics (simulation-based estimates of \(\rho\), \(L\), \(L_q\), \(W\), and \(W_q\), which we could respectively denote as \(\widehat{\rho}\), \(\widehat{L}\), \(\widehat{L_q}\), \(\widehat{W}\), and \(\widehat{W_q}\)) are random variables. The queueing analysis gives us the exact steady-state values of \(\rho\), \(L\), \(L_q\), \(W\), and \(W_q\). Based on how we run replications (the same model, but with separate independent input random variates), each replication generates one observation on each of \(\widehat{\rho}\), \(\widehat{L}\), \(\widehat{L_q}\), \(\widehat{W}\), and \(\widehat{W_q}\). In statistical terms, running \(n\) replications yields \(n\) independent, identically distributed (IID) observations of each random variable. This allows us to estimate the mean values of the random variables using the sample averages across replications. So, the values in the Avg column from Table 4.3 are estimates of the corresponding random-variable expected values. What we don't know from this table is how good our estimates are. We do know, however, that as we increase the number of replications, our estimates get better, since the sample mean is a consistent estimator (its own variance decreases with \(n\)), and from the strong law of large numbers (as \(n \rightarrow \infty\), the sample mean across replications \(\rightarrow\) the expected value of the respective random variable, with probability 1). Table 4.4 compares the results from running five replications with those from running 50 replications. Since we ran more replications of our model, we expect the estimates to be better, but averages still don't give us any specific information about the quality (or precision) of these estimates. What we need is an interval estimate that will give us insight about the sampling error (the averages are merely point estimates). The \(h\) columns give such an interval estimate. These columns give the half-widths of 95% confidence intervals on the means, constructed from the usual normal-distribution approach (using the sample standard deviation and Student's \(t\) distribution with \(n-1\) degrees of freedom, as given in any beginning statistics text). Consider the 95% confidence intervals for \(L\) based on five and 50 replications: \[\begin{align*} 5\ \textrm{replications}: 3.659 \pm 2.203\ \textrm{or}\ [1.456, 5.862] \\ 50\ \textrm{replications}: 3.794 \pm 0.433\ \textrm{or}\ [3.361, 4.227] \end{align*}\] Table 4.4: Comparing 5 replications (left side) with 50 replications (right side).
| Metric being estimated | Average (5 reps) | \(h\) (5 reps) | Average (50 reps) | \(h\) (50 reps) |
|---|---|---|---|---|
| Utilization (\(\rho\)) | \(0.787\) | \(0.033\) | \(0.789\) | \(0.014\) |
| Number in system (\(L\)) | \(3.659\) | \(2.203\) | \(3.794\) | \(0.433\) |
| Number in queue (\(L_q\)) | \(2.872\) | \(2.202\) | \(3.004\) | \(0.422\) |
| Time in system (\(W\)) | \(0.076\) | \(0.040\) | \(0.078\) | \(0.008\) |
| Time in queue (\(W_q\)) | \(0.059\) | \(0.040\) | \(0.062\) | \(0.008\) |

Based on five replications, we're 95% confident that the true mean (expected value or population mean) of \(\widehat{L}\) is between 1.456 and 5.862, while based on 50 replications, we're 95% confident that the true mean is between 3.361 and 4.227. (Strictly speaking, the interpretation is that 95% of confidence intervals formed in this way, from replicating, will cover the unknown true mean.) So the confidence interval on the mean of an output statistic provides us a measure of the sampling error and, hence, the quality (precision) of our estimate of the true mean of the random variable. By increasing the number of replications (samples), we can make the half-width increasingly small. For example, running 250 replications results in a CI of \([3.788, 4.165]\) — clearly we're more comfortable with our estimate of the mean based on 250 replications than we are based on five replications. In cases where we make independent replications, the confidence-interval half-widths therefore give us guidance as to how many replications we should run if we're interested in getting a precise estimate of the true mean; due to the \(\sqrt{n}\) in the denominator of the formula for the confidence-interval half-width, we need to make about four times as many replications to cut the confidence-interval half-width in half, compared to its current size from an initial number of replications, and about 100 times as many replications to make the interval \(1/10\) its current size. Unfortunately, there is no specific rule about "how close is close enough" — i.e., what values of \(h\) are acceptably small for a given simulation model and decision situation. This is a judgment call that must be made by the analyst or client in the context of the project. There is a clear trade-off between computer run time and reducing sampling error. As we mentioned above, we can make \(h\) increasingly small by running enough replications, but the cost is computer run time. When deciding if more replications are warranted, two issues are important: What's the cost if I make an incorrect decision due to sampling error? Do I have time to run more replications? So, the first answer as to why our simulation results shown in Table 4.2 don't match the queueing results is that we were using the results from a single replication of our model. This is akin to rolling a die, observing a 4 (or any other single value) and declaring that value to be the expected value over a large number of rolls. Clearly this would be a poor estimate, regardless of the individual roll. Unfortunately, using results from a single replication is quite common for new simulationists, despite the significant risk. Our general approach going forward will be to run multiple replications and to use the sample averages as estimates of the means of the output statistics, and to use the 95% confidence-interval half-widths to help determine the appropriate number of replications if we're interested in estimating the true mean. So, instead of simply using the averages (point estimates), we'll also use the confidence intervals (interval estimates) when analyzing results.
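As a concrete check of how those half-widths arise, here is a short sketch that reproduces the five-replication value \(h = 2.203\) for \(L\) from the Table 4.3 data (it assumes SciPy is available for the Student's \(t\) quantile):

```python
import math
import statistics
from scipy.stats import t

def ci_half_width(observations, confidence=0.95):
    """Half-width of a confidence interval on the mean, using the sample
    standard deviation and Student's t with n-1 degrees of freedom."""
    n = len(observations)
    s = statistics.stdev(observations)                 # sample standard deviation
    t_crit = t.ppf(1 - (1 - confidence) / 2, df=n - 1)
    return t_crit * s / math.sqrt(n)

# The five within-replication values of L from Table 4.3:
L_reps = [2.879, 2.296, 3.477, 2.900, 6.744]
xbar = statistics.mean(L_reps)        # about 3.659
h = ci_half_width(L_reps)             # about 2.203, matching Table 4.4
print(f"{xbar:.3f} +/- {h:.3f}")

# Because h shrinks like 1/sqrt(n), roughly 4x the replications are needed to
# halve it, and roughly 100x to shrink it to 1/10 of its current size.
```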
The standard Simio Pivot Grid report for experiments (see Figure 4.12) automatically supports this approach by providing the sample average and 95% confidence-interval half-widths for all output statistics. The second reason for the mismatch between our expectations and the model results is a bit more subtle and involves the need for a warm-up period for our model. We will discuss that in the next section. Generally, when we start running a simulation model that includes queueing or queueing-network components, the model starts in a state called empty and idle, meaning that there are no entities in the system and all servers are idle. Consider our simple single-server queueing model. The first entity that arrives will never have to wait for the server. Similarly, the second arriving entity will likely spend less time in the queue (on average) than the 100th arriving entity (since the only possible entity in front of the second entity will be the first entity). Depending on the characteristics of the system being modeled (the expected server utilization, in our case), the distribution and expected value of queue times for the third, fourth, fifth, etc. entities can be significantly different from the distribution and expected value of queue times at steady state, i.e., after a long time that is sufficient for the effects of the empty-and-idle initial conditions to have effectively worn off. The time between the start of the run and the point at which the model is determined to have reached steady state (another one of those judgment calls) is called the initial transient period, which we'll now discuss. The basic queueing analysis that we used to get the results in Table 4.2 (see Chapter 2) provides exact expected-value results for systems at steady state. As discussed above, most simulation models involving queueing networks go through an initial-transient period before effectively reaching steady state. Recording model statistics during the initial-transient period and then using these observations in the replication summary statistics tabulation can lead to startup bias, i.e., \(E(\widehat{L})\) may not be equal to \(L\). As an example, we ran five experiments where we set the run length for our model to be 2, 5, 10, 20, and 30 hours and ran 500 replications each. The resulting estimates of \(L\) (along with the 95% confidence intervals, of course) were: \[\begin{align*} 2\ \textrm{hours}: 3.232 \pm 0.168\ \textrm{or}\ [3.064, 3.400] \\ 5\ \textrm{hours}: 3.622 \pm 0.170\ \textrm{or}\ [3.452, 3.792] \\ 10\ \textrm{hours}: 3.864 \pm 0.130\ \textrm{or}\ [3.734, 3.994] \\ 20\ \textrm{hours}: 3.888 \pm 0.096\ \textrm{or}\ [3.792, 3.984] \\ 30\ \textrm{hours}: 3.926 \pm 0.080\ \textrm{or}\ [3.846, 4.006] \end{align*}\] For the 2, 5, 10, and 20 hour runs, it seems fairly certain that the estimates are still biased downwards with respect to steady state (the steady-state value is \(L=4.000\)). At 30 hours, the mean is still a little low, but the confidence interval covers 4.000, so we're not sure. Running more replications would likely reduce the width of the confidence interval and \(4.000\) may be outside it so that we'd conclude that the bias is still significant with a 30-hour run, but we're still not sure. It's also possible that running additional replications wouldn't provide the evidence that the startup bias is significant — such is the nature of statistical sampling.
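The covered/not-covered reasoning above can be restated mechanically (a small check using the five run-length results just listed):

```python
# Means and 95% CI half-widths of L-hat from the five run-length experiments above.
results = {2: (3.232, 0.168), 5: (3.622, 0.170), 10: (3.864, 0.130),
           20: (3.888, 0.096), 30: (3.926, 0.080)}
for hours, (mean, h) in sorted(results.items()):
    lo, hi = mean - h, mean + h
    covers = lo <= 4.000 <= hi          # steady-state value L = 4.000
    print(f"{hours:2d}-hour runs: [{lo:.3f}, {hi:.3f}]  covers 4.000? {covers}")
```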
Luckily, unlike many scientific and sociological experiments, we're in total control of the replications and run length and can experiment until we're satisfied (or until we run out of either computer time or human patience). Before continuing we must point out that you can't "replicate away" startup bias. The transient period is a characteristic of the system and isn't an artifact of randomness and the resulting sampling error. Instead of running the model long enough to wash out the startup bias through sheer arithmetic within each run, we can use a warm-up period. Here, the model run period is divided so that statistics are not collected during the initial (warm-up) period, though the model is running as usual during this period. After the warm-up period, statistics are collected as usual. The idea is that the model will be in a state close to steady state when we start recording statistics if we've chosen the warm-up period appropriately, something that may not be especially easy in practice. So, for our simple model, the expected number of entities in the queue when the first entity arrives after the warm-up period would be \(3.2\) (\(L_q=3.2\) at steady state). As an example, we ran three additional experiments where we set the run lengths and warm-up periods to be \((20, 10)\), \((30, 10)\), and \((30, 20)\), respectively (in Simio, this is done by setting the Warm-up Period property for the Experiment to the length of the desired warm-up period). The results when estimating \(L = 4.000\) are: \[\begin{align*} \textrm{(Run length, warm-up)} = (20, 10): 4.033 \pm 0.155\ \textrm{or}\ [3.878, 4.188] \\ \textrm{(Run length, warm-up)} = (30, 10): 4.052 \pm 0.103\ \textrm{or}\ [3.949, 4.155] \\ \textrm{(Run length, warm-up)} = (30, 20): 3.992 \pm 0.120\ \textrm{or}\ [3.872, 4.112] \end{align*}\] It seems that the warm-up period has helped reduce or eliminate the startup bias in all cases and we have not increased the overall run time beyond 30 hours. So, we have improved our estimates without increasing the computational requirements by using the warm-up period. At this point, the natural question is "How long should the warm-up period be?" In general, it's not at all easy to determine even approximately when a model reaches steady state. One heuristic but direct approach is to insert dynamic animated Status Plots in the Simio model's Facility Window (in the Model's Facility Window, select the Animation ribbon under Facility Tools — see Chapter 8 for animation details) and just make a judgment about when they appear to stop trending systematically; however, these can be quite "noisy" (i.e., variable) since they depict only one replication at a time during the animation. We'll simply observe the following about specifying warm-up periods:

- If the warm-up period is too short, the results will still have startup bias (this is potentially bad); and
- If the warm-up period is too long, our sampling error will be higher than necessary (as we increase the warm-up period length, we decrease the amount of data that we actually record).

As a result, the "safest" approach is to make the warm-up period long and increase the overall run length and number of replications in order to achieve acceptable levels of sampling error (measured by the half-widths of the confidence intervals). Using this method we may expend a bit more computer time than is absolutely necessary, but computer time is cheap these days (and bias is insidiously dangerous since in practice you can't measure it)!
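To see what "not collecting statistics during the warm-up" means mechanically, here is a small sketch (the bookkeeping idea only, not Simio's implementation) that time-averages a piecewise-constant quantity such as the number in system, but only over the post-warm-up portion of a run; the trajectory data are made up for illustration:

```python
def time_average_after_warmup(change_times, values, warmup, run_length):
    """Time-average of a piecewise-constant quantity (e.g., number in system),
    counting only the portion of the run after the warm-up period.
    change_times[i] is the time at which the quantity becomes values[i]; the
    model still "runs" during the warm-up, but that portion is not counted."""
    area = 0.0
    for i, (start, level) in enumerate(zip(change_times, values)):
        end = change_times[i + 1] if i + 1 < len(change_times) else run_length
        lo = min(max(start, warmup), run_length)   # clip each segment to the
        hi = min(max(end, warmup), run_length)     # window [warmup, run_length]
        area += level * (hi - lo)
    return area / (run_length - warmup)

# Made-up trajectory: the number in system jumps to each value at the listed time.
times  = [0.0, 2.0, 5.0, 9.0, 14.0, 22.0]
levels = [0,   1,   3,   2,   4,    3]
print(time_average_after_warmup(times, levels, warmup=10.0, run_length=30.0))  # 3.2
```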
Of course, the discussion of warm-up in the previous paragraphs assumes that you actually want steady-state values; but maybe you don't. It's certainly possible (and common) that you're instead interested in the "short-run" system behavior during the transient period; for example, same-day ticket sales for a sporting event open up (with an empty and idle system) and stop at certain pre-determined times, so there is no steady state at all of any relevance. In these cases, often called terminating simulations, we simply ignore the warm-up period in the experimentation (i.e., default it to 0), and what the simulation produces will be an unbiased view of the system's behavior during the time period of interest, and relative to the initial conditions in the model. The choice of whether the steady-state goal or the terminating goal is appropriate is usually a matter of what your study's intent is, rather than a matter of what the model structure may be. We will say, though, that terminating simulations are much easier to set up, run, and analyze, since the starting and stopping rules for each replication are just part of the model itself, and not up to analysts' judgment; the only real issue is how many replications you need to make in order to achieve acceptable statistical precision in your results. Now that we've addressed the replications issue, and possible warm-up period if we want to estimate steady-state behavior, we'll revisit our original comparison of the queueing analysis results to our updated simulation model results (500 replications of the model with a 30-hour run length and 20-hour warm-up period). Table 4.5 gives both sets of results. As compared to the results shown in Table 4.2, we're much more confident that our model is "right." In other words, we have fairly strong evidence that our model is verified (i.e., that it behaves as we expect it to). Note that it's not possible to provably verify a model. Instead, we can only collect evidence until we either find errors or are convinced that the model is correct.

Table 4.5: Comparison of the queueing analysis and our final experiment.

| Metric | Queueing | Simulation |
|---|---|---|
| Utilization (\(\rho\)) | \(0.800\) | \(0.800 \pm 0.004\) |
| Number in system (\(L\)) | \(4.000\) | \(4.001 \pm 0.133\) |
| Number in queue (\(L_q\)) | \(3.200\) | \(3.201 \pm 0.130\) |
| Time in system (\(W\)) | \(0.083\) | \(0.083 \pm 0.003\) |
| Time in queue (\(W_q\)) | \(0.067\) | \(0.066 \pm 0.003\) |

Recapping the process that we went through:

1. We developed a set of expectations about our model results (the queueing analysis).
2. We developed and ran the model and compared the model results to our expectations (Table 4.2).
3. Since the results didn't match, we considered the three possible explanations:
   - Our Simio model is wrong (i.e., we have an error somewhere in the model itself) — we skipped over this one.
   - Our expectation is wrong (i.e., our assumption that the simulation results should match the queueing results is wrong) — we found that we needed to warm up the model to get it close to steady state in order effectively to eliminate the startup bias (i.e., our expectation that our analysis including the transient period should match the steady-state results was wrong). Adding a warm-up period corrected for this.
   - Sampling error (i.e., the simulation-model results match the expectation in a probabilistic sense, but we either haven't run the model long enough or are interpreting the results incorrectly) — we found that we needed to replicate the model and increase the run length to account appropriately for the randomness in the model outputs.
4. We finally settled on a model that we feel is correct.

It's a good idea to try to follow this basic verification process for all simulation projects. Although we'll generally not be able to compute the exact results that we're looking for (otherwise, why would we need simulation?), we can always develop some expectations, even if they're based on an abstract version of the system being modeled. We can then use these expectations and the process outlined above to converge to a model (and set of expectations) about which we're highly confident. Now that we've covered the basics of model verification and experimentation in Simio, we'll switch gears and discuss some additional Simio modeling concepts for the remainder of this chapter. However, we'll definitely revisit these basic issues throughout the book. Although modeling strictly with high-level Simio objects (such as those from the Standard Library) is fast, intuitive, and (almost) easy for most people, there are often situations where you'll want to use the lower-level Simio Processes. You may want to construct your model or augment existing Simio objects, either to do more detailed or specialized modeling not accommodated with objects, or to improve execution speed if that's a problem. Using Simio Processes requires a fairly detailed understanding of Simio and discrete-event simulation methodology in general. This section will only demonstrate a simple, but fundamental, Simio Process model of our example single-server queueing system. In the following chapters, we'll go into much more detail about Simio Processes where called for by the modeling situation. In order to model systems that include finite-capacity resources for which entities compete for service (such as the server in our simple queueing system), Simio uses a Seize-Delay-Release model. This is a standard discrete-event-simulation approach and many other simulation tools use the same or a similar model. Complete understanding of this basic model is essential in order to use Simio Processes effectively. The model works as follows:

- Define a resource with capacity \(c\). This means that the resource has up to \(c\) arbitrary units of capacity that can be simultaneously allocated to one or more entities at any point in simulated time.
- When an entity requires service from the resource, the entity seizes some number \(s\) of units of capacity from the resource. At that point, if the resource has \(s\) units of capacity not currently allocated to other entities, \(s\) units of capacity are immediately allocated to the entity and the entity begins a delay representing the service time, during which the \(s\) units remain allocated to the entity. Otherwise, the entity is automatically placed in a queue where it waits until the required capacity is available.
- When an entity's service-time delay is complete, the entity releases the \(s\) units of capacity of the resource and continues to the next step in its process.
- If there are entities waiting in the resource queue and the resource's available capacity (including the units freed by the just-departed entity) is sufficient for one of the waiting entities, the first such entity is removed from the queue, the required units of capacity are immediately allocated to that entity, and that entity begins its delay.

From the modeling perspective, each entity simply goes through the Seize-Delay-Release logic and the simulation tool manages the entity's queueing and allocation of resource capacity to the entities. In addition, most simulation software, including Simio, automatically records queue, resource, and entity-related statistics as the model runs. (A minimal sketch of this seize-delay-release pattern in general-purpose code appears after the step-by-step build below.) Figure 4.13 shows the basic seize-delay-release process. In this figure, the "Interarrival time" is the time between successive entities and the "Processing time" is the time that an entity is delayed for processing. The Number In System tracks the number of entities in the system at any point in simulated time, and the marking and recording of the arrival time and time in system tracks the times that all entities spend in the system.

Figure 4.13: Basic process for the seize-delay-release model.

For our single-server queueing model, we simply set \(c=1\) and \(s=1\) (for all entities). So our single-server model is just an implementation of the basic seize-delay-release logic illustrated in Figure 4.13. Creating this model using processes is a little bit more involved than it was using Standard Library Objects, but it's instructive to go through the model development and see the mechanisms for collecting user-defined statistics. Figure 4.14 shows the completed Simio process.

Figure 4.14: Process view of Model 4-2.

The steps to implement this model in Simio are as follows:

1. Open Simio and create a new model.
2. Create a Resource object in the Facility Window by dragging a Resource object from the Standard Library onto the Facility Window. In the Process Logic section of the object's properties, verify that the Initial Capacity Type is Fixed and that the Capacity is 1 (these are the defaults). Note the object Name in the General section (the default is Resource1).
3. Make sure that the Model is highlighted in the Navigation Window, switch to the Definitions Window by clicking on the Definitions tab, and choose the States section by clicking on the corresponding panel icon on the left. This prepares us to add a state to the model.
4. Create a new discrete (integer) state by clicking on the Integer icon in the States ribbon. Change the default Name property of IntegerState1 for the state to WIP. Discrete States are used to record numerical values. In this case, we're creating a place to store the current number of entities in the model by creating an Integer Discrete State for the model (the Number In System from Figure 4.13).
5. Switch to the Elements section by clicking on the panel icon and create a Timer element by clicking on the Timer icon in the Elements ribbon (see Figure 4.15). The Timer element will be used to trigger entity arrivals (the loop-back arc in Figure 4.13). In order to have Poisson arrivals at the rate of \(\lambda=48\) entities/hour, or equivalently exponential interarrivals with mean \(1/0.8 = 1.25\) minutes, set the Time Interval property to Random.Exponential(1.25) and make sure that the Units are set to Minutes.

Figure 4.15: Timer element for the Model 4-2.

6. Create a State Statistic by clicking on the State Statistic icon in the Statistics section of the Elements ribbon.
   Set the State Variable Name property to WIP (the previously defined model Discrete State, so it appears on the pull-down there) and set the Name property to CurrentWIP. We're telling Simio to track the value of the state over time and record a time-dependent statistic on this value.
7. Create a Tally Statistic by clicking on the Tally Statistic icon in the Statistics section of the Elements ribbon. Set the Name property to TimeInSystem and set the Unit Type property to Time. Tally Statistics are used to record observational (i.e., discrete-time) statistics.
8. Switch to the Process Window by clicking on the Processes tab and create a new Process by clicking on the Create Process icon in the Process ribbon.
9. Set the Triggering Event property to be the newly created timer event (see Figure 4.16). This tells Simio to execute the process whenever the timer goes off.

Figure 4.16: Setting the triggering event for the process.

10. Add an Assign step by dragging the Assign step from the Common Steps panel to the process, placing the step just to the right of the Begin indicator in the process. Set the State Variable Name property to WIP and the New Value property to WIP + 1, indicating that when the event occurs, we want to increment the value of the state variable to reflect the fact that an entity has arrived to the system (the "Increment" in Figure 4.13).
11. Next, add the Seize step to the process just to the right of the Assign step. To indicate that Resource1 should be seized by the arriving entity, click the ... button on the right, select the Seizes property in the Basic Logic section, click Add, and then indicate that the specific resource Resource1 should be seized (see Figure 4.17).

Figure 4.17: Setting the Seize properties to indicate that Resource1 should be seized.

12. Add the Delay step immediately after the Seize step and set the Delay Time property to Random.Exponential(1) minutes to indicate that the entity delays should be exponentially distributed with mean \(1\) minute.
13. Add the Release step immediately after the Delay step and set the Releases property to Resource1.
14. Add another Assign step next to the Release step and set the State Variable Name property to WIP and the New Value property to WIP - 1, indicating that when the entity releases the resource, we want to decrement the value of the state variable to reflect the fact that an entity has left the system.
15. Add a Tally step and set the TallyStatisticName property to TimeInSystem (the Tally Statistic was created earlier, so it is available on the pull-down there), and set the Value property to TimeNow - Token.TimeCreated to indicate that the recorded value should be the current simulation time minus the time that the current token was created. This time interval represents the time that the current entity spent in the system. The Tally step implements the Record function shown in Figure 4.13. Note that we used the token state Token.TimeCreated instead of marking the arrival time as shown in Figure 4.13.
16. Finally, switch back to the Facility Window and set the run parameters (e.g., set the Ending Type to a Fixed run length of 1000 hours).

Note that we'll discuss the details of States, Properties, Tokens, and other components of the Simio Framework in Chapter 5. To test the model, create an Experiment by clicking on the New Experiment icon in the Project Home ribbon.
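For readers who want to see the seize-delay-release pattern expressed in general-purpose code rather than in Simio's Process window, here is a minimal sketch of the same single-server logic using the open-source SimPy library (an illustration of the pattern only — not Simio code; the seed is arbitrary). It mirrors the steps above: a WIP-style state that is incremented and decremented, a seize of one unit of a capacity-1 resource, an exponential delay, a release, and a TimeInSystem-style tally.

```python
import random
import simpy  # third-party discrete-event library (pip install simpy), used here
              # only to illustrate the seize-delay-release pattern

def customer(env, server, stats, rng):
    """One entity: increment WIP, seize one unit of the server, delay for
    service, release, decrement WIP, and tally its time in system."""
    arrival = env.now
    update_wip(env, stats, +1)
    with server.request() as req:                 # seize (waits in queue if busy)
        yield req
        yield env.timeout(rng.expovariate(1.0))   # delay: exponential service, mean 1 min
    # leaving the 'with' block releases the seized unit of capacity
    update_wip(env, stats, -1)
    stats["times_in_system"].append(env.now - arrival)   # observational (tally) statistic

def update_wip(env, stats, delta):
    """Maintain the WIP state and the area under its curve, so a time-weighted
    average (the CurrentWIP-style statistic) can be reported at the end."""
    stats["area"] += stats["wip"] * (env.now - stats["last_change"])
    stats["last_change"] = env.now
    stats["wip"] += delta

def arrivals(env, server, stats, rng):
    """Timer-like arrival process: exponential interarrivals, mean 1.25 min."""
    while True:
        yield env.timeout(rng.expovariate(1.0 / 1.25))
        env.process(customer(env, server, stats, rng))

run_minutes = 1000 * 60                            # 1000 hours, as in the run parameters above
rng = random.Random(1)
env = simpy.Environment()
server = simpy.Resource(env, capacity=1)           # c = 1; each entity seizes s = 1
stats = {"wip": 0, "area": 0.0, "last_change": 0.0, "times_in_system": []}
env.process(arrivals(env, server, stats, rng))
env.run(until=run_minutes)

print("avg time in system (min):",
      sum(stats["times_in_system"]) / len(stats["times_in_system"]))
print("time-avg number in system:",
      (stats["area"] + stats["wip"] * (run_minutes - stats["last_change"])) / run_minutes)
```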
Figure 4.18 shows the Pivot Grid results for a run of 10 replications of the model using a 500 hour warm-up period for each replication. Notice that the report includes the UserSpecified category including the CurrentWIP and TimeInSystem statistics. Unlike the ModelEntity statistics NumberInSystem and TimeInSystem that Simio collected automatically in the Standard Library object model from Section 4.2, we explicitly told Simio to collect these statistics in the process model. Understanding user-specified statistics is important, as it's very likely that you'll want more than the default statistics as your models become larger and more complex. The CurrentWIP statistic is an example of a time-dependent statistic. Here, we defined a Simio state (step 4), used the process logic to update the value of the state when necessary (step 10 to increment and step 14 to decrement), and told Simio to keep track of the value as it evolves over simulated time and to report the summary statistics (of \(\widehat{L}\), in this case — step 4). The TimeInSystem statistic is an example of an observational or tally statistic. In this case, each arriving entity contributes a single observation (the time that entity spends in the system) and Simio tracks and reports the summary statistics for these values (\(\widehat{W}\), in this case). Step 7 sets up this statistic and step 15 records each observation. Figure 4.18: Results from Model 4-2. Another thing to note about our processes model is that it runs significantly faster than the corresponding Standard Library objects model (to see this, simply increase the run length for both models and run them one after another). The speed difference is due to the overhead associated with the additional functionality provided by the Standard Library objects (such as automatic collection of statistics, animation, collision detection on paths, resource failures, etc.). As mentioned above, most Simio models that you build will use the Standard Library objects and it's unlikely that you'll build complete models using only Simio processes. However, processes are fundamental to Simio and it is important to understand how they work. We'll revisit this topic in more detail in Section 5.1.4, but for now we'll return to our initial model using the Standard Library objects. In developing our initial Simio models, we focused on an arbitrary queueing system with entities and servers — very boring. Our focus for this section and for Section 4.8 is to add some context to the models so that they'll more closely represent the types of "real" systems that simulation is used to analyze. We'll continue to enhance the models over the remaining chapters in this Part of the book as we continue to describe the general concepts of simulation modeling and explore the features of Simio. In Models 4-1 and 4-2 we used observations from the exponential distribution for entity inter-arrival and service times. We did this so that we could exploit the mathematical "niceness" of the resulting \(M/M/1\) queueing model in order to demonstrate the basics of randomness in simulation. However, in many modeling situations, entity inter-arrivals and service times don't follow nice exponential distributions. Simio and most other simulation packages can sample from a wide variety of distributions to support general modeling. Models 4-3 and 4-4 will demonstrate the use of a triangular distribution for the service times, and the models in Chapter 5 will demonstrate the use of many of the other standard distributions. 
Section 6.1 discusses how to specify such input probability distributions in practice so that your simulation model will validly represent the reality you're modeling. Model 4-3 models the automated teller machine (ATM) shown in Figure 4.19. Customers enter through the door marked Entrance, walk to the ATM, use the ATM, and walk to the door marked Exit and leave. For this model, we'll assume that the room containing the ATM is large enough to handle any number of customers waiting to use the ATM (this will make our model a bit easier, but is certainly not required, and we'll revisit the use of limited-capacity queues in future chapters). With this assumption, we basically have a single-server queueing model similar to the one shown in Figure 4.1. As such, we'll start with Model 4-1 and modify the model to get our ATM model (be sure to use the Save Project As option to save Model 4-3 initially so that you don't over-write your file for Model 4-1). The completed ATM model (Model 4-3) is shown in Figure 4.20.

Figure 4.19: ATM example.

Figure 4.20: Model 4-3: ATM example.

The required modifications are as follows:

- Update the object names to reflect the new model context (ATMCustomer for entities, Entrance for the Source object, ATM1 for the Server object, and Exit for the Sink object);
- Rearrange the model so that it "looks" like the figure;
- Change the Connector and entity objects so that the model includes the customer walk time; and
- Change the ATM processing-time distribution so that the ATM transaction times follow a triangular distribution with parameters (0.25, 1.00, 1.75) minutes (that is, between 0.25 and 1.75 minutes, with a mode of 1.00 minute).

Updating the object names doesn't affect the model's running characteristics or performance, but naming the objects can greatly improve model readability (especially for large or complicated models). As such, you should get into the habit of naming objects and adding meaningful descriptions using the Description property. Renaming objects can be done by either selecting the object, hitting the F2 key, and typing the new name; or by editing the Name property for the object. Rearranging the model to make it look like the system being modeled is very easy — Simio maintains the connections between objects as you drag the object around the model. Note that in addition to moving objects, you can also move the individual object input and output nodes. In our initial queueing model (Model 4-1) we assumed that entities simply "appeared" at the server upon arrival. The Simio Connector object supported this type of entity transfer. This is clearly not the case in our ATM model, where customers walk from the entrance to the ATM and from the ATM to the exit (most models of "real" systems involve some type of similar entity movement). Fortunately, Simio provides several objects from the Standard Library to facilitate modeling entity movements:

- Connector — Transfers entities between objects in zero simulation time (i.e., instantly, at infinite speed);
- Path — Transfers entities between objects using the distance between objects and entity speed to determine the movement time;
- TimePath — Transfers entities between objects using a user-specified movement-time expression; and
- Conveyor — Models physical conveyors.

We'll use each of these methods over the next few chapters, but we'll use Paths for the ATM model (note that the Simio Reference Guide, available via the F1 key or the ?
icon in the upper right of the Simio window, provides detailed explanations of all of these objects). Since we're modifying Model 4-1, the objects are already connected using Connectors. The easiest way to change a Connector to a Path is to right-click on the Connector and choose the Path option from the Convert to Type sub-menu. This is all that's required to change the connection type. Alternatively, we could delete the Connector object and add the Path object manually by clicking on the Path in the Standard Library and then selecting the starting and ending nodes for the Path. The entity-movement time along a Path object is determined by the path length and the entity speed. Simio models are drawn to scale by default so when we added a path between two nodes, the length of the path was set as the distance between the two nodes (whenever you input lengths or other properties with units, the + will expand a field where you can specify the input units. The Unit Settings button on the Run ribbon allows you to change the units displayed on output, such as in the facility-window labels, the pivot-grid numbers, and trace output). The Length property in the Physical Characteristics/Size group of the General section gives the current length of the Path object. The length of the path can also be estimated using the drawing grid. The logical path length can also be manually set if it's not convenient to draw the path to scale. To set the logical length manually, set the Drawn to Scale property to False and set the Logical Length property to the desired length. The entity speed is set through the Initial Desired Speed property in the Travel Logic section of the entity properties. In Model 4-3, the path length from the entrance to the ATM is 10 meters, the path length from the ATM to the exit is 7 meters, and the entity speed is 1 meter/second. With these values, an entity requires 10 seconds of simulated time to move from the entrance to the ATM, and 7 seconds to move from the ATM to the exit. The path lengths and entity speed can be easily modified as dictated by the system being modeled. The final modification for our ATM model involves changing the processing-time distribution for the server object. The characteristics of the exponential distribution probably make it ill-suited for modeling the transaction or processing time at an ATM. Specifically, the exponential distribution is characterized by lots of relatively small values and a few extremely large values, since the mode of its density function is zero. Given that all customers must insert their ATM card, correctly enter their personal identification number (PIN), and select their transaction, and that the number of ATM transaction types is generally limited, a bounded distribution is likely a better choice. We'll use a triangular distribution with parameters 0.25, 1.00, and 1.75 minutes. Determining the appropriate distribution(s) to use is part of input analysis, which is covered in Section 6.1. For now, we'll assume that the given distributions are appropriate. To change the processing-time distribution, simply change the Processing Time property to Random.Triangular(0.25, 1, 1.75) and leave the Units property as Minutes. By using the Random keyword, we can sample statistically independent observations from some 19 common distributions (as of the writing of this book) along with the continuous and discrete empirical distributions for cases where none of the standard distributions provides an adequate fit. 
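Since the triangular distribution may be less familiar than the exponential, one quick way to get a feel for Random.Triangular(0.25, 1, 1.75) is to sample its Python standard-library counterpart (the seed is arbitrary; note Python's different argument order):

```python
import random
import statistics

rng = random.Random(7)  # arbitrary seed, just for a reproducible illustration

# Python's standard-library signature is triangular(low, high, mode), so Simio's
# Random.Triangular(0.25, 1, 1.75) corresponds to rng.triangular(0.25, 1.75, 1.0).
samples = [rng.triangular(0.25, 1.75, 1.0) for _ in range(100_000)]

print(statistics.mean(samples))    # close to (0.25 + 1.00 + 1.75) / 3 = 1.0 minute
print(min(samples), max(samples))  # always within [0.25, 1.75], unlike the exponential
```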
These distributions, their required parameters, and plots of their density or probability-mass functions are discussed in detail in the "Distributions" subsection of the "Expressions Editor, Functions and Distributions" section in the "Modeling in Simio" part of the Simio Reference Guide. The computational methods that Simio uses to generate random numbers and random variates are discussed in Sections 6.3 and 6.4. Now that we have completed Model 4-3, we must verify our model as discussed in Section 4.2.5. As noted, the verification process involves developing a set of expectations, developing and running the model, and making sure that the model results match our expectations. When there's a mismatch between our expectations and the model results, we must find and fix the problems with the model, the expectations, or both. For Model 4-1, the process of developing expectations was fairly simple — we were modeling an \(M/M/1\) queueing system so we could calculate the exact values for the performance metrics. The process isn't quite so simple for Model 4-3, as we no longer have exponential processing times, and we've added entity-transfer times between the arrival and service, and between the service and the departure. Moreover, these two modifications will tend to counteract each other in terms of the queueing metrics. More specifically, we've reduced the variation in the processing times, so we'd expect the numbers of entities in system and in the queue as well as the time in system to go down (relative to Model 4-1), but we've also added the entity-transfer times, so we'd expect the number of entities in the system and the time that entities spend in the system to go up. As such, we don't have a set of expectations that we can test. This will be the case quite often as we develop more complex models. Yet we're still faced with the need for model verification. One strategy is to develop a modified model for which we can easily develop a set of expectations and to use this modified model during verification. This is the approach we'll take with Model 4-3. There are two natural choices for modifying our model: Set the Entity Transfer Times to 0 and use an \(M/G/1\) queueing approximation (as described in Chapter 2), or change the processing-time distribution to exponential. We chose the latter option and changed the Processing Time property for the ATM to Random.Exponential(1). Since we're simply adding 17 seconds of transfer time to each entity, we'd expect \(\rho\), \(L_q\), and \(W_q\) to match the \(M/M/1\) values (Table 4.2), and \(W\) to be 17 seconds greater than the corresponding \(M/M/1\) value. The results for running 500 replications of our model with replication length 30 hours and warm-up of 20 hours (the same conditions as we used in Section 4.2.5) are given in Table 4.6.

Table 4.6: Model 4-3 (modified version) results.

| Metric | Simulation |
|---|---|
| Utilization (\(\rho\)) | \(0.797 \pm 0.004\) |
| Number in system (\(L\)) | \(4.139 \pm 0.131\) |
| Number in queue (\(L_q\)) | \(3.115 \pm 0.128\) |
| Time in system (\(W\)) | \(0.086 \pm 0.003\) |
| Time in queue (\(W_q\)) | \(0.064 \pm 0.003\) |

These results appear to match our expectations (we could run each replication longer or run additional replications if we were concerned with the minor deviations between the average values and our expectations, but we'll leave this to you).
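As a quick back-of-the-envelope check on that expectation (our own arithmetic, using \(\lambda = 48\)/hour, \(\mu = 60\)/hour, and the \(M/M/1\) value of \(W\) from Table 4.2):

\[ W \approx \frac{1}{\mu-\lambda} + \frac{17}{3600} = \frac{1}{60-48} + \frac{17}{3600} \approx 0.0833 + 0.0047 = 0.0880\ \textrm{hours}, \]

which is covered by the \(0.086 \pm 0.003\) confidence interval for \(W\) in Table 4.6.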
So if we assume that we have appropriately verified the modified model, the only way that Model 4-3 wouldn't be similarly verified is if either we mistyped the Processing Time property, or if Simio's implementation of either the random-number generator or Triangular random-variate generator doesn't generate valid values. At this point we'll make sure that we've entered the property correctly, and we'll assume that Simio's random-number and random-variate generators work. Table 4.7 gives the results of our experiment for Model 4-3 (500 replications, 30 hours run length, 20 hours warm-up). As expected, the number of entities in the queue and the entities' time in system have both gone down (this was expected since we've reduced the variation in the service time). It's worth reiterating a point we made in Section 4.2.5: We can't (in general) prove that a model is verified. Instead, we can only collect evidence until we're convinced (possibly finding and fixing errors in the process). Table 4.7: Model 4-3 results. We've emphasized already in several places that the results from stochastic simulations are themselves random, so need to be analyzed with proper statistical methods. So far, we've tended to focus on means — using averages from the simulation to estimate the unknown population means (or expected values of the random variables and distributions of the output responses of interest). Perhaps the most useful way to do that is via confidence intervals, and we've shown how Simio provides them in its Experiment Pivot Grids and Reports. Means (of anything, not just simulation-output data) are important, but seldom do they tell the whole tale since they are, by definition, the average of an infinite number of replications of the random variable of interest, such as a simulation output response like average time in system, maximum queue length, or a resource utilization, so don't tell you anything about spread or what values are likely and unlikely. This is among the points made by Sam Savage in his engagingly-titled book, The Flaw of Averages: Why We Underestimate Risk in the Face of Uncertainty (Savage 2009). In the single-period static inventory simulation of Section 3.2.3, and especially in Figure 3.3, we discussed histograms of the results, in addition to means, to see what kind of upside and downside risk there might be concerning profit, and in particular the risk that there might be a loss rather than a profit. Neither of these can be addressed by averages or means since, for example, if we order 5000 hats, the mean profit seemed likely to be positive (95% confidence interval \(\$8734.20 \pm \$3442.36\)), yet there was a 30% risk of incurring a loss (negative profit). So in addition to the Experiment Pivot Grid and Report, which contain confidence-interval information to estimate means, Simio includes a new type of chart for reporting output statistics. Simio MORE (SMORE) plots are a combination of an enhanced box plot, first described by John Tukey in 1977 (Tukey 1977), a histogram, and a simple dot plot of the individual-replication summary responses. SMORE plots are based on the Measure of Risk and Error (MORE) plots developed by Barry Nelson in (B. L. Nelson 2008), and Figure 4.21 shows a schematic defining some of their elements. A SMORE plot is a graphical representation of the run results for a summary output performance measure (response), such as average time in system, maximum number in queue, or a resource utilization, across multiple replications. 
Similar to a box plot in its default configuration, it displays the minimum and maximum observed values, the sample mean, sample median, and "lower" and "upper" percentile values (points at or below which are that percent of the summary responses across the replications). The "sample" here is composed of the summary measures across replications, not observations from within replications, so this is primarily intended for terminating simulations that are replicated multiple times, or for steady-state simulations in which appropriate warm-up periods have been identified and the model is replicated with this warm-up in effect for each replication. A SMORE plot can optionally display confidence intervals on the mean and both lower/upper percentile values, a histogram of observed values, and the responses from each individual replication. Figure 4.21: SMORE plot components (from the Simio Reference Guide). SMORE plots are generated automatically based on experiment Responses. For Model 4-3, maybe we're interested in the average time, over a replication, that customers spend in the system (the time interval between when a customer arrives to the ATM and when that customer leaves the ATM). Simio tracks this statistic automatically and we can access within-replication average values as the expression ATMCustomer.Population.TimeInSystem.Average. To add this as a Response, click the Add Response icon in the Design ribbon (from the Experiment window) and specify the Name (AvgTimeInSystem) to label the output, and Expression (ATMCustomer.Population.TimeInSystem.Average) properties (see Figure 4.22). A reasonable question you might be asking yourself right about now is, "How do I know to type in ATMCustomer.Population.TimeInSystem.Average there for the Expression?" Good question, but Simio provides a lot of on-the-spot, smart, context-sensitive help. Figure 4.22: Defining the experiment Response for average time in system in Model 4-3. When you click in the Expression field you'll see a down arrow on the right; clicking it brings up another field with a red X and green check mark on the right — this is Simio's expression builder, discussed more fully in Section 5.1.7, and shown in Figure 4.23. Figure 4.23: Using the Simio expression builder for the average-time-in-system Response for a SMORE plot. For now, just experiment with it, starting by clicking in the blank expression field and then tapping the down-arrow key on your keyboard to open up a menu of possible ways to get started, at the left edge of the expression. In the list that appears below, find ATMCustomer (since that's the name of the entities, and we want something about entities here, to wit, their time in system) and double-click on it; note that that gets copied up into the expression field. Next, type a period just to the right of ATMCustomer in the expression field, and notice that another list drops down with valid possibilities for what comes next. In this case we are looking for a statistic for the entire population of ATMCustomer entities, not just a particular entity, so double-click on Population. Again you are provided a list of choices; double-click on TimeInSystem at the bottom of the list (since that's what we want to know about our ATMCustomer entities). If at any point you lose your drop-down list, just type the down arrow again. 
As before, type a period on the right of the expression that's gradually getting built in the field, and double-click on Average in the list that appears (since we want the average time in system of ATMCustomer entities, rather than the maximum or minimum — though the latter two would be valid choices if you wanted to know about them rather than the average). That's the end of it, as you can verify by trying another period and down arrow but nothing happens, so click on the green check mark on the right to establish this as your expression. You can add more Responses in this way, repeatedly clicking on the Add Response icon and filling in the Properties as above, and when viewing the SMORE plots you can rotate among them via a drop-down showing the Names you chose for them. Go ahead and add two more: the first with Name = MaxQueueLength and Expression = ATM1.AllocationQueue.MaximumNumberWaiting, and the second with Name = ResourceUtilization and Expression = ATM1.ResourceState.PercentTime(1). We invite you to poke through the expression builder to discover these, and in particular the (1) in the last one (note that as you hover over selections in the drop-downs from the expression builder, helpful notes pop up, as in Figure 4.23, describing what the entries are, including the (1)). The percents desired for the Lower and Upper Percentiles, and the Confidence Level for the confidence intervals can be set using the corresponding experiment properties (see Figure 4.24); by default, the lower and upper percentiles are set for 25% and 75% (i.e., the lower and upper quartiles) as in traditional box plots, though you may want to spread them out more than that since the "box" to which your eye is drawn contains the results from only the middle half of the replications in the default traditional settings, and maybe you'd like your eye to be drawn to something that represents more than just half (e.g., setting them at 10% and 90% would result in a box containing 80% of the replications' summary results). Figure 4.24: Setting the percentile and confidence interval levels. To view your SMORE plots, select the Response Results tab in the Experiment window. Figure 4.25 shows the SMORE plot for the average time in system from a 500-replication run of Model 4-3 described above (30-hour run length, 20-hour warm-up on each replication), with the Confidence Intervals and Histogram showing, but not the individual replication-by-replication observations. We left the percentiles at their defaults of 75% for upper and 25% for lower. The Rotate Plot button allows you to view the plot horizontally rather than vertically, if you prefer. The numerical values used to generate the SMORE plot, like the confidence-interval endpoints, are also available by clicking on the Raw Data tab at the bottom of the SMORE plot window, so you can see what they actually are rather than eyeballing them from the graph. We see from Figure 4.25 that the expected average time in system is just under 0.058 hour (3.5 minutes), and the median is a bit lower, consistent with the histogram shape's being skewed to the right. Further, the upper end of the box plot (75th percentile) is about 0.064 hour (3.8 minutes), so there's a 25% chance that the average time in system over a replication will be more than this. And the confidence-intervals seem reasonably tight, indicating that the 500 replications we made are enough to form reasonably precise conclusions. Figure 4.25: SMORE plot for average time in system in Model 4-3. 
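To connect the SMORE-plot elements back to plain arithmetic, here is a small sketch that computes the displayed summary quantities from one response value per replication (a simple rank-based percentile is used, and the confidence intervals on the mean and percentiles are omitted — Simio's exact calculations may differ):

```python
import math
import statistics

def smore_summary(rep_values, lower_pct=0.25, upper_pct=0.75):
    """The kinds of quantities a SMORE plot displays, computed from one summary
    response value per replication."""
    xs = sorted(rep_values)
    n = len(xs)
    def pct(p):                                   # simple rank-based percentile
        return xs[min(n - 1, max(0, math.ceil(p * n) - 1))]
    return {"min": xs[0], "max": xs[-1],
            "mean": statistics.mean(xs), "median": statistics.median(xs),
            "lower": pct(lower_pct), "upper": pct(upper_pct)}

# For illustration only: the five within-replication average times in system (hours)
# from Table 4.3; a real SMORE plot uses one such value from each replication made.
print(smore_summary([0.061, 0.049, 0.075, 0.065, 0.131]))
```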
Figure 4.26 shows the SMORE plots (with the same elements showing) for the maximum queue length; we temporarily moved the Upper Percentile from its default 75% up to 90%. Since this is for the maximum (not average) queue length across each replication, this tells us how much space we might need to hold this queue over a whole replication, and we see that if we provide space for 14 or 15 people in the ATM lobby, we'll always (remember, we're looking at the maximum queue length) have room to hold the queue in about 90% of the replications. Play around a bit with the Upper Percentile and Lower Percentile settings in the Experiment Design window; of course, as these percentiles move out toward the extreme edges of 0% and 100%, the edges of the box move out too, but what's interesting is that the Confidence Intervals on them become wider, i.e., less precise. This is because these more extreme percentiles are inherently more variable, being based on only the paucity of points out in the tails, and are thus more difficult to estimate, so the wider confidence intervals keep you honest about what you know (or, more to the point, what you don't know) concerning where the true underlying percentiles really are. Figure 4.26: SMORE plot for maximum queue length in system in Model 4-3.

The distribution of the observed server utilization in Figure 4.27 shows that it will be between 77% and 83% about half the time, which agrees with the queueing-theoretic expected utilization of \[ \frac{\mbox{E(service time)}}{\mbox{E(interarrival time)}} = \frac{(0.15 + 1.00 + 1.75)/3}{1.25} = 0.80 \] (the expected value of a triangular distribution with parameters min, mode, max is (min + mode + max)/3, as given in the Simio Reference Guide). However, there's a chance that the utilization could be as heavy as 90% since the histogram extends up that high (the maximum utilization across the 500 replications was 90.87%, as you can see in the Raw Data tab at the bottom). Figure 4.27: SMORE plot for server utilization in Model 4-3.

As originally described in (B. L. Nelson 2008), SMORE plots provide an easy-to-interpret graphical representation of a system's risk and the sampling error associated with the simulation — far more information than just the average over the replications, or even a confidence interval around that average. The confidence intervals in a SMORE plot depict the sampling error in estimation of the percentiles and mean — we can reduce the width of the confidence intervals (and, hence, the sampling error) by increasing the number of replications. Visually, if the confidence-interval bands on the SMORE plot are too wide to suit you, then you need to run more replications. Once the confidence intervals are sufficiently narrow (i.e., we're comfortable with the sampling error), we can use the upper and lower percentile values to get a feeling for the variability associated with the response. We can also use the histogram to get an idea about the distribution shape of the response (e.g., in Figures 4.25-4.27 it's apparent that the distributions of average time in system and maximum queue length are skewed right or high, but the distribution of utilizations is fairly symmetric). As we'll see in Chapter 5, SMORE plots are quite useful to see what the differences might be across multiple alternative scenarios in an output performance measure, not only in terms of just their means, but also their relative spread and distribution. Simio is a simulation package, not a statistical-analysis package.
While it does provide some statistical capabilities, like the confidence intervals and SMORE plots that we've seen (and a few more that we'll see in future chapters), Simio makes it easy to export the results of your simulation to a CSV file that can then be easily read into a variety of dedicated statistical-analysis packages like SAS, JMP, SPSS, Stata, S-PLUS, or R, among many, many others, for post-processing your output data after your simulations have run. For relatively simple statistical analysis, the CSV file that you can ask Simio to export for you can be read directly into Excel and you could then use its built-in functions (like =AVERAGE, =STDEV, etc.) or the Data Analysis Toolbar that comes with Excel, or perhaps a better and more powerful third-party statistical-analysis add-in like StatTools from Palisade Corporation. With your responses from each replication exported in a convenient format like this, you're then free to use your favorite statistics package to do any analysis you'd like, such as hypothesis tests, analysis of variance, or regressions. Remember, from each replication you get a single summary value (e.g., average time in system, maximum queue length, or server utilization over the replication), not individual-entity results from within replications, and those summary values are independent and identically distributed observations to which standard statistical methods will apply; your "sample size," in statistics parlance, is the number of replications you ran. Note that you can collect individual-entity observations manually with a Write step in add-on process logic or with simple customization of a standard library object as described in Section 11.4. You can also enable logs to collect tally and state statistic observations and Dashboard Reports to display them. You've already seen how this export to a CSV file can be done, in Section 4.2.3, from the Experiment window via the Export Details icon in the Pivot Grid window. Depending on the size of your model, and the number of replications you made, this exported CSV file could be fairly large, and you may need to do some rearranging of it, or extracting from it, to get at the results you want. But once you have the numbers you want saved in this convenient format, you can do any sort of statistical analysis you'd like, including perhaps data mining to try to discover patterns and relationships from the output data themselves. For example, in the 500 replications we used to make the SMORE plots in Section 4.5, we exported the data and extracted the 500 average times in system to one column in an Excel spreadsheet, and the 500 utilizations in a second column. Thus, in each of the 500 rows, the first-column value is the average time in system on that replication, and the second-column value is the server utilization in that same replication. We then used the StatTools statistical-analysis add-in to produce the scatterplot and correlation in Figure 4.28. Figure 4.28: StatTools scatterplot of average time in system vs. server utilization in Model 4-3. We see that there's some tendency for average time in system to be higher when the server utilization is higher, but there are plenty of exceptions to this since there is a lot of random "noise" in these results. 
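If you would rather do this kind of post-processing in a script than in a spreadsheet, the same idea takes only a few lines. This is a hedged sketch, not the StatTools procedure used in the text: the CSV layout and the column names (AvgTimeInSystem, ResourceUtilization) are assumptions standing in for whatever your exported file actually contains, so expect to adjust the extraction step.

```python
# Sketch: scatterplot-style analysis of exported replication results in Python.
# Column names are assumed; rearrange/rename to match your actual Simio export.
import pandas as pd
from scipy import stats

df = pd.read_csv("Model_04_03_export.csv")        # assumed export file
time_in_system = df["AvgTimeInSystem"]
utilization = df["ResourceUtilization"]

# Pearson correlation between the two per-replication summary measures.
r = time_in_system.corr(utilization)

# Simple least-squares line, analogous to superposing a trend line on the scatterplot.
fit = stats.linregress(utilization, time_in_system)

print(f"correlation r = {r:.3f}, R^2 = {fit.rvalue**2:.3f}")
print(f"fitted line: time = {fit.intercept:.4f} + {fit.slope:.4f} * utilization")
```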
We also used the built-in Excel Chart Tools to superpose a linear regression line, and show its equation and \(R^2\) value, confirming the positive relationship between average time in system and server utilization, with about 45% of the variation in average time in system's being explained by variation in server utilization. This is just one very small example of what you can do by exporting the Simio results to a CSV file, extracting from it appropriately (maybe using custom macros or a scripting language), and then reading your simulation output data into powerful statistical packages to learn a lot about your models' behavior and relationships.

Simio provides capabilities for logging various data during interactive runs and using this data to create dashboard reports. All Simio license types provide limited logging capabilities; Academic RPS licenses provide additional related capabilities. Note that Simio's logging and dashboard features are quite extensive and we will only demonstrate a simple case here. Additional feature descriptions and details are available in the Simio Help and SimBits. For this example, we will use Model 4-3 and will log the ATM resource usage and create a related dashboard report.

First we will tell Simio to log the resource usages for the ATM server resource. The steps to do this are as follows:
1. Turn on interactive logging from the Advanced Options item on the Run ribbon.
2. Turn on resource usage logging for the ATM server object. To do this, select the ATM object instance, expand the Advanced Options resource group and set the Log Resource Usage property to True.
3. Reset and run the model in fast-forward mode.
4. Navigate to the Logs section of the Results tab and select the Resource Usage Log (if it is not already selected).

Figure 4.29 shows a portion of the Resource Usage Log view for the newly created log. Figure 4.29: Resource Usage Log for Model 4-3. The resource usage log records every time the specified resource is "used" and specifies the entity, the start time, the end time, and duration of the usage.

Now that the individual resource usages are logged, we will create a simple dashboard report that plots the usage durations over the simulation time. The steps are as follows:
1. Select the Dashboard Reports section of the Results tab, click on the Create item from the Dashboards ribbon, and enter a name for your dashboard in the Add Dashboard dialog box. Figure 4.30 shows the initial dialog box for the new dashboard.
2. Select the Resource Usage Log from the Data Source drop-box. Figure 4.30: Initial dialog box for the new dashboard report.
3. Click on the Chart option from the Chart Tools/Home ribbon to create a new chart. Figure 4.31 shows the newly created chart in the new dashboard report. From here, we customize the chart by dragging items from the Resource Usage Log and dropping them on the DATA ITEMS components. Figure 4.31: Newly created chart for the dashboard.
4. Drag the Duration (Hours) item and drop it on the Values (Pane 1) data item. Click on the bar chart item just to the right of the data item and change the chart type to a line plot.
5. Drag the Start Time item and drop it on the Arguments data item. Open the drop-box for the data source and select the Date-Hour-Minute option from the list.

Figure 4.32 shows the completed resource usage chart. The chart plots the resource usage durations by the start time of the usages. Figure 4.32: Final resource usage chart for the dashboard.
Click the Save item on the ribbon to save the newly created dashboard. As mentioned, Simio's logging and dashboard capabilities are extensive and you should experiment with these features and have a look at the Simio Help and related SimBits. In addition, details, examples, and instructions about Dashboards are available in the Simio help and at https://docs.devexpress.com/Dashboard/. Up until now we've barely mentioned animation, but we've already been creating useful animations. In this section we'll introduce a few highlights of animation. We'll cover animation in much greater detail in Chapter 8. Animation generally takes place in the Facility Window. If you click in the Facility Window and hit the H key, it will toggle on and off some tips about using the keyboard and mouse to move around the animation. You might want to leave this enabled as a reminder until you get familiar with the interface. One of Simio's strengths is that when you build models using the Standard Library, you're building an animation as you build the model. The models in Figures 4.8 and 4.20 are displayed in two-dimensional (2D) animation mode. Another of Simio's strengths is that models are automatically created in 3D as well, even though the 2D view is commonly used during model building. To switch between 2D and 3D view modes, just tap the 2 and 3 keys on the keyboard, or select the View ribbon and click on the 2D or 3D options. Figure 4.33 shows Model 4-3 in 3D mode. In 3D mode, the mouse buttons can be used to pan, zoom, and rotate the 3D view. The model shown in Figure 4.33 shows one customer entity at the server (shown in the Processing.Contents queue attached to the ATM1 object), five customer entities waiting in line for the ATM (shown in the InputBuffer.Contents queue for the ATM1 server object), and two customers on the path from the entrance to the ATM queue. Figure 4.33: Model 4-3 in 3D view. Let's enhance our animation by modifying Model 4-3 into what we'll call Model 4-4. Of course, you should start by saving Model_04_03.spfx to a file of a new name, say Model_04_04.spfx, and maybe to a different directory on your system so you won't overwrite our file of the same name that you might have downloaded; Simio's Save As capability is via the yellow pull-down tab just to the left of the Project Home tab on the ribbon. If you click on any symbol or object, the Symbols ribbon will come to the front. This ribbon provides options to change the color or texture applied to the symbol, add additional symbols, and several ways to select a symbol to replace the default. We'll start with the easiest of these tasks — selecting a new symbol from Simio's built-in symbol library. In particular, let's change our entity picture from the default green triangle to a more realistic-looking person. Start by clicking on the Entities object we named ATMCustomer. In the Symbols ribbon now displayed, if you click the Apply Symbols button, you will see the top section of the built-in library as illustrated in Figure 4.34. Figure 4.34: Navigating to Simio symbol library. The entire library consists of over 300 symbols organized into 27 categories. To make it easier to find a particular symbol, three filters are supplied at the bottom: Domain, Type, and Action. You can use any combination of these filters to narrow the choices to what you are looking for. For example, let's use the Type filter and check only the People option, resulting in a list of all the People in the library. 
Near the top you will see a Library People category. If you hover the mouse (don't click yet) over any symbol, you'll see an enlarged view to assist you with selection. Click on one of the "Female" symbols to select it and apply it as the new symbol to use for your highlighted object — the entity. The entity in your model should now look similar to that in Figure 4.35. Note that under the People folder there is an folder named Animated that contains symbols of people with built-in animation like Walking, Running, and Talking. Using these provides even more realistic animation, but that is probably overkill for our little ATM model. Figure 4.35: Model 4-4 with woman and ATM symbols in 2D view. You may notice that Figure 4.35 also contains a symbol for an ATM machine. Unfortunately this was not one of the symbols in the library. If you happened to have an ATM symbol readily available you could import it. Or if you're particularly artistic you could draw one. But for the rest of us Simio provides a much easier solution — download it from Trimble 3D Warehouse. This is a huge repository of symbols that are available free, and Simio provides a direct link to it. Let's change the picture of our server to that of an ATM machine. Start by clicking on the Server object we'd previously named ATM1. Now go to the Symbol ribbon and click on the Go To 3D Warehouse icon. This will open the 3D Warehouse web page (https://3dwarehouse.sketchup.com) in your default web browser. Enter ATM into the Search box and click on the Search button and then choose the Models tab. You'll see the first screen of hundreds of symbols that have ATM in their name or description, something like Figure 4.36. Note that the library is updated frequently, so your specific search results may vary. Similarly, the web page may also look/behave differently if it has been updated since the writing of this chapter. Figure 4.36: Trimble Warehouse results of search for ATM. Note that many of these don't involve automated teller machines, but many do, and in fact you should find at least one on the first screen that meets our needs. You can click on an interesting object (we chose a Mesin ATM) and see the basic details such as file size. If you click on the See more details link, you can view and rotate the model in 3D from the browser. If you are satisfied, choose Download and save the skp file on your computer (you may have to set up an account on the 3D warehouse site if you do not already have one in order to download symbols). Back in Simio, you can import the symbol and apply it to the selected Server using the Import Symbol icon (with the Server object instance selected). This will import the 3D model into your Simio model, allow you to change the name size and orientation of the symbol, and apply it to the object instance. Once the symbol has been imported, it can be applied to other object instances without re-importing (using the Apply Symbol icon). One of the most important things to verify during the import process is that the size (in meters) is correct. You cannot change the ratio of the dimensions, but you can change any one value if it was sized wrong. In our case our ATM is about 0.7 meter wide and 2 meters high, which seems about right. Click OK and we're done applying a new symbol to our ATM server. Now your new Model 4-4 should look something like to Figure 4.35 when running in the 2D view. If you change to the 3D view (3 key) you should see that those new symbols you selected also look good in 3D, as in Figure 4.37. 
Of course we've barely scratched the surface of animation possibilities. You could draw walls (see the Drawing ribbon), or add features like doorways and plants. You can even import a schematic or other background to make your model look even more realistic. Feel free to experiment with such things now if you wish, but we'll defer formal discussion of these topics until Chapter 8. As hard as it may be to believe, sometimes people make mistakes. When those mistakes occur in software they are often referred to as bugs. Many things can cause a bug including a typo (typing a 2 when you meant to type a 3), a misunderstanding of how the system works, a misunderstanding of how the simulation software works, or a problem in the software. Even the most experienced simulationist will encounter bugs. In fact a significant part of most modeling efforts is often spent resolving bugs — it is a natural outcome of using complex software to model complex systems accurately. It is fairly certain that you will have at least a few bugs in the first real model that you do. How effectively you recognize and dispatch bugs can determine your effectiveness as a modeler. In this section we will give you some additional insight to improve your debugging abilities. The best way to minimize the impact of bugs is to follow proper iterative development techniques (see Section 1.5.3). If you work for several hours without stopping to verify that your model is running correctly, you should expect a complex and hard-to-find bug. Instead, pause frequently to verify your model. When you find problems you will have a much better idea of what caused the problem and you will be able to find and fix it much more quickly. The most common initial reaction to a bug is to assume it is a software bug. Although it is certainly true that most complex software, regardless of how well-written and well-tested it is, has bugs, it is also true that the vast majority of problems are user errors. Own the problem. Assume that it is your error until proven otherwise and you can immediately start down the path to fixing it. How do you even know that you have a problem? Many problems are obvious — you press Run and either nothing happens or something dramatic happens. But the worst problems are the subtle ones — you have to work at it to discover if there even is a problem. In Section 4.2.2 we discussed the importance of developing expectations before running the model. Comparing the model results to our expectations is the first and best way to discover problems. In Section 4.2.5 we also discussed a basic model-verification process. The following steps extend that verification a bit deeper. Watch the animation. Are things moving where you think they should move? If not, why not? Enhance the animation to be more informative. Use floating labels and floor labels to add diagnostic information to entities and other objects. Examine the output statistics carefully. Are the results and the relationships between results reasonable? For example is it reasonable that you have a very large queue in front of a resource with low utilization? Add custom statistics to provide more information when needed. Finally, the same debugging tools described below to help resolve a problem can be used to determine if any problem even exists. Okay, you are convinced that you have a bug. And you have taken ownership by assuming (for now) that the bug is due to some error that you have introduced. Good start. Now what? 
There are many different actions that you can try, depending on the problem.
- Look through all of your objects, especially the ones that you have added or changed most recently.
- Look at all properties that have been changed from their defaults (in Simio these are all bold and their categories are all expanded). Ensure that you actually meant to change each of these and that you made the correct change.
- Look at all properties that have not been changed from their defaults. Ensure that the default value is meaningful; often they are not.
- Minimize entity flow. Limit your model to just a single entity and see if you can reproduce the problem. If not, add a second entity. It is amazing how many problems can be reproduced and isolated with just one or two entities. A minimal number of entities helps all of the other debugging processes and tools work better. In Simio, this is most easily done by setting Maximum Arrivals on each source to 0, 1, or 2.
- Minimize the model. Save a copy of your model, then start deleting model components that you think should have no impact. If you delete too much, simply undo, then delete something else. The smaller your model is, the easier it will be to find and solve the problem.
- If you encountered a warning or error, go back and look at it again carefully. Sometimes messages are somewhat obscure, but there is often valuable information embedded in there. Try to decode it.
- Follow your entity(ies) step by step. Understand exactly why they are doing what they are doing. If they are not going the way they should, did you accidentally misdirect them? Or perhaps not direct them at all? Examine the output results for more clues.
- Change your perspective. Try to look at the problem from a totally different direction. If you are looking at properties, start from the bottom instead of the top. If you are looking at objects, start with the one you would normally look at last. This technique often opens up new pathways of thought and vision, and you might well see something you didn't see the first time. (In banking, people often do verification by having one person read the digits in a number from left to right and a second person read those same digits from right to left. This breaks the pattern-recognition cycle that sometimes allows people to see what they expect or want to see rather than what is really there.)
- Enlist a friend. If you have the luxury of an associate who is knowledgeable in modeling in your domain, he or she might be of great help solving your problem. But you can also get help from someone with no simulation or domain expertise — just explain aloud the process in detail to them. In fact, you can use this technique even if you are alone — explain it to your goldfish or your pet rock. While it may sound silly, it actually works. Explaining your problem out loud forces you to think about it from a different perspective and quite often lets you find and solve your own problem!
- RTFM - Read The (um, er) Friendly Manual. Okay, no one likes to read manuals. But sometimes if all else fails, it might be time to crack the textbook, reference guide, or interactive help and look up how something is really supposed to work.

You don't necessarily need to do the above steps in order. In fact you might get better results if you start at the bottom or skip around. But definitely use the debugging tools discussed below to facilitate this debugging process.
Although animation and numerical output provide a start for debugging, better simulation products provide a set of tools to help modelers understand what is happening in their models. The basic tools include Trace, Break, Watch, Step, Profiler, and the Search Window. Trace provides a detailed description of what is happening as the model executes. It generally describes entity flow as well as the events and their side-effects that take place. Simio's trace is at the Process Step level — each Step in a process generates one or more trace statements. Until you learn about processes, Simio trace may seem hard to read, but once you understand Steps, you will begin to appreciate the rich detail made available to you. The Simio Trace can be filtered for easier use as well as exported to an external file for post-run analysis. Break provides a way to pause the simulation at a predetermined point. The most basic capability is to pause at a specified time. Much more useful is the ability to pause when an entity reaches a specified point (like arrival to a server). More sophisticated capability allows conditional breaks such as for "the third entity that reaches point A" or "the first entity to arrive after time 125." Basic break functionality in Simio is found by right-clicking on an object or step. More sophisticated break behavior is available in Simio via the Break Window. Watch provides a way to explore the system state in a model. Typically when a simulation is paused you can look at model and object-level states to get an improved understanding of how and why model decisions and actions are being taken and their side effects. In Simio, watch capability is found by right-clicking on any object. Simio watch provides access to the properties, states, functions, and other aspects of each object as well as the ability to "drill down" into the hierarchy of an object. Step allows you to control model execution by moving time forward by a small amount of activity called a step. This allows you to examine the actions more carefully and the side effects of each action. Simio provides two step modes. When you are viewing the facility view, the Step button moves the active entity forward to its next time advance. When you are viewing the process window the Step button moves the entity (token) forward one process step. Profiler is useful when your problem is related to execution speed. It provides an internal analysis of what is consuming your execution speed. Identification of a particular step as processor intensive might indicate a model problem or an opportunity to improve execution speed by using a different modeling approach. Search provides an interactive way to find every place in your project where a word or character string (like a symbol name) is used. Perhaps you want to find every place you have referenced the object named "Teller." Or perhaps you have used a state named "Counter" in a few places and you want to change the expressions it is used in. Trace, Break, Watch, and Step can all be used simultaneously for a very powerful debugging tool set. Combining these tools with the debugging process described above provides a good mechanism for better understanding your model and producing the best results. The Trace, Errors, Breakpoints, Watch, Search, and Profile windows can all be accessed on the Project Home ribbon. In most cases these windows open automatically as a result of some action. For example when you enable trace on the Run ribbon, the Trace window will open. 
If you cause a syntax error while typing an expression, the Errors window will open. But sometimes you may want to use these buttons to reopen a window you have closed (e.g., the Errors window), or open a window for extra capability (e.g., the Breakpoints window). Figure 4.38 illustrates these windows in a typical use. The black circle indicates the button used to display the Trace window and turn on the generation of model trace. You can see the trace from the running model until execution was automatically paused (a break) when the Break point set on the Server2 entry node (red circle) is reached. At that point the Step button (blue circle) was pushed and that resulted in an additional 11 lines of trace being generated as the entity moves forward until its next time advance (yellow background). The Watch window on the right side illustrates using a watch on Server2 to explore its input buffer and each entity in that buffer. Figure 4.38: Using trace, watch, and break windows in custom layout.

In the default arrangement, these debugging windows display as multiple tabs on the same window. You can drag and drop the individual windows to reproduce the window arrangement in Figure 4.38, or any window arrangement that meets your needs as discussed in Section 4.1.8. Since these windows can be repositioned even on other screens, sometimes you might lose track of a window. In this case press the Reset button found on the Project Home ribbon and it will reset those window positions back to their default layout.

In this chapter we've introduced Simio and developed several simple Simio models using the Standard Library and using Simio processes. Along the way, we integrated statistical analysis of simulation output, which is just as important as modeling in actual simulation projects, via topics like replications, run length, warm-up, model verification, and the analysis capabilities made possible by the powerful SMORE plots. We started out with an abstract queueing model, and added some interesting context in order to model a somewhat realistic queueing system. In the process, we also discussed use of Simio Paths to model entity movement and basics of animation with Simio. All of these Simio and simulation-related topics will be covered in more detail in the subsequent chapters, with more interesting models.

Problems
1. Create a model similar to Model 4-1 except use an arrival rate, \(\lambda\), of 120 entities per hour and a service rate, \(\mu\), of 190 entities per hour. Run your model for 100 hours and report the number of entities that were created, the number that completed service, and the average time entities spend in the system.
2. Develop a queueing model for the Simio model from Problem 1 and compute the exact values for the steady state time entities spend in the system and the expected number of entities processed in 100 hours.
3. Using the model from Problem 1, create an experiment that includes 100 replications. Run the experiment and observe the SMORE plot for the time entities spend in the system. Experiment with the various SMORE plot settings — viewing the histogram, rotating the plot, changing the upper and lower percentile values.
4. If you run the experiment from Problem 3 five (or any number of) times, you will always get the exact same results even though the interarrival and service times are supposed to be random. Why is this?
5. You develop a model of a system. As part of your verification, you also develop some expectation about the results that you should get.
When you run the model, however, the results do not match your expectations. What are the three possible explanations for this mismatch?
6. In the context of simulation modeling, what is a replication and how, in general, do you determine how many replications to run for a given model?
7. What is the difference between a steady-state simulation and a terminating simulation?
8. What are the initial transient period and the warm-up period for a steady-state simulation?
9. Replicate the model from Problem 1 using Simio processes (i.e., not using objects from the Standard Library). Compare the run times for this model and the model from Problem 1 for 50 replications of length 100 hours.
10. Run the ATM model (Model 4-3) for 10 replications of length 240 hours (10 days). What are the maximum number of customers in the system and the maximum average number of customers in the system (recall that we mentioned that our model would not consider the physical space in the ATM)? Was our assumption reasonable (that we did not need to consider the physical space, that is)?
11. Describe how SMORE plots give a quick view of a system's risk and the sampling error associated with a run of the model.
12. Animate your model from Problem 1 assuming that you are modeling a cashier at a fast food restaurant — the entities represent customers and the server represents the cashier at the cash register. Use Simio's standard symbols for your animation.
13. Modify your model from Problem 1 assuming that you are modeling a manufacturing process that involves drilling holes in a steel plate. The drilling machine has capacity for up to 3 parts at a time (\(c=3\) in queueing terms). The arrival rate should be 120 parts per hour and the processing rate should be 50 parts per hour. Use Trimble 3D Warehouse to find appropriate symbols for the entities (steel plates) and the server (a drill press or other hole-making device). Add a label to your animation to show how many parts are being processed as the model runs.
14. Build Simio models to confirm and cross-check the steady-state queueing-theoretic results for the four specific queueing models whose exact steady-state output performance metrics are given in Section 2.3. Remember that your Simio models are initialized empty and idle, and that they produce results that are subject to statistical variation, so design and run Simio Experiments to deal with both of these issues; make your own decisions about things like run length, number of replications, and Warm-up Period, possibly after some trial and error. In each case, first compute numerical values for the queueing-theoretic steady-state output performance metrics \(W_q\), \(W\), \(L_q\), \(L\), and \(\rho\) from the results in Section 2.3, and then compare these with your simulation estimates and confidence intervals. All time units are in minutes, and use minutes as well throughout your Simio models. (A minimal calculator sketch for the M/M/1 and M/M/c formulas appears after this problem list.)
(a) \(M/M/1\) queue with arrival rate \(\lambda = 1\) per minute and service rate \(\mu = 1/0.9\) per minute.
(b) \(M/M/4\) queue with arrival rate \(\lambda = 2.4\) per minute and service rate \(\mu = 0.7\) per minute for each of the four individual servers (the same parameters used in the mmc.exe command-line program shown in Figure 2.2).
(c) \(M/G/1\) queue with arrival rate \(\lambda = 1\) per minute and service-time distribution's being gamma(2.00, 0.45) (shape and scale parameters, respectively). You may need to do some investigation about properties of the gamma distribution, perhaps via some of the web links in Section 6.1.3.
(d) \(G/M/1\) queue with interarrival-time distribution's being continuous uniform between 1 and 5, and service rate \(\mu = 0.4\) per minute (the same situation shown in Figure 2.3).
15. Build a Simio model to confirm and cross-check the steady-state queueing-theoretic results from your solutions to the \(M/D/1\) queue of Problem 9 in Chapter 2. Remember that your Simio model is initialized empty and idle, and that it produces results that are subject to statistical variation, so design and run a Simio Experiment to deal with both of these issues; make your own decisions about things like run length, number of replications, and Warm-up Period, possibly after some trial and error. For each of the five steady-state queueing metrics, first compute numerical values for the queueing-theoretic steady-state output performance metrics \(W_q\), \(W\), \(L_q\), \(L\), and \(\rho\) from your solutions to Problem 9 in Chapter 2, and then compare these with your simulation estimates and confidence intervals. All time units are in minutes, and use minutes as well throughout your Simio model. Take the arrival rate to be \(\lambda = 1\) per minute, and the service rate to be \(\mu = 1/0.9\) per minute.
16. Repeat Problem 15, except use the \(D/D/1\) queueing model from Problem 10 in Chapter 2.
17. In the processes-based model we developed in Section 4.3, we used the standard Token.TimeCreated token state to determine the time in system. Develop a similar model where you manually mark the arrival time (as illustrated in Figure 4.13) and use that value to record the time in system. Hint: You will need to create a custom token with a state variable to hold the value and use an Assign step to store the current simulation time when the token is created.
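For the queueing cross-check problems above (see the note in Problem 14), it can help to have the textbook formulas in executable form. The sketch below is only a calculator for the standard M/M/1 and M/M/c steady-state results (ρ, Lq, L, Wq, W); it is not a substitute for the derivations in Section 2.3, and the example calls simply echo the parameters of Problems 14(a) and 14(b).

```python
# Sketch: steady-state metrics for M/M/1 and M/M/c queues (standard textbook formulas).
# Intended only as a numerical cross-check against your Simio experiment estimates.
from math import factorial

def mmc_metrics(lam, mu, c):
    """Return (rho, Lq, L, Wq, W) for an M/M/c queue with arrival rate lam,
    per-server service rate mu, and c servers (requires rho = lam/(c*mu) < 1)."""
    a = lam / mu                      # offered load
    rho = a / c
    if rho >= 1:
        raise ValueError("Unstable queue: need lam < c*mu")
    p0 = 1.0 / (sum(a**n / factorial(n) for n in range(c))
                + a**c / (factorial(c) * (1 - rho)))
    lq = p0 * a**c * rho / (factorial(c) * (1 - rho) ** 2)
    wq = lq / lam
    w = wq + 1.0 / mu
    l = lam * w
    return rho, lq, l, wq, w

# Problem 14(a): lambda = 1 per minute, mu = 1/0.9 per minute, single server.
print(mmc_metrics(1.0, 1 / 0.9, 1))   # rho=0.9, Lq=8.1, L=9.0, Wq=8.1, W=9.0 (minutes)

# Problem 14(b): lambda = 2.4 per minute, mu = 0.7 per minute, four servers.
print(mmc_metrics(2.4, 0.7, 4))
```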
Global behavior of delay differential equations model of HIV infection with apoptosis
Songbai Guo and Wanbiao Ma, Department of Applied Mathematics, School of Mathematics and Physics, University of Science and Technology Beijing, Beijing 100083, China
Discrete & Continuous Dynamical Systems - B, January 2016, 21(1): 103-119. doi: 10.3934/dcdsb.2016.21.103
Received February 2015; Revised May 2015; Published November 2015
In this paper, a class of delay differential equations model of HIV infection dynamics with nonlinear transmissions and apoptosis induced by infected cells is proposed, and then the global properties of the model are considered. It is shown that the infection-free equilibrium of the model is globally asymptotically stable if the basic reproduction number $R_{0}<1$, and globally attractive if $R_{0}=1$. The positive equilibrium of the model is locally asymptotically stable if $R_{0}>1$. Furthermore, it is also shown that the model is permanent, and some explicit expressions for the eventual lower bounds of positive solutions of the model are given.
Keywords: global asymptotic stability, permanence, delay differential equations, Lyapunov functional, HIV infection.
Mathematics Subject Classification: Primary: 37N25, 34A34; Secondary: 93D20, 34D2.
Citation: Songbai Guo, Wanbiao Ma. Global behavior of delay differential equations model of HIV infection with apoptosis. Discrete & Continuous Dynamical Systems - B, 2016, 21 (1) : 103-119. doi: 10.3934/dcdsb.2016.21.103
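The excerpt above includes only the abstract, not the model equations, so nothing below should be read as the authors' system. As a loosely related illustration of how a delayed within-host infection model can be explored numerically, here is a minimal fixed-step integration of a generic target-cell/infected-cell/virus model with a single discrete delay; all equations, parameter values, and variable names are hypothetical stand-ins.

```python
# Hypothetical illustration only: a generic delayed within-host infection model,
# integrated with a fixed-step Euler scheme that keeps a history buffer for the delay.
#   dT/dt = s - d*T - k*V*T
#   dI/dt = k*V(t-tau)*T(t-tau)*exp(-m*tau) - delta*I
#   dV/dt = p*I - c*V
import numpy as np

s, d, k, m, delta, p, c = 10.0, 0.01, 2e-4, 0.05, 0.5, 100.0, 3.0
tau, h, t_end = 1.0, 0.01, 200.0

steps = int(t_end / h)
lag = int(tau / h)                      # number of steps corresponding to the delay
T = np.empty(steps + 1); I = np.empty(steps + 1); V = np.empty(steps + 1)
T[0], I[0], V[0] = 1000.0, 0.0, 1e-3    # constant pre-history equal to the initial state

for n in range(steps):
    Td = T[max(n - lag, 0)]             # delayed values (constant before t = 0)
    Vd = V[max(n - lag, 0)]
    T[n + 1] = T[n] + h * (s - d * T[n] - k * V[n] * T[n])
    I[n + 1] = I[n] + h * (k * Vd * Td * np.exp(-m * tau) - delta * I[n])
    V[n + 1] = V[n] + h * (p * I[n] - c * V[n])

print(f"final state: T={T[-1]:.1f}, I={I[-1]:.3f}, V={V[-1]:.3f}")
```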
Emilia coccinae (SIMS) G Extract improves memory impairment, cholinergic dysfunction, and oxidative stress damage in scopolamine-treated rats
Harquin Simplice Foyet, Hervé Hervé Ngatanko Abaïssou, Eglantine Wado, Emmanuel Asongalem Acha and Ciobica Alin
© Foyet et al. 2015
E. coccinae (SIMS) G. (Asteraceae) is an annual plant commonly found throughout the plains of Central Africa and widely used in Cameroonian folk medicine for the treatment of fever and convulsions in children. We previously reported that the methanolic extract of this plant improved spatial memory; however, no underlying mechanism was explored. The present study was undertaken to investigate the effects of the hydroalcoholic extract of Emilia coccinae on memory in scopolamine-treated rats and to propose possible mechanisms of action. Novel object recognition and Y-maze paradigms were used to test memory, while the oxidative profile, AChE activity and ACh level of the whole brain were assessed to outline the mechanism of the nootropic activity of the extract. Doses of 200 and 400 mg/kg of the extract were chronically administered for 14 consecutive days in separate groups of rats treated intraperitoneally with scopolamine (1.5 mg/kg). The hydroalcoholic extract of Emilia coccinae (HEEC) at the dose of 200 mg/kg significantly improved the memory of rats and reversed the amnesia induced by scopolamine. In addition, we showed that this extract decreased acetylcholinesterase activity while increasing acetylcholine levels in the brain. HEEC (200 and 400 mg/kg) significantly increased antioxidant enzyme activities (SOD, GSH and CAT) and reduced lipid peroxidation (MDA level) in rat whole brain homogenates. Taken together, our results suggested that the hydroalcoholic extract of Emilia coccinae ameliorated the cognitive dysfunction in scopolamine-treated rats through the blockage of the oxidative effect of scopolamine and inhibition of AChE activity.
Keywords: Emilia coccinae, Spatial memory
Alzheimer's disease (AD) is a neurodegenerative disease related to cognitive and behavioral impairments, characterized by loss or decline in memory and severe cognitive impairments. Older age, genetic predisposition, other acquired medical conditions, stress and emotions are factors that may contribute to various deficiencies observed in AD such as memory loss, impaired learning, and dementia, or to more ominous threats like Parkinson's and Alzheimer's diseases [1, 2]. To date, the main cause of AD remains unclear, but β-amyloid and tau protein aggregation, reduced acetylcholine (ACh), and glutamatergic deficit are regarded as the principal pathogenesis of AD [3]. Recent studies have speculated that free radicals produced during oxidative stress and/or inflammatory processes are also pathologically important in AD [4]. Accumulation of free radical damage and alterations in the activities of antioxidant enzymes such as superoxide dismutase and catalase have been observed in the central nervous system of AD patients [5]. Moreover, in AD and mild cognitive impairment brains, the increased oxidative damage to lipids and proteins and the decline of glutathione and antioxidant enzyme activities correlate with the severity of the disease, suggesting that oxidative stress may be one of the alterations that occur during the initiation and development of AD [6, 7]. In the last decades, many studies have been performed in order to develop an efficient drug for AD therapy.
However, very few of them have obtained the final approvals in most countries. Unfortunately, none of the drugs can reverse nor stop the course of the disease. In this way, most of these substances, which are used for the symptomatic treatment of AD since 1990, are cholinesterase inhibitors that prolong the acetylcholine (ACh) availability after it is released from cholinergic nerve endings, through the inhibition of acetylcholinesterase (AChE). However, non selectivity of these drugs, their limited efficacy, and poor bioavailability, adverse cholinergic side effects in the periphery, narrow therapeutic ranges, and hepatotoxicity are among the several limitations to their therapeutic success [8, 9]. Moreover, the drugs could exert several important side effects including diarrhea, nausea, insomnia, muscles cramps, vomiting, fatigue and loss of appetite [10]. Thus, considering the fact that the management of AD can be a major challenge for the health care systems, all around the globe many research are now undertaken in order to valorize phytopharmaceutical alternatives approaches which is widely available and accessible at low costs. In this way, numerous plants like Areca catechu [11], Boswellia papyrifera [12] or Malva parviflora [13], have been reported to treat cognitive disorders; learning and memory disorders, which are commonly observed in AD. Thus, there is a growing interest in the use of plant extract to improve leaning, memory and general cognitive function, with a large number of reviews highlighting the benefits of some phytochemicals as brain function modulators [14, 15]. Moreover, in one of our previous studies, we reported the in vitro antioxidant effects of Emilia coccinae, which could have been correlated also with some facilitating effects of the same extract on an animal model of anxiety and depression [16]. In that study, the standardized methanolic extract of Emilia coccinae significantly increased the number of open arm entries and time spent in the open arms of the elevated plus maze test and was as effective as Imipramine in inducing shortening of immobility time in forced swimming test. Phytochemical analysis revealed that the dry leaves contained 863.04 ± 5.42 mg of GAE/100 g of dry material, which represents a very good content of total phenolics compounds while the total reducing power was about 4.71 ± 0.04 g of Vit C equivalent/100 g of dry material. Some studies hypothesized that the presence of depression or anxiety would hinder some aspects of memory performance. Depression, when compounded by anxiety, has not only an adverse effect on immediate recall and amount of acquisition, but also on the retrieval of newly learned information [17], illustrating the effects of comorbidity on memory and learning. It thus appeared reasonable for us to explore the effect of this extract with both anxiolytic and antidepressant activities in short-term memory. In fact, E. coccinae (SIMS) G. (Asteraceae) is an annual plant commonly found throughout the plain of the Central Africa and in dry area up to 2000 m altitude in the eastern Africa [18]. This species belongs to the genus Emilia represented by about 100 species, with 50 of them found in Africa [18]. In the folk African traditional medicine, this plant is used for the treatment of fever and convulsions in children [19]. The dry leaves are used for the treatment of wounds, sores and sinusitis ulcer, ringworm [20]. 
In addition, in some tribe in the western part of Cameroon, the infusion of the dry leaves of this plant is used as a potent sedative and restorative. Therefore, considering the promising behavioral result obtained with this specie, the modest cognitive benefit offered by the available treatments with some time the severe side effects, we decided to investigate the effect of chronic administration of the hydroalcoholic extract of E. coccinae in a experimental model of cognitive impairment in rats, as induced by scopolamine, an anti-muscarinic drug. Plant material and extraction Fresh leaves of E. coccinae were harvested in February 2013 at Etoug Ebe in the Centre Region of Cameroon and authenticated at the National Herbarium-Yaoundé, where the voucher specimen was conserved under the reference number 6297/HNC. The leaves were washed and dried at room temperature (24–26 °C during 10 days) and pulverized into a coarse powder using a suitable grinder. The powder was stored in a dark and airtight container and kept in −30 °C until further analysis. Hydroalcoholic extraction The preparation of the plant extract was as described by Haque et al. [21]. Briefly, 150 g of powdered material was placed in a clean, flat-bottomed glass container and soaked in 1.5 L of methanol/water (70/30, V/V). Then the extraction was carried out by using an Ultrasonic Sound Bath accompanied by sonication (45 °C, 25 min). The entire mixture underwent a coarse filtration through white cotton material. The extract obtained was filtered through Whitman filter paper (Bibby RE200, Sterilin Ltd., UK). The solution was then concentrated using a rotavapor in high vacuum up to 60 °C. The filtrate was later frozen and lyophilized to obtain the hydroalcoholic (27.90 g) extracts. Experimental animals Male Wistar albino rats (n = 30), weighing 100–180 g at the beginning of the experiment, were obtained from the Laboratory of Phytopharmacology (LAPHYPHA) of the University of Dschang, Cameroun. The animals were housed in polyacrylic cages (6 animals/cage) and maintained in a temperature and light-controlled room (25 ± 2 °C, a 12-h cycle). The animals were acclimatized to laboratory condition for 7 days before the start of experiment. Prior to and after treatment, the animals were fasted for 12 and 7 h, respectively. However, all animals were allowed to drink water ad libitum. Rats were treated in accordance with the guidelines of the Cameroonian bioethics committee (reg N°.FWA-IRB00001954) and in accordance with NIH- Care and Use of Laboratory Animals manual (8th Edition). The present study was approved by the Ethic Committee of the Faculty of Sciences of the University of Maroua (Ref. N°14/0261/Uma/D/FS/VD-RC), Cameroon. Efforts were also made to minimize animal suffering and to reduce the number of animal used in the experiment. Each animal was tested in only one behavioral test. The experiments were performed in the morning (8–12 h), and the light level in the experimental room was 200 lux. Scopolamine, chloral hydrate, acetylthiocholine iodide, 5, 5-dithiobis (2-nitro-benzoic acid) (DTNB), 2-thiobabituric acid (TBA), Piracetam were purchased from Sigma–Aldrich, USA. All drugs and extracts were freshly prepared in saline on the day of the experiments. Scopolamine and piracetam was administered intraperitoneally (i.p.) to the rats while the extract was administered by gavage (per os, p.o.). Control animals received oral administration of 10 ml/kg body of the vehicle. 
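For orientation, the per-animal dosing described above reduces to a simple volume calculation. The sketch below is illustrative only; the stock concentrations and body weight are hypothetical and are not taken from the paper.

```python
# Illustrative dose-volume calculation for gavage/i.p. dosing.
# Stock concentrations and body weight below are hypothetical examples.

def dose_volume_ml(body_weight_g: float, dose_mg_per_kg: float,
                   stock_mg_per_ml: float) -> float:
    """Volume (mL) of a stock solution needed to deliver dose_mg_per_kg."""
    body_weight_kg = body_weight_g / 1000.0
    dose_mg = dose_mg_per_kg * body_weight_kg
    return dose_mg / stock_mg_per_ml

# Example: a 150 g rat receiving 200 mg/kg extract from a 20 mg/mL stock,
# and 1.5 mg/kg scopolamine from a 0.5 mg/mL stock.
print(round(dose_volume_ml(150, 200, 20), 2))   # 1.5 mL by gavage
print(round(dose_volume_ml(150, 1.5, 0.5), 2))  # 0.45 mL i.p.
```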
Animal treatments
Rats were randomly allocated into five groups of 6 animals each as follows: Group I received saline solution and served as normal control. Group II received scopolamine (1.5 mg/kg) administered intraperitoneally and served as model group. Both groups received normal saline for 14 days. Group III received piracetam (150 mg/kg) through intraperitoneal injection (i.p.). Groups IV and V received HEEC at the doses of 200 and 400 mg/kg, respectively, by gavage for 14 days. The doses were fixed based on earlier studies on the hydroalcoholic extract of Emilia coccinae in our laboratory. Scopolamine (1.5 mg/kg), as a disease inducer, was administered as a single dose 30 min after drug administration through the intraperitoneal (i.p.) route to all the groups except the normal control group. The same procedure was carried out on days 1, 4 and 8 because of the transient effect of scopolamine on memory impairment [22, 23]. Behavioral testing was done on day 8 and day 14 following extract administration, 30 min after scopolamine injection. The aforementioned dosage and the duration of treatment were selected following screening studies and previously published reports regarding Emilia coccinae biological effects [16] (see Table 1).

Table 1 Experimental design
Normal control: treated with saline solution
Negative control group: treated with scopolamine 1.5 mg/kg, i.p.
Piracetam + Scopolamine: 150 mg/kg + 1.5 mg/kg, i.p.
Extract low dose + Scopolamine: 200 mg/kg, p.o. + 1.5 mg/kg, i.p.
Extract high dose + Scopolamine: 400 mg/kg, p.o. + 1.5 mg/kg, i.p.

Behavioral evaluation
The animals' behavioral activities were tracked and recorded using a trial version of ANY-maze 4.9 behavioral software.

Y-Maze test
Y-maze analysis has been shown to be a reliable, noninvasive test to determine cognitive changes in Wistar rats through the measurement of spontaneous alternation behavior in the Y-maze task [24]. The maze used in the present study consisted of three arms (35 cm long, 25 cm high and 10 cm wide) and an equilateral triangular central area. All animals were tested in a randomized order at the start and end of the experimental protocol. Rats were treated once daily with the hydroalcoholic extract of E. coccinae leaves (200 and 400 mg/kg, per os), piracetam (150 mg/kg, i.p.), or saline (10 ml/kg; per os) during 14 consecutive days. Thirty minutes after the aforementioned drug had been administered, rats were placed at the end of one arm and allowed to move freely through the maze for 8 min; every session was stopped after 8 min. An arm entry was counted when the hind paws of the rat were completely within the arm. Spontaneous alternation behavior was defined as three consecutive entries in three different arms (i.e. A, B, C or B, C, A, etc.). The percentage alternation score was calculated using the following formula: (Total alternation number/(Total number of entries − 2)) × 100. Furthermore, the total number of arm entries was used as a measure of general activity of the animals. The maze was wiped clean with 70 % ethanol between each animal to minimize odor cues [25, 26].

Object recognition test
This test was performed as described by El-Marasy et al. [27]. Briefly, three days before testing, each rat was allowed to explore the apparatus for 2 min, while on the testing day, 30 min following scopolamine injection, a session of two trials, 2 min each, was allowed. In the "sample" trial (T1), two identical objects were placed in two opposite corners of the apparatus.
A rat was placed in the apparatus and was left to explore these two identical objects. After T1, the rat was placed back in its home cage and an inter-trial interval of 1 h was given. Subsequently, the "choice" trial (T2) was performed. In T2, a new object (N) replaced one of the objects that were presented in T1, then rats were exposed again to two different objects: the familiar (F) and the new one (N). Exploration was defined as follows: directing the nose toward the object at a distance of no more than 2 cm and/or touching the object with the nose. From this measure, a series of variables were then calculated: the total time spent in exploring the two identical objects in T1, and that spent in exploring the two different objects, F and N in T2. The distinction between F and N in T2 was measured by comparing the time spent in exploring the F with that spent in exploring the N. DI is the dissimilarity index and represents the difference in exploration time expressed as a proportion of the total time spent exploring the two objects in T2. DI was then calculated using the following formula: $$ DI=\frac{N-F}{N+F} $$ Estimation of biochemical parameters Brain tissue preparation The rats were decapitated on day 14th after the last behavioral testing under chloroform anesthesia. The skull was cut open and the brain was exposed from its dorsal side. The whole brain was immediately removed and cleaned with chilled normal saline on the ice. A 10 % (w/v) homogenate of brain samples (0.03 M sodium phosphate buffer, pH 7.4) was prepared. The homogenate was centrifuged (15 min at 3000 rpm) and the supernatant was used for assays of SOD, CAT activities, total GSH content and MDA level. In addition, the levels of AChE and ACh were estimated. Estimation of antioxidant enzymes Determination of SOD The activity of superoxide dismutase (SOD) was assayed by monitoring its ability to inhibit the photochemical reduction of nitroblue tetrazolium (NBT). Each 1.5 mL reaction mixture contained 100 mM Tris/HCl (pH 7.8), 75 mM NBT, 2 M riboflavin, 6 mM EDTA, and 200 L of supernatant. Monitoring the increase in absorbance at 560 nm followed the production of blue formazan. One unit of SOD is defined as the quantity required to inhibit the rate of NBT reduction by 50 % as described by Winterbourn et al. [28]. Estimation of GSH To measure the reduced glutathione (GSH) level, the tissue homogenate (in 0.1 M phosphate buffer pH 7.4) was taken. The procedure was followed as previously described by Shamnas et al. [29]. Briefly, the homogenate was added with equal volume of trichloroacetic acid (TBA, 20 %) containing 1 mM EDTA to precipitate the proteins. The mixture was kept for 5 min prior to centrifugation. The supernatant (200 μl) was then transferred to a new set of test tubes and added 1.8 ml of the Ellman's reagent (5, 5'-dithio bis-2-nitrobenzoic acid) (0.1 mM) was prepared in 0.3 M phosphate buffer with 1 % of sodium citrate solution). Then all the test tubes rose to the volume of 2 mL. After the completion of the total reaction, the solutions were measured at 412 nm against blank. Absorbance values were compared with a standard curve generated from standard curve from known GSH. Determination of CAT Catalase (CAT) activity was assayed following the method of Hritcu et al. [30]. The reaction mixture consisted of 150 μL phosphate buffer (0.01 M, pH 7.0), 100 μL supernatant. 
Reaction was started by adding 250 μL of 0.16 M H2O2, incubated at 37 °C for 1 min, and the reaction was stopped by the addition of 1.0 mL of dichromate acetic acid reagent. The tubes were immediately kept in a boiling water bath for 15 min, while the green color developed during the reaction was read at 570 nm on a spectrophotometer. Control tubes, devoid of enzyme, were also processed in parallel. The difference in absorbance per unit was used as the measure of catalase activity.

Determination of MDA
Malondialdehyde (MDA), which is a measure of lipid peroxidation, was spectrophotometrically measured using the thiobarbituric acid assay [7, 31]. 200 μL of supernatant was added and briefly mixed with 1 mL of 50 % trichloroacetic acid in 0.1 M HCl and 1 mL of 26 mM thiobarbituric acid. After vortex mixing, samples were maintained at 95 °C for 20 min. Subsequently, samples were centrifuged at 3000 rpm for 10 min and supernatants were read at 532 nm. A calibration curve was constructed using MDA as standard and the results were expressed as nmol/g protein.

Estimation of brain neurotransmitters
Estimation of acetylcholinesterase (AChE) and acetylcholine (ACh) activities
Acetylcholinesterase activity was estimated by using an artificial substrate, acetylthiocholine (ATC). In the medium, thiocholine released due to the cleavage of ATC by AChE is allowed to react with the -SH reagent 5,5'-dithiobis-2-nitrobenzoic acid (DTNB), which is reduced to a yellow colored anion called thionitrobenzoic acid, measurable at the wavelength of 412 nm. The concentration of thionitrobenzoic acid was spectrophotometrically detected and taken as a direct estimate of the AChE activity [32]. The acetylcholine level in the whole brain was estimated using the hydroxylamine method as described by Stepankova et al. [33]. The reaction mixture was prepared and mixed with aqueous hydroxylamine hydrochloride and 3.5 M aqueous KOH (1:1 v/v). The resulting mixture was mixed for 2 min to convert ACh totally to acethydroxamic acid. The pH value was changed again by adding conc. HCl/H2O (1:2 v/v). The reddish brown color formed after adding 0.37 M ferric nitrate was read at 540 nm. The ACh content was calculated with reference to the standard values.

Data were presented as mean ± SEM. One-way ANOVA (for Y-maze data) and two-way ANOVA (for novel object recognition and Y-maze data), followed by the Bonferroni Multiple Comparison Test, were performed using GraphPad Prism version 5.00 for Windows (GraphPad Software, San Diego, California, USA). A probability level of 0.05 or less was accepted as significant. Pearson's correlation coefficient and regression analysis were used to evaluate the connection between behavioral responses and biochemical parameters.

Effects of the extract in the Y-Maze task
In the Y-maze task, the hydroalcoholic extract of E. coccinae leaves significantly increased spontaneous alternation behavior in rats with cognitive deficit induced by scopolamine after eight days of administration compared to the control group (F (4, 12) = 3.25, P < 0.0102) (Fig. 1). Daily oral administration of HEEC (200 and 400 mg/kg) for 14 consecutive days before scopolamine injection also significantly increased the correct trials of rats in the Y-maze compared to the scopolamine group that received no extract, suggesting effects on short-term memory. Post-hoc analysis revealed no significant difference between the 2 groups of extract.
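For clarity, the two behavioural scores defined in the Methods (the spontaneous alternation percentage and the discrimination index DI) reduce to simple arithmetic. The following is a minimal illustrative sketch; the arm-entry sequence and exploration times are invented, not data from this study.

```python
# Spontaneous alternation (%) and discrimination index (DI), as defined above.
# The arm-entry sequence and exploration times below are made-up examples.

def spontaneous_alternation(entries):
    """% alternation = triplets of 3 distinct consecutive arms / (entries - 2) * 100."""
    triplets = [entries[i:i + 3] for i in range(len(entries) - 2)]
    alternations = sum(1 for t in triplets if len(set(t)) == 3)
    return 100.0 * alternations / (len(entries) - 2)

def discrimination_index(time_novel, time_familiar):
    """DI = (N - F) / (N + F); ranges from -1 to 1, with 0 meaning no preference."""
    return (time_novel - time_familiar) / (time_novel + time_familiar)

print(spontaneous_alternation(list("ABCACBABC")))  # 9 entries -> 71.4 %
print(discrimination_index(32.0, 18.0))            # 0.28
```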
Also, post hoc analyses revealed significant statistical differences (p < 0.0001) between animals exposed to scopolamine and those treated with HEEC (Fig. 1). Likewise, the HEEC (200 and 400 mg/kg) substantially reduced the number of line crossed in the Y-maze (Fig. 2). However, this inhibition of the locomotory activity was more pronounced at the dose of 200 mg/kg and after 14 days of treatment (p < 0.0001) in comparison with the scopolamine-treated group. No significant reduction in the ambulatory activity was noted with piracetam. Effect of the hydroalcoholic extract of E. coccinae leaves and piracetam (Pir) on the spontaneous alternation percentage after 8 and 14 days of treatments on scopolamine (Scop) treated animal in Y-maze task. Each column represents mean ± S.E.M. of 6 animals. Data analysis was performed using One way ANOVA followed by Bonferroni posttests. *P < 0.05; **P < 0.001; **P < 0.0001 vs. control animals; #P < 0.05 vs. normal animals Effect of the hydroalcoholic extract of E. coccinae leaves and piracetam (Pir) on the locomotory activity of animals in Y-maze task. Each column represents mean ± S.E.M. of 6 animals. Data analysis was performed using One way ANOVA followed by Bonferroni posttests. *P < 0.05; **P < 0.001; **P < 0.0001 vs. control animals; #P < 0.05 vs. normal animals Effects of the extract in the Novel object recognition All rats treated with the hydroalcoholic extract from leaves and piracetam, as well as the control animals took more time to explore new object after 8 or 14 days of treatment. However, the time taken by the animals subjected to scopolamine injection was significantly less than that of normal animals, indicating the impairment of the memory process. The treatment of the animals by Piracetam and HEEC (200 mg/kg) significantly ameliorated the exploratory time of the rats in this task as compared to scopolamine group (Fig. 3). Although the exploratory time increased at the dose of 400 mg/kg, this was not significant when compared to scopolamine group. ANOVA of all differences yielded F (2, 38) = 4.034; (p < 0.032). Discrimination Index data revealed a significant effect of HEEC after 8 days (F (4, 18) = 2.92, P < 0.0015) and after 14 days (F (3, 8) = 4.06, P < 0.0003) of treatment. Post hoc comparisons indicated that the HEEC 200 and 400 mg/kg-treated rats discriminated significantly better N than F with respect to their scopolamine-treated counterparts (P < 0.05, 8 days; and P < 0.001, 14 days) (Fig. 4). Figure 3 Effect of MEEC and piracetam (Pir) on the exploration time of the familiar vs. the novel object in the object recognition test. Rats received 14 daily administration of HEEC (200 mg/kg, p.o.), or donepezil (1.5 mg/kg, p.o.) used as a standard drug Effect of HEEC discrimination index (DI) of scopolamine-induced memory impairment in rats in the object recognition test. Rats received 14 daily administration of HEEC (200 and 400 mg/kg, p.o.) or Piracetam (150 mg/kg, i.p.) used as a standard drug Effect of the hydroalcoholic extract of E. coccinae on brain lipid peroxidation and antioxidant enzymes Measurement of lipid peroxidation As shown in Fig. 5a, pre-treatment with hydroalcoholic extract of E. coccinae (200 and 400 mg/kg) for 14 successive days shows in a dose dependant manner significantly reduction (F (3, 16) = 3.23, p < 0.00034) of the MDA level in respect to the scopolamine treated group. Post hoc analysis also showed a significant difference between scopolamine versus HEEC treated group (p < 0.05). 
However, the decrease in MDA level of the piracetam- and HEEC-treated groups was still below that of normal rats.

Effect of the hydroalcoholic extract of E. coccinae administration on MDA (a), SOD (b), GSH (c) and CAT (d) levels of the brain tissue. Data were analyzed by one-way ANOVA followed by Bonferroni post-hoc test. Each column represents mean ± S.E.M. of 6 animals. ***p < 0.001, **p < 0.01 and *p < 0.05 when compared to scopolamine treated group; ###p < 0.001, when compared to normal animals

Effect of HEEC on superoxide dismutase
When treated with scopolamine, the level of brain SOD decreased significantly (p < 0.001) when compared with that of normal animals. One-way ANOVA followed by post hoc test showed that the treatment of amnesic rats with HEEC (200 and 400 mg/kg) significantly (F (4, 20) = 2.86, p < 0.00023) increased the SOD level when compared to the scopolamine group (Fig. 5b). The standard drug piracetam (150 mg/kg) also increased the SOD level, but this was not significant.

GSH activity in the brain
The effect of the hydroalcoholic extract HEEC on glutathione content in the brain is summarized in Fig. 5c. The GSH level of brain homogenate in the scopolamine group (1.54 ± 0.27) was found to be significantly lower (p < 0.01) than the GSH level in the normal group (2.78 ± 0.21). At the end of the treatment period with HEEC (200 and 400 mg/kg) or piracetam (150 mg/kg), the GSH level was found to be increased in a highly significant manner (P < 0.05, 400 mg/kg; P < 0.001, piracetam). Piracetam almost completely restored the glutathione level in scopolamine treated groups to the normal level.

CAT activity in the brain
As shown in Fig. 5d, scopolamine injection brought about a highly significant reduction (p < 0.001) in the CAT level in the rat brain. The treatment of the scopolamine-injected rats with the plant extract resulted in a significant and dose-dependent increase in catalase activity when compared with scopolamine-injected rats. More importantly, a significant positive correlation was evidenced by determination of the linear regression between SOD vs. spontaneous alternation (r = 0.75, p < 0.0002) (Fig. 6a) and CAT vs. spontaneous alternation (r = 0.80, p < 0.0002) (Fig. 6d) in scopolamine exposed groups treated with the hydroalcoholic extract of E. coccinae (200 mg/kg). The correlation between GSH vs. spontaneous alternation was positive but very weak (r = 0.51, p = 0.017) (Fig. 6c), while a significant negative correlation was obtained between MDA and spontaneous alternation percentage (r = 0.54, p = 0.0073) (Fig. 6a).

Correlation between spontaneous alternation percentage vs. MDA (a), spontaneous alternation percentage vs. SOD (b), spontaneous alternation percentage vs. GSH (c) and spontaneous alternation percentage vs. CAT (d) in the hydroalcoholic extract-treated groups (200 mg/kg); with the 95 % confidence band of the best-fit line

Effect of hydroalcoholic extracts of E. coccinae on brain AChE and ACh activities
The effect of hydroalcoholic extracts of E. coccinae on brain AChE and ACh activities is shown in Fig. 5. Scopolamine administration resulted in a significant increase in the AChE activity with respect to the normal group. Meanwhile, hydroalcoholic extracts of E. coccinae significantly (F (2, 86) = 6.04, p < 0.0023) reduced AChE activity, reaching similar values to the normal group (Fig. 6). These results show that administration of HEEC suppressed the increase of AChE activity caused by scopolamine administration.
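As a side note, the correlation and regression analysis reported above can be reproduced with standard statistical routines. The sketch below uses synthetic values (not the study's data) purely to illustrate the computation of Pearson's r and the best-fit line.

```python
# Sketch of a correlation/regression analysis between an antioxidant marker
# (e.g., SOD activity) and spontaneous alternation (%).  Values are synthetic.
from scipy.stats import pearsonr, linregress

sod = [4.1, 4.8, 5.2, 5.9, 6.3, 6.8]                 # arbitrary enzyme activities
alternation = [52.0, 58.0, 61.0, 66.0, 70.0, 74.0]   # arbitrary alternation %

r, p = pearsonr(sod, alternation)
fit = linregress(sod, alternation)
print(f"Pearson r = {r:.2f}, p = {p:.4f}")
print(f"alternation ≈ {fit.slope:.1f} * SOD + {fit.intercept:.1f}")
```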
At the same time, we noticed a significant decrease in the ACh content in the brain of scopolamine-treated rats (Fig. 7). One-way ANOVA analysis revealed that, in contrast to the ACh reduction, the rats treated with HEEC and piracetam showed a significant recovery (F (2, 86) = 4.22, p < 0.012) of ACh content in the brain region. However, with Bonferroni's Multiple Comparison post test, this recovery was significant (P < 0.05) only with the HEEC at the dose of 200 mg/kg.

The effect of 14 days oral administration of the hydroalcoholic extract of E. coccinae (HEEC, 200 and 400 mg/kg) and piracetam (Pir, 150 mg/kg) on AChE and ACh activity in rat's brain

In this report, we demonstrate that subchronic treatment with HEEC improved learning and memory behaviors in adult male rats subjected to scopolamine memory disruption. Specifically, we have shown that the oral administration of HEEC (200 and 400 mg/kg) after 8 and 14 days ameliorated memory retention in the novel object recognition task and improved spatial learning in the Y-maze. Acetylcholine is a neurotransmitter acting on cholinergic receptors and widely distributed throughout the brain. During the last decade, it has received much attention in memory research among neuroscientists. It is well established that alteration in the levels of acetylcholine or AChE activity may affect the cholinergic transmission process and lead to a learning and memory deficit which imitates Alzheimer's disease [34]. Scopolamine, a nonselective muscarinic antagonist, blocks cholinergic signaling and produces memory and cognitive dysfunctions, causing learning and memory deficits including long-term and short-term memory impairment [35]. Consequently, intraperitoneal administration of the muscarinic antagonist scopolamine has been widely and successfully exploited as a pharmacological model for AD in rats. Scopolamine induces dysregulation of the cholinergic neuronal pathway and memory circuits in the central nervous system, resulting in serious impairments in learning, acquisition, and short-term retention of spatial memory tasks [36]. In the present study we used two well-characterized memory tasks, the Y-maze and novel object recognition, to evaluate the effect of the HEEC on the scopolamine model of AD in rats. Spontaneous alternation behavior determined using the Y-maze test has been viewed as an indicator of spatial short-term memory [30]. In this test, rats must remember the arm most recently entered in order to alternate arm choice. Furthermore, treatment with scopolamine has been demonstrated to impair spontaneous alternation behavior in animal models [37]. In the present study, spontaneous alternation behavior in scopolamine-treated rats was significantly lower than in rats treated with vehicle. In contrast, HEEC significantly reversed the cognitive deficit induced by scopolamine in the Y-maze task. Together, these results suggest that HEEC enhances short-term or working memory. In the same behavioral task, in contrast with piracetam, the hydroalcoholic extract of E. coccinae, especially at the dose of 400 mg/kg, significantly decreased the number of lines crossed by the rats (ambulatory activity). It is somewhat difficult to evaluate the effect of the extract on motor activity given the variety of locomotor manifestations. In our previous experiment using the same extract at the same doses [16], the motor activity (number of lines crossed) in the open field was decreased, with an increase in the time spent at the center of the maze as compared to control.
At the same time, on the EPM platform, the locomotor index (number of entries into the closed arms) decreased in the extract-treated groups as compared to control. The correlation of anxiety and memory parameters could thus be relevant, since rats displayed lower anxiety when treated with HEEC (200 and 400 mg/kg). So the increased activity in the Y-maze may be mainly due to memory facilitation by the HEEC, resulting in increased searching in the maze and the better spatial memory that we observed. The novel object recognition test measures the natural propensity of a rat to explore a novel versus familiar object. Our results showed that exploration of the novel object was reduced in animals exposed to intraperitoneal injection of scopolamine. This task has both an exploratory behavior component as well as a memory retention component, such that an animal must have sufficiently explored the familiar object during the pretest phase in order to distinguish between it and a novel object later during the test phase [38]. In this study, scopolamine-treated animals exhibited less total exploration time during pre-training than normal animals. This is similar to previous findings in other behavioral paradigms showing that scopolamine-treated adult animals exhibit decreased exploratory behavior and a significant reduction in the discrimination index in the NOR task. After the pre-treatment with HEEC, two-way analysis of variance followed by Bonferroni post test revealed a significant effect of the treatment on the total amount of time spent exploring the novel objects as well as on the discrimination index compared to the scopolamine group. This indicates that the extract sustains memory formation in the scopolamine-induced dementia rat model. Cholinergic transmission generally ends with acetylcholine hydrolysis by the enzyme AChE, which is responsible for degradation of acetylcholine to acetate and choline at the level of the synaptic cleft. In this way, AD has also been correlated with the loss of cholinergic neurons and decreases in the levels of acetylcholine (ACh) and choline acetyltransferase (ChAT) [39]. Lesions in these pathways result in decreased ACh release and thus cause learning and memory dysfunction. Until now, inhibition of acetylcholinesterase (AChE) has served as a strategy for the treatment of AD, senile dementia, ataxia, and Parkinson's disease [40, 41]. Thus, estimating the acetylcholinesterase activity can provide valuable information on cholinergic function, which can correlate with cognitive function. In that way, AChE inhibitors play an important role in nervous system disorders, owing to their potential as pharmacological and toxicological agents. Recently, some AChE inhibitors like tacrine and rivastigmine were used in the treatment of Alzheimer's disease [9]. Our results clearly indicated that the administration of scopolamine to experimental animals was followed by a significant increase in the AChE activity in the brain, especially in normal rats. The administration of the plant extract for two weeks decreased acetylcholinesterase activity in brain homogenates of rats subjected to scopolamine memory impairment. As a result, there was an increase of ACh expression in the brain and hence a triggering of cholinergic firing. This may be one of the mechanisms explaining the ability of the HEEC to enhance memory and cognition in this study.
In present study, HEEC significantly decreased the AChE levels in the rat whole brain homogenate, indicating its potential in the attenuations of severity of AD. However, an absolute conclusion can't be made on the AChE inhibitory effect of this extract since the effect of extract alone on AChE activity was not measured in this study. In the scopolamine-induced AD animal model, the well-replicated amnesic effect of scopolamine has been identified as a principal consequence of the blockade of post-synaptic muscarinic M1 transmission, leading to disruption in the functioning of the hippocampus in working memory [36]. Other suggested mechanism is the increased oxidative stress in brains by scopolamine, inducing the activation of a cascade of redox-sensitive cell signal pathways [42]. Brain cells are known to contain a very high percentage of long chain polyunsaturated fatty acids. ROS are continuously generated in the nervous system during normal metabolism and normal neural activity. The brain is regularly subject to free radical-induced lipid peroxidation. It is also known that the protective system in the brain is poor against oxidative stress, compared to other tissues [43]. In this way, the estimation of the lipid peroxidation end product is commonly used as the biomarkers for in vivo lipid peroxidation in neurodegeneration research [44]. The results of antioxidant study showed decreased levels of SOD, CAT, and GSH in the scopolamine-induced dementia rat compared to normal rats. The protective system of the organism against reactive oxygen species is very complex, including enzymatic and non enzymatic biological process. These include SOD, that catalyses the conversion of superoxide radicals to hydrogen peroxide, which is then converted into water by GPX and catalase [45]. After 8 and 14 days pretreatment of the scopolamine-induced dementia rat with the hydroalcoholic extract of E. coccinae (200 and 400 mg.kg), there was a significant improvement of the levels of SOD, CAT and GSH whereas MDA level was decreased significantly. These results suggest that the plant extract has an in vivo antioxidant activity and is capable of ameliorating the effect of ROS in the brain of rats. In a previous study, we reported that the methanolic extract of E. coccinae asper leaves improved spatial memory in rat through a mechanism that involved antioxidant and neuroprotective activities. Furthermore, we observed that this extract was able to reduce iron (Fe3+) into Fe2+ using and scavenged 2, 4-dinitrophenyl-1-picryl hydrazyl (DPPH) radical in vitro [16]. This give us more confidence to conclude that the attenuation of the scopolamine neurotoxicity by the hydroalcoholic extract of E. coccinae in the present study may be due to the antioxidant propensity of this extract in the brain tissue. Despite these interesting results of the neuroprotective activity of HEEC, limitations of this work are that the AChE activity of the extract and histopathological studies of frontal lobes and hippocampus were not measured. This will be the main goal of our upcoming investigations. The present study demonstrates the beneficial effect of the hydroalcoholic extract of E. coccinae on scopolamine-induced cognitive impairment. In this way, E. coccinae protected from memory deficiency induced by scopolamine, as shown by behavioral tests using the Y-maze and Novel object recognition tests. Also, based on the results of the biochemical studies, it can be concluded that E. 
coccinae is an antioxidant; suggesting that it is possible the protective effects were related to its antioxidant effects. Although our results indicate that extract can ameliorate scopolamine induced AChE activity increment, we cannot say with great confidence that the extract has inhibitory effects on the enzyme because the effect of extract alone on AChE activity was not measured. Moreover, a significant positive correlation between SOD vs. spontaneous alternation and between CAT vs. spontaneous alternation was found when linear regression was performed. However, supplementary pharmacological and phytochemical investigations are necessary to clarify the molecular mechanisms of neuroprotection for these herbal extracts, as well as the principal bio-ingredient. Foyet Harquin Simplice was supported by TWAS grant N°: 12–132 RG/BIO/AF/AC_G; UNESCO FR: 12–132 RG/BIO/AF/AC_G of August 2013. The authors would also like to show their gratitude to the reviewers of this paper which significantly improved the value of the presented data by adding very important insights, comments, and suggestions. We would like to thank our colleague, Dr. Walther Bild, from "Gr. T. Popa" University of Medicine and Pharmacy, Iasi, which was kindly enough to make the English corrections on our manuscript. Competing interest The authors declare that they have no competing interest. FHS designed the experiments. FHS, NAHH and WE carried out the study; FHS, NAHH and AC wrote the manuscript; FHS and EAA supervised the work. All authors read and approved the final manuscript. Department of Biological Sciences, Faculty of Science, University of Maroua, Cameroon. P.O. Box: 814, Maroua, Cameroon Department of Life and Earth Sciences, Higher Teachers' Training College, University of Maroua, P.O. Box: 55 Maroua, Cameroon Department of Agriculture, Cattle farming and Derived products, High Institute of the Sahel, University of Maroua, P.O. Box: 46 Maroua, Cameroon Department of Biomedical Science, Faculty of Health Sciences, University of Buea, P.O. Box 63 Buea, Cameroon Alexandru Ioan Cuza University, 11 Carol I Blvd., 700506 Iasi, Romania Center of Biomedical Research of the Romanian Academy, Iasi Branch, Romania Kaplan DB, Berkman B. 2011. Dementia Care: A global concern and social work challenge. International Social Work. 54(3):361–73.Google Scholar Franchis P, Palmer A, Snape M, Wilcock G. The cholinergic hypothesis of Alzheimer's disease: a review of progress. J Neurol Neurosurg Psychiatry. 1999;66:137–47.View ArticleGoogle Scholar Yang MH, Yoon KD, Chin Y, Park JH, Kim SH, Kim YC, et al. Neuroprotective effects of Dioscorea opposita on scopolamine-induced memory impairment in in vivo behavioral tests and in vitro assays. J Ethnopharmacol. 2009;121:130–4.View ArticlePubMedGoogle Scholar Zhu X, Raina AK, Lee H-G, Casadesus G, Smith MA, Perry G. Oxidative stress signaling in Alzheimer's disease. Brain Res. 2004;1000:32–9.View ArticlePubMedGoogle Scholar Omar RA, Chyan Y-J, Andorn AC, Poeggeler B, Robakis NK, Pappolla MA. Increased expression but reduced activity of antioxidant enzymes in Alzheimer's disease. J Alzheimer's Dis. 1999;1(3):139–45.Google Scholar Zhao Y, Zhao B: Oxidative Stress and the Pathogenesis of Alzheimer's disease. Hindawi Publishing Corporation, Oxid Med Cell Longev. Volume 2013, Article ID 31652 3,10 pages. doi.org/10.1155/2013/316523. Padurariu M, Ciobica A, Hritcu L, Stoica B, Bild W, Stefanescu C. 
Changes of some oxidative stress markers in the serum of patients with mild cognitive impairment and Alzheimer's disease. Neurosci Lett. 2010;469:6–10.View ArticlePubMedGoogle Scholar Bores GM, Huger FP, Petko W, Mutlib AE, Camacho F, Rush DK, et al. Pharmacological evaluation of novel Alzheimer's disease therapeutics: acetylcholinesterase inhibitors related to galanthamine. J Pharmacol Exp Ther. 1996;277(2):728–38.PubMedGoogle Scholar Gauthier S, Scheltens P. Can we do better in developing new drugs for Alzheimer's disease? Alzheimers Dement. 2009;5(6):489–91. doi:10.1016/j.jalz.2009.09.002.View ArticlePubMedGoogle Scholar Birks J, Evans GJ, Iakovidou V, Tsolaki M, Holt FE. Rivastigmine for Alzheimer's disease. Cochrane Syst Rev. 2009;23:176–89.Google Scholar Madhusudan J, Kavita G, Sneha M, Sneha S. Pharmacological investigation of Areca catechu extracts for evaluation of learning, memory and behavior in rats. Int Cur Pharm J. 2012;1(6):128–13.Google Scholar Amir F, Golbarg G, Samireh F, Peyman MK. Effects of Boswellia Papyrifera Gum Extract on Learning and Memory in Mice and Rats. Ir J Basic Med Sci. 2010;13(2):9–15.Google Scholar Muhammad A, Ali AS. Neuroprotective Effect of Ethanol Extract of Leaves of Malva parviflora against Amyloid-β- (Aβ-) Mediated Alzheimer's Disease. Int Sch Res Notices. 2014;156976:156976. doi:10.1155/2014/156976.Google Scholar Richetti SK, Blank M, Capiotti KM, Piato AL, Bogo MR, Vianna MR, et al. Quercetin and rutin prevent scopolamine-induced memory impairment in zebrafish. Behav Brain Res. 2011;217:10–5.View ArticlePubMedGoogle Scholar Mahmood R, Arif M, Muhammad SQ et al.: Recent Updates in the Treatment of Neurodegenerative Disorders Using Natural Compounds. Evidence-Based CAM 2014, Article ID 979730, 7 pages, 2014. doi:10.1155/2014/979730. Foyet HS, Abdou BA, Ngatanko AHH, Manyi FL, Manyo NA, Shu Nyenti PN, et al. Neuroprotective and memory improvement effects of a standardized extract of Emilia coccinea (SIMS) G. on animal models of anxiety and depression. J Pharmacog Phytochem. 2014;3(3):146–54.Google Scholar Kizilbash AH, Vanderploeg RD, Curtiss G. The effects of depression and anxiety on memory performance. Archiv Clin Neuropsychol. 2002;17(1):57–67.View ArticleGoogle Scholar Bosch CH. Emilia coccinea (Sims) G.Don. In: Grubben GJH, Denton OA, editors. PROTA 2: Vegetables/Légumes. [CD-Rom]. Wageningen, Netherlands: PROTA; 2004.Google Scholar Agoha RC. Medicinal plants of Nigeria. Netherlands: Offsetdikker Jifaculteit waskunden, Natnurwenten schopp, pen; 1981. p. 22–158.Google Scholar Burkill HM. The useful plants of West Tropical Africa vol 3. Families J-L. 1984, Royal Botanical Garden kew pp. 522.Google Scholar Haque A, Zaman A, Tahmina, Hossain M, Sarker I, Islam S, Al Baki A: Evaluation of Analgesic, Anti-inflammatory and CNS Depressant Potential of Dendrophthoe falcata (Linn.) Leaves Extracts in Experimental Mice Model. Am J Biomed Sci 2014, 6(3), 139–156; doi: 10.5099/aj140300139.Google Scholar Puchchakayala G, Akina S, Thati M. Neuroprotective Effects of Meloxicam and Selegiline in Scopolamine-Induced Cognitive Impairment and Oxidative Stress. Hindawi Publishing Corporation. Int J Alzheimer's Dis. 2012;974013:8. doi:10.1155/2012/974013.Google Scholar Sharma D, Puri M, Tiwary AK, Singh N, Jaggi AS. Antiamnesic effect of stevioside in scopolamine-treated rats. Indian J Pharmacol. 2010;42(3):164–7.View ArticlePubMedPubMed CentralGoogle Scholar Hritcu L, Cioanca O, Hancianu M. 
Effects of lavender oil inhalation on improving scopolamine-induced spatial memory impairment in laboratory rats. Phytomed. 2012;19:529–34.View ArticleGoogle Scholar Siedlak SL, Casadesus G, Webber KM, Pappolla MA, Atwood CS, Smith MA, et al. Chronic antioxidant therapy reduces oxidative stress in a mouse model of Alzheimer's disease. Free Radic Res. 2009;43(2):156–64.View ArticlePubMedPubMed CentralGoogle Scholar Hernández-Chan NG, Góngora-Alfaro JL, Álvarez-Cervera FJ, Solís-Rodríguez FA, Heredia-López FJ, Arankowsky-Sandoval G. Quinolinic acid lesions of the pedunculopontine nucleus impair sleep architecture, but not locomotion, exploration, emotionality or working memory in the rat. Behav Brain Res. 2011;225(2):482–90.View ArticlePubMedGoogle Scholar El-marasy SA, El-Shenawy SM, El-Khatib AS, El-Shabrawz OA, Kenawy SA. Effect of Nigella sativa and wheat germ oils on scopolamine-induced memory impairment in rat. Bull Fac Pharm Cairo Univ. 2012;50:81–8.View ArticleGoogle Scholar Winterbourn C, Hawkins R, Brian M, Carrell R. The estimation of red cell superoxide dismutase activity. J Lab Clin Med. 1975;85(2):337–41.PubMedGoogle Scholar Shamnas M, Ratendra R, Teotia UVS. Neuroprotective activity of methanol extract of Salvia officinalis flowers in dementia related to Alzheimer disease. Der Pharm Sinica. 2014;5(2):29–38.Google Scholar Hritcu L, Foyet HS, Stefan M, Mihasan M, Asongalem AE, Kamtchouing P. Neuroprotective effect of the methanolic extract of Hibiscus asper leaves in 6-hydroxydopamine-lesioned rat model of Parkinson's disease. J Ethnopharmacol. 2011;137:585–91.View ArticlePubMedGoogle Scholar Hritcu L, Ciobica A, Stefan M, Mihasan M, Palamiuc L, Nabeshima T. Spatial memory deficits and oxidative stress damage following exposure to lipopolysaccharide in a rodent model of Parkinson's disease. Neurosci Res. 2011;71:35–43.View ArticlePubMedGoogle Scholar Naskar S, Islam A, Mazumder UK, Saha P, Haldar PK, Gupta M. In Vitro and In Vivo antioxidant potential of hydromethanolic extract of Phoenix dactylifera fruits. J Sci Res. 2010;2(1):144–57.Google Scholar Stepankova S, Vranova M, Zdrazilova P, Komers K, Komersova A, Cegan A. Two new methods monitoring kinetics of hydrolysis of acetylcholine and acetylthiocholine. Zeitschrift fuer Naturforschung Section C-A J. Biosc. Biosc. 2005;60:943–6.Google Scholar Micheau J, Marighetto A. Acetylcholine and memory: A long, complex and chaotic but still living relationship. Behav Brain Res. 2011;221:424–9.View ArticlePubMedGoogle Scholar Souza AC, Bruning CA, Acker CI, Neto JS, Nogueira CW. 2-Phenylethynyl -butyltellurium enhances learning and memory impaired by scopolamine in mice. Behav Pharmacol. 2013;24:249–54.View ArticlePubMedGoogle Scholar Lee B, Shim I, Lee H, Hahm D. Rehmannia glutinosa ameliorates scopolamine-induced learning and memory impairment in rats. J Microbiol Biotechnol. 2011;21:874–83.View ArticlePubMedGoogle Scholar Klinkenberg I, Blokland A. The validity of scopolamine as a pharmacological model for cognitive impairment: a review of animal behavioral studies. Neurosci Biobehav Rev. 2010;34:1307–50.View ArticlePubMedGoogle Scholar Soellner DE, Grandys T, Joseph L, Nunez JL. Chronic prenatal caffeine exposure impairs novel object recognition and radial arm maze behaviors in adult rats. Behav Brain Res. 2009;205:191–9.View ArticlePubMedPubMed CentralGoogle Scholar Ling FA, Hui DZ, Ji SM. Protective effect of recombinant human somatotropin on amyloid beta-peptide induced learning and memory deficits in mice. Growth Horm IGF Res. 
2007;17:336–41.View ArticlePubMedGoogle Scholar Kim DH, Yoon BH, Kim YW, Lee S, Shin BY, Jung JW, et al. The Seed Extract of Cassia obtusifolia ameliorates learning and memory impairments induced by scopolamine or transient cerebral hypoperfusion in mice. J Pharmacol Sci. 2007;105:82–93.View ArticlePubMedGoogle Scholar Shekhar C, Kumar S. Kinetics of butyrylcholinesterase inhibition by an ethanolic extract of Shorea robusta. Int J Pharma Sc Res. 2014;5(8):480–3.Google Scholar Fan Y, Hu J, Li J. Effect of acidic oligosaccharide sugar chain on scopolamine-induced memory impairment in rats and its related mechanisms. Neurosci Lett. 2005;374(3):222–6.View ArticlePubMedGoogle Scholar Jeong JH, Kim HJ, Park SK, Jin DE, Kwon OJ, Kim HJ. An investigation into the ameliorating effect of black soybean extract on learning and memory impairment with assessment of neuroprotective effects. BMC Complement Altern Med. 2014;14:482.View ArticlePubMedPubMed CentralGoogle Scholar Denise G, Lucas SM, Juliana V, Clóvis P, Gabriela S. Importance of the lipid peroxidation biomarkers and methodological aspects for malondialdehyde quantification. Quim Nova. 2009;32(1):169–74.View ArticleGoogle Scholar Abreu IA, Cabelli DE. Superoxide dismutases- a review of the metal-associated mechanistic variations. Biochim Biophys Acta. 2010;1804(2):263–74.View ArticlePubMedGoogle Scholar
CommonCrawl
Annual reference points

The calculation of reference points is impacted by inter-annual variability in life-history and fishing parameters (i.e., selectivity pattern). For example, if there is large inter-annual variability in natural mortality or growth, MSY and $\text{SB}_{\text{MSY}}$ may vary significantly between years.

Stock-recruit relationship
In a system where the productivity is highly variable, it is important to have something constant in order to anchor the description of the system dynamics. In openMSE, the stock-recruit relationship is constant through both the historical and projection period of the operating model. Using a Beverton-Holt relationship, the age-0 recruitment $R_y$ predicted from spawning biomass $SB_y$ in year $y$ is $$ R_y = \dfrac{\alpha SB_y}{1 + \beta SB_y} $$ $$ \alpha = \dfrac{4h^{\textrm{SR}}}{(1-h^{\textrm{SR}}) \phi^{\textrm{SR}}_0} $$ $$ \beta = \dfrac{1}{R_0^{\textrm{SR}}}\left(\alpha - \dfrac{1}{\phi^{\textrm{SR}}_0}\right) $$ For a Ricker function, recruitment is $$R_y = \alpha SB_y \times \exp(-\beta SB_y)$$ where $$\alpha = \dfrac{(5h^{\textrm{SR}})^{1.25}}{\phi^{\textrm{SR}}_0}$$ $$\beta = \dfrac{\log(\alpha\phi^{\textrm{SR}}_0)}{R_0^{\textrm{SR}}\phi^{\textrm{SR}}_0}$$ Parameters $\alpha$ and $\beta$ are specified via the unfished recruitment parameter $R_0^{\textrm{SR}}$, steepness $h^{\textrm{SR}}$, and unfished spawners per recruit $\phi_0^{\textrm{SR}}$. The superscript $\textrm{SR}$ explicitly denotes that these parameters are used for calculating $\alpha$ and $\beta$. Parameters $R_0^{\textrm{SR}}$ and $h^{\textrm{SR}}$ are specified in Stock@R0 and Stock@h, respectively, while $\phi_0^{\textrm{SR}}$ is the mean unfished spawners per recruit over the first generation ($A_{50}$ years): $$ \phi_0^{\textrm{SR}} = \dfrac{\sum_{y=1}^{A_{50}} \phi_{0(y)}}{A_{50}} $$ With constant biological parameters, $\phi_{0(y)}$ is constant over all years. With time-varying parameters, annual reference points describe the asymptotic values if the population were projected in perpetuity with $\phi_{0(y)}$, $\alpha$, $\beta$. This section describes the various annual reference points, all reported in Hist@Ref$ByYear and MSE@RefPoint$ByYear, and provides a simple example of the change in direction of reference points when there is a change in natural mortality.

Reference points using the stock-recruit relationship
Annual values of unfished reference points, including $N_{0(y)}$, $SN_{0(y)}$, $B_{0(y)}$, $SB_{0(y)}$, $VB_{0(y)}$, and $R_{0(y)}$, and steepness $h_y$ are calculated based on the intersection of the stock-recruit relationship and the recruits-per-spawner line in year $y$, i.e., $1/\phi_{0(y)}$. Note that $R_{0(y)}$ here is the asymptotic unfished recruitment if fishing were zero. It would be helpful to consider $R_0^{\textrm{SR}}$ more as a parameter for calculating $\alpha$ and $\beta$, and separate this value from the dynamics implied from a change in $\phi_0$. Similarly, $h_y$ is time-varying as well, and Stock@h is used for calculating $\alpha$.
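As a rough numerical illustration of the parameterisation described above, the following sketch computes $\alpha$ and $\beta$ from $R_0^{\textrm{SR}}$, $h^{\textrm{SR}}$ and $\phi_0^{\textrm{SR}}$ and checks that the Beverton-Holt curve returns $R_0^{\textrm{SR}}$ at the unfished equilibrium. The parameter values are hypothetical and this is illustrative Python, not openMSE code.

```python
import math

# Beverton-Holt alpha/beta from R0^SR, h^SR and phi0^SR (illustrative values).
def bev_holt_pars(R0, h, phi0):
    alpha = 4.0 * h / ((1.0 - h) * phi0)
    beta = (alpha - 1.0 / phi0) / R0
    return alpha, beta

def bev_holt_recruitment(SB, alpha, beta):
    return alpha * SB / (1.0 + beta * SB)

# Ricker parameterisation from the same inputs, for comparison.
def ricker_pars(R0, h, phi0):
    alpha = (5.0 * h) ** 1.25 / phi0
    beta = math.log(alpha * phi0) / (R0 * phi0)
    return alpha, beta

R0, h, phi0 = 1000.0, 0.7, 5.0          # hypothetical R0^SR, h^SR, phi0^SR
a, b = bev_holt_pars(R0, h, phi0)
SB0 = R0 * phi0                         # unfished spawning biomass implied by R0^SR
print(bev_holt_recruitment(SB0, a, b))  # ~= 1000, i.e. R0 at the unfished equilibrium
```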
$R_{0(y)} = SB_{0(y)}/\phi_{0(y)}$ where, for the Beverton-Holt function: $$ SB_{0(y)} = \dfrac{\alpha \phi_{0(y)} - 1}{\beta}$$ $$ h_y = \dfrac{\alpha \phi_{0(y)}}{4 + \alpha \phi_{0(y)}}$$ and, for the Ricker function: $$ SB_{0(y)} = \dfrac{\log(\alpha \phi_{0(y)})}{\beta} $$ $$ h_y = 0.2 (\alpha \phi_{0(y)})^{0.8}$$ Annual values of MSY reference points, including $\textrm{MSY}_y$, $F_y^{\text{MSY}}$, $SB_y^{\text{MSY}}$, $B_y^{\text{MSY}}$, and $VB_y^{\text{MSY}}$, are calculated by maximizing the yield curve. The annual spawning potential ratio at which the stock crashes is $$ SPR_{\textrm{crash}(y)} = (\alpha\phi_{0(y)})^{-1} $$ and the corresponding fishing mortality $F_{\textrm{crash}(y)}$ is the value that produces $SPR_{\textrm{crash}(y)}$. If natural mortality were to increase, asymptotic unfished and MSY reference points decrease, while $SPR_{\textrm{crash}(y)}$ increases. The stock can crash in the absence of fishing if $1/\phi_{0(y)} > \alpha$, in which case unfished and MSY reference points, as well as $F_{\textrm{crash}(y)}$, go to zero, and $SPR_{\textrm{crash}(y)} = 1$. It is interesting to consider whether a constant stock-recruit relationship would be appropriate if the stock is heading towards a crash in the absence of fishing. After all, shouldn't the stock evolve to avoid extinction? It may depend on how sudden and how intense the factors that decrease $\phi_{0(y)}$ or increase fishing mortality come about, and whether there is enough time, in terms of generations, for the stock to respond. An example of resilience would be a decrease in the age of maturity over time that increases $\phi_{0(y)}$. Ultimately, the assumptions behind the dynamics of the operating model need to be clearly stated, and alternative projections in the absence of fishing may need to be explored in light of these extreme scenarios. Operating models are a simplification of the real-life system dynamics used to help make recommendations about how to manage the stock, and the performance of such methods should be compared relative to no-fishing scenarios.

Per-recruit reference points
Yield-per-recruit reference points $F_{\textrm{0.1}}$ and $F_{\textrm{max}}$ and spawning potential ratio reference points ($F_{\textrm{20%}}$, $F_{\textrm{25%}}$, …, $F_{\textrm{60%}}$) are solely calculated from $\phi_{0(y)}$. These values increase if M increases.

Interpretation of annual reference points
Interpretation of annual reference points is probably beyond the scope of this manual. When there is time-varying productivity, interpretation can be quite nebulous and will frequently be case-specific. It may be more helpful to describe the system in terms of regimes, with one set of reference points pertaining to each regime.
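A minimal sketch of these annual calculations, assuming a fixed Beverton-Holt $\alpha$ and $\beta$ and a few hypothetical $\phi_{0(y)}$ values (again, illustrative Python rather than openMSE code):

```python
# Annual unfished reference points and SPR_crash implied by a fixed
# Beverton-Holt alpha/beta and a time-varying phi0(y).  All numbers are
# hypothetical; this is illustrative Python, not openMSE code.

def annual_refs_bev_holt(alpha, beta, phi0_y):
    if alpha * phi0_y <= 1.0:          # stock crashes even without fishing
        return {"SB0": 0.0, "R0": 0.0, "h": None, "SPR_crash": 1.0}
    SB0 = (alpha * phi0_y - 1.0) / beta
    return {"SB0": round(SB0, 1),
            "R0": round(SB0 / phi0_y, 1),
            "h": round(alpha * phi0_y / (4.0 + alpha * phi0_y), 3),
            "SPR_crash": round(1.0 / (alpha * phi0_y), 3)}

R0_SR, h_SR, phi0_SR = 1000.0, 0.7, 5.0          # hypothetical SR parameters
alpha = 4.0 * h_SR / ((1.0 - h_SR) * phi0_SR)
beta = (alpha - 1.0 / phi0_SR) / R0_SR

for phi0_y in (5.0, 4.0, 3.0):                   # e.g., phi0(y) falling as M rises
    print(phi0_y, annual_refs_bev_holt(alpha, beta, phi0_y))
# As phi0(y) decreases, SB0(y), R0(y) and h(y) all decrease while SPR_crash(y) rises,
# matching the direction of change described above.
```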
CommonCrawl
Row of a Matrix

A horizontal line of elements in a matrix is called a row of the matrix. The space inside a matrix is divided into a number of rows so that elements are placed equally in each row. Every two elements in a row are separated by equal space. Similarly, an equal space between every two rows in a matrix is maintained to avoid mixing the elements of adjacent rows.

$\begin{bmatrix} -4 & 1 & 0 & 9 \\ 7 & 3 & -8 & 5\\ 2 & -5 & 1 & 0\\ \end{bmatrix}$

The horizontal line of elements $-4,$ $1,$ $0$ and $9$ is called the first row of the matrix. The horizontal line of elements $7,$ $3,$ $-8$ and $5$ is called the second row of the matrix. The horizontal line of elements $2,$ $-5,$ $1$ and $0$ is called the third row of the matrix.
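As an illustration, the same matrix can be written as a nested array, with each inner list holding one row (a small NumPy sketch):

```python
import numpy as np

# The example matrix from above; each inner list is one row.
A = np.array([[-4,  1,  0, 9],
              [ 7,  3, -8, 5],
              [ 2, -5,  1, 0]])

print(A[0])        # first row:  [-4  1  0  9]
print(A[1])        # second row: [ 7  3 -8  5]
print(A[2])        # third row:  [ 2 -5  1  0]
print(A.shape[0])  # number of rows: 3
```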
CommonCrawl
Microbial functional genes enriched in the Xiangjiang River sediments with heavy metal contamination

Shiqi Jie, Mingming Li, Min Gan, Jianyu Zhu (ORCID: orcid.org/0000-0001-5245-0845), Huaqun Yin & Xueduan Liu

The Xiangjiang River (Hunan, China) has been contaminated with heavy metals for several decades by surrounding factories. However, little is known about the influence of a gradient of heavy metal contamination on the diversity and structure of microbial functional genes in sediment. To deeply understand the impact of heavy metal contamination on the microbial community, a comprehensive functional gene array (GeoChip 5.0) has been used to study the functional gene structure, composition, diversity and metabolic potential of the microbial community from three heavy metal polluted sites of the Xiangjiang River. A total of 25595 functional genes involved in different biogeochemical processes were detected in the three sites, and different diversities and structures of microbial functional genes were observed. The analysis of gene overlapping, unique genes, and various diversity indices indicated a significant correlation between the level of heavy metal contamination and the functional diversity. Plentiful resistance genes related to various metals, such as copper, arsenic, chromium and mercury, were detected. The results indicated a significantly higher abundance of genes involved in metal resistance, including sulfate reduction genes (dsr), in the studied site with the most serious heavy metal contamination, such as the cueo, mer, metc, merb, tehb and terc genes. With regard to the relationship between the environmental variables and microbial functional structure, S, Cu, Cd, Hg and Cr were the dominating factors shaping the microbial distribution pattern in the three sites. This study suggests that a high level of heavy metal contamination resulted in higher functional diversity and abundance of metal resistance genes. These variations therefore significantly contribute to the resistance, resilience and stability of the microbial community subjected to the gradient of heavy metal contamination in the Xiangjiang River.

The Xiangjiang River, originating from Guangxi province, is the largest river in Hunan province, covering an area of 94,660 km2 and occupying 90.2 % of the total area of Hunan province [1]. Called "the mother river of Hunan", it takes charge of agricultural irrigation, fishery breeding, navigation, receiving pollution and supplying drinking water for residents [2]. However, numerous ores used for mining, mineral processing, and smelting of non-ferrous and rare metals are located in the Xiangjiang valley [3], giving rise to serious heavy metal contamination (e.g., Cd, Cu, Zn, Pb, Hg) [4] in the river and an enormous health menace to approximately 70 million people. Heavy metals are considered to be among the greatest threats to aquatic ecosystems due to their high biotoxicity, persistence and bio-enrichment ability in the food chain [5]. Heavy metals like Hg, Pb, As, Cd and Cr, which are defined as the primary toxic metals to biology, generally accumulate in soils and waters, causing serious diseases or even death to organisms. Plenty of studies involving the effect of metal contamination on microbial communities in aquatic ecosystems have been conducted. At present, these studies have focused on the biomass and phylogenetic analysis of the microbial response to heavy metal contamination.
For example, the microbial communities in two sediment samples with different metal concentrations were clearly different, with bacteria of the taxa Acidobacterium (18 %), Acidomicrobineae (14 %), and Leptospirillum (10 %) dominating the slightly polluted sediment and Methylobacterium (79 %) and Ralstonia (19 %) dominating the heavily polluted sediment [6]; the latter two are well-known metal-resistant bacteria. A previous study revealed that high pore-water Zn and As concentrations reduced microbial biomass [7], while elevated sulfate reduction rates [8] suggested that metal contamination may have a marked influence on microbial abundance and activity. However, Chodak et al. [9] found, using pyrosequencing, that the measured effect of high heavy metal contents on the soil microbial community was weak, and that the abundance of most phyla remained stable. In the face of various anthropogenic and climatic fluctuations, the biodiversity of an ecosystem is crucial for its functioning [10, 11]. It is widely accepted that an ecosystem with higher biodiversity achieves greater stability under disturbance [12, 13], because multiple communities living together can make efficient use of available resources and act together to maintain ecosystem functioning [14]. In terms of microbial biodiversity, previous studies at contaminated sites have focused on culturable microbes [15], bacterial abundance and evenness [16, 17], and sequencing of 16S rRNA genes [18–20]. These approaches have been applied successfully in studies of the effect of heavy metal contamination on the microbial community in Xiangjiang River sediment. In spite of this, little is known about its functional diversity and metabolic potential at the community level, and the relationship between the functional gene structure of microorganisms in heavy-metal-polluted Xiangjiang River sediment and the environmental factors remains obscure. Therefore, in order to comprehensively understand the functional gene diversity and the underlying mechanisms influencing microbial community structure and diversity, a more integrated characterization of the microbial community in this contaminated sediment is needed. In recent years, GeoChip-based metagenomics technology, which emerged as a high-throughput tool providing meaningful information on microbial community structure, composition and potential metabolic capacity, has been widely used to analyze microbial communities from various habitats. For instance, GeoChip 5 contains about 60,000 probes covering diverse gene categories of primary microbial metabolism, such as carbon, nitrogen, sulfur, and phosphorus cycling, metal homeostasis, organic remediation, secondary metabolism, and virulence [21]. In this study, GeoChip 5.0 was employed to address two key questions. (i) What are the functional gene diversities, structures and potential metabolic capacities of the microbial communities in Xiangjiang River sediments with a gradient of heavy metal contamination levels? (ii) How do the environmental variables impact the functional structure of the microbial community? To answer these questions, nine sediment samples from three sites in Xiawan Port of the Xiangjiang River (Zhuzhou city, Hunan province, China) were obtained.
Our results indicated that the microbial community of the studied heavy-metal-contaminated sediments had a large metabolic potential, and that the functional gene structures and diversities were shaped by the heavy metal pollution. Geochemical description of the study sites The different sampling locations led to different geochemical parameters at each site (see Fig. 1 and Table 1). There was no significant difference in pH value between the three sites. However, the samples were clearly heterogeneous in terms of sulfur content as well as metal concentrations, such as Cu (392 ~ 570 mg/kg), Pb (383 ~ 737 mg/kg), Zn (2840 ~ 6530 mg/kg), As (177 ~ 2480 mg/kg) and Cd (23 ~ 169 mg/kg). As shown in Table 1, the majority of the detected elements, including Cu, Pb, Zn, Cd, Hg, Cr and S, had their maximum concentrations at site A, which is located near the sewage outlet, and their minimum concentrations at site C, which is farthest from the outlet. This is probably because the sewage from the factory is diluted by the river water as the distance from the drain increases, lowering the concentrations of most metals. However, As and Ni showed different patterns: As had its highest concentration at site B, which is 100 m away from the outlet, and the concentration of Ni at site C was higher than that at site B. Site location and distribution of sampling points (Picture information of two parts at the top was obtained from National Geomatic Center of China. Picture information of the part at the bottom was obtained from Google, Astrium, Cnes/Spot Image and DigitalGlobe) Table 1 Geochemical properties of sampled sediments Overview of functional gene diversity To understand the microbial functional diversity and structure in detail, the number of detected genes, the genes overlapping between samples, the unique genes, and the diversity indices were measured. A total of 25595 genes were detected, and the number of detected genes in individual samples ranged from 19431 to 24485. As illustrated in Table 2, the differences in functional gene structure between samples from the same site were all less than 7 % by β-diversity calculation, showing high similarity (>90 %) among the subsamples. Nevertheless, there were clear discrepancies between samples from different sites. The differences ranged from 13.8 % to 14.7 % between site A and site B, from 13.8 % to 14.7 % between site A and site C, and from 18.1 % to 19.4 % between site B and site C. Hierarchical clustering (Fig. 2) showed that the samples obtained from the same site were grouped together, and that site A and site B formed a second group. These results suggested that the three sampling sites had distinct microbial functional gene structures, and that adjacent sites with more similar contamination levels shared more similar functional gene structures. Table 2 β-diversity of studied samples Hierarchical cluster analysis (based on Bray-Curtis distance) of functional genes in 9 studied samples from three sites named A, B, C. Every sample is named after its site, with "1", "2" or "3" indicating one of three replicate samples from each site As shown in Table 3, there was a high similarity (80.63 % ~ 89.64 %) of microbial community functional genes between all samples, indicating that contamination level did not significantly affect the overall functional gene diversity.
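The β-diversity and clustering analyses referred to above can be reproduced in outline with standard tools. The sketch below is illustrative only: the signal-intensity matrix is random stand-in data, and the original study used GeoChip intensities with the CLUSTER/TREEVIEW software rather than SciPy.

```python
# Pairwise Bray-Curtis dissimilarity between samples followed by
# average-linkage hierarchical clustering (toy data).
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.cluster.hierarchy import linkage, dendrogram

rng = np.random.default_rng(0)
samples = ["A1", "A2", "A3", "B1", "B2", "B3", "C1", "C2", "C3"]
# rows = samples, columns = functional gene probes (toy: 200 probes)
intensity = rng.random((9, 200))

# Pairwise Bray-Curtis dissimilarity (0 = identical, 1 = no overlap)
dist = pdist(intensity, metric="braycurtis")
print(squareform(dist).round(2))

# Hierarchical clustering on the condensed dissimilarity vector
tree = linkage(dist, method="average")
dendrogram(tree, labels=samples, no_plot=True)  # set no_plot=False to draw
```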
However, some differences among samples can be observed, such as samples from site A had the highest number of unique genes (2027; 8.16 %), while site C had the fewest (266; 1.31 %). In addition, Simpson's diversity index (1/D) which was usually used for evaluating the diversity in ecology was highest in site A and lowest in site C. Similar results were observed in the Shannon index (H') with the overall diversity in the following order: A > B > C. According to the above results, it was demonstrated that the place with a higher heavy metal contamination level would have more unique genes and higher microbial functional diversity. Table 3 Gene overlap, uniqueness, diversity indices, and detected gene number of studied samples Analysis of detected functional genes In these three sites, 87.28 % of the 393 functional gene included in GeoChip 5.0 was detected. The relative abundance of diverse functional gene categories were similar across all three sites (Fig. 3). Approximately 30 % of the detected probes were for genes involved in carbon degradation, another 22 % ~ 25 % were in organic remediation, about 13 % in nitrogen cycling, about 11 % in carbon fixation, 8 % ~ 10 % in sulfur cycling, 7 % ~ 8 % in metal homeostasis and few in phosphorus cycling, methane cycling, secondary metabolism and virulence. Among these, samples obtained from site A had slightly higher abundances of functional genes belonged to methane cycling, nitrogen cycling, sulfur cycling and metal homeostasis categories when compared with other sites. Further more, genes in organic remediation and virulence had the highest signal intensities at the site C. Relative richness of all functional gene group detected. The signal intensity for each functional gene category is the average of the total signal intensity from all replicates. All data are presented as mean ± SE Detailed analysis of key functional genes The studied sediment samples were derived from the river near the sewage outlet of a factory, leading to excessive amounts of heavy metal in sediment. In this regard, the functional genes involved in metal resistance which plays a crucial role in this ecosystem were particularly analyzed in this study. A total of 1793 ~ 1831, 1629 ~ 1595, and 1427 ~ 1379 genes involved in metal homeostasis were detected in three sites, respectively (see Additional file 1: Table S1). At the level of gene family, the normalized signal intensities of As, Hg and Te resistance genes were relatively higher among these metal (Fig. 4), for the biotoxicity of these metal are comparatively strong to microbe. In addition, it is evident that the relative abundances of Cu, Hg and Te resistance genes were highest in site A samples and lowest in site C samples. However, As and Cr resistance genes showed different conditions that there was significant difference of Cr resistance genes between three sites, and a high signal intensity of As resistance genes appeared in site B, but low intensities in site A and site C. This could be credited to the differences of metal concentrations between disparate studied sites showed in Table 1 that most of metals had high contents in site A samples, while site B had the highest concentration of As. In regard to Cr, the difference of Cr concentrations between three sites was small, and the Cr contamination level was relatively light compared with the reference criterion. 
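The α-diversity indices reported earlier in this section, Simpson's reciprocal index (1/D) and the Shannon index (H′), can be computed as in the following sketch. The abundance vector here is a toy example; the study derived abundances from normalized GeoChip signal intensities and computed the indices in R.

```python
# Minimal sketch of the diversity indices used in the study (toy data).
import numpy as np

abundance = np.array([120.0, 80.0, 40.0, 30.0, 10.0])  # toy gene abundances
p = abundance / abundance.sum()                         # relative abundances

shannon_H = -np.sum(p * np.log(p))             # Shannon-Weaver index H'
simpson_D = np.sum(p ** 2)                     # Simpson concentration D
simpson_reciprocal = 1.0 / simpson_D           # Simpson's reciprocal index 1/D
shannon_evenness = shannon_H / np.log(len(p))  # Shannon evenness J

print(round(shannon_H, 3), round(simpson_reciprocal, 3), round(shannon_evenness, 3))
```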
A Mantel test showed that the abundance of As resistance genes was positively correlated with the As concentration (rM = 0.3243, p = 0.018) (Table 5), and the similar condition were observed for other metals and their related genes abundances (Table 5). Relative abundance of detected metal resistance genes. The signal intensity for each functional gene category is the average of the total signal intensity from all replicates. All data are presented as mean ± SE Notably, twelve metal homeostasis genes showed differences between these three groups (Fig. 5) (p < 0.05), which belonged to 24 classes of Thermoprotei, Halobacteria, Methanomicrobia, Acidobacteria, Solibacteres, Actinobacteria, Aquificae, Bacteroidetes, Cytophagia, Flavobacteria, Sphingobacteriia, Chloroflexi, Ktedonobacteria, Deinococcus, Bacilli, Clostridia, Gemmatimonadetes, Nitrospira, Planctomycetacia, Alphaproteobacteria, Betaproteobacteria, Deltaproteobacteria, Gammaproteobacteria, Eurotiomycetes. Aoxb (arsenite oxidase), arra (arsenate respiratory reductase), arsc (arsenate reductase), arxa (aristaless related homeobox A), silicon transporter and silaffin gene were most abundant in site B. While the other six genes had the highest normalized signal intensities in site A, cueo (multicopper oxidase), mer (mercury resistance), metc(cystathionine beta-lyase), merb (organomercury lyase), tehb (tellurite resistance) and terc (tellurium detoxification). These results indicated that heavy metal contamination increased the abundance of most metal homeostasis genes, and the abundances of four genes involving As detoxification (axob, arra, arsc, arxa) had a positive correlation with the As concentration in sediment, so that the sedimentary microbiology possessing these genes can help their community adapt the heavy metal polluted environment. The normalized signal intensity of detected key genes involved in metal resistance. The signal intensity for each functional gene is the average of signal intensities from all the replicates. All data are presented as mean ± SE It was observed in Fig. 6a that a number of metal resistance genes were detected (1898 gene probes from all sites), which confering resistance to various metals, including Cu, Hg, Cr, As and Te. These genes were clustered by metal contamination level. In addition, the summation of the relative abundance of metal resistance genes was highest in site A (normalized signal intensity = 2321.20), then followed by site B (normalized signal intensity = 2230.11) and site C (normalized signal intensity = 2154.62) with significant difference (ANOVA, p < 0.05) (Fig. 6b). a Hierarchical cluster analysis (based on Bray-Curtis distance) of metal resistance genes based on hybridization signal intensities for all wells. Every sample is named after A plant with "1", "2" or "3" that indicates one of three replicate samples from each plant. b Relative abundance of all detected metal resistance genes. The signal intensity for each functional gene category is the average of the total signal intensity from all replicates. All data are presented as mean ± SE High concentration of sulfur was also observed in these sediment samples (Table 1). Sulfur metabolism is considered to be propitious to alleviate the biotoxicity of heavy metal imposed to microbe which plays an important role in heavy metal contaminated river. Many sulphate reducing bacteria (SRB) are capable of reducing various metal [22], so those genes coding for the dissimilatory sulfite reductase (dsr) were examined. 
GeoChip 5.0 contains dsrA and dsrB probes to analyze the potential of sulfur reduction and sulfate-reducing bacterial populations, which were employed in this study to account for the impact of heavy metal pollution on sediment microbial functional genes structure. DsrA and dsrB genes were detected in all samples (A: 889 ~ 905; B,:795 ~ 810; C:674 ~ 694; see Table S1). Hierarchical cluster analysis of all detected dsr genes showed that subsamples were grouped together, and site A and site B formed a second group (Fig. 7a). What's more, there was a significant difference in the relative abundance of dsr genes between three sites (ANOVA, p < 0.05) (Fig. 7b), and the site A with most serious heavy metal pollution had the highest signal intensity of dsr gene, which further supports the positive relationship between heavy metal contamination level and relative abundance of dsr gene. In addition, mantel test analysis showed a positive correlation between the S concentration and the abundance of dsrA and dsrB genes (rM = 0.5172, p = 0.007), and between metal concentrations and the abundance of dsr genes (rM = 0.6927, p = 0.013) (Table 5). a Hierarchical cluster analysis (based on Bray-Curtis distance) of dsr genes based on hybridization signal intensities for all wells. Every sample is named after A plant with "1", "2" or "3" that indicates one of three replicate samples from each plant. b Relative abundance of dsr genes. The signal intensity for each functional gene category is the average of the total signal intensity from all replicates. All data are presented as mean ± SE Relationship between environment factors and functional genes To discern the connection between environmental factors and the sediment microbial functional genes structure of Xiangjiang River sediment contaminated with heavy metal, a Mantel test was performed (Table 4). The results showed that the gradients of S, Cu, Cd, Hg and Cr were significantly correlated with the microbial community functional structure (p < 0.01), indicating that these metal were of pronounced importance in shaping the microbial functional genes structure of heavy metal contaminated sediment. Table 4 Mantel test of the relationship of whole microbial community functional structure to individual environmental variables Further, Mantel test was performed to examine the relationships between various functional gene groups and individual metal concentration. As shown in Table 5, positive correlations between the single metal concentration and the abundance of the corresponding resistant genes were found. For instance, among these metals, chromium concentration of sediments and chromium resistant genes had a most positively relevance (rM = 0.8295, p = 0.001), and arsenic resistant genes and arsenic concentration had the lowest correlation. All aforementioned results demonstrated that microbial community in heavy metal contaminated sediments and functional gene structures were largely shaped by the surrounding metals. Table 5 Mantel test of relationship of different gene categories to corresponding environmental variables Exploring the microbial community structure, including both the microbiological compositions and functional genes structure, is of pronounced importance to deeply uncover the intricate influence of environment on microorganism. 
In this study, we focused on the change of sediment microbial functional genes structure caused by heavy metal contamination in river, and found that the pollution level could bring about varying degrees of impact on the functional genes diversity. The data of GeoChip 5.0 demonstrated that the site with more severe heavy metal contamination had higher diversity and more functional genes involving metal resistance, such as cueo, mer, metc, merb, tehb and terc gene. More comprehensive view of the overall functional structure and metabolic potential of sediment microbial communities was provided. A mass of evidences suggest that microorganisms are far more sensitive to heavy metal stress than animals or plants growing on the same soils [23]. Although some kinds of metal (e.g., Fe, Cu, Zn) are essential element to microorganism which can ensure the normal growth and reproduction, high concentration of them would inhibit microbial metabolism or give rise to their death [24]. Numerous metal show a strong affinity to biological ligand, such as phosphoric acid, purine and pyrimidine, then prevent the synthesis of biomacromolecule like nucleic acid and protein. And some metal can bring damage to cytomembrane, destroying the transportation of nutrient [23]. Moreover, the impact of heavy metal on microbial community is also undisregardable, and a plenty of studies have been done to explore the mechanism of it [9, 25, 26]. The results of this study demonstrated that the microbial functional gene structure was correlated to the heavy metal contamination. As shown in Additional file 2: Figure S1, the overall functional genes of nine sediment samples from three sites were analyzed, and then samples from the same site were gathered together, while those from different sites were separated from each other. The conclusion was further supported by Fig. 3 which depicted the detailed distinctions of various gene categories among different sites, such as genes involved in methane cycling, nitrogen cycling, sulfur cycling, metal homeostasis and organic remediation. Previous study at Xiangjiang River suggested that heavy metal contamination have ecological impact on bacterial community composition and diversity [27]. As illustrated in one research, microbial community structure was highly diverse and heterogeneous in four studied sediment samples which are obtained from Xiangjiang River (Zhu zhou) with heavy metal contamination, and α-Proteobacteria was significantly increased with the increases in heavy metal. However, the moderately polluted sediment X sample had the greatest species diversity, which is different with our observation. Another previous study illustrated that heavy metals would decrease the diversity of functional genes [28], but in this research, the site with more heavy metal had higher diversity in comparison among three studied sites (Table 3). Our results comply with a similar study about heavy metal contamination in Montana, which showed that the diversity of microbial community in sediment was evaluated across a heavy metal contamination gradient [29]. It was supposed that long term contamination has made the microbes adapted to the polluted environments, and maintained their diversity by various of resistance mechanisms [30]. To date, it is generally accepted that biodiversity plays an important role in enhancing the ecosystem stability by temporal and spatial variability [31, 32], resistance against abiotic perturbations [33], and biotic invasions [34]. 
Different species in diverse communities respond differentially to the environmental perturbations, making ecosystem regularly function. While, the impact of increasing metal stress on microbial diversity depends on the initial state of the system [23]. As we all know, the specific environment with high concentration of heavy metal is beneficial to screen dominant bacteria which is capable of resisting metal toxicity. The reason is believed to be correlated to the long-term natural selection which will reserve the species with ability to adapt and survive, and weed out the ones lack in the capacity. From an agricultural field subjected to Cr contamination, Maqbool et al. [35] isolated and screened twenty bacterial which can resist Cr(VI). For another example, six strains showed high degree of metal resistances were selected by Kumar et al. [36] from the soil samples collected from fly ash contaminated region near National Thermal Power Plant which had high content of various heavy metal. However, there is no inevitable relationship between the high heavy metal concentration of the studied sediment samples and the increase of metal resistant bacteria. Even though the metal content shown in Table 1 were fairly high, the amount actually effect on microorganism may be much lower, leading to little selection pressure for resistant bacteria. "Total" metal concentrations in sediment are not a good indicator of the actual concentration in the sediment to which microorganisms are exposed [23]. Only when a plenty of resistant genes corresponding to metals were detected in river sediment can we deduced that the relative resistant genes were enhanced and impacted by the high level heavy metal contamination. In this study, the relative abundance of metal resistance genes from lesser polluted site (site C) was lower than that of site A with high contamination level (Figs. 4 and 6), and the majority of metal detoxication gene involved in the array were abundant in site A (Fig. 5), confirming the influence of high concentration metal on microbial metal resistant genes. Sulfate reducing bacteria (SRB) possess the capacity of reducing various heavy metal, such as Fe, Cu, As, Cd, Mn and others [37, 38]. SRB make use of the way of precipitation to decrease the metal availability to microorganism or alter the metal valence state and chemical speciation to lighten the metal biotoxicity, or generate metal sulfides to achieve the goal of detoxication [39, 40]. In the black amorphous sludge which was rich in sulfur, iron, aluminum, and acidity, Riefler et al. [41] found a large population of sulfur-reducing bacteria and took advantage of them to treat acid mine drainage with high concentration of diverse heavy metal. Dissimilatory sulfite reductase (dsr) is a key enzyme in SRB, which catalyses sulfite transform to sulfide with multi-steps of electronic transfer. The amount of dsr genes will influence the capacity of reducing toxic metal by SRB in river sediment [42]. In this study, higher concentration of sulfur was measured in site A (Table 1), which had the most serious heavy metal contamination, suggesting the strong ability of sulfate reduction [8]. Fig. 6 showed the relative abundance of dsr gene was positive correlated with the heavy metal contamination level, and mantel test indicated dsr genes was significantly correlated with S content (rM = 0.5172, p = 0.007) and positively correlated with metal concentration (rM = 0.6927, p = 0.013). 
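The Mantel statistics quoted above (rM and p) come from a permutation test of the correlation between two distance matrices. A minimal version can be sketched as follows, with random stand-in data in place of the study's gene-signal and metal-concentration matrices.

```python
# A simple one-tailed, permutation-based Mantel test (illustrative only).
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import pearsonr

def mantel(dist_x, dist_y, permutations=999, seed=0):
    """Correlate two condensed distance vectors; p-value by permutation."""
    rng = np.random.default_rng(seed)
    r_obs, _ = pearsonr(dist_x, dist_y)
    square_y = squareform(dist_y)
    n = square_y.shape[0]
    count = 0
    for _ in range(permutations):
        idx = rng.permutation(n)
        perm_y = squareform(square_y[np.ix_(idx, idx)], checks=False)
        r_perm, _ = pearsonr(dist_x, perm_y)
        if r_perm >= r_obs:
            count += 1
    return r_obs, (count + 1) / (permutations + 1)

rng = np.random.default_rng(1)
gene_signal = rng.random((9, 50))   # 9 samples x 50 gene probes (toy data)
metals = rng.random((9, 5))         # 9 samples x 5 metal concentrations (toy data)
r, p = mantel(pdist(gene_signal, "braycurtis"), pdist(metals, "euclidean"))
print(f"rM = {r:.4f}, p = {p:.3f}")
```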
However, these results can hardly reflect comprehensive information about metal resistance genes, since most metal homeostasis genes represented on GeoChip 5.0 are transporters, which are the most common mechanism of metal resistance in bacteria, so other mechanisms, including sequestration, reduction and lowering of nutrient metal influx, were neglected. Furthermore, suitable resistance gene probes for all heavy metals are not covered on the GeoChip; for example, resistance genes corresponding to Fe, Zn, Pb and Ni, which had high concentrations in the studied sediments, are not included. In addition, it is notable that some metal resistance genes can act on more than one kind of metal. For instance, the czc operon plays a crucial role in resisting the biotoxicity of the metals Zn, Cd and Co [43]. In summary, a complex functional structure of the microbial communities in heavy-metal-contaminated Xiangjiang River sediment was detected. Positive correlations were found between the level of metal pollution and the community functional diversity, and between the pollution level and the relative abundance of associated metal resistance genes. However, because of the limitations of DNA-level genomics, transcriptomic approaches will be a natural next step to illuminate the activity of microbial functional genes. Information from transcribed RNA will more directly reflect the expression of the functional genes that are impacted by environmental perturbation. In order to understand the effect of heavy metal contamination on the sediment microbial community comprehensively and in detail, more systematic, in-depth analyses are needed. GeoChip 5.0 was used to analyze the microbial functional diversity and gene structure in sediments of the Xiangjiang River contaminated with various heavy metals. The results showed that heavy metal contamination did not significantly impact the overall microbial functional structure, while sediment sampled from the site near the sewage outlet had more unique genes and a higher microbial functional diversity in comparison with the other two groups. The abundance of functional genes involved in metal resistance had a positive correlation with the level of heavy metal contamination. Notably, the relative abundance of the dsr genes coding for the dissimilatory sulfite reductase differed significantly between the three sites, supporting the effect of heavy metal contamination on the sediment microbial community. S, Cu, Cd, Hg and Cr were determined to be key factors shaping the microbial community structure. In summary, the marked linkages found between microbial metabolic potential and heavy metal contamination make it possible to better understand the mechanisms by which microorganisms adapt to environmental fluctuation. Future studies are needed to further investigate the direct response of the microbial community at the transcription and translation levels. Site description and sample collection The studied sediments were sampled from the Xiangjiang River (N 27.8554401, E 113.0786195, Zhuzhou city, Hunan province, China), which is a tributary of the Yangtze River. For this study, a total of nine sediment samples (~10 cm depth) were taken from three sites near a sewage outlet in the Xiangjiang River. At each site, three 1 m × 1 m plots were established with a distance of approximately 2 m between adjacent plots. Five to eight soil cores were collected and mixed equally to obtain one sample for each plot. The three sites are at different distances from the sewage outlet, leading to distinct pollution levels.
The A samples (A1, A2, A3) were collected near the sewage outlet, where the concentrations of heavy metals were expected to be highest; the B samples (B1, B2, B3) were taken from locations 100 m away from the sewage outlet; in the same direction, the C samples (C1, C2, C3) were obtained 200 m away from the sewage outlet. All samples were kept on ice after collection and then stored at −80 °C for further analysis. Sediment pH was measured with a PHS-3C pH meter (Leici, China) in a 1:2.5 suspension in water [44], and the concentrations of heavy metals, including Cu, Pb, Zn, As, Cd, Ni, Hg and Cr, in the sediments were measured by ICP-AES [45]. Microbial community DNA isolation and purification Given that high concentrations of divalent metal ions may result in premature DNA precipitation during extraction [46], the sediment samples were pre-washed with 40 mM EDTA (pH 7.2) [47]. The community DNA was extracted using the Soil DNA Kit (D5625-01; Omega Bioservices, Norcross, GA, USA) according to the manufacturer's instructions. DNA quality was then checked by the absorbance ratios A260/A280 and A260/A230 using a NanoDrop ND-1000 spectrophotometer (NanoDrop Technologies Inc., Wilmington, DE). Only when the A260/A280 ratio was larger than 1.7 and the A260/A230 ratio was more than 1.8 was the DNA used for further analysis. Purified DNA was stored at −80 °C for the subsequent DNA analysis. Microbial community DNA amplification, labeling, microarray hybridization, and scanning The amplification and hybridization of community DNA were performed at Glomics Inc. (Norman, Oklahoma, USA). Approximately 100 ng of DNA was amplified using the Templiphi kit (GE Healthcare, Piscataway, NJ, USA), with the addition of 0.1 μM spermidine and 260 ng·μl−1 single-stranded DNA binding protein to enhance the efficiency and reduce representational bias [48]. The amplified DNA was labeled with the fluorescent dye Cy3 (GE Healthcare) by random priming, purified with a QIAquick purification kit (Qiagen), and dried in a SpeedVac (45 °C, 45 min; ThermoSavant, Milford, MA, USA). Next, the processed DNA was resuspended in 27.5 μl of DNase/RNase-free distilled water and mixed with 42 μl hybridization buffer containing 1× aCGH blocking agent, 1× HI-RPM hybridization buffer, 10 pM universal standard DNA, 0.05 μg/μl Cot-1 DNA, and 10 % formamide, then incubated at 95 °C for 3 min and kept at 37 °C for 30 min. Hybridization was carried out on GeoChip 5.0 arrays (60 K) at 67 °C in an Agilent hybridization oven for 24 h. Subsequently, the slides were washed with Agilent Wash Buffers at room temperature. The arrays were then scanned with a NimbleGen MS200 Microarray Scanner (Roche NimbleGen, Inc., Madison, WI, USA) at 633 nm, with 100 % laser power and 75 % photomultiplier tube (PMT) gain [49]. Scanned images were quantified with ImaGene® version 6.0 (BioDiscovery, Inc., Los Angeles, CA, USA). The mean signal intensity was determined for each spot, and local background signals were automatically subtracted. Spots flagged as low quality by ImaGene or with a signal-to-noise ratio (SNR) of less than 2.0 were removed, and all poor, empty, and outlier spots were excluded from further analysis [50]. The hybridization signal was normalized by calculating the mean signal intensity across all genes on the arrays before subsequent analysis: after removing empty, poor and outlier spots, the across-array signal was normalized based on all intensities on the arrays, and a ratio was then calculated for each positive spot by dividing the signal intensity of the spot by the mean signal intensity to obtain the normalized ratio [51].
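As an illustration only, with toy numbers rather than the study's array data, the per-spot quality filter and mean-intensity normalization just described can be sketched as follows.

```python
# Illustrative sketch of the spot filtering and normalization steps (toy data).
import numpy as np

signal = np.array([1500.0, 90.0, 4000.0, 650.0, 2200.0])   # toy spot intensities
background = np.array([100.0, 80.0, 120.0, 95.0, 110.0])   # toy local background

corrected = signal - background            # subtract local background
snr = signal / background                  # crude signal-to-noise ratio
keep = snr >= 2.0                          # drop spots with SNR < 2.0
corrected = corrected[keep]

normalized = corrected / corrected.mean()  # ratio to the mean signal intensity
print(normalized.round(3))
```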
Functional gene diversity was calculated using Simpson's reciprocal index (1/D), the Shannon-Weaver index (H′) and Shannon evenness (J) in R (v.2.12.0; https://www.r-project.org/). Beta diversity was calculated in order to compare the differentiation of functional gene communities among habitats along the environmental gradient. Beta diversity estimates were calculated using presence/absence for individual genes grouped into functional categories. We chose Sorensen's index of dissimilarity (Bray-Curtis dissimilarity): $$ \beta = 1-\frac{2c}{S_1+S_2} $$ where S1 is the total number of genes within a specific functional group detected in the first community, S2 is the total number of genes within a specific functional group detected in the second community, and c is the number of genes within a specific functional group common to both communities. The Sorensen index ranges from 0 to 1, where 1 indicates completely different communities and 0 indicates identical communities. This research uses the total number of genes detected in a sample as the S value when comparing the similarity of two microbial community functional structures. Differences in the relative abundance of functional genes between microbial communities were analyzed by one-way analysis of variance (ANOVA) and Tukey's test. A significance level of p < 0.05 was adopted for all comparisons. For the whole set of functional genes and for specific genes, hierarchical clustering was carried out with CLUSTER (http://www.eisenlab.org/) and visualized in TREEVIEW. Furthermore, the Mantel test, a statistical test of the correlation between two matrices, was performed to examine the connection between the functional gene communities and environmental factors. Li Z, Zhang Q, Fang Y, Yang X, Yuan Q. Examining social-economic factors in spatial and temporal change of water quality in red soil hilly region of South China: a case study in Hunan Province. Int J Environ Pollut. 2010;42:184–198. Zhang Z, Tao F, Du J, Shi P, Yu D, Meng Y, et al. Surface water quality and its control in a river with intensive human impacts–a case study of the Xiangjiang River, China. J Environ Manage. 2010;91(12):2483–90. Zhang Q, Li Z, Zeng G, Li J, Fang Y, Yuan Q, et al. Assessment of surface water quality using multivariate statistical techniques in red soil hilly region: a case study of Xiangjiang watershed, China. Environ Monit Assess. 2009;152(1–4):123–31. Wang L, Guo Z, Xiao X, Chen T, Liao X, Song, et al. Heavy metal pollution of soils and vegetables in the midstream and downstream of the Xiangjiang River, Hunan Province. J Geogr Sci. 2008;18:353–62. Zhang C, Yu Z, Zeng G, Jiang M, Yang Z, Cui F, et al. Effects of sediment geochemical properties on heavy metal bioavailability. Environ Int. 2014;73:270–81. Kwon M, Yang J, Lee S, Lee J, Ham B, Boyanov M, et al. Geochemical characteristics and microbial community composition in toxic metal-rich sediments contaminated with Au–Ag mine tailings. J Hazard Mater. 2015;15(296):147–57. Gough H, Dahl A, Nolan M, Gaillard J, Stahl D. Metal impacts on microbial biomass in the anoxic sediments of a contaminated lake. J Geophys Res. 2008;113:17. Gough H, Dahl A, Tribou E, Noble P, Gaillard J, Stahl D. Elevated sulfate reduction in metal-contaminated freshwater lake sediments. J Geophys Res. 2008;113:37.
Marcin C, Marcin G, Justyna M, Katarzyna K, Maria M. Diversity of microorganisms from forest soils differently polluted with heavy metals. Appl Soil Ecol. 2013;64:7–14. Awasthi A, Singh M, Soni S, Singh R, Kalra A. Biodiversity acts as insurance of productivity of bacterial communities under abiotic perturbations. ISME J. 2014;8(12):2445–52. Reich P, Tilman D, Isbell F, Mueller K, Hobbie S, Flynn D, et al. Impacts of biodiversity loss escalate through time as redundancy fades. Science. 2012;336(6081):589–92. Boutin C, Aya K, Carpenter D, Thomas P, Rowland O. Phytotoxicity testing for herbicide regulation: shortcomings in relation to biodiversity and ecosystem services in agrarian systems. Sci Total Environ. 2012;415:79–92. Cardinale B, Srivastava D, Duffy J, Wright J, Downing A, Sankaran M, et al. Effects of biodiversity on the functioning of trophic groups and ecosystems. Nature. 2006;443(7114):989–92. Tang B, Zhang Z, Chen X, Bin L, Huang S, Fu F, et al. Biodiversity and succession of microbial community in a multi-habitat membrane bioreactor. Bioresource Technol. 2014;164:354–61. Konstantinidis K, Isaacs N, Fett J, Simpson S, Long D, Marsh T. Microbial diversity and resistance to copper in metal-contaminated lake sediment. Microbial Ecol. 2003;45(2):191–202. DellAnno A, Beolchini F, Rocchetti L, Luna G, Danovaro R. High bacterial biodiversity increases degradation performance of hydrocarbons during bioremediation of contaminated harbor marine sediments. Environ Pollut. 2012;167:85–92. Zhang J, Zhang Y, Quan X, Chen S. Effects of ferric iron on the anaerobic treatment and microbial biodiversity in a coupled microbial electrolysis cell (MEC)–anaerobic reactor. Water Res. 2013;47(15):5719–28. Wu L, Wen C, Qin Y, Tu Q, Nostrand V, Yuan T, et al. Phasing amplicon sequencing on Illumina Miseq for robust environmental microbial community analysis. BMC Microbiol. 2015;15(1):125. Korehi H, Blöthe M, Schippers A. Microbial diversity at the moderate acidic stage in three different sulfidic mine tailings dumps generating acid mine drainage. Res Microbiol. 2014;165(9):713–8. Zhou J, Wu L, Deng Y, Yang Y, Zhi X, Jiang Y, et al. Reproducibility and quantitation of amplicon sequencing-based detection. ISME J. 2011;5(8):1303–13. Zhao M, Xue K, Wang F, Liu S, Bai S, Sun B, et al. Microbial mediation of biogeochemical cycles revealed by simulation of global changes with soil transplant and cropping. ISME J. 2014;8:2045–55. White C, Shaman A, Gadd G. An integrated microbial process for the bioremediation of soil contaminated with toxic metals. Nat Biotechnol. 1998;16(6):572–5. Giller K, Witter E, McGrath S. Heavy metals and soil microbes. Soil Biol Biochem. 2009;41(10):2031–7. Zhang J, Wang L, Yang J, Liu H, Dai J. Health risk to residents and stimulation to inherent bacteria of various heavy metals in soil. Sci Total Environ. 2015;508:29–36. Wang Y, Shi J, Wang H, Lin Q, Chen X, Chen Y. The influence of soil heavy metals pollution on soil microbial biomass, enzyme activity, and community composition near a copper smelter. Ecotox Environ Safe. 2007;67(1):75–81. Ancion P, Lear G, Dopheide A, Lewis G. Metal concentrations in stream biofilm and sediments and their potential to explain biofilm microbial community structure. Environ Pollut. 2013;173:117–24. Zhu J, Zhang J, Li Q, Han T, Xie J, Hu Y, Chai L. Phylogenetic analysis of bacterial community composition in sediment contaminated with multiple heavy metals from the Xiangjiang River in China. Mar Pollut Bull. 2013;70(1):134–9. 
Kang S, Van N, Gough H, He Z, Hazen T, Stahl D, et al. Functional gene array-based analysis of microbial communities in heavy metals-contaminated lake sediments. FEMS Microbiol Ecol. 2013;86:200–14. Bouskill N, Finkel J, Galloway T, Handy R, Ford T. Temporal bacterial diversity associated with metal-contaminated river sediments. Ecotoxicology. 2010;19:317–28. Chai L, Wang Z, Wang Y, Yang Z, Wang H, Wu X. Ingestion risks of metals in groundwater based on TIN model and dose–response assessment - a case study in the Xiangjiang watershed, central-south China. Sci Total Environ. 2010;408:3118–24. Tilman D, Reich P, Knops J. Biodiversity and ecosystem stability in a decade-long grassland experiment. Nature. 2006;441(7093):629–32. Weigelt A, Schumacher J, Roscher C, Schmid B. Does biodiversity increase spatial stability in plant community biomass? Ecol Lett. 2008;11(4):338–47. Mulder C, Uliassi D, Doak D. Physical stress and diversity-productivity relationships: the role of positive interactions. Proc Natl Acad Sci. 2001;98(12):6704–8. Eisenhauer N, Schulz W, Scheu S, Jousset A. Niche dimensionality links biodiversity and invasibility of microbial communities. Funct Ecol. 2013;27(1):282–8. Maqbool Z, Asghar H, Shahzad T, Hussain S, Riaz M, Ali S, et al. Isolating, screening and applying chromium reducing bacteria to promote growth and yield of okra (Hibiscus esculentus L.) in chromium contaminated soils. Ecotox Environ Safe. 2014;7:114. Kumar K, Srivastava S, Singh N, Behl H. Role of metal resistant plant growth promoting bacteria in ameliorating fly ash to the growth of Brassica juncea. J Hazard Mater. 2009;170(1):51–7. Gorny J, Billon G, Lesven L, Dumoulin D, Madé B, Noiriel C. Arsenic behavior in river sediments under redox gradient: a review. Sci Total Environ. 2015;505:423–34. Scala D, Hacherl E, Cowan R, Young L, Kosson D. Characterization of Fe (III)-reducing enrichment cultures and isolation of Fe (III)-reducing bacteria from the Savannah River site, South Carolina. Res Microbiol. 2006;157(8):772–83. Jameson E, Rowe O, Hallberg K, Johnson D. Sulfidogenesis and selective precipitation of metals at low pH mediated by Acidithiobacillus spp. and acidophilic sulfate-reducing bacteria. Hydrometallurgy. 2010;104(3):488–93. Kieu H, Mueller E, Horn H. Heavy metal removal in anaerobic semi-continuous stirred tank reactors by a consortium of sulfate-reducing bacteria. Water Res. 2011;45(13):3863–70. Riefler R, Krohn J, Stuart B, Socotch C. Role of sulfur-reducing bacteria in a wetland system treating acid mine drainage. Sci Total Environ. 2008;394(2):222–9. Venceslau S, Stockdreher Y, Dahl C, Pereira I. The "bacterial heterodisulfide" DsrC is a key protein in dissimilatory sulfur metabolism. BBA-Bioenergetics. 2014;1837(7):1148–64. Nies D. Efflux-mediated heavy metal resistance in prokaryotes. FEMS Microbiol Rev. 2003;27(2–3):313–39. Zhang J, Wang L, Yang J, Liu H, Dai J. Health risk to residents and stimulation to inherent bacteria of various heavy metals in soil. Sci Total Environ. 2015;1(508):29–36. Ramsey M, Thompson M. High-accuracy analysis by inductively coupled plasma atomic emission spectrometry using the parameter-related internal standard method. Anal At Spectrom. 1987;2:497–502. Kejnovsky E, Kypr J. DNA extraction by zinc. Nucleic Acids Res. 1997;25(9):1870–1. Gough H, Stahl D. Microbial community structures in anoxic freshwater lake sediment along a metal contamination gradient. ISME J. 2011;5(3):543–58. Wu L, Liu X, Schadt C, Zhou J. 
Microarray-based analysis of subnanogram quantities of microbial community DNAs by using whole-community genome amplification. Appl Environ Microb. 2006;72(7):4931–41. Cong J, Liu X, Lu H, Xu H, Li Y, Deng Y, et al. Available nitrogen is the key factor influencing soil microbial functional gene diversity in tropical rainforest. BMC Microbiol. 2015;15:167. Liang Y, He Z, Wu L, Deng Y, Li G, Zhou J. Development of a common oligonucleotide reference standard for microarray data normalization and comparison across different microbial communities. Appl Environ Microb. 2010;4(76):1088–94. Wu L, Kellogg L, Devol A, Tiedje J, Zhou J. Microarray-Based Characterization of Microbial Community Functional Structure and Heterogeneity in Marine Sediments from the Gulf of Mexico. Appl Environ Microb. 2008;74(14):4516–29. We thank the data analysis platform of Institute for Environmental Genomics. This research was supported by the National Natural Science Foundation of China (51174239). The data set supporting the results of this article has been deposited in NCBI's Gene Expression Omnibus and are accessible through GEO Series accession number GSE85064 (http://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE85064). JSQ carried out biogeochemical data analysis, geochip data analysis and the manuscript preparation. LMM carried out the sampling collecting. GM participated in the sampling collecting and biogeochemical data. ZJY participated in the lab design and the manuscript revision. YHQ participated in the Geochip analysis. LXD was involved in revising the manuscript in preparation for submission. All authors read and approved the final manuscript. JSQ is the master of School of Minerals Processing and Bioengineering in Central South University, Changsha, China. LMM was the master of School of Minerals Processing and Bioengineering in Central South University, Changsha, China. GM is the doctor of School of Minerals Processing and Bioengineering in Central South University, Changsha, China. ZJY is the associate professor of School of Minerals Processing and Bioengineering in Central South University, Changsha, China. YHQ is the associate professor of School of Minerals Processing and Bioengineering in Central South University, Changsha, China. LXD is the professor of School of Minerals Processing and Bioengineering in Central South University, Changsha, China. School of Minerals Processing and Bioengineering, Key Laboratory of Biometallurgy of Ministry of Education, Central South University, Changsha, 410083, China Shiqi Jie, Mingming Li, Min Gan, Jianyu Zhu, Huaqun Yin & Xueduan Liu Department of Botany and Microbiology, Institute for Environmental Genomics, University of Oklahoma, Norman, OK, 73019, USA Huaqun Yin Shiqi Jie Mingming Li Min Gan Jianyu Zhu Xueduan Liu Correspondence to Jianyu Zhu or Huaqun Yin. Numbers of detected genes involved in metal homeostasis and dsr genes. (PDF 102 kb) Detrended correspondence analysis (DCA) of all functional genes in three sites. (PDF 8 kb) Jie, S., Li, M., Gan, M. et al. Microbial functional genes enriched in the Xiangjiang River sediments with heavy metal contamination. BMC Microbiol 16, 179 (2016). https://doi.org/10.1186/s12866-016-0800-x GeoChip Microbial functional gene Heavy metal contamination Metal resistance Xiangjiang river
CommonCrawl
Confusion about the conic equation

Let me start with some basic definitions:

Definition 1. A conic section is the curve resulting from the intersection of a plane and a cone.

Definition 2. A conic section is the set of all points in a plane with the same eccentricity with respect to a particular focus and directrix.

Definition 3. A conic section is the set of points $(x,y)$ satisfying the implicit formula $$Ax^2+Bxy+Cy^2+Dx+Ey+F=0$$

All these definitions are familiar to me and they are taken from a paper of Mzuri S. Handlin, Conic Sections Beyond $\mathbb{R}^2$. Now, in my university textbook I have the following theorem.

Theorem. Any conic has an equation of the form $$Ax^2+Bxy+Cy^2+Dx+Ey+F=0$$ where $A,B,C,D,E$ and $F$ are real numbers and $A,B$ and $C$ are not all zero. Conversely, the set of all points in $\mathbb{R}^2$ whose coordinates $(x,y)$ satisfy an equation of the form above is a conic.

The proof of this theorem is omitted, and no reference is included. I've been searching for a proof, but failed to find one. It seems that the above theorem may motivate us to adopt Definition 3, right? Can someone provide a reference for a proof of the theorem, or sketch guidelines for a possible proof?

conic-sections Salech Rubenstein

How does your textbook define a conic? – Yves Daoust Jan 13 '17 at 14:38
@YvesDaoust: The textbook starts straight away with conic sections; an informal definition is given: "Conic section is the collective name given to the shapes that we obtain by taking different plane slices through a double cone." – Salech Rubenstein Jan 13 '17 at 14:46
This is your Definition 1, isn't it? – Yves Daoust Jan 13 '17 at 14:50
Yes, that is true. – Salech Rubenstein Jan 13 '17 at 14:51
Then write the equation of a cone in general position and set $z=0$. – Yves Daoust Jan 13 '17 at 14:52

In my opinion, the theorem as stated fails to account adequately for some special cases. But we'll deal with those as we proceed. In order to avoid having to write "ellipse or circle" in various places, I'll use the convention that a circle is a special case of an ellipse. Part 1. We wish to show that any conic has an equation of the form $$Ax^2+Bxy+Cy^2+Dx+Ey+F=0.$$ Consider a right circular cone of some arbitrary vertex angle in general position, and let $\gamma$ be the intersection of the cone with the $x,y$ plane. Perform an isometry $M$ in $x,y,z$ space that translates the vertex of the cone to the origin and rotates the axis of the cone onto the $z$ axis.
The image of the original conic section $\gamma$ under $M$, the curve $M(\gamma),$ is the intersection of some cone of the form $x^2 + y^2 - k^2z^2 = 0$ with some plane of the form $px + qy + rz + s = 0.$ We can perform the inverse isometry $M^{-1}$ to take $M(\gamma)$ back to $\gamma.$ The curve $\gamma$ satisfies the previously given equations in rotated and translated coordinates, that is, it satisfies \begin{gather} x'^2 + y'^2 - k^2z'^2 = 0 \tag1 \\ \text{and} \\ px' + qy' + rz' + s = 0 \tag2 \end{gather} where each of the variables $x',$ $y',$ and $z'$ is a first-degree polynomial over the variables $x,$ $y,$ and $z.$ Substitute the appropriate polynomial over $x,$ $y,$ and $z$ for each of the variables $x',$ $y',$ and $z'$ in the preceding equations of $\gamma$; we will then find that Equation $1$ becomes a new quadratic equation in $x,$ $y,$ and $z$ and Equation $2$ is the equation $z=0.$ Then make the substitution $z=0$ in Equation $1$; the result is something of the form $Ax^2+Bxy+Cy^2+Dx+Ey+F=0.$ As for the condition that $A,$ $B,$ and $C$ are not all zero, if $A=B=C=0$ we obtain the equation of a line. But technically, this satisfies Definition 1 from the question: a line is the intersection of a cone with a plane through the vertex of the cone and tangent to a circle that generates the cone. It also satisfies Definition 2 if you consider an infinite eccentricity to be acceptable, as some sources do, or if you place the focus on the directrix. The objection to the "conic" described by $Dx+Ey+F=0$ seems to be solely that is is not one of the "expected" conics, that is, it is not a non-degenerate circle, ellipse, parabola, or hyperbola. Insisting on only these "expected" conics leads to complications in the second part of the theorem, however. Part 2. For the converse implication, let at least one of the real numbers $A,$ $B,$ and $C$ be non-zero, and let the real polynomial $Ax^2+Bxy+Cy^2+Dx+Ey+F=0$ be the equation of a curve $\gamma.$ It will be convenient to use the fact that an ellipse, parabola, or hyperbola given by any equation of the form $Ax^2+Bxy+Cy^2+H=0$ remains an ellipse, parabola, or hyperbola (respectively) under any invertible linear transformation (that is, if the variables $x$ and $y$ are replaced in that equation by any two independent linear combinations of $x$ and $y$) and under any translation. That's a generally useful fact, but giving a full proof if it would make this answer much longer, so let's just suppose we already know it. Case 1. Suppose $B^2 - 4AC \neq 0.$ We will see that the coefficients $D$ and $E$ are determined by a translation of the center of the conic away from the origin of the $x,y$ plane, and that there is an inverse translation that eliminates these terms. Because $B^2 - 4AC \neq 0,$ the matrix $\begin{pmatrix} 2A & B \\ B & 2C \end{pmatrix}$ is invertible, and there exist two real numbers $m$ and $n$ that satisfy the simultaneous linear equations \begin{align} 2Am + Bn &= D, \tag3 \\ Bm + 2Cn &= E. \tag4 \end{align} Let $T(\gamma)$ be the translation of $\gamma$ by $m$ units in the $x$ direction and $n$ units in the $y$ direction. The equation of $T(\gamma)$ is $$A(x-m)^2+B(x-m)(y-n)+C(y-n)^2+D(x-m)+E(y-n)+F=0.$$ Multiply this out to obtain an equation of the form $Ax^2+Bxy+Cy^2+D'x+E'y+F'=0,$ where \begin{align} D' &= D - 2Am - Bn, \\ E' &= E - Bm - 2Cn, \ \text{and} \\ F' &= Am^2 + Bmn + Cn^2 - Dm - En + F \end{align} But Equations $3$ and $4$ imply that $D'=E'=0,$ so the equation of $T(\gamma)$ is $$ Ax^2+Bxy+Cy^2+F'=0. 
$$ Now if $A=0$ then the equation of $T(\gamma)$ is $(Bx+Cy)y + F'=0,$ so $T(\gamma)$ is a hyperbola. But if $A\neq0,$ we can "complete the square" as follows. Let $\alpha = \sqrt{\lvert A\rvert}.$ Then $$\left(\alpha x + \frac{B}{2\alpha}y\right)^2 = Ax^2 + Bxy + \frac{B^2}{4A}y^2,$$ and therefore the equation of $T(\gamma)$ is $$ \left(\alpha x + \frac{B}{2\alpha}y\right)^2 + \left(C - \frac{B^2}{4A}\right)y^2 + F' = 0. $$ We can also write this in the form $$ \bar x^2 - \frac{B^2 - 4AC}{4A}y^2 + F' = 0 \tag5$$ where $\bar x = \alpha x + \frac{B}{2\alpha}y$. If $B^2-4AC > 0$ this is the equation of a hyperbola if $F\neq0$ but it is the equation of two lines through the origin if $F=0.$ If $B^2-4AC < 0,$ Equation $5$ is the equation of an ellipse if $F'<0,$ a single point if $F'=0,$ but the empty set if $F'>0.$ As these cases describe the possible shapes of $T(\gamma)$, in the same cases $\gamma$ (which is congruent to $T(\gamma)$) must also be a hyperbola, an ellipse, a point, or the empty set, respectively. Case 2. Let $B^2 - 4AC = 0.$ Since we have assumed $A,$ $B,$ and $C$ are not all zero, $A$ and $C$ cannot both be zero. Without loss of generality, suppose $A \neq 0.$ Let $\alpha = \sqrt{\lvert A\rvert}$ and let $\beta = \frac{B}{2\alpha}.$ Then $2\alpha\beta = B$ and $\beta^2 = C,$ so the equation of $\gamma$ can be written $$(\alpha x + \beta y)^2 + Dx + Ey + F = 0. \tag6$$ If $\alpha E \neq \beta D$ then Equation $6$ gives $Dx + Ey$ as a quadratic function of $\alpha x + \beta y,$ so it is the equation of a parabola. But if $\alpha E = \beta D$ then Equation $6$ is a quadratic equation in $\alpha x + \beta y$ and it is the equation of either a single straight line, two parallel lines, or the empty set. So we see that the theorem as stated is not quite right. First of all, there are equations of the form $Ax^2+Bxy+Cy^2+Dx+Ey+F=0$ that have no solutions in real numbers $x$ and $y.$ There are several special cases to consider: \begin{align} P:\quad & A=B=C=0, \\ Q:\quad & B^2 - 4AC > 0 \quad \text{and} \quad F = 0, \\ R:\quad & B^2 - 4AC < 0 \quad \text{and} \quad F \leq \frac{BDE-AE^2-CD^2}{B^2 - 4AC}, \\ S:\quad & B^2 - 4AC = 0 \quad \text{and} \quad AE = CD. \end{align} If any of the propositions $P,$ $Q,$ $R,$ or $S$ is true, it can be shown through algebra that the equation $Ax^2+Bxy+Cy^2+Dx+Ey+F=0$ is one of the cases whose solution set has already been shown to be one or two lines, a single point, or the empty set. But if all four propositions are false then we have an ellipse, a parabola, or a hyperbola. The theorem as stated, however, considers only proposition $P,$ so it does not give sufficient conditions to determine whether the equation's solution is a non-degenerate conic (an ellipse, parabola, or hyperbola) or one of the other possible shapes. If we accept that the conic section might be a degenerate form, however, then all the possible solution sets can be interpreted as conics. This includes the empty set, which is obtained by a cone with axis parallel to the $z$ axis and vertex angle equal to a right angle, and parallel lines, which are obtained by intersecting the plane with a cylinder, which can be considered a cone with vertex at infinity (which it actually is, in projective geometry). A single point, single line, and intersecting lines can all be obtained by placing the vertex of the cone on the $x,y$ plane. 
Under this interpretation, it is not necessary to exclude the case in which $A=B=C=0.$ The answers to the following question may also be of interest: Confusion with the various forms of the equation of second degree – David K

Wonderful explanation, for what it's worth, +1 – complexmanifold Jan 27 '17 at 14:38
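To make the case analysis above concrete, here is a small sketch in plain Python, written for this discussion rather than taken from any textbook. It classifies a conic from its six coefficients using the standard discriminants: the determinant of the full 3x3 conic matrix detects the degenerate cases, and the sign of $B^2-4AC$ separates ellipse, parabola and hyperbola, in agreement with Cases 1 and 2 of the answer.

```python
# Classify A x^2 + B xy + C y^2 + D x + E y + F = 0 using the standard
# discriminants (illustrative sketch only).
def classify_conic(A, B, C, D, E, F, eps=1e-12):
    if abs(A) < eps and abs(B) < eps and abs(C) < eps:
        return "degenerate: at most a straight line"
    # Determinant of [[A, B/2, D/2], [B/2, C, E/2], [D/2, E/2, F]]
    full_det = (A * (C * F - E * E / 4)
                - (B / 2) * (B / 2 * F - E / 2 * D / 2)
                + (D / 2) * (B / 2 * E / 2 - C * D / 2))
    disc = B * B - 4 * A * C
    if abs(full_det) < eps:
        return "degenerate: point, line(s), or empty set"
    if disc < 0:
        return "ellipse (possibly with no real points)"
    if disc > 0:
        return "hyperbola"
    return "parabola"

print(classify_conic(1, 0, 1, 0, 0, -1))   # x^2 + y^2 = 1 -> ellipse (a circle)
print(classify_conic(1, 0, -1, 0, 0, 0))   # x^2 - y^2 = 0 -> degenerate (two lines)
print(classify_conic(1, 0, 0, 0, -1, 0))   # x^2 - y = 0   -> parabola
print(classify_conic(1, 0, -1, 0, 0, -1))  # x^2 - y^2 = 1 -> hyperbola
```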
CommonCrawl
Ada-WHIPS: explaining AdaBoost classification with applications in the health sciences Julian Hatwell ORCID: orcid.org/0000-0001-6589-51651, Mohamed Medhat Gaber1 & R. Muhammad Atif Azad1 Computer Aided Diagnostics (CAD) can support medical practitioners to make critical decisions about their patients' disease conditions. Practitioners require access to the chain of reasoning behind CAD to build trust in the CAD advice and to supplement their own expertise. Yet, CAD systems might be based on black box machine learning models and high dimensional data sources such as electronic health records, magnetic resonance imaging scans, cardiotocograms, etc. These foundations make interpretation and explanation of the CAD advice very challenging. This challenge is recognised throughout the machine learning research community. eXplainable Artificial Intelligence (XAI) is emerging as one of the most important research areas of recent years because it addresses the interpretability and trust concerns of critical decision makers, including those in clinical and medical practice. In this work, we focus on AdaBoost, a black box model that has been widely adopted in the CAD literature. We address the challenge – to explain AdaBoost classification – with a novel algorithm that extracts simple, logical rules from AdaBoost models. Our algorithm, Adaptive-Weighted High Importance Path Snippets (Ada-WHIPS), makes use of AdaBoost's adaptive classifier weights. Using a novel formulation, Ada-WHIPS uniquely redistributes the weights among individual decision nodes of the internal decision trees of the AdaBoost model. Then, a simple heuristic search of the weighted nodes finds a single rule that dominated the model's decision. We compare the explanations generated by our novel approach with the state of the art in an experimental study. We evaluate the derived explanations with simple statistical tests of well-known quality measures, precision and coverage, and a novel measure stability that is better suited to the XAI setting. Experiments on 9 CAD-related data sets showed that Ada-WHIPS explanations consistently generalise better (mean coverage 15%-68%) than the state of the art while remaining competitive for specificity (mean precision 80%-99%). A very small trade-off in specificity is shown to guard against over-fitting which is a known problem in the state of the art methods. The experimental results demonstrate the benefits of using our novel algorithm for explaining CAD AdaBoost classifiers widely found in the literature. Our tightly coupled, AdaBoost-specific approach outperforms model-agnostic explanation methods and should be considered by practitioners looking for an XAI solution for this class of models. Medical diagnosis is a complex, knowledge intensive process. A medical expert must consider the symptoms of a patient, along with their medical and family history including complications and co-morbidities [1]. The expert may carry out physical examinations and order laboratory tests and combine the results with their prior knowledge. These activities are time intensive and, increasingly, considered sources of Big Data [2, 3]. Suitably experienced, available practitioners and experts are needed to orchestrate and interpret the results, yet these experts are a scarce resource in many healthcare settings. As healthcare needs grow and the sources of medical data increase in size and complexity, the diagnostic process must scale to meet these growing demands. 
State of the art machine learning (ML) methods underpin many computer aided diagnostics (CAD) systems. CAD can address the aforementioned scalability challenges and may improve patient outcomes [4–6]. These ML methods demonstrate exceptional predictive and classification accuracy and can handle high dimensional data sets that often have very high rates of missing values. Examples of such challenging data sets include high throughput bioinformatics, magnetic resonance imaging scans, microarray experiments, and complex electronic health records (EHR) [7, 8], as well as unstructured, user-generated content (e.g. from social media feeds) that have been used to learn individuals' sub-health and mental health status outside of a clinical setting [9, 10]. Unfortunately, however, many state of the art ML models are so-called "black boxes" because they defy explanation. The complexity of black box models renders them opaque to human reasoning. Consequently, experts and medical practitioners are reluctant to accept black box models in practice since they need to reason about, verify and approve the model's output before making a final decision. In the clinical setting, the model's output should facilitate professional decision-making alongside their expert clinical training and experience. A standalone classification from a black box model does not serve this purpose well, if at all. This barrier to adoption is evident, even when the black box models are demonstrably more accurate [1, 11–17]. There is also a legal right to explanation for high stakes decisions, which includes medical diagnosis and treatment recommendations [5, 18]. Some might argue that a black box model is no less transparent than a doctor [19]. Nevertheless, a doctor can be asked to justify their diagnosis and will do so from a position of domain understanding. In contrast, providing explanations for black box models is a very complex challenge. These models find patterns in data without domain understanding. Yet we wish to communicate explanations to a variety of levels of domain expertise: patient, practitioner, healthcare administrators and regulators. Additionally, we set higher standards of statistical rigour before granting our trust to ML derived decisions and explanations [20, 21]. Recent studies found that classification is the most widely implemented ML task in the medical sector and solutions using the AdaBoost algorithm [22] form a significant subset of the available research. Clinical applications include the diagnosis of Alzheimer's disease, diabetes, hypertension and various cancers [23–26]. There are also non-clinical assessments of self-reported mental health, and subhealth status. The latter is characterised by chronic fatigue and infirmity that often leads to future ill-health. These non-clinical approaches used unstructured, user generated content from online health communities [9, 10]. AdaBoost has also been used as a preprocessing tool to select automatically the most important features from high dimensional data [27, 28]. Yet, AdaBoost is considered a typical black box as a consequence of its internal structure: an ensemble of typically 100s to 1000s of shallow decision trees. The ensemble uses a weighted majority vote to classify data instances; a system that is difficult to analyse mathematically. The widespread adoption of AdaBoost in medical applications, coupled with its black box nature leads to the challenge; to make AdaBoost explainable. 
We present Adaptive-Weighted High Importance Path Snippets (Ada-WHIPS), a novel method for explaining multi-class AdaBoost classification through inspection of the model internals; a collection of adaptive weighted, shallow decision trees. The method proceeds by extracting the decision path from each tree that is specific to the data instance requiring an explanation (the explanandum). Only the paths that agree with the weighted majority vote are retained. These paths are disaggregated into individual decision nodes (which we call path snippets), and the weights are reassigned according to depth within the tree and frequency within the ensemble. The most important snippets are filtered and sorted by the newly applied weights. These adaptive-weighted, high importance path snippets are then greedily added to a classification rule. The final rule is tested for quality metrics and counterfactual conditions against the training (or historical) data. To demonstrate our contribution, we now present four illustrative examples of Ada-WHIPS explanations. These examples have been drawn at random from the data sets used in our experiments, which are all CAD or medically relevant ML problems. An Ada-WHIPS explanation is a simple, conjunctive classification rule, presented alongside confidence and counterfactual (contrast) information. This includes: generality (coverage), specificity (precision), and how much precision decreases (% points) when any single rule term is violated. The end user can immediately determine the essential attributes (the features and decision boundary) that led to the model's confident classification: In Table 1, statistical features computed from foetal cardiotocograms are used to diagnose heart abnormalities. In Table 2, an online health community (self-selecting) responded to a twenty-four question survey on their mental health. The classification model identifies those individuals who have actually sought treatment. The individual shown in the examples has responded that they are experiencing problems at work and that there may be a family history of mental illness. Table 3 shows attributes from an EHR that were critical in determining the risk of readmission for one particular patient. Table 4 shows the results of a classifier for abnormal thyroid conditions. Full details of the data sets used can be found in Table 6. Table 1 Explanation of a classifier for foetal heart abnormalities Table 2 Explanation of a non-clinical mental health assessment classifier Table 3 Explanation of automated 30-day hospital readmission risk assessment Table 4 Explanation of a classifier for thyroid condition We proceed with a walk through of the interpretation of Table 1: The model has classified the instance as "Normal." This is on a prior of 79.0% Normal in the training (historical) data. However, the given instance has a set of readings that raises the precision to 98.2%. If an almost identical instance were found with a point change in any one of the features listed (taking the instance outside the decision boundary), precision would decrease by the amount shown on the adjacent Contrast column. The new values would be worse than a random guess on this prior, with a raised number of prolonged decelerations per second returning a different outcome code altogether. These conditions hold on 60% of the historical data, making this a high quality rule that can inform the clinician's decision on whether any intervention is necessary – most likely not, in this case. 
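To make the walk-through concrete, the sketch below shows how the coverage, precision and per-term contrast values of such an explanation could be recomputed against a table of model-labelled historical cases. The feature names, thresholds, class labels and toy data are illustrative placeholders (they are not the actual attributes behind Table 1); the snippet only mirrors the way the Coverage, Precision and Contrast columns are read.

```python
import pandas as pd

# Illustrative historical data labelled by the model ("pred"); feature names,
# thresholds and values are placeholders, not the attributes of Table 1.
hist = pd.DataFrame({
    "abnormal_stv":    [45.0, 62.0, 51.0, 50.0, 40.0, 58.0],
    "prolonged_decel": [0.000, 0.001, 0.001, 0.004, 0.000, 0.001],
    "pred":            ["Normal", "Suspect", "Normal", "Suspect", "Normal", "Normal"],
})

rule = [("abnormal_stv", "<=", 59.5), ("prolonged_decel", "<=", 0.0015)]
target = "Normal"                                     # class assigned by the model

def covers(df, terms):
    """Boolean mask of instances satisfying every term of the rule."""
    mask = pd.Series(True, index=df.index)
    for feat, op, thresh in terms:
        cond = df[feat] <= thresh if op == "<=" else df[feat] > thresh
        mask &= cond
    return mask

cov = covers(hist, rule)
coverage = cov.mean()                                 # generality: share of cases covered
precision = (hist.loc[cov, "pred"] == target).mean()  # specificity on the covered set

# Contrast: how much precision falls when a single rule term is reversed
for i, (feat, op, thresh) in enumerate(rule):
    flipped = list(rule)
    flipped[i] = (feat, ">" if op == "<=" else "<=", thresh)
    cf = covers(hist, flipped)
    cf_precision = (hist.loc[cf, "pred"] == target).mean()   # NaN if nothing is covered
    print(feat, "contrast:", round(100 * (precision - cf_precision), 1), "pct points")
```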
The rest of this paper is organised as follows: We continue this Background section with an in-depth review of the current state of the art in XAI, related work in CAD and a recap of the Multi-Class AdaBoost algorithm. We introduce our novel algorithm and describe our experimental setup in the Method section. We report our results and elaborate on their significance in the Results section. Further important points are presented in the Discussion section. The article finishes with a section on Conclusion & future work. XAI and interpretable models - current state of the art Medical practitioners making safety critical decisions need explanations of ML classification results that provide the required level of accountability. The current research seeks to address the challenge posed by the use of AdaBoost models in healthcare applications. In contrast to model-agnostic methods that operate on input sensitivity to synthetic data, our approach is to "open the black box" of an already trained and well performing AdaBoost model. This approach provides explanations that directly relate to the model internals. In the following paragraphs, we outline the state of the art and the novelty of our approach. The decompositional approach [29] to interpretability is well established. "Decompositional" refers to the process of querying directly the smallest information unit of a model, e.g. the set of all decision nodes within each decision tree of an ensemble. Examples in the literature include: DefragTrees [30], Forex++ [31], RF+HC [32], inTrees [33], RuleFit [34], Brute [35]. All these methods generate a cascading rule list (CRL) as a simpler, surrogate of the original classification model. The prevalence of CRL as interpretable models indicates the importance of logical rules for explainability. Logical rules are intuitive to understand, being the standard language of reasoning [20, 36] and are the paradigm that we have adopted in our method. The above mentioned methods are examples of globally interpretable proxy models; they allow the user to infer some understanding of the black box model's overall behaviour. However, with such proxy models there is always a trade-off; increasing interpretability but also increasing classification error and giving no guarantees of fidelity with the original model. Anything less than perfect fidelity means that, for some instances, proxy and model do not agree. Explanations that refer to a different class than the model's predicted class are of no use in a safety-critical setting, such as CAD. Ada-WHIPS uses logical rules and is a decompositional method but unlike the above mentioned methods, Ada-WHIPS explains one classification instance at a time rather than the global model behaviour described. The method is local and post-hoc [37]. Ada-WHIPS also has perfect fidelity by design. That is, the explanation generating process begins with the black model's classification as its starting point and is, therefore, guaranteed to match. Several post-hoc, per instance explanation methods have been proposed as model-agnostic frameworks (also known as didactic methods [29]). The model-agnostic assumption is that any model's behaviour can be explained given unfettered access only to the model inputs and outputs (that is, to make an unlimited number of calls) but no access to the training data nor the model internals. Model-agnostic methods probe the model's behaviour by generating a large, synthetic input sample. 
Each explanation is inferred from the effect of different input attributes on the outputs. Local Interpretable Model-agnostic Explanations (LIME) [21] generates a sparse linear model, SHapley Additive exPlanations (SHAP) [38] uses a game theoretic approach for a similar result: a set of non-zero coefficients for the input attributes. The coefficients are additive and their magnitude is proportional to the importance in the classification of the attributes they represent. As a result, these methods are categorised as Additive Feature Attribution Methods (AFAM) [38]. The main disadvantage of AFAM is that it is difficult to know when to apply an AFAM explanation to another previously unseen instance that does not share all of the same attribute values associated with the coefficients. Anchors [36] and LOcal Rule-based Explanations (LORE) [39] also use synthetic samples but generate a single classification rule (CR) as an explanation (as opposed to the many rules in a CRL). A CR-based explanation resolves the main disadvantage of AFAM because it is trivial to generalise a CR to another instance; the rule either covers or does not. Anchors uses the same synthetic sampling technique used by LIME since it was developed by the same research team to overcome the shortcoming of AFAM. LORE uses a genetic algorithm to generate the synthetic sample but this requires a very large number of calls to the black box model, and is computationally expensive to run in its own right. Model-agnostic techniques, while effective in image and text classification, have disadvantages on tabular data sets. For one thing, they require additional checks; variance in the sampling process can cause variance in the resulting explanations over repeated trials [40, 41]. Furthermore, for tabular data, a realistic synthetic distribution must be estimated from the training data set or a large i.i.d. sample. This requirement violates the model-agnostic assumption of accessing only the inputs and outputs of the black box model. LIME, Anchors, and SHAP sample from the marginal training distribution, while LORE explores the marginal input domains. Clearly such synthetic samples have no guarantees to represent the underlying population because they do not use the joint distribution. In most real-world problems, the joint distribution is unknown or intractable. Yet, these methods explicitly access the training data but there is no rationale given in the relevant articles for not using the empirical distribution, for example by the bootstrapping method used in Brute [35]. Consequently, these model-agnostic methods are thought to put too much weight on unlikely or impossible examples. Moreover, LIME and Anchors require all features of tabular data to be categorical. Continuous features must be discretised in advance of training the classification model. To this end, quartile binning [36] is proposed by the authors. This is an arbitrary procedure and a significant compromise that puts constraints on the model of choice and potentially loses important information from the continuous features. Ada-WHIPS, in contrast, assumes access to both the model internals and the training data. By decomposing the internals, using the adaptive weights and executing a greedy heuristic against the bootstrapped training data, the output explanation is an open-the-box method, and uses the empirical distribution instead of a synthetic distribution. 
Furthermore, Ada-WHIPS exploits the information-theoretic discretisation of the continuous features that occurs when the individual decision trees are induced during the AdaBoost model training. This information preserving approach is an advantage over the methods that require discretisation as a preprocessing step. Model-agnostic methods can also be slow to compute. For example, computing Shapley Values entails solving a large combinatorial problem which limits the scalability [42], while LORE's synthetic samples are generated by a genetic algorithm that is not parallelisable in the currently available versionFootnote 1. Ada-WHIPS is fast, as our experimental study shows. We suggest that the model-agnostic assumption should be taken with caution. There is a prevailing view in the XAI research community that model-agnostic methods are a very active research area while model-specific methods may be in decline. Yet, in a recent, comprehensive literature review [43] the following methods were categorised as model-agnostic when, in fact, they are model-specific: Saliency Maps, Activation Maximisation, Layer-wise Relevance Propagation. These methods all require access to the internal neurons in an Artificial Neural Network and their categorisation as model-agnostic may be a sign of confirmation bias in the research community. We also argue that model-agnostic methods are only required for a subset of ML problems, such as model auditing by an external third party. This scenario does not apply in CAD system development where the capability to add explanations would come from the owners themselves of the model and data. With access to both the training data and the model, decompositional methods should always be considered since they do not rely on synthetic data and can deliver explanations that are more representative of the model's internals [43]. Treeinterpreter [44] is possibly the earliest model-specific explanation method, applicable to regression problems with Random Forest models. TreeSHAP [42], based on the SHAP method, assumes an underlying XGBoost model and queries the internal decision nodes. This model-specific design provides faster and more consistent results than the original SHAP algorithm for XGBoost models. Thus, model-specific methods are and should remain an active and relevant research area. Finally, very few XAI methods have so far implemented counterfactuals, which are "what if" scenarios that indicate minimal changes to the inputs that would yield a different classification. LORE is the only well-cited example to the best of our knowledge and applies a strict change-of-class counterfactual paradigm and only works for binary classification. Ada-WHIPS provides a more flexible counterfactual solution that shows how the confidence (specificity) of a classification changes, as opposed to a discrete change of class. This novel, probabilistic approach allows the expert user to control and interpret the results since a decreasing confidence has ramifications even if the outcome code does not change. For example CAD may involve rare conditions in very unbalanced data sets, thus simply decreasing the probability that the individual is disease free may be enough to suggest an intervention. The method works just as well for multi-class problems. As a minor contribution, we also provide a novel method to avoid over-fitting explanations that could potentially be applied elsewhere. CAD is an active research area. 
Yet, the safety critical nature suggests that it is unethical to make diagnoses without human intervention [45, 46]. XAI in healthcare offers the paradigm to assist rather than replace the medical expert. Hence, we present recent research that aligns to this paradigm. We focus on methods that predict or classify from non-image based clinical data. Table 5 summarises our review. Table 5 Summary of related work Table 6 Data sets used in the experiments Lamy et al. [47] uses a case-based reasoning (CBR) approach to recommend treatments for breast cancer patients. Using a combination of weighted k-nearest neighbours (WkNN) and multidimensional scaling (MDS), the user is presented with a visual interface making recommendations based on similarities/differences with historical cases. CBR provides the medical expert with several comparison instances/cases to evaluate, while Ada-WHIPS presents one classification rule directly extracted from the model internals that must be true of the explanandum instance while coverage statistics measure the rule's generalisation to other instances. Kwon et al. [48] presents RetainVis, a visual analytics application for predicting health status from health insurance data. Feature attribution values and t-SNE clustering are used to provide an interactive interface. The paper demonstrates the benefits and deeper insights available from tight coupling to a specific model; a recurrent neural network (RNN), in this case. Adnan and Islam [31] uses a novel algorithm to simplify an existing tree ensemble. The compact, surrogate model is a rule list that can be used for classifying unseen instances. The authors claim that the global behaviour of the compact model is easier to interpret than the black box ensemble but the rule list can itself be long and time consuming to interpret. In contrast, our method is concerned with generating a single rule to explain a single instance at a time. Jalali and Pfeifer [8] use an ensemble of linear support vector machines (L1-SVM) to predict cancer diagnosis and identify important patterns of gene expression. This novel approach is tightly coupled to the data domain (genetic biomarkers) whereas Ada-WHIPS could feasibly be applied to any tabular data including those not related to medicine or healthcare. Turgeman and May [12] propose a simple ensemble of a C5.0 decision tree and a support vector machine (SVM). The easiest to classify instances can be explained by traversing the tree, while hard to classify instances are left to the SVM which remains a black box. Consequently, this method cannot produce a straightforward explanation for all instances, unlike our method. Jovanovic et al. [11] implement a Tree-Lasso system for introducing domain knowledge about serious disease conditions into a sparse logistic regression model that is easy to interpret. Lasso based methods discover a small set of important features using L1-norm regularisation but the tree-lasso requires domain knowledge to be provided apriori. Ada-WHIPS rule conditions are discovered by information theoretic tree induction during the AdaBoost model training, and does not require any apriori inputs. Letham et al. [13] proposes a novel interpretable model, the Bayesian Rule List (BRL). The model is used in stroke prediction. The predictive results are competitive with state of the art, but in common with cascading rule lists, interpretability decreases with rule depth as all previous rules must be considered and excluded. 
Ada-WHIPS generates one rule for one instance from a pre-trained AdaBoost model. Caruana et al. [6] uses generalised additive models (GAM) allowing second order interaction (GA2M) to predict pneumonia risk and hospital readmission. GAMs inherently provide partial independence (PI) plots, giving insight into the global model behaviour, and excellent predictive results. Domain knowledge was required apriori to discretise several features and to determine which second order interactions to include. However, interpretation of the non-linear components remains a challenge. Our method is a completely different approach that provides an explanation for individual cases and requires no apriori domain expertise. Kästner et al. [49] integrates expert knowledge into a neural gas. Interpretability arises from the activation of the explicitly incorporated fuzzy rules. The outputs of this novel method includes scored rule conditions but the fuzzy rules must be introduced apriori, again in contrast to Ada-WHIPS that requires no apriori domain knowledge. Multi-Class adaBoost In this section, we describe multi-class AdaBoost, with which our method is tightly coupled. Boosting is a method for generating a strong classifier by sequentially combining weak, base classifiers. It is one of the most significant developments in Machine Learning [50, 51]. AdaBoost [52] was the first, widely used implementation of boosting and is still favoured for its accuracy, ease of deployment and fast training time [53–55]. It uses shallow decision trees as the base classifiers. On each iteration, the training sample is re-weighted such that the next decision tree focuses on examples that were previously misclassified, while previously generated classifiers remain unchanged (the details of this iterative re-weighting are not central to this research so we refer the interested reader to [52, 56]). AdaBoost also adaptively updates its base classifier weights based on their individual performance, which we discuss now in further detail. Two algorithms, Stagewise Additive Modeling using a Multi-class Exponential loss function (SAMME) and real-valued SAMME (SAMME.R) [56] have emerged as the standard [57] for extending the original AdaBoost algorithm from binary classification to multi-class problems. The following formulations are based on [56]. Let \(f : \mathcal {X} \longmapsto \mathcal {Y}\) be an unknown classification function that we would like to approximate, where \(\mathcal {X}\) is an \(\mathbb {R}^{d}\) input space and \(\mathcal {Y} = \{C_{1},\ \dots,\ C_{K} \}\) is the set of possible classes. Let X be an input data set and our multi-class AdaBoost model be g(X)≈f(X). To classify an instance x, the output of a SAMME model is the weighted majority vote of all the base classifiers. $$\begin{array}{*{20}l} g(\mathbf{x}) = C_{k},\ k &= {arg}\underset{k \in K}{{max}} \sum^{M}_{m=1} \alpha^{(m)} \cdot T^{(m)}(\mathbf{x}),\\ T^{(m)}(\mathbf{x}) &= [c_{1},\ \dots,\ c_{K}],\ \sum T^{(m)}(\mathbf{x}) = 1 \end{array} $$ where \([c_{1},\ \dots,\ c_{K}]\) is a one dimensional (1D) vector indicating the position of the output class and is the output of a single tree T(m) at iteration m. Within this 1D vector, ck=1, cj=0, j≠k indicates that Ck is the predicted class. The whole model \(g = \left \{\left \{T^{(1)},\ \dots,\ T^{(M)}\right \},\ \left \{ \alpha ^{(1)},\ \dots,\ \alpha ^{(M)} \right \} \right \}\) is the combination of a set of M base decision tree classifiers and a set of M classifier weights. 
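As a minimal illustration of the SAMME decision rule in Eq. (1), assuming the per-tree one-hot votes and the classifier weights have already been collected from a fitted ensemble, the weighted majority vote reduces to a weighted sum and an arg max:

```python
import numpy as np

def samme_predict(tree_votes, alphas):
    """Weighted majority vote of Eq. (1).

    tree_votes: (M, K) array of one-hot votes T^(m)(x), one row per base tree.
    alphas:     (M,) array of classifier weights alpha^(m).
    Returns the index k of the winning class C_k.
    """
    weighted = alphas[:, None] * tree_votes        # alpha^(m) * T^(m)(x)
    return int(np.argmax(weighted.sum(axis=0)))    # arg max_k of the summed votes

# Toy illustration: three trees, two classes
votes = np.array([[1, 0], [0, 1], [1, 0]])
alphas = np.array([0.7, 1.0, 0.5])
print(samme_predict(votes, alphas))                # class 0 wins: 0.7 + 0.5 > 1.0
```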
These weights are calculated during the training phase as: $$ \alpha^{(m)} = \log \frac{1 - {err}^{(m)}}{{err}^{(m)}} + \log(K - 1),\ 0 < {err}^{(m)} \leq 1 - \frac{1}{K} $$ where err(m) is the error rate at iteration m. To classify an instance x with SAMME.R, each base classifier returns a vector of the conditional probabilities that the class of x is Ck. This is the distribution of training instance weights in the terminal node of the decision path taken by x through each tree: $$ {\begin{aligned} T^{(m)}(\mathbf{x}) = [ \mathbb{P}_{T^{(m)}}(C_{1}|x),\ \dots,\ \mathbb{P}_{T^{(m)}}(C_{K}|x) ],\\ \sum T^{(m)}(\mathbf{x}) = 1,\ y \in \mathcal{Y} \end{aligned}} $$ and confidence weights are calculated at run time as: $$ {}\alpha^{(m)}_{k}|x = (K-1)\big(\log \mathbb{P}_{T^{(m)}}(C_{k}|x) - \frac{1}{K} \sum^{K}_{j=1} \log \mathbb{P}_{T^{(m)}}(C_{j}|x)\big). $$ The output of the whole model is the majority vote based on the additive contribution of these confidence weights per class: $$ g(\mathbf{x}) = C_{k},\ k = {arg}\underset{k}{{max}} \sum^{M}_{m=1} \alpha^{(m)}_{k}|x. $$ where \(g = \left \{T^{(1)},\ \dots,\ T^{(M)}\right \}\) (weights \(\alpha ^{(m)}_{k}\) evaluated at run time). Ada-WHIPS We now present Ada-WHIPS, our algorithm for generating a CR based explanation for the classification of an explanandum instance x by a previously trained AdaBoost model g. The algorithm begins by initialising a rule as an empty antecedent and the classification outcome g(x) as the consequent. Thus, the CR always agrees with the black box, by design. The algorithm then proceeds through the steps shown in Fig. 1, to identify a small set of antecedent terms, or logical conditions. These conditions must be true of x and must exert the most influence on the classification result. The source of these logical conditions is the ensemble of decision trees that make up g. The influence is determined by the classifier weights within the internals of g, which themselves are derived from the error rates (weights increase as errors decrease). Conceptual diagram of Ada-WHIPS Extract decision paths An AdaBoost model typically comprises 100's-1000's of shallow decision trees, potentially resulting in a very large search space. For a given x∈X, we can reduce this space logarithmically by considering only decision paths of that x in each decision tree and ignoring all other branches. The paths retain all the information about how g(x) was determined. A conceptual example of extracting the decision path is shown in Fig. 2. Here, \(\mathbf {x} = \{\dots,\ x_{i} = 0.1,\ x_{j} = 10,\ \dots \}\), where xi is the attribute value of the ith feature. The decision path starts from the root node Q1, following the binary split conditions down to a leaf node. The decision path contains node detail triples of the following form (j,ν,τ), where j is a feature index and \(\nu \in \mathbb {R}\) is the threshold for the inequality xj<ν and τ∈{0,1} is the binary truth of evaluating the inequality. Note that for this instance, all other nodes are irrelevant. For example, even though Q7 applies (xi<1.0), it cannot be reached by x because of the evaluation at Q5. Conceptual diagram of a decision path for one instance through one tree The search space can be further reduced by considering only those trees that agreed with the weighted majority vote. The rationale for this is based on the application of maximum margin theory to boosting [58]. 
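The path extraction and the restriction to agreeing trees can both be sketched as follows, assuming a scikit-learn AdaBoostClassifier with decision-tree base classifiers; this is an illustrative reading rather than the authors' reference implementation, and the margin argument that justifies keeping only the agreeing trees follows next in the text. Note that scikit-learn splits test x_j <= threshold, whereas the triples above are written with a strict inequality; for SAMME.R the stored estimator weights are uninformative and the KL-divergence surrogate described below would be used instead.

```python
import numpy as np

def path_snippets(tree, x):
    """Decision-path node triples (j, nu, tau) for one instance x
    through one fitted sklearn DecisionTreeClassifier."""
    t = tree.tree_
    node_ids = tree.decision_path(x.reshape(1, -1)).indices
    snippets = []
    for node in node_ids:
        if t.children_left[node] == -1:            # leaf node: no split to record
            continue
        j, nu = int(t.feature[node]), float(t.threshold[node])
        tau = bool(x[j] <= nu)                     # True means x went down the left branch
        snippets.append((j, nu, tau))
    return snippets

def agreeing_paths(ada, x):
    """Keep only paths from base trees that voted with the ensemble (the set T+)."""
    y_hat = ada.predict(x.reshape(1, -1))[0]
    kept = []
    for tree, alpha in zip(ada.estimators_, ada.estimator_weights_):
        if tree.predict(x.reshape(1, -1))[0] == y_hat:
            kept.append((alpha, path_snippets(tree, x)))
    return kept
```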
If x is an unseen instance, the margin in SAMME is: $$ \begin{aligned} {margin} &= \frac{a^{+} - a^{-}}{\sum^{T}_{m=1} \alpha^{(m)}},\ a^{+} = \sum^{|\mathcal{T}^{+}|}_{n=1} \alpha^{(n)},\ a^{-} = \frac{1}{K-1} \sum^{K}_{k = 1} \sum^{|\mathcal{T}^{-}|}_{u=1} \alpha^{(u)}, \\ \mathcal{T}^{+} &= \left\{T : g(\mathbf{x})= C_{k},\ k = {arg}\underset{k \in K}{{max}} T(\mathbf{x}) \right\},\\ \mathcal{T}^{-} &= \left\{T : g(\mathbf{x}) = C_{k},\ k \neq {arg}\underset{j \in K}{{max}} T(\mathbf{x}) \right\},\ T^{(m)}, \alpha^{(m)} \in g. \end{aligned} $$ The quantity a+, represents the sum of weights from the classifiers that voted for the majority class and a+>a− is always true for the majority class. The set \(\mathcal {T}^{+}\) are the base classifiers that voted in the majority and thus contributed their weight to a+, and \(\mathcal {T}^{-}\) are the remaining classifiers. \(\mathcal {T}^{+}\) completely determines the ensemble's output for a given instance because an ensemble classifier formed from the union of \(\mathcal {T}^{+}\) and any subset of \(\mathcal {T}^{-}\) would return the same classification with a larger margin because \(a^{-}_{*} < a^{-},\ \mathcal {T}^{-}_{*} \subset \mathcal {T}^{-}\). We found no margin formalisation for SAMME.R in the literature but we can define \(\mathcal {T}^{+} := \left \{ (T^{(m)}, \alpha ^{(m)}_{k}) : \alpha ^{(m)}_{k} \geq \alpha ^{(m)}_{j},\ k,j \in \{1,\ \dots,\ K\} \right \}\) and, as a convenience, we can substitute the α terms in Eq. (6) for the following Kullback-Leibler (KL) Divergence. The KL-Divergence (also known as "relative entropy") measures the information lost if a distribution P′ is used, instead of another distribution P to encode a random variable and is defined as: $$ D_{KL}(P \parallel P') = - \sum_{x \in \mathcal{X}} P(\mathbf{x}) \log \left(\frac{P(\mathbf{x})}{P'(\mathbf{x})} \right) $$ and we set P,P′ as the posterior class distribution of each T(m)(x) given in Eq. (3), and prior class distribution in the training data, respectively. The KL-Divergence will be larger for trees that classify with greater accuracy, relative to the prior distribution. The DKL emulates the classifier weights yielded by Eq. (2), which allows the rest of the algorithm to proceed in an identical manner for SAMME and SAMME.R. Redistribute adaptive weights To avoid a combinatorial search of all the available decision nodes, we sort them, prior to rule merging, according to their ability to separate the classes. To do this, we disaggregate the entire set of decision paths into individual decision nodes and redistribute the classifier weights onto the nodes. This procedure is illustrated in Algorithm 1. The contribution of each node is conditional on the previous nodes in the path and this sorting must take into account the node order in the originating tree. To do this, we apply Eq. (7) to determine the relative entropies at each point in a path. For each root node, we set P,P′ as the class distribution when applying that decision to the training data, and the prior class distribution respectively. For subsequent nodes, P is the class distribution after applying all previous decision nodes including the current node and P′ is the distribution up to but not including the current node. The relative entropy scores for nodes in a single path are normalised such that their total is equal to that of the classifier weight α(m). The scores are grouped and summed for nodes that appear in multiple paths. 
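Before the filtering and sorting step described next, the per-node weighting just outlined can be sketched as below. This is an illustrative reading of the procedure (the grouping of identical nodes across multiple paths is omitted), the training data are assumed to be numpy arrays, the small eps constant is an assumption to avoid log(0), and the sketch uses the standard non-negative form of the relative entropy.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """Relative entropy D_KL(P || Q) of two class-frequency vectors (cf. Eq. 7)."""
    p = np.asarray(p, float) + eps
    q = np.asarray(q, float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def class_dist(y, classes):
    """Class-frequency vector of a label array (all zeros if the array is empty)."""
    return np.array([(y == c).mean() if len(y) else 0.0 for c in classes])

def redistribute(alpha, snippets, X_train, y_train, classes):
    """Split one path's classifier weight alpha over its decision nodes
    according to their cumulative relative-entropy contribution."""
    prior = class_dist(y_train, classes)
    mask = np.ones(len(X_train), dtype=bool)
    prev, scores = prior, []
    for j, nu, tau in snippets:
        mask &= (X_train[:, j] <= nu) == tau       # apply this node's condition
        post = class_dist(y_train[mask], classes)
        scores.append(kl_divergence(post, prev))   # gain relative to the path so far
        prev = post
    scores = np.asarray(scores)
    if scores.sum() > 0:
        scores = alpha * scores / scores.sum()     # normalise: path total equals alpha^(m)
    return list(zip(snippets, scores))
```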
We filter the nodes, keeping only those with the largest weights (e.g. top 20%). Finally, all nodes from all paths are sorted by this score in descending order. Generate classification rule It is trivial to convert the node detail triples (j,ν,τ) into antecedent terms of a CR [59]. We use nodes and terms interchangeably from here on. The objective is to find a minimal set of terms that maximises both precision and coverage while mitigating the problem of over-fitting. Over-fitting can occur if we maximise precision as an objective function. We risk converging on "tautological" rules that provide no generalisation. This is because precision is trivially maximised by single instances. A tautological rule contains enough terms to identify a single instance uniquely. In a noisy data set, there could be many such local maxima. Therefore, we propose stability as a novel objective function, defined as: $$ \zeta(\mathbf{x}, g, \mathbf{Z}) = \frac{|\{ \mathbf{z} : g(\mathbf{z}) = g(\mathbf{x}),\ \mathbf{z} \in \mathbf{Z} \}|}{|\mathbf{Z}| + K} $$ where Z is the set of instances covered by the current rule and K the number of classes. The maximum achievable ζ is \(\frac {1}{K}\) for a single instance but approaches precision asymptotically as |Z|→∞. Stability, therefore acts as a brake on adding too many terms and over-fitting. We proceed with a breadth first search, iteratively adding terms to an initially empty rule. We always add the first term in the sorted list. Then, we work down the list, greedily adding further terms if they increase stability and discard them if they do not. The algorithm stops when a threshold stability (e.g. 0.95) is reached or the list is exhausted. These steps are illustrated in Algorithm 2. Generate counterfactuals Counterfactuals answer the question "what would have happened if... ?" They illustrate minimal changes in the inputs that would give different results. Some authors define counterfactual (sometimes called contrastive) explanations as a minimal change set on the inputs that would return a different result [5, 15, 39, 60]. However, discrete change-of-classification counterfactuals do not allow any uncertainty. We suggest a fuzzy definition is better suited here; namely, if precision (specificity) decreases beyond a user-defined tolerance. The expert can better exercise their judgement with this approach. For example, decreasing from high to low confidence in a CAD or risk score can lead to requests for additional tests, a less aggressive clinical intervention and so on. Since the definition of counterfactuals is a minimal change set, it is not necessary (nor even practical) to provide every possible input scenario. It suffices to show the effect of each point change and this is easy to do with CR simply by changing each of the rule terms, one at a time. Any point changes that do not decrease the precision beyond the user-defined tolerance represent a non-counterfactual change and can be removed from the rule. This procedure provides an intuitive pruning mechanism for removing redundant terms that might have been added during the greedy rule merge algorithm. We illustrate this concept visually in Fig. 3. Here a model with a complex decision boundary is trained on a synthetic data set (a Gaussian mixture model) which has two classes, shown as triangles and circles. The model classifies an explanandum instance x as a triangle. 
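Returning briefly to the rule-construction step, the stability score of Eq. (8) and the greedy merge it drives can be sketched as follows; this is a simplified reading of Algorithm 2, not the reference implementation, and it assumes the candidate terms have already been weighted and sorted as described above. The counterfactual illustration around the two-class synthetic example continues after this sketch.

```python
import numpy as np

def satisfies(x, rule):
    """True when instance x meets every (j, nu, tau) term of the rule."""
    return all((x[j] <= nu) == tau for j, nu, tau in rule)

def stability(rule, X, y, target, K):
    """Eq. (8): precision over the covered set Z, damped by the number of classes K."""
    covered = np.array([satisfies(x, rule) for x in X])
    agree = np.sum(covered & (y == target))
    return agree / (covered.sum() + K)

def greedy_merge(sorted_terms, X, y, target, K, threshold=0.95):
    """Add high-weight terms in order, keeping only those that raise stability."""
    rule = [sorted_terms[0]]                       # always take the top-ranked term
    best = stability(rule, X, y, target, K)
    for term in sorted_terms[1:]:
        if best >= threshold:                      # stop once the target stability is met
            break
        candidate = rule + [term]
        score = stability(candidate, X, y, target, K)
        if score > best:
            rule, best = candidate, score
    return rule, best
```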
The explanation is found - the following CR: \(\{\mathbf {z} : a \leq z_{1} \leq b,\ c \leq z_{2} \leq d,\ \mathbf {z} \in \mathcal {X}\} \implies \text {triangle}\). The counterfactual spaces are those spaces immediately adjacent to the four rule boundaries, derived by reversing one inequality at a time: $$ \begin{aligned} \big\{ &\{\mathbf{z} : z_{1} \leq a, c \leq z_{2} \leq d\},\ \{\mathbf{z} : b \leq z_{1}, c \leq z_{2} \leq d\},\\ &\{\mathbf{z} : a \leq z_{1} \leq b, z_{2} \leq c\},\ \{\mathbf{z} : a \leq z_{1} \leq b, d \leq z_{2}\},\ \mathbf{z} \in \mathcal{X} \big\} \end{aligned} $$ Counterfactual spaces - conceptual diagram Even though the triangle class is still predicted for parts of these spaces, the expected precision decreases drastically for a CR that is formed from any one of these counterfactual spaces for the antecedent and the same consequent. Thus, the original rule provides a crisp boundary where the maximal precision holds. The counterfactual rules communicate how much precision decreases when the rule is violated in any one dimension. We compared Ada-WHIPS in an experimental study with the state of the art. Three metrics are used to measure effectiveness, namely, coverage, precision and our new measure of stability. Efficiency, in terms of computing performance, is measured using the average time to generate an explanation. Comparisons are made against two other CR-based, per instance explanation methods: Anchors [36] and LORE [39]. Both methods are model-agnostic. Readers who are familiar with XAI research may question the omission of LIME [21] and SHAP [38], which are the most discussed per instance explanation methods. LIME and SHAP fall into a different class of methods, described as additive feature attribution methods (AFAM). AFAM are, effectively, local linear models (LM) whose coefficients relate the importance of various attributes to the original model's classification of the explanandum. There is no obvious way to apply the local LM for one instance to any other instances in order to calculate the quality measures such as precision and coverage, and comparison with CR-based methods is of limited value [36]. Fortunately, Anchors has been developed by the same research group that contributed LIME and uses the same synthetic sampling technique. Anchors can be viewed as a rule-based extension of LIME and its inclusion into this experimental study provides a useful comparison to best in class AFAM research. The experiments were conducted using Python 3.6.x running on a standalone Lenovo ThinkCentre with Intel i7-7600 CPU @ 3.4GHz and 32GB RAM using the Windows 10 operating system. We used nine data sets described in Table 6. These were sourced from the UCI Machine Learning repository [61] and represent specific disease diagnoses from clinical test results, except; the mental health surveys (Kaggle) which represents case studies in detecting mental health conditions from non-clinical online health community data; the hospital readmission data (Kaggle) which represents a large EHR; and understanding society [62] which is from the General Population Sample of the UK Household Longitudinal Study and used under license. We use the file from waves 2 and 3 where participants had a health visit carried out by a qualified nurse. At least one study [63] has shown that the biomarkers measured in the survey may be associated with the results from self-completion instruments measuring mental health. 
We run a classification task for the SF-12 Mental Component Summary (PCS) which has been discretised into nominal values "poor," "neutral" and "good.' Unfortunately, we discovered that LORE was not scalable after finalising our experimental design. The time cost of generating a synthetic distribution by means of a genetic algorithm rendered the method unusable on some of the data sets. The time per instance was on average twenty-five to thirty minutes for the hospital readmission data set and more than two hours per instance on the understanding society data set. The method generated system errors on the mental health survey '14 data set and was not runnable at all. We thoroughly examined the source code to look for opportunities to parallelise the operation, but the presence of a dynamically generated, non-serialisable distance function rendered this impossible. We have included the results where the method did run to completion. AdaBoost model training and testing Each data set was split into training and test sets (70%, 30%) by random sampling without stratification or other class imbalance correction. We trained AdaBoost models using ten-fold cross-validation of the training set on number of trees \({ntrees} \in \{200,\ 400,\ \dots,\ 1600 \}\) and maximum tree depth parameter maxdepth was always 4. We used the ntree setting that delivered the highest classification accuracy to train a final model on the whole training set. As mentioned in the section on related work, Anchors requires all features of the data to be categorical [36]. For our experiments, we generated a copy of each data set, and discretised them using Anchors' provided quartile binning function. A second AdaBoost model was generated from this discretised data set for Anchors to explain. Training and test splits used identical indices as the undiscretised versions. Each test set was then used as the pool of unseen instances to be classified by the AdaBoost model and explained by Ada-WHIPS, Anchors and LORE. Thus, there are three comparable explanations for each test instance. Generating explanations is done instance by instance, not batch wise as in classification. So, for time constraints, the number of instances (test units) was limited to either the whole test set or the first one thousand test instances, whichever was the smaller. For each explanation, all the remaining instances from the entire test set were used to assess the standard quality measures, precision and coverage, along with the novel quality measure, stability (8), which is more sensitive to over-fitting. This leave-one-out procedure ensures that test scores are not biased by leakage of information from the explanation-generating instance. The entire procedure is repeated for SAMME and SAMME.R AdaBoost models. We present the performance scores of the trained models in Table 7. It is important to note that the model training is part of the experimental setup and not to be taken as results per se. These training scores simply reflect the performance of AdaBoost; critiquing the performance of AdaBoost itself is not the objective of this work. We provide this level of detail only to demonstrate that the trained AdaBoost models reasonably approximate the underlying data sets and are very accurate. However, a true explanation by definition must stay faithful to the trained model regardless of whether the model is accurate or not (though a poor model would never be used in clinical practice). 
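The training protocol just described could be reproduced along the following lines with scikit-learn. This is a sketch only: the parameter name base_estimator follows the library versions contemporary with Python 3.6 (newer releases rename it to estimator), the random seed is arbitrary, and the bundled breast cancer data set is a stand-in for the data sets of Table 6.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)         # stand-in data set for illustration

# 70/30 split by simple random sampling, no stratification
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

search = GridSearchCV(
    AdaBoostClassifier(
        base_estimator=DecisionTreeClassifier(max_depth=4),   # maxdepth fixed at 4
        algorithm="SAMME"),                                   # or "SAMME.R"
    param_grid={"n_estimators": [200, 400, 600, 800, 1000, 1200, 1400, 1600]},
    cv=10, scoring="accuracy", n_jobs=-1)

search.fit(X_train, y_train)    # 10-fold CV over ntrees; refits the best model on all of X_train
model = search.best_estimator_
print(search.best_params_, model.score(X_test, y_test))
```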
We show generalisation accuracy scores and Cohen's κ for the two models (discretised and undiscretised data set variants). Cohen's κ is a useful measure in multi-class problems and class imbalanced data because this statistic corrects for chance agreement, which can be high in such cases. Values close to zero indicate a high degree of chance agreement. See Appendix for further details on Cohen's κ. Table 7 Final AdaBoost model scores Our approach for the experimental study is based on the simulated user study implemented in [36]. In that study, coverage represents the fraction of previously unseen instances a user could attempt to classify after seeing an explanation and thence how generally the rule applies to the whole population. Similarly, precision represents the fraction of those classifications that would be correct if a user applied the explanation correctly, indicating the specificity of the rule. Real users who were shown high coverage and precision rule-based explanations demonstrated significantly improved task completion scores over those who were shown AFAM explanations. To determine statistical significance, we report differences between precision, stability and coverage among the algorithms using non-parametric hypothesis tests. The reason for using these tests is that these measures are proportions; from the interval [0,1] and very right-skewed by design since each method tries to generate very high precision explanations. We use the paired samples Wilcoxon signed rank test where we have results for just Ada-WHIPS and Anchors. The null hypothesis of this test is that the medians of the two samples are equal and the alternative is that the medians are unequal. We use the Friedman test where we have results for all three methods. The Friedman test is a non-parametric equivalent to ANOVA and an extension of the rank sum test for multiple comparisons. The null hypothesis of this test is that there is no significant difference between the mean ranks of all the groups and the alternative is that at least two mean ranks are different. For all our three-way comparisons using the Friedman test, p-values were vanishingly small ≈0. So, in our report that follows, we proceed directly to the recommended pairwise, post-hoc comparison test with the Bonferroni correction (for three pairwise comparisons) proposed in [64]. It is sufficient for this study to demonstrate whether the top scoring algorithm was significantly greater than the second place algorithm on our quality measures of interest. The critical value for a two-tailed test with the bonferroni correction is \(\frac {0.025}{3} = 0.00833\). See Appendix for further details on the Friedman test applied here. The three-way post-hoc tests and the two-way comparisons are shown in separate tables to avoid drawing invalid comparisons. The mean rank, rather than the mean, is given in the tables, as this is the statistic compared between groups by the chosen tests. A significant result is indicated by ** and the winning algorithm is formatted in boldface only if the results are significant. We begin by presenting the four worked examples from the introduction. Then, we assess the aggregated quality measures for the test samples. For each measure, we present dot chart showing the mean score (with standard errors) aggregated over all the test instances. In several cases, the results are close, resulting in over-plotting that could lead to confusion as to whether two or three results are returned for a given data set. 
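Before moving to the results, the hypothesis-testing protocol described above can be sketched with scipy.stats, using the modified Friedman statistic and the Bonferroni-corrected post-hoc z comparison reproduced in the Appendix. This is an illustrative reading under the assumption that the per-instance quality scores are arranged with one row per explained instance and one column per algorithm.

```python
import numpy as np
from scipy import stats

def compare_two(scores_a, scores_b):
    """Paired Wilcoxon signed-rank test on per-instance scores (H0: equal medians)."""
    return stats.wilcoxon(scores_a, scores_b)

def compare_three(scores):
    """Modified Friedman test plus Bonferroni-corrected post-hoc z comparisons.

    scores: (N, k) array, one row per explained instance, one column per algorithm.
    """
    N, k = scores.shape
    chi2_f, _ = stats.friedmanchisquare(*(scores[:, j] for j in range(k)))
    f_f = (N - 1) * chi2_f / (N * (k - 1) - chi2_f)            # F_F from the Appendix
    p_f = stats.f.sf(f_f, k - 1, (k - 1) * (N - 1))

    ranks = np.apply_along_axis(stats.rankdata, 1, scores).mean(axis=0)
    se = np.sqrt(k * (k + 1) / (6 * N))
    post_hoc = {}
    for i in range(k):
        for j in range(i + 1, k):
            z = (ranks[i] - ranks[j]) / se
            post_hoc[(i, j)] = stats.norm.sf(abs(z))           # one tail; compare to 0.025/3
    return f_f, p_f, ranks, post_hoc
```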
To assist the reader in distinguishing the scores in these charts, a guide line has been added. However, each data set should still be viewed as a separate experiment.
Worked examples
Tables 8, 9, 10 and 11 present the worked examples from our introduction. Readers are reminded that the paths taken by a single instance in a pre-trained AdaBoost model are disaggregated into individual decision nodes. The most important of these nodes are recombined into a high quality rule for explaining the model's classification. Note that models had different numbers of iterations, and trees can grow to any depth up to the maximum of 4. It is also interesting to note a detail about the paths from trees that disagreed with the majority classification; that is, while they covered the instance (as they must), the boundary attributes are very distant from the instance attributes in the input space. We suggest that this is in keeping with the theoretical principles of AdaBoost – each iteration focuses on misclassified instances of the previous iteration, leading to a very different decision boundary in the next tree.
Table 8 Worked example for foetal heart abnormalities data set
Table 9 Worked example for non-clinical mental health assessment data set
Table 10 Worked example for automated 30-day hospital readmission risk assessment data set
Table 11 Worked example for thyroid condition data set
Coverage analysis
We present a visual analysis of the raw data (see Appendix for results tables) and tabulate the results of our statistical tests. A cursory inspection of the mean coverage charts shown in Figs. 4-5 indicates that Anchors has the lowest mean coverage over all the data sets but the comparison between Ada-WHIPS and LORE is less clear cut. The results of the hypothesis tests are given in Tables 12-13. The Wilcoxon tests showed that Ada-WHIPS always has significantly higher coverage than Anchors. Ada-WHIPS was the top algorithm in all but three of the post-hoc tests for three-way comparisons and in the top two alongside LORE with no significant difference for the remaining tests.
Fig. 4 Mean Coverage for SAMME model explanations. Guide lines are added to mitigate over-plotting
Fig. 5 Mean Coverage for SAMME.R model explanations. Guide lines are added to mitigate over-plotting
Table 12 Coverage: Top two by mean rank (mrnk) for three-way comparisons
Table 13 Coverage: Mean rank (mrnk) for two-way comparisons
Precision analysis
The mean precision charts (Figs. 6-7) show that LORE has the lowest precision in all but one of the data sets where LORE results are available. It is harder to see if there is a definitive lead between Ada-WHIPS and Anchors.
Fig. 6 Mean Precision SAMME. Guide lines are added to mitigate over-plotting
Fig. 7 Mean Precision SAMME.R. Guide lines are added to mitigate over-plotting
However, the complete picture – and the cost to Anchors of implementing a precision guarantee – can be seen in the distribution charts in Figs. 8-9. Here we see that a certain proportion of explanations have a precision of 0.0. The result shows that Anchors (and LORE to a lesser extent) is over-fitting. Some explanations are so specific that they only explain the explanandum and do not generalise to other instances in the test set. We present the proportion of 0.0 precision explanations that were returned by each algorithm in Table 14.
Fig. 8 Distributions of Precision SAMME
Fig. 9 Distributions of Precision SAMME.R
Table 14 Proportion of over-fitting, 0.0 precision explanations
The proportions vary from around 0.5%−28%.
There are important consequences for methods that suffer this level of over-fitting. The most important consequence is that 0.0 precision rules are so specific that they uniquely identify the explanandum but cover no other instance. A unique identifier does not provide any useful new information to explain the model's classification. For the person requiring the explanation, this outcome represents a failure of the system. The lowest failure rates (0.5%) may be tolerable, depending on the criticality or compliance requirements of the application. However, we do not foresee any circumstances where a failure rate at the upper end of this range (28%) would ever be acceptable. Secondly, such over-fitting is symptomatic of an algorithm that generates rules that are overly long; having too many terms in the antecedent to be easily interpretable. To show the link between over-fitting and rule length we present the rule length distribution in Fig 10. Fig. 10 Distributions of Rule Length. Note the y-axis is log10 scaled We present the results of the hypothesis tests in Tables 15-16. Clearly, Anchors dominates out of the three algorithms on a statistical test of median differences. However, we have shown that these results should be taken with caution. To begin with, Anchors required us to discretise the data as a preprocessing step, which resulted in alternative models that were less accurate classifiers. The difference was two or more percentage points in 7/9 for SAMME models and 5/9 for SAMME.R models. Moreover, Anchors has a long tail distribution of rule length, and sometimes a high proportion of critically over-fitting explanations. The tabulated means of precision do not show a clear difference between Ada-WHIPS and Anchors (see Appendix). Furthermore, precision (specificity) is in a trade-off with coverage (generality). Rules that are too specific only apply to a small fraction of other instances. Ada-WHIPS makes a very small trade-off (just a percentage point or two in most cases), and delivers much more generalisable rules that rarely, if ever, over-fit. This behaviour is the result of optimising the novel stability function (Eq. 8). Table 15 Precision: Top two by mean rank (mrnk) for three-way comparisons Table 16 Precision: Mean rank (mrnk) for two-way comparisons Stability analysis Stability can also be used as a quality measure in the XAI setting. A precision of 0.0 for an explanation on a held-out test set can be caused by sampling artefacts (i.e. the ground truth may be a non-zero probability of finding certain attributes and that they are simply under-represented in the data set). For this reason, it can be argued that a precision of 0.0 is a harsh penalty against the aggregate score. Yet, if the rule covers and is correct for just a single instance in the held out set, the precision will be 1.0. This circumstance creates a discontinuity and gives a huge advantage to undesirable, over-fitting explanations. Instead of precision, we can measure stability while including the explanandum in the held out set. This condition results in the formulation \(\frac {n + 1}{m + K}\) where n is the number of covered and correct instances, m is the number of covered instances and K is the number of classes. See Eq. (8). Thus, stability is very similar to the classical additive smoothing function (precision with Laplace correction [65]). The minimum/maximum are both \(\frac {1}{1 + K}\) for N=1 but approach 0/1 asymptotically as N→∞. We present the visual analysis of stability in Figs. 
11-12 and the results of the hypothesis tests in Tables 17-18. The post-hoc tests for the three-way comparisons show that Ada-WHIPS is the top or in the top two with no statistical difference in all except mental health survey '16 for the SAMME model. For the two-way comparisons, Ada-WHIPS has a significantly higher rank for hospital readmission (SAMME) and thyroid (SAMME.R) but lower for the remaining results.
Fig. 11 Mean Stability SAMME. Guide lines are added to mitigate over-plotting
Fig. 12 Mean Stability SAMME.R. Guide lines are added to mitigate over-plotting
Table 17 Stability: Top two by mean rank (mrnk) for three-way comparisons
Table 18 Stability: Mean rank (mrnk) for two-way comparisons
Efficiency analysis
Finally, we show the distribution of computation time per explanation in Fig. 13. A brief visual inspection shows that Ada-WHIPS and Anchors are roughly comparable for all data sets. The shortest run-times are fractions of a second and the longest are two to three minutes. LORE runs at several orders of magnitude longer than this. As we discussed in previous sections, it was prohibitive to run LORE for the data sets mental health survey '14, hospital readmission, thyroid and understanding society, with a single explanation taking over two hours to generate. We performed both static and dynamic analysis of the LORE source code and discovered that the bottleneck was in a non-parallelisable, genetic-algorithmic step.
Fig. 13 Distributions of Computation Time per Explanation. Note the y-axis is log10 scaled
Advantages of Ada-WHIPS
Our method improves on prior research in that it delivers explanations that have high mean coverage (15%-68%). Ada-WHIPS explanations generalise well while making only a very small trade-off to keep precision/specificity competitive (80%-99%). At the same time, Ada-WHIPS is guarded against over-fitting, while competing methods have the tendency to present critically over-fitting explanations in 0.5%-28% of cases. A critically over-fitting explanation is defined as an explanation that uniquely identifies the explanandum and covers no other instances. Ada-WHIPS does not make any assumptions about the underlying data distribution, while some competing methods require continuous features to be discretised prior to model training. This treatment of the data can result in a less accurate model, detracting from the main benefit of using AdaBoost at the outset. By design, Ada-WHIPS rules extract discrete, logical conditions from the base decision tree classifiers of the AdaBoost model. These logical conditions have an information-theoretic derivation and we speculate that this is what leads to Ada-WHIPS's favourable trade-off between precision and coverage. Ada-WHIPS is efficient. At its fastest, explanations are generated in fractions of seconds. On high dimensional data sets, we recorded times of up to three minutes per explanation. This is in line with competing methods and could still be considered real-time in the context of a medical consultation. As a minor contribution, we presented stability, a novel measure that is a regularised version of precision. It gives more informative results in the XAI setting as it penalises low coverage while correcting for sampling artefacts.
Limitations of Ada-WHIPS
By design, Ada-WHIPS is a companion method for AdaBoost models and the algorithm is not transferable to other models without adaptation. In contrast, model-agnostic methods, such as Anchors and LORE, can be applied to any black box model with few restrictions.
It is up to the end user to determine which approach best suits their specific scenario. Ada-WHIPS is an heuristic method for finding a short rule with high coverage and precision. Consequently, Ada-WHIPS will not provide a feature attribution value for each attribute with theoretical guarantees. If such values with guarantees are required, then the combinatorial calculation of Shapley Values is the recommended method. Experimental studies of XAI are challenging in terms of their time cost. Each explanation must be generated individually and, for all currently well-cited methods, generation of explanations is a much more time consuming process than the classification step. Furthermore, each explanation must be evaluated individually, rather than batchwise. For example, a trivial confusion matrix or AUC-ROC test is not appropriate. We calculated scores for each explanation and then used the means, medians and mean ranks to compare methods. Any experimental design for evaluating XAI must allow for this time cost, and also consider how instances used to generate explanations can be separated from instances used to evaluate explanations. Such designs may require three data partitions (training, explanation generating, explanation evaluating). We opted for a leave-one-out procedure, training a model on a training set then generating explanations one at a time and evaluating on the remaining instances from a held-out set. Conclusion & future work Our main contribution is the novel algorithm Ada-WHIPS for explaining the classification of AdaBoost models with simple classification rules. AdaBoost models are widely adopted as computer aided diagnostic tools and the non-clinical identification of sub-health and mental health conditions using unconventional data sources such as online health communities. As a minor contribution, we propose stability as a novel function for optimisation of explanation algorithms that explicitly avoids over-fitting and can be used as a quality metric in evaluations of XAI experimental research. Directions for future work include developing the method for Gradient Boosting Machines such as XGBoost that use decision trees as the base classifiers, and applying the proposed method on a variety of healthcare and medical data sets. Cohen's κ Cohen's κ is calculated as: $$ {{}\begin{aligned} \kappa = \frac{N \sum^{K}_{i=1} N_{ii} - \sum^{K}_{i=1} N_{i+} N_{+i} }{N^{2} - \sum^{K}_{i=1} N_{i+} N_{+i}},\ \left[\begin{array}{cccc} \mathrm{N}_{11} & \mathrm{N}_{12} & \dots & \mathrm{N}_{1K} \\ \mathrm{N}_{21} & \mathrm{N}_{22} & \dots & \mathrm{N}_{2K} \\ \vdots & \vdots & \ddots & \vdots \\ \mathrm{N}_{K1} & \mathrm{N}_{K2} & \dots & \mathrm{N}_{KK} \end{array}\right] \end{aligned}} $$ where K is the number of classes, N is the total number of instances, Nij is the number of instances in cell ij of the confusion matrix of true vs. predicted class counts, and Ni+,N+j are the ith row and jth column marginal totals, respectively. Table 19 Coverage of explanations of AdaBoost SAMME The original Friedman test produces an approximately χ2 distributed statistic, but this is known to be very conservative. Therefore, we use the modified F-test given in [64], because we have very large values for N, i.e. the count of instances in the test set. The null hypothesis of this test is that there is no significant difference between the mean ranks R of all the groups and the alternative is that at least two mean ranks are different. 
The original Friedman test produces an approximately χ2-distributed statistic, but this is known to be very conservative. Therefore, we use the modified F-test given in [64], because we have very large values for N, i.e., the count of instances in the test set. The null hypothesis of this test is that there is no significant difference between the mean ranks R of all the groups, and the alternative is that at least two mean ranks are different. The null hypothesis is rejected when F_F exceeds the critical value for an F-distributed random variable with degrees of freedom df1 = k − 1 and df2 = (k − 1)(N − 1), where k is the number of algorithms:
$$ F_{F} = \frac{(N - 1) \chi^{2}_{F}}{N(k-1) - \chi^{2}_{F}}, \quad \chi^{2}_{F} = \frac{12N}{k(k+1)}\left[ \sum^{k}_{j=1} R_{j}^{2} - \frac{k(k+1)^{2}}{4}\right] $$
Table 20 Coverage of explanations of AdaBoost SAMME.R
The recommended pairwise, post-hoc comparison test with the Bonferroni correction (for three pairwise comparisons) proposed in [64] is:
$$ z = \text{diff}_{ij} \bigg/ \sqrt{\frac{k(k+1)}{6N}}, \quad \text{diff}_{ij} = R_{i} - R_{j} $$
Table 21 Precision of explanations of AdaBoost SAMME
Table 22 Precision of explanations of AdaBoost SAMME.R
where R_i and R_j are the mean ranks of two algorithms and z is distributed as a standard normal under the null hypothesis that the pair of ranks are not significantly different. The significance threshold for a two-tailed test with the Bonferroni correction is \(\frac{0.025}{3} = 0.00833\).
Table 23 Stability of explanations of AdaBoost SAMME
Table 24 Stability of explanations of AdaBoost SAMME.R
The source code and data sets analysed during the current study are available in our repository: https://tinyurl.com/yxuhfh4e. https://tinyurl.com/qlyxzlv
Abbreviations
Ada-WHIPS: Adaptive-weighted high importance path snippets
AFAM: Additive feature attribution methods
BRL: Bayesian rule list
CAD: Computer aided diagnostics
CBR: Case-based reasoning
CR: Classification rule
CRL: Cascading rule list
DT: Decision tree(s)
EHR: Electronic health record(s)
GAM: Generalised additive model(s)
GA2M: Generalised additive model(s) with second order interactions
KL: Kullback-Leibler (divergence)
LIME: Local interpretable model-agnostic explanations
LORE: LOcal rule-based explanations
MDS: Multi-dimensional scaling
PI: Partial independence (plots)
RNN: Recurrent neural network
SAMME: Stagewise additive modeling using a multi-class exponential loss function
SAMME.R: Real-valued SAMME
SHAP: SHapley additive exPlanations
SVM: Support vector machine(s)
TSH: Thyroid stimulating hormone
WkNN: Weighted k-nearest neighbours
XAI: eXplainable artificial intelligence
References
El-Sappagh S, Alonso JM, Ali F, Ali A, Jang J-H, Kwak K-S. An ontology-based interpretable fuzzy decision support system for diabetes diagnosis. IEEE Access. 2018; 6:37371–94.
Mahdi MA, Al Janabi S. A Novel Software to Improve Healthcare Base on Predictive Analytics and Mobile Services for Cloud Data Centers. In: International Conference on Big Data and Networks Technologies. Leuven: Springer: 2019. p. 320–39.
Al-Janabi S, Patel A, Fatlawi H, Kalajdzic K, Al Shourbaji I. Empirical rapid and accurate prediction model for data mining tasks in cloud computing environments. In: International Congress on Technology, Communication and Knowledge (ICTCK). Mashhad: IEEE: 2014. p. 1–8.
Al-Janabi S, Mahdi MA. Evaluation prediction techniques to achievement an optimal biomedical analysis. Int J Grid Util Comput. 2019; 10(5):512–27.
Wachter S, Mittelstadt B, Russell C. Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR. Harv J Law Technol. 2017; 31(2). https://doi.org/10.2139/ssrn.3063289.
Caruana R, Lou Y, Gehrke J, Koch P, Sturm M, Elhadad N. Intelligible Models for HealthCare: Predicting Pneumonia Risk and Hospital 30-day Readmission. In: Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining - KDD '15. Sydney: ACM Press: 2015. p. 1721–30.
Kavakiotis I, Tsave O, Salifoglou A, Maglaveras N, Vlahavas I, Chouvarda I. Machine Learning and Data Mining Methods in Diabetes Research. Comput Struct Biotechnol J. 2017; 15:104–16. PubMed PubMed Central Article Google Scholar Jalali A, Pfeifer N. Interpretable per case weighted ensemble method for cancer associations. BMC Genomics. 2016; 17(1). https://doi.org/10.1186/s12864-016-2647-9. Yin Z, Sulieman LM, Malin BA. A systematic literature review of machine learning in online personal health data. J Am Med Informat Assoc. 2019; 26(6):561–76. Sun S, Zuo Z, Li GZ, Yang X. Subhealth state classification with AdaBoost learner. Int J Funct Informat Personalised Med. 2013; 4(2):167. Jovanovic M, Radovanovic S, Vukicevic M, Van Poucke S, Delibasic B. Building interpretable predictive models for pediatric hospital readmission using Tree-Lasso logistic regression. Artif Intell Med. 2016; 72:12–21. Turgeman L, May JH. A mixed-ensemble model for hospital readmission. Artif Intell Med. 2016; 72:72–82. Letham B, Rudin C, McCormick TH, Madigan D. Interpretable classifiers using rules and Bayesian analysis: Building a better stroke prediction model. Ann Appl Stat. 2015; 9(3):1350–71. Kourou K, Exarchos TP, Exarchos KP, Karamouzis MV, Fotiadis DI. Machine learning applications in cancer prognosis and prediction. Comput Struct Biotechnol J. 2015; 13:8–17. Subianto M, Siebes A. Understanding Discrete Classifiers with a Case Study in Gene Prediction. Omaha: IEEE: 2007. p. 661–6. Huysmans J, Baesens B, Vanthienen J. Using Rule Extraction to Improve the Comprehensibility of Predictive Models. SSRN Electron J. 2006. Accessed 16 Nov 2018. Pazzani MJ, Mani S, Shankle WR. Acceptance of Rules Generated by Machine Learning among Medical Experts. Methods Inf Med. 2001; 40(05):380–5. Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation). 2018. Pande V. Artificial Intelligence's 'Black Box' Is Nothing to Fear. The New York Times. 2019. Accessed 14 Aug 2019. Pedreschi D, Giannotti F, Guidotti R, Monreale A, Pappalardo L, Ruggieri S, Turini F. Open the Black Box Data-Driven Explanation of Black Box Decision Systems. 2018. arXiv:1806.09936 [cs]. Ribeiro MT, Singh S, Guestrin C. Why Should I Trust You?: Explaining the Predictions of Any Classifier. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery And Data Mining. San Francisco: ACM Press: 2016. p. 1135–44. Freund Y. An adaptive version of the boost by majority algorithm. In: Proceedings of the Twelfth Annual Conference on Computational Learning Theory - COLT '99. Santa Cruz: ACM Press: 1999. p. 102–13. Asgari S, Scalzo F, Kasprowicz M. Pattern Recognition in Medical Decision Support. BioMed Res Int. 2019; 2019:1–2. Rajendra Acharya U, Vidya KS, Ghista DN, Lim WJE, Molinari F, Sankaranarayanan M. Computer-aided diagnosis of diabetic subjects by heart rate variability signals using discrete wavelet transform method. Knowl-Based Syst. 2015; 81:56–64. Yoo I, Alafaireet P, Marinov M, Pena-Hernandez K, Gopidi R, Chang J-F, Hua L. Data Mining in Healthcare and Biomedicine: A Survey of the Literature. J Med Syst. 2012; 36(4):2431–48. Dolejsi M, Kybic J, Tuma S, Polovincak M. Reducing false positive responses in lung nodule detector system by asymmetric adaboost. 
In: 2008 5th IEEE International Symposium on Biomedical Imaging: From Nano to Macro. Paris: IEEE: 2008. p. 656–9. Shakeel PM, Tolba A, Al-Makhadmeh Z, Jaber MM. Automatic detection of lung cancer from biomedical data set using discrete AdaBoost optimized ensemble learning generalized neural networks. Neural Comput Appl. 2019. Rangini M, Jiji DGW. Identification of Alzheimer's Disease Using Adaboost Classifier. In: Proceedings of the International Conference on Applied Mathematics and Theoretical Computer Science: 2013. p. 229–34. Andrews R, Diederich J, Tickle AB. Survey and critique of techniques for extracting rules from trained artificial neural networks. Knowl-Based Syst. 1995; 8(6):373–89. Hara S, Hayashi K. Making Tree Ensembles Interpretable: A Bayesian Model Selection Approach. 2016. arXiv:1606.09066 [stat]. Adnan MN, Islam MZ. ForEx++: A New Framework for Knowledge Discovery from Decision Forests. Australas J Inf Syst. 2017; 21. Mashayekhi M, Gras R. Rule Extraction from Random Forest: the RF+HC Methods. In: Advances in Artificial Intelligence 2015. Lecture notes in computer science Artificial intelligence, vol. 9091. Halifax: Springer: 2015. p. 223–37. Deng H. Interpreting tree ensembles with intrees. Int J Data Sci Anal. 2014; 7(4):277–87. Friedman J, Popescu BE. Predictive Learning via Rule Ensembles. Ann Appl Stat. 2008; 2(3):916–54. Waitman LR, Fisher DH, King PH. Bootstrapping rule induction to achieve rule stability and reduction. J Intell Inf Syst. 2006; 27(1):49–77. Ribeiro MT, Singh S, Guestrin C. Anchors: High-Precision Model-Agnostic Explanations. In: AAAI. vol. 18. New Orleans: 2018. p. 1527–1535. Lipton ZC. The mythos of model interpretability: 2016. arXiv Preprint arXiv:1606.03490. Lundberg SM, Lee S-I. A Unified Approach to Interpreting Model Predictions. Adv Neural Inf Process Syst. 2017; 30:4768–77. Guidotti R, Monreale A, Ruggieri S, Pedreschi D, Turini F, Giannotti F. Local Rule-Based Explanations of Black Box Decision Systems. 2018. arXiv:1805.10820. Michal F. "Please, explain." Interpretability of black-box machine learning models. 2019. https://tinyurl.com/y5qruqgf. Accessed 19 April 2019. Fen H, Tan, Song K, Udell M, Sun Y, Zhang Y. Why should you trust my interpretation? Understanding uncertainty in LIME predictions. 2019. arXiv:1904.12991. Lundberg SM, Lee S-I. Consistent feature attribution for tree ensembles. Sydney: 2017. arXiv:1706.06060 [cs, Stat]. Adadi A, Berrada M. Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI). IEEE Access. 2018; 6:52138–60. Sabaas A. Interpreting Random Forests. 2014. http://blog.datadive.net/interpreting-random-forests/. Accessed 11 Oct 2017. Tjoa E, Guan C. A Survey on Explainable Artificial Intelligence (XAI): towards Medical XAI. 2019:21. arXiv preprint arXiv:1907.07374. Mencar C. Interpretability of Fuzzy Systems. In: Fuzzy Logic and Applications: 10th International Workshop. Genoa: Springer: 2013. p. 22–35. Lamy J-B, Sekar B, Guezennec G, Bouaud J, Séroussi B. Explainable artificial intelligence for breast cancer: A visual case-based reasoning approach. Artif Intell Med. 2019; 94:42–53. Kwon BC, Choi M-J, Kim JT, Choi E, Kim YB, Kwon S, Sun J, Choo J. RetainVis: Visual Analytics with Interpretable and Interactive Recurrent Neural Networks on Electronic Medical Records. IEEE Trans Vis Comput Graph. 2018; 25(1):255–309. Kästner M, Hermann W, Villmann T. Integration of Structural Expert Knowledge about Classes for Classification Using the Fuzzy Supervised Neural Gas. Comput Intell. 2012. 
Appel R, Fuchs T, Dollár P, Perona P. Quickly Boosting Decision Trees–Pruning Underachieving Features Early. In: Proceedings of the 30th International Conference on Machine Learning (ICML-13): 2013. p. 594–602. Friedman J, Hastie T, Tibshirani R. Additive Logistic Regression A Statistical View of Boosting. Ann Stat. 2000; 28(2):337–407. Freund Y, Schapire RE. A Decision-Theoretic Generalization of On-Line Learning and an Application to Boosting. J Comput Syst Sci. 1997; 55(1):119–39. Walker KW, Jiang Z. Application of adaptive boosting (AdaBoost) in demand-driven acquisition (DDA) prediction: A machine-learning approach. J Acad Librariansh. 2019; 45(3):203–12. Aravindh K, Moorthy S, Kumaresh R, Sekar K. A Novel Data Mining approach for Personal Health Assistance,. Int J Pure Appl Math. 2018; 119(15):415–26. Jaree T, Guangdong X, Yanchun Z, Fuchun H. Breast cancer survivability via AdaBoost algorithms. In: Proceedings of the Second Australasian Workshop on Health Data and Knowledge Management, vol. 80. Wollongong: Australian Computer Society: 2008. p. 55–64. Hastie T, Rosset S, Zhu J, Zou H. Multi-class AdaBoost. Stat Interface. 2009; 2(3):349–60. Pedregosa F, Varoquaux G, Gramfort A, Michel V, Thirion B, Grisel O, Blondel M, Prettenhofer P, Weiss R, Dubourg V, Vanderplas J, Passos A, Cournapeau D. Scikit-learn: Machine Learning in Python. J Mach Learn Res. 2011; 12:2825–30. Freund Y, Schapire RE. A Short Introduction to Boosting. J Japan Soc Artif Intell. 1999; 14(5):771–80. Quinlan JR. Generating Production Rules From Decision Trees. In: Proceedings of the Tenth International Joint Conference on Artificial Intelligence. Milan, Italy, August 23-28, 1987. Morgan Kaufmann: 1987. p. 304–307. http://ijcai.org/proceedings/1987-1. Dhurandhar A, Chen P-Y, Luss R, Tu C-C, Ting P, Shanmugam K, Das P. Explanations based on the Missing: Towards Contrastive Explanations with Pertinent Negatives. 2018. arXiv:1802.07623 [cs]. Dheeru D, Karra Taniskidou E. UCI Machine Learning Repository. Irvine: University of California, Irvine, School of Information and Computer Sciences; 2017. https://archive.ics.uci.edu/ml/datasets/. Accessed 31 Mar 2019. Understanding Society: Waves 2-3 Nurse Health Assessment, 2010-2012 [data Collection]. vol. 7251, 3rd edn: UK Data Service, University of Essex, Institute for Social and Economic Research and National Centre for Social Research; 2019. Davillas A, Benzeval M, Kumari M. Association of Adiposity and Mental Health Functioning across the Lifespan: Findings from Understanding Society (The UK Household Longitudinal Study). PLoS ONE. 2016;11(2). https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0148561. Accessed 18 Aug 2019. Demsar J. Statistical Comparisons of Classifiers over Multiple Data Sets. J Mach Learn Res. 2006; 7:1–30. Clark P, Boswell R. Rule induction with CN2: some recent improvements. Mach Learn. 1991; 482:151–63. Birmingham City University, Curzon Street, Birmingham, B5 5JU, UK Julian Hatwell, Mohamed Medhat Gaber & R. Muhammad Atif Azad Julian Hatwell Mohamed Medhat Gaber R. Muhammad Atif Azad Original concept was by JH and MMG. JH was the major contributor in developing the software, designing and executing the experiments, analysing the data and writing the manuscript. JH, MMG and RMAA read and approved the final manuscript. Correspondence to Julian Hatwell. 
Open Access: This article is licensed under a Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/); the Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.
Hatwell, J., Gaber, M.M. & Atif Azad, R.M. Ada-WHIPS: explaining AdaBoost classification with applications in the health sciences. BMC Med Inform Decis Mak 20, 250 (2020). https://doi.org/10.1186/s12911-020-01201-2
Keywords: AdaBoost, Black box problem, Interpretability
Efficient Legendre dual-Petrov-Galerkin methods for odd-order differential equations DCDS-B Home An SICR rumor spreading model in heterogeneous networks April 2020, 25(4): 1517-1541. doi: 10.3934/dcdsb.2019238 Critical and super-critical abstract parabolic equations Tomasz Dlotko 1, , Tongtong Liang 2, and Yejuan Wang 2, Institute of Mathematics, University of Silesia, Katowice, Poland School of Mathematics and Statistics, Gansu Key Laboratory of Applied Mathematics and Complex Systems, Lanzhou University, Lanzhou 730000, China Received December 2018 Published November 2019 Fund Project: This work was supported by NSF of China (Grants No. 41875084, 11571153), the Fundamental Research Funds for the Central Universities under Grant Nos. lzujbky-2018-ot03 and lzujbky-2018-it58 Our purpose is to formulate an abstract result, motivated by the recent paper [8], allowing to treat the solutions of critical and super-critical equations as limits of solutions to their regularizations. In both cases we are improving the viscosity, making it stronger, solving the obtained regularizations with the use of Dan Henry's technique, then passing to the limit in the improved viscosity term to get a solution of the limit problem. While in case of the critical problems we will just consider a 'bit higher' fractional power of the viscosity term, for super-critical problems we need to use a version of the 'vanishing viscosity technique' that comes back to the considerations of E. Hopf, O.A. Oleinik, P.D. Lax and J.-L. Lions from 1950th. In both cases, the key to that method are the uniform with respect to the parameter estimates of the approximating solutions. The abstract result is illustrated with the Navier-Stokes equation in space dimensions 2 to 4, and with the 2-D quasi-geostrophic equation. Various technical estimates related to that problems and their fractional generalizations are also presented in the paper. Keywords: Navier-Stokes equation, quasi-geostrophic equation, solvability, a priori estimates, fractional approximations. Mathematics Subject Classification: Primary: 35A25, 35Q30, 26A33. Citation: Tomasz Dlotko, Tongtong Liang, Yejuan Wang. Critical and super-critical abstract parabolic equations. Discrete & Continuous Dynamical Systems - B, 2020, 25 (4) : 1517-1541. doi: 10.3934/dcdsb.2019238 [1] R. A. Adams, Sobolev Spaces, Pure and Applied Mathematics, Vol. 65. Academic Press, New York-London, 1975. Google Scholar H. Amann, Linear and Quasilinear Parabolic Problems. Vol. I. Abstract Linear Theory, Monographs in Mathematics, 89. Birkhäuser Boston, Inc., Boston, MA, 1995. doi: 10.1007/978-3-0348-9221-6. Google Scholar J. M. Arrieta and A. N. Carvalho, Abstract parabolic problems with critical nonlinearities and applications to Navier-Stokes and heat equations, Trans. Amer. Math. Soc., 352 (2000), 285-310. doi: 10.2307/118154. Google Scholar [4] J. W. Cholewa and T. Dlotko, Global Attractors in Abstract Parabolic Problems, London Mathematical Society Lecture Note Series, 278. Cambridge University Press, Cambridge, 2000. doi: 10.1017/CBO9780511526404. Google Scholar J. W. Cholewa and T. Dlotko, Fractional Navier-Stokes equations, Discrete Contin. Dyn. Syst. Series B, 23 (2018), 2967-2988. doi: 10.3934/dcdsb.2017149. Google Scholar A. Córdoba and D. Córdoba, A maximum principle applied to quasi-geostrophic equations, Commun. Math. Phys., 249 (2004), 511-528. doi: 10.1007/s00220-004-1055-1. Google Scholar A. Córdoba and D. 
Córdoba, A pointwise estimate for fractionary derivatives with applications to partial differential equations, Proc. Natl. Acad. Sci. USA, 100 (2003), 15316-15317. doi: 10.1073/pnas.2036515100. Google Scholar T. Dlotko, Navier-Stokes equation and its fractional approximations, Appl. Math. Optim., 77 (2018), 99-128. doi: 10.1007/s00245-016-9368-y. Google Scholar T. Dlotko, M. B. Kania and C. Y. Sun, Quasi-geostrophic equation in $ \mathbb{R}^2$, J. Differential Equations, 259 (2015), 531-561. doi: 10.1016/j.jde.2015.02.022. Google Scholar S. S. Dragomir, Some Gronwall Type Inequalities and Applications, Nova Science Publishers, Inc., Hauppauge, NY, 2003. Google Scholar C. Foias, D. D. Holm and E. S. Titi, The Navier-Stokes-alpha model of fluid turbulence, Physica D, 152/153 (2001), 505-519. doi: 10.1016/S0167-2789(01)00191-9. Google Scholar Y. Giga, Analyticity of the semigroup generated by the Stokes operator in Lr spaces, Math. Z., 178 (1981), 297-329. doi: 10.1007/BF01214869. Google Scholar Y. Giga and T. Miyakawa, Solutions in Lr of the Navier-Stokes initial value problem, Arch. Rational Mech. Anal., 89 (1985), 267-281. doi: 10.1007/BF00276875. Google Scholar L. Grafakos and S. Oh, The Kato-Ponce inequality, Comm. Partial Differential Equations, 39 (2014), 1128-1157. doi: 10.1080/03605302.2013.822885. Google Scholar B. L. Guo, D. W. Huang, Q. X. Li and C. Y. Sun, Dynamics for a generalized incompressible Navier-Stokes equations in $ \mathbb{R}^2$, Adv. Nonlinear Stud., 16 (2016), 249-272. doi: 10.1515/ans-2015-5018. Google Scholar D. Henry, Geometric Theory of Semilinear Parabolic Equations, Lecture Notes in Mathematics, 840. Springer-Verlag, Berlin-New York, 1981. doi: 10.1007/BFb0089649. Google Scholar D. B. Henry, How to remember the Sobolev inequalities, Differential Equations, Lecture Notes in Math., Springer, Berlin-New York, 957 (1982), 97-109. doi: 10.1007/BFb0066235. Google Scholar N. Ju, Global solutions to the two dimensional quasi-geostrophic equation with critical or super-critical dissipation, Math. Ann., 334 (2006), 627-642. doi: 10.1007/s00208-005-0715-6. Google Scholar T. Kato, Strong Lp-solutions of the Navier-Stokes equation in $ \mathbb{R}^m$, with applications to weak solutions, Math. Z., 187 (1984), 471-480. doi: 10.1007/BF01174182. Google Scholar A. Kiselev, F. Nazarov and A. Volberg, Global well-posedness for the critical 2D dissipative quasi-geostrophic equation, Invent. Math., 167 (2007), 445-453. doi: 10.1007/s00222-006-0020-3. Google Scholar H. Komatsu, Fractional powers of operators, Pacific J. Math., 19 (1966), 285-346. doi: 10.2140/pjm.1966.19.285. Google Scholar S. G. Kre$\check{{\rm i}}$n, Linear Differential Equations in Banach Spaces, Translations of Mathematical Monographs, Vol. 29, American Mathematical Society, Providence, R.I., 1971. Google Scholar I. Lasiecka, Unified theory for abstract parabolic boundary problems-A semigroup approach, Appl. Math. Optim., 6 (1980), 287-333. doi: 10.1007/BF01442900. Google Scholar J. Leray, Sur le mouvement d'un fluide visqueux emplissant l'espace, Acta Math., 63 (1934), 193-248. doi: 10.1007/BF02547354. Google Scholar F. Linares and G. Ponce, Introduction to Nonlinear Dispersive Equations, Universitext. Springer, New York, 2009. Google Scholar J.-L. Lions, Quelques Méthodes de Résolution des Problèmes aux Limites non Linéaires, Dunod, Gauthier-Villars, Paris, 1969. Google Scholar C. Martínez Carracedo and M. Sanz Alix, The Theory of Fractional Powers of Operators, North-Holland Mathematics Studies, 187. 
North-Holland Publishing Co., Amsterdam, 2001. Google Scholar A. Rodriguez-Bernal, Existence, Uniqueness and Regularity of Solutions of Nonlinear Evolution Equations in Extended Scales of Hilbert Spaces, CDSNS91-61 Report, Georgia Institute of Technology, Atlanta, 1991. Google Scholar H. Sohr, The Navier-Stokes Equations: An Elementary Functional Analytic Approach, Modern Birkhäuser Classics. Birkhäuser/Springer Basel AG, Basel, 2001. doi: 10.1007/978-3-0348-8255-2. Google Scholar E. M. Stein, Singular Integrals and Differentiability Properties of Functions, Princeton Mathematical Series, No. 30 Princeton University Press, Princeton, N.J. 1970. Google Scholar W. A. Strauss, On continuity of functions with values in various Banach spaces, Pacific J. Math., 19 (1966), 543-551. doi: 10.2140/pjm.1966.19.543. Google Scholar R. Temam, Navier-Stokes Equations, Theory and Numerical Analysis, Studies in Mathematics and its Applications, Vol. 2. North-Holland Publishing Co., Amsterdam-New York-Oxford, 1977. doi: 10.1115/1.3424338. Google Scholar R. Temam, On the Euler equations of incompressible perfect fluids, J. Functional Analysis, 20 (1975), 32-43. doi: 10.1016/0022-1236(75)90052-X. Google Scholar H. Triebel, Interpolation Theory, Function Spaces, Differential Operators, VEB Deutscher Verlag der Wissenschaften, Berlin, 1978. doi: 10.1097/00005768-199805001-01817. Google Scholar W. von Wahl, The Equations of Navier-Stokes and Abstract Parabolic Equations, Vieweg, Braunschweig/Wiesbaden, 1985. doi: 10.1007/978-3-663-13911-9. Google Scholar W. von Wahl, Global solutions to evolution equations of parabolic type, Differential Equations in Banach Spaces, Lecture Notes in Math., Springer, Berlin, 1223 (1986), 254-266. doi: 10.1007/BFb0099198. Google Scholar Y. Wang and T. Liang, Mild solutions to the time fractional Navier-Stokes delay differential inclusions, Discrete Contin. Dyn. Syst. Series B, 24 (2019), 3713-3740. Google Scholar J. H. Wu, Dissipative quasi-geostrophic equations with Lp data, Electron. J. Differential Equations, (2001), 13 pp. doi: 10.1111/1468-0262.00185. Google Scholar A. Yagi, Abstract Parabolic Evolution Equations and Their Applications, Springer Monographs in Mathematics, Springer-Verlag, Berlin, 2010. doi: 10.1007/978-3-642-04631-5. Google Scholar
Fuel dynamics after reintroduced fire in an old-growth Sierra Nevada mixed-conifer forest
C. Alina Cansler (ORCID: orcid.org/0000-0002-2155-4438), Mark E. Swanson, Tucker J. Furniss, Andrew J. Larson & James A. Lutz
Fire Ecology volume 15, Article number: 16 (2019)
Surface fuel loadings are some of the most important factors contributing to fire intensity and fire spread. In old-growth forests where fire has been long excluded, surface fuel loadings can be high and can include woody debris ≥100 cm in diameter. We assessed surface fuel loadings in a long-unburned old-growth mixed-conifer forest in Yosemite National Park, California, USA, and assessed fuel consumption from a management-ignited fire set to control the progression of the 2013 Rim Fire. Specifically, we characterized the distribution and heterogeneity of pre-fire fuel loadings, both along transects and contained in duff mounds around large trees. We compared surface fuel consumption to that predicted by the standard First Order Fire Effects Model (FOFEM) based on pre-fire fuel loadings and fuel moistures. We also assessed the relationship between tree basal area (calculated for two different spatial neighborhood scales) and pre-fire fuel loadings. Pre-fire total surface fuel loading averaged 192 Mg ha−1 and was reduced by 79% by the fire to 41 Mg ha−1 immediately after fire. Most fuel components were reduced by 87% to 90% by the fire, with the exception of coarse woody debris (CWD), which was reduced by 60%. Litter depths in duff mounds were within 1 SD of plot means, but duff biomass for the largest trees (>150 cm diameter at breast height [DBH]) exceeded plot background levels. Overstory basal area generally had significant positive relationships with pre-fire fuel loadings of litter, duff, 1-hour, and 10-hour fuels, but the strength of the relationships differed between overstory components (live, dead, all [live and dead], species), and negative relationships were observed between live Pinus lambertiana Douglas basal area and CWD. FOFEM over-predicted rotten CWD consumption and under-predicted duff consumption. Surface fuel loadings were characterized by heterogeneity and the presence of large pieces. This heterogeneity likely contributed to differential fire behavior at small scales and heterogeneity in the post-fire environment. The reductions in fuel loadings at our research site were in line with ecological restoration objectives; thus, ecologically restorative burning during fire suppression is possible.
ABCO: Abies concolor
CWD: Coarse woody debris; downed wood greater than 7.62 cm (3 inches) diameter
DBH: Diameter at breast height (1.37 m)
FOFEM: First Order Fire Effects Model
PILA: Pinus lambertiana
YFDP: Yosemite Forest Dynamics Plot
Background
Efforts to restore fire-dependent forest types via the reintroduction of fire, through either prescribed fire or managed wildfire, require an understanding of forest fuels (Ryan et al. 2013). Fuel properties vary with forest overstory species (van Wagtendonk et al. 1998), forest age (Agee and Huff 1987; van Wagtendonk and Moore 2010), disturbance history (Jenkins et al. 2012), and ecosystem productivity (Keane et al. 2000). Understanding the variability of pre-fire fuel loadings at spatial scales similar to that of prescribed fires (i.e., 25 ha to 250 ha) can be important both to managers seeking to reintroduce fire (Collins et al. 2010) and also for calculating the likely effects of burning on tree survival (Lutes et al. 2009; Furniss et al. 2019; Hood et al. 2018). Similarly, measurements of surface fuel combustion and residual fuel loadings immediately after fire are required to evaluate the effectiveness of fire as a fuel reduction and ecosystem restoration treatment (Knapp et al. 2005; Varner et al. 2005); to understand fire impacts to understory plant and fungal species (Moore et al. 2006; Larson et al. 2016); and to estimate fire-caused ecosystem changes, such as direct fire effects on aboveground carbon storage, pyrogenic carbon emissions (Campbell et al.
2007), soil heating (Swezy and Agee 1991), and related changes to soil chemistry and structure (Certini 2005; Hille and Den Ouden 2005). Deposition from overstory trees is the main source of fuel in most forest ecosystems. Previous studies in relatively uniform stands have shown differences in fuel loadings between stands with different composition and structure (van Wagtendonk and Moore 2010), but other studies at fine spatial scales (0.05 ha) have shown that overstory structure has only weak relationships to fuel loading (Lydersen et al. 2015). While overstory structure determines the fluxes and distribution of fuel loadings at broad spatial scales, the tall trees and heterogeneous spatial patterns characteristic of dry, mixed-coniferous forests can obscure this relationship at sub-stand scales. Duff and litter accumulations around the bases of large-diameter trees, particular those in genus Pinus L., are a conspicuous and important source of fuel heterogeneity in post-fire exclusion conifer forests in western North America (Ryan and Frandsen 1991; Swezy and Agee 1991). These duff mounds, created by higher inputs of exfoliated bark and leaf litter adjacent to trees, have been implicated in basal injury to tree cambial tissues when fire returns following several missed fire cycles (Ryan and Frandsen 1991; Sackett and Haas 1998; Hood 2010; Nesmith et al. 2010; Garlough and Keyes 2011). Because these accumulations are aggregated around large trees, which are rare and unevenly distributed on the landscape (e.g., at densities of 5 to 11 ha−1; Lutz et al. 2009a, 2018a), they may not be captured by traditional fuel sampling methods (e.g., "Brown's transects;" Brown 1974) but may have a strong influence on local biomass consumption during fire and on tree mortality of those trees after fire (Swezy and Agee 1991; Hood et al. 2010). Duff mounds therefore merit consideration from restoration ecologists and managers working to restore frequent fire regimes to dry forests in western North America (Varner et al. 2005; Kolb et al. 2007) and other forest types with similar thick-barked trees that exfoliate bark, such as longleaf pine (Pinus palustris Mill., Varner et al. 2009). Forest and fire managers use measurements of surface fuel loadings and moistures with decision support computer programs such as the First Order Fire Effects Model (FOFEM) to develop prescribed fire prescriptions and monitoring plans (Lutes et al. 2012). FOFEM and related models are essential management support tools, but have not been widely evaluated or validated across the various forest types and burning conditions in which they are used (French et al. 2011; Prichard et al. 2014). FOFEM uses a combination of empirical models (for shrub and duff consumption), rule-based predictions (100% of herbaceous and litter are consumed), and processed-based models (for woody fuels) for computing fuel consumption, which may not match the full range of burning conditions and fuel profiles present across regions and forest types (Prichard et al. 2014). We used an intensively measured forest research site consisting of a fully stem-mapped (georeferenced tree locations) 25.6 ha study area and 2240 m of fuel transects to quantify effects of reintroduced fire after a 114-year fire-free period (Barth et al. 2015) on surface fuel loadings in an old-growth Sierra Nevada mixed-conifer forest. 
We quantified pre-fire and post-fire surface fuel loadings, with the expectation that there would be relatively high fuel consumption under the very dry burning conditions. Our study was guided by three research questions. How did fire change the pre-fire fuel loading, and how much new surface fuel was immediately deposited by the start of the next growing season after fire? How well did overstory structure and composition explain surface fuel variability? Could surface fuel consumption be well modeled by FOFEM in this fire-excluded old-growth site? Study site The Yosemite Forest Dynamics Plot (YFDP; 37.8° N, 119.8° W; Tuolumne County, central California, USA; Fig. 1) is located in an old-growth mixed-conifer forest in northwestern Yosemite National Park. The YFDP is a 25.6 ha research site (800 m × 320 m) within which all trees ≥1 cm diameter at breast height (DBH; 1.37 m above the pre-fire soil surface) and all snags ≥10 cm DBH have been measured at DBH, mapped, and identified to species (see Lutz et al. 2012 for detailed methods). Coniferous trees dominate the plant community, with sugar pine (Pinus lambertiana Douglas), California white fir (Abies concolor [Gordon & Glend.] Hildebr.), and incense-cedar (Calocedrus decurrens [Torr.] Florin) the most abundant. Nomenclature follows Flora of North America Editorial Committee (1993); additional descriptions of the plant community can be found in (Lutz et al. 2012, 2014, 2017). (a) The Yosemite Forest Dynamics Plot (YFDP) is located in the Sierra Nevada Ecoregion of California, USA, (b) within Yosemite National Park. (c) Locations of sample points and planar transects for pre-fire and post-fire fuel measurements in the YFDP, which were run within the plot, parallel to plot boundaies. Fuel transects were measured in June 2011, before the 2013 Rim Fire, and after the fire in June 2014. Locations of P. lambertiana ≥25 cm DBH are shown in gray points, with trees where litter and duff mounds were sampled in red. Pre-fire measurements of duff mounds were taken in August 2013, and the post-fire fuel measurements were taken in June 2014. One sampled-tree duff mound was located just outside the western edge of the plot, and is not shown. Green lines represent a LiDAR-derived 5 m contour interval Modeled mean annual precipitation at the YFDP is 1081 mm (PRISM Climate Group 2004). Mean January minimum and maximum temperatures are −1 °C and 8.9 °C, respectively, and mean July minimum and maximum temperatures are 13.0 °C and 24.5 °C (PRISM Climate Group 2004; Daly et al. 2008). There has been minimal anthropogenic disturbance in the form of tree harvesting, but fire exclusion has resulted in deviations from historical forest structure and composition. Prior to the period of fire exclusion, the fire return interval in the YFDP was estimated at 29.5 years (Barth et al. 2015), with the last widespread fire burning the plot in 1900 (Barth et al. 2015). The presence of large and old trees, especially of fire-scarred Pinus lambertiana, Abies concolor, and Calocedrus decurrens (Barth et al. 2015), throughout the plot suggests that high-severity fire had been rare and of limited spatial extent over the past 500 years. The YFDP was burned in 2013 in a backfire set by Yosemite National Park to check the advance of the Rim Fire (Stavros et al. 2016). The backfire was ignited at the Crane Flat lookout, approximately 1 km from the plot, on 31 August 2013, when conditions were hotter and drier than for normal prescribed burn windows (see Lutz et al. 
2017 for details), and was not managed subsequent to the initial ignition. After ignition, the fire backed downslope to the north during the night of 1 September 2013, burning approximately half of the YFDP, with the remainder burning during the morning of 2 September 2013. Fire behavior ranged from low-intensity backing fire to high-intensity surface fire with some torching (Lutz et al. 2017 for details of burning conditions). Within the YFDP, the overall fire severity as inferred from Landsat-derived metrics was low or moderate, with a few small patches remaining unburned or burning at high severity (Lutz et al. 2018b), which is characteristic of recent fires in Yosemite mixed-conifer forests (van Wagtendonk and Lutz 2007; Lutz et al. 2009b). Fuel measurements Surface fuel loadings were sampled two years before fire in June 2011, and one year after fire in June 2014, using modified Brown's transects (Brown 1974). During pre-fire measurement, we sampled 112 20-meter transects, totaling 2240 m; during post-fire measurement, we sampled 99 of the same transects (Fig. 1). Thirteen transects were sampled along the incorrect line during post-fire measurement, so they could not be directly paired with the pre-fire sample. Fine woody material was tallied by the three diameter classes that correspond to the 1-, 10-, and 100-hour fuel classes used in the National Fire-Danger Rating System (Fosberg 1970; 0 to 0.61 cm, 0.64 to 2.51 cm, and 2.54 to 7.59 cm, respectively). At the intersection with the transect, diameter, species, and decay class (five classes, from sound to decayed) of all pieces of coarse woody debris (fuels >7.62 cm diameter, i.e., 1000-hour fuels; hereafter CWD [Maser et al. 1979, Harmon and Sexton 1996]) were recorded individually. The transect length for the 1-hour and 10-hour fuels was 2 m, for the 100-hour fuels was 4 m at pre-fire measurement and 2 m at post-fire measurement, and 20 m for CWD. Major and minor axis diameters were recorded for CWD with oval or elliptical form, and diameters were estimated for partially buried pieces. Duff and litter depths were measured at 10 locations on each 20 m transect: every 2 meters, starting at 1 m and ending at 19 m. Duff and litter were measured to the nearest 0.1 cm. The litter layer is the layer of relatively undecomposed fibric material, or the Oi organic soil horizon (Banwell et al. 2013). Duff consists of the fermentation and humus horizons (Oe and Oa, respectively), and is composed of fermenting and decomposing organic material (Brown 1974; Banwell et al. 2013). If a stump or log was over one of the sample points, we moved the point of measure 30 cm (one foot) to the right of the transect, and we recorded zero litter and duff if a rock or live tree occupied the sampling point (Brown 1974). For the post-fire measurement, we made separate measurements of litter, woody fuels, and CWD present immediately after fire ("old"), and deposited since the fire ("new"). "Old" fuels were often charred or decomposed, while "new" fuels consisted of newly dead needles, primarily red-orange in color due to scorching, and wood that had fallen from trees or snags during fire. Uncertainty in differentiating old and new fuels was minimal for the litter, where new fuel was easily identifiable due to different color and structure. 
Uncertainty in this classification was greater for the CWD, since standing snags that burned then fell during the fire may have had similar characteristics (deep charring, more burning on the side closest to the ground) as woody debris that was on the ground at the time of the fire. For all fuel classes, we totaled the "old" and "new" post-fire measurements to calculate the 1-year post-fire totals. We measured duff mounds around eight P. lambertiana in August 2013 and again in June 2014. The trees were selected to represent the diameter range found in the YFDP, with an individual sampled in every 25 cm diameter class. For each tree, four 3-meter radial transects were placed at 90-degree intervals around the base of the tree (Additional file 1). Beginning 0.2 m from the bole of the tree, we measured litter and duff every 0.2 m. Woody fuel intercept counts (1-hour, 10-hour, and 100-hour fuels) or volumes (CWD), and litter and duff depths were converted to estimates of biomass and summarized at the transect scale (n = 99) for each fuel class. For 1-hour, 10-hour, and 100-hour fuels, planar intercept data were converted to weights using the equation (Brown 1974; van Wagtendonk 1996): $$ WEIGHT=\frac{(CONST)(n)(QMD)(SG)(SEC)}{LENGTH}, $$ where LENGTH is the length of the planar intercept transect, n is the number of pieces of each fuel, CONST is a fuel constant to convert to mass (we used one from Brown 1974), QMD is the quadratic mean diameter, SG is the specific gravity, and SEC is the secant of the non-horizontal angle. For the latter three variables, we used species-specific values developed for conifer species in the Sierra Nevada derived from observations in single-species stands (van Wagtendonk 1996). Because our study site contained a mix of species, we weighted fuel loading constants by the relative basal area of the three most common species in the YFDP: A. concolor (45.24%), P. lambertiana (45.04%), and C. decurrens (7.45%). We then multiplied the calculated weights by the correction factor for Sierra Nevada conifers (0.939) provided by van Wagtendonk et al. (1998). CWD diameter measurements were converted to volume following equation 7 in Harmon and Sexton (1996): $$ V=9.869\left(\frac{\sum d^{2}}{8L}\right), $$ where V is volume, d is diameter, and L is length. For elliptical pieces, we added the square of each diameter measurement and then took the square root to calculate a mean diameter before calculating volume. Volume estimates were converted to mass using species-specific and decay-class-specific wood densities from Harmon and Sexton (1996: Table 4) and from Harmon et al. (2008: Appendix 2), for conifers and hardwoods, respectively. CWD mass was summarized at the transect scale (n = 99). We converted depth measurements of litter and duff to mass estimates using species-specific equations for Sierra Nevada conifer species (van Wagtendonk et al. 1998). As for woody fuels, we weighted fuel-loading constants by the relative basal area of the three most common species. Litter and duff mass was summarized at the sample point scale (n = 990). We predicted the continuous depth of litter and duff from 0 m to 2 m for each intensively sampled duff-mound tree, based on 40 depth measurements from the four transects around each tree. We did not include data from 2 m to 3 m because fuel depths became similar to background levels in the plot at 2 m from the bole.
We determined the continuous depth of litter and duff from 0 m to 2 m by calculating the volume of litter and duff in concentric 0.2 m bands around each tree and then summed the section volumes from ≤1 m and ≤2 m from each tree. Duff mound volumes were converted to fuel loadings using the bulk densities for sugar pine litter and duff from van Wagtendonk et al. (1998). See Appendix: Supplemental Methods for a detailed description of the duff mound loading measurement method. Sample size and FOFEM evaluation We compared observed changes in fuel loadings at the transect scale to expected post-fire fuel loadings as modeled in the First Order Fire Effects Mode (FOFEM) version 6.4 (Lutes et al. 2012), a commonly used fire planning software program, which predicts fuel consumption given pre-fire fuel loadings, fuel model type, and fuel moistures. We use fuel model SAF 243 – Sierra Nevada Mixed Conifer, and subsamples of observed pre-fire fuel loadings. We sampled more transects than would typically be used for pre-fire fuel consumption modeling by managers. Therefore, we rarefied our dataset to sample sizes that would be more typical of the number of transects that managers would use in a FOFEM model run. We used two smaller sample sizes: 15 transects, and 45 transects. The former was a subjectively chosen value that a manager with limited funding or time might select, and we used it to illustrate uncertainty in fuel estimate and projects with a small sample size. The latter, 45 transects, was the minimum sample size that would allow, for our study site, estimations of loadings in each fuel class within 20% of the mean (Additional files 2, 3, 4, 5, 6, 7, and 8). To illustrate the range of observed and predicted means that would be obtained with 15 transects and 45 transects, we iterated this process 100 times for each of those two sample sizes. Each of those samples (200 total) was run in FOFEM using the batch-processing mode to calculate expected post-fire fuel loads. To identify the above minimum sample size, we selected random samples of different numbers of individual transects (woody fuels; n = 1 to 99 transects) or sample point locations (litter and duff; n = 1 to 990 points) from our populations of pre-fire and post-fire samples and calculated mean and standard deviations of mass for each sample. Those summary statistics were plotted against sample size. Stabilization (stationarity) of variance was assessed graphically with the use of two envelope widths: the estimated mean within ±20% of the mean and ±10% of the mean. This procedure was repeated 10 times for each fuel class. Additional FOFEM input parameters that were used for all transects included the percentage of CWD that was rotten (calculated per transect), and the distribution of CWD greatest mass in the >50.8 cm diameter class. The season of burning was set to summer. Fuel moistures for duff, 100-hour fuels, 1000-hour fuels, and soils were 20%, 6%, 10%, and 5%, respectively; these are the defaults for very dry conditions, which corresponded with the weather conditions before and while the plot burned (Lutz et al. 2017). Field measurements of fuel moistures were not available from the fire managers nor from the nearby weather stations, but these values are generally consistent with local studies of fuel moisture (Hille and Stephens 2005; Banwell et al. 2013). 
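Because several conversions are stacked in the preceding paragraphs (intercept counts to mass, intercept diameters to CWD volume, and duff-mound depths to volume), a compact sketch may help readers see how they fit together. The Python functions below are our illustration of those equations, not the authors' code, and all numeric constants in the example calls (QMD, specific gravity, secant, bulk density, depths) are hypothetical placeholders standing in for the species-weighted values of Brown (1974), van Wagtendonk (1996), and van Wagtendonk et al. (1998).

```python
import math

def fine_fuel_weight(n_pieces, length, const, qmd, sg, sec, correction=0.939):
    """Brown's (1974) planar-intercept equation:
    WEIGHT = CONST * n * QMD * SG * SEC / LENGTH,
    multiplied by the 0.939 correction factor for Sierra Nevada conifers
    (van Wagtendonk et al. 1998). Output units follow the constants supplied."""
    return correction * const * n_pieces * qmd * sg * sec / length

def cwd_volume(diameters_cm, length_m):
    """Line-intersect CWD volume, V = pi^2 * sum(d^2) / (8L)
    (Harmon and Sexton 1996, eq. 7); with d in cm and L in m this gives
    volume per unit area in m^3 ha^-1."""
    return 9.869 * sum(d ** 2 for d in diameters_cm) / (8.0 * length_m)

def duff_mound_volume(depths_m, band_width_m=0.2):
    """Duff-mound volume from mean depths in concentric 0.2 m bands around a tree.
    Band i spans radii [i*w, (i+1)*w]; its volume is the annulus area times the
    mean depth. (In practice the innermost band would start at the bole surface.)"""
    volume = 0.0
    for i, depth in enumerate(depths_m):
        r_in, r_out = i * band_width_m, (i + 1) * band_width_m
        volume += math.pi * (r_out ** 2 - r_in ** 2) * depth
    return volume

# Hypothetical example values, not measurements from the YFDP:
print(fine_fuel_weight(n_pieces=12, length=2.0, const=0.5, qmd=0.1, sg=0.48, sec=1.02))
print(cwd_volume(diameters_cm=[12.0, 35.5, 8.1], length_m=20.0))
duff_volume = duff_mound_volume(depths_m=[0.09, 0.07, 0.05, 0.04, 0.03])
print(duff_volume * 100.0)  # mass, using a placeholder bulk density of 100 kg m^-3
```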
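The transect rarefaction used to choose and test sample sizes for the FOFEM runs can be sketched in a few lines as well. The version below is a simplified, hypothetical stand-in: the paper assessed stabilization of the estimates graphically, whereas this function applies a plain numeric envelope test, and the simulated loadings are placeholders rather than YFDP measurements.

```python
import numpy as np

rng = np.random.default_rng(42)

def rarefy_means(loadings, n_transects, n_iterations=100):
    """Means of repeated random subsamples of n_transects transects (no replacement)."""
    loadings = np.asarray(loadings, dtype=float)
    return np.array([rng.choice(loadings, size=n_transects, replace=False).mean()
                     for _ in range(n_iterations)])

def min_sample_size(loadings, envelope=0.20, n_repeats=10):
    """Smallest n whose repeated subsample means all stay within +/- envelope of the
    full-sample mean (a numeric stand-in for the paper's graphical assessment)."""
    loadings = np.asarray(loadings, dtype=float)
    target = loadings.mean()
    for n in range(1, len(loadings) + 1):
        means = rarefy_means(loadings, n, n_iterations=n_repeats)
        if np.all(np.abs(means - target) <= envelope * target):
            return n
    return len(loadings)

# Simulated, right-skewed transect loadings (Mg ha^-1) as placeholders:
cwd_loadings = rng.lognormal(mean=4.0, sigma=0.8, size=99)
print(rarefy_means(cwd_loadings, n_transects=15)[:5].round(1))
print(min_sample_size(cwd_loadings, envelope=0.20))
```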
Influence of forest structure and composition on pre-fire surface fuel loadings We used linear regression to model total pre-fire fuel loading as a function of the overstory structure (basal area) and composition in the local tree neighborhood. Overstory variables (Table 1) were sampled in 6.91-meter-radius and 17.84-meter-radius circles (0.015 ha and 0.1 ha neighborhoods, respectively) around each sampling point (i.e., at each odd number on each transect for litter and duff; at 1 m on the transect for 1-hour and 10-hour fuels; at 2 m on the transect for 100-hour fuels; and at 15 m on the transect for CWD). We modeled the mass of litter and duff in P. lambertiana duff mounds within 1 m and 2 m of trees as a function of tree DBH using a linear model. All statistical analyses were conducted in the R, including package nlme version 3.1-131.1 (R Core Team 2017: Pinheiro et al. 2018). Table 1 Fuel classes, forest structure, and composition metrics measured in long-unburned Sierra mixed conifer forests in June 2011 at the Yosemite Forest Dynamics Plot, Yosemite National Park, California, USA. We modeled local fuel classes (response variable) based on forest structure and composition metrics (predictor variables) calculated at two scales: 0.015 ha and 0.1 ha Pre-fire total surface fuel loadings averaged 192 Mg ha−1 (range: 67.5 to 617.07 Mg ha−1) and were reduced by an average of 79% by the fire to 41 Mg ha−1 (range: 0.65 to 336.74 Mg ha−1) immediately after the fire (Additional file 9). Most fuel components were reduced by 87% to 90% by the fire; CWD was only reduced by 60% (Additional file 9; Fig. 2). Small woody fuels contributed negligible amounts (0.3% to 1.8%) to the total surface fuel loadings, during both pre-fire and post-fire measurements. Litter and duff made up a substantial amount of the fuel (12% and 46% before fire and 14% and 19% after fire, respectively). CWD was the other substantial source of fuel (38% and 64% for pre-fire and post-fire measurements, respectively). Surface fuel loading in the Yosemite Forest Dynamics Plot, Yosemite National Park, California, USA, in 2011, before the 2013 Rim Fire, and in 2014. Gray box plots within violin plots show median (black bar), interquartile range (25% quartile and 75% quantile; gray box), 95% confidence intervals (black lines within colored areas). Width of violin plots is based on the probability density (colored shaded areas) for the range of observed observations (y-axis). Diameter classes of woody fuels are: 1-hour fuels = 0.63 cm, 10-hour fuels = 0.64 to 2.53 cm, 100-hour = 2.54 to 7.61, CWD ≥7.62 cm. Surface fuels (bottom right) are the combined total of all fuels. Post-fire old = material that persisted as ground fuels through the 2013 fire event; Post-fire new = material that became ground fuel following the fire event; Post-fire total = the sum of the pre-fire new and pre-fire old. Species-specific decay class bulk densities were used to convert estimated volumes to per-hectare biomass estimates. Woody fuel and total fuel data were summarized at the transect scale (n = 99), and litter and duff were summarized at the point sample locations (n = 990) before plotting New inputs after fire increased the total fuel loading from 12 Mg ha−1 to 53 Mg ha−1 by the start of the first growing season after the fire, or from 21% to 28% of the pre-fire fuel loading (Additional file 9; Fig. 2). The largest new inputs were from CWD (4.9 Mg ha−1) and litter (also 4.9 Mg ha−1). 
One-hour fuel loadings experienced a substantial input via post-fire deposition (0.25 Mg ha−1), representing a recovery of 41% of the pre-fire fuel load. Fire immediately reduced the total number of CWD pieces observed along the 2240 m of transects from 426 to 131 (Table 2). The biomass of C. decurrens CWD increased after fire (from 10.5 Mg ha−1 to 38.9 Mg ha−1), due to either new deposition onto transects between 2011 and the fire in 2013, or from snags felled by the fire. Large pieces (>62 cm) of P. lambertiana persisted through the fire, while large pieces of A. concolor that were present prior to the fire were largely consumed. Post-fire additions tended to be smaller in diameter (Table 2) and were dominated by P. lambertiana (55% of occurrences, 21% of mass; Table 2). P. lambertiana and C. decurrens accounted for most of the larger-diameter post-fire additions (Table 2), while A. concolor post-fire additions tended to be smaller in diameter (Table 2). Table 2 Summary of coarse woody debris (>7.62 cm intercepted diameter; ≥1000-hour time lag class) in the Yosemite Forest Dynamics Plot, Yosemite National Park, California, USA. Pre-fire measurements were in June 2011, the site burned in the Rim Fire in September 2013, and post-fire measurements were in June 2014 There was a positive relationship between DBH and litter and duff fuel loadings ≤1 m from each tree and ≤2 m from P. lambertiana stems (Fig. 3; Additional files 10, 11, and 12). Litter depth in duff mounds were within 1 SD of plot means (based on fuel transects), but duff depths for our two largest trees (157 cm DBH and 176 cm DBH) exceeded plot background levels (Fig. 3). The amount of litter in a 2 m radius band around the boles of P. lambertiana trees, when converted to landscape scale using the pre-fire density of similar trees in the YFDP, ranged from 4.2 Mg ha−1 (smallest diameter; 26 cm DBH) to 14.4 Mg ha−1 (157 cm DBH), and the volume of duff from 46.8 Mg ha−1 to 167.3 Mg ha−1. After fire, surface fuel consumption was not influenced by the depth or the composition of pre-fire surface fuels because the moderate-intensity surface fire that burned the YFDP consumed almost all of the litter and duff in the P. lambertiana duff mounds (Additional file 13). Scatter plot of the fuel loading of litter and duff ≤1 m from trees (left) and ≤2 m from 830 trees (right), in the Yosemite Forest Dynamics Plot, Yosemite National Park, California, USA, in 2011, before the 2013 Rim Fire. Regression lines are for linear regression (see Additional files 11 and 12 for coefficients). Solid lines show pre-fire fuel loadings, and dashed lines show post-fire total fuel loading. Gray shading shows 95% confidence intervals Influence of overstory trees on pre-fire fuel loading Most overstory variables had significant positive relationships with litter and duff fuel loadings at the 0.015 ha scale (Fig. 4), and there were also weak positive significant relationships between live A. concolor basal area and 1-hour and 10-hour fuel loadings at the 0.015 ha scale (Fig. 4), but these relationships explained very little of the variance. Interestingly, live P. lambertiana basal area had a significant negative relationship with CWD at the 0.015 ha scale, and a negative relationship at the 0.1 ha scale. There was also a significant positive relationships between both dead P. lambertiana basal area, and live A. concolor basal area and CWD at the 0.1 ha scale (Fig. 4). 
Nevertheless, even when regression model parameters were significant, the relationships explained only a small part of the overall variance (0.004 < r2 < 0.096; see scatter plots and regressions in Additional files 14, 15, 16, and 17). (a) Heat map of regression coefficients (β) when basal area (BA) of each overstory component at the 0.015 ha scale (a 6.91 m radius circle, centered on the sample point) predicts fuel loadings of each fuel component, and (b) the same at the 0.1 ha scale (a 17.84 m radius circle, centered on the sample point). Tree basal area was calculated from a complete census of all live stems ≥1 cm diameter at 1.37 m height, and all dead stems ≥10 cm diameter and ≥2.0 m tall. Regression slopes and P-values are shown as text. In cases for which the regression was not significant, the background color is set to white. Scatter plots and significant regressions are plotted in Additional files 14, 15, 16, and 17. ABCO: Abies concolor; PILA: Pinus lambertiana. Sample sizes are n = 112 for woody fuels and total fuels, and n = 1120 for litter and duff. Regression models were for 2011 pre-fire fuel loadings at the Yosemite Forest Dynamics Plot in Yosemite, California, USA. FOFEM evaluation Pre-fire fuel loading was highly heterogeneous throughout the YFDP. Characterization of pre-fire fuel loadings to within 10% of the plot mean required 26 transects for litter, 19 for duff, and 34 for 1-hour and 10-hour fuels. Characterization of 100-hour fuels and CWD to within 20% of the plot mean required 40 and 44 Brown's transects, respectively. Total surface fuel loadings in the YFDP could be estimated to within 10% with 23 Brown's transects. Based on these sample-size results, we subsampled our dataset with 100 samples of 45 transects in order to meet minimum sample-size needs for CWD. FOFEM predicted complete consumption of 1-hour, 10-hour, and 100-hour woody fuels, and of litter, but we observed residual fuels in all of those classes (Fig. 5). FOFEM under-predicted CWD consumption, and this was driven by under-prediction in the rotten CWD class (Fig. 5). FOFEM also under-predicted duff consumption, predicting approximately half as much duff consumption as was observed (Fig. 5). The impacts of the high heterogeneity in fuel loadings were evident in the differences in the range of values that could be obtained when taking the mean of 15 versus 45 fuel transects: anomalously high and low values (particularly high CWD values) were evident when only 15 transects were sampled (Fig. 5). Mean post-fire surface fuel loading for each fuel class as observed in the field (dark gray) and predicted (light gray) by FOFEM for the Yosemite Forest Dynamics Plot, Yosemite National Park, California, USA. Observed post-fire fuel measurements were made in June 2014. We calculated FOFEM inputs and outputs using the sampling procedures typically used by managers: multiple transects were sampled (15 in the left column, 45 in the right column), and a mean was calculated from those sampled transects. The pre-fire mean was then entered into FOFEM, which projected the post-fire fuel loadings for each fuel class. Box plots within violin plots show the median (black bar), interquartile range (25th to 75th percentiles; small box), and 95% confidence intervals (vertical lines). Width of violin plots is based on the probability density (shaded areas) for the range of observed values (y-axis). Diameter classes of woody fuels are: 1-hour fuels ≤0.63 cm, 10-hour fuels = 0.64 to 2.53 cm, 100-hour fuels = 2.54 to 7.61 cm, and CWD ≥7.62 cm.
CWD consumption in FOFEM is modeled separately for the sound (CWDS) and rotten (CWDR) classes The pre-fire fuel loadings in our study site, which had not been burned since 1900 (Barth et al. 2015) and had not been previously logged, were similar to those of Lydersen et al. (2015), who characterized fuel loadings in a similar fire-excluded Sierra Nevada mixed-conifer forest that had been selectively logged in the 1920s (compare our Fig. 2 with their Figure 2). We observed much higher loads of CWD in our study site than Lydersen et al. (2015), likely reflecting the presence of very large trees at our unharvested site (Lutz et al. 2012), and the deposition of their branches and boles on the forest floor. Most of the observations in Lydersen et al. (2015) had higher ranges of values, likely reflecting a greater length of sample transects (5760 m of transects versus our 2240 m), but we observed higher loadings of duff, and greater variation in duff fuel loading. This difference may be due to the presence of very large trees and their associated duff mounds at our study site: large sugar pines influenced surface fuel abundance at a very local scale (<2 m) before the fire. Overstory trees influenced the surface fuel loadings, and this relationship was dependent on spatial scale. Nevertheless, the amount of variance explained by these relationships was small. In other words, overstory structure, although associated with fuel loading of some fuel classes, could not be used to accurately predict understory fuel loadings. At both the 0.015 ha and 0.1 ha scale, overstory trees had significant positive relationships with the litter and duff classes, consistent with the results of the duff mound analyses. We and Lydersen et al. (2015) both found a relationships between Abies basal area and loading in the 1- to 100-hour woody fuel classes, which may be a result of deposition from the fine branching structure of Abies, but our observed relationship was very weak. Similarly, the negative association of CWD with live P. lambertiana and the positive association with dead P. lambertiana likely derive from its canopy architecture: standing dead P. lambertiana often shed large branches that are in the CWD, 100-hour, and 10-hour classes, but these branches are infrequently shed while the trees are still alive. At the 0.1 ha scale, both live and dead components of the overstory influenced CWD accumulation. This more pronounced relationship at the larger spatial scale may be reflective of the nature of CWD accumulation—CWD is created when a large tree falls (or the top breaks off), and this CWD typically lands farther than 6.91 m away from the rooting location. Accurate characterization of pre-fire fuel loading is crucial to the reintroduction of fire to landscapes where it has been long excluded; reducing uncertainties in fuel loading (and therefore in burning characteristics) can give managers a higher level of confidence that a fire will have the calculated behavior. These results confirm the well-known problems of heterogeneity of surface fuels (Sikkink and Keane 2008) and the difficulty in understanding the effects of heavy fuels (Hyde et al. 2011). Pre-fire fuel loadings at the 20 m transect scale ranged over an order of magnitude (68.5 Mg ha−1 to 617.1 Mg ha−1), and immediate post-fire fuel loadings varied over two orders of magnitude (0.7 Mg ha−1 to 336.7 Mg ha−1). 
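The transect counts reported above (for example, 23 transects to estimate total surface fuels to within 10% of the plot mean, and 44 for CWD to within 20%) come from a resampling assessment of how subset means behave as sample size grows. The following R sketch illustrates that idea with placeholder data and a tolerance-based summary; it is not the authors' code, which used a graphical stationarity check repeated ten times per fuel class.

```r
# Illustrative only: how the precision of the plot-level mean depends on the
# number of Brown's transects sampled. 'transect_total' stands in for the 99
# observed per-transect total fuel loads (Mg/ha).
set.seed(42)
transect_total <- rlnorm(99, meanlog = 5.1, sdlog = 0.45)
plot_mean <- mean(transect_total)

# Proportion of random subsets of size n whose mean falls within +/- tol of
# the full-plot mean.
prop_within <- function(n, tol = 0.10, reps = 1000) {
  sub_means <- replicate(reps, mean(sample(transect_total, n)))
  mean(abs(sub_means - plot_mean) / plot_mean <= tol)
}

sapply(c(15, 23, 45), prop_within)              # +/-10% envelope
sapply(c(15, 23, 45), prop_within, tol = 0.20)  # +/-20% envelope
```

Run per fuel class on the real transect data, this kind of check shows why a 15-transect mean can sit far from the plot mean, which is the behavior visible in the 15-transect columns of Fig. 5.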
In the YFDP, 23 Brown's transects were required to estimate surface fuel loadings to within 10% of the plot mean, and 44 were needed to estimate every fuel class within 20% of the plot mean. If only a very small number of transects were to be sampled (e.g., 15), mean observed fuel loads could be well outside the interquartile range of real values, and model predictions of fuel consumption and residual fuel (Fig. 5), as well as fire behavior and emissions, would be inaccurate. We recommend increasing the length of small woody fuel transects, particularly the 100-hour fuels transect, to the full 20 m transect length in future sampling efforts to capture heterogeneity in woody fuels more efficiently and decrease the number of transects with zero observations of woody fuel classes. The Rim Fire reduced most classes of surface fuels by roughly an order of magnitude, even following three to four missed mean fire return intervals. Stephens and Moghaddas (2005b), working in a similar forest type, found that closely managed prescribed fire and thinning plus prescribed fire treatments substantially reduced surface fuels. At the YFDP, reductions in these woody fuels were achieved even though fire was ignited about 1 km away and allowed to spread naturally through the study site. Furthermore, small unburned areas persisted through the fire (Blomdahl et al. 2019), which was possibly counterbalanced by higher fuel consumption in burned areas due to very low fuel moistures. Litter was reduced from 63.9 Mg ha−1 to 3.22 Mg ha−1, similar to the 8.69 Mg ha−1 reported by Stephens (2004) for a similar frequent-fire reference ecosystem. Duff was reduced dramatically, potentially changing nutrient dynamics (Stephens and Moghaddas 2005a; Moghaddas and Stephens 2007). Reference frequent-fire ecosystems have very little duff (Stephens 2004) due to frequent consumption by fire, yet still maintain high levels of aboveground biomass. Kauffman and Martin (1989), Hille and Stephens (2005), and Knapp et al. (2005) likewise describe reductions in fuel loading of litter, duff, and woody fuels in Sierra Nevada mixed-conifer forest types, with the additional documentation of lower proportional reduction in spring burns versus fall burns. We expect that prescribed fire or wildfire that occurs earlier or later in the year would result in less complete consumption of fine fuels and forest floor layers. Our findings, therefore, indicated that lightly managed wildfire can achieve fuel reductions in support of ecosystem management and restoration objectives, even where mechanical fuel reduction activities are not possible due to access or regulatory limitations. Moreover, management-ignited backburns during a wildfire can achieve large reductions in surface fuel even while overall fire severity remains low to moderate. Thus, ecologically restorative burning during fire suppression is possible even under extremely dry fuel moistures, and could be an objective for fire managers. Total woody debris biomass was reduced by approximately half, with some types of woody debris experiencing substantial reductions. Woody debris of A. concolor experienced a 9.2-fold reduction in biomass, greater than that experienced by the other conifers, likely due to smaller particle size (as indicated by mean intercepted diameter), thin bark, and more decayed wood, reflecting more rapid decay rates (Harmon and Sexton 1996). 
Woody debris unidentifiable to species, which largely consisted of material in decay classes 3 to 5 (moderately to well decayed), experienced a 3.5-fold reduction in biomass. The low bulk density of this material, its tendency to be buried to some degree in duff and litter layers, and the lower energy needed to ignite rotten logs (Albini et al. 1995; Albini and Reinhardt 1997) likely all contributed to a high rate of consumption. While we observed large initial reductions in surface fuel loadings, our results also captured a considerable increase in surface fuels within one year of the fire. Further monitoring will be required to determine how long it will take for post-fire fuel loadings to equal or surpass pre-fire levels, but preliminary data show that this process may occur relatively quickly as fire-killed branches and trees fall (Morrison and Raphael 1993; Passovoy and Fulé 2006; Yocom-Kent et al. 2015; Grayson et al. 2019). A study of post-fire fuel dynamics in the northern Rocky Mountains, USA, only found post-fire increases in the finest fuel category (Stalling et al. 2017), whereas we found post-fire inputs of all woody fuel classes, reflecting the old-growth characteristics of our study site, specifically the large old trees and snags with large dead branches that were deposited during or after the fire. In fire-suppressed old-growth forests such as the YFDP, multiple fires may be necessary to restore and maintain characteristically low fuel loadings, as fire-related mortality of high densities of small trees can quickly replace the fuels that were consumed in a first re-entry fire. There may potentially exist a need for a "maintenance" fire shortly after the initial return of fire to the system due to initial high post-fire inputs of previously live canopy fuels (Larson et al. 2013). Another possible reason for the apparent inconsistency with the findings of Stalling et al. (2017) is that we conducted this study within a single forest type following a single fire while Stalling et al. (2017) grouped post-fire fuel measurements from four sites that spanned 500 m in elevation and overstory mortality rates from 25 to 100%. Grouping such different study sites likely contributed to the high variability that they observed in fuel loadings, and this could have obscured any underlying patterns in fuel accumulation within individual sites. By conducting our study at a single site, we were able to reduce landscape-level variability in fuel loadings and more clearly capture post-fire fuel accumulation. FOFEM uses a process-based approach (the "Burnup" model; Albini and Reinhardt 1997) to model consumption of litter and woody fuels, and a separate empirical model created by Brown et al. (1985) to predict duff consumption. Burnup over-predicted consumption of litter, 1-hour, 10-hour, and 100-hour fuels, due primarily to variability in burning conditions within the YFDP that resulted in a residual amount of litter and small woody fuels. Burnup almost always predicts total consumption or 1-hour, and 10-hour fuels (D. Lutes, USDA Forest Service, Missoula, Montana, USA, personal communication), but, in practice, most fires burn with some heterogeneity. The Burnup model may be more accurate in prescribed fire applications for which fire is applied more uniformly, but our results show that heterogeneity in burning conditions is an important consideration when modeling fuel consumption post wildfire. FOFEM and the Burnup sub-model performed poorly for CWD and duff. 
Burnup also under-predicted consumption of CWD, particularly rotten CWD (Fig. 5), a deviation that may be due in part to the relatively high abundance of very large diameter rotten coarse woody debris within the YFDP. Burnup models consumption based on heat transfer from small fuels to larger fuels, and this relies on the assumption that there is a relatively continuous diameter distribution of fuels. The under-prediction we observed may have been due to a gap in the diameter distribution of fuels, which could have reduced modeled heat transfer to, and ignition of, larger fuels, or perhaps Burnup is not well calibrated for modeling consumption of fuels beyond the diameter range it was based on (up to 30 cm in diameter; Albini et al. 1995), which our CWD data far exceed (Table 2). Alternatively, fuel moistures in the rotten CWD may have been lower than the values entered into the model. The empirical duff model also greatly under-predicted consumption of duff. The extremely dry conditions of August represent a window during which duff is more combustible than the model predicts, and those conditions were likely beyond the range of duff moistures well represented by the domain of the model. Conclusions and Management Implications In long-unburned Sierra Nevada mixed-conifer forests, surface fuel loadings can be high, but they are also heterogeneous. There was almost an order of magnitude of variation in fuel loading before fire (mean = 192.6 Mg ha−1, SD = 97.5), and over two orders of magnitude in the residual fuel loading after fire (mean = 41.3 Mg ha−1, SD = 57.0). Fire dramatically reduced surface fuels, but most fuel classes, with the exception of duff, received new inputs within one year of the fire. The immediate spatial neighborhood of trees and snags in this fire-excluded, productive forest had some explanatory relationship with surface fuels but, at most, explained 10% of the variance in some fuel classes, similar to results found in nearby forests where overstory variables were sampled with a different approach (Lydersen et al. 2015). This contrasts with the results of van Wagtendonk and Moore (2010), who compared fuel loadings in stands where at least 90% of the trees at a site were of a single species and diameter class, and found significant differences in fuel loadings. In fire-excluded forests with variable tree size and species composition, local forest structure and species composition seem to be only weakly related to fuel loadings. Thus, overstory structure is not a strong enough predictor to replace measurements of surface fuels. CWD stood out throughout our analyses. It was a major surface fuel pool before fire, and the largest pool after fire. Of all the fuel classes, it had the lowest overall consumption, suggesting that large CWD may be better able to persist in frequent fire regimes. CWD also had the strongest relationship with overstory variables, although the direction of the relationship differed with overstory component: positive for total dead basal area, and negative for live P. lambertiana. Finally, FOFEM under-predicted consumption of CWD, indicating that FOFEM could potentially be improved to better model the dynamics of very large woody debris. The low accuracy for rotten CWD in FOFEM is important because CWD plays an important role in many ecosystem functions, including wildlife habitat, nutrient cycling, carbon storage, and post-fire vegetation dynamics (Bull 2002; Janisch and Harmon 2002; Koenigs et al. 2002; Brown et al. 2003), in addition to functioning as a surface fuel.
Our observations suggest that beneficial structural changes, such as reduction of fine fuels without excessive losses of CWD, can result even from fire events occurring during severe fire weather. Surface fuel loadings were spatially variable within the 25.6 ha study area, reflecting variation in litter and duff near large trees, and the presence of large CWD pieces. This heterogeneity likely contributes to variation in fire severity at small scales and heterogeneity in the post-fire environment. This study makes a unique contribution by investigating variability of pre- and post-fire fuel loading within a single intensively measured site in a large-statured old-growth forest. The size of our research site is closely aligned with the size of a typical prescribed fire unit. Thus, our results have direct relevance to management and restoration of fire-excluded old-growth mixed-conifer forests. Agee, J.K., and M.H. Huff. 1987. Fuel succession in a western hemlock/Douglas-fir forest. Canadian Journal of Forest Research 17: 697–704. Albini, F.A., J.K. Brown, E.D. Reinhardt, and R.D. Ottmar. 1995. Calibration of a large fuel burnout model. International Journal of Wildland Fire 5: 173–192. https://doi.org/10.1071/WF9950173. Albini, F.A., and E.D. Reinhardt. 1997. Improved calibration of a large fuel burnout model. International Journal of Wildland Fire 7: 21–28. https://doi.org/10.1071/WF9970021. Banwell, E.M., J.M. Varner, E.E. Knapp, and R.W. Van Kirk. 2013. Spatial, seasonal, and diel forest floor moisture dynamics in Jeffrey pine-white fir forests of the Lake Tahoe Basin, USA. Forest Ecology and Management 305: 11–20. https://doi.org/10.1016/j.foreco.2013.05.005. Barth, M.A.F., A.J. Larson, and J.A. Lutz. 2015. A forest reconstruction model to assess changes to Sierra Nevada mixed-conifer forest during the fire suppression era. Forest Ecology and Management 354: 104–118. https://doi.org/10.1016/j.foreco.2015.06.030. Blomdahl, E.M., C.A. Kolden, A.J.H. Meddens, and J.A. Lutz. 2019. The importance of small fire refugia in the central Sierra Nevada, California, USA. Forest Ecology and Management 432: 1041–1052. https://doi.org/10.1016/j.foreco.2018.10.038. Brown, J.K. 1974. Handbook for inventorying downed woody material. USDA Forest Service General Technical Report INT-16. Ogden, Utah: USDA Forest Service, Intermountain Forest and Range Experiment Station. Brown, J.K., M.A. Marsden, K.C. Ryan, and E.D. Reinhardt. 1985. Predicting duff and woody fuel consumed by prescribed fire in the northern Rocky Mountains. USDA Forest Service Research Paper INT-337. Ogden, Utah: USDA Forest Service, Intermountain Forest and Range Experiment Station. https://doi.org/10.2737/INT-RP-337. Brown, J.K., E.D. Reinhardt, and K.A. Kramer. 2003. Coarse woody debris: managing benefits and fire hazard in the recovering forest. USDA Forest Service General Techmical Report RMRS-GTR-105. Ogden, Utah: USDA Forest Service, Rocky Mountain Research Station. https://doi.org/10.2737/RMRS-GTR-105. Bull, E.L. 2002. The value of coarse woody debris to vertebrates in the Pacific Northwest. In Proceedings of the symposium on the ecology and management of dead wood in western forests. USDA Forest Service General Technical Report PSW-GTR-181, ed. W.F. Laudenslayer Jr., P.J. Shea, B.E. Valentine, C.P. Weatherspoon, and T.E. Lisle, technical coordinators. Pages 171–178. Albany, California: USDA FOrest Service, Pacific Southwest Research Station. https://doi.org/10.2737/PSW-GTR-181. Campbell, J., D. Donato, D. Azuma, and B. Law. 2007. 
Pyrogenic carbon emission from a large wildfire in Oregon, United States. Journal of Geophysical Research 112: G04014. https://doi.org/10.1029/2007JG000451. Certini, G. 2005. Effects of fire on properties of forest soils: a review. Oecologia 143: 1–10. https://doi.org/10.1007/s00442-004-1788-8. Collins, B.M., S.L. Stephens, J.J. Moghaddas, and J. Battles. 2010. Challenges and approaches in planning fuel treatments across fire-excluded forested landscapes. Journal of Forestry 108: 24–31. Daly, C., M. Halbleib, J.I. Smith, W.P. Gibson, M.K. Doggett, G.H. Taylor, J. Curtis, and P.P. Pasteris. 2008. Physiographically sensitive mapping of climatological temperature and precipitation across the conterminous United States. International Journal of Climatolology 28: 2031–2064. https://doi.org/10.1002/joc.1688. Demaerschalk, J.P., and S.A.Y. Omule. 1982. Estimating breast height diameters from stump measurements in British Columbia. The Forestry Chronicle 53 (3): 143–146. https://doi.org/10.5558/tfc58143-3. Flora of North America Editorial Committee, ed. 1993. Flora of North America north of Mexico. 20+ volumes. New York, New York: Oxford University Press. Fosberg, M.A. 1970. Drying rates of heartwood below fiber saturation. Forest Science 16: 57–63. French, N.H.F., W.J. de Groot, L.K. Jenkins, B.M. Rogers, E. Alvarado, B. Amiro, B. de Jong, S. Goetz, E. Hoy, E. Hyer, R. Keane, B.E. Law, D. McKenzie, S.G. McNulty, R. Ottmar, D.R. Pérez-Salicrup, J. Randerson, K.M. Robertson, and M. Turetsky. 2011. Model comparisons for estimating carbon emissions from North American wildland fire. Journal of Geophysical Research 116: G00K05. https://doi.org/10.1029/2010JG001469. Furniss, T.J., A.J. Larson, V.R. Kane, and J.A. Lutz. 2019. Multi-scale assessment of post-fire tree mortality models. International Journal of Wildland Fire 28: 46–61. https://doi.org/10.1071/WF18031. Garlough, E.C., and C.R. Keyes. 2011. Influences of moisture content, mineral content and bulk density on smouldering combustion of ponderosa pine duff mounds. International Journal of Wildland Fire 20: 589–596. https://doi.org/10.1071/WF10048. Grayson, L., D. Cluck, and S. Hood. 2019. Persistence of fire-killed conifer snags in California. Fire Ecology 15: 1. https://doi.org/10.1186/s42408-018-0007-7. Harmon, M.E., and J. Sexton. 1996. Guidelines for measurements of woody detritus in forest ecosystems. Long-Term Ecological Research Publication No. 20. Seattle Washington: Network Office, University of Washington. Harmon, M.E., C.W. Woodall, B. Fasth, and J. Sexton. 2008. Woody detritus density and density reduction factors for tree species in the United States: a synthesis. USDA Forest Service General Technical Report NRS-29. Newton Square, Pennsylvania: USDA Forest Service, Northern Research Station. https://doi.org/10.2737/NRS-GTR-29. Hille, M., and J. den Ouden. 2005. Fuel load, humus consumption and humus moisture dynamics in Central European Scots pine stands. International Journal of Wildland Fire 14: 153–159. https://doi.org/10.1071/WF04026. Hille, M.G., and S.L. Stephens. 2005. Mixed conifer forest duff consumption during prescribed fires: tree crown impacts. Forest Science 51: 417–424. Hood, S., C.A. Cansler, P. van Mantgem, and J.M. Varner. 2018. Fire and tree death: understanding and improving modeling of fire-induced tree mortality. Environmental Research Letters 13 (11): 113004. https://doi.org/10.1088/1748-9326/aae934. Hood, S.M. 2010. Mitigating old tree mortality in long-unburned, fire-dependent forests: a synthesis. 
USDA Forest Service General Technical Report RMRS-GTR-238. Fort Collins, Colorado: USDA Forest Service, Rocky Mountain Research Station. https://doi.org/10.2737/RMRS-GTR-238. Hood, S.M., S.L. Smith, and D.R. Cluck. 2010. Predicting mortality for five California conifers following wildfire. Forest Ecology and Management 260: 750–762. https://doi.org/10.1016/j.foreco.2010.05.033. Hyde, J.C., A.M. Smith, R.D. Ottmar, E.C. Alvarado, and P. Morgan. 2011. The combustion of sound and rotten coarse woody debris: a review. International Journal of Wildland Fire 20: 163–174. Janisch, J.E., and M.E. Harmon. 2002. Successional changes in live and dead wood carbon stores: implications for net ecosystem productivity. Tree Physiology 22: 77–89. https://doi.org/10.1093/treephys/22.2-3.77. Jenkins, M.J., W.G. Page, E.G. Hebertson, and M.E. Alexander. 2012. Fuels and fire behavior dynamics in bark beetle-attacked forests in western North America and implications for fire management. Forest Ecology and Management 275: 23–34. https://doi.org/10.1016/j.foreco.2012.02.036. Kauffman, J.B., and R.E. Martin. 1989. Fire behavior, fuel consumption, and forest-floor changes following prescribed understory fires in Sierra Nevada mixed conifer forests. Canadian Journal of Forest Research 53: 1689–1699. https://doi.org/10.1017/CBO9781107415324.004. Keane, R.E., S.F. Arno, and C.A. Stewart. 2000. Ecosystem-based management in the Whitebark Pine Zone. In Bitterroot Ecosystem Management Restoration Project: what we have learned. USDA Forest Service Proceedings RMRS-P-17, ed. H. Smith. Pages 36–40. Ogden, Utah: USDA Forest Service, Rocky Mountain Research Station. Knapp, E.E., J.E. Keeley, E.A. Ballenger, and T.J. Brennan. 2005. Fuel reduction and coarse woody debris dynamics with early season and late season prescribed fire in a Sierra Nevada mixed conifer forest. Forest Ecology and Management 208: 383–397. https://doi.org/10.1016/j.foreco.2005.01.016. Koenigs, E., P.J. Shea, R. Borys, and M.L. Haverty. 2002. An investigation of the insect fauna associated with coarse woody debris of Pinus ponderosa and Abies concolor in northeastern California. In Proceedings of the symposium on the ecology and management of dead wood in Western forests. USDA Forest Service General Technical Report PSW-GTR-181, ed. W.F. Laudenslayer Jr., P.J. Shea, B.E. Valentine, C.P. Weatherspoon, and T.E. Lisle, technical coordinators. Pages 97–110. Albany, California: USDA FOrest Service, Pacific Southwest Research Station. Kolb, T.E., J.K. Agee, P.Z. Fulé, N.G. McDowell, K. Pearson, A. Sala, and R.H. Waring. 2007. Perpetuating old ponderosa pine. Forest Ecology and Management 249: 141–157. https://doi.org/10.1016/j.foreco.2007.06.002. Larson, A.J., R.T. Belote, C.A. Cansler, S.A. Parks, and M.S. Dietz. 2013. Latent resilience in ponderosa pine forest: effects of resumed frequent fire. Ecological Applications 23: 1243–1249. https://doi.org/10.1890/13-0066.1. Larson, A.J., C.A. Cansler, S.G. Cowdery, S. Hiebert, T.J. Furniss, M.E. Swanson, and J.A. Lutz. 2016. Post-fire morel (Morchella) mushroom abundance, spatial structure, and harvest sustainability. Forest Ecology and Management 377: 16–25. https://doi.org/10.1016/j.foreco.2016.06.038. Lutes, D., R.E. Keane, and E.D. Reinhardt. 2012. FOFEM 6.0 user guide. Fort Collins, Colorado: USDA Forest Service, Rocky Mountain Research Station. Lutes, D.C., R.E. Keane, and J.F. Caratti. 2009. A surface fuel classification for estimating fire effects. International Journal of Wildland Fire 18: 802–814. 
https://doi.org/10.1071/WF08062. Lutz, J., A. Larson, and M. Swanson. 2018b. Advancing fire science with large forest plots and a long-term multidisciplinary approach. Fire 1(1): 5. https://doi.org/10.3390/fire1010005 Lutz, J.A., T.J. Furniss, S.J. Germain, K.M.L. Becker, E.M. Blomdahl, S.M.A. Jeronimo, C.A. Cansler, J.A. Freund, M.E. Swanson, and A.J. Larson. 2017. Shrub communities, spatial patterms, and shrub-mediated tree mortality following reintorduced fire in Yosemite National Park, California, USA. Fire Ecology 13: 104–126. https://doi.org/10.4996/fireecology.1301104. Lutz, J.A., T.J. Furniss, D.J. Johnson, S.J. Davies, D. Allen, A.Alonso, K.J. Anderson-Teixeira, A. Andrade, J. Baltzer, K.M. L. Becker, E.M. Blomdahl, N.A. Bourg, S. Bunyavejchewin, D.F.R.P. Burslem, C.A. Cansler, K. Cao, M. Cao, D. Cárdenas, L.-W. Chang, K.-J. Chao, W.-C. Chao, J.-M. Chiang, C. Chu, G.B. Chuyong, K. Clay, R. Condit, S. Cordell, H.S. Dattaraja, A. Duque, C. E.N. Ewango, Gunter A. Fischer, Christine Fletcher, James A. Freund, Christian Giardina, Sara J. Germain, G.S. Gilbert, Z.Hao, T.Hart, B.C.H. Hau, F. He, A. Hector, R.W. Howe, C.-F. Hsieh, Y.-H. Hu, S.P. Hubbell, F.M. Inman-Narahari, A. Itoh, D. Janík, A.R. Kassim, D. Kenfack, L. Korte, K. Král, A.J. Larson, Y. Li, Y. Lin, S. Liu, S. Lum, K. Ma, J.-R. Makana, Y. Malhi, S.M. McMahon, W.J. McShea, H.R. Memiaghe, X. Mi, M. Morecroft, P.M. Musili, J.A. Myers, V. Novotny, A. de Oliveira, P. Ong, D.A. Orwig, R. Ostertag, G.G. Parker, R. Patankar, R.P. Phillips, G. Reynolds, L. Sack, G.-Z.M. Song, S.-H. Su, R. Sukumar, I-F. Sun, H.S. Suresh, M.E. Swanson, S.Tan, D.W. Thomas, J. Thompson, M. Uriarte, R. Valencia, A. Vicentini, T. Vrška, X. Wang, G.D. Weiblen, A.Wolf, S.-H. Wu, H. Xu, T. Yamakura, S. Yap, J.K. Zimmerman. 2018a. Global importance of large-diameter trees. Global Ecology and Biogeography 27: 849–864. https://doi.org/10.1111/geb.12747 Lutz, J.A., A.J. Larson, M.E. Swanson, and J.A. Freund. 2012. Ecological importance of large-diameter trees in a temperate mixed-conifer forest. PLoS ONE 7: e36131. https://doi.org/10.1371/journal.pone.0036131. Lutz, J.A., K.A. Schwindt, T.J. Furniss, J.A. Freund, M.E. Swanson, K.I. Hogan, G.E. Kenagy, and A.J. Larson. 2014. Community composition and allometry of Leucothoe davisiae, Cornus sericea, and Chrysolepis sempervirens. Canadian Journal of Forest Research 44: 677–683. https://doi.org/10.1139/cjfr-2013-0524. Lutz, J.A., J.W. van Wagtendonk, and J.F. Franklin. 2009a. Twentieth-century decline of large-diameter trees in Yosemite National Park, California, USA. Forest Ecology and Management 257: 2296–2307. https://doi.org/10.1016/j.foreco.2009.03.009 Lutz, J.A., J.W. van Wagtendonk, A.E. Thode, J.D. Miller, and J.F. Franklin. 2009b. Climate, lightning ignitions, and fire severity in Yosemite National Park, California, USA. International Journal of Wildland Fire 18: 765-774. https://doi.org/10.1071/WF08117 Lydersen, J.M., B.M. Collins, E.E. Knapp, G.B. Roller, and S. Stephens. 2015. Relating fuel loads to overstorey structure and composition in a fire-excluded Sierra Nevada mixed conifer forest. Internatioanl Journal of Wildland Fire 24: 484–494. https://doi.org/10.1071/WF13066. Maser, C., R.G. Anderson, K.J. Cromack, et al. 1979. Dead and down woody material. Dead and Down Woody Material. In: Thomas JW (ed) Wildlife habitats in managed forests: the Blue Mountains of Oregon and Washington, vol. 553. USDA Forest Service Agriculture Handbook. 
Washington, DC., US Government Printing Office Moghaddas, E.E.Y., and S.L. Stephens. 2007. Thinning, burning, and thin-burn fuel treatment effects on soil properties in a Sierra Nevada mixed-conifer forest. Forest Ecology and Management 250: 156–166. https://doi.org/10.1016/j.foreco.2007.05.011. Moore, M.M., C.A. Casey, J.D. Bakker, J.D. Springer, P.Z. Fulé, W.W. Covington, and D.C. Laughlin. 2006. Herbaceous vegetation responses (1992–2004) to restoration treatments in a ponderosa pine forest. Rangeland Ecology and Management 59: 135–144. https://doi.org/10.2111/05-051R2.1. Morrison, M.L., and M.G. Raphael. 1993. Modeling the dynamics of snags. Ecological Applications 3: 322–330. https://doi.org/10.2307/1941835. Nesmith, J.C.B., K.L. O'Hara, P.J. van Mantgem, and P. de Valpine. 2010. The effects of raking on sugar pine mortality following prescribed fire in Sequoia and Kings Canyon national parks, California, USA. Fire Ecology 6 (3): 97–116. https://doi.org/10.4996/fireecology.0603097. Passovoy, M.D., and P.Z. Fulé. 2006. Snag and woody debris dynamics following severe wildfires in northern Arizona ponderosa pine forests. Forest Ecology and Management 223: 237–246. https://doi.org/10.1016/j.foreco.2005.11.016. Pinheiro, J., D. Bates, S. DebRoy, D. Sarkar, and R. Core Team. 2018. nlme: linear and nonlinear mixed effects models. R package version 3.1-131. <https://CRAN.R-project.org/package=nlme>. Accessed 1 Mar 2018. Prichard, S.J., E.C. Karau, R.D. Ottmar, M.C. Kennedy, J.B. Cronan, C.S. Wright, and R.E. Keane. 2014. Evaluation of the CONSUME and FOFEM fuel consumption models in pine and mixed hardwood forests of the eastern United States. Canadian Journal of Forest Research 44: 784–795. https://doi.org/10.1139/cjfr-2013-0499. PRISM Climate Group. 2004. Northwest alliance for computational science and engineering. <http://www.prismclimate.org. Accessed 1 May 2018. R Core Team. 2017. R: a language and environment for statistical computing. Version 3.4.3. Vienna: R Foundation for Statistical Computing. Ryan, K.C., and W.H. Frandsen. 1991. Basal injury from smoldering fires in mature Pinus ponderosa Laws. International Journal of Wildland Fire 1 (2): 107–118. https://doi.org/10.1071/WF9910107. Ryan, K.C., E.E. Knapp, and J.M. Varner. 2013. Prescribed fire in North American forests and woodlands: history, current practice, and challenges. Frontiers in Ecology and the Environment 11: e15–e24. https://doi.org/10.1890/120329. Sackett, S.S., and S.M. Haas. 1998. Two case histories for using prescribed fire to restore ponderosa pine ecosystems in northern Arizona. In Proceedings of the 20 th Tall Timbers fire ecology conference—fire in ecosystem management: shifting the paradigm from suppression to prescription, ed. T.L. Pruden and L.A. Brennan. Pages 380–389. Tallahassee, Florida: Tall Timbers Research Station. Sikkink, P.G., and R.E. Keane. 2008. A comparison of five sampling techniques to estimate surface fuel loading in montane forests. International Journal of Wildland Fire 17: 363–379. https://doi.org/10.1071/WF07003. Stalling, C., R.E. Keane, and M. Retzlaff. 2017. Surface fuel changes after severe disturbances in northern Rocky Mountain ecosystems. Forest Ecology and Management 400: 38–47. https://doi.org/10.1016/j.foreco.2017.05.020. Stavros, E.N., Z. Tane, V.R. Kane, S.R. Veraverbeke, R.J. McGaughey, J.A. Lutz, C. Ramirez, and D. Schimel. 2016. Unprecedented remote sensing data over King and Rim megafires in the Sierra Nevada mountains of California. Ecology 97: 3244–3244. 
https://doi.org/10.1002/ecy.1577. Stephens, S., and J. Moghaddas. 2005a. Experimental fuel treatment impacts on forest structure, potential fire behavior, and predicted tree mortality in a California mixed conifer forest. Forest Ecology and Management 215: 21–36. https://doi.org/10.1016/j.foreco.2005.03.070 Stephens, S.L. 2004. Fuel loads, snag abundance, and snag recruitment in an unmanaged Jeffrey pine–mixed conifer forest in northwestern Mexico. Forest Ecology and Management 199: 103–113. https://doi.org/10.1016/j.foreco.2004.04.017. Stephens, S.L., and J.J. Moghaddas. 2005b. Fuel treatment effects on snags and coarse woody debris in a Sierra Nevada mixed conifer forest. Forest Ecology and Management 214: 53–64. https://doi.org/10.1016/j.foreco.2005.03.055 Swezy, D.M., and J.K. Agee. 1991. Prescribed-fire effects on fine-root and tree mortality in old-growth ponderosa pine. Canadian Journal of Forest Research 21: 626–634. https://doi.org/10.1139/x91-086. van Wagtendonk, J.W. 1996. Physical properties of woody fuel particles Sierra Nevada conifers. International Journal of Wildland Fire 6: 117–123. https://doi.org/10.1071/WF9960117. van Wagtendonk, J.W., J.M. Benedict, and W.M. Sydoriak. 1998. Fuel bed characteristics of Sierra Nevada conifers. Western Journal of Applied Forestry 13: 73–84. van Wagtendonk, J.W., and J.A. Lutz. 2007. Fire regime attributes of wildland fires in Yosemite National Park, USA. Fire Ecology 3 (2): 34–52. https://doi.org/10.4996/fireecology.0302034. van Wagtendonk, J.W., and P.E. Moore. 2010. Fuel deposition rates of montane and subalpine conifers in the central Sierra Nevada, California, USA. Forest Ecology and Management 259: 2122–2132. https://doi.org/10.1016/j.foreco.2010.02.024. Varner, J.M., D.R. Gordon, F.E. Putz, and J.K. Hiers. 2005. Restoring fire to long-unburned Pinus palustris ecosystems: novel fire effects and consequences for long-unburned ecosystems. Restoration Ecology 13: 536–544. https://doi.org/10.1111/j.1526-100X.2005.00067.x. Varner, J.M., F.E. Putz, J.J. O'Brien, J.K. Hiers, R.J. Mitchell, and D.R. Gordon. 2009. Post-fire tree stress and growth following smoldering duff fires. Forest Ecology and Management 258: 2467–2474. https://doi.org/10.1016/j.foreco.2009.08.028. Yocom-Kent, L.L., K.L. Shive, B.A. Strom, C.H. Sieg, M.E. Hunter, C.S. Stevens-Rumann, and P.Z. Fulé. 2015. Interactions of fuel treatments, wildfire severity, and carbon dynamics in dry conifer forests. Forest Ecology and Management 349: 66–72. https://doi.org/10.1016/j.foreco.2015.04.004. We thank Yosemite National Park for logistical assistance and the Yosemite Forest Dynamics Plot field crews, especially R. Burke, each individually acknowledged at http://yfdp.org This work was performed under National Park Service research permits YOSE-2011-SCI-0015, YOSE-2012-SCI-0059, YOSE-2013-SCI-0012, and YOSE-2014-SCI-0012 for study YOSE-0051. This work was funded by the National Park Service (Awards P14AC00122 and P14AC00197). This work was supported by the USDA National Institute of Food and Agriculture (McIntire Stennis accession number 1000655), and the Utah Agricultural Experiment Station (projects 1153 and 1398). 
The fuel datasets generated or analyzed during the current study are available through the Utah State University Digital Commons repository: https://digitalcommons.usu.edu/all_datasets/51/; https://doi.org/10.15142/T3G93X Tree and snag data are available through the Smithsonian ForestGEO data portal: https://forestgeo.si.edu USDA Forest Service, Rocky Mountain Research Station, Fire, Fuel, and Smoke Science Program, 5775 Hwy 10 W, Missoula, Montana, 59808, USA C. Alina Cansler School of the Environment, Washington State University, Box 646410, Pullman, Washington, 99164-6420, USA Mark E. Swanson Department of Wildland Resources, Utah State University, 5230 Old Main Hill, Logan, Utah, 84322-5230, USA Tucker J. Furniss & James A. Lutz W.A. Franke College of Forestry and Conservation, University of Montana, 32 Campus Drive, Missoula, Montana, 59812, USA Andrew J. Larson Tucker J. Furniss James A. Lutz All authors designed the research, conducted field sampling, analyzed data, contributed to writing the manuscript, and approved the final manuscript. C.A. Cansler developed the final field methods and conceptual models, performed and synthesized all analyses, and wrote the manuscript. M.E. Swanson developed field methods, and developed some analyses. Correspondence to C. Alina Cansler. Example arrangement of litter and duff mound sampling transects in relation to a tree. Each is 3 m long. Duff mounds were measured in the Yosemite Forest Dynamics Plot, Yosemite National Park, California, USA. Pre-fire measurements were taken in August 2013, prior to the Rim Fire in September 2013, and the post-fire fuel measurements were taken in June 2014. (PDF 66 kb) Assessment of variability in estimated means as function of sample size for 1-hour fuels measured in the Yosemite Forest Dynamics Plot, Yosemite National Park, California, USA. Fuel transects were measured in June 2011, before the 2013 Rim Fire, and after the fire in June 2014. To identify the minimum sample size for each fuel class, we selected random samples of different numbers of individual transects (woody fuels; n = 1 to 99 transects) from our populations of pre-fire and post-fire samples and calculated mean and standard deviations of mass for each sample. Those summary statistics were plotted against sample size. Stabilization (stationarity) of variance was assessed graphically with the use of two envelope widths: the estimated mean within ±20% of the mean and ±10% of the mean. This procedure was repeated 10 times for each fuel class. Red and blue dotted lines represent ±10% and ±20% of mean values, respectively. (PDF 522 kb) Assessment of variability in estimated means as function of sample size for 10-hour fuels measured in the Yosemite Forest Dynamics Plot, Yosemite National Park, California, USA. Fuel transects were measured in June 2011, before the 2013 Rim Fire, and after the fire in June 2014. To identify the minimum sample size for each fuel class, we selected random samples of different numbers of individual transects (woody fuels; n = 99 transects) from our populations of pre-fire and post-fire samples and calculated mean and standard deviations of mass for each sample. Those summary statistics were plotted against sample size. Stabilization (stationarity) of variance was assessed graphically with the use of two envelope widths: the estimated mean within ±20% of the mean and ±10% of the mean. This procedure was repeated 10 times for each fuel class. Red and blue dotted lines represent ±10% and ±20% of mean values, respectively. 
(PDF 514 kb) Assessment of variability in estimated means as function of sample size for 100-hour fuels measured in the Yosemite Forest Dynamics Plot, Yosemite National Park, California, USA. Fuel transects were measured in June 2011, before the 2013 Rim Fire, and after the fire in June 2014. To identify the minimum sample size for each fuel class, we selected random samples of different numbers of individual transects (woody fuels; n = 1 to 99 transects) from our populations of pre-fire and post-fire samples and calculated mean and standard deviations of mass for each sample. Those summary statistics were plotted against sample size. Stabilization (stationarity) of variance was assessed graphically with the use of two envelope widths: the estimated mean within ±20% of the mean and ±10% of the mean. This procedure was repeated 10 times for each fuel class. Red and blue dotted lines represent ±10% and ±20% of mean values, respectively. (PDF 512 kb) Assessment of variability in estimated means as function of sample size for CWD fuels measured in the Yosemite Forest Dynamics Plot, Yosemite National Park, California, USA. Fuel transects were measured in June 2011, before the 2013 Rim Fire, and after the fire in June 2014. To identify the minimum sample size for each fuel class, we selected random samples of different numbers of individual transects (woody fuels; n = 1 to 99 transects) from our populations of pre-fire and post-fire samples and calculated mean and standard deviations of mass for each sample. Those summary statistics were plotted against sample size. Stabilization (stationarity) of variance was assessed graphically with the use of two envelope widths: the estimated mean within ±20% of the mean and ±10% of the mean. This procedure was repeated 10 times for each fuel class. Red and blue dotted lines represent ±10% and ±20% of mean values, respectively. (PDF 516 kb) Assessment of variability in estimated means as function of sample size for litter measured in the Yosemite Forest Dynamics Plot, Yosemite National Park, California, USA. Fuel transects were measured in June 2011, before the 2013 Rim Fire, and after the fire in June 2014. To identify the minimum sample size for each fuel class, we selected random samples of different numbers of sample point locations (litter and duff; n = 1 to 990 points) from our populations of pre-fire and post-fire samples and calculated mean and standard deviations of mass for each sample. Those summary statistics were plotted against sample size. Stabilization (stationarity) of variance was assessed graphically with the use of two envelope widths: the estimated mean within ±20% of the mean and ±10% of the mean. This procedure was repeated 10 times for each fuel class. Red and blue dotted lines represent ±10% and ±20% of mean values, respectively. (PDF 469 kb) Assessment of variability in estimated means as function of sample size for duff measured in the Yosemite Forest Dynamics Plot, Yosemite National Park, California, USA. Fuel transects were measured in June 2011, before the 2013 Rim Fire, and after the fire in June 2014. To identify the minimum sample size for each fuel class, we selected random samples of different numbers of sample point locations (litter and duff; n = 1 to 990 points) from our populations of pre-fire and post-fire samples and calculated mean and standard deviations of mass for each sample. Those summary statistics were plotted against sample size. 
Stabilization (stationarity) of variance was assessed graphically with the use of two envelope widths: the estimated mean within ±20% of the mean and ±10% of the mean. This procedure was repeated 10 times for each fuel class. Red and blue dotted lines represent ±10% and ±20% of mean values, respectively. (PDF 461 kb) Assessment of variability in estimated means as function of sample size for total fuels measured in the Yosemite Forest Dynamics Plot, Yosemite National Park, California, USA. Fuel transects were measured in June 2011, before the 2013 Rim Fire, and after the fire in June 2014. To identify the minimum sample size for each fuel class, we selected random samples of different numbers of individual transects (total fuels; n = 1 to 99 transects) from our populations of pre-fire and post-fire samples and calculated mean and standard deviations of mass for each sample. Those summary statistics were plotted against sample size. Stabilization (stationarity) of variance was assessed graphically with the use of two envelope widths: the estimated mean within ±20% of the mean and ±10% of the mean. This procedure was repeated 10 times for each fuel class. Red and blue dotted lines represent ±10% and ±20% of mean values, respectively. (PDF 524 kb) Mean and standard deviation (SD) in fuel mass for each fuel class in the Yosemite Forest Dynamics Plot, Yosemite National Park, California, USA. Measurements were taken in June 2011 for "pre-fire" times, and in June 2014 for "post-fire" times, before and after the Rim Fire burned the research site in September 2013. "Post-fire old" refers to fuel that was present before fire and did not burn. "Post-fire new" refers to surface fuel that accumulated after fire. (PDF 23 kb) Additional file 10: Non-linear models of litter and duff depth for eight P. lambertiana measured in the Yosemite Forest Dynamics Plot, Yosemite National Park, California, USA. Fuel transects were measured in June 2011, before the 2013 Rim Fire, and after the fire in June 2014. Error bars represent the standard error around the mean, and colors from light to dark represent small to large diameter (DBH in cm) trees. Models could not be fit for duff 1 year after fire because of the high number of zero depth measurements. Measurements were of duff mounds in the Yosemite Forest Dynamics Plot, Yosemite National Park, California, USA. Pre-fire measurements were taken in August 2013, prior to the Rim Fire in September 2013, and the total post-fire fuel was measured in June 2014. (PDF 354 kb) Ricker curve-linear model results for litter depth (cm) as a function of distance (m) from eight P. lambertiana. Measurements were of duff mounds in the Yosemite Forest Dynamics Plot, Yosemite National Park, California, USA. Pre-fire measurements were taken in August 2013, prior to the Rim Fire in September 2013, and post-fire measurements were taken in June 2014. Estimates of regression coefficients (a; B) are given, as are the stand error of the estimate (SE), t-statistic, and P-values. (PDF 22 kb) Ricker curve model results for pre-fire duff depth (cm) as a function of distance (m) from eight P. lambertiana in the Yosemite Forest Dynamics Plot, Yosemite National Park, California, USA, and were taken in August 2013, prior to the Rim Fire in September 2013. Estimates of regression coefficients (a; B) are given, as are the stand error of the estimate (SE), t-statistic, and P-values. 
(PDF 21 kb) Litter and duff mound regression models for the fuel loading of litter and duff normalized by surface area as a function of tree DBH for eight P. lambertiana trees. Measurements were of duff mounds in the Yosemite Forest Dynamics Plot, Yosemite National Park, California, USA. Pre-fire measurements were taken in August 2013, prior to the Rim Fire in September 2013, and the post-fire fuel measurements were taken in June 2014. Models were significant for ≤1 m and ≤2 m from the trees. Intercept and slope are given, as are stand error (SE), t-statistic, and P-values. Adj. R2 = adjusted R2. (PDF 20 kb) Scatter plots and regression between overstory basal area and fuel loadings of live basal area for all trees (left), Abies concolor (ABCO, middle), and Pinus lambertiana (PILA, right), for each fuel class. Pre-fire fuel loadings and overstory trees were measured in the Yosemite Forest Dynamics Plot, Yosemite National Park, California, USA, in June 2011, before the 2013 Rim Fire. Overstory basal area was sampled at local neighborhoods around fuel sampling locations within 0.015 ha (6.91 m radius). Overstory components are labeled on the x-axis. Only significant regressions are plotted (blue lines with grey confidence intervals). The coefficient of determination is labeled on all scatter plots: note that even significant regression explained only 10% of the variance, at most. (PDF 3528 kb) Scatter plots and regression between overstory basal area and fuel loadings of basal area for all dead trees (far left), dead Abies concolor (ABCO, middle left), and dead Pinus lambertiana (PILA, middle right), and all live and dead trees (far right) for each fuel class. Pre-fire fuel loadings and overstory trees were measured in the Yosemite Forest Dynamics Plot, Yosemite National Park, California, USA, in June 2011, before the 2013 Rim Fire. Overstory basal area was sampled at local neighborhoods around fuel sampling locations within 0.015 ha (6.91 m radius). Overstory components are labeled on the x-axis. Only significant regressions are plotted (blue lines with grey confidence intervals). The coefficient of determination is labeled on all scatter plots: note that even significant regression explained only 5% of the variance, at most. (PDF 3528 kb) Scatter plots and regression between overstory basal area and fuel loadings of live basal area for all trees (left), Abies concolor (ABCO, middle), and Pinus lambertiana (PILA, right), for each fuel class. Pre-fire fuel loadings and overstory trees were measured in the Yosemite Forest Dynamics Plot, Yosemite National Park, California, USA, in June 2011, before the 2013 Rim Fire. Overstory basal area was sampled at local neighborhoods around fuel sampling locations within 0.1 ha (17.84 m radius). Overstory components are labeled on the x-axis. Only significant regressions are plotted (blue lines with grey confidence intervals). The coefficient of determination is labeled on all scatter plots: note that even significant regression explained only 7% of the variance, at most. (PDF 3528 kb) Scatter plots and regression between overstory basal area and fuel loadings of basal area for all dead trees (far left), dead Abies concolor (ABCO, middle left), and dead Pinus lambertiana (PILA, middle right), and all live and dead trees (far right) for each fuel class. Pre-fire fuel loadings and overstory trees were measured in the Yosemite Forest Dynamics Plot, Yosemite National Park, California, USA, in June 2011, before the 2013 Rim Fire. 
Overstory basal area was sampled at local neighborhoods around fuel sampling locations within 0.1 ha (17.84 m radius). Overstory components are labeled on the x-axis. Only significant regressions are plotted (blue lines with grey confidence intervals). The coefficient of determination is labeled on all scatter plots: note that even significant regressions explained only 5% of the variance, at most. (PDF 3528 kb) Supplemental Methods Data reduction: duff mounds Depth distributions for larger pre-fire trees were humped near each tree because depth measurements were often shallower close to the tree, where the tree's roots were just under the litter layer. Because preliminary analysis showed that the distributions were also right-skewed, we used a Ricker curve to model the distribution (Equation 3; Additional files 10, 11, and 12): $$ y = a x e^{-bx}, $$ where a and b are constants, x is distance from the tree, and y is litter or duff depth. Modeling was done using the nlme package version 3.1-131.1 (Pinheiro et al. 2018). Because the depths of litter and duff one year after fire were often zero, we were not able to model them using a Ricker curve or other regression. Therefore, we calculated the mean depth for each of those measurements from 0 to 1 m and 0 to 2 m, and used those means to calculate fuel loadings and bulk densities (Additional file 10). To calculate the total volumes of litter and duff within each pre-fire duff mound, we numerically integrated the area beneath the fitted Ricker curve for each tree. We implemented this by calculating the volume of litter and duff in concentric 0.2 m bands around each tree and then summing the section volumes ≤1 m and ≤2 m from each tree. Volumes for each band were calculated using the formula for a cylinder with a smaller cylinder missing from the center: $$ V = h \pi r_1^2 - h \pi r_2^2, $$ where V = volume, $r_1$ = the larger radius, $r_2$ = the smaller radius, and h = the calculated mean height (depth) for each 0.2 m section. We determined $r_1$ and $r_2$ by adding the basal radius of the tree to each section start and end distance. The basal radius was modeled from the DBH of the tree, following the equation for western yellow pine in Demaerschalk and Omule (1982) (Equation 4): $$ \mathrm{DBH} = \mathrm{DS} + 0.33711\left( \mathrm{DS} \ln \left( \frac{\mathrm{SH}+1}{2.3} \right) \right), $$ where DS = stump diameter, and SH = stump height. Therefore, where SH = 0: $$ \mathrm{DS} = \frac{\mathrm{DBH}}{1 + 0.33711 \ln \left( \frac{0+1}{2.3} \right)}. $$ Volumes were converted to fuel loads (kg m−2) using the bulk densities for sugar pine litter and duff (40.768 kg m−3 and 160.960 kg m−3, respectively) from van Wagtendonk et al. (1998). Cansler, C.A., Swanson, M.E., Furniss, T.J., et al. Fuel dynamics after reintroduced fire in an old-growth Sierra Nevada mixed-conifer forest. Fire Ecology 15, 16 (2019). https://doi.org/10.1186/s42408-019-0035-y coarse woody debris FOFEM fuel heterogeneity Pinus lambertiana Rim Fire Smithsonian ForestGEO Yosemite Forest Dynamics Plot
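To illustrate the duff-mound data reduction in the Supplemental Methods above, here is a minimal R sketch: fit the Ricker curve $y = axe^{-bx}$ to depth-by-distance measurements, integrate the fitted depths over concentric 0.2 m bands (the cylinder-minus-cylinder volumes), and convert to mass with the sugar pine litter bulk density. The depth data, starting values, and basal radius are invented for illustration, and base R nls() stands in for the authors' nlme-based workflow.

```r
# Hypothetical litter-depth profile around one tree: distance from the bole (m)
# and litter depth (m). Values are simulated, not field data.
set.seed(1)
dist  <- seq(0.1, 2.0, by = 0.1)
depth <- 0.8 * dist * exp(-2.5 * dist) + rnorm(length(dist), sd = 0.003)

# Ricker curve y = a * x * exp(-b * x), fit by nonlinear least squares.
fit <- nls(depth ~ a * dist * exp(-b * dist),
           data = data.frame(dist, depth),
           start = list(a = 1, b = 2))

# Integrate fitted depth over concentric 0.2 m bands out to 2 m, offsetting
# each band by the tree's basal radius (assumed to be 0.4 m here; the paper
# derives it from DBH via Demaerschalk and Omule 1982).
basal_radius <- 0.4
edges <- seq(0, 2, by = 0.2)
mids  <- head(edges, -1) + 0.1                            # band mid-point distances
h     <- predict(fit, newdata = data.frame(dist = mids))  # fitted depth per band (m)
r1    <- basal_radius + edges[-1]                         # outer radius of each band
r2    <- basal_radius + head(edges, -1)                   # inner radius of each band
band_vol <- h * pi * (r1^2 - r2^2)                        # V = h*pi*r1^2 - h*pi*r2^2

litter_bd <- 40.768              # kg m^-3, sugar pine litter (van Wagtendonk et al. 1998)
sum(band_vol[1:5]) * litter_bd   # litter mass (kg) within 1 m of the bole
sum(band_vol) * litter_bd        # litter mass (kg) within 2 m of the bole
```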
Is the EmDrive, or "Relativity Drive", possible? In 2006, New Scientist magazine published an article titled Relativity drive: The end of wings and wheels [1] about the EmDrive [Wikipedia], which stirred up a fair degree of controversy and some claims that New Scientist was engaging in pseudo-science. Since the original article, the inventor has claimed that a "Technology Transfer contract with a major US aerospace company was successfully completed", and that papers have been published by Professor Yang Juan of Northwestern Polytechnical University, Xi'an, China [2]. Furthermore, it was reported in Wired magazine that the Chinese were going to attempt to build the device. Assuming that the inventor is operating in good faith and that the device actually works, is there another explanation of the claimed resulting propulsion? 1. Direct links to the article may not work, as it seems to have been archived. 2. The abstracts provided on the EmDrive website claim that they are Chinese-language journals, which makes them very difficult to chase down and verify. electromagnetism forces waveguide David Z♦ rjzii migrated from skeptics.stackexchange.com Apr 13 '12 at 18:16. This question came from our site for scientific skepticism. 100 kg unit producing 96 millinewtons of thrust? I wouldn't call that "working". – vartec Apr 10 '12 at 13:54 @vartec - Depends upon the application; if we are talking about applications in space, then that might be enough over a long enough period of time. The HiPEP only produced 460 to 670 mN in pre-prototype testing. – rjzii Apr 10 '12 at 14:15 This belongs on Physics and it's very unlikely to get a decent answer here, in my opinion. Do you want me to migrate? – Sklivvz Apr 10 '12 at 21:43 The second part of your question would almost certainly get an answer on Physics (essentially, "no," with explanation). The first part, I'm not sure about. I think it'd be on topic for us, but there is a chance nobody on the site would be able to answer it. I will say that it would be helpful to split this into two separate posts, one for each part of the question, if it is migrated. – David Z♦ Apr 11 '12 at 19:30 Actually I am a mod on Physics - I figured I could reply here since the discussion would benefit from being public. – David Z♦ Apr 11 '12 at 19:42 It is impossible to generate momentum in a closed object without emitting something, so the drive is either not generating thrust, or throwing something backwards. There is no doubt about this. Assuming that the thrust measurement is accurate, that something could be radiation. This explanation is exceedingly unlikely, since to get a millinewton of radiation pressure you need an enormous amount of energy: in 1 s you get $1\ {\rm g\,m\,s^{-1}}$ of momentum, which in radiation can only be carried by $3 \times 10^5$ J (multiply by c), so you need 300,000 watts of power to push with a mN of force, and roughly 24 million watts for 80 mN. So, it's not radiation. But a leaky microwave cavity can heat the water vapor in the air around the object, and the heat can lead to a current of air away from the object. With an air current, you can produce mN thrusts from a relatively small amount of energy, and with a barely noticeable breeze.
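A quick sanity check on the radiation-pressure figure above, added here as a back-of-the-envelope aside rather than part of the original answer: a perfectly collimated photon beam of power $P$ carries momentum at the rate $P/c$, so sustaining a thrust $F$ by radiation alone requires

$$P = F c = (10^{-3}\ \mathrm{N})\,(3\times 10^{8}\ \mathrm{m\,s^{-1}}) = 3\times 10^{5}\ \mathrm{W},$$

that is, roughly 300 kW per millinewton of thrust, and about 24 MW for the claimed 80 mN. The air-current mechanism, quantified next, needs nothing like that much power.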
To get mN force, you need to accelerate $300\ {\rm cm^3}$ of air (1 gram) to 1 m/s every second, or to get 80 mN, accelerate $1\ {\rm m^3}$ of air (3000 g) to 0.2 m/s (barely perceptible), and this can be done with a hot-cold thermal gradient behind the device which is hard to notice. If the thrust measurements are not in error, this is the certain cause. So at best, Shawyer has invented a very inefficient and expensive fan.

EDIT: The initial tests were at atmospheric pressure. To test the fan hypothesis, an easy way is to vary the pressure; another easy way is to put dust in the air to see the air currents. The experimenters didn't do any of this (or at least didn't publish it if they did); instead, they ran the device inside a vacuum chamber but at ambient pressure, after putting it through a vacuum cycle to simulate space. This is not a vacuum test, but it can mislead one on a first read. In response to criticism of this faux-vacuum test, they did a second test in a real vacuum. This time, they used a torsion pendulum to find a teeny-tiny thrust of no relation to the first purported thrust. The second run in vacuum has completely different effects, possibly due to interactions between charge building up on the device and metallic components of the torsion pendulum, possibly due to deliberate misreporting by these folks, who didn't bother to explain what was going on in the first experiments they hyped up. Since they didn't bother to do any systematic analysis of the effect on the first run, to vary air pressure, look at air flows with dust, whatever, or if they did this they didn't bother to admit their initial error, this is not particularly honest experimental work, and there's not much point in talking about it any more. These folks are simply wasting people's time. – Ron Maimon

@BAR: I do not need to read anything to know that this claim is false. The vacuum is a unique state, and to produce thrust, you need to produce something going the other way, either air or radiation, and radiation is ruled out as I explained. The experiments are fraudulent, and your comment is gullible. – Ron Maimon Jun 1 '15 at 21:29
@VolkerSiegel: No, NASA claims it does work, which means I have more smarts than all the NASA idiots put together. – Ron Maimon Jun 2 '15 at 15:45
@BAR: Solid scientific knowledge has this property. I don't need to pick up a pencil to prove the things I am saying; I understand the theory. The gullible folks at NASA don't, so one can only feel pity for them. The claim is theoretically unsound, but it is an experimental claim, so one must look at the experiments. The experiments are also unsound, so there is no basis for this claim, and it is simply fraudulent. – Ron Maimon Jun 3 '15 at 14:17
Shawyer wrote a paper on the theory; I think he's sincere, but like a lot of electrical engineers arrogantly deluded, and doesn't understand the criticisms of professional physicists. I understand where he's coming from because I've done exactly the same, only to realize my stupid mistakes upon spending a year reading around it. Laithwaite was a professor of electrical engineering, convinced that gyroscopes held the key to weight loss: Eric Laithwaite Gyro Propulsion 1994 UK. – John McVirgo Jun 4 '15 at 0:51
This reminds me of that time we all laughed off that Einstein guy for suggesting that gravity is caused by some nonsense curve of spacetime.
We've known for centuries it's a force, lol. – Devsman Aug 31 '16 at 20:29

Shawyer's "analysis" is a mess, incoherent and deeply confused about fundamental aspects of relativity: he mixes up frames, assumes a universal rest frame, etc. The EmDrive supposedly works best when "stationary relative to the thrust", whatever that means, and Shawyer goes on to suggest using it for levitating vehicles with some kind of conventional propulsion for driving them forward: he apparently believes there is something special about gravitational acceleration. According to his latest paper, the EmDrive supposedly acts as an electric motor, consuming energy when accelerating and producing it when decelerating. However, a deceleration is just an acceleration in a particular direction, so if it worked, the EmDrive could operate as an infinite energy machine just sitting on one end in a gravity field or while producing thrust for a spacecraft. So to answer the question in the title: "No." As for other explanations of the observed propulsion, there aren't many details of the measurement procedures or results. There are videos of an EmDrive test on a rotating platform, but there are numerous pieces of equipment that may contain fans, thick power cables going to the equipment that may apply torques, and even a laptop with a hard drive that may be spinning up or down. (And on top of everything else, the whole thing's apparently rotating in the wrong direction.) If this rig is typical of his testing methodology, it's probably safe to chalk up the rest to bad measurements. – Christopher James Huff

No free energy here - conservation laws are preserved. In an ideal case it is 100% efficient but no overunity. – BAR Jun 2 '15 at 19:34
@BAR: Failure of conservation of momentum implies failure of conservation of energy in a moving frame, because energy and momentum mix up together under boosting, even nonrelativistically. – Ron Maimon Jun 6 '15 at 12:07

No. In special relativity, 4-momentum is exactly conserved. The first component of 4-momentum is total mass/energy, but the next three are given by $p = m\,\gamma(v)\,v$, where $m$ is the invariant mass: how much inertia it has when you are moving at the same velocity as it. This is Newton, except now momentum is a non-linear function of velocity. Nonlinearity does not change anything. Mass and momentum are still constant (ignoring leaks), making $\gamma(v)\,v$, and thus the center-of-mass velocity $v$, constant. So why do we measure force? Possibly currents in the waveguide walls induce currents in the metal support structure, which creates small magnetic forces between them. – Kevin Kostlan

Yes, it might actually be possible. I'm not sure it has been proven that the math shows that it can't be done. I know Einstein found a way of defining relativistic mass and momentum as a function of rest mass and velocity for any particle, and laws that conserve relativistic mass and momentum and are conserved in all frames of reference, and that momentum is even conserved in the collision of a photon with an object, assuming the particle nature of the photon. The microwave photons, on the other hand, don't behave much like a particle at all and have a wavelength that's a very significant fraction of the size of the cavity. All that's left to do is determine whether the math shows that the microwave radiation can provide thrust according to the model where it's only an electromagnetic wave and not a particle.
– Timothy

I don't see what's wrong with my answer. Has somebody who knew all the laws actually mathematically proven that momentum is exactly conserved according to those laws in the model where light is only an electromagnetic wave and not a particle? If so, how was I supposed to know that that has been done? – Timothy Nov 19 '16 at 22:13
In no way does this either 1) answer the question (more than just stating 'yes', without a helpful explanation), or 2) add anything to the existing answers. It sounds like you're trying to say, "It hasn't been shown to be impossible", which is not only misleading, but misses the crux of the controversy, and is again less complete than the already accepted answer---not to mention that an explanation of how it is possible is obviously far more interesting. Further still, there is no question whether energy/momentum is still conserved for waves as opposed to particles... – DilithiumMatrix Nov 19 '16 at 23:27
I'm pretty sure a set of laws and a way of defining relativistic mass and momentum has been found, and that set of laws has been mathematically proven to be conserved in all frames of reference and to conserve relativistic mass and momentum, but has it been confirmed experimentally that the universe actually follows those laws? Has it been mathematically proven that quantum mechanics predicts that objects will follow those laws at the macroscopic level? – Timothy Nov 21 '16 at 0:51
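As a closing numerical note (added here; it is not part of the original thread), the radiation-pressure bound quoted in the accepted answer is easy to verify: a pure photon beam needs P = F·c of power per unit of thrust.

```python
# Sanity check (added; not from the original thread): power a photon rocket
# would need for a given thrust, using F = P / c.
c = 3.0e8  # speed of light, m/s

for thrust_newtons in (1e-3, 80e-3):   # 1 mN, and the ~80 mN claimed in some tests
    power_watts = thrust_newtons * c   # P = F * c
    print(f"{thrust_newtons*1e3:.0f} mN of photon thrust needs ~{power_watts/1e6:.1f} MW")
# 1 mN -> ~0.3 MW, 80 mN -> ~24 MW: far beyond the kilowatts of microwave power
# reported, which is why the answer rules out radiation as the explanation.
```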
Pretty pictures in the complex plane

Contemplate the beauty of the Julia and Mandelbrot sets and an elegant mathematical explanation of them by Emily Clapham. Published on 18 October 2017.

Some of the greatest works of art in history have been produced by mathematicians. One fascinating source of mathematical artwork is fractals: infinitely complex shapes, with similar patterns at different scales. Fractal geometry has dramatically altered how we see the world. Technology has many uses for fractals, one of which is the production of beautiful computer graphics. These pretty pictures are used to present a large amount of information about a function in a clear and comprehensible manner, and the simplicity of the maths involved in producing these pictures is fascinating. Pretty pictures in the $z$-plane are widely used as computer graphics, book covers and even sold as works of art. Modern art studies have often been dismissive of the power of beauty in mathematics, with the idea that "beauty is not in itself sufficient to create a work of art". Mathematics produces rigid and inflexible answers, whereas art is free-moving and open to interpretation. However, it is undeniable that these pretty pictures demonstrate true beauty, not only in the images but also in the mathematics behind them.

The mathematics behind pretty pictures

Extremely simple functions can be used to produce these pictures. For example, let's consider the quadratic function $f(x)=x^2+c$, for some constant $c$. An iterative method is applied to the function. First, a seed (let's call it $x_0$) is selected to be the initial value for iteration. The output of the function is then recycled as the new input value, $x$. In this way:
\begin{align*}
x_1 &= x_0^2 + c,\\
x_2 &= x_1^2 + c = (x_0^2 + c)^2 + c,\\
x_3 &= x_2^2 + c = \cdots\\
\text{and in general, } x_n &= x_{n-1}^2 + c.
\end{align*}
We continue until the iteration either converges to a fixed point or cycle, or diverges to infinity. The orbit is the sequence of numbers generated during the process of iteration: $x_0,x_1,x_2,x_3,\ldots,x_n$. If we only apply real numbers to the quadratic function we limit the graphical representation of the iterations to a line. To produce pictures in the plane, we use complex numbers instead. The abundant beauty in the plots is somehow increased when the simplicity of the mathematics is understood.
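To make the iteration concrete before moving to the complex plane, here is a small sketch (mine, not the article's) that traces an orbit of $f(x)=x^2+c$ for a real seed with two different constants, one giving a bounded orbit and one escaping to infinity.

```python
# Illustrative sketch of the iteration described above: start from a seed x0
# and repeatedly feed the output of f(x) = x^2 + c back in as the new input.
def orbit(x0, c, n_steps=8):
    xs = [x0]
    for _ in range(n_steps):
        xs.append(xs[-1] ** 2 + c)
    return xs

# A bounded orbit (creeps up towards the fixed point x = 1/2) ...
print(orbit(0.0, c=0.25))
# ... and an escaping orbit (diverges to infinity).
print(orbit(0.0, c=0.5))
```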
Through the process of iteration, each seed will either converge or diverge, and so for a given function we can divide the plane into an escaping set $E_c =\{ z_0 : |z_n| \rightarrow \infty \, \mbox{as} \, n \rightarrow \infty \}$ (that is, all the seeds that end up at infinity) and prisoner set, where the iteration tends to a point or becomes periodic. The Julia set of a function To go from the iterative procedure described above to the vivid images to the right, we need to introduce the idea of the Julia set of a function, named after the French mathematician Gaston Julia. Julia was an extraordinary man, who tragically lost his nose while fighting in the first world war. Despite the substantial injury, he made immense progress in the field of complex iteration and published the book Mémoire sur l'itération des fonctions rationnelles in 1918, which began the study of what we now call a Julia set. The filled-in Julia set is the collection of points in the complex plane that form the prisoner set of a function, while the Julia set itself is the boundary of this region. The points within the filled-in Julia set remain bounded under the iteration since their orbits converge to an attracting point or cycle. Connected and unconnected Julia sets of the quadratic function for different values of $c$ Conventionally, when pictures of the Julia set are shown, the filled-in Julia set is shaded black and varying colours are used to show the rate at which the escaping set diverges to infinity. The Julia set is therefore the edge of the black region. Maps 1–7 above show the Julia sets of the quadratic function for different values of $c$, with the escaping set colour-coded as follows: red areas represent points that slowly escape from the set, while blue areas signify points that quickly escape to infinity. The value of the complex constant $c$ influences the shape of the Julia set. Maps 1, 4 and 5 all have black centres, which indicate that the Julia set is connected, while maps 3, 6 and 7 demonstrate unconnected sets. For these images, the Julia sets have no black regions and instead the pictures are just flurries of colour. It is not always easy to spot whether a Julia set is connected, however. In map 2, there is no obvious black region, but neither are there colourful individual flurries and instead we see a spiky line. In fact the set is connected, it is just that the filled in Julia set is so slender that the black line points are not visible in the image. During the initial study of these sets, a fascinating criterion for connectivity was discovered concerning the critical point, $z_0=0$. If the critical point is used as the seed, we produce the critical orbit, which is bounded if and only if the Julia set is connected. Fractal patterns appear in all plots, apart from when $c=0$ or $-2$. The picture below displays examples of magnified sections of the fractals, for $c=-0.7$ (maps 9–12), $c=-0.12 + 0.75 \,\mathrm{i}$ (maps 13–16), $c=0.1 + 0.7 \, \mathrm{i} $ (maps 17–20) and $c=-0.1 + 1 \, \mathrm{i} $ (maps 21–24). Each enhancement of a section produces what appears to be copies of the whole section, not just in overall shape but also with smaller embellishments on every "limb". For connected plots, these fractals appear as loopy ovals and circles or thin, almost stick-like, sections. For disconnected plots, however, the fractals are grouped together in intricate floral patterns, revealing the same shape and pattern with each level of magnification. 
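The classification just described, prisoners coloured black and escapees coloured by how quickly they leave, is easy to prototype. The rough sketch below is my own illustration rather than anything from the article: it uses one of the constants from the maps above and a deliberately coarse text grid in place of a real image.

```python
# My own rough illustration (not the article's code): classify seeds z0 for
# f(z) = z^2 + c as prisoners (bounded orbit, filled-in Julia set, shown "#")
# or escapees (shown "."), mirroring the black/coloured scheme described above.
import numpy as np

def escape_time(z0, c, max_iter=50, radius=2.0):
    z = z0
    for n in range(max_iter):
        if abs(z) > radius:
            return n              # escaped after n steps (would set the colour)
        z = z * z + c
    return None                   # still bounded: treat as a prisoner

c = -0.12 + 0.75j                 # one of the constants used for the maps above
xs = np.linspace(-1.5, 1.5, 31)
ys = np.linspace(-1.2, 1.2, 15)
rows = []
for y in ys:
    rows.append("".join("#" if escape_time(x + 1j*y, c) is None else "." for x in xs))
print("\n".join(rows))
```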
Magnified sections of fractals for different values of $c$.

Prior to computer technology, Julia had to rely on his imagination and manually carry out the iterations by hand. Fifty years later, another mathematician applied modern computing power to plot these pretty pictures, finally showing the sets in all their beauty… The Mandelbrot set is named after the Polish mathematician Benoit B Mandelbrot, known for being the founder of fractal geometry. The word fractal is derived from the Latin fractus, which means broken, and describes the shape of a stone after it has been smashed. Mandelbrot discovered that fractals appear not only in mathematics but also in nature, through crystal formation, the growth of plants and landscapes, as well as in the structure of the human body. In 1945, Mandelbrot read Julia's 1918 book. He was fascinated and, with the aid of computer graphics, was able to show that Julia's work contained some of the most beautiful fractals known today.

To create the Mandelbrot set, each complex value of $c$ is used as the constant term in the quadratic function $f(z)=z^2+c$ and iterated with the critical point $z_0=0$ as the seed. If the orbit escapes to infinity, the number of iterations taken for the modulus of the function to exceed a specified value is used to decide on the colour of the map at that point, $c$. Otherwise, when the orbit converges, the point is coloured black. The Mandelbrot set is the set of black points. For example, if we let $c=-0.15+0.3\,\mathrm{i}$ then we have the complex quadratic function $f(z)=z^2-0.15+0.3\,\mathrm{i}$. We start with $z_0=0$ as the seed and the sequence of iteration (to 5 significant figures) is as follows:
\begin{align*}
z_1&=0^2 -0.15 +0.3\,\mathrm{i} &&\Rightarrow &z_1&= -0.15 +0.3\,\mathrm{i},\\
z_2&=(-0.15+0.3\,\mathrm{i})^2-0.15+0.3\,\mathrm{i} &&\Rightarrow &z_2&= -0.2175 +0.21\,\mathrm{i},\\
&&&&z_3&=-0.14679+0.20865\,\mathrm{i},\\
&&&&z_5&=-0.17742+0.21788\,\mathrm{i}.
\end{align*}
Continuing to 30 iterations, the orbit has not escaped to infinity and instead converges to the point $z=-0.17082+0.22361\,\mathrm{i}$ (again to 5 significant figures). Therefore, $c=-0.15+0.3\,\mathrm{i}$ is within the Mandelbrot set and is coloured black. On the other hand, if we take $c=-1.85+1.2\,\mathrm{i}$, and hence the complex quadratic function $f(z)=z^2-1.85+1.2\,\mathrm{i}$, then the sequence of iterations (to 5 sf) is as follows:
\begin{align*}
z_2&=(-1.85 +1.2\,\mathrm{i})^2-1.85 +1.2\,\mathrm{i} &&\Rightarrow &z_2&= 0.1325 -3.24\,\mathrm{i},\\
&&&&z_3&= -12.33+0.3414\,\mathrm{i},\\
&&&&z_4&= 150.06 - 7.2189\,\mathrm{i},\\
&&&&z_5&= 22465 - 2165.4\,\mathrm{i}.
\end{align*}
If the modulus of $z$ exceeds 100, then it has been proven that the orbit escapes to infinity. This occurs on the fourth iteration, so the colour chosen to represent the value of 4 would be plotted at the point $(-1.85,1.2)$ in the complex plane. The resulting image is shown in map 8, and also in the picture to the left.

The characteristic segments of the Mandelbrot set.

The largest segment of the set is called the cardioid due to its heart-like shape. Attached to this are adornments called bulbs, upon closer inspection of which it is possible to see many smaller, somewhat similar, embellishments. The bulbs are not completely identical, although most exhibit a similar shape, and the main differences can be seen in their filaments.
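The two worked examples above are easy to check numerically. This is my own small escape-time script, not the author's code, and it reproduces both results: the first constant never escapes, the second crosses the threshold on the fourth iteration.

```python
# Escape-time check reproducing the worked examples: iterate z <- z^2 + c from
# z0 = 0 and report how quickly |z| exceeds the escape threshold, if it does.
def escape_iterations(c, max_iter=30, threshold=100.0):
    z = 0 + 0j
    for n in range(1, max_iter + 1):
        z = z * z + c
        if abs(z) > threshold:
            return n          # escaped: c is outside the Mandelbrot set
    return None               # never escaped: c is (numerically) inside

print(escape_iterations(-0.15 + 0.3j))   # None -> inside, coloured black
print(escape_iterations(-1.85 + 1.2j))   # 4    -> escapes on the fourth iteration
```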
The filaments are the thin strings of bounded points that sprout like sticks from the tops of the bulbs. These sticks are extremely narrow and they appear to be coloured red, which would indicate they are not part of the set. However, if we were to zoom in closer on these regions, we would actually see black lines! The self-similarity of the Mandelbrot set The Mandelbrot set is self-similar, consisting of miniature Mandelbrot sets within the boundary of the largest set. By enhancing the filaments, smaller copies of the overall set appear in 'Russian-doll' like fashion, as seen in maps 26–30 above. Closer inspection of map 27 shows many more self-similar sets within the filaments around the perimeter of the Mandelbrot set. Magnifying the small copies of these Mandelbrot sets would yield infinite layers of self-similar sets. Other fascinating and intricate shapes occur, for example the "seahorse valley" that is visible in maps 31–34 above. By enhancing the plot within this region we see two rows of seahorse shaped embellishments, each with "eyes" and "tails". Further magnification of the "eyes" reveals spiral constellations of more "seahorses". Connection between Julia sets and the Mandelbrot set The orbit of the critical point $z_0=0$ can be used to test the connectivity of the Julia set, and the Mandelbrot set shows the boundedness of these critical orbits. Hence, the Mandelbrot set itself indicates the connectivity of the Julia sets of all the different complex quadratics. The Mandelbrot set can be described as $M = \{ c \in \mathbb{C} \, | \, J_c \, \mbox{is connected}\} $, where $J_c$ is the Julia set of the function $z\hspace{0.3mm}^2+c$. The Julia set is a connected structure if $c$ is within the Mandelbrot set, and will be broken into an infinite number of pieces if $c$ lies outside the Mandelbrot set. The cardioid-shaped main body contains all values of $c$ for which the Julia set is roughly a deformed circle (figure below: maps 35, 37, 38 and 40). The values of $c$ which lie in a bulb of the Mandelbrot set produce a Julia set consisting of multiple deformed circles surrounding the points of a periodic attractor. The number of subsections sprouting from a point on the Julia set is equal to the period of the bulb in the Mandelbrot set (below; maps 36, 39, 41–44). A specific Julia set can be defined by a point in the Mandelbrot set The nature of the convergence of points within the Mandelbrot set depends on the segment in which the point resides. Seeds within the cardioid converge to an attractive point, whereas orbits starting in the bulb lead to an attracting cycle. Three particularly interesting cases of Julia sets are shown below. The first is when $c=0$, where the filled-in Julia set comprises of all the values within the unit circle (circle of radius 1, centred on the origin) and each of these points converges to $0$ when iterated. The Julia set is the boundary of the circle, the points of which, when iterated, remain on the boundary. Three remarkable examples of the Julia set with $c=0$, $c=\mathrm{i}$ and $c=-2$ The second interesting case is when $c=\mathrm{i}$. Here, the Julia set is a dendrite, meaning there are no interior points. Instead, the set is just a branch of points. For this complex constant the dendrite is a single line in an almost lightning-bolt shape. The final case is $c=-2$, where the Julia set is a dendrite that lies directly on the horizontal axis between $-2<x<2$. 
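The connectivity criterion described above also translates directly into a few lines of code. The sketch below is my own illustration: it tests whether the critical orbit stays bounded, which is exactly the condition for the Julia set to be connected, and it includes the three special cases just discussed plus one of the other constants used for the magnified maps.

```python
# Sketch of the connectivity criterion: the Julia set of z^2 + c is connected
# exactly when the critical orbit (seeded at z0 = 0) stays bounded, i.e. when
# c lies in the Mandelbrot set.
def julia_set_connected(c, max_iter=200, radius=2.0):
    z = 0 + 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > radius:
            return False      # critical orbit escapes -> Julia set is dust
    return True               # bounded (numerically) -> Julia set is connected

for c in (0j, 1j, -2 + 0j, 0.1 + 0.7j):
    print(c, julia_set_connected(c))
```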
Explore the sets yourself

I hope to have displayed the beauty behind these pictures by emphasising the extraordinary quantity of information contained in such a simple procedure, as well as through highlighting the complexity of each image, in the variety of fractals and colours visible, which further enhances the beauty. If this article has sparked an interest in fractals, then why not try exploring these sets for yourself? You could do this by magnifying different sections of the Mandelbrot set to explore the countless shapes and patterns that exist within. You could also go deeper into exploring individual orbits. All of these pictures are generated using a simple quadratic formula. However, the Julia and Mandelbrot sets can be produced for a wide variety of functions in a similar manner to obtain countless pretty pictures. These images are already becoming dated, having been taken for granted for so many years since they were first produced on the big bulky computers of the 1980s. The Julia set of the quadratic function, and the corresponding Mandelbrot set, could be inspiration for pretty pictures which are yet to be fully explored, or even discovered. Largely, the discoveries discussed here have been recorded in recent years. Furthermore, there could still be vast amounts of information within these sets that are yet to be discovered. Could you be the one to make a discovery?

Emily Clapham is a mathematics graduate from Sheffield Hallam University.
She is taking a year out to write GCSE study revision guides, before pursuing a career in teaching. Emily is hoping to interest more children in taking STEM subjects.
12 July 2019 / 18 min read / Business Intelligence, Lean Analytics Lean Analytics Part 2: The Stages of a Data Driven Startup by Cedric Chin In Part 1 of our Lean Analytics summary, we covered the basics of analytical thinking, and provided you with a summary sheet of six basic tech-company business models. In Part 2, we'll be discussing the metrics and thinking associated with the five stages of the lean analytics framework. To recap, Lean Analytics depends on the following five stages: Empathy — You're looking for a real, poorly-met need that can be found in a reachable market. Once you do so, you're figuring out how to solve their problem in a way customers will accept and pay for. Stickiness — You're looking for the right mix of products/features/functionality that will keep users around. Virality — You're looking for ways to fuel growth organically and artificially. Revenue — You're looking for a scalable and sustainable business with the right margins in a healthy ecosystem. Scale — You're looking to scale up the business on all fronts. Croll and Yoskovitz assert that most startups go through the five stages elucidated here, and they argue that each stage works as a 'gating mechanism' — that is, only once you've passed through one of these stages should you move on to the next. But the authors also admit that these are generalisations, and you should pick only what is applicable to your company. Objections on this point are common. For instance, I squint suspiciously on the inclusion of 'virality' as an entire stage, instead of a discussion on broader acquisition methods and their metrics, but it appears Croll and Yoskovitz have made the decision to focus on virality to the exclusion of other methods of customer acquisition … for better or worse. We'll go through each of these stages below. Stage One: Empathy The first stage of a startup's life is what Croll and Yoskovitz call the 'empathy stage'. The founders are concerned with three things: Is the problem they intend to solve painful enough? Do enough people care? Are people already trying to solve it? These three questions are fundamentally qualitative in nature, and so the metrics that you'll collect in this stage will be qualitative as well. You will conduct problem interviews, storyboard and brainstorm possible solutions, watch potential customers for body language and signs that the opportunity is worth pursuing. The goal of this stage is to locate a problem that is painful enough and lucrative enough to solve, in order to build a company around it. A lot has been written about the difficulties of this stage, however. As a taste test: When interviewing potential customers, people will tell you what you wish to hear — often subconsciously — because they want to make you happy. The ability to listen for real problems (without bias!) is very difficult. It requires empathy and practice. Validating a problem isn't sufficient to build a company — the problem could be real and painful but the customer might not be motivated to pay enough for it to support a viable business. While Croll and Yoskovitz cover some of the techniques that exist for this stage, my recommendation is to skip this section entirely and go read a book titled The Mom Test by Rob Fitzpatrick. Better still, read this summary, for a fraction of the time. Stage Two: Stickiness The goal of stage two is to validate that you have a solution that is useful enough and sticky enough to keep people using it. 
The metric that matters here is retention, which the authors call 'stickiness'. More specifically, Croll and Yoskovitz argue that you should focus on cohort stickiness. This could be usage frequency (if you are a social media app, for instance, or a SaaS offering), or some other metric that indicates or promotes engagement. For instance, the authors talked about a qidiq, a (now defunct, it should be noted) group survey tool. The founders realise they could get better engagement if they let users respond to surveys without an account, and chose to make response rate their One Metric That Matters (OMTM). Croll and Yoskovitz note that you shouldn't care about acquisition at this stage. This is akin to optimising water flow into a leaky bucket. You shouldn't turn the faucet until you're sure that your solution is good enough to get users to stick around. The authors also suggest two principles to assist with your pursuit of stickiness: You should be willing to iterate on anything — and they really mean anything … you should be willing to experiment with changing your business model or your acquisition plan in pursuit of higher stickiness. These aren't normally elements that you'll think of changing in the pursuit of engagement, but Croll and Yoskovitz point out that you don't know until you experiment. At this early stage, everything's up for consideration. You should figure out the simplest, least-friction path to aha for your customer — the aha moment here is the moment that your customer understands the value your product offers them. Once they do so, it's highly likely that they'll stick around. Croll and Yoskovitz relay a conventional piece of product management wisdom: when you're designing and building a product, you want to get the user to that moment as quickly as possible. More concretely, figure out that paths to aha, and figure out how to get new users on that path. A common objection at this point is that certain products require a minimum number of users in order to present compelling value propositions. This is especially true for products with network effects. For instance, Facebook, Yelp and LinkedIn require a minimum number of people to be compelling for new users to join. Croll and Yoskovitz argue that optimising for users is really expensive, and therefore dangerous if you're not sure your product is engaging enough to hold on to them. There are typically two playbooks for dealing with network effects: You design for 'single player mode' — that is, your product should deliver value even without a social graph. This was the strategy that Instagram used, as filters provided a compelling reason to use the app, even in the nascent days of the network. You grow roots in small, targeted communities before expanding outwards — this is the more tenable option for products that really depend on network efforts to create value. The quintessential example here is Facebook, which focused on college campuses in the early years, and Yelp, which focused on night clubs in a tiny handful of cities, before expanding services to other establishments. The final tool that Lean Analytics gives us is something called a 'Problem/Solution Canvas'. It looks like this: The Problem/Solution canvas is meant to be filled in on a weekly basis. The goal? To provide some structure to the cadence of experimentation and execution. Step Three: Virality Step three is when you've finally built a product that's sticky enough, and therefore valuable enough, to begin optimising for growth. 
While Lean Analytics focuses on virality factors in this stage above all others, I take this stage to mean 'turn on the faucet and optimise for user acquisition'. This could mean experimenting with acquisition channels like ads, content marketing, or lead gen, or it could just mean … well, simple virality. Croll and Yoskovitz open their discussion of this stage with the idea that virality is really divided into three types: Inherent virality — this is the virality spread that comes from natural usage of the product. For instance, collaborating on Google Docs, or sharing a file via Dropbox. Artificial virality — this is the virality that comes from gamification and incentives. Dropbox is famous for this — if you invite a new user, both you and the new user are rewarded with extra storage. Word-of-Mouth virality — this is the spontaneous virality that emerges from when users tell others about your product. Not all products benefit from inherent virality, or may be augmented to benefit from artificial virality. But nearly all good products will benefit form some word-of-mouth. Lean Analytics also asserts that having some amount of virality will augment your natural marketing spend efforts. This allows you to 'get the biggest bang for your marketing buck.' The actual question of measuring virality reduces to just two key metrics: The Viral Coefficient — this is the number of new customers that each existing customer is able to successfully convert. To calculate this metric, you first calculate the invitation rate, which is the number of invites sent divided by the number of users you have. Then, you calculate the acceptance rate, which is the number of signups or enrolments divided by the number of invites. Finally, you multiply the two together. What you're looking for is a number that is > 1, because that means that every new user is inviting at least another user. The Viral Cycle Time — this is the amount of time it takes for a user to invite others. Cycle time makes a huge difference, so much so that investor David Skok argues that it's more important than viral coefficient. To wit: "After 20 days with a cycle time of two days, you'll have 20,470 users, but if you halved that cycle time to one day, you would have over 20 million users!" These two metrics in turn imply that you may do a small handful of things to increase your virality: You may focus on increasing the invite acceptance rate. You can try to extend the lifetime of the customer so they have more time to invite people (this is a laudable goal regardless of virality; see stage two above). You can try to shorten the cycle time for invitations, to get growth faster. Or you can work on convincing customers to invite more people. Croll and Yoskovitz do acknowledge that such measures don't apply to enterprise companies, but argue that a good substitution for this is to measure Net Promoter Score. Worth mentioning at this stage is the idea of Growth Hacking, which Lean Analytics describes as a 'data-driven approach to marketing'. If we look past the buzzwords, however, growth hacking is basically the act of finding a leading-indicator that is correlated to some core business metric, and then attacking that leading indicator in order to increase the odds of improving that core business metric. Here are several examples of famous tech companies, and the leading indicators they've identified in their businesses: Facebook: a user would become engaged if she reached seven friends within 10 days of creating an account. 
This means that Facebook should do everything possible to get a new user to that amount of friends within the first 10 days. Zynga: if a user came back the day after she signed up for a game, she was likely to become an engaged user (and eventually pay for in-game purchases). Dropbox: a user is more likely to become engaged if she puts at least one file in one folder on one of her devices. LinkedIn: there is a correlation between the number of connections a user creates in a certain number of days and the probability of longer-term engagement. This is a lot easier said than done, of course. Leading indicators — especially leading indicators that are correlated to eventual positive outcomes, rely on finding patterns in data. The key method to doing this is to segment users into those who stuck around and those who didn't, and then go spelunking in the data for both segments in order to identify what they have in common. To that end, Lean Analytics points out that good leading indicators have a few common characteristics: They tend to relate to social engagement, content creation, or return frequency. The leading indicator must be clearly tied to some core business model metric. For instance, user engagement is crucial for the eventual profitability of user generated content (UGC) sites. So anything that serves as a predictor for high levels of future engagement should be prioritised. The indicator should come early in the user's lifecycle or conversion funnel. This is based on a simple calculation: assuming that most users drop off over time in the early days of your product, you should focus your attention on the earliest parts of your funnel where you have the most users, because that's where you'll have to most data points to consider. Good indicators should also be an early extrapolation. A good example of this is the e-commerce business model. An e-commerce business can be in 'loyalty' mode (repeat buys) or 'acquisition' mode (one-time only purchases). Instead of waiting a whole year to understand what mode the business is in, just look at the first 90 days and extrapolate from there, in order to pick an execution strategy. Stage Four: Revenue This chapter contains one of the best analogies of the book. Imagine you have a little machine, Croll and Yoskovitz write. On one end of the machine, you put in a penny. The machine whirs and clunks and puffs, and then after a couple of seconds, a nickel comes out on the other end of the machine. "Do that again!" the VCs roar. So you do it again. "No tricks?" An analyst asks. You open up the machine to show that no nickels are hidden inside. "How quickly can pennies be turned into nickels?" You tell them that the machine takes five seconds to cool down, so you can make \$36 nickels an hour, for a \$28.80 profit, with an 80% margin. This 'penny machine', as Croll and Yoskovitz call it, is in essence what a business is. At its most abstract level, a business is a thing that takes in money on one end, and turns it to more money on the other end. The financial metrics that investors and businesspeople ask for are the means of measuring this conversion of money into more money. When you hear someone talking about 'unit economics', the penny machine is essentially what it devolves into: for each dollar you put into the business, how much do you make? For SaaS businesses, there is a particular formula that you can use to determine if you are healthy. 
You need three numbers: Your quarterly recurring revenue for quarter x (QRR[x]) Your quarterly recurring revenue for the quarter before x (QRR[x-1]) And your sales and marketing expense for the quarter before x (QExpSM[x-1]) (If you don't have quarterly sales and marketing spending, you can simply take annual spending and divide it by four.) The final step is to put the three numbers together. You're going to divide how much you changed your recurring revenue in the past quarter by what it cost you to do so, as measured by sales and marketing spend. The formula looks like this: $$\frac{QRR[X]-QRR[x-1]}{QExpSM[x-1]}$$ If you get a value below 0.75, you have a problem. What you'd like to aim for is a value that is higher than 1. Since this ratio measures the growth that comes from marketing and sales spend from the previous quarter, a number above 1 indicates that each dollar you put in is returning more, while a ratio below 1 tells you that each dollar that goes into sales and marketing spend is returning less than that dollar you put in. In essence, this formula gives you a quick and dirty measure of your unit economics. Revenue Growth … The Coca Cola Way Croll and Yoskovitz return to the quote by Coca-Cola CMO Sergio Zyman's quote on marketing that I covered in Part 1 of my summary: "marketing is about selling more stuff to more people more often for more money more efficiently." They assert that growing your company at the revenue stage, assuming you have product-market fit, is a matter of chasing down one of these factors, as it applies to your company: Selling things more efficiently applies if you're dependent on physical, per-transaction costs (like direct sales, shipping products to a buyer, or signing up merchants). You don't want your margins to get away from you. Focusing on adding more people applies if you've found a high viral coefficient, because you've got a strong force multiplier added to every dollar you pour into customer acquisition. Getting repeat purchases more often makes sense if you have a loyal set of returning customers. If your model relies on one-time, big-ticket transactions, then more money per transaction will probably help more, since you've only got one chance to extract revenue from the customer, and need to leave as little money on the table as possible. If you have a subscription model, and you're fighting churn, then upselling customers to higher-capacity, higher-price packages with more features will be a key part of your strategy. This means a focus on more stuff. Apart from sources of growth, one key idea at the revenue stage is breaking even. Here, Lean Analytics lists a number of ways to think about measuring breakeven: Breakeven on variable costs — If you are a venture-funded startup, you're probably spending more on growth than you're making on revenue. Croll and Yoskovitz say that this is ok: VCs want startups they fund to make a 10x return, so they aren't that interested in breakeven companies that turn a profit. Instead, a good measure to use is getting to breakeven on variable costs — you want the money you make from each customer to exceed the cost of acquiring and delivering your service to the customer. This means that you're only spending on fixed costs — things like hiring, rent, building new features, and so on, but each additional customer you add isn't costing you anything. Time to customer breakeven — A key measurement of successful revenue growth is whether the customer LTV exceeds the CAC. 
But how quickly you get to customer breakeven is also an important factor to consider. A company that takes three months to recoup the costs of acquiring a new customer would do much better than a company that takes a year to recoup such costs. EBITDA breakeven — The authors of Lean Analytics note that EBITDA (earnings before income tax, depreciation, and amortisation) fell out of favour in the dot-com bust. But EBITDA breakeven is worth considering in today's startup world, where the majority of costs in a tech startup are pay-as-you-go costs like cloud computing, as opposed to up-front capital expenditure. Hibernation breakeven — This is sometimes known as 'ramen profitable', though the more rigorous definition is 'the business is able to continue growing without new marketing spend; all growth comes from word-of-mouth or virality, and customers don't get new features'. Once you hit this point, it's commonly understood that you're the 'master of your own destiny', because you can survive indefinitely. Stage Five: Scale This chapter is the weakest of all the chapters in Lean Analytics. The authors admit that lean methods apply more cleanly in the early stages of a startup's life. Once you reach the scale stage, your metrics begin to resemble those that large companies use to measure and track performance. You begin to be more concerned with competition, and strategic advantages. You're no longer figuring things out — you're now fighting to win. In my view, Croll and Yoskovitz only present two interesting ideas in this section. The first is Michael Porter's Hole in the Middle Problem. The second is The Three Three's Model. The Hole in the Middle Problem In the 80s, Harvard Professor Michael Porter noticed that firms with a large market share (like Apple, Costco, and Amazon) were often profitable, and so were those with a small market share (like your neighbourhood coffee shop). The problem was companies that were neither small nor large. He termed this the 'hole in the middle' problem. The challenge that medium-sized businesses faced was that they were not large enough to benefit from cost or scale advantages, but they were too large to benefit from a niche strategy. Porter argued that such firms (based on his 'generic strategies' framework) needed to make the shift from differentiated competitor to survive the midsize gap, and then achieve scale and efficiency as a large competitor. Croll and Yoskovitz bring Porter's 'hole in the middle' problem simply to illustrate that a startup isn't out of the woods yet, even if they hit the Scale stage. The Three Three's Model The Three Three's Model is simpler. Croll and Yoskovitz write: (…) By now, you're a bigger organization. You're worrying about more people, doing more things, in more ways. It's easy to get distracted. So we'd like to propose a simple way of focusing on metrics that gives you the ability to change while avoiding the back-and-forth whipsawing that can come from management-by-opinion. We call it the Three-Threes Model. It's really the organisational implementation of the Problem-Solution Canvas we saw in Chapter 16. The Three-Threes Model At this stage, you probably have three tiers of management. There's the board and founders, focused on strategic issues and major shifts, meeting monthly or quarterly. There's the executive team, focused on tactics and oversight, meeting weekly. And there's the rank-and-file, focused on execution, and meeting daily. 
Don't get us wrong: for many startups, the same people may be at all three of these meetings. It's just that you'll have very different mindsets as a board than you will as the person who's writing code, stuffing boxes, or negotiating a sale. We've also found that it's hard to keep more than three things in your mind at once. But if you can limit what you're working on to just three big things, then everyone in the company knows what they're doing and why they're doing it. The reason Lean Analytics does so badly on the scale stage is because it attempts to be a generic guide to analytics — something that no longer works when a startup hits the big leagues. At this point, industry dynamics do matter, and neither Croll nor Yoskovitz have the ability to give good advice about the challenges of competing at this stage. They default to generalities, and apart from saying 'oh, scaling is important!' do not have much to offer us. Lean Analytics Inside a Larger Company I thought Croll and Yoskovitz's adaptation of Kelly's 14 Rules & Practices for Skunk Works to be pretty useful, and am reproducing it here for completeness. The authors argue that this approach to building a skunk works effort is particularly applicable when you're attempting to launch a lean initiative within a larger company. They use the term 'intrapreneurship', but I don't like the buzzwordy name; I also believe this is generally applicable to mid-sized startups. The rules are as follows: If you're setting out to break rules, you need the responsibility for making changes happen — and the authority that can come only from high-level buy-in. Get an executive sponsor, and make sure everyone else knows that you've got one. Insist on access both to resources within the host company and to real customers. You'll probably need the permission of the support and sales teams to do this. This is going to be difficult, depending on the org you're in, but insist on it anyway. Build a small, agile team of high performers who aren't risk-averse, and who lean toward action. If you can't put together such a team, it's a sign you don't really have the executive buy-in you thought you did. Use tools that can handle rapid change. Rent instead of buy. Favour on-demand tech like cloud-computing, and opex over capex. Don't get bogged down in meetings, keep the reporting you do simple and consistent, but be disciplined about recording progress in a way that can be analysed later on. Keep the data current, and don't try to hide things from the org. Consider the total cost of the innovation you're working on, not just the short-term costs. Don't be afraid to choose new suppliers if they're better, but also leverage the scale and existing contracts of the host org when it makes sense. Streamline the testing process, and make sure the components of your new product are themselves reliable. Don't reinvent the wheel. Build on building blocks that already exist, particularly in early versions. Eat your own dog food, and get face-time with end users, rather than delegating testing and market research to others. Agree on goals and success criteria before starting the project. This is essential for buy-in from execs, but also reduces confusion and avoids feature creep and shifting goals. Make sure you have access to funds and working capital without a lot of paperwork and the need to 'resell' people midway through the project. 
Get day-to-day interaction with customers, or at the very least, a close proxy to the customer such as someone in support or post-sales, to avoid miscommunication and confusion. Limit access to the team by outsiders as much as possible. Don't poison the team with naysayers, and don't leak half-finished ideas to the company before they're properly tested. Reward performance based on results, and get ready to break the normal compensation models. After all, you're trying to keep entrepreneurs within a company, and if they're talented, they could leave to do their own thing.

Building a Data Driven Culture

Lean Analytics does have one last interesting section: Part 4, on creating a data-driven culture in your organisation. Croll and Yoskovitz have several general tips for doing so:

Start Small, Pick One Thing, and Show Value — There will always be naysayers in an org who believe that instincts, gut and 'the way we've always done things' are good enough. The best thing you can do is to pick a small but significant problem that your company faces and solve it through analytics. This shouldn't be the most crucial issue your company is facing — that's likely got too many cooks in the kitchen already (or is mired in politics). Instead, pick an ancillary issue that you can show quick progress with.

Make Sure Goals are Clearly Understood — Any analytics project you take on needs to have clear goals. If you don't have one (which includes a 'line-in-the-sand' goal), you'll fail.

Get Executive Buy-In — Unless you're the CEO and pushing this approach top-down, you'll need an exec's buy-in. Win one exec over, and let the culture spread out from there.

Make Metrics Simple to Digest — Remember the best practices from Part 1 of our Lean Analytics summary: pick a ratio, don't present too many metrics, and make sure everything you present is actionable.

Ensure Transparency — If you're going to use data to make decisions, it's important that you share the data and the methodologies used to acquire and process it. The goal here is to build trust. Which, finally, leads us to:

Don't Eliminate Your Gut — Remember that analytics isn't about replacing your intuition; it's about proving your intuitions right or wrong.

Lean Analytics is a long read, and attempts to do many things — too many, in this writer's opinion. One thing that I can't discount, however: Croll and Yoskovitz's work serves as an accessible guide to the basics of startup analytics. And thanks to this summary, you'll never need to read the full book to benefit from it.

Cedric Chin, staff writer at Holistics. Enjoys Python, coffee, green tea, and cats. I'd love to talk to you about the future of business intelligence!
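To make the two formulas quoted in this summary concrete (the viral coefficient from the virality stage and the SaaS growth-efficiency ratio from the revenue stage), here is a small illustrative script. The numbers are invented; this is a sketch of the arithmetic, not code from the book or from Holistics.

```python
# Illustrative recap of the two formulas quoted in this summary, with made-up
# numbers.

def viral_coefficient(users, invites_sent, invites_accepted):
    """Invitation rate x acceptance rate: new customers per existing customer."""
    invitation_rate = invites_sent / users
    acceptance_rate = invites_accepted / invites_sent
    return invitation_rate * acceptance_rate

def saas_growth_efficiency(qrr_now, qrr_prev, sales_marketing_prev):
    """(QRR[x] - QRR[x-1]) / QExpSM[x-1]; below 0.75 signals a problem."""
    return (qrr_now - qrr_prev) / sales_marketing_prev

print(viral_coefficient(users=10_000, invites_sent=4_000, invites_accepted=1_200))  # 0.12
print(saas_growth_efficiency(qrr_now=550_000, qrr_prev=400_000,
                             sales_marketing_prev=120_000))                         # 1.25
```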
arXiv:1910.06345 (astro-ph.GA), Astrophysics of Galaxies

Title: The universal acceleration scale from stellar feedback
Authors: Michael Y. Grudić, Michael Boylan-Kolchin, Claude-André Faucher-Giguère, Philip F. Hopkins
(Submitted on 14 Oct 2019 (v1), last revised 10 Aug 2020 (this version, v3))

Abstract: It has been established for decades that rotation curves deviate from the Newtonian gravity expectation given baryons alone below a characteristic acceleration scale $g_{\dagger}\sim 10^{-8}\,\rm{cm\,s^{-2}}$, a scale promoted to a new fundamental constant in MOND. In recent years, theoretical and observational studies have shown that the star formation efficiency (SFE) of dense gas scales with surface density, SFE $\sim \Sigma/\Sigma_{\rm crit}$ with $\Sigma_{\rm crit} \sim \langle\dot{p}/m_{\ast}\rangle/(\pi\,G)\sim 1000\,\rm{M_{\odot}\,pc^{-2}}$ (where $\langle \dot{p}/m_{\ast}\rangle$ is the momentum flux output by stellar feedback per unit stellar mass in a young stellar population). We argue that the SFE, more generally, should scale with the local gravitational acceleration, i.e. that SFE $\sim g_{\rm tot}/g_\mathrm{crit} \equiv (G\,M_{\rm tot}/R^{2}) / \langle\dot{p}/m_{\ast}\rangle$, where $M_{\rm tot}$ is the total gravitating mass and $g_\mathrm{crit}=\langle\dot{p}/m_{\ast}\rangle = \pi\,G\,\Sigma_{\rm crit} \approx 10^{-8}\,\rm{cm\,s^{-2}} \approx g_{\dagger}$. Hence the observed $g_\dagger$ may correspond to the characteristic acceleration scale above which stellar feedback cannot prevent efficient star formation, and baryons will eventually come to dominate. We further show how this may give rise to the observed acceleration scaling $g_{\rm obs}\sim(g_{\rm baryon}\,g_{\dagger})^{1/2}$ (where $g_{\rm baryon}$ is the acceleration due to baryons alone) and flat rotation curves. The derived characteristic acceleration $g_{\dagger}$ can be expressed in terms of fundamental constants (gravitational constant, proton mass, and Thomson cross section): $g_{\dagger}\sim 0.1\,G\,m_{p}/\sigma_{\rm T}$.

Comments: corrected MNRAS version
Subjects: Astrophysics of Galaxies (astro-ph.GA)
DOI: 10.1093/mnrasl/slaa103
Cite as: arXiv:1910.06345 [astro-ph.GA] (or arXiv:1910.06345v3 [astro-ph.GA] for this version)
Submission history: [v1] Mon, 14 Oct 2019; [v2] Mon, 22 Jun 2020; [v3] Mon, 10 Aug 2020
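The abstract's closing expression is easy to sanity-check numerically. The short script below (added here for illustration, using standard CODATA-level constant values) confirms that $0.1\,G\,m_p/\sigma_T$ lands near the quoted $10^{-8}\,\rm{cm\,s^{-2}}$ scale.

```python
# Quick check of the claim g_dagger ~ 0.1 * G * m_p / sigma_T ~ 1e-8 cm s^-2.
G       = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
m_p     = 1.673e-27   # proton mass, kg
sigma_T = 6.652e-29   # Thomson cross section, m^2

g_dagger = 0.1 * G * m_p / sigma_T                           # m s^-2
print(f"{g_dagger:.2e} m/s^2 = {g_dagger*100:.2e} cm/s^2")   # ~1.7e-8 cm s^-2
```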
World Science Scholars 1.2 From Einstein to LIGO Albert Einstein discovered the field equations of general relativity in 1915. Before general relativity, gravity was understood in Newtonian terms as an attractive force between two objects. This force is proportional to the product of their masses and falls off with the square of the distance between them. Newton was famously troubled by the problem of instantaneous transmission of the force of gravity. He was right to be worried about this: with Einstein's proposal of special relativity in 1905 came the so-called "universal speed limit"—the speed of light—which applies to all forms of information and energy, including gravity. Einstein's general theory of relativity, which developed out of special relativity, does not have a gravitational force at all. It is a theory in which space is deformed by mass, and masses move because of the deformations in space. In 1916 Einstein wrote his first paper on gravitational waves. Einstein's original work on gravitational waves had some errors, including the idea that any moving matter could produce them. It turns out that only motion that is not spherically symmetric produces gravitational waves. Nonetheless, his original paper made a number of correct predictions about gravitational waves: they are strains in spacetime, they propagate at the speed of light, and they are transverse waves. Transverse waves cause distortions perpendicular to their direction of motion. A ripple on a pond is one example—the wave moves out horizontally, causing vertical distortions to the water's surface. Gravitational waves are also quadrupolar, which means that the strain acts differently in different directions: one transverse direction is compressed while the perpendicular direction is stretched, and these effects oscillate back and forth between the two directions. The strain ($h$) is a dimensionless value that relates the change in separation between two points ($\Delta L$) to the total separation between them ($L$) by this equation: $$h = \Delta L/L $$ It takes a tremendous amount of energy to distort spacetime, even by a tiny amount. The strain is approximately proportional to the source mass ($m$) and the square of the source velocity ($v$), and inversely proportional to the distance ($R$) of the measurement from the source: $$ h \approx \frac{Gm}{Rc^{2}} \cdot \frac{v^{2}}{c^{2}} $$ Because producing even an extremely small strain requires a huge amount of energy, the only viable sources of measurable gravitational waves are astronomical systems, such as binary star systems. Furthermore, the strains are so small that measuring them is extremely difficult; certainly in Einstein's time, measurement of such strains was completely hopeless. Arthur Eddington, who made Einstein famous when he verified general relativity's prediction about the bending of light around massive objects, quipped that "gravitational waves move with the speed of thought." He found that Einstein's formulation of gravitational waves was coordinate-dependent. This meant that in a different coordinate system the waves were not present, suggesting that they were simply mathematical artifacts. He also found that two stars in a binary system would actually gain energy from gravitational waves, which made no sense and convinced him that they did not exist. Einstein himself doubted gravitational waves, and even wrote a paper in 1936 with Nathan Rosen asserting that gravitational waves do not exist.
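To see why detection seemed hopeless, the sketch below plugs illustrative values into the scaling $h \approx (Gm/Rc^{2})\,(v/c)^{2}$ quoted above. The mass, distance, and orbital speed are assumptions chosen for the example (roughly those of a merging neutron-star binary tens of megaparsecs away), not figures from the lesson.

```python
# Rough estimate of the strain h ~ (G*m / (R*c^2)) * (v/c)^2 for a compact binary.
# All source parameters below are illustrative assumptions.
G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
M_sun = 1.989e30     # kg

m = 2.8 * M_sun      # assumed total mass of a neutron-star binary
R = 40e6 * 3.086e16  # assumed distance of 40 Mpc, in metres
v = 0.3 * c          # assumed characteristic orbital speed near merger

h = (G * m / (R * c**2)) * (v / c)**2
print(f"h ~ {h:.1e}")  # a few times 1e-22
```

Even for such an extreme source, the resulting strain is of order $10^{-22}$, far beyond anything measurable with the instruments of Einstein's era.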
For the next twenty years gravitational waves languished in the realm of mathematical theory, untouched by physicists. They were brought out of obscurity in 1957, when a thought experiment presented by Richard Feynman at a conference in Chapel Hill, North Carolina, convinced John Wheeler and Joe Weber that gravitational waves existed and prompted them to consider measuring these waves experimentally. Weber designed a device that he thought would be able to detect gravitational waves. The "Weber bar" was an aluminum cylinder that would be stretched by the passing of a gravitational wave; this stretching would cause the bar to ring, indicating a successful detection. In 1968 Weber claimed he had used this technology to detect gravitational waves. This set off a flurry of attempted replications that all found nothing, discrediting Weber's experiment and the field of gravitational wave research generally. The idea behind LIGO was conceived in the late 1960s. Mikhail Gertsenshtein was the first to conceive of the basic technique used today to detect gravitational waves, though it was not pursued and fell into obscurity. It was later rediscovered independently by American and German physicists. The idea was to separate two masses, each with a perfectly precise clock. A beam of light is sent from one mass to the other, and the travel time is recorded. When a gravitational wave passes between the masses, the space between them will expand or contract, and the travel time of the light will change. This is only a thought experiment (or Gedankenexperiment in German), as no clocks exist that are precise enough to make the measurement. The basic idea, however, is experimentally sound and underlies LIGO's technology.
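Putting numbers to the Gedankenexperiment above: for a detector with kilometre-scale arms, the change in separation and in one-way light travel time caused by a realistic strain is almost inconceivably small. The arm length and strain amplitude below are assumed for illustration, not taken from the lesson.

```python
# The thought experiment in numbers: change in separation and in one-way
# light travel time for an assumed arm length and strain amplitude.
c = 2.998e8   # m/s
L = 4.0e3     # assumed arm length [m], comparable to LIGO's 4 km arms
h = 1e-21     # assumed strain amplitude

dL = h * L
dt = dL / c
print(f"dL ~ {dL:.1e} m, dt ~ {dt:.1e} s")  # ~4e-18 m and ~1e-26 s
```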
A direct approach to controlling the topology in structural optimization Computers & Structures 227:106141 DOI:10.1016/j.compstruc.2019.106141 Zi-Long Zhao Shiwei Zhou Kun Cai Northwest A & F University Yi Min Xie Structural shape and topology optimization has undergone tremendous development in recent years due to its important applications in many fields. However, effectively controlling the structural complexity of the optimization result remains a challenging issue. The structural complexity is usually characterized by the distribution and geometries of interior holes. In this work, a new approach is developed based on the graph theory and the set theory to control the number and size of interior holes of the optimized structures. The minimum distance between the edges of any two neighboring holes can also be constrained. The structural performance and the effect of the structural complexity control are well balanced by using this approach. We use three typical numerical examples to verify the effectiveness of the developed approach. The optimized structures with and without constraints on the structural complexity are quantitatively compared and analyzed. The present methodology not only enables the designer to have a direct control over the topology of the optimized structures, but also provides diverse and competitive solutions. ... Even if many complex structures can be printed by AM, there are still some manufacturing constraints that need to be addressed. In AM-oriented TO, the most common manufacturing constraints considered in the literature are the self-supporting constraint (Gaynor and Guest 2016; Qian 2017; Guo et al. 2017; Langelaar 2017; Allaire et al. 2017; Wang et al. 2018; Johnson and Gaynor 2018; Zhang and Zhou 2018; Zhang et al. 2019; Fu et al. 2019; Zhang and Cheng 2020; Luo et al. 2020; Garaigordobil et al. 2021), the minimum length constraint on both the solid and void phases (Guest et al. 2004; Lazarov et al. 2016; Wang et al. 2011; Sigmund 2009; Zhou et al. 2015; Zhang et al. 2014, 2017; Guo et al. 2014; Xia and Shi 2015; Liu 2019), the connectivity constraint (Liu et al. 2015; Li et al. 2016; Zhou and Zhang 2019; Zhao et al. 2019; Wang et al. 2020; Xiong et al. 2020; Liang et al. 2022), and the thermal residual stress/deformation constraint (Cheng et al. 2019; Misiun et al. 2021; Xu et al. 2022). ... ... Following this approach, Wang et al. (2020) studied parameter selection for suppressing enclosed voids with an electrostatic model. Some non-SIMP approaches (Zhou and Zhang 2019; Zhao et al. 2019; Xiong et al. 2020; Liang et al. 2022) were also developed using specialized techniques. ... Structural topology optimization with four additive manufacturing constraints by two-phase self-supporting design STRUCT MULTIDISCIP O Kaiqing Zhang G. D. Cheng This paper studies the additive manufacturing (AM)-oriented minimum compliance structural topology optimization (TO) subject to four AM constraints: the self-supporting constraint, the connectivity constraint, the solid-phase minimum length constraint, and the void-phase minimum length constraint, simultaneously. The essential novelty of this study is that we show that the connectivity constraint can be realized by imposing the void-phase self-supporting constraint. The corresponding proof is given in the Appendix. The Elements Scheme (ES) method is used to construct the element-wise self-supporting constraint.
By improving the constraint aggregating functions and aggregating a large number of element-wise self-supporting constraints on solid-phase and void-phase structures into three constraints, we propose a concise topology optimization formulation to effectively and simultaneously suppress the small overhang angle boundaries, hanging features (solid-phase upside-down triangles), voids with pointed tips (void-phase upside triangles), slim components, small voids, and enclosed voids in the optimized design. Numerical examples demonstrate the effectiveness of this formulation in comparison with other connectivity control methods. ... Xiong et al. [40] developed an approach to controlling the structural connectivity by generating tunnels. Based on the graph theory and set theory, Zhao et al. [41] developed a direct approach to explicitly controlling the structural complexity during the form-finding process. This approach has been successfully applied to the morphological optimization of biological organs [42]. ... ... The optimization process is implemented through Python code that links to Abaqus. A 2D structure can be treated as a degenerated 3D one (Fig. 5), and the SCC in 2D optimization can be realized by existing methods [36,38,41]. Therefore, this study only considers 3D optimization problems to demonstrate the effectiveness of Table 1. ... A thinning algorithm based approach to controlling structural complexity in topology optimization FINITE ELEM ANAL DES Yunzhen He Shape and topology optimization techniques aim to maximize structural performance through material redistribution. Effectively controlling structural complexity during the form-finding process remains a challenging issue. Structural complexity is usually characterized by the number of connected components (e.g., beams and bars), tunnels, and cavities in the structure. Existing structural complexity control approaches often prescribe the number of existing cavities. However, for three-dimensional problems, it is highly desirable to control the number of tunnels during the optimization process. Inspired by the topology-preserving feature of a thinning algorithm, this paper presents a direct approach to controlling the topology of continuum structures under the framework of the bi-directional evolutionary structural optimization (BESO) method. The new approach can explicitly control the number of tunnels and cavities for both two- and three-dimensional problems. In addition to the structural topology, the minimum length scale of structural components can be easily controlled. Numerical results demonstrate that, for a given set of loading and boundary conditions, the proposed methodology may produce multiple high-performance designs with distinct topologies. The techniques developed from this study will be useful for practical applications in architecture and engineering, where the structural complexity usually needs to be controlled to balance the aesthetic, functional, economical, and other considerations. ... The original intention of reducing low efficient materials are the same for both topology optimization and evolution of biological structures in nature. Topology optimization has not only been applied in engineering structural design but also for exploring the optimization mechanisms of biological materials (Zhao et al., 2018;Zhao et al., 2020aZhao et al., , 2020b. 
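The quantity at the heart of the complexity-control approaches cited above, the number of enclosed holes in a 0-1 design, can be illustrated with a few lines of code. The sketch below labels the void phase and discards void regions that touch the domain boundary; it is only a conceptual stand-in, not the graph- and set-theory implementation of Zhao et al. [41] or any of the other cited formulations, and the example geometry is made up.

```python
# Count enclosed voids of a 2D 0-1 design: label the void phase and keep only
# the void regions that do not touch the boundary of the design domain.
import numpy as np
from scipy.ndimage import label

def count_interior_holes(density, threshold=0.5):
    """density: 2D array of element densities; returns the number of enclosed voids."""
    void = density < threshold
    labels, n = label(void)                      # 4-connected void regions
    edge_labels = np.unique(np.concatenate([labels[0, :], labels[-1, :],
                                            labels[:, 0], labels[:, -1]]))
    interior = set(range(1, n + 1)) - set(edge_labels.tolist())
    return len(interior)

# Example: a solid plate with two enclosed voids
x = np.ones((20, 30))
x[5:9, 5:10] = 0.0
x[12:16, 18:26] = 0.0
print(count_interior_holes(x))  # -> 2
```

In a complexity-controlled optimization loop, a count of this kind would typically be re-evaluated at every iteration to decide whether further voids may nucleate or existing ones must be merged or filled.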
Apart from the advantages of achieving high structural performance and low material usage, beautiful structure appearance is also a by-product from topology optimization. ... ... It can produce 0-1 results where there is no transitional materials in the final design (Xia et al., 2018;. Recently, complex constraints on, e.g. the structural complexity/connectivity and the maximum principal stress have been successfully integrated into the BESO-based topology optimization (Zhao et al., 2020a(Zhao et al., , 2020bXiong et al., 2020;Chen et al., 2021). Novel approaches have developed based on the BESO method for generating diverse and competitive designs (He et al., 2020;Yang et al., 2019;Xie et al., 2019). ... Creating novel furniture through topology optimization and advanced manufacturing RAPID PROTOTYPING J Jiaming Ma Zhi Li Purpose-Furniture plays a significant role in daily life. Advanced computational and manufacturing technologies provide new opportunities to create novel, high-performance and customized furniture. This paper aims to enhance furniture design and production by developing a new workflow in which computer graphics, topology optimization and advanced manufacturing are integrated to achieve innovative outcomes. Design/methodology/approach-Workflow development is conducted by exploring state-of-the-art computational and manufacturing technologies to improve furniture design and production. Structural design and fabrication using the workflow are implemented. Findings-An efficient transdisciplinary workflow is developed, in which computer graphics, topology optimization and advanced manufacturing are combined. The workflow consists of the initial design, the optimization of the initial design, the postprocessing of the optimized results and the manufacturing and surface treatment of the physical prototypes. Novel chairs and tables, including flat pack designs, are produced using this workflow. The design and fabrication processes are simple, efficient and low-cost. Both additive manufacturing and subtractive manufacturing are used. Practical implications-The research outcomes are directly applicable to the creation of novel furniture, as well as many other structures and devices. Originality/value-A new workflow is developed by taking advantage of the latest topology optimization methods and advanced manufacturing techniques for furniture design and fabrication. Several pieces of innovative furniture are designed and fabricated as examples of the presented workflow. ... Topology optimization is gaining exponentially growing applications in a wide range of industries such as civil, automotive, aerospace and others [1][2][3][4]. It has been recognized as an advanced design method for lightweight and high-performance structures [5,6]. The majority of topology optimization approaches are dedicated to linear elastic structures whose boundary value problem can be formulated as linear system of equations and solved efficiently. ... ... To give an example, several hyperelastic models under the assumption of uniaxial deformation [16] are given in Table 1. (5) in which the tangent modulus D tan e can be analytically calculated from prescribed constitutive model ME={(ε, σ)| σ=f(ε)}. Note that constant stiffness matrix K is usually used in replacement of K tan to avoid the time-consuming reassembly and inverse calculation of large-scale K tan in each iteration. ... 
A new data-driven topology optimization framework for structural optimization COMPUT STRUCT Ying Zhou Haifei Zhan Weihong Zhang Yuantong Gu The application of structural topology optimization with complex engineering materials is largely hindered due to the complexity in phenomenological or physical constitutive modeling from experimental or computational material data sets. In this paper, we propose a new data-driven topology optimization (DDTO) framework to break through the limitation with the direct usage of discrete material data sets in lieu of constitutive models to describe the material behaviors. This new DDTO framework employs the recently developed data-driven computational mechanics for structural analysis which integrates prescribed material data sets into the computational formulations. Sensitivity analysis is formulated by applying the adjoint method where the tangent modulus of prescribed uniaxial stress-strain data is evaluated by means of moving least square approximation. The validity of the proposed framework is well demonstrated by the truss topology optimization examples. The proposed DDTO framework will provide a great flexibility in structural design for real applications. ... For instance, the level-set method uses higherdimensional implicit functions [22][23][24][25][26], and the moving morphable components method controls the shapes and layout of a set of structural components. Different kinds of new constraints have been imposed during the optimization process in recent years, such that further structural design problems can be addressed effectively and practically [27][28][29][30][31][32][33][34][35][36]. In addition, topology optimization has been applied in transdisciplinary research such as biomechanical morphogenesis [37][38][39] and metamaterial designs [40][41][42][43]. ... Topology optimization of ribbed slabs and shells ENG STRUCT Ribbed slabs are widely used in the building industry. Designing ribbed slabs through conventional engineering techniques leads to limited structural forms, low structural performance and high material waste. Topology optimization is a powerful tool for generating free-form and highly efficient structures. In this research, we develop a mapping constraint optimization approach to designing ribbed slabs and shells. Compared with conventional ones, the presented approach is able to produce designs with higher performance and without isolated ribs. The approach is integrated into three optimization methods and used to design both flat slabs and curved shells. Several numerical examples are used to demonstrate the effectiveness of the new approach. The findings of this study have potential applications in the design of aesthetically pleasing and structurally efficient ribbed slabs and shells. ... In view of these drawbacks, many researchers have made many improvements to density-based method in recent years. Zhao et al. proposed a control method based on graph theory and set theory to control the number and size of internal holes in the optimized structure, which can well balance the structural performance and structural complexity control effect (Zhao et al. 2020). Jiu et al. developed a CAD oriented topology optimization method, the results show that this method is almost free from the limitations of conventional finite element methods in modeling and meshes (Jiu et al. 2020). ... 
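The data-driven framework quoted above evaluates the tangent modulus of prescribed uniaxial stress-strain data by moving least squares. The snippet below is a minimal, generic version of that idea: a locally weighted linear fit around the query strain. The material data, the Gaussian weighting kernel, and the bandwidth are all assumptions for illustration and do not reproduce the paper's formulation.

```python
# Locally weighted (moving) least-squares estimate of the tangent modulus
# d(sigma)/d(eps) at a query strain, from discrete stress-strain data.
import numpy as np

def tangent_modulus(eps_data, sig_data, eps_query, bandwidth=0.01):
    w = np.sqrt(np.exp(-((eps_data - eps_query) / bandwidth) ** 2))  # Gaussian weights
    A = np.column_stack([np.ones_like(eps_data), eps_data - eps_query])
    coeff, *_ = np.linalg.lstsq(A * w[:, None], sig_data * w, rcond=None)
    return coeff[1]  # slope of the local linear fit

# Hypothetical softening stress-strain data (units arbitrary)
eps = np.linspace(0.0, 0.05, 51)
sig = 200e3 * eps - 1.5e6 * eps ** 2
print(tangent_modulus(eps, sig, 0.02))  # close to the analytic tangent 1.4e5
```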
Non-probabilistic uncertain design for spaceborne membrane microstrip reflectarray antenna by using topology optimization Yanben Han Yufei Liu Chengbo Cui Spaceborne large aperture membrane microstrip reflectarray antenna has the characteristics of high gain, lightweight and small storage volume, which will be used in future space missions. However, there are two main reasons restricting its application. Firstly, the traditional dimensional optimization method cannot effectively affect the distribution of prestress in the membrane reflector, so it needs to increase too much mass to achieve the goal of stiffness improvement; secondly, low stiffness makes the membrane reflector more sensitive to various uncertainties. In view of this, this paper proposes a method to affect the distribution of prestress by sticking irregular shaped additional layer, and proposes a non-probabilistic uncertain topology optimization method to design the shape of additional layer. The effectiveness of the proposed methods is verified by numerical examples. ... However, experiments are time-consuming and significant resources are utilized. Topological optimization based on numerical simulation is another option [22][23][24]. However, the design space may be extraordinarily massive when the compositional and topological structures are complex [25]. ... Optimization of vascular structure of self-healing concrete using deep neural network (DNN) CONSTR BUILD MATER Zhi Wan Ze Chang Yading Xu Branko Šavija In this paper, optimization of vascular structure of self-healing concrete is performed with deep neural network (DNN). An input representation method is proposed to effectively represent the concrete beams with 6 round pores in the middle span as well as benefit the optimization process. To investigate the feasibility of using DNN for vascular structure optimization (i.e., optimization of the spatial arrangement of the vascular network), structure optimization improving peak load and toughness is first carried out. Afterwards, a hybrid target is defined and used to optimize vascular structure for self-healing concrete, which needs to be healable without significantly compromising its mechanical properties. Based on the results, we found it feasible to optimize vascular structure by fixing the weights of the DNN model and training inputs with the data representation method. The average peak load, toughness and hybrid target of the ML-recommended concrete structure increase by 17.31%, 34.16% and 9.51%. The largest peak load, toughness and hybrid target of the concrete beam after optimization increase by 0.17%, 14.13%, and 3.45% compared with the original dataset. This work shows that the DNN model has great potential to be used for optimizing the design of vascular system for self-healing concrete. ... Zhang et al. (2017) combined the level set method with the structural skeletons to explicitly control the holes for the 2D structures by endowing the independent level set function for each hole. Under the soft-kill bi-directional evolutionary structure optimization (BESO) method, Zhao et al. (2020) proposed a direct approach based on graph theory and set theory to control holes in structural optimization. Han et al. (2021) proposed a heuristic constraint for the hole-filling method based on BESO to achieve the maximum number of constrained holes in the structure. ... 
Explicit 2D topological control using SIMP and MMA in structural topology optimization Tongxing Zuo Chong Wang Haitao Han Zhenyu Liu Structural topology can be measured on the basis of its betti numbers. A fundamental feature of structural topology optimization is that it allows the structural topology to be changed during the optimization process. However, traditional structural topology optimization methods use indirect and nonquantitative approaches to change the structural topology during the optimization procedure. Therefore, these traditional methods leave the detailed implementation of optimization with nonintuitive parameters (e.g., filter radius) to adjust the final topology of optimized results. Choosing a suitable nonintuitive parameter for beginners is not straightforward, and makes the optimization procedure complex when applying structural topology optimization methods to engineering design tasks with a preferred level of complexity (number of structural holes). A 2D structure has two betti numbers, B0\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$${B}_{0}$$\end{document} and B1\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$${B}_{1}$$\end{document}, where B0\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$${B}_{0}$$\end{document} and B1\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$${B}_{1}$$\end{document} correspond to the number of independent connected components and the number of holes in the structure, respectively. To solve the aforementioned problems, this paper explicitly quantitatively controls over the number of structural holes within the framework of the solid isotropic material with penalty (SIMP) interpolation of the design variable and the method of moving asymptotes (MMA) optimization algorithm in 2D, thus achieving direct unilateral constraint (constraining the maximum number of structural holes) over structural topology. The framework of SIMP and MMA is a powerful way because of its ability to handle more complex problems. Thus, the proposed topological control method based on SIMP and MMA is useful for structural topology optimization research field. For example, the proposed method is based on triangular meshing discretization of the initial design domain; therefore, irregular design domains can be easily processed, and adaptive meshes can be used to improve the geometric approximation of the design domains. Numerical examples show that the proposed method can effectively control the topology, the maximum number of holes (complexity) of the optimized structure. ... Recently, based on the description of the size, shape and number of holes of an optimized structure 1 , topology optimization has been implemented from a geometric complexity perspective in a given design domain. 
The number of holes in the structure is implicitly designed, and the set of elements in the holes is constrained [2][3][4] . In the 3D case, the number of holes is a topological invariant that is classified as the number of internal enclosed holes (or enclosed voids) and the genus (or the number of tunnels of a structure). ... Inequality constraint on the maximum genus for 3D structural compliance topology optimization Structural topology constraints in topology optimization are an important research topic. The structural topology is characterized by the topological invariance of the number of holes. The holes of a structure in 3D space can be classified as internally enclosed holes and external through-holes (or tunnels). The genus is the number of tunnels. This article proposes the quotient set design variable method (QSDV) to implement the inequality constraint on the maximum genus allowed in an optimized structure for 3D structural topology optimization. The principle of the QSDV is to classify the changing design variables according to the connectivity of the elements in a structure to obtain the quotient set and update the corresponding elements in the quotient set to meet the topological constraint. Based on the standard relaxation algorithm discrete variable topology optimization method (DVTOCRA), the effectiveness of the QSDV is illustrated in numerical examples of a 3D cantilever beam. ... Zhao et al. [4] developed a new method to control topology in structural optimization, wherein the number and size of internal holes were explicitly controlled. The developed method improved the manufacturability of products resulting from structural optimization and provided designers with diverse and competitive solutions. ... Study on Kinematic Structure Performance and Machining Characteristics of Machine Tools Tzu-Chi Chan Chia-Chuan Chang Han-Huei Lin The rigidity and natural frequency of machine tools considerably influence cutting and generate great forces when in contact with the workpiece. The poor static rigidity of these machines can cause deformations and destroy the workpiece. If the natural frequency of the machines is low or close to the commonly used cutting frequency, they vibrate considerably, resulting in poor workpiece surfaces and thus shortening the lifespan of the tool. In this study, the finite element method was mainly used to analyze the structure, static force, modal, frequency spectrum, and transient state of machine tools. The results of the state analysis were verified and compared to the experimental results. The analysis model and conditions were modified to ensure that the analysis results were consistent with the experimental results. Multi-body dynamics analyses were conducted by examining the force of each component and casting of the machine tools and the load of the motor during the cutting stroke. Moreover, an external force was applied to simulate the load condition of the motor when the machine tool is cutting to confirm the feed. In this study, we used topology optimization for effective structural optimization designs. The optimal conditions for topology optimization included lightweight structures, which resulted in reduced structural deformation and increased natural frequency. ... Zhang et al. (2017) controlled the structural complexity through the constraints on multiple level-set functions in a level-set-based topology optimization. Zhao et al. (2020) applied void number and size constraint in evolutionary topology optimization. 
They used algorithms from graph theory and set theory to describe and adjust the status of voids formed by elements. ... Topology optimization incorporating a passageway for powder removal in designs for additive manufacturing Dedao Liu Louis N S Chiu Chris H. J. Davies Wenyi Yan In powder-based additive manufacturing, the unused powder must be removed after printing. Topology optimization has been applied to designs for additive manufacturing, which may lead to designs with enclosed voids, where the powder will be trapped inside during printing. A topology optimization method incorporating a powder removal passageway is developed to avoid the powder being trapped inside the structure. The passageway is generated by connecting the entrance, all voids, and the exit sequentially. Each void is limited to have only one pair of inlet and outlet to guarantee a single-path flow to facilitate powder removal after the additive manufacturing. The path of the passageway is optimized to minimize its influence on structural stiffness. The proposed optimization method was applied to two practical case studies where the powder removal passageways were generated successfully. ... The structural connectivity was controlled by introducing a new auxiliary temperature field. With the foundation of graph theory and set theory (Zhao et al., 2020), the number and size of interior holes can be controlled by updated BESO (soft-kill bi-directional evolutionary structural optimization) method, and the conception of structural complexity control was proposed. Genus, the concept that describes topological invariant was used to directly control the maximum number of holes for the design domain. ... Topological control for 2D minimum compliance topology optimization using SIMP method Qianglong Wang Topological constraints have recently been introduced to structural topology optimization by the BESO method. However, for the classical and widely used SIMP-type optimization method, an implicit and continuously changing variable cannot express the topological characteristics directly during the optimization process. This is partly caused by missing well-defined boundaries to compute topological characteristics. To introduce topological constraints into the SIMP-type method, an auxiliary discrete expression of structural boundaries through the volume preservation projection method is used to compute topological characteristics, that is, the genus or number of holes. A topological control methodology based on persistence homology, a numerical calculation idea derived from topological data analysis, is introduced in this paper to implement topological constraints. With the help of the design space progressive restriction method, the proposed methodology shows that for the 2D static minimum compliance optimization problem, the inequality constraints on the number of holes can be satisfied. The effectiveness of the proposed topological control method for the SIMP-type framework is illustrated by several numerical examples. ... Recently, a transdisciplinary computational framework was established to reveal the developmental mechanisms of animal and plant tissues through biomechanical morphogenesis [18,19]. Besides, much effort has been directed toward increasing the resolutions of the design domain [20,21], enhancing the manufacturability of the optimized results [22], improving the multi-material compatibility of the optimization process [23], and controlling the structural complexity and connectivity [24,25]. ... 
Structural topology optimization with an adaptive design domain COMPUT METHOD APPL M Yi Rong Xi-Qiao Feng Topology optimization has rapidly developed as a powerful tool of structural design in multiple disciplines. Conventional topology optimization techniques usually optimize the material layout within a predefined, fixed design domain. Here, we propose a subdomain-based method that performs topology optimization in an adaptive design domain (ADD). A subdomain-based parallel processing strategy that can vastly improve the computational efficiency is implemented. In the ADD method, the loading and boundary conditions can be easily changed in concert with the evolution of the design space. Through the automatic, flexible, and intelligent adaptation of the design space, this method is capable of generating diverse high-performance designs with distinctly different topologies. Five representative examples are provided to demonstrate the effectiveness of this method. The results show that, compared with conventional approaches, the ADD method can improve the structural performance substantially by simultaneously optimizing the layout of material and the extent of the design space. This work might help broaden the applications of structural topology optimization. ... The original intention of reducing low efficient materials are the same for both topology optimization and evolution of biological structures in nature. Topology optimization has not only been applied in engineering structural design, but also for exploring the optimization mechanisms of biological materials (Zhao et al., 2018;Zhao et al., 2020aZhao et al., , 2020b. ... Purpose Furniture plays a significant role in daily life. Advanced computational and manufacturing technologies provide new opportunities to create novel, high-performance and customized furniture. This paper aims to enhance furniture design and production by developing a new workflow in which computer graphics, topology optimization and advanced manufacturing are integrated to achieve innovative outcomes. Design/methodology/approach Workflow development is conducted by exploring state-of-the-art computational and manufacturing technologies to improve furniture design and production. Structural design and fabrication using the workflow are implemented. Findings An efficient transdisciplinary workflow is developed, in which computer graphics, topology optimization and advanced manufacturing are combined. The workflow consists of the initial design, the optimization of the initial design, the postprocessing of the optimized results and the manufacturing and surface treatment of the physical prototypes. Novel chairs and tables, including flat pack designs, are produced using this workflow. The design and fabrication processes are simple, efficient and low-cost. Both additive manufacturing and subtractive manufacturing are used. Practical implications The research outcomes are directly applicable to the creation of novel furniture, as well as many other structures and devices. Originality/value A new workflow is developed by taking advantage of the latest topology optimization methods and advanced manufacturing techniques for furniture design and fabrication. Several pieces of innovative furniture are designed and fabricated as examples of the presented workflow. ... 
In the past three decades, several optimization techniques have been successfully established, including the solid isotropic material with penalization (SIMP) method (Bendsøe, 1989;Bendsøe, 1995;Sigmund and Maute, 2013), the level-set method (Allaire et al., 2002;Wang et al., 2003), the evolutionary structural optimization (ESO) method (Xie and Steven, 1993;Xie and Steven, 1997) and the bi-directional evolutionary structural optimization (BESO) method Xie, 2007, 2009). Most recently, imposing complicated constraints during the form-finding process has been realized (Chen et al., 2020;He et al., 2020;Xiong et al., 2020;Zhao et al., 2020a). By establishing transdisciplinary computational methods for biomechanical morphogenesis, Zhao et al. (2018Zhao et al. ( , 2020bZhao et al. ( , 2020c 4 have revealed the optimization mechanisms of, e.g., plant leaves and animal stingers. ... Topology of leaf veins: Experimental observation and computational morphogenesis J MECH BEHAV BIOMED Sen Lin The unique, hierarchical patterns of leaf veins have attracted extensive attention in recent years. However, it remains unclear how biological and mechanical factors influence the topology of leaf veins. In this paper, we investigate the optimization mechanisms of leaf veins through a combination of experimental measurements and numerical simulations. The topological details of three types of representative plant leaves are measured. The experimental results show that the vein patterns are insensitive to leaf shapes and curvature. The numbers of secondary veins are independent of the length of the main vein, and the total length of veins increases linearly with the leaf perimeter. By integrating biomechanical mechanisms into the topology optimization process, a transdisciplinary computational method is developed to optimize leaf structures. The numerical results show that improving the efficiency of nutrient transport plays a critical role in the morphogenesis of leaf veins. Contrary to the popular belief in the literature, this study shows that the structural performance is not a key factor in determining the venation patterns. The findings provide a deep understanding of the optimization mechanism of leaf veins, which is useful for the design of high-performance shell structures. ... Gholizadeh et al. (2020) introduced the Newton Metaheuristic Algorithm for discrete performance-based seismic design optimization of steel moment frames. As topology optimization for practical engineering problems, Zhao et al. (2020) proposed a direct approach to control the number and size of interior holes of the optimized structures. The method not only enables the designer to have a direct control over the topology of the optimized structures but also provides diverse and competitive solutions. ... Simultaneous shape and topology optimization method for frame structures with multi-materials Masatoshi Shimoda Shoki TANI In this study, we present a novel non-parametric shape-topology optimization method for frame structures with multi-materials, aiming at designing more light and stiff frame structures. The sum of squared error norm for achieving the target displacements on the specified members is minimized under the volume constraints of multi-materials as a design problem. A simultaneous shape and topology optimization problem is formulated as a distributed-parameter optimization problem based on the variational method. 
The shape gradient function and the material gradient functions for this design optimization problem are theoretically derived with the Lagrange multiplier method, the material derivative method and the adjoint variable method. The generalized solid isotropic material with penalization (GSIMP) method is employed to classify into the multi-materials in topology optimization. The gradient functions derived are applied to the unified H¹ gradient method for frame structures to determine the optimal shape and material variations. With this method, the optimal free-form and topology for arbitrary large-degree of design freedom frame structures with multi-materials can be obtained without any shape and topology parameterization. Numerical examples with various materials and different boundary conditions are demonstrated and the results are discussed. ... Topology optimization is a mathematical technique commonly used in free-form designs. Since its invention (Bendsøe and Kikuchi 1988), various TO-based design approaches have been developed (Jakiela et al. 2000, Wang, M. Y. et al. 2003, Juan et al. 2008, Schevenels et al. 2011, Guo et al. 2014, Zhang, W. et al. 2017, Zhang, X. et al. 2019, Zhao et al. 2020) and applied to design a wide range of structures and products such as automobile and aircraft parts/components (Cavazzuti et al. 2011, Zhu et al. 2016. The advent in additive manufacturing technologies has further broadened the application scope of TO. ... Accelerating gradient-based topology optimization design with dual-model artificial neural networks Chao Qian Wenjing Ye Topology optimization (TO) is a common technique used in free-form designs. However, conventional TO-based design approaches suffer from high computational cost due to the need for repetitive forward calculations and/or sensitivity analysis, which are typically done using high-dimensional simulations such as finite element analysis (FEA). In this work, artificial neural networks are used as efficient surrogate models for forward and sensitivity calculations in order to greatly accelerate the design process of topology optimization. To improve the accuracy of sensitivity analyses, dual-model artificial neural networks that are trained with both forward and sensitivity data are constructed and are integrated into the Solid Isotropic Material with Penalization (SIMP) method to replace the FEA. The performance of the accelerated SIMP method is demonstrated on two benchmark design problems namely minimum compliance design and metamaterial design. The efficiency gained in the problem with size of 64 × 64 is 137 times in forward calculation and 74 times in sensitivity analysis. In addition, effective data generation methods suitable for TO designs are investigated and developed, which lead to a great saving in training time. In both benchmark design problems, a design accuracy of 95% can be achieved with only around 2000 training data. ... Furthermore, according to the working manner of the SLM technique, a large amount of metal powder will be left in the cavities of the rudder and should definitely be removed after manufacturing. In the academic community, design with closed voids is generally regarded as bad designs for additive manufacturing and several approaches have been put forward to eliminate closed voids in topology optimization [50][51][52][53]. 
In this work, the authors, from an engineering point of view, propose to open some powder-discharge holes on the ribs so that those cavities can be connected to outside space and the metal powder can be discharged. ... An all-movable rudder designed by thermo-elastic topology optimization and manufactured by additive manufacturing Longlong Song Tong Gao Lei Tang In high-speed vehicles, rudders often endure both aerodynamic pressure and thermal loads. The innovative design of rudders is of great importance for the performance of the whole vehicle. In this work, thermo-elastic topology optimization is adopted to design a typical all-movable rudder structure. The compliance of the rudder skin is considered to be a new objective and the moment of inertia of the rudder is constrained during optimization to ensure its fast response to instructions of the control system. Then sensitivity analysis of the structural compliance and the moment of inertia is carried out. Optimization results show that thermal load has a great effect on the optimized configuration and minimizing the compliance of the rudder skin gives much better design than minimizing the global compliance. Subsequently, an engineering-oriented post-processing is conducted to make the optimized design suitable for additive manufacturing. An appropriate printing direction is selected based on the layout of the optimized ribs and certain ribs are reshaped with fillets to make the rudder free of internal support structures. Besides, according to a secondary topology optimization of the ribs and the stress distribution, a set of powder-discharge holes are properly opened on the ribs so that all cavities within the rudder are connected and the metal powder inside the rudder can be discharged with little effort after manufacturing. Finally, the optimized design is successfully printed using Selective Laser Melting, demonstrating the proposed post-processing is effective for additive manufacturing. ... As an important branch of topology optimization, the BESO technique is based on the simple concept of gradually removing inefficient material from a structure and adding material to the most needed locations at the same time [24]. It is widely 3 recognized owing to its high-quality topology solutions [25], ease of understanding and implementation [26], and excellent computational efficiency [27]. ... Lessons Learnt from a National Competition on Structural Optimization and Additive Manufacturing Ding Wen Bao Yulin Xiong Background: As an advanced design technique, topology optimization has received much attention over the past three decades. Topology optimization aims at finding an optimal material distribution in order to maximize the structural performance while satisfying certain constraints. It is a useful tool for the conceptional design. At the same time, additive manufacturing technologies have provided unprecedented opportunities to fabricate intricate shapes generated by topology optimization. Objective: To design a highly efficient structure using topology optimization and to fabricate it using additive manufactur-ing. Method: The bi-directional evolutionary structural optimization (BESO) technique provides the conceptional design, and the topology-optimized result is post-processed to obtain smooth structural boundaries. Results: We have achieved a highly efficient and elegant structural design which won the first prize in a national competition in China on design optimization and additive manufacturing. 
Conclusion: In this paper, we present an effective topology optimization approach to maximizing the structural load-bearing capacity and establish a procedure to achieve efficient and elegant structural designs. In the loading test of the final competition, our design carried the highest loading and won the first prize of the competition, which clearly demonstrates the capability of BESO in engineering applications. Accelerating gradient-based topology optimization design with dual-model neural networks Topology optimization (TO) is a common technique used in free-form designs. However, conventional TO-based design approaches suffer from high computational cost due to the need for repetitive forward calculations and/or sensitivity analysis, which are typically done using high-dimensional simulations such as Finite Element Analysis (FEA). In this work, neural networks are used as efficient surrogate models for forward and sensitivity calculations in order to greatly accelerate the design process of topology optimization. To improve the accuracy of sensitivity analyses, dual-model neural networks that are trained with both forward and sensitivity data are constructed and are integrated into the Solid Isotropic Material with Penalization (SIMP) method to replace FEA. The performance of the accelerated SIMP method is demonstrated on two benchmark design problems namely minimum compliance design and metamaterial design. The efficiency gained in the problem with size of 64x64 is 137 times in forward calculation and 74 times in sensitivity analysis. In addition, effective data generation methods suitable for TO designs are investigated and developed, which lead to a great saving in training time. In both benchmark design problems, a design accuracy of 95% can be achieved with only around 2000 training data. ... The ESO and BESO methods have been used for solving topology optimization problems in many areas of structural engineering. These problems include structural frequency optimization (Xie and Steven 1994), minimizing structural volume with a displacement or compliance constraint (Liang et al. 2000), structural complexity control in topology optimization (Zhao et al. 2020a;Xiong et al. 2020), topology optimization for energy absorption structures (Huang et al. 2007), design of periodic structures (Huang and Xie 2008), geometrical and material nonlinearity problems (Huang and Xie 2007a), stiffness optimization of structures with multiple materials (Huang and Xie 2009), maximizing the fracture resistance of quasi-brittle composites (Xia et al. 2018a), stress minimization designs (Xia et al. 2018b), biomechanical morphogenesis (Zhao et al. 2018(Zhao et al. , 2020b, stiffness maximization of structures with von Mises constraints (Fan et al. 2019), and diverse and competitive designs (Xie et al. 2019;Yang et al. 2019;He et al. 2020). ... Controlling the maximum first principal stress in topology optimization Anbang Chen Previous studies on topology optimization subject to stress constraints usually considered von Mises or Drucker–Prager criterion. In some engineering applications, e.g., the design of concrete structures, the maximum first principal stress (FPS) must be controlled in order to prevent concrete from cracking under tensile stress. This paper presents an effective approach to dealing with this issue. The approach is integrated with the bi-directional evolutionary structural optimization (BESO) technique. 
The p-norm function is adopted to relax the local stress constraint into a global one. Numerical examples of compliance minimization problems are used to demonstrate the effectiveness of the proposed algorithm. The results show that the optimized design obtained by the method has slightly higher compliance but significantly lower stress level than the solution without considering the FPS constraint. The present methodology will be useful for designing concrete structures. ... Based on different penalty methods, Yang et al. proposed five simple strategies to obtain diverse and competitive designs, which can be easily integrated into commonly used topology optimization techniques [34]. Zhao et al. developed an approach to control the number and size of the interior holes of structures [35]. ... Stochastic approaches to generating diverse and competitive structural designs in topology optimization Topology optimization techniques have been widely used in structural design. Conventional optimization techniques usually are aimed at achieving the globally optimal solution which maximizes the structural performance. In practical applications, however, designers usually desire to have multiple design options, as the single optimal design often limits their artistic intuitions and sometimes violates the functional requirements of building structures. Here we propose three stochastic approaches to generating diverse and competitive designs. These approaches include (1) penalizing elemental sensitivities, (2) changing initial designs, and (3) integrating the genetic algorithm into the bi-directional evolutionary structural optimization (BESO) technique. Numerical results demonstrate that the proposed approaches are capable of producing a series of random designs, which possess not only high structural performance, but also distinctly different topologies. These approaches can be easily implemented in different topology optimization techniques. This work is of significant practical importance in architectural engineering where multiple design options of high structural performance are required. ... Most recently, Zhao et al. proposed an effective approach to controlling the structural connectivity in topology optimization [49]. Using this approach, the structural performance and the effect of the structural complexity control can be well balanced. ... A new approach to eliminating enclosed voids in topology optimization for additive manufacturing Song Yao Topology optimization is increasingly used in lightweight designs for additive manufacturing (AM). However, conventional optimization techniques do not fully consider manufacturing constraints. One important requirement of powder-based AM processes is that enclosed voids in the designs must be avoided in order to remove and reuse the unmelted powder. In this work, we propose a new approach to realizing the structural connectivity control based on the bi-directional evolutionary structural optimization technique. This approach eliminates enclosed voids by selectively generating tunnels that connect the voids with the structural boundary during the optimization process. The developed methodology is capable of producing highly efficient structural designs which have no enclosed voids. Furthermore, by changing the radius and the number of tunnels, competitive and diverse designs can be achieved. The effectiveness of the approach is demonstrated by two examples of three-dimensional structures. 
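To make the stress measure in the first-principal-stress abstract above concrete, the sketch below computes the most tensile principal stress of a plane-stress state for each element and aggregates the element values into a single scalar with a p-norm, the generic device for relaxing many local constraints into one global constraint. The element stresses and the exponent are assumed values; the actual BESO formulation of the paper is not reproduced here.

```python
# First principal stress of a plane-stress state (via Mohr's circle) and a
# p-norm aggregation of the element values into one global measure.
import numpy as np

def first_principal_stress(sxx, syy, sxy):
    centre = 0.5 * (sxx + syy)
    radius = np.sqrt((0.5 * (sxx - syy)) ** 2 + sxy ** 2)
    return centre + radius  # most tensile principal stress

sxx = np.array([10.0, -5.0, 30.0])  # hypothetical element stresses [MPa]
syy = np.array([2.0, -8.0, 12.0])
sxy = np.array([4.0, 1.0, -6.0])

s1 = first_principal_stress(sxx, syy, sxy)
p = 8.0
global_measure = np.sum(np.maximum(s1, 0.0) ** p) ** (1.0 / p)  # smooth proxy for the max tensile stress
print(s1, global_measure)
```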
Prototypes of the obtained designs without enclosed voids have been fabricated using AM. Explicit Tunnels and Cavities Control Using SIMP and MMA in Structural Topology Optimization COMPUT AIDED DESIGN Design and fabrication of artificial brain coral: Evolution principle, turbulent hydrodynamics and matter interchange S Lin N Z Chou G Y Li This paper presents a study of the morphogenesis of brain corals based on an experimental investigation and a topological optimization method. The resistance to matter interchange was employed to allocate the optimal space for the growth of polyp colonies from the perspective of topological optimization, where the optimized structures are those of natural brain corals. Computational fluid dynamics simulations revealed that these complicated structures can provide shelter to protect polyps from ocean currents. A reverse mold was prepared from silica gel and used to cast models from mixtures of cement and calcium carbonate, where the mixture ratio was determined based on compressive strength and biocompatibility. Based on an acid corrosion experiment, the matter interchange capability was verifi�ed. This study also proved that the many folds in the structure of brain corals contribute to the circulation of seawater, thus maintaining the concentration of nutrients and hindering the deposition of harmful substances. This paper establishes an innovative methodology for the creation of artificial brain corals, which is important for environmental restoration. Keywords: Brain corals, Topological optimization, Turbulent hydrodynamics, Matter interchange 适用于建筑设计的广义拓扑优化方法 近年来,越来越多的设计师使用拓扑优化技术来寻找优美且新颖的建筑设计。然而由于无法直接满足建筑师与工程师提出的诸多设计需求,现有方法生成的拓扑优化设计往往很少在实际案例(特别是大型项目)中出现。本文指出了拓扑优化中惯用假设的局限性,并揭示了寻找设计多解的重要性。为了生成多样化、高性能且满足使用需求的设计,我们突破了这些限制并提出了面向建筑领域的拓扑优化新方法。与传统的拓扑优化不同,我们可以将荷载和边界条件作为额外的设计变量,以显著提高最终设计的结构性能。此外,改变设计域能带来更多的可能性,使设计者可以从诸多设计方案中的获得满意的结果。 Generalized topology optimization for architectural design In recent years, topology optimization has become a popular strategy for creating elegant and innovative forms for architectural design. However, the use of existing topology optimization techniques in practical applications, especially for large-scale projects, is rare because the generated forms often cannot satisfy all the design requirements of architects and engineers. This paper identifies the limitations of commonly used assumptions in topology optimization and highlights the importance of having multiple solutions. We show how these limitations could be removed and present various techniques for generating diverse and competitive structural designs that are more useful for architects. Unlike conventional topology optimization, we may include load and support conditions as additional design variables to enhance the structural performance substantially. Furthermore, we show that varying the design domain provides a plethora of opportunities to achieve more-desirable design outcomes. Explicit control of 2D and 3D structural complexity by discrete variable topology optimization method Yuan Liang XinYu Yan Gengdong Cheng The structural complexity (the number of holes) of the 2D or 3D continuum structures can be measured by their topology invariants (i.e., Euler and Betti numbers). Controlling the 2D and 3D structural complexity is significant in topology optimization design because of the various consideration, including manufacturability and necessary structural redundancy, but remains a challenging subject. 
In this paper, we propose a programmable Euler–Poincaré formula to efficiently calculate the Euler and Betti numbers for the 0–1 pixel-based structures. This programmable Euler–Poincaré formula only relates to the nodal density and nodal characteristic vector that represents the nodal neighbor relation so that it avoids manually counting the information of the vertices, edges, and planes on the surfaces of the structure. As a result, the explicit formulations between the structural complexity (the number of holes) and the discrete density design variables for 2D and 3D continuum structures can be efficiently constructed. Furthermore, the discrete variable sensitivity of the structural complexity is calculated through the programmable Euler–Poincaré formula so that the structural complexity control problem can be efficiently and mathematically solved by Sequential Approximate Integer Programming and Canonical relaxation algorithm Various 2D and complicated 3D numerical examples are presented to demonstrate the effectiveness of the method. We further believe that this study bridges the gap between structural topology optimization and mathematical topology analysis, which is much expected in the structural optimization community. Experimental and Optimization Study of Compression Behavior of Sandwich Panels with New Symmetric Lattice Cores P I MECH ENG L-J MAT Hossein Norouzi Masoud Mahmoodi The paper presents a novel core design for sandwich panels and conducts an experiment to determine whether the mechanical strength of symmetric aluminum lattice core sandwich panels can be improved. Both Design of Experiments (DOE) and Response Surface Methodology (Box-Behnken) were used to establish a quantitative relationship between the strength-to-weight ratio and the input parameters. The thickness of the sheet, the height of sandwich panels, and the width of the seat were all considered design variables to achieve the optimal state. The maximum Initial Peak Crushing Forces (IPCF) were then determined using quasi-static axial flatwise compression tests. This study found that the model's predicted values were consistent with the experimental results. As a result, the parameters were optimized using the Design-Expert software to maximize the initial peak force while minimizing the weight. The results were validated using the Genetic Algorithm, NSGA2, and LINGO. The results indicated that the height of the sandwich panel and the thickness of the sheet had the most significant impact on the maximum force and panel weight. To this end, it is concluded that introducing a novel core design for the sandwich panel, utilizing a suitable Snap-Fitting method for attaching lattice parts rather than using a paste, and finally optimizing the core were the primary reasons for achieving this level of strength. Smooth topological design of structures with minimum length scale and chamfer/round controls Xiaolei Yan Jiawen Chen Haiyan Hua X. Huang Topology optimization is a powerful tool for designing high-performance structures. However, the structures resulting from topology optimization usually have complex geometries, which makes them difficult or costly to fabricate. As a result, topology optimization is often used for the conceptual design of product structures. In this paper, a topology optimization method considering manufacturing constraints is proposed under the fixed finite element mesh. 
The minimum length scale and chamfer/round are controlled as required based on the floating projection topology optimization (FPTO) method, where the linear material interpolation scheme is adopted instead and the material 0/1 distribution is realized by applying sequential constraints on the elemental design variables through the floating projection. The minimum length scale is strictly controlled with the help of the structural skeleton, which is extracted from the structural topology by using a graphic thinning algorithm. Meanwhile, boundary filtering is proposed by using a variable filtering radius to control chamfers and rounds. Two-dimensional and three-dimensional numerical examples demonstrate that the proposed topology optimization algorithm is effective for designing the stiffest structures with smooth boundaries, desired minimum length scale and chamfers/rounds, so as to improve their manufacturability. Broadband All-angle Negative Refraction by Optimized Phononic Crystals Yang Fan Li Fei Meng All-angle negative refraction (AANR) of phononic crystals and its frequency range are dependent on mechanical properties of constituent materials and their spatial distribution. So far, it is impossible to achieve the maximum operation frequency range of AANR theoretically. In this paper, we will present a numerical approach for designing a two-dimensional phononic crystal with broadband AANR without negative index. Through analyzing the mechanism of AANR, a topology optimization problem aiming at broadband AANR is established and solved by bi-directional evolutionary structural optimization method. The optimal steel/air phononic crystal exhibits a record AANR range over 20% and its refractive properties and focusing effects are further investigated. The results demonstrate the multifunctionality of a flat phononic slab including superlensing effect near upper AANR frequencies and self-collimation at lower AANR frequencies. Structural complexity control in topology optimization via Moving Morphable Component (MMC) approach Jianhua Zhou Yichao Zhu Weisheng Zhang Xu Guo In the present paper, an explicit method for controlling the structural complexity of continuum structures in topology optimization is proposed. This method is devised under the Moving Morphable Component (MMC) based framework where the geometries of the basic building blocks for topology optimization are described explicitly. Compared to the existing structural complexity control approaches which are generally developed in an implicit geometry/topology description framework, the proposed method allows an explicit definition of structural complexity, which facilitates the construction of the controlling schemes significantly. The effectiveness of the proposed method is demonstrated by numerical examples shown at the end of the paper. Bi-directional Evolutionary Structural Optimization on Advanced Structures and Materials: A Comprehensive Review Liang Xia Qi Xia The evolutionary structural optimization (ESO) method developed by Xie and Steven (1993, [162]), an important branch of topology optimization, has undergone tremendous development over the past decades. Among all its variants , the convergent and mesh-independent bi-directional evolutionary structural optimization (BESO) method developed by Huang and Xie (2007, [48]) allowing both material removal and addition, has become a widely adopted design methodology for both academic research and engineering applications because of its efficiency and robustness. 
This paper intends to present a comprehensive review on the development of ESO-type methods; in particular, the latest convergent and mesh-independent BESO method is highlighted. Recent applications of the BESO method to the design of advanced structures and materials are summarized. Compact Matlab codes using the BESO method for benchmark structural and material microstructural designs are also provided. Design complexity control in truss optimization André Jacomel Torii Rafael Holdorf Lopez Leandro Fleck Fadel Miguel Truss optimization based on the ground structure approach often leads to designs that are too complex for practical purposes. In this paper we present an approach for design complexity control in truss optimization. The approach is based on design complexity measures related to the number of bars (similar to Asadpoure et al. Struct Multidisc Optim 51(2):385–396 2015) and a novel complexity measure related to the number of nodes of the structure. Both complexity measures are continuously differentiable and thus can be used together with gradient-based optimization algorithms. The numerical examples show that the proposed approach is able to reduce design complexity, leading to solutions that are more fit for engineering practice. Besides, the examples also indicate that in some cases it is possible to significantly reduce design complexity with little impact on structural performance. Since the complexity measures are non-convex, a global gradient-based optimization algorithm is employed. Finally, a detailed comparison to a classical approach is presented. Study of biomechanical, anatomical and physiological properties of scorpion stingers for developing biomimetic materials MAT SCI ENG C-BIO S Tao Shu An identification method for enclosed voids restriction in manufacturability design for additive manufacturing structures Shutian Liu Quhao Li Wenjiong Chen Additive manufacturing (AM) technologies, such as selective laser sintering (SLS) and fused deposition modeling (FDM), have become powerful tools for the direct manufacturing of complex parts. This breakthrough in manufacturing technology makes the fabrication of new geometrical features and multiple materials possible. Past research on designs and design methods often focused on how to obtain the desired functional performance of the structures or parts, while the specific manufacturing capabilities and constraints of AM were neglected. However, the inherent constraints in AM processes should be taken into account in the design process. In this paper, enclosed voids, one type of manufacturing constraint of AM, are investigated. In mathematical terms, the enclosed-voids restriction is expressed as the requirement that the solid structure be simply connected. We propose an equivalent description of the simply-connected constraint for avoiding enclosed voids in structures, named the virtual temperature method (VTM). In this method, the voids in the structure are assumed to be filled with a virtual heating material with high heat conductivity, while solid areas are filled with another virtual material with low heat conductivity. Once enclosed voids exist in the structure, the maximum temperature of the structure will be very high. Based upon this method, the simply-connected constraint is equivalent to a maximum temperature constraint, and it can easily be used to formulate the simply-connected constraint in topology optimization. The effectiveness of this description method is illustrated by several examples.
Based upon topology optimization, an example of 3D cantilever beam is used to illustrate the trade-off between manufacturability and functionality. Moreover, the three optimized structures are fabricated by FDM technology to indicate further the necessity of considering the simply-connected constraint in design phase for AM. © 2015, Higher Education Press and Springer-Verlag Berlin Heidelberg. A simple and compact Python code for complex 3D topology optimization ADV ENG SOFTW Zhihao Zuo This paper presents a 100-line Python code for general 3D topology optimization. The code adopts the Abaqus Scripting Interface that provides convenient access to advanced finite element analysis (FEA). It is developed for the compliance minimization with a volume constraint using the Bi-directional Evolutionary Structural Optimization (BESO) method. The source code is composed of a main program controlling the iterative procedure and five independent functions realising input model preparation, FEA, mesh-independent filter and BESO algorithm. The code reads the initial design from a model database (.cae file) that can be of arbitrary 3D geometries generated in Abaqus/CAE or converted from various widely used CAD modelling packages. This well-structured code can be conveniently extended to various other topology optimization problems. As examples of easy modifications to the code, extensions to multiple load cases and nonlinearities are presented. This code is intended for educational purposes and would be useful for researchers and students in the topology optimization field. With further extensions, the code could solve sophisticated 3D conceptual design problems in structural engineering, mechanical engineering and architecture practice. The complete code is given in the appendix section and can also be downloaded from the website: www.rmit.edu.au/research/cism/. Topology optimization with flexible void area Anders Clausen Niels Aage Ole Sigmund This paper presents a methodology for including fixed-area flexible void domains into the minimum compliance topology optimization problem. As opposed to the standard passive elements approach of rigidly specifying void areas within the design domain, the suggested approach allows these areas to be flexibly reshaped and repositioned subject to penalization on their moments of inertia, the positions of their centers of mass, and their shapes. The flexible void areas are introduced through a second, discrete design variable field, using the same discretization as the standard field of continuous density variables. The formulation is based on a combined approach: The primary sub-problem is to minimize compliance, subject to a volume constraint, with a secondary sub-problem of minimizing the disturbance from the flexible void areas. The design update is performed iteratively between the two sub-problems based on an optimality criterion and a discrete update scheme, respectively. The method is characterized by a high flexibility, while keeping the formulation very simple. The robustness and applicability of the method are demonstrated through a range of numerical examples. The flexibility of the method is demonstrated through several extensions, including a shape measure requiring the flexible void area to fit a given reference geometry. Topology optimization—Broadening the areas of application CONTROL CYBERN Erik Lund Martin P. Bendsøe N. 
Olhoff This paper deals with recent developments of topology optimization techniques for application in some new types of design problems. The emphasis is on recent work of the Danish research groups at Aalborg University and at the Technical University of Denmark and focus is on the central role that the choice of objective functions and design parameterization plays for a successful extension of the material distribution approach to new design settings and to new types of physics models. The applications that will be outlined encompass design of laminated composite structures, design for pressure loads, design in fluids, design in acoustics, and design in photonics. A short outline of other design optimization activities is also given. Binding Mechanisms in Selective Laser Sintering and Selective Laser Melting J-P. Kruth Peter Mercelis J. Van Vaerenbergh Marleen Rombouts Purpose This paper provides an overview of the different binding mechanisms in selective laser sintering (SLS) and selective laser melting (SLM), thus improving the understanding of these processes. Design/methodology/approach A classification of SLS/SLM processes was developed, based on the binding mechanism occurring in the process, in contrast with traditional classifications based on the processed material or the application. A broad range of commercial and experimental SLS/SLM processes – found from recent articles as well as from own experiments – was used to explain the different binding mechanism categories. Findings SLS/SLM processes can be classified into four main binding mechanism categories, namely "solid state sintering", "chemically induced binding", "liquid phase sintering – partial melting" and "full melting". Most commercial processes can be classified into the latter two categories, which are therefore subdivided. The binding mechanism largely influences the process speed and the resulting part properties. Research limitations/implications The classification presented is not claimed to be definitive. Moreover some SLM/SLM processes could be classified into more than one category, based on personal interpretation. Originality/value This paper can be a useful aid in understanding existing SLS/SLM processes. It can also serve as an aid in developing new SLS/SLM processes. Multidisciplinary aerospace design optimization: Survey of recent developments Jaroslaw Sobieszczanski-Sobieski Raphael Haftka The increasing complexity of engineering systems has sparked rising interest in multidisciplinary optimization (MDO). This paper surveys recent publications in the field of aerospace, in which the interest in MDO has been particularly intense. The primary c hallenges in MDO are computational expense and organizational complexity. Accordingly, this survey focuses on various methods used by different researchers to address these challenges. The survey is organized by a breakdown of MDO into its conceptual components, reflected in sections on mathematical modelling, approximation concepts, optimization procedures, system sensitivity, and human interface. Because the authors' primary area of expertise is in the structures discipline, the majority of the references focus on the interaction of this discipline with others. In particular, two sections at the end of this review focus on two interactions that have recently been pursued with vigour: the simultaneous optimization of structures and aerodynamics and the simultaneous optimization of structures with active control. 
Bendsoe, M.P.: Optimal Shape Design as a Material Distribution Problem. Structural Optimization 1, 193-202 Shape optimization in a general setting requires the determination of the optimal spatial material distribution for given loads and boundary conditions. Every point in space is thus a material point or a void and the optimization problem is a discrete variable one. This paper describes various ways of removing this discrete nature of the problem by the introduction of a density function that is a continuous design variable. Domains of high density then define the shape of the mechanical element. For intermediate densities, material parameters given by an artificial material law can be used. Alternatively, the density can arise naturally through the introduction of periodically distributed, microscopic voids, so that effective material parameters for intermediate density values can be computed through homogenization. Several examples in two-dimensional elasticity illustrate that these methods allow a determination of the topology of a mechanical element, as required for a boundary variations shape optimization technique. Numerical instabilities in topology optimization: A survey on procedures dealing with checkerboards, mesh-dependencies and local minima Joakim Petersson In this paper we seek to summarize the current knowledge about numerical instabilities such as checkerboards, mesh-dependence and local minima occurring in applications of the topology optimization method. The checkerboard problem refers to the formation of regions of alternating solid and void elements ordered in a checkerboard-like fashion. The mesh-dependence problem refers to obtaining qualitatively different solutions for different mesh-sizes or discretizations. Local minima refers to the problem of obtaining different solutions to the same discretized problem when choosing different algorithmic parameters. We review the current knowledge on why and when these problems appear, and we list the methods with which they can be avoided and discuss their advantages and disadvantages. Method for varying the number of cavities in an optimized topology using Evolutionary Structural Optimization H Alicia Kim Osvaldo Querin Grant P. Steven In recent years, the Evolutionary Structural Optimization (ESO) method has been developed into an effective engineering design tool, allowing various structural constraints to be incorporated into the optimization process such as natural frequency, buckling, stiffness, stress, displacement and heat. However, no attempts have been made to incorporate nonstructural constraints such as the number of cavities in the final topology and manufacturing constraints. This paper introduces a modification of the ESO method named Intelligent Cavity Creation (ICC) by which the number of cavities can be controlled. This method has the additional benefit of eliminating the formation of checkerboard patterns. The proposed ICC algorithm is applied to several optimization problems to show its effectiveness. It is also demonstrated that ICC produces more practical topologies. Morphology-based black and white filters for topology optimization To ensure manufacturability and mesh independence in density-based topology optimization schemes, it is imperative to use restriction methods. This paper introduces a new class of morphology-based restriction schemes that work as density filters; that is, the physical stiffness of an element is based on a function of the design variables of the neighboring elements. 
The new filters have the advantage that they eliminate grey scale transitions between solid and void regions. Using different test examples, it is shown that the schemes, in general, provide black and white designs with minimum length-scale constraints on either or both minimum hole sizes and minimum structural feature sizes. The new schemes are compared with methods and modified methods found in the literature. Simple and effective strategies for achieving diverse and competitive structural designs Kai Yang Shape and topology optimization techniques are widely used to maximize the performance or minimize the weight of a structure through optimally distributing its material within a prescribed design domain. However, existing optimization techniques usually produce a single optimal solution for a given problem. In practice, it is highly desirable to obtain multiple design options which not only possess high structural performance but have distinctly different shapes and forms. Here we present five simple and effective strategies for achieving such diverse and competitive structural designs. These strategies have been successfully applied in the computational morphogenesis of various structures of practical relevance and importance. The results demonstrate that the developed methodology is capable of providing the designer with structurally efficient and topologically different solutions. The structural performance of alternative designs is only slightly lower than that of the optimal design. This work establishes a general approach to achieving diverse and competitive structural forms, which holds great potential for practical applications in architecture and engineering. On the internal architecture of emergent plants J MECH PHYS SOLIDS It remains a puzzling issue why and how the organs in plants living in the same natural environment evolve into a wide variety of geometric architecture. In this work, we explore, through a combination of experimental and numerical methods, the biomechanical morphogenesis of the leaves and stalks of representative emergent plants, which can stand upright and survive in harsh water environments. An interdisciplinary topology optimization method is developed here by integrating both mechanical performance and biological constraint into the bi-directional evolutionary structural optimization technique. The experimental and numerical results reveal that, through natural selection over many million years, these leaves and stalks have been optimized into distinctly different cross-sectional shapes and aerenchyma tissues with intriguing anatomic patterns and improved load-bearing performance. The internal aerenchyma is an optimal compromise between the mechanical performance and functional demands such as air exchange and nutrient transmission. We find that the optimal distribution of the internal material depends on multiple biomechanical factors such as the cross-sectional geometry, hierarchical structures, boundary condition, biological constraint, and material property. This work provides an in-depth understanding of the property–structure–performance–function interrelations of biological materials. The proposed topology optimization method and the presented biophysical insights hold promise for designing highly efficient and advanced structures (e.g., airplane wings and turbine blades) and analyzing other biological materials (e.g., bones, horns, and beaks). 
Explicit control of structural complexity in topology optimization Peng Wei Fused deposition modeling of novel scaffold architectures for Tissue Engineering applications Ibnu Zein Dietmar W. Hutmacher Kim-Cheng Tan S.H. Teoh Evolutionary Topology Optimization of Continuum Structures: Methods and Applications Constraints of distance from boundary to skeleton: For the control of length scale in level set based structural topology optimization Tujin Shi A method is proposed for the control of minimum/maximum length scale in the level set based structural topology optimization. The minimum/maximum length scale of structure is characterized by using the concept of smallest/biggest maximal inscribable ball. In order to prevent trivial zero minimum length scale, the skeleton of structure is utilized and trimmed. The control of length scale is realized by constraining the distance from boundary to skeleton, and the distance is explicitly constructed by using the highly efficient fast marching method. Numerical examples in two dimensions are investigated. Rationalization of trusses generated via layout optimization Linwei He Matthew Gilbert Numerical layout optimization provides a computationally efficient and generally applicable means of identifying the optimal arrangement of bars in a truss. When the plastic layout optimization formulation is used, a wide variety of problem types can be solved using linear programming. However, the solutions obtained are frequently quite complex, particularly when fine numerical discretizations are employed. To address this, the efficacy of two rationalization techniques are explored in this paper: (i) introduction of 'joint lengths', and (ii) application of geometry optimization. In the former case this involves the use of a modified layout optimization formulation, which remains linear, whilst in the latter case a non-linear optimization post-processing step, involving adjusting the locations of nodes in the layout optimized solution, is undertaken. The two rationalization techniques are applied to example problems involving both point and distributed loads, self-weight and multiple load cases. It is demonstrated that the introduction of joint lengths reduces structural complexity at negligible computational cost, though generally leads to increased volumes. Conversely, the use of geometry optimization carries a computational cost but is effective in reducing both structural complexity and the computed volume. Optimal Analysis of Structures by Concepts of Symmetry and Regularity A. Kaveh Symmetry is not only one of the most fundamental concepts in science and engineering, but it is also an ideal bridging idea crossing various branches of sciences and different fields of engineering. In the past, symmetry has been considered important for its aesthetic appeal; however, this century has witnessed a great enhancement in its recognition as a basis of scientific and engineering principle. At the same time, the meaning and utility of symmetry have greatly expanded. It is not surprising that many valuable books are published in this field and regular annual conferences are devoted to symmetry in various fields of science and engineering. In the following, different definitions are provided for symmetry. 
Structures, properties, and functions of the stings of honey bees and paper wasps: A comparative study Biol Open Hong-Ping Zhao Guo-Jun Ma Through natural selection, many animal organs with similar functions have evolved different macroscopic morphologies and microscopic structures. Here, we comparatively investigate the structures, properties, and functions of honey bee stings and paper wasp stings. Their elegant structures were systematically observed. To examine their behaviors of penetrating into different materials, we performed penetration-extraction tests and slow motion analyses of their insertion process. In comparison, the barbed stings of honey bees are relatively difficult to be withdrawn from fibrous tissues (e.g., skin), while the removal of paper wasp stings is easier due to their different structures and insertion skills. The similarities and differences of the two kinds of stings are summarized on the basis of the experiments and observations. © 2015. Published by The Company of Biologists Ltd. Explicit layout control in optimal design of structural systems with multiple embedding components Wenliang Zhong In this paper, two novel methods are proposed for optimizing the layout of structural systems with embedding components considering the minimum/maximum distance constraints between the components. The key ideas are using level set functions to describe the shapes of arbitrary irregular embedding components and resorting to the concept of structural skeleton to formulate the distance control constraints explicitly. Numerical examples presented demonstrate that the proposed approaches can give a complete control of the layout of embedding components in an explicit and local way. Topology optimization approaches A comparative review Kurt Maute Topology optimization has undergone a tremendous development since its introduction in the seminal paper by Bendsøe and Kikuchi in 1988. By now, the concept is developing in many different directions, including "density", "level set", "topological derivative", "phase field", "evolutionary" and several others. The paper gives an overview, comparison and critical review of the different approaches, their strengths, weaknesses, similarities and dissimilarities and suggests guidelines for future research. Incorporating fabrication cost into topology optimization of discrete structures and lattices Alireza Asadpoure Lorenzo Valdevit James K Guest In this article, we propose a method to incorporate fabrication cost in the topology optimization of light and stiff truss structures and periodic lattices. The fabrication cost of a design is estimated by assigning a unit cost to each truss element, meant to approximate the cost of element placement and associated connections. A regularized Heaviside step function is utilized to estimate the number of elements existing in the design domain. This makes the cost function smooth and differentiable, thus enabling the application of gradient-based optimization schemes. We demonstrate the proposed method with classic examples in structural engineering and in the design of a material lattice, illustrating the effect of the fabrication unit cost on the optimal topologies. We also show that the proposed method can be efficiently used to impose an upper bound on the allowed number of elements in the optimal design of a truss system. 
Importantly, compared to traditional approaches in structural topology optimization, the proposed algorithm reduces the computational time and reduces the dependency on the threshold used for element removal. Graph Theory and Applications J. A. Bondy U. S. R. Murty Topology optimization. Theory, methods, and applications. 2nd ed., corrected printing Optimal Structural Analysis An explicit length scale control approach in SIMP-based topology optimization Algorithm Graph Theory and Perfect Graphs Martin Golumbic On the Design of Compliant Mechanisms Using Topology Optimization* This paper presents a method for optimal design of compliant mechanism topologies. The method is based on continuum-type topology optimization techniques and finds the optimal compliant mechanism topology within a given design domain and a given position and direction of input and output forces. By constraining the allowed displacement at the input port, it is possible to control the maximum stress level in the compliant mechanism. The ability of the design method to find a mechanism with complex output behavior is demonstrated by several examples. Some of the optimal mechanism topologies have been manufactured, both in macroscale (hand-size) made in Nylon, and in microscale (<.5mm)) made of micromachined glass. The theory and application of evolutionary structural optimization method J ENG MECH-ASCE X.Y. Yang Explicit feature control in structural topology optimization via level set method Level-set methods for structural topology optimization: A review Nico Paul van Dijk Matthijs Langelaar Fred van Keulen This review paper provides an overview of different level-set methods for structural topology optimization. Level-set methods can be categorized with respect to the level-set-function parameterization, the geometry mapping, the physical/mechanical model, the information and the procedure to update the design and the applied regularization. Different approaches for each of these interlinked components are outlined and compared. Based on this categorization, the convergence behavior of the optimization process is discussed, as well as control over the slope and smoothness of the level-set function, hole nucleation and the relation of level-set methods to other topology optimization methods. The importance of numerical consistency for understanding and studying the behavior of proposed methods is highlighted. This review concludes with recommendations for future research. Topology optimization of continuum structures: A review APPL MECH REV Hans A. Eschenauer It is of great importance for the development of new products to find the best possible topology or layout for given design objectives and constraints at a very early stage of the design process (the conceptual and project definition phase). Thus, over the last decade, substantial efforts of fundamental research have been devoted to the development of efficient and reliable procedures for solution of such problems. During this period, the researchers have been mainly occupied with two different kinds of topology design processes; the Material or Microstructure Technique and the Geometrical or Macrostructure Technique. It is the objective of this review paper to present an overview of the developments within these two types of techniques with special emphasis on optimum topology and layout design of linearly elastic 2D and 3D continuum structures. 
Starting from the mathematical-physical concepts of topology and layout optimization, several methods are presented and the applicability is illustrated by a number of examples. New areas of application of topology optimization are discussed at the end of the article. This review article includes 425 references. Bidirectional evolutionary topology optimization of continuum structures with one or multiple materials There are several well-established techniques for the generation of solid-void optimal topologies such as solid isotropic material with penalization (SIMP) method and evolutionary structural optimization (ESO) and its later version bi-directional ESO (BESO) methods. Utilizing the material interpolation scheme, a new BESO method with a penalization parameter is developed in this paper. A number of examples are presented to demonstrate the capabilities of the proposed method for achieving convergent optimal solutions for structures with one or multiple materials. The results show that the optimal designs from the present BESO method are independent on the degree of penalization. The resulted optimal topologies and values of the objective function compare well with those of SIMP method. Structural Mechanics: Graph and Matrix Methods A Simple Evolutionary Procedure for Structural Optimization A simple evolutionary procedure is proposed for shape and layout optimization of structures. During the evolution process low stressed material is progressively eliminated from the structure. Various examples are presented to illustrate the optimum structural shapes and layouts achieved by such a procedure. Topology optimization for nano‐photonics LASER PHOTONICS REV J.S. Jensen Topology optimization is a computational tool that can be used for the systematic design of photonic crystals, waveguides, resonators, filters and plasmonics. The method was originally developed for mechanical design problems but has within the last six years been applied to a range of photonics applications. Topology optimization may be based on finite element and finite difference type modeling methods in both frequency and time domain. The basic idea is that the material density of each element or grid point is a design variable, hence the geometry is parameterized in a pixel-like fashion. The optimization problem is efficiently solved using mathematical programming-based optimization methods and analytical gradient calculations. The paper reviews the basic procedures behind topology optimization, a large number of applications ranging from photonic crystal design to surface plasmonic devices, and lists some of the future challenges in non-linear applications. Filters in topology optimization INT J NUMER METH ENG Blaise Bourdin In this article, a modified ('filtered') version of the minimum compliance topology optimization problem is studied. The direct dependence of the material properties on its pointwise density is replaced by a regularization of the density field by the mean of a convolution operator. In this setting it is possible to establish the existence of solutions. Moreover, convergence of an approximation by means of finite elements can be obtained. This is illustrated through some numerical experiments. The 'filtering' technique is also shown to cope with two important numerical problems in topology optimization, checkerboards and mesh dependent designs. Copyright © 2001 John Wiley & Sons, Ltd. 
Topology optimization based on graph theory of crash loaded flight passenger seats Axel Schumacher Christian Olschinka Bastian Hoffmann Today, real-world crashworthiness optimization applications are limited to sizing and shape optimization. Topology optimization in crashworthiness design has been withstanding until today any attempt of finding efficient solution algorithms. This is basically due to the high computational effort and the inherent sensitivity of crash simulation responses to design scatterings. In this work, the topology optimization problem shall be addressed with a new approach, in such a way as mathematical graphs are used to describe the optimization sequence (including geometry, loads, design variables and responses, etc.). This design conception is a good advance in topological model flexibility and allows for the application of new (e.g. rule-based) topology optimization algorithms. In this contribution, the topology optimization of crash loaded flight passenger seats is presented. Therefore, we focus on the necessary workflow which includes the graph-based description of the structure´s topology, the CAD description of the structure and the formulation of the crash problem in LS-DYNA. This workflow is included in an optimization loop. Integrated layout design of multi‐component system Jihong Zhu Pierre Beckers A new integrated layout optimization method is proposed here for the design of multi-component systems. By introducing movable components into the design domain, the components layout and the supporting structural topology are optimized simultaneously. The developed design procedure mainly consists of three parts: (i) Introduction of non-overlap constraints between components. The finite circle method (FCM) is used to avoid the components overlaps and also overlaps between components and the design domain boundaries. (ii) Layout optimization of the components and supporting structure. Locations and orientations of the components are assumed as geometrical design variables for the optimal placement while topology design variables of the supporting structure are defined by the density points. Meanwhile, embedded meshing techniques are developed to take into account the finite element mesh change caused by the component movements. (iii) Consistent material interpolation scheme between element stiffness and inertial load. The commonly used solid isotropic material with penalization model is improved to avoid the singularity of localized deformation in the presence of design dependent loading when the element stiffness and the involved inertial load are weakened by the element material removal. Finally, to validate the proposed design procedure, a variety of multi-component system layout design problems are tested and solved on account of inertia loads and gravity center position constraint. Solutions are compared with traditional topology designs without component. Copyright Achieving minimum length scale in topology optimization using nodal design variable and projection functions Jean H. Prevost T. Belytschko A methodology for imposing a minimum length scale on structural members in discretized topology optimization problems is described. Nodal variables are implemented as the design variables and are projected onto element space to determine the element volume fractions that traditionally define topology. The projection is made via mesh independent functions that are based upon the minimum length scale. 
A simple linear projection scheme and a non-linear scheme using a regularized Heaviside step function to achieve nearly 0–1 solutions are examined. The new approach is demonstrated on the minimum compliance problem and the popular SIMP method is used to penalize the stiffness of intermediate volume fraction elements. Solutions are shown to meet user-defined length scale criterion without additional constraints, penalty functions or sensitivity filters. No instances of mesh dependence or checkerboard patterns have been observed. Copyright © 2004 John Wiley & Sons, Ltd. On projection methods, convergence and robust formulations in topology optimization Fengwen Wang Boyan S Lazarov Mesh convergence and manufacturability of topology optimized designs have previously mainly been assured using density or sensitivity based filtering techniques. The drawback of these techniques has been gray transition regions between solid and void parts, but this problem has recently been alleviated using various projection methods. In this paper we show that simple projection methods do not ensure local mesh-convergence and propose a modified robust topology optimization formulation based on erosion, intermediate and dilation projections that ensures both global and local mesh-convergence. KeywordsTopology optimization–Robust design–Compliant mechanisms–Manufacturing constraints Symmetry and Non-uniqueness in Exact Topology Optimization of Structures G. I. N. Rozvany The aim of this article is to initiate an exchange of ideas on symmetry and non-uniqueness in topology optimization. These concepts are discussed in the context of 2D trusses and grillages, but could be extended to other structures and design constraints, including 3D problems and numerical solutions. The treatment of the subject is pitched at the background of engineering researchers, and principles of mechanics are given preference to those of pure mathematics. The author hopes to provide some new insights into fundamental properties of exact optimal topologies. Combining elements of the optimal layout theory (of Prager and the author) with those of linear programming, it is concluded that for the considered problems the optimal topology is in general unique and symmetric if the loads, domain boundaries and supports are symmetric. However, in some special cases the number of optimal solutions may be infinite, and some of these may be non-symmetric. The deeper reasons for the above findings are explained in the light of the above layout theory. KeywordsTopology optimization–Non-uniqueness–Symmetry–Optimal layout theory–Trusses–Grillages Imposing maximum length scale in topology optimization This paper presents a technique for imposing maximum length scale on features in continuum topology optimization. The design domain is searched and local constraints prevent the formation of features that are larger than the prescribed maximum length scale. The technique is demonstrated in the context of structural and fluid topology optimization. Specifically, maximum length scale criterion is applied to (a) the solid phase in minimum compliance design to restrict the size of structural (load-carrying) members, and (b) the fluid (void) phase in minimum dissipated power problems to limit the size of flow channels. Solutions are shown to be near 0/1 (void/solid) topologies that satisfy the maximum length scale criterion. 
When combined with an existing minimum length scale methodology, the designer gains complete control over member sizes that can influence cost and manufacturability. Further, results suggest restricting maximum length scale may provide a means for influencing performance characteristics, such as redundancy in structural design. Optimization methods for truss geometry and topology design Aharon Ben-Tal Jochem Zowe Truss topology design for minimum external work (compliance) can be expressed in a number of equivalent potential or complementary energy problem formulations in terms of member forces, displacements and bar areas. Using duality principles and non-smooth analysis we show how displacements only as well as stresses only formulations can be obtained and discuss the implications these formulations have for the construction and implementation of efficient algorithms for large-scale truss topology design. The analysis covers min-max and weighted average multiple load designs with external as well as self-weight loads and extends to the topology design of reinforcement and the topology design of variable thickness sheets and sandwich plates. On the basis of topology design as an inner problem in a hierarchical procedure, the combined geometry and topology design of truss structures is also considered. Numerical results and illustrative examples are presented. Topology optimization of trusses by growing ground structure method Takao Hagishita Makoto Ohsaki A new method called the growing ground structure method is proposed for truss topology optimization, which effectively expands or reduces the ground structure by iteratively adding or removing bars and nodes. The method uses five growth strategies, which are based on mechanical properties, to determine the bars and nodes to be added or removed. Hence, the method can optimize the initial ground structures such that the modified, or grown, ground structures can generate the optimal solution for the given set of nodes. The structural data of trusses are manipulated using C++ standard template library and the Boost Graph Library, which help alleviate the programming efforts for implementing the method. Three kinds of topology optimization problems are considered. The first problem is a compliance minimization problem with cross-sectional areas as variables. The second problem is a minimum compliance problem with the nodal coordinates also as variables. The third problem is a minimum volume problem with stress constraints under multiple load cases. Six numerical examples corresponding to these three problems are solved to demonstrate the performance of the proposed method.
Entropy clustering-based granular classifiers for network intrusion detection
Hui Liu1,2, Gang Hao3 & Bin Xing2
Support vector machine (SVM) is one of the most effective classifiers in the field of network intrusion detection; however, some important information related to classification might be lost during preprocessing. In this paper, we propose a granular classifier based on the entropy clustering method and the support vector machine to overcome this limitation. The overall classifier is realized with the aid of if-then rules, each consisting of a premise part and a conclusion part. The premise part, realized by the entropy clustering method, is used here to address the problem of a possible curse of dimensionality, while the conclusion part, realized by support vector machines, is utilized to build local models. In contrast to the conventional SVM, the proposed entropy clustering-based granular classifier (ECGC) can be regarded as an entropy-based support function machine. Moreover, an opposition-based genetic algorithm is proposed to optimize the design parameters of the granular classifiers. Experimental results show the effectiveness of the ECGC when compared with some classical models reported in the literature.
In the past decades, numerous techniques from artificial intelligence and mathematics have been applied to a wide range of applications [1,2,3,4,5]. Owing to its effectiveness in high-dimensional spaces, the support vector machine (SVM) has become one of the most important classification models. Many researchers have utilized the SVM to solve classification problems in the field of network intrusion detection. Chitrakar and Huang [6] have presented the selection of candidate support vectors in incremental SVM for network intrusion detection. Shams et al. [7] have used a trust-aware SVM for network intrusion detection problems. Aburomman and Reaz [8] have proposed a novel weighted SVM multiclass classifier for intrusion detection systems. Yaseen et al. [9] have constructed a multi-level hybrid SVM by means of K-means for network intrusion detection. Vijayanand et al. [10] have developed genetic-algorithm-based feature selection in the design of SVMs for network intrusion detection. Raman et al. [11] have proposed an efficient intrusion detection system with the aid of a genetic-algorithm-optimized SVM. All these studies have developed SVMs based on either genetic algorithms or clustering methods; however, a design of the SVM that combines both clustering methods and genetic algorithms remains open.
The entropy clustering method (ECM) [12] is a clustering method based on the concept of entropy that has been widely used in network intrusion detection. In comparison with conventional clustering methods such as K-means and C-means, the ECM can determine the number of clusters once the features of the dataset are fixed. In the design of classification models, we require some crucial parameters for determining the structure. As powerful optimization tools [13,14,15,16,17], genetic algorithms have been applied in many fields. In some previous studies [10], genetic algorithms have been successfully applied to optimize support vector models. However, it should be stressed that the genetic algorithm could still get trapped in sub-optimal regions of the search space. Furthermore, the problem of finding "good" parameters in the design of rule-based classification models remains open.
In this study, we propose a rule-based granular classifier by means of the entropy clustering method and support vector machines for network intrusion detection. The overall granular classifier is designed by means of a series of rules that consist of a premise part and a conclusion part. The premise part is realized by the entropy clustering method, while the conclusion part is realized with the aid of the support vector machine. In some sense, the proposed entropy clustering-based granular classifier (ECGC) can be regarded as an entropy-based support function machine. Furthermore, an opposition-based genetic algorithm (OGA) is proposed to optimize the parameters of the granular classifier.
The paper is organized as follows. Section 2 presents the design of ECGC. Section 3 deals with the opposition-based genetic algorithm and the optimization of ECGC. Section 4 reports on experiments through comparative studies. Finally, some conclusions are summarized in Section 5.
A design of the ECGCs
In the design of ECGC, the overall classification is divided into a number of rules that consist of a premise part and a conclusion part. The premise part of the rules, determined by the entropy clustering method, captures the "rough, major structure," while the conclusion part (local model), realized by SVMs, captures the "subtle, accurate structure." In this way, we construct the ECGC. Such rule-based classifiers can be expressed with "if-then" rules
$$ \mathbf{R}^i:\ \text{IF}\ \mathbf{x}\ \text{is in cluster}\ C_i\ \text{THEN}\ y_i = f_i(\mathbf{x}) $$
where \( R^i \) represents the ith rule, \( C_i \) denotes the ith cluster, i = 1, ⋯, n, n is equal to the number of rules, and \( f_i(\mathbf{x}) \) denotes the consequent output of the ith rule; the pattern classifiers are described by means of the discriminant functions \( f_i(\mathbf{x}) \). The overall design of ECGC is shown in Fig. 1 (An overall design of ECGC).
Realization of premise part of rules using entropy clustering method
In the design of ECGC, the premise part of the rules is formed by the entropy clustering method. Let G = (V, D) be an undirected graph, where V denotes the vertex set and D stands for the edge set. The entropy clustering method can be summarized in the following steps:
[Step 1] Calculate the entropy rate E(P) by using the following expression:
$$ E(P) = -\sum_i u_i \sum_j h_{ij}(P)\,\log\big(h_{ij}(P)\big) $$
where E(P) represents the entropy rate, which quantifies the uncertainty of a random process P = {Pt | t ∈ T}. Here, T denotes some index set.
[Step 2] Calculate the balancing term B(P) by using the following formula (a small numerical sketch of Steps 1–2 is given after the step list):
$$ B(P) = -\sum_{i=1}^{N_P} \frac{|S_i|}{|V|}\,\log\!\left(\frac{|S_i|}{|V|}\right) - N_P $$
where \( N_P \) represents the number of connected components in the graph and \( S_P = \{S_1, S_2, \ldots, S_{N_P}\} \) denotes the graph partitioning associated with P.
[Step 3] Set P ← ∅ and U ← D.
[Step 4] Set \( a_1 \leftarrow \underset{a\in U}{\operatorname{argmax}}\ \big[F\big(P \cup \{a\}\big) - F(P)\big] \), where F(P) = E(P) + kB(P) and k denotes the number of clusters.
[Step 5] If \( P \cup \{a_1\} \) is admissible (\( P \cup \{a_1\} \in I \)), set \( P \leftarrow P \cup \{a_1\} \); in either case remove \( a_1 \) from U, i.e., \( U \leftarrow U \setminus \{a_1\} \).
[Step 6] Repeat Steps 4–5 until U = ∅.
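To make Steps 1–2 concrete, the following is a minimal sketch, not the authors' code, of how the entropy rate E(P) and the balancing term B(P) might be evaluated for a toy weighted graph and a given set of selected edges. The function names, the choice of u_i as the normalized total edge weight of vertex i, and the reading of h_ij(P) as the transition probability of a random walk restricted to the selected edges are assumptions made for illustration, since the text above does not define them explicitly.

```python
# Minimal sketch (not the authors' implementation) of Steps 1-2 of the
# entropy clustering method; u_i and h_ij(P) are interpreted as described above.
import numpy as np

def entropy_rate(W, selected):
    """E(P) = -sum_i u_i sum_j h_ij(P) log h_ij(P) for the selected edge set P."""
    n = W.shape[0]
    w_i = W.sum(axis=1)                  # total incident weight of each vertex
    u = w_i / W.sum()                    # assumed stationary distribution u_i
    E = 0.0
    for i in range(n):
        p_loop = 1.0                     # unselected mass kept as a self-loop
        for j in range(n):
            if i != j and selected[i, j]:
                h = W[i, j] / w_i[i]
                if h > 0:
                    E -= u[i] * h * np.log(h)
                    p_loop -= h
        if p_loop > 1e-12:
            E -= u[i] * p_loop * np.log(p_loop)
    return E

def balancing_term(labels):
    """B(P) = -sum_a (|S_a|/|V|) log(|S_a|/|V|) - N_P for the induced partition."""
    counts = np.bincount(labels)
    counts = counts[counts > 0]
    frac = counts / labels.size
    return -(frac * np.log(frac)).sum() - counts.size

# Toy example: 4 vertices, a similarity matrix W, and a selected edge set P
W = np.array([[0.0, 0.9, 0.1, 0.0],
              [0.9, 0.0, 0.1, 0.1],
              [0.1, 0.1, 0.0, 0.8],
              [0.0, 0.1, 0.8, 0.0]])
P = np.zeros_like(W, dtype=bool)
P[0, 1] = P[1, 0] = True                 # connect vertices 0 and 1
P[2, 3] = P[3, 2] = True                 # connect vertices 2 and 3
labels = np.array([0, 0, 1, 1])          # resulting components S_1, S_2
k = 2                                    # desired number of clusters
F = entropy_rate(W, P) + k * balancing_term(labels)
print("E(P) =", entropy_rate(W, P), " B(P) =", balancing_term(labels), " F(P) =", F)
```

In the greedy loop of Steps 3–6, F(P) = E(P) + kB(P) would be re-evaluated for each candidate edge and the edge with the largest gain would be added at every iteration.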
Assume that the training dataset T is formed by the following expression: $$ T=\left\{\left({x}_1,{y}_1\right),\left({x}_2,{y}_2\right),...,\left({x}_{m},{y}_{m}\right),...,\left({x}_{n},{y}_{n}\right)\right\} $$ where xm represents a training sample, ym ∈ {−1, 1} is the class label of xm, and n stands for the total number of training samples, m = 1, 2, ..., n. Suppose that w and b are the parameters of the hyperplane function, which can be expressed as follows: $$ f(x)= w^{\top }x+b $$ Then, the optimal values of w and b can be obtained from the following dual model: $$ \begin{array}{l}\underset{p}{\min}\ \frac{1}{2}\sum \limits_{m=1}^n\sum \limits_{j=1}^n{y}_m{y}_j{p}_m{p}_jK\left({x}_m,{x}_j\right)-\sum \limits_{m=1}^n{p}_m\\ s.t.\quad \sum \limits_{m=1}^n{y}_m{p}_m=0,\\ \qquad\ 0\le {p}_m\le C.\end{array} $$ where p = (p1, ..., pn)T denotes the Lagrange multiplier vector, C represents a penalty parameter, K stands for a kernel function, and xm, xj, ym, yj are the mth input sample, the jth input sample, the label of the mth input sample, and the label of the jth input sample, respectively. The support vector machine procedure can be summarized in the following steps: [Step 1] Divide the entire dataset into a training dataset and a testing dataset. [Step 2] Estimate the parameters w and b based on expression (6). [Step 3] Calculate the decision function of the support vector machine according to expression (5). [Step 4] Calculate the labels of the testing data based on the decision function. [Step 5] Obtain the classification results based on the training and testing data. Optimization of ECGC using opposition-based genetic algorithms Like other classification models, the performance of ECGC is dramatically affected by its parameters. Here we present an opposition-based genetic algorithm as the vehicle for the optimization of the parameters in the design of ECGC. The mechanism of opposition-based learning (OBL) [18, 19] has been shown to be an effective concept to enhance various optimization approaches. Let us recall the basic concept. Opposition-based point [19]: let P = (x1, x2, ..., xD) be a point in a D-dimensional space, where x1, x2, ..., xD ∈ R and xi ∈ [ai, bi], ∀i ∈ {1, 2, ..., D}. The opposite point \( \breve{P}=\left(\breve{x}_1,\breve{x}_2,...,\breve{x}_D\right) \) is completely defined by its components $$ \breve{x}_i={a}_i+{b}_i-{x}_i $$ Opposition-based optimization [19]: let P = (x1, x2, ..., xD) be a point in a D-dimensional space (i.e., a candidate solution). Assume f(•) is a fitness function. According to the definition of the opposite point, \( \breve{P}=\left(\breve{x}_1,\breve{x}_2,...,\breve{x}_D\right) \) is the opposite of the point P = (x1, x2, ..., xD). Now, if \( f\left(\breve{P}\right)\ge f(P) \), then the point P can be replaced with \( \breve{P} \). Hence, a point and its opposite point are evaluated simultaneously in order to continue with the one of higher fitness. With the opposition concept, we develop the opposition-based genetic operator. The overall opposition-based genetic algorithm can be summarized as follows: [Step 1] Randomly generate the population of the genetic algorithm, where the performance of ECGC is the objective function and the parameters in the design of ECGC are encoded as the chromosome.
[Step 2] Update the population based on the opposition-based population operator. [Step 2.1] Find the interval boundaries [aj, bj] in the population set P1, where \( {a}_j=\min_k \left({x}_j^k\right),\ {b}_j=\max_k \left({x}_j^k\right) \), j = 1, 2, ..., d; k = 1, 2, ..., h. Here, h denotes the size of the population, and d represents the dimension of an individual. [Step 2.2] For each individual, generate a new individual \( {X}^{\mathrm{new}}=\left({x}_1^{\mathrm{new}},...,{x}_j^{\mathrm{new}},...,{x}_d^{\mathrm{new}}\right) \) based on the expression \( {x}_j^{\mathrm{new}}={a}_j+{b}_j-{x}_j. \) [Step 2.3] Obtain the opposition population set P2 by calculating the fitness value of each Xnew. [Step 2.4] Obtain the final population Pnew by selecting the best h individuals from P1 ∪ P2. [Step 3] Generate new individuals based on crossover. [Step 4] Generate new individuals based on mutation. [Step 5] Generate new individuals based on the opposition-based genetic operator. [Step 5.1] Find the interval boundaries [aj, bj] in the population set P1, where \( {a}_j=\min_k \left({x}_j^k\right),\ {b}_j=\max_k \left({x}_j^k\right) \), j = 1, 2, ..., d; k = 1, 2, ..., h. Here, h denotes the size of the population, and d represents the dimension of an individual. [Step 6] Select the new individuals for the current population. [Step 7] Repeat steps 3–6 until the termination condition is satisfied. A design procedure of the ECGCs The overall design methodology of entropy-based clustering granular classification is described in this section. The design of ECGC can be summarized in the following steps (see Fig. 2). An overall flowchart of design of ECGC Step 1: Division of dataset The original data is divided into training and testing datasets. Training data is used to construct the model of ECGC, while the testing data is utilized to evaluate the performance of ECGC. Suppose that the original input–output dataset is denoted as (xi,yi) = (x1i, x2i, …, xni, yi), i = 1, 2, …, N, where N is the number of data points. Let T be the number of correctly classified patterns. The classification rate (CR) can be represented as follows $$ CR=\frac{T}{N}\times 100\% $$ Furthermore, let TR be the classification rate for the training data, and TE be the classification rate for the testing data. TR serves as the objective function (viz. performance index, PI) and TE stands for the testing performance index (TPI). Step 2: Design of ECGC architecture with the aid of OGA The overall design of ECGC can be regarded as the construction of rules that comprise the premise part and the conclusion part. Here, the premise part is realized based on the entropy-clustering method, while the conclusion part is realized by means of SVMs. OGA is used here to optimize the parameters not only in the entropy-based clustering method but also in the design of SVMs. Specifically, in the ECGC, an individual is denoted as a vector comprising the number of clusters, the number of selected input variables, the input variables to be selected, and the parameters for each rule, as shown in Fig. 3. The overall length of the individual corresponds to the number of clusters (viz. rules) to be used. Individual composition of OGA and its interpretation Step 3: Check the termination criterion As to the termination criterion, we have used two different conditions.
The first condition is that the number of loops is not more than a predetermined number, while the second condition is that the performance of the current local model is worse than a predetermined value [20]. It has to be stressed that the final optimal ECGC is determined experimentally based on a sound compromise between high accuracy and low model complexity. Step 4: Final output Report the optimal ECGC and the final output. This section reports the experimental results of the proposed ECGC models. To evaluate the performance of the ECGC, we first experimented on some benchmark machine learning datasets [21,22,23,24,25,26], and then applied the ECGC to the KDDCUP 99 network intrusion detection data. The symbols used in these experiments are listed as follows: TR denotes the performance index of training data, while TE represents the performance index of testing data. Furthermore, the parameters and boundaries of OGA are summarized in Table 1. (The selection of these specific parameter values follows references [10, 27].) Table 1 List of the parameters of the OGA Machine learning data Some machine learning datasets are used to evaluate the performance of the proposed ECGC. In these experiments, datasets are partitioned into two parts: 80% of the data is considered as training data, while the remaining 20% is regarded as testing data. Iris data The iris flower dataset is a multivariate dataset introduced by Sir Ronald Fisher as an example of discriminant analysis. This is a classical dataset consisting of 150 input-output pairs, four input variables, and three classes. Figure 4 depicts the values of the performance index (TR and TE) vis-à-vis the ECGCs with an increasing number of rules. As shown in Fig. 4, the value of the performance index for the training data (TR) increases with the number of rules, reflecting the increasing prediction abilities of the granular classifiers. It is clear that the optimal classifier can emerge for a particular number of assigned rules (clusters). The testing performance TE increases up to two rules and levels off once the number of rules reaches five. This tendency illustrates that increasing the number of rules improves the prediction abilities. Performance index of ECGCs for the Iris data Figure 5 displays the values of the performance index TE ranging from one rule to five rules for the Iris data when selecting different parameters (penalty term and kernel bandwidth of the conclusion part). In most cases, the value of the performance index TE rises with the increasing number of rules. This tendency demonstrates that a larger number of rules is beneficial to TE. Performance index (TE) ranging from one rule to five rules for the Iris data Table 2 summarizes the experimental results. It is shown that the proposed ECGCs achieve a classification rate of 98.25 ± 0.21 with five rules. Table 2 Comparison of classification rate with previous classifiers (Iris data) Some selected machine learning data Five selected machine learning datasets are further used to evaluate the performance of the proposed ECGC model. Here, the selected datasets, with different numbers of input variables, are summarized in Table 3. Table 3 Description of five selected machine learning data Table 4 further shows the comparative results of the proposed ECGC and some well-known machine learning models. As shown in Table 4, the proposed ECGC achieves better classification accuracy as well as better prediction when compared to the models reported in the literature.
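Looking back at the opposition-based population update (Steps 2.1–2.4 of the OGA above), the short Python sketch below illustrates how opposite individuals \( \breve{x} = a + b - x \) are generated from the current population boundaries and how the best h individuals of the union are retained. The array shapes and the toy fitness function are illustrative assumptions, not the ECGC objective or the authors' implementation.

```python
import numpy as np

def opposition_update(population: np.ndarray, fitness) -> np.ndarray:
    """One opposition-based population update (OGA Steps 2.1-2.4).

    population : (h, d) array, h individuals of dimension d.
    fitness    : callable mapping a (d,) individual to a scalar (higher is better).
    """
    h, _ = population.shape
    # Step 2.1: interval boundaries per dimension, taken from the current population
    a = population.min(axis=0)
    b = population.max(axis=0)
    # Step 2.2: opposite individuals  x_new = a + b - x
    opposite = a + b - population
    # Steps 2.3-2.4: evaluate the union and keep the best h individuals
    union = np.vstack([population, opposite])
    scores = np.apply_along_axis(fitness, 1, union)
    best_idx = np.argsort(scores)[::-1][:h]
    return union[best_idx]

# Illustrative usage with a toy fitness function (an assumption for the example):
rng = np.random.default_rng(0)
pop = rng.uniform(-5, 5, size=(20, 4))
new_pop = opposition_update(pop, fitness=lambda x: -np.sum(x ** 2))
```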
KDDCUP99 data In the field of network intrusion detection [27,28,29], several datasets are available to evaluate the performance of models. To evaluate the performance of ECGC, here we evaluate the ECGC on the benchmark KDDCUP99 data. The KDDCUP99 data has 5,000,000 labeled records (viz. patterns) and 41 features (viz. input variables) provided by the Massachusetts Institute of Technology. This dataset consists of 24 different types of attacks that are divided into four groups: DoS, Probe, U2R, and R2L. According to some studies [30,31,32], the filtered 10% KDDCUP99 data described in Table 5 is commonly used when dealing with network intrusion detection problems. In the experiments, the dataset is partitioned into two parts: 50% of the data is utilized as training data and the remaining 50% is considered as testing data. Moreover, in order to compare with other models, we also used the existing performance indices [27,28,29,30,31,32]: Table 5 Distribution for the 10% KDD Cup 99 dataset True positive (TP). A TP represents a correct detection of an attack; False positive (FP). A FP denotes an indication of an attack on traffic that should have been classified as "normal"; True negative (TN). A TN stands for a correct classification of "normal" traffic; False negative (FN). A FN denotes a real attack that was misidentified as "normal" traffic; Accuracy. Accuracy is the common metric used for assessing the overall effectiveness of a classifier. The expression of Accuracy can be formulated as follows: $$ Accuracy=\left( TP+ TN\right)/\left( FP+ FN+ TP+ TN\right) $$ The experimental results of ECGC are compared with the results of several well-known models reported in the literature, as shown in Table 6. It is evident that the proposed ECGC outperforms the cited approaches, with consistently good detection across all four attack classes. Table 6 Comparison results for the 10% KDD Cup 99 dataset During preprocessing, conventional support vector machines have some inevitable limitations; in particular, some important information related to classification might be lost. In this study, we have proposed ECGC to overcome this limitation. In the design of ECGC, SVMs are explored as local models that form the conclusion part of the rules, while the premise part of the rules is realized with the aid of the entropy-based clustering method. A genetic algorithm is utilized to optimize the parameters when constructing the ECGC. It is evident that the proposed ECGC can be regarded as an extension of SVMs to some extent. Experimental results on several well-known datasets demonstrate the effectiveness of the ECGC, especially for the network intrusion detection dataset. More importantly, with the proposed ECGC, one can efficiently construct the optimal model (viz. optimize the parameters in the design of the model), which is the key issue for improving performance when constructing models. For future studies, new optimization algorithms can be incorporated to obtain further optimized ECGC. Furthermore, several objectives can be considered when constructing ECGC, so that one can also develop multiobjective optimized ECGC. This study aims at the design of classifiers for network intrusion detection.
A granular classifier based on the entropy-clustering method and the support vector machine is constructed to overcome the shortcoming that most conventional classifiers such as SVM may lose some important information during preprocessing. The proposed granular classifier, designed by means of a series of rules, can also be regarded as an entropy-based support function machine. Experiments illustrate that the granular classifier obtains "good" results in comparison with some well-known classifiers. It has to be stressed that granular classifiers can further improve their performance with the aid of the opposition-based genetic algorithm. Experimental results show that the performance of the granular classifier is generally improved. Also, the number of rules has a strong effect on the final performance of the granular classifiers. Generally, as the number of rules grows, the performance of granular classifiers gradually increases while their complexity rises. CR: Classification rate ECGC: Entropy clustering-based granular classifiers ECM: Entropy clustering method FN: False negative FP: False positive GA: Genetic algorithm OBL: Opposition-based learning OGA: Opposition-based genetic algorithm TE: Classification rate for testing data TN: True negative TP: True positive TPI: Testing performance index TR: Classification rate for training data P. Huijse, P.A. Estevez, P. Protopapas, J.C. Principe, P. Zegers, Computational intelligence challenges and applications on large scale astronomical time series databases. IEEE Computational Intelligence Mag. 9(3), 27–39 (2014) W. Huang, J. Wang, The shortest path problem on a time-dependent network with mixed uncertainty of randomness and fuzziness. IEEE Transac Intelligent Trans Syst. 17(11), 3194–3204 (2016) L. Wang, H. Zhen, X. Fang, S. Wan, W. Ding, Y. Guo, A unified two-parallel-branch deep neural network for joint gland contour and segmentation learning. Future Gen Comp Syst. 100, 316–324 (2019) Q. Xu, L. Wang, X.H. Hei, P. Shen, W. Shi, L. Shan, GI/Geom/1 queue based on communication model for mesh networks. Int J Comm Syst. 27(11), 3013–3029 (2014) W. Huang, L. Ding, The Shortest Path Problem on a Fuzzy Time-Dependent Network. IEEE Transac Comm. 60(11), 3376–3385 (2012) R. Chitrakar, C. Huang, Selection of candidate support vectors in incremental SVM for network intrusion detection. Comp Sec. 45, 231–241 (2014) E.A. Shams, A. Rizaner, A.H. Ulusoy, Trust aware support vector machine intrusion detection and prevention system in vehicular ad hoc networks. Comp Sec. 78, 245–254 (2018) A.A. Aburromman, M.B.I. Reaz, A novel weighted support vector machines multiclass classifier based on differential evolution for intrusion detection systems. Info Sci. 414, 225–246 (2017) W.L.A. Yaseen, Z.A. Othman, M.Z.A. Nazri, Multi-level hybrid support vector machine and extreme learning machine based on modified K-means for intrusion detection systems. Expert Syst Appl. 67, 296–303 (2017) R. Vijayanand, D. Devaraj, B. Kannapiran, Intrusion detection system for wireless mesh network using multiple support vector machine classifiers with genetic-algorithm-based feature selection. Comp Sec. 77, 304–314 (2018) M.R.G. Raman, N. Somu, K. Kirthivasan, R. Liscano, V.S.S. Sriram, An efficient intrusion detection system based on hypergraph genetic algorithm for parameter optimization and feature selection in support vector machine. Knowl-Based Syst. 134, 1–12 (2017) M.Y. Liu, O. Tuzel, S. Ramalingam, R.
Chellappa, Entropy-rate clustering: cluster analysis via maximizing a submodular function subject to a matroid constraint. IEEE Transac Pattern Anal Machine Intell. 36(1), 99–111 (2014) R. Zhang, P. Xie, C. Wang, G. Liu, S. Wan, classifying transportation mode and speed from trajectory data via deep multi-scale learning. Computer Networks. 162, 1–13 (2019) W. Huang, S.K. Oh, W. Pedrycz, Hybrid fuzzy wavelet neural networks architecture based on polynomial neural networks and fuzzy set/relation inference-based wavelet neurons. IEEE Transac Neural Networks Learn Syst. 29(8), 3452–3462 (2018) W. Li, X. Liu, J. Liu, P. Chen, S. Wan, X. Cui, On improving the accuracy with auto-encoder on conjunctivitis. App Soft Computing. 81, 1–11 (2019) A.R. Solis, G. Panoutsos, Interval type-2 radial basis function neural networks: a modeling framework. IEEE Transac Fuzzy Syst. 23, 457–473 (2015) W. Huang, L. Ding, Project-Scheduling problem with random time-dependent activity duration times. IEEE Transac Eng Management. 58(2), 377–387 (2011) W. Huang, S.K. Oh, Z. Guo, W. Pedrycz, A space search optimization algorithm with accelerated convergence strategies. App Soft Computing. 13, 4659–4675 (2013) S. Rahnamayan, H.R. Tizhoosh, M.A. Salama, Opposition-Based differential evolution. IEEE Transac Evol Comp. 12(1), 64–79 (2008) W. Huang, S.K. Oh, W. Pedrycz, IEEE Transac Fuzzy Syst Fuzzy Wavelet Neural Networks: Analysis and Design. 25(5), 3452-3462, 1329-1341 (2017). M.E. Tipping, Adv. Neural Inf. Process. Syst. The relevance vector machine. 12, 652-658 (2000). M.A. Tahir, A. Bouridane, F. Kurugollu, Simultaneous feature selection and feature weighting using hybrid tabu search/K-nearest neighbor classifier. Pattern Recog Letters. 28(4), 438–446 (2007) J.P. Mei, L. Chen, Fuzzy clustering with weighted methods for relational data. Pattern Recog. 43(5), 1964–1974 (2010) V. Vapnik, Spring-Verlag. The nature of statistical learning theory. (1995). T. Xiong and V. Cherkassy, Proceedings of the International Joint Conference on Neural Networks. A combined SVM and LDA approach for classification. 1455-1459 (2005). Wei Huang, Sung-Kwun Oh, Witold Pedrycz, Neural Networks. Design of hybrid radial basis function neural networks (HRBFNNs) realized with the aid of hybridization of fuzzy clustering method (FCM) and polynomial neural networks (PNNs). 60, 166-181 (2014). V. Jaiganesh, S. Mangayarkarasi, P. Sumathi, An efficient algorithm for network intrusion detection system. Int J Comp Applications. 90(12), 12–16 (2014) W. Wei, X.L. Yang, P.Y. Shen, B. Zhou, Holes detection in anisotropic sensornets: Topological methods. Int J Distr Sensor Networks. 8(10), 1–10 (2012) X. Wang, Z. Zhang, J. Li, Y. Wang, H. Cao, Z. Li, L. Shan, An optimized encoding algorithm for systematic polar codes. EURASIP J Wireless Comm Network. 193, 1–12 (2010) S.J. Jang, C.H. Han, K.E. Lee, S.J. Yoo, Reinforcement learning-based dynamic band and channel selection in cognitive radio ad-hoc networks. EURASIP J Wireless Comm Network. 131, 1–25 (2019) H. A. Nguyen, D. Choi, Berlin Heidelberg: Springer. Application of data mining to network intrusion detection: Classifier selection model. 2014. M. Sabhnani, G. Serpen, Application of machine learning algorithms to KDD intrusion detection dataset within misuse detection context. In MLMTA, 2003. This work was not supported by any fund. 
School of Computer and Information Technology, Beijing Jiaotong University, Beijing, China Beijing Institute of Information Application Technology, Beijing, China Hui Liu & Bin Xing School of Computer Science and Engineering, Tianjin University of Technology, Beijing, China Gang Hao Bin Xing All authors contribute to the concept, the design and developments of the theory analysis and algorithm, and the simulation results in this manuscript. All authors read and approved the final manuscript. Correspondence to Gang Hao. Competing interest Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. Liu, H., Hao, G. & Xing, B. Entropy clustering-based granular classifiers for network intrusion detection. J Wireless Com Network 2020, 4 (2020). https://doi.org/10.1186/s13638-019-1567-1 Entropy clustering-based granular classifiers (ECGC) Support vector machine (SVM) Genetic Algorithms (GAs) Algorithms and Architectures for Industrial Wireless Sensor Networks
Methodology article Statistical integration of two omics datasets using GO2PLS Zhujie Gu ORCID: orcid.org/0000-0001-7675-80001, Said el Bouhaddani1, Jiayi Pei2, Jeanine Houwing-Duistermaat1,3,4 & Hae-Won Uh1 Nowadays, multiple omics data are measured on the same samples in the belief that these different omics datasets represent various aspects of the underlying biological systems. Integrating these omics datasets will facilitate the understanding of the systems. For this purpose, various methods have been proposed, such as Partial Least Squares (PLS), decomposing two datasets into joint and residual subspaces. Since omics data are heterogeneous, the joint components in PLS will contain variation specific to each dataset. To account for this, Two-way Orthogonal Partial Least Squares (O2PLS) captures the heterogeneity by introducing orthogonal subspaces and better estimates the joint subspaces. However, the latent components spanning the joint subspaces in O2PLS are linear combinations of all variables, while it might be of interest to identify a small subset relevant to the research question. To obtain sparsity, we extend O2PLS to Group Sparse O2PLS (GO2PLS) that utilizes biological information on group structures among variables and performs group selection in the joint subspace. The simulation study showed that introducing sparsity improved the feature selection performance. Furthermore, incorporating group structures increased robustness of the feature selection procedure. GO2PLS performed optimally in terms of accuracy of joint score estimation, joint loading estimation, and feature selection. We applied GO2PLS to datasets from two studies: TwinsUK (a population study) and CVON-DOSIS (a small case-control study). In the first, we incorporated biological information on the group structures of the methylation CpG sites when integrating the methylation dataset with the IgG glycomics data. The targeted genes of the selected methylation groups turned out to be relevant to the immune system, in which the IgG glycans play important roles. In the second, we selected regulatory regions and transcripts that explained the covariance between regulomics and transcriptomics data. The corresponding genes of the selected features appeared to be relevant to heart muscle disease. GO2PLS integrates two omics datasets to help understand the underlying system that involves both omics levels. It incorporates external group information and performs group selection, resulting in a small subset of features that best explain the relationship between two omics datasets for better interpretability. With the advancements in high throughput technology, multiple omics data are commonly available on the same subjects. To identify a set of relevant related features across the omics levels, these datasets need to be integrated and analyzed jointly. For statistical integration of omics data, there are several challenges to overcome: complex correlation structure within and between omics data, high-dimensionality (\(p\gg n\), or "large p, small n"), heterogeneity between different omics datasets, and selection of relevant features in each dataset. To deal with the first two challenges, Partial Least Squares (PLS) has been proposed [1, 2]. Dimension reduction is achieved by decomposing two datasets X and Y into joint and residual subspaces. The joint (low-dimensional) subspace of one dataset represents the best approximation of X or Y based on maximizing the covariance of the two. 
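To make the covariance-maximization idea concrete, the short sketch below (illustrative Python, not the implementation used in this paper) extracts the first pair of PLS joint components as the leading singular vectors of the cross-covariance matrix \(Y^{\top}X\) of two column-centered data matrices; the variable names and the toy usage are assumptions made for the example.

```python
import numpy as np

def first_pls_component(X: np.ndarray, Y: np.ndarray):
    """First pair of PLS joint loadings/scores via the SVD of Y^T X.

    X : (N, p) and Y : (N, q) data matrices measured on the same N samples.
    Returns the loading vectors (w, c) and the joint scores (t, u).
    """
    Xc = X - X.mean(axis=0)          # column-center both blocks
    Yc = Y - Y.mean(axis=0)
    C = Yc.T @ Xc                    # (q, p) cross-covariance matrix (up to a 1/(N-1) factor)
    U, s, Vt = np.linalg.svd(C, full_matrices=False)
    c, w = U[:, 0], Vt[0, :]         # leading left/right singular vectors
    t, u = Xc @ w, Yc @ c            # joint scores with maximal covariance
    return w, c, t, u

# Toy usage on random data (shapes are illustrative only):
rng = np.random.default_rng(1)
X, Y = rng.normal(size=(50, 100)), rng.normal(size=(50, 20))
w, c, t, u = first_pls_component(X, Y)
```

Subsequent components are obtained analogously after deflating the data matrices with the components already extracted.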
However, by integrating two heterogeneous omics datasets, the PLS joint components also contain (strong) omic-specific variation. This heterogeneity can be caused by differences (e.g. between methylation and glycomics) in size, distribution, and measurement platform. Ignoring these omic-specific characteristics (variation specific to each of the data) in the model may lead to a biased representation of the underlying system. Two-way orthogonal partial least squares (O2PLS) [3, 4] was proposed to decompose each dataset into joint, orthogonal, and residual subspaces. The orthogonal subspaces in X and Y capture variation unrelated to each other, making the joint subspaces better estimates for the true relation between X and Y. Hence, O2PLS accounts for the heterogeneity of two omics datasets. However, the resulting low-dimensional latent components spanning the joint subspaces are linear combinations of all the observed variables. Therefore, to select a small subset of relevant features for better interpretation, one can impose sparsity on the loadings of the principal components. A straightforward approach is to ignore all loadings smaller than some threshold value, effectively treating them as zero, which can be misleading [5]. Several sparse methods based on PLS have been proposed. Chun and Keleş proposed sparse PLS (SPLS) [6] which fits PLS on a reduced X space, consisting of pre-selected X-variables using a penalized regression. Sparse PLS (sPLS) by Lê Cao et al. [7] imposes \(L_1\) penalty on the singular value decomposition (SVD) of the covariance matrix of X and Y, resulting in sparse loading vectors for both datasets. Often it is of interest to select a group of features instead of individual features, e.g. features within a gene or a pathway. By so doing, one can improve power by identifying aggregate effects of the selected features [8,9,10]. Liquet et al. extended sPLS to group PLS (gPLS) [10], imposing group-wise \(L_2\) penalties on the loadings of the pre-defined feature groups. It results in group-wise sparsity (i.e., features belonging to the same group will always be selected altogether). In this work, we propose to extend O2PLS to incorporate sparsity, called Group Sparse O2PLS (GO2PLS). GO2PLS obtains sparse solutions by pushing a large number of small non-zero weights (or loading values) to zeros, instead of employing hard thresholding using arbitrary cut-off values. Therefore, GO2PLS constructs joint low-dimensional latent components representing the underlying systems involving both omics levels while taking into account the heterogeneity of different omics data, incorporates external biological information such as known group structure, and performs variable selection by imposing group-wise penalties on the loading vectors in the joint subspaces. For illustration, we apply GO2PLS to datasets from two studies. Firstly, TwinsUK is a population based study [11, 12], where methylation (482K CpG sites) and 22 immunoglobulin G (IgG) glycans were measured. A previous research [13] suggested the presence of an indirect influence of methylation on IgG glycosylation that may in part capture environmental exposures. We integrate the two omics datasets, aiming to identify groups of CpG sites affecting IgG glycosylation. In the CVON-DOSIS case-control study [14], regulomics (histone modification) and transcriptomics data were measured on 13 hypertrophic cardiomyopathy (HCM) patients and 10 controls. Histone modification can have an impact on gene expression. 
Therefore we integrate the two omics datasets and identify a small set of regulatory regions and transcripts explaining this relationship. Moreover, the extreme imbalance in a high-dimensional setting (33K ChIP-seq and 15K RNA-seq vs 23 subjects) poses computational challenges. The resulting selected features are further studied using gene set enrichment analysis [15]. Several possible scenarios containing these characteristics are designed and investigated in an extensive simulation study. This paper is organized as follows. In the methods section, an overview of O2PLS is presented, followed by the formulation of GO2PLS. Via a simulation study, we explore the properties of GO2PLS and compare its performance to other competitive methods. We then apply GO2PLS to integrate methylation and glycomics in the TwinsUK study and regulomics and transcriptomics in the CVON-DOSIS study. We conclude with a discussion and possible directions to further extend the method. TwinsUK datasets Whole blood methylation (using Infinium HumanMethylation450 BeadChip) and IgG glycomics (Ultra Performance Liquid Chromatography) data were measured on 405 independent individuals, among which 392 are females and 13 are males. The age ranges from 18 to 81, with a median of 58. The methylation dataset consists of beta values (ratio of intensities between methylated and unmethylated alleles) at 482563 CpG sites. CpG sites with missing values, on allosomes, or labeled cross-active [16] were removed. We kept only the CpG sites on CpG islands or surrounding areas (shelves and shores) that mapped to genetic regions. Age, sex, batch effect, and cell counts were corrected for using multiple regression. The glycomics dataset contains 22 glycan peaks. These peaks were normalized using median quotient (MQ) normalization [17], log-transformed, and adjusted for batch effect, age, and sex as well. The remaining 126299 CpG sites were then divided into 16892 groups based on their target genes (biological information from the UCSC database [18, 19]). No group information was available for the glycomics data. CVON-DOSIS datasets In the CVON-DOSIS study, regulomics and transcriptomics datasets were measured on the samples taken from the heart tissues of 13 HCM patients and 10 healthy controls. HCM is a heart muscle disease that makes it harder for the heart to pump blood, leading to heart failure. The regulomics data were measured using ChIP-seq, providing counts of histone modification H3K27ac in 33642 regulatory regions. The transcriptomics data contain counts of 15882 transcripts, measured by RNA-seq. The raw counts of regulomics data were normalized with reads per kilobase million (RPKM) to adjust for sequencing depth. Transcriptomics data were normalized with counts per million (CPM) with effective library size (estimated using the Trimmed Mean of M-values (TMM) method in EdgeR R package [20]). Further, both normalized data were log-transformed. Two-way orthogonal partial least squares (O2PLS) Let X and Y be two data matrices with the number of rows equal to the sample size N and the number of columns equal to the dimensionality p and q, respectively. Let the number of joint, X-orthogonal (unrelated to Y) and Y-orthogonal components be \(K\), \(K_x\) and \(K_y\), respectively, where \(K\), \(K_x\) and \(K_y\) are typically much smaller than p and q. The O2PLS model decomposes X and Y as follows: $$\begin{aligned} X&= TW^{\top } + T_\perp P_\perp ^{\top } + E, \\ Y&= UC^{\top } + U_\perp Q_\perp ^{\top } + F. 
\end{aligned}$$ The relation between X and Y is captured through the inner relation between T and U, $$\begin{aligned} U&=TB_T + H, \\ T&= UB_U + \tilde{H}. \end{aligned}$$ In this model, the scores are: \(T\, (N \times K), \, U \,(N \times K), \, T_\perp \, (N \times K_x), \, U_\perp \, (N \times K_y).\) They represent projections of the observed data X and Y to lower-dimensional subspaces. The loadings, \(W\,(p \times K), \, C\,(q \times K), \, P_\perp \,(p \times K_x), \, Q_\perp \,(q \times K_y),\) indicate relative importance of each X and Y variable in forming the corresponding scores. Further, \(E\,(N \times p), \, F\,(N \times q), \, H\,(N \times K), \, \tilde{H}\,(N \times K),\) represent the residual matrices. In O2PLS, estimates of the joint subspaces are obtained by first filtering out the orthogonal variation. The filtered data matrices \({{\tilde{X}}}\) and \({{\tilde{Y}}}\) are constructed as follows: $$\begin{aligned} {{\tilde{X}}}&= ( I_N - T_\perp (T_\perp ^{\top } T_\perp )^{-1} T_\perp ^{\top }) X, \\ {{\tilde{Y}}}&= ( I_N - U_\perp (U_\perp ^{\top } U_\perp )^{-1} U_\perp ^{\top }) Y, \end{aligned}$$ where \(T_\perp, U_\perp\) are estimates for the orthogonal subspaces, and \(I_N\) is identity matrix of size N. For more details see [3]. The joint parts maximize the covariance between the joint scores \(T = {{\tilde{X}}}W\) and \(U = {{\tilde{Y}}}C\). Here, W and C consist of loading vectors (\(w_1, \ldots , w_K\)) and (\(c_1, \ldots , c_K\)), which can be found as the right and left singular vectors of the covariance matrix \({{\tilde{Y}}}^{\top } {{\tilde{X}}}\) [4]. Calculating and storing \({{\tilde{Y}}}^{\top } {{\tilde{X}}}\) of dimension \(q \times p\) can be cumbersome for high dimensional omics data. Therefore we consider the following optimization problem sequentially for components \(k = 1, \ldots , K\): $$\begin{aligned} \max _{\left\Vert c_k\right\Vert _2=1,\left\Vert w_k\right\Vert _2=1} c_k^{\top } {{\tilde{Y}}}_k^{\top } {{\tilde{X}}}_k w_k, \end{aligned}$$ where parameters \(w_k, \, c_k\) are the loading vectors of the k-th joint components and \({{\tilde{X}}}_k, \, {{\tilde{Y}}}_k\) are the filtered data matrices after \(k-1\) times of deflation. This can be solved efficiently using the Nonlinear Iterative Partial Least Squares (NIPALS) [21] algorithm, which starts with random initialization of the X-space score vector t and repeats a sequence of the following steps until convergence: $$\begin{aligned} &\left. 1\right) c_k = \frac{{{\tilde{Y}}}_k^{\top } t}{t^{\top } t}, \quad \left. 2\right) \left\Vert c_k\right\Vert _2 \rightarrow 1, \quad \left. 3\right) u = {{\tilde{Y}}}_k c_k, \\&\left. 4\right) w_k = \frac{{{\tilde{X}}}_k^{\top } u}{u^{\top } u}, \quad \left. 5\right) \left\Vert w_k\right\Vert _2 \rightarrow 1 , \quad \left. 6\right) t = {{\tilde{X}}}_k w_k.\\ \end{aligned}$$ In step 1 and 4, \(Y_k\) and \(X_k\) are projected onto the X-space score vector t and the Y-space score u to get the loading vectors \(c_k\) and \(w_k\). The loading vectors are then unitized (step 2 and 5) and used to calculated the new scores u and t. Convergence of the algorithm is guaranteed. A detailed description and proof of optimality of the O2PLS algorithm can be found in [3, 4]. While standard cross-validation (CV) over a 3-dimensional grid is often used to determine the optimal number of components \(K\), \(K_x\), and \(K_y\), the procedure is not optimal for O2PLS, since there is not a single optimization criterion for all three parameters. 
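Returning to the NIPALS iteration in steps 1)–6) above, a minimal Python sketch is given below. It assumes the orthogonal variation has already been filtered out of \({\tilde{X}}_k\) and \({\tilde{Y}}_k\), starts from a random X-score as described in the text, and is an illustrative sketch rather than the OmicsPLS implementation.

```python
import numpy as np

def nipals_pair(Xk: np.ndarray, Yk: np.ndarray, tol: float = 1e-10, max_iter: int = 500):
    """One pair of joint loadings/scores for filtered matrices Xk (N x p) and Yk (N x q)."""
    rng = np.random.default_rng(0)
    t = rng.normal(size=(Xk.shape[0], 1))    # random start for the X-space score vector
    for _ in range(max_iter):
        c = Yk.T @ t / (t.T @ t)             # step 1: project Yk onto t
        c /= np.linalg.norm(c)               # step 2: normalize c
        u = Yk @ c                           # step 3: Y-space score
        w = Xk.T @ u / (u.T @ u)             # step 4: project Xk onto u
        w /= np.linalg.norm(w)               # step 5: normalize w
        t_new = Xk @ w                       # step 6: new X-space score
        if np.linalg.norm(t_new - t) < tol * np.linalg.norm(t_new):
            t = t_new
            break                            # scores have converged
        t = t_new
    return w, c, t, u
```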
As in [4], we use an alternative CV procedure that first performs a 2-dimensional grid search of \(K_x\) and \(K_y\), with a fixed \(K\), to optimize prediction performance of \(T \rightarrow U\) and \(U \rightarrow T\). Then a sequential search of optimal \(K\) is conducted to minimize the sum of mean squared errors (MSE) of prediction concerning \(X \rightarrow Y\) and \(Y \rightarrow X\). Group sparse O2PLS (GO2PLS) GO2PLS extends O2PLS by introducing a penalty in the NIPALS optimization on the filtered data \({{\tilde{X}}}\) and \({{\tilde{Y}}}\). This penalty encourages sparse, or group-sparse solutions for the joint loading matrices W and C, leading to a subset of the original features corresponding to non-zero loading values being selected in each joint component. Briefly, we introduce an \(L_1\) penalty on each pair of joint loading vectors. The optimization problem for the k-th pair of joint loadings \(c_k, \, w_k\) is: $$\begin{aligned} \max _{\left\Vert c_k\right\Vert _2=1,\left\Vert w_k\right\Vert _2=1} c_k^{\top } {{\tilde{Y}}}_k^{\top } {{\tilde{X}}}_k w_k + \lambda _c \left\Vert c_k\right\Vert _1 + \lambda _w \left\Vert w_k\right\Vert _1, \end{aligned}$$ where \(\lambda _c, \, \lambda _w\) are penalization parameters that regulate the sparsity level. The optimization problem (6) can be solved [22] by iterating over the k-th pair of joint loadings, $$\begin{aligned} c_k = \frac{S({{\tilde{Y}}}_k^{\top } t, \lambda _c)}{\left\Vert S({{\tilde{Y}}}_k^{\top } t, \lambda _c)\right\Vert _2}, \; w_k = \frac{S({{\tilde{X}}}_k^{\top } u, \lambda _w)}{\left\Vert S({{\tilde{X}}}_k^{\top } u, \lambda _w)\right\Vert _2}, \end{aligned}$$ where \(t = {{\tilde{X}}}_k w_k\) and \(u = {{\tilde{Y}}}_k c_k\). Here, \(S(\cdot )\) is the soft thresholding operator: \(S(a, \text {const}) = sgn(a)(|a| - \text {const})_+\) (\(\text {const} \ge 0\) is a non-negative constant, \((x)_+\) equals to x if \(x > 0\) and equals to 0 if \(x \le 0\)). To perform group selection, we impose group-wise \(L_2\) penalty on the joint loading vectors. Let \({{\tilde{X}}}\) and \({{\tilde{Y}}}\) be partitioned into \(J \,(J \le p )\) and \(M \,(M \le q )\) groups, respectively. The submatrices \({{\tilde{X}}}^{(j)}\) and \({{\tilde{Y}}}^{(m)}\) (\(j = 1,\ldots , J; \, m = 1, \ldots , M\)) contain the j-th and m-th group of variables, with corresponding loading vectors \(w^{(j)}\) (of size \(p_j\)) and \(c^{(m)}\) (of size \(q_m\)). The optimization problem for the k-th pair of loading vectors \(c_k = ({{c_k^{(1)}}}^\top , \ldots , {{c_k^{(M)}}^{\top }})^\top\) and \(w_k = ({{w_k^{(1)}}}^\top , \ldots , {{w_k^{(J)}}^{\top }} )^\top\) can be written as follows: $$\begin{aligned} &\min _{c_k^{(m)},w_k^{(j)}} \left \{ -\sum_{j=1}^{J} \sum _{m=1}^{M} {c_{k}^{(m)}}^{\top } {{\tilde{Y}}}_{k}^{{(m)}^\top } {{\tilde{X}}}_k^{(j)} w_k^{(j)}\right.\\ & \quad \left.+\,\lambda _c \sum _{m=1}^{M} \sqrt{q_m} \left\Vert c_k^{(m)}\right\Vert _2 + \lambda _w \sum _{j=1}^{J} \sqrt{p_j} \left\Vert w_k^{(j)}\right\Vert _2\right. \\ & \quad \left. +\, \phi_c \left( \sum _{m=1}^{M} \left\Vert c_k^{(m)}\right\Vert _2^2 -1 \right) + \phi _w \left( \sum _{j=1}^{J} \left\Vert w_k^{(j)}\right\Vert _2^2 -1 \right) \right \}, \end{aligned}$$ where the last two terms are reformulations of the unit norm constraints on \(c_k\) and \(w_k\), with \(\phi _c\) and \(\phi _w\) being the Lagrangian multipliers. 
The effective penalization parameters on each group (\(\lambda _c, \, \lambda _w\)) are adjusted by the square root of the group size to correct for the fact that larger groups are more likely to be selected. This optimization problem can be solved using block coordinate descent (for details, see Additional file 1). The solution takes the form: $$\begin{aligned} &c_k^{(m)} = \frac{\left( \left\Vert {{\tilde{Y}}}_k^{{(m)}^{\top }} t\right\Vert _2-\sqrt{q_m}\lambda_c\right) _+}{2\phi _c \left\Vert {{\tilde{Y}}}_k^{{(m)}^{\top }} t\right\Vert _2}\quad {{\tilde{Y}}}_k^{{(m)}^{\top }} t, \\ &w_k^{(j)} = \frac{\left( \left\Vert {{\tilde{X}}}_k^{{(j)}^{\top }} u\right\Vert _2-\sqrt{p_j}\lambda _w\right) _+}{2\phi _w \left\Vert {{\tilde{X}}}_k^{{(j)}^{\top }} u\right\Vert _2}\quad {{\tilde{X}}}_k^{{(j)}^{\top }} u. \end{aligned}$$ The \({{\tilde{X}}}\)-variables within the j-th group will have non-zero weights if \(\left\Vert {{\tilde{X}}}_k^{{(j)}^{\top }}u\right\Vert _2\) (i.e., the contribution of the whole group to the covariance) is larger than the size-adjusted penalization parameter \(\sqrt{p_j}\lambda _w\). In the same way, the \({{\tilde{Y}}}\)-variables within the m-th group will be assigned non-zero loading values if \(\left\Vert {{\tilde{Y}}}_k^{{(m)}^{\top }} t\right\Vert _2 > \sqrt{q_m}\lambda _c\). Note that when all the groups have size 1, the summation of group-wise \(L_2\) penalties is equivalent to an \(L_1\) penalty on the unpartitioned loading vector and individual features will be selected (i.e., (8) reduces to (6)). In this specific case, to avoid confusion, we call the method Sparse O2PLS (SO2PLS). When the penalization parameters \(\lambda _w = \lambda _c = 0\), GO2PLS reduces to O2PLS. If the number of orthogonal components \(K_x = K_y = 0\), GO2PLS, SO2PLS, and O2PLS are equivalent to gPLS, sPLS, and PLS, respectively. The k-th pair of joint loadings is orthogonalized with respect to the previous \(k-1\) loading vectors. Let \(\pi\) be an index set for selected variables in \(w_k\). The orthogonalization is achieved by first projecting \(w_k^{(\pi )}\) onto \(span\{ w_1^{(\pi )}, \ldots , w_{k-1}^{(\pi )} \}\), and then subtracting this projection from \(w_k^{(\pi )}\). When the previous \(k-1\) components do not select any variable in \(\pi\), \(span\{ w_1^{(\pi )}, \ldots , w_{k-1}^{(\pi )} \}\) is actually a zero subspace and no orthogonalization is needed. To determine the optimal sparsity level, it is more convenient and intuitive to focus on the number of selected \({{\tilde{X}}}\), \({{\tilde{Y}}}\) groups (denoted \(h_x\), \(h_y\), respectively). If prior biological knowledge does not already specify certain \(h_x\) and \(h_y\), cross-validation can be used to search for combinations of \(h_x\) and \(h_y\) that maximize the covariance between each pair of estimated joint components \({\mathrm{Cov}}({\hat{t}}, {\hat{u}})\). Similar to LASSO [23], the "one-standard-error-rule" [24] can be applied to obtain a more stable CV result. The complete GO2PLS algorithm combines the orthogonal filtering step, the NIPALS iterations, and the (group-wise) soft-thresholding updates described above. Simulation Study We evaluate the performance of GO2PLS in two scenarios. First, we investigate the ability to select the relevant groups under various scenarios, focusing on the joint subspace, where the group selection takes place. Second, we compare the performance of GO2PLS and SO2PLS with other methods: O2PLS, PLS, sPLS, and gPLS. We investigate joint score estimation, joint loading estimation, and feature selection performances.
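Before turning to the scenarios, the group-wise soft-thresholding update derived in the previous section (which reduces to element-wise soft-thresholding when every group has size one) can be sketched as follows. This is an illustrative Python sketch, not the OmicsPLS implementation; the integer group labels and the final unit-norm rescaling, which plays the role of the Lagrange multipliers \(\phi\), are assumptions made for the example.

```python
import numpy as np

def group_soft_threshold(z: np.ndarray, groups: np.ndarray, lam: float) -> np.ndarray:
    """Group-wise soft-thresholding of a raw loading direction z (e.g. z = Xk^T u).

    groups : integer label per variable defining the pre-specified partition.
    lam    : penalty parameter; each group's threshold is scaled by sqrt(group size).
    """
    w = np.zeros_like(z, dtype=float)
    for g in np.unique(groups):
        idx = np.where(groups == g)[0]
        block = z[idx]
        norm = np.linalg.norm(block)
        gain = norm - np.sqrt(len(idx)) * lam    # a group survives only if its norm exceeds the threshold
        if gain > 0 and norm > 0:
            w[idx] = (gain / norm) * block       # shrink the whole group proportionally
    nrm = np.linalg.norm(w)
    return w / nrm if nrm > 0 else w             # rescale to unit norm as required by the constraint

# Size-one groups recover ordinary element-wise soft-thresholding:
z = np.array([3.0, -0.5, 1.2, 0.1])
w_sparse = group_soft_threshold(z, groups=np.arange(z.size), lam=1.0)
```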
In the first scenario, we set the number of variables in X and Y to be \(p = 5000\) and \(q = 20\), respectively. There are 10 groups of variables in X with non-zero loading values. The first 5 groups have group sizes of 100, 50, 20, 5, and 1, respectively, in which all the variables have loading values equal to 1. The remaining 5 groups are of size 10, with loading values of variables equal to 5. Note that large loading values are assigned to the latter 5 groups to make the detection of the first 5 groups more difficult. The remaining variables have zero loading values and are divided into groups of size 10. All the Y-variables have the same loading values and are not grouped. The sample size N is set to 30. We simulate both data matrices with 1 joint component (T and U from Equation 1 are both standard normally distributed and have correlation 1). We perform 1000 simulation runs and record the number of the runs GO2PLS selected relevant groups; we compute the proportion of each truly relevant group (with non-zero loadings) being selected across the simulation runs (number of times being selected divided by 1000). The group importance measurement \({\left\Vert {X^{(j)}}^{\top } U\right\Vert _2}/ {\sqrt{p_j}}\), that determines whether a group is selected or not is recorded for the first 5 groups (with loading value 1) to investigate the stability of the selection procedure. In the second scenario, we vary the sample size N from 30 to 600, and set \(p = 20{,}000\) and \(q = 10{,}000\), mimicking the dimensionality of the CVON-DOSIS datasets. Both X- and Y- variables are evenly divided into 1000 groups. For each joint component, we select 50 relevant groups and assign non-zero loadings to the variables contained in them. Within each group, variables have the same loading values: 1 for the first group, 2 for the second,..., and 50 for the last relevant group. We set the number of joint components \(K = 2\) and the number of orthogonal components \(K_x = K_y = 1\). The scores \(T, T_\perp , U, U_\perp\) from Equation 1 are generated from normal distributions with zero mean. The relationship between the joint scores is represented by \(U = T + H\), where H accounts for \(20\%\) of the variation in U. The noise matrices E, F are generated from normal distributions with zero mean and variance such that the variance of the noise matrix accounts for a proportion \(\alpha\) (\(0< \alpha < 1\)) of the variance of the data matrix (i.e., \(\alpha = \mathrm{Var}(E)/\mathrm{Var}(X) = \mathrm{Var}(F)/\mathrm{Var}(Y)\)). The ratio of the variance of the orthogonal components to the variance of the joint components (\(\sigma ^2_{T_{\perp }} / \sigma _T^2\)), and noise level \(\alpha\) are varied. For evaluating the accuracy of the joint score estimation, we computed \(R^2_{\widehat{T}T} = 1 - \sum (\widehat{T} - T)^2/\sum T^2\) and \(R^2_{\widehat{T}\widehat{U}} = 1 - \sum (\widehat{U} - \widehat{T})^2/\sum \widehat{U}^2\), which quantify how well the true parameter T and the estimated Y-joint component \(\widehat{U}\) can be explained by the estimated X-joint component \(\widehat{T}\). The performance of feature selection and the accuracy of estimated loadings are evaluated by true positive rate (TPR = TP/(TP+FN), where TP = True Positive, FN = False Negative) and \(W^{\top } \widehat{W}\), which represents the cosine of the angle between the estimated loading vector and the true one. The performances of all methods are evaluated on an independent test dataset of size 1000. 
For each setting, 500 replications are generated. An overview of scenario settings is presented in Tables 1, 2. To make a clearer comparison of the behavior across all the methods, we use the optimum values for the tuning parameters (number of components and number of relevant variables or groups). Table 1 Settings of Scenario 1 to study the performance of selecting relevant groups Table 2 Settings of Scenario 2 to compare the performances regarding joint score estimation, joint loading estimation, and feature selection Results of simulation study Scenario 1 Simulation Scenario 1: Selection proportion of relevant groups with different sizes under varying noise. The proportion for larger groups is higher at low to moderate (\(\alpha < 0.7\)) noise levels, and shows robustness against increasing noise Simulation Scenario 1: Density plot of estimated group importance measurement \(\left\Vert {X^{(j)}}^{\top } U\right\Vert _2/{\sqrt{p_j}}\) for each group size under 3 different noise levels. The vertical dotted red line is the average threshold. When the measurement of a group is larger than the threshold, the group is selected. The total area on the right side of the threshold under each density curve equals to the selection proportion for the corresponding group. The less the density curve spreads out, the more stable is the estimate Figure 1 shows the selection proportion for each relevant group under each noise level. Compared to smaller groups, the proportion for larger groups is higher at low to moderate (\(\alpha < 0.7\)) noise levels, and shows robustness against increasing noise. When the noise level is very high (\(\alpha > 0.8\)), the method loses power to detect relevant group of any size, particularly, of larger size. Figure 2 shows the density of the group importance measurement \(\left\Vert {X^{(j)}}^{\top } U\right\Vert _2 /{\sqrt{p_j}}\) for the first 5 relevant groups with different group sizes under 3 different noise levels. The vertical dotted lines indicate the average threshold given the correct number of relevant groups. Since a group will be selected if exceeds the threshold, the total area on the right side of the threshold under each density curve equals the selection proportion for the corresponding group. The measurement for larger relevant group shows higher precision at all noise levels. The threshold increases along with the noise. Simulation Scenario 2: comparison of joint score estimation performance, under varying relative orthogonal signal strength (top row), and varying sample size (bottom row). On the Y-axis, \(R^2_{\widehat{T}T}\) (left) and \(R^2_{\widehat{T}\widehat{U}}\) (right) are the coefficient of determination of regressing T on \(\widehat{T}\), and \(\widehat{U}\) on \(\widehat{T}\), respectively, quantifying the joint score estimation performances. Boxes show the results of 500 repetition The performance of the joint score estimation is compared focusing on the difference between methods with orthogonal parts (GO2PLS, SO2PLS, O2PLS) and their counterparts without the "O2" filtering (gPLS, sPLS, PLS). The top row of Fig. 3 shows the performance measured by \(R^2_{\widehat{T}T}\) & \(R^2_{\widehat{T}\widehat{U}}\) under \(N = 30\), \(\alpha = 0.1\) and varying relative orthogonal signal strength from one fifth to five times of the joint signal. In the left panel, \(R^2_{\widehat{T}T}\) of the various methods is depicted, representing how well the joint component \(\hat{T}\) captured the true underlying T. 
Overall, penalized methods performed better than non-penalized ones, especially when the orthogonal variation is relatively small. PLS performed poorly compared to O2PLS, when the orthogonal variation exceeds the joint variation. As the orthogonal variation further increases, performances of sPLS and gPLS deteriorated, while SO2PLS and GO2PLS were less affected. In the right panel, \(R^2_{\widehat{T}\widehat{U}}\) is presented, an estimate of the true parameters \(R^2_{TU}\), capturing correlation of T and U. Across different settings, O2PLS-based methods performed better, especially when the orthogonal variation is large. The bottom row of Fig. 3 shows the score estimation performance under fixed relative orthogonal signal strength of 1, \(\alpha =0.1\), and varying sample size N from 30 to 600. Penalized methods performed better compared to non-penalized methods in general, when the sample size is small. Regardless of the sample size, O2PLS-based methods outperformed PLS-based methods. Simulation Scenario 2: comparison of feature selection and joint loading estimation performance, under varying noise level (top row), and varying sample size (bottom row). On the Y-axis are the True Positive Rate (left) and \(W^{\top } \widehat{W}\) (right), which is the cosine of the angle between the estimated loading vector \(\widehat{W}\) and the true one W. Boxes show the results of 500 repetition Lastly, we present the results of GO2PLS, SO2PLS, and O2PLS with regard to feature selection and estimation of joint loadings. Results of PLS-based methods are not included since the performances of gPLS, sPLS, and PLS in this regard are very similar to GO2PLS, SO2PLS, and O2PLS, respectively. In Fig. 4, the top row shows the TPR and \(W^{\top } \widehat{W}\) under \(N = 30\) and varying noise levels \(\alpha\) from low to high. At all noise levels, GO2PLS had higher TPR than SO2PLS and O2PLS, and performed robustly against increasing noise. Regarding \(W^{\top } \widehat{W}\), GO2PLS outperformed the other two as well. In the bottom row, when increasing sample size at a fixed noise level of 0.5, the variance appeared to decrease and the performances of all the methods converged. Overall, GO2PLS outperformed others. Application to data We demonstrate SO2PLS and GO2PLS on datasets from two distinct studies. In the TwinsUK study, our aim is to integrate methylation and glycomics data and identify important groups of CpG sites underlying glycosylation. In the CVON-DOSIS study, we integrate regulomics and transcriptomics data and select a subset of genes and regions that drive their relationship. TwinsUK study We performed GO2PLS on the data with 1 joint, no methylation-orthogonal, and 3 glycomics-orthogonal components based on 5-fold cross-validation. We set the sparsity parameters to select the top 100 groups in the methylation and kept all the 22 glycan variables. The selected CpG groups from GO2PLS were mapped to their targeted genes for interpretation. We performed gene set enrichment analyses on the selected genes using the ToppGene Suite [25]. The results appeared to be related to immune response. We listed the most significant molecular function, biological process, and pathway in Table 3 (the full list of significant results can be found in Additional file 2). 
An extra analysis was performed using another grouping strategy, where we grouped 55531 CpG sites that map to the promoter region (0-1500 bases upstream of the transcriptional start site (TSS)) of a gene to 14491 groups based on their targeted genes. We applied GO2PLS and selected 100 groups. Note that the sizes of these groups became smaller since many CpG sites in gene bodies are excluded. Enrichment analysis did not result in significant results. Table 3 TwinsUK study: top results of gene set enrichment analysis CVON-DOSIS study We applied SO2PLS on the regulomics and transcriptomics datasets, with 2 joint and 1 orthogonal components for each omics dataset. In each pair of the joint components, 1000 regulomics and 500 transcriptomics variables were selected. We then further identified the genes corresponding to the promoter regions where the selected 1000 histone modification locates (using ± 10K window from the transcription start site of the gene). These genes are of interest since they are likely to be related to epigenetic regulation of gene expression. Genes corresponding to the selected transcripts were also identified. These gene sets identified from each joint component of the two omics data were investigated separately using gene set enrichment analysis. The top results were listed in Table 4. The Gene Ontology (GO) enrichment analysis of the selected genes and regions showed terms related to HCM that were also found previously [28]. Due to the presence of the case-control status in both omics levels, we expect the joint components related to the disease. Plotting the joint scores of the two datasets showed a separation between HCM cases and controls (Fig. 5). For a comparison of score plots of PCA, PLS, O2PLS, and SO2PLS, please see Additional file 3. CVON-DOSIS study: SO2PLS joint score plots of regulomics (left) and transcriptomics (right). HCM patients and controls were plotted in different colors. Ellipses are the 95% confidence regions of each group Table 4 CVON-DOSIS study: Gene set enrichment analysis results Statistical integration of two omics datasets is becoming increasingly popular to gain insight into underlying biological systems. O2PLS is a method that integrates two heterogeneous datasets and takes into account omic-specific variation. The resulting joint and orthogonal components are linear combinations of all variables, making interpretation difficult. To introduce sparsity and identify relevant groups, GO2PLS incorporates biological information on group structures to perform group selection in the joint subspace. Depending on the group size, such an approach may also lead to a higher selection probability of relevant features. We performed an extensive simulation study and showed that O2PLS-based methods generally outperformed PLS-based methods regarding joint score estimation when orthogonal variation was present in the data. Since PLS does not take into account orthogonal parts, the joint components also include part of the orthogonal variation. Further, when the sample size was small or the noise level was high, penalized methods appeared to be much less prone to overfitting than non-penalized methods. This suggests that results based on GO2PLS are likely to be generalizable when applied to new datasets. Concerning feature selection, adding external group information led to higher TPR, and larger groups of relevant features had a higher proportion of being detected under a moderate noise level. 
We then applied GO2PLS to the TwinsUK study, where we selected 100 target genes comprising of CpG sites that are most related to IgG glycosylation. The results of the enrichment analysis on the selected genes showed GO-terms involving the immune system in which the IgG glycans play important roles. In the CVON-DOSIS study, we integrated regulomics and transcriptomics and identified 1000 regulatory regions and 500 transcripts, and mapped them to genes. Further analysis of the selected gene sets showed enrichment for terms related to heart muscle diseases. Moreover, the implementation of GO2PLS is computationally fast and memory efficient. It relies on an algorithm based on NIPALS that does not store large matrices of size \(p \times q\) when performing the group-penalized optimization. A regular laptop (8G RAM, quad-core 2.6 GHz) was able to run GO2PLS on omics data from both case studies. The group information should be chosen together with domain experts based on the research question and biological knowledge. For example, in our TwinsUK data application, we aimed to identify the genes comprising of CpG sites, rather than the individual CpG sites. Therefore, we grouped CpG sites in the same genetic region. Furthermore, the biological knowledge that close-by CpG sites tend to function together supported the choice of grouping. Different grouping information leads to a changed definition of groups, consequently the selected groups will have a different interpretation. In the same example of the TwinsUK study, extra analysis using smaller groups led to no significant results in the enrichment analysis, supposedly due to weaker aggregated group effects. When group information is not available, or the research goal is to identify individual features (e.g., in our CVON-DOSIS data application), SO2PLS can be used. In the CVON-DOSIS study, Plotting the first two joint components showed two distinct classes corresponding to the case-control status. This might be expected since the analysis was conditional on case-control status, yielding a correlation between the two omics datasets. This phenomenon is well known in regression analysis of secondary phenotypes [29], but not well studied in PLS type of methods. This is a topic of future research. Often omics data are collected to study their relationship with an outcome variable or to predict an outcome variable. To this end, our approach has to be extended to incorporate the outcome variable. Such an approach might also lead to a more sparse solution since the selected features have to be correlated among the two omics datasets and the outcome variable. Further extensions of GO2PLS are to incorporate more than two omics datasets to represent the actual biological system even better. Finally, it is possible to extend the GO2PLS algorithm to a probabilistic model. Extending latent variable methods to probabilistic models is not new. PCA was extended to Probabilistic PCA in [30], and Probabilistic PLS (PPLS) [31] was proposed to provide a probabilistic framework for PLS. It has been shown that the probabilistic counterpart has a lower bias in estimation and is robust to non-normally distributed variables [31]. More importantly, the probabilistic model will allow statistical inference, making it possible to interpret the relevance and importance of features in the population, and facilitating follow-up studies. These extensions of GO2PLS will be suited for various studies with more complicated designs. 
In this article, we proposed GO2PLS to integrate two omics datasets by estimating joint latent components. GO2PLS takes into account heterogeneity between different omics levels by including orthogonal components for each dataset. The method utilizes known group information among the features to select relevant groups of features, by imposing group-wise penalties in the joint subspace. Alternatively, the method can also select features at the individual level. This flexibility facilitates the investigation of different research questions. Our simulation study showed that GO2PLS was robust against noise and outperformed competing methods in terms of accuracy of joint score estimation, joint loading estimation, and feature selection. We applied GO2PLS to datasets from two studies with distinct designs and showed that the results were biologically interpretable. To conclude, GO2PLS is a robust and flexible method for integrating two omics datasets and selecting important groups of features.
The R scripts and functions for GO2PLS are publicly available in the OmicsPLS R package (https://cran.r-project.org/package=OmicsPLS) and can be installed in R via install.packages("OmicsPLS").
Because of the sensitive nature of the data collected for the CVON-DOSIS study, requests to access the dataset from qualified researchers trained in human subject confidentiality protocols may be sent to the corresponding authors. Individual-level methylation and glycomics data from TwinsUK are not permitted to be shared or deposited due to the original consent given at the time of data collection. However, access to methylation and glycomics data can be applied for through the TwinsUK data access committee. For information on access and how to apply, visit http://www.twinsuk.ac.uk/data-access/submission-procedure/.
GO2PLS: Group Sparse Two-way Orthogonal Partial Least Squares
SO2PLS: Sparse Two-way Orthogonal Partial Least Squares
PCA: Principal Component Analysis
PLS: Partial Least Squares
O2PLS: Two-way Orthogonal Partial Least Squares
SPLS: Sparse Partial Least Squares (Chun and Keleş)
sPLS: Sparse Partial Least Squares (Lê Cao et al.)
gPLS: Group Partial Least Squares
SVD: Singular Value Decomposition
NIPALS: Nonlinear Iterative Partial Least Squares
CV: Cross-Validation
MSE: Mean Squared Error
TPR: True Positive Rate
IgG: Immunoglobulin G
HCM: Hypertrophic Cardiomyopathy
RPKM: Reads per Kilobase Million
CPM: Counts per Million
TMM: Trimmed Mean of M-values
GO: Gene Ontology
Boulesteix A-L, Strimmer K. Partial least squares: A versatile tool for the analysis of high-dimensional genomic data. Briefings Bioinform. 2007;8(1):32–44. https://doi.org/10.1093/bib/bbl016. Wold S, Ruhe A, Wold H, Dunn III WJ. The Collinearity Problem in Linear Regression. The Partial Least Squares (PLS) Approach to Generalized Inverses. SIAM J Sci Stat Comput. 1984;5(3):735–43. https://doi.org/10.1137/0905052 arXiv:1308.0863v1 Trygg J, Wold S. O2-PLS, a two-block (X-Y) latent variable regression (LVR) method with an integral OSC filter. J Chemom. 2003;17(1):53–64. https://doi.org/10.1002/cem.775. el Bouhaddani S, Houwing-Duistermaat J, Salo P, Perola M, Jongbloed G, Uh HW. Evaluation of O2PLS in Omics data integration. BMC Bioinform. 2016;17(2):1–20. https://doi.org/10.1186/s12859-015-0854-z. Jolliffe IT, Trendafilov NT, Uddin M. A modified principal component technique based on the LASSO. J Comput Graph Stat. 2003;12(3):531–47.
https://doi.org/10.1198/1061860032148, arXiv:1205.0121v2 Chun, H., Keleş, S.: Sparse partial least squares regression for simultaneous dimension reduction and variable selection. J R Stat Soc Ser B: Stat Methodol 72(1), 3–25 (2010). https://doi.org/10.1111/j.1467-9868.2009.00723.x Lê Cao, K.A., Rossouw, D., Robert-Granié, C., Besse, P. A sparse PLS for variable selection when integrating omics data. Statist Appl Genet Mol Biol. 7(1) (2008). https://doi.org/10.2202/1544-6115.1390 Tyekucheva S, Marchionni L, Karchin R, Parmigiani G. Integrating diverse genomic data using gene sets. Genome Biology. 2011;12(10):105. https://doi.org/10.1186/gb-2011-12-10-r105. Yuan M, Lin Y. Model selection and estimation in regression with grouped variables. J R Stat Soc Ser B: Stat Methodol. 2006;68(1):49–67. https://doi.org/10.1111/j.1467-9868.2005.00532.x. Liquet B, De Micheaux PL, Hejblum BP, Thiébaut R. Group and sparse group partial least square approaches applied in genomics context. Bioinformatics. 2016;32(1):35–42. https://doi.org/10.1093/bioinformatics/btv535. Spector TD, Williams FMK. The UK Adult Twin Registry (TwinsUK). Twin Res Hum Genet. 2006;9(6):899–906. https://doi.org/10.1375/twin.9.6.899. Moayyeri A, Hammond CJ, Hart DJ, Spector TD. The UK adult twin registry (twinsUK resource). Twin Res Hum Genet. 2013;16(1):144–9. https://doi.org/10.1017/thg.2012.89. Wahl A, Kasela S, Carnero-Montoro E, van Iterson M, Štambuk J, Sharma S, van den Akker E, Klaric L, Benedetti E, Razdorov G, Trbojević-Akmačić I, Vučković F, Ugrina I, Beekman M, Deelen J, van Heemst D, Heijmans BT, Consortium BIOS, Wuhrer M, Plomp R, Keser T, Šimurina M, Pavić T, Gudelj I, Krištić J, Grallert H, Kunze S, Peters A, Bell JT, Spector TD, Milani L, Slagboom PE, Lauc G, Gieger C. IgG glycosylation and DNA methylation are interconnected with smoking. Biochimica et Biophysica Acta (BBA) - General Subjects 1862(3), 637–648 (2018). https://doi.org/10.1016/J.BBAGEN.2017.10.012 CVON-DOSIS – Cardiovascular Research Consortium. http://cvon-dosis.nl/. Accessed 18 Nov 2020 Subramanian A, Tamayo P, Mootha VK, Mukherjee S, Ebert BL, Gillette MA, Paulovich A, Pomeroy SL, Golub TR, Lander ES, Mesirov JP. Gene set enrichment analysis: A knowledge-based approach for interpreting genome-wide expression profiles. Proc Natl Acad Sci USA. 2005;102(43):15545–50. https://doi.org/10.1073/pnas.0506580102. Chen Y-aA, Lemire M, Choufani S, Butcher DT, Grafodatskaya D, Zanke BW, Gallinger S, Hudson TJ, Weksberg R. Discovery of cross-reactive probes and polymorphic CpGs in the Illumina Infinium HumanMethylation450 microarray. Epigenetics. 2013;8(2):203–9. https://doi.org/10.4161/epi.23470. Uh H-W, Klarić L, Ugrina I, Lauc G, Smilde AK, Houwing-Duistermaat JJ. Choosing proper normalization is essential for discovery of sparse glycan biomarkers. Mol Omics. 2020. https://doi.org/10.1039/c9mo00174c. Kent WJ, Sugnet CW, Furey TS, Roskin KM, Pringle TH, Zahler AM, Haussler aD. The human genome browser at UCSC. Genome Res. 2002;12(6):996–1006. https://doi.org/10.1101/gr.229102. UCSC Genome Browser Home. https://genome.ucsc.edu/. Accessed 19 Nov 2020 Robinson MD, Oshlack A. A scaling normalization method for differential expression analysis of RNA-seq data. Tech Rep (2010). http://genomebiology.com/2010/11/3/R25 Wold H. Nonlinear Iterative Partial Least Squares (NIPALS) Modelling: Some Current Developments. In: Multivariate Analysis–III, pp. 383–407 (1973). https://doi.org/10.1016/b978-0-12-426653-7.50032-6. 
https://www.sciencedirect.com/science/article/pii/B9780124266537500326 Witten DM, Tibshirani R, Hastie T. A penalized matrix decomposition, with applications to sparse principal components and canonical correlation analysis. Biostatistics. 2009;10(3):515–34. https://doi.org/10.1093/biostatistics/kxp008. Tibshirani R. Regression Shrinkage and Selection Via the Lasso. J R Stat Soc: Ser B (Methodological). 1996;58(1):267–88. https://doi.org/10.1111/j.2517-6161.1996.tb02080.x. Hastie T, Tibshirani R, Wainwright M. Statistical learning with sparsity: the lasso and generalizations. Stat Learn Spars: Lasso General. 2015;84(1):1–337. https://doi.org/10.1201/b18401. Chen J, Bardes EE, Aronow BJ, Jegga AG. ToppGene Suite for gene list enrichment analysis and candidate gene prioritization. Nucleic Acids Res. 2009;37(SUPPL.2). https://doi.org/10.1093/nar/gkp427. Benjamini Y, Hochberg Y. Controlling the False Discovery Rate: A Practical and Powerful Approach to Multiple Testing. J R Stat Soc: Ser B (Methodological). 1995;57(1):289–300. https://doi.org/10.1111/j.2517-6161.1995.tb02031.x. Storey JD. A direct approach to false discovery rates. Technical Report. 2002;3. https://doi.org/10.1111/1467-9868.00346. Gao J, Collyer J, Wang M, Sun F, Xu F. Genetic dissection of hypertrophic cardiomyopathy with myocardial rna-seq. Int J Mol Sci. 2020;21(9). https://doi.org/10.3390/ijms21093040 Tissier R, Tsonaka R, Mooijaart SP, Slagboom E, Houwing-Duistermaat JJ. Secondary phenotype analysis in ascertained family designs: application to the Leiden longevity study. Stat Med. 2017;36(14):2288–301. https://doi.org/10.1002/sim.7281. Bishop CM, Tipping ME. Probabilistic Principal Component Analysis. J R Stat Soc. Ser B 61(iii), 611–622 (1999). https://doi.org/10.1111/1467-9868.00196 el Bouhaddani S, Uh HW, Hayward C, Jongbloed G, Houwing-Duistermaat J. Probabilistic partial least squares model: Identifiability, estimation and application. J Multivar Anal. 2018;167:331–46. https://doi.org/10.1016/j.jmva.2018.05.009. arXiv:1706.03597 The authors would like to thank M. Harakalova, and M. Mokry from the Dept. of Cardiology, UMC Utrecht for providing the CVON-DOSIS data and discussion on the analysis of the CVON-DOSIS datasets. We thank M. Michels and J. van der Velden for providing the HCM tissues, the biobank of UMC Utrecht, the biobank of the Washington University School of Medicine, and the Sydney Heart Bank for providing non-failing donor tissue. This work has received support from the EU/EFPIA Innovative Medicines Initiative 2 Joint Undertaking BigData@Heart grant (116074). TwinsUK is funded by the Wellcome Trust, Medical Research Council, European Union, Chronic Disease Research Foundation (CDRF), Zoe Global Ltd and the National Institute for Health Research (NIHR)-funded BioResource, Clinical Research Facility and Biomedical Research Centre based at Guy's and St Thomas' NHS Foundation Trust in partnership with King's College London. The research leading to these results has received funding and support from the European Union's Horizon 2020 research and innovation programme IMforFUTURE under H2020-MSCA-ITN grant agreement number 721815, from the EU/EFPIA Innovative Medicines Initiative 2 Joint Undertaking BigData@Heart grant (116074), and from the ERA-Net for Research Programmes on Rare Diseases (E-rare 3 – MSAomics project). The funding bodies did not play any role in the design of the study and collection, analysis, and interpretation of data and in writing the manuscript. 
Department of Data Science and Biostatistics, UMC Utrecht, div. Julius Centre, Huispost Str. 6.131, 3508 GA, Utrecht, The Netherlands: Zhujie Gu, Said el Bouhaddani, Jeanine Houwing-Duistermaat & Hae-Won Uh
Department of Cardiology, UMC Utrecht, Huispost Str. 6.131, 3508 GA, Utrecht, The Netherlands: Jiayi Pei
Department of Statistics, University of Leeds, LS2 9JT, Leeds, UK: Jeanine Houwing-Duistermaat
Department of Statistical Sciences, University of Bologna, Bologna, Italy
Zhujie Gu, Said el Bouhaddani, Hae-Won Uh
ZG performed the mathematical work, the simulation study, and the data analysis, and wrote the manuscript. JH and H-WU provided the underlying idea of the methods. SB supported the programming. All authors contributed to the discussion of the methods, the simulation, and the data analysis. All authors read and approved the final manuscript. Correspondence to Zhujie Gu.
TwinsUK study: Ethical approval was granted by the National Research Ethics Service London-Westminster, the St Thomas' Hospital Research Ethics Committee (EC04/015 and 07/H0802/84). All research participants signed informed consent prior to taking part in any research activities. CVON-DOSIS study: the study protocol was approved by the local ethics committee of the Erasmus MC (2010-409), the Biobank Research Ethics Committee of University Medical Center Utrecht (protocol number 12/387), and the Washington University School of Medicine Ethics Committee (Institutional Review Board). Informed consent was obtained from each patient prior to surgery or was waived by the ethics committee when acquiring informed consent was not possible due to the death of the donor.
Additional file 1: The details of solving the optimization problem (8).
Additional file 2: The full lists of significant results of the gene set enrichment analyses in the TwinsUK study and the CVON-DOSIS study.
Additional file 3: Additional analysis of the CVON-DOSIS datasets, where score plots of PCA, PLS, O2PLS, and SO2PLS are shown and compared.
Gu, Z., el Bouhaddani, S., Pei, J. et al. Statistical integration of two omics datasets using GO2PLS. BMC Bioinformatics 22, 131 (2021). https://doi.org/10.1186/s12859-021-03958-3
Accepted: 06 January 2021
Keywords: Integration of omics data; Dimension reduction; O2PLS
Associated content: Novel computational methods for analysis of biological systems
Colloquia and Special Lectures Committee Brad Cox (Chair) Gia-Wei Chern P.Q. Hung Israel Klich Seunghun Lee Peter Schauss Weekly Talks: Special Colloquium Physics Building, Room 204 Note special date. "Generating New Physics from Known Particles: Baryogenesis and Dark Matter from Mesons" Gilly Elor , University of Washington [Host: Peter Arnold] I will address two of the most fundamental questions about our Universe -- "What is the Universe made of?" and "How did complex structures come to exist?". These translate into understanding the origins of dark matter and a mechanism to generate the asymmetry of matter over anti-matter in the early Universe. I will put these problems in context and then present a novel, testable, solution to both mysteries in the same framework. The key idea is to rely on Standard Model particles called Beauty (B) Mesons which decay into dark matter particles. This mechanism has multiple testable signals. Some experimental searches are already under development, the results of which could yield the first hints of how nature chose to address these profound questions. "What do we learn about gravity & nuclear physics from gravitational waves?" Kent Yagi , University of Virginia - Department of Physics [Host: Bob Jones] A hundred years after the prediction by Einstein, gravitational waves were directly detected for the first time in 2015 by LIGO, which marked the dawn of gravitational-wave astronomy. Gravitational waves are sourced by astrophysical compact objects, such as black holes and neutron stars. Due to their extremely large gravitational field and compactness, they offer us natural testbeds to probe strong-field gravity and dense matter physics. In this talk, I first give an overview of the current status of gravitational-wave observations. Next, I explain how well one can test General Relativity, constrain the equation of state of nuclear matter and measure nuclear parameters with gravitational waves. I also comment on how one can combine gravitational-wave information with the recent measurement of a neutron star radius by an X-ray payload NICER to further probe nuclear physics. "Quantum Open Systems and Field Theory" Duff Neill , Los Alamos National Lab When learning about the properties of a quantum mechanical system, for instance, the energy levels of its bound states, it is useful to think of the system as closed and isolated from any environment, though we know in any laboratory setting, all systems eventually will interact with an environment. However, we can often engineer such interactions to be weak, short-ranged, and controllable, so that the isolated approximation is a good one. I will argue that in many physically relevant field theories, the long-time observables or states of the theory can only be defined in the context of a quantum open system, where we take into account the interactions between the system and the environment continually in the evolution of the system. This is because excitations of the field theory will inevitably create their own environment, that is, states we must trace over. Resumming these interactions with the self-created environment is necessary to give a convergent expansion for observables over all of phase-space. "The holographic view on transport in strongly interacting plasmas" Saso Grozdanov , MIT Microscopic quantum interactions between elementary particles control transport in macroscopic states of matter, such as in fluids and plasmas. 
In numerous states of interest, these microscopic interactions are strong, including in water, among electrons in graphene and in quark-gluon plasma — a state of nuclear matter that filled the early Universe and that is currently being recreated in particle colliders. While macroscopic theories describing the dynamics of such states, in particular, hydrodynamics (of fluids) and magnetohydrodynamics (of magnetized plasmas) have been partially understood, a full description of transport also requires a certain microscopic knowledge of its underlying quantum physics. After more than a century of striking advance in quantum theories, our theoretical understanding of these microscopic processes remains mostly limited to states with weak interactions. Recently, however, string theory also enabled explorations of strongly interacting states through the mathematical statement of holographic duality, which translates otherwise intractable problems into simpler analyses of black holes and gravitational waves. In my talk, I will first discuss new aspects of the macroscopic theory of hydrodynamics, focusing on the properties of the infinite series of higher-order corrections to the infamous Navier-Stokes equations. By using a novel concept of generalized global symmetries, which can encode the fact that the number of magnetic flux lines in Nature is conserved, I will then describe the construction of a new, comprehensive theory of magnetohydrodynamics. This reformulation has led to a number of general theoretical and experimental predictions for transport in magnetized plasmas. I will then move on to discuss the microscopic physics responsible for transport in strongly interacting states. Beginning with an introduction of holographic duality, this section will summarize holographic insights into the problem of the "unreasonable effectiveness of hydrodynamics" for the description of quark-gluon plasma. Then, I will discuss how the descriptions of microscopic physics and transport transition between strongly and weakly interacting pictures. Finally, by utilizing the mathematical structure behind our new theory of magnetohydrodynamics, a holographic dual of magnetized plasmas will be presented along with the first analyses of strongly interacting magnetized transport. "Exploring the Nucleon Sea" Jen-Chieh Peng , University of Illinois at Urbana-Champaign [Host: Simonetta Liuti] Direct experimental evidence for point-like constituents in the nucleons was first found in the electron deep inelastic scattering (DIS) experiment. The discovery of the valence and sea quark structures in the nucleons inspired the formulation of Quantum Chromodynamics (QCD) as the gauge field theory governing the strong interaction. A surprisingly large asymmetry between the up and down sea quark distributions in the nucleon was observed in DIS and the so-called Drell-Yan experiments. In this talk, I discuss the current status of our knowledge on the flavor structure of the nucleon sea. I will also discuss the progress in identifying the "intrinsic" sea components in the nucleons. Future prospect for detecting some novel sea-quark distributions will also be presented. Joint Colloquium with Physics and Astronomy/NRAO Note special room. "Multimessenger astronomy of compact binaries from the vantage point of computational gravity" Prof. Vasileios Paschalidis , University of Arizona [Host: Kent Yagi] We live in an exciting era where strong-field gravity has become a central pillar in the study of astrophysical sources. 
For the first time in history the detection of gravitational waves and simultaneous electromagnetic signals (multimessenger astronomy) from the same source have the potential to solve some of the most long-standing problems in fundamental physics and astrophysics. Computational gravity plays an important role in the success of the multimessenger astronomy program. Using the vantage point of computational gravity, in this talk we will we focus on how observations of colliding neutron stars can teach us about the state of matter at densities greater than the nuclear density, and with a critical eye assess what we have learnt so far from the first observation of a binary neutron star (event GW170817). We will also discuss how multimessenger detection of collisions of binary black holes may inform us about their environments andthe nature of black holes. "Testing Einstein with numerical relativity: theories beyond general relativity, and the precision frontier" Professor Leo Stein , University of Mississippi Advanced LIGO and Virgo have already detected black holes crashing into each other at least ten times. With their upgrades we anticipate a rate of about 1 gravitational-wave detection per week. More signals and higher precision will take the dream of testing Einstein's theory of gravity, general relativity, and make it a reality. But would we know a correction to Einstein's theory if we saw it? How do we make predictions from theories beyond GR? And do current numerical relativity simulations have enough precision that we could be confident in any potential discrepancy between observations and predictions? I will discuss (i) how to perform simulations in beyond-GR theories of gravity, and (ii) how numerical relativity simulations need to improve to be ready for the precision frontier of gravitational wave astrophysics. "Studying the stars here on earth: Experimental investigations of the nuclear equation-of-state " Sherry Yennello , Texas A&M University Heavy-ion collisions can produce nuclear material over a range of densities and proton fractions to study the nuclear equation-of-state. These measurements are enabled by accelerating nuclei to – in some cases – GeV energies and detecting the fragments that are produced from the collisions. The detectors are multi-detector arrays capable of measuring dozens of particles simultaneously from a single collision. Data rates can range up to many hundreds of collisions per second. One can either explore the characteristics of the individual fragments that are produced, often extracting particle ratios or double ratios, or correlations between the fragments – in particular transverse collective flow. From very low density to about three times normal nuclear density measurements have been made of the density dependence of the asymmetry energy. I will present an overview of how these measurements have been made and the constraints they have set on the nuclear equation-of-state. "Phase space characterization of optical quantum states and quantum detectors" Rajveer Nehra , University of Virginia - Physics We are in the midst of a second quantum revolution fueled by "quantumness" of physical systems and the sophisticated measurement devices or detectors to produce and characterize these exotic systems. Thus, characterization of quantum states and the detectors is a key task in optical quantum science and technology. The Wigner quasi-probability distribution function provides such a characterization. 
In this talk, I present our recent results on quantum state tomography of a single-photon Fock state using photon-number-resolving measurements with a superconducting transition-edge sensor [1]. We directly probe the negativity of the Wigner function in our raw data without any inference or correction for decoherence, which is also an important indicator of the "quantum-only" nature of a physical system. In the second part of the talk, we discuss a method to characterize quantum detectors by experimentally identifying the Wigner functions of the detector positive-operator-valued measures (POVMs), a set of Hermitian operators completely describing the detector [2]. The proposed scheme uses readily available thermal mixtures and probes the Wigner function point-by-point over the entire phase space from the detector's outcome statistics. In order to make the reconstruction robust to experimental noise, we use techniques from convex quadratic optimization. 1. R. Nehra, A. Win, M. Eaton, R. Shahrokhshahi, N. Sridhar, T. Gerrits, A. Lita, S. W. Nam, and O. Pfister, "State-independent quantum state tomography by photon-number-resolving measurements," Optica 6, 1356–1360 (2019). 2. R. Nehra and K. Valson Jacob (2019), "Characterizing quantum detectors by Wigner functions," [arXiv:1909.10628]. "Ferroelectric Polarons, Belgian Waffles, and Principles for "Perfect" Semiconductors" Professor Xiaoyang Zhu, Columbia University [Host: Seunghun Lee] Lead halide perovskites have been demonstrated as high-performance materials in solar cells and light-emitting devices. These materials are characterized by coherent band transport expected from crystalline semiconductors, but dielectric responses and phonon dynamics typical of liquids. Here we explain the essential physics in this class of materials based on their dielectric functions and dynamic symmetry breaking on nano scales. We show that the dielectric function in the THz region may lead to dynamic and local ordering of polar nano domains by an extra electron or hole, resulting in a quasiparticle which we call a ferroelectric large polaron, a concept similar to solvation in chemistry. Compared to a conventional large polaron, the collective nature of polarization in a ferroelectric large polaron may give rise to an order(s)-of-magnitude larger reduction in the Coulomb potential. We show that the shape of a ferroelectric polaron resembles that of a Belgian waffle. Using two-dimensional coherent phonon spectroscopy, we directly probe the energetics and local phonon responses of the ferroelectric large polarons. We find that the electric field from a nascent e-h pair drives the local transition to a hidden ferroelectric order on a picosecond time scale. The ferroelectric or Belgian waffle polarons may explain the defect tolerance and low recombination rates of charge carriers in lead halide perovskites, as well as providing a design principle for the "perfect" semiconductor for optoelectronics. "Quantum states, walks, tiles, and tensor networks" Israel Klich, University of Virginia - Physics A major challenge of physics is the complexity of many-body systems. While true for classical systems, the difficulty is exacerbated in quantum systems, due to entanglement between system components and thus the need to keep track of an exponentially large number of parameters. In particular, this complexity poses a challenge for numerical methods such as quantum Monte Carlo and tensor networks.
Here, exactly solvable models are of crucial importance: we use them to test numerical procedures, to develop intuition, and as a starting point for approximations. In this talk, I will explain our current understanding of a new solvable "walk" model, the area-deformed Motzkin model. The model shows that entanglement may be more acute than previously thought; in particular, it features a novel quantum phase transition between a non-entangled phase and an extensively entangled "rainbow" phase. Most remarkably, the model motivated the construction of a new tensor network, providing, after many years, the first example of an exact tensor network description of a critical system. Finally, I will remark on open problems, and on exciting connections to other fields such as the notion of holography in field theory, and a famous problem in non-equilibrium statistical mechanics. Note special time. "Neutron stars droplets and the quarks within" Professor Or Hen, MIT - Massachusetts Institute of Technology [Host: Nilanga Liyanage] Neutron stars are among the densest strongly-interacting many-body systems in our universe. A main challenge in describing the structure and dynamics of neutron stars stems from our limited understanding of the nuclear interaction at high densities (i.e., short distances) and its relation to the underlying quark-gluon substructure of nuclei. In this talk I will present new results from high-energy electron scattering experiments that probe the short-ranged part of the nuclear interaction via the hard breakup of Short-Range Correlated (SRC) nucleon pairs. As the latter reach densities comparable to those existing in the outer core of neutron stars, they represent 'neutron stars droplets' whose study can shed new light on the dynamical structure of neutron stars. Special emphasis will be given to the effect of SRCs on the behavior of protons in neutron-rich nuclear systems and how it can impact the cooling rates and equation of state of neutron stars. Pursuing a more fundamental understanding of such interactions, I will present new measurements of the internal quark-gluon sub-structure of nucleons and show how its modification in the nuclear medium relates to SRC pairs and short-ranged nuclear interactions. Given time, I will also discuss the development of new effective theories for describing short-ranged correlations, the way in which they relate to experimental observables, and the emerging universality of short-distance and high-momentum physics in nuclear systems. "From interacting Majorana to universal fractional quasiparticles" Jeffrey Teo, University of Virginia - Physics Ising anyons, Majorana fermions (MF) and zero-energy Majorana bound states have promising prospects in topological quantum computing (TQC) because of their ability to store quantum states non-locally in space and their insensitivity to local decoherence. Unfortunately, these objects are not powerful enough to assemble a TQC that can perform universal operations using topological braiding operations alone. On the other hand, there are anyonic quasiparticles, like the Fibonacci anyon in a Read-Rezayi quantum Hall state, that are universal in the braiding-based TQC sense. However, these are quantum dynamical excitations, which can be challenging to spatially manipulate and susceptible to temperature fluctuations in a thermodynamic system.
We propose and define a new notion of universal fractional quasiparticles, which are semi-classical static topological defects, supported by many-body interacting MFs in a superconducting spin-orbit coupled topological electronic system. "From Multimessenger Astronomy to Neutrons and Protons" Andrew Steiner , University of Tennessee Of course, multimessenger astronomy promises to revolutionize astronomy and our understanding of nucleosynthesis. My research shows that it goes further: astronomical observations (via both photons and gravitational waves) provides a unique laboratory to deepen our understanding of QCD and the nucleon-nucleon interaction. Most current work is focused on the equation of state. While the equation of state is indeed important, in this talk, I show how we can go beyond energy density and pressure. I present the first large-scale Bayesian inference of neutron star observations and nuclear structure data to obtain novel results on the composition of dense matter and the nature of nucleonic superfluidity. "Emergence of Mass in the Standard Model" Dr. Craig D. Roberts , Argonne National Laboratory Quantum Chromodynamics (QCD), the nuclear physics part of the Standard Model, is the first theory to demand that science fully resolve the conflicts generated by joining relativity and quantum mechanics. Hence in attempting to match QCD with Nature, it is necessary to confront the innumerable complexities of strong, nonlinear dynamics in relativistic quantum field theory. The peculiarities of QCD ensure that it is also the only known fundamental theory with the capacity to sustain massless elementary degrees-of-freedom, gluons (gauge bosons) and quarks (matter fields); and yet gluons and quarks are predicted to acquire mass dynamically so that the only massless systems in QCD are its composite Nambu-Goldstone bosons. All other everyday bound states possess nuclear-size masses, far in excess of anything that can directly be tied to the Higgs boson. These points highlight the most important unsolved questions within the Standard Model, namely: what is the source of the mass for the vast bulk of visible matter in the Universe and how is this mass distributed within hadrons? This presentation will provide a contemporary sketch of the strong-QCD landscape and insights that may help in answering these questions. "Computing Images from Weak Optical Signals" Dr. Vivek Goyal , Boston University [Host: MIller Eaton] In conventional imaging systems, the results are poor unless there is a physical mechanism for producing a sharp image with high signal-to-noise ratio. In this talk, I will present two settings where computational methods enable imaging from very weak signals: range imaging and non-line-of-sight (NLOS) imaging. Lidar systems use single-photon detectors to enable long-range reflectivity and depth imaging. By exploiting an inhomogeneous Poisson process observation model and the typical structure of natural scenes, first-photon imaging demonstrates the possibility of accurate lidar with only 1 detected photon per pixel, where half of the detections are due to (uninformative) ambient light. I will explain the simple ideas behind first-photon imaging and lightly touch upon related subsequent works that mitigate the limitations of detector arrays, withstand 25-times more ambient light, allow for unknown ambient light levels, and capture multiple depths per pixel. 
NLOS imaging has been an active research area for almost a decade, and remarkable results have been achieved with pulsed lasers and single-photon detectors. Our work shows that NLOS imaging is possible using only an ordinary digital camera. When light reaches a matte wall, it is scattered in all directions. Thus, to use a matte wall as if it were a mirror requires some mechanism for regaining the one-to-one spatial correspondences lost from the scattering. Our method is based on the separation of light paths created by occlusions and results in relatively simple computational algorithms. Related paper DOIs: 10.1126/science.1246775 10.1109/TSP.2015.2453093 10.1109/LSP.2015.2475274 10.1364/OE.24.001873 10.1038/ncomms12046 "Interfaces in oxide quantum heterostructures" Dr. Ho Nyung Lee , Oak Ridge National Laboratory Complex oxides are known to possess the full spectrum of fascinating properties, including magnetism, colossal magneto-resistance, superconductivity, ferroelectricity, pyroelectricity, piezoelectricity, multiferroicity, ionic conductivity, and more. This breadth of remarkable properties is the consequence of strong coupling between charge, spin, orbital, and lattice symmetry. Spurred by recent advances in the synthesis of such artificial materials at the atomic scale, the physics of oxide heterostructures containing atomically smooth layers of such correlated electron materials with abrupt interfaces is a rapidly growing area. Thus, we have established a growth technique to control complex oxides at the level of unit cell thickness by pulsed laser epitaxy. The atomic-scale growth control enables to assemble the building blocks to a functional system in a programmable manner, yielding many intriguing physical properties that cannot be found in bulk counterparts. In this talk, examples of artificially designed, functional oxide heterostructures will be presented, highlighting the importance of heterostructuring, interfacing, and straining. The main topics include (1) charge transfer induced interfacial magnetism and topologically non-trivial spin textures in SrIrO3-based heterostructures and (2) lattice and chemical potential control of oxygen stability and associated electronic and magnetic properties in nickelate-and cobaltite-based heterostructures. This work was supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, Materials Sciences and Engineering Division. "A Mathematical Journey Thru SUSY, Error-Correcting Codes, Evolution, and a Sustainable Reality " Jim Gates, Ph.D. , Brown University [Host: Diana Vaman] This presentation describes an arc in mathematical/theoretical physics traversing concepts from equations, graphs, error-correction, and pointing toward an evolution-like process acting on the mathematical laws that sustain reality. "Kelvin-Froude wake patterns of a traveling pressure disturbance" Genya Kolomeisky , University of Virginia - Physics [Host: Israel Klich] Water wave patterns behind ships fuel human curiosity because they are both beautiful and easily observed. These patterns called wakes were famously described in 1887 by Lord Kelvin. According to Kelvin, the feather-like appearance of the wake is universal and the entire wake is confined within a 39 degree angle. While such wakes have been observed, deviations from Kelvin's predictions have also been reported. 
In this talk summarizing my work with UVA alumnus Jonathan Colen I will present a quantitative reasoning based on classical surface water wave theory that explains why some wakes are similar to Kelvin's prediction, and why others are less so. The central result is a classification of wake patterns which all can be understood in terms of the problem originally treated by Kelvin. "Gravitational waves and fundamental properties of matter and spacetime" David Nichols , University of Amsterdam Gravitational waves from the mergers of ten binary black holes and one binary neutron star were detected in the first two observing runs by the Advanced LIGO and Virgo detectors. In this talk, I will discuss the eleven gravitational-wave detections and the electromagnetic observations that accompanied the neutron-star merger. These detections confirmed many of the predictions of general relativity, and they initiated the observational study of strongly curved, dynamical spacetimes and their highly luminous gravitational waves. One aspect of these high gravitational-wave luminosities that LIGO and Virgo will be able to measure is the gravitational-wave memory effect: a lasting change in the gravitational-wave strain produced by energy radiated in gravitational waves. I will describe how this effect is related to symmetries and conserved quantities of spacetime, how the memory effect can be measured with LIGO and Virgo, and how new types of memory effects have been recently predicted. I will conclude by discussing the plans for the next generation of gravitational-wave detectors after LIGO and Virgo and the scientific capabilities of these new detectors. These facilities could detect millions of black-hole and neutron-star mergers per year, and they can provide insights on a range of topics from the population of short gamma-ray bursts to the presence of dark matter around black holes. "Frontiers in Multi-Messenger Astrophysics at the interface of gravitational wave astrophysics, large scale astronomical surveys and data science " Eliu Huerta , University of Illinois at Urbana-Champaign The next decade promises fundamental new scientific insights and discoveries from Multi-Messenger Astrophysics, enabled through the convergence of large scale astronomical surveys, gravitational wave astrophysics, deep learning and large scale computing. In this talk I describe a Multi-Messenger Astrophysics science program, and highlight recent accomplishments at the interface of gravitational wave astrophysics, numerical relativity and deep learning. I discuss the convergence of this program with large scale astronomical surveys in the context of gravitational wave cosmology. Future research and development activities are discussed, including a vision to leverage data science initiatives at the University of Virginia to spearhead, maximize and accelerate discovery in the nascent field of Multi-Messenger Astrophysics. "Black Hole Bridges" Robert Penna , Columbia University Black holes are bridges between astrophysics and fundamental physics. I will describe three examples of this theme. First, I will explain how contemporary theoretical ideas deriving from the holographic principle have proven useful for interpreting numerical simulations of electromagnetic outflows from spinning black holes. These models are currently being tested against X-ray and radio observations of galactic black holes. 
Second, I will describe a correspondence between black holes and lower-dimensional fluids and discuss the possibility of probing this correspondence with gravitational wave memory experiments. Finally, I will describe how gravitational wave observations of black hole tidal interactions might be used to find new symmetries acting on the event horizon. "Testing Gravity with Cosmology and Astrophysics" Jeremy Sakstein, University of Pennsylvania We are entering a golden age of cosmology and astrophysics. In the coming decade we will have cosmological data for over a billion galaxies, a census of objects in the Milky Way, and a network of gravitational-wave detectors spanning the globe that will detect thousands of events per year. This presents us with the unprecedented opportunity to learn how gravity behaves at the largest distances, and in the most extreme environments. In this talk I will describe how we can use current and upcoming data to understand the unexplained mysteries of the Universe, such as why the expansion of the Universe is accelerating (dark energy). I will also discuss how to connect physics in these disparate regimes and how to test cosmology on small scales. Maximizing the discovery potential of the data requires us to construct robust theoretical models, identify novel probes, and connect theory with observation, and I will describe projects where I have attempted to accomplish this. I will conclude the talk by discussing how this interdisciplinary effort will continue into the next decade and beyond. "Probing Massive and Supermassive Black Holes with Gravitational Waves" Sarah Vigeland, University of Wisconsin Milwaukee Observations have shown that nearly all galaxies harbor massive or supermassive black holes at their centers. Gravitational wave (GW) observations of these black holes will shed light on their growth and evolution, and the merger histories of galaxies. Massive and supermassive black holes are also ideal laboratories for studying strong-field gravity. Pulsar timing arrays (PTAs) are sensitive to GWs with frequencies ~1-100 nHz, and can detect GWs emitted by supermassive black hole binaries, which form when two galaxies merge. The Laser Interferometer Space Antenna (LISA) is a planned space-based GW detector that will be sensitive to GWs with frequencies ~1-100 mHz, and it will see a variety of sources, including merging massive black hole binaries and extreme mass-ratio inspirals (EMRIs), which consist of a small compact object falling into a massive black hole. I will discuss source modeling and detection techniques for LISA and PTAs, as well as present limits on nanohertz GWs from the North American Nanohertz Observatory for Gravitational Waves (NANOGrav) collaboration. "The Electron Ion Collider Science" Markus Diefenthaler, Jefferson Lab Quantum Chromodynamics (QCD), the theory of the strong interaction, is a cornerstone of the Standard Model of modern physics. It explains all nuclear matter as bound states of point-like fermions, known as quarks, and gauge bosons, known as gluons. The gluons bind not only quarks but also interact with themselves. Unlike the more familiar atomic and molecular matter, the interactions and structures are inextricably mixed up, and the observed properties of nucleons and nuclei, such as mass and spin, emerge out of this complex system.
To precisely image the quarks and gluons and their interactions and to explore the new QCD frontier of strong color fields in nuclei, the Nuclear Physics community proposes an US-based Electron Ion collider (EIC) with high-energy and high-luminosity, capable of a versatile range of beam energies, polarizations, and ion species. The community is convinced that the EIC is the right tool to understand how matter at its most fundamental level is made. "Quantum Engineering: A Transdisciplinary Vision" Prem Kumar , Northwestern University A global quantum revolution is currently underway based on the recognition that the subtler aspects of quantum physics known as superposition (wave-like aspect), measurement (particle-like aspect), and entanglement (inseparable link between the two aspects) are far from being merely intriguing curiosities, but can be transitioned into valuable, real-world technologies with performances that can far exceed those obtainable with classical technologies. The recent demonstration by the Chinese scientists of using a low-earth-orbit satellite to distribute entangled photons to two ground stations that are over a thousand kilometers apart is a stunning technological achievement—direct entanglement distribution over the best available fiber links is limited to a few hundred kilometers—and a harbinger of future possibilities for globally secure communications guaranteed by the power of quantum physics. Harnessing the advantages enabled by superposition, measurement, and entanglement (SME)—the three pillars of quantum physics—for any given application is what is termed quantum engineering in general. In many instances, however, the details of the underlying science (high-temperature superconductivity, photosynthesis, avian navigation, are some examples) is still not fully understood, let alone how to turn the partially understood science into a potentially useful technology. Nevertheless, it has become clear in the last few decades that quantum engineering will require a truly concerted effort that will need to transcend the traditional disciplinary silos in order to create and sustain new breeds of science and technology communities that will be equally versed in quantum physics as they would be in their chosen area of technology. In this talk, I will present my vision for unleashing the potential of quantum engineering, taking some examples from ongoing and proposed research. "Frontiers of Multi-Messenger Black-Hole Physics" Stephen Taylor , California Institute of Technology [Host: Diana Vaman ] The bounty of gravitational-wave observations from LIGO and Virgo has opened up a new window onto the warped Universe, as well as a pathway to addressing many of the contemporary challenges of fundamental physics. I will discuss how catalogs of stellar-mass compact object mergers can probe the unknown physical processes of binary stellar evolution, and how these systems can be harnessed as standard distance markers (calibrated entirely by fundamental physics) to map the expansion history of the cosmos. The next gravitational-wave frontier will be opened within 3-6 years by pulsar-timing arrays, which have unique access to black-holes at the billion to ten-billion solar mass scale. The accretionary dynamics of supermassive black-hole binaries should yield several tell-tale signatures observable in upcoming synoptic time-domain surveys, as well as gravitational-wave signatures measurable by pulsar timing. 
Additionally, pulsar-timing arrays are currently placing compelling constraints on modified gravity theories, cosmic strings, and ultralight scalar-field dark matter. I will review my work on these challenges, as well as in the exciting broader arena of gravitational-wave astrophysics, and describe my vision for the next decade of discovery. "Using Topology to Solve Strongly Coupled Quantum Field Theories" Zohar Komargodski, Stony Brook University [Host: Marija Vucelja] I will begin by describing an interacting model in Quantum Mechanics where exact results about the ground state can be established by using tools from topology. I will then argue that such tools are also useful for tackling interesting problems in Quantum Field Theory. In particular, I will review Yang-Mills theory and argue that using topology one can make several predictions about its possible phases. We will then also extend the considerations to Quantum Chromodynamics and discuss possible connections with particle physics phenomenology and with condensed matter physics. "A new approach to search for neutron-antineutron oscillations, and a couple of other phenomena based on neutron reflection from surface: gravitational and whispering-gallery quantum states of neutrons" Valery Nesvizhevsky, Institut Laue Langevin, France [Host: Stefan Baessler] "An observation of neutron-antineutron oscillations ($n$-$\bar{n}$), which violate both $B$ and $B-L$ conservation, would constitute a scientific discovery of fundamental importance to physics and cosmology. A stringent upper bound on its transition rate would make an important contribution to our understanding of the baryon asymmetry of the universe by eliminating the post-sphaleron baryogenesis scenario in the light quark sector. We show that one can design an experiment using slow neutrons that in principle can reach the required sensitivity of $10^{10}$ s in the oscillation time, an improvement of $10^4$ in the oscillation probability relative to the existing limit for free neutrons. This can be achieved by allowing both the neutron and antineutron components of the developing superposition state to coherently reflect from mirrors. We present a quantitative analysis of this scenario and show that, for sufficiently small transverse momenta of $n/\bar{n}$ and for certain choices of nuclei for the $n/\bar{n}$ guide material, the relative phase shift of the $n$ and $\bar{n}$ components upon reflection and the $\bar{n}$ annihilation rate can be small. While the reflection of $\bar{n}$ from a surface looks exotic and counterintuitive and seems to contradict common sense, it is in fact fully analogous to the reflection of $n$ from a surface. The latter phenomenon is well known and has been used in neutron research from its first years. We illustrate it with two selected examples: gravitational and whispering-gallery quantum states of neutrons." [V.V. Nesvizhevsky, A.Yu. Voronin, Surprising Quantum Bounces, Imperial College Press, London, 2015] "Energy-efficient neuromorphic computing with magnetic tunnel junctions" Mark Stiles, NIST [Host: Joe Poon & Avik Ghosh] Human brains can solve many problems with orders of magnitude more energy efficiency than traditional computers. As the importance of such problems, like image, voice, and video recognition, increases, so does the drive to develop computers that approach the energy efficiency of the brain. Magnetic devices, especially tunnel junctions, have several properties that make them attractive for such applications.
Their conductance depends on the state of the ferromagnets, making it easy to read information that is stored in their magnetic state. In addition, current can manipulate the magnetic state. Based on this electrical control of the magnetic state, magnetic tunnel junctions are actively being developed for integration into CMOS integrated circuits to provide non-volatile memory. This development makes it feasible to consider other geometries that have different properties. I describe two of the computing primitives that have been constructed based on the different functionalities of magnetic tunnel junctions. The first of these uses tunnel junctions in their superparamagnetic state as the basis for a population coding scheme. The second uses them as non-linear oscillators in the first nanoscale "reservoir" for reservoir computing. "Thermal relaxations, the Mpemba effect, and adaptation of bacteria" Marija Vucelja, UVA-Physics Most of my talk will be about anomalous thermal relaxations, such as the Mpemba effect. Towards the end of my talk, I will also highlight a few topics in population dynamics that I have been working on. The Mpemba effect is a phenomenon in which "hot can cool faster than cold" - a "shortcut" in relaxation to thermal equilibrium. It occurs when a physical system initially prepared at a hot temperature cools down faster than an identical system prepared at a colder temperature. The effect was discovered as a peculiarity of water. Despite subsequent observations in granular gases, magnetic alloys, and spin glasses, the effect is still most often referred to as an "oddity" of water, although it is widespread and general. I will describe how to define a Mpemba effect for an arbitrary physical system, and show how to quantify and estimate the probability of the Mpemba effect in a few examples. In the remaining time, I will briefly talk about the adaptation of bacterial populations and the immune system of bacteria with CRISPR. Besides being biology's newest buzzword and favorite gene editing tool, CRISPR is also a mechanism that allows bacteria to defend adaptively against phages and other invading genomic material. From the standpoint of physics and biology, the coevolution of bacteria and phages yields fascinating open questions. "Building a Quantum Computer Using Silicon Quantum Dots" Susan Coppersmith, University of Wisconsin - Madison [Host: Despina Louca] The steady increase in computational power of information processors over the past half-century has led to smart phones and the internet, changing commerce and our social lives. Up to now, the primary way that computational power has increased is that the electronic components have been made smaller and smaller, but within the next decade feature sizes are expected to reach the fundamental limits imposed by the size of atoms. However, it is possible that further huge increases in computational power could be achieved by building quantum computers, which exploit in new ways the laws of quantum mechanics that govern the physical world. This talk will discuss the challenges involved in building a large-scale quantum computer as well as progress that we have made in developing a quantum computer using silicon quantum dots. Prospects for further development will also be discussed.
"Unveiling the Normal State of Cuprate High-Temperature Superconductors: Hidden Order of Cooper Pairs" Dragana Popovic , Florida State University Many unusual properties of strongly correlated materials have been attributed to the proximity of quantum phase transitions (QPTs), where different types of orders compete and coexist, and may even give rise to novel phases. In two-dimensional (2D) systems, the nature of the magnetic-field-tuned QPT from a superconducting to a normal state has been widely studied, but it remains an open question. Underdoped copper-oxide high-temperature superconductors are effectively 2D materials and thus present a promising new platform for exploring this long-standing problem. Although in cuprates the normal state is commonly probed by applying a perpendicular magnetic field (H) to suppress superconductivity, the identification and understanding of the H-induced normal state has been a challenge because of the complex interplay of disorder, temperature and quantum fluctuations, and the near-universal existence of charge-density-wave correlations. This talk will describe recent experimental advances in identifying and characterizing a full sequence of ground states as a function of H in underdoped cuprates. In both the absence and the presence of charge order, the results demonstrate the key role of disorder in the H-tuned suppression of 2D superconductivity, giving rise to an intermediate regime with large quantum phase fluctuations, in contrast to the conventional scenario. Most strikingly, the interplay of the "striped" charge order with high-temperature superconductivity leads to the emergence of an unanticipated, insulatinglike ground state with strong superconducting phase fluctuations, suggesting an unprecedented freezing (i.e. "the hidden order") of Cooper pairs. Possible scenarios will be discussed, including the implications of the results for understanding the physics of the cuprate pseudogap regime, as well as other 2D superconductors. "Charge density wave phase transitions in transition metal dichalcogenides" Utpal Chatterjee , University of Virginia - Department of Physics Layered transition-metal dichalcogenides (TMDs) are well known for their rich phase diagrams, which encompass diverse quantum states including metals, semiconductors, Mott insulators, superconductors, and charge density waves (CDWs). For instance, 2H-NbSe2 and 2H-TaS2 are canonical incommensurate CDW systems, while 1T-TiSe2 harbors a commensurate CDW order. There is a coexistence/competition of CDW and superconductivity in 2H-NbSe2 and 2H-TaS2, though this is not the case for pristine 1T-TiSe2. A subtle interplay of CDW and superconducting orders, however, appears in each of these materials via chemical intercalation or under pressure. Such a competition between or coexistence of proximate broken-symmetry phases resembles many aspects of the phase diagram of cuprate high temperature superconductors (HTSCs)—particularly, in the underdoped regime where the enigmatic pseudogap phase exists. The origin of the CDW order in these compounds is an intriguing puzzle despite decades of research. We will present our experimental data, which combine Angle Resolved Photoemission Spectroscopy, Scanning Tunneling Spectroscopy, scattering and transport measurements, to provide new insights into the relative importance of lattice and Coulomb effects in the CDW transitions of these compounds. 
These studies will also highlight the distinctive impacts of disorder and doping in commensurate and incommensurate CDW systems. Finally, comparing spectroscopic features of the CDW state of the TMDs with those of the normal state of underdoped HTSCs, we will discuss whether a CDW order can possibly be the origin of the pseudogap phase in the cuprates.

"Feynman's Footprints: Quantum Field Theory in Nuclear and Particle Physics"
Roxanne Springer, Duke University
2018 is the 100th anniversary of the birth of Richard Feynman. His discoveries and new formalisms, and the way he thought about solving problems, transformed the way we think about physics. I will talk about examples of how these have impacted present results in nuclear and particle physics. I will also expand upon what might be called Feynman's Scientific Method, and how by following that method we can become better scientists ourselves and nurture the next generation of scientists.

"Emergent Gravity"
Diana Vaman, University of Virginia - Physics
Is gravity a fundamental force? I will discuss a few scenarios in which gravity emerges from the dynamics of some underlying field theory. In holography (or the AdS/CFT correspondence), Einstein's equations for the bulk gravity dual are linked to entanglement in the boundary field theory. In another example, the graviton emerges as a composite spin-two massless particle in a scalar field theory.

"Searching for Supersymmetry with the ATLAS experiment"
Evelyn Thomson, University of Pennsylvania [Host: Chris Neu]
The ATLAS experiment is searching new territory for evidence of new particles produced in proton collisions at the highest energies. Questioning assumptions is important in these searches. I will compare selected results from searches for supersymmetry with and without the assumption of R-Parity, a quantum number derived from the spin and type of particle. I will also present some of the detector-related challenges associated with measuring charged particle momenta, including the planned upgrade of the detector to cope with up to 200 proton collisions every 25 nanoseconds.

"The Brave nu World"
Andre Luiz De Gouvea, Northwestern University [Host: P. Q. Hung]
I will review the current theoretical and phenomenological status of neutrino physics. In more detail, I will discuss our current understanding of neutrino properties, open questions, some new physics ideas behind nonzero neutrino masses, and the challenges of piecing together the neutrino mass puzzle. I will also comment on the new physics reach of the current and the next generation of neutrino oscillation experiments.

"High-Q Optical Micro-cavities: Towards Integrated Optical Time Standards and Frequency Synthesizers"
Kerry Vahala, Caltech [Host: OSA/SPIE Student Chapter]
Communication systems leverage the respective strengths of optics and electronics to convey high-bandwidth signals over great distances. These systems were enabled by a revolution in low-optical-loss dielectric fiber, complex integrated circuits, as well as devices that link together the optical and electrical worlds. Today, another revolution is leveraging the advantages of optics and electronics in new ways. At its center is the laser frequency comb, which provides a coherent link between these two worlds. Significantly, because the link is also bidirectional, performance attributes previously unique to electronics and optics can be shared.
The end result has been transformative for time keeping, frequency metrology, precision spectroscopy, microwave generation, ranging, and other technologies. Even more recently, low-optical-loss dielectrics, now in the form of high-Q optical resonators, are enabling the miniaturization of frequency combs. These new "microcombs" can be integrated with electronics and other optical components to potentially create systems on-a-chip. I will briefly overview the history and elements of frequency combs as well as the physics of the new microcombs. Application of the microcombs for spectroscopy and LIDAR will be discussed. Finally, efforts underway to develop integrated optical clocks and integrated optical frequency synthesizers using the microcomb element are described.

"Teaching physics as it is done: A plea for qualitative methods"
Prof. Jean-Marc Lévy-Leblond (Emeritus) [Host: Olivier Pfister]
It is customary for young physicists, when entering their professional career, to be astonished by the huge difference between physics as it is done and physics as it is taught. The purpose of this talk is to show that teaching physics as it is done is indeed possible and should be encouraged, despite the undeniable existence of didactical, epistemological and institutional obstacles.

"Using Dynamic Interferometry to Measure Optics of Next Generation Telescopes"
James Wyant, University of Arizona
There are currently several large telescope projects. One new telescope is the James Webb Space Telescope (JWST), which is planned to be launched into space on an Ariane 5 rocket from French Guiana in Spring 2019. It is expected that JWST will be the premier observatory of the next decade, serving thousands of astronomers worldwide. It will study every phase in the history of our Universe, ranging from the first luminous glows after the Big Bang, to the formation of solar systems capable of supporting life on planets like Earth, to the evolution of our own Solar System. The primary mirror will consist of 18 mirror segments made of beryllium coated with gold to give a total aperture diameter of 6.5 m. It is critical that the 18 mirror segments are properly phased so they perform as a single 6.5 m diameter mirror. JWST's backplane is the large structure that holds and supports the big hexagonal mirrors of the telescope. The backplane has an important job as it must carry not only the 6.5 m diameter primary mirror plus other telescope optics, but also the entire module of scientific instruments. The mechanical stability and thermal characteristics of the graphite composite backplane are extremely important for optimum performance of the telescope. JWST has many challenging optical testing requirements, including a) primary mirror figure testing, b) back structure measurement, c) segment phasing, d) thermal and mechanical strain, and e) vibrational dynamics. Another telescope currently being constructed is the Giant Magellan Telescope (GMT), a large ground-based telescope consisting of seven 8.4 m diameter mirrors that will be built on a peak in the Andes Mountains near several existing telescope facilities at Las Campanas, Chile, at an altitude of over 2,550 meters. The seven 8.4 m diameter mirror segments will be phased to give a telescope having the resolving power of a telescope 24.5 meters in diameter. GMT is expected to be operational for many decades, enabling breakthrough science ranging from studies of the first stars and galaxies in the universe to the exploration of extrasolar alien worlds.
The GMT is poised to answer some of humanity's biggest questions about the nature of exoplanets and whether we are alone in the universe, about the beginning of the universe and the formation and evolution of the galaxies, about the origin of the chemical elements, and about how black holes grow. Like the JWST, the GMT has many challenging testing requirements. During this talk we will describe the JWST and the GMT and the dynamic interferometry techniques that have been developed to measure high-quality large telescope optics and the surface vibration and stability characteristics of the supporting structure required for high-performance large telescopes.

"What are Gravitational Waves telling us about Theoretical Physics"
Nicolas Yunes, Montana State University
The recent gravitational wave observations of the collision of black holes and of neutron stars have allowed us to pierce into the extreme gravity regime, where gravity is simultaneously unfathomably large and wildly dynamical. These waves encode a trove of information about physics that is ripe for the taking, including potential revelations about the validity of Einstein's theory of General Relativity and about nuclear physics in the extreme gravity regime. In this talk, I will describe some of the inferences we can make on both theoretical and nuclear physics from current and future gravitational wave observations.

"Chasing Relativistic Electrons in Topological Quantum Materials"
Adam Kaminski, Iowa State and Ames Lab. [Host: Utpal Chatterjee]
The discovery of Dirac fermions in graphene has inspired a search for Dirac and Weyl semimetals in three dimensions, thereby making it possible to realize exotic phases of matter first proposed in particle physics. Such materials are characterized by the presence of nontrivial quantum electronic states, where the electron's spin is coupled with its momentum and Fermi surfaces are no longer closed contours in momentum space, but instead consist of disconnected arcs. This opens up the possibility for developing new devices in which information is stored and processed using spin rather than charge. Such platforms may significantly enhance the speed and energy efficiency of information storage and processing. In this talk we will discuss the electronic properties of several newly discovered tellurium-based topological quantum materials. In WTe2 we have observed a topological transition involving a change of the Fermi surface topology (known as a Lifshitz transition) driven by temperature. The strong temperature dependence of the chemical potential that is at the heart of this phenomenon is also important for understanding the thermoelectric properties of such semimetals. In a close cousin, MoTe2, by using high-resolution laser-based Angle Resolved Photoemission Spectroscopy (ARPES) we identify Weyl points and Fermi surface arcs, showing a new type of topological Weyl semimetal with electron and hole pockets that touch at a Weyl point. I will also present evidence for a new topological state in PtSn4, which manifests itself in the presence of a set of extended arcs rather than Dirac points, and which is not yet understood theoretically. These results open up new directions for research aimed at enhancing the topological responsiveness of new quantum materials.

"Statistical mechanics for networks of real neurons"
William Bialek, Princeton University
Thoughts, memories, percepts, and actions all result from the interactions among large numbers of neurons.
Physicists have long hoped that these emergent behaviors could be described using ideas from statistical mechanics. Recent experimental developments have made it possible to monitor, simultaneously, the electrical activity in hundreds or even thousands of cells. I will describe surprisingly simple statistical physics models that provide a detailed, quantitative account of these data, and then turn to renormalization group ideas that allow us to search explicitly for some underlying simplicity. There are signs that real networks are described by non-trivial fixed points, setting the stage for more ambitious theorizing.
http://www.princeton.edu/~wbialek/wbialek.html

Hoxton Lecture, Chemistry, Room 402
"The Physics of Life"
In the four hundred years since Galileo, the physics community has constructed a remarkably successful mathematical description of the world around us. From deep inside the atomic nucleus to the structure of the universe on the largest scales, from the flow of air over the wing of an airplane to the flow of electrons in a computer chip, we can predict in detail what we see, and what will happen when we look in places we have never looked before. What are the limits to this predictive power? In particular, can we imagine a theoretical physicist's approach to the complex and diverse phenomena of the living world? Is there something fundamentally unpredictable about life, or are we missing some deep theoretical principles that could bring the living world under the predictive umbrella of physics? Exploring this question gives us an opportunity to reflect on what we expect from our scientific theories, and on many beautiful phenomena. I hope to leave you with a deeper appreciation for the precision of life's basic mechanisms, and with optimism about the prospects for better theories.

"Multi-messenger Astrophysics in Light of LIGO's Recent Discoveries"
Imre Bartos, University of Florida
The recent discoveries of gravitational waves unveiled numerous opportunities in astrophysics, as well as in the study of the cosmos and the laws of physics. In particular, the multi-messenger detection of binary neutron-star merger GW170817 through gravitational waves and across the electromagnetic spectrum already delivered several important results. I will outline what we learned from GW170817 so far (its remnant is still observable!), along with the opportunities and challenges of near-future multi-messenger observations that will broaden our horizon with gravitational waves in the next few years. We can expect the proliferation of detected binary neutron star and binary black hole mergers, along with large-scale efforts to rapidly identify the electromagnetic and neutrino counterparts of these events. Frequent multi-messenger observations will enable the study of exceptional events, source populations, and sufficient statistics to probe new physics and cosmology.

"Quantum gas microscopy of many-body dynamics in Fermi-Hubbard and Ising systems"
Peter Schauss, Princeton University
The ability to probe and manipulate cold atoms in optical lattices at the atomic level using quantum gas microscopes enables quantitative studies of quantum many-body dynamics. While there are many well-developed theoretical tools to study many-body quantum systems in equilibrium, gaining insight into dynamics is challenging with available techniques. Approximate methods need to be benchmarked, creating an urgent need for measurements in experimental model systems.
In this talk, I will discuss two such measurements. First, I will present a study that probes the relaxation of density modulations in the doped Fermi-Hubbard model. This leads to a hydrodynamic description that allows us to determine the conductivity. We observe bad metallic behavior that we compare to predictions from finite-temperature Lanczos calculations and dynamical mean field theory. Second, I will introduce a new platform to study the 2D quantum Ising model. Via optical coupling of atoms in an optical lattice to a low-lying Rydberg state, we observe quench dynamics in the resulting Ising model and prepare states with antiferromagnetic correlations.

"Optical and transport properties of geometric metals"
Dmytro Pesin, University of Utah [Host: Israel Klich]
The effects of band geometry on measurable properties of electronic systems have been one of the central subjects in the recent history of condensed matter physics. In this talk, I will describe how non-trivial topology and, more generally, geometry of gapless electronic phases manifest in their optical and transport characteristics. The main focus of the talk will be on the optical anomalous Hall effect and optical activity of Weyl metals, the kinetic magnetoelectric effect in noncentrosymmetric conductors, and disorder physics in these materials.

"Quantum Mixology: Creating Novel Interacting Bose-Fermi Mixtures with Cs and Li"
Brian DeSalvo, University of Chicago
A gas of atoms cooled to sufficiently low temperature will form either a Bose-Einstein condensate (BEC) or a degenerate Fermi gas (DFG) depending on the quantum statistics of the constituent particles. But what happens when you combine a BEC and a DFG in an optical trap and add a healthy dose of interspecies interactions? Mean-field theory predicts three possible outcomes: a miscible mixture for weak interactions, complete demixing for strong repulsive interactions, or a spectacular collapse due to the loss of mechanical stability for strong attractive interactions. In this talk, I will discuss our efforts to answer this question experimentally in the specific case where the bosons are much heavier than the fermions. To this end, we have created the first quantum degenerate mixture of bosonic 133Cs and fermionic 6Li and used an interspecies Feshbach resonance to tune the interactions between the bosons and fermions. For attractive interspecies interactions, we find two surprising results. First, we show that a degenerate Fermi gas of Li can be trapped by a Cs BEC, even in the absence of external potentials. Second, for strong attractive interactions where collapse is predicted, we observe no such instability. I will discuss the mechanisms at play to explain these results and comment on current and future studies delving deeper into these unexpected regimes.

"New Frontiers of Electromagnetic Phenomena at the Nanoscale"
Wade Hsu, Yale University
Optics and photonics today enjoy unprecedented freedom. The ability to synthesize arbitrary light fields (through wavefront shaping) and the ability to design structures at the subwavelength scale (through nanofabrication) enable us to realize phenomena that could only be imagined in the past. In this talk, I will present several experiments and related theory that demonstrate exciting new phenomena which were previously inaccessible. A) Conventional textbook wisdom is that waves cannot be perfectly confined within the continuum spectrum of an open system.
Exceptions called "bound states in the continuum" were hypothesized by von Neumann and Wigner [1] but not realized. I will describe the first realization of such unusual states [2] and their manifestation as polarization vortices protected by topologically conserved "charges" [3]. B) Our ability to control radiation also enables the realization of non-Hermitian phenomena with no counterpart in closed systems. I will show how non-Hermiticity generates unique topologies in photonic band structures and leads to enhanced light–matter interactions [4,5]. C) Strong disorder in naturally occurring light-scattering media allows us to study mesoscopic physics in a new arena. I will describe the control of optical transport via wavefront shaping, and how the long-range correlations between multiply scattered photons enable us to simultaneously control orders of magnitude more degrees of freedom than previously thought possible [6,7].
[1] C. W. Hsu*, B. Zhen* et al., Nature Reviews Materials 1, 16048 (2016). [2] C. W. Hsu*, B. Zhen* et al., Nature 499, 188 (2013). [3] B. Zhen*, C. W. Hsu* et al., Phys. Rev. Lett. 113, 257401 (2014). [4] B. Zhen*, C. W. Hsu* et al., Nature 525, 354 (2015). [5] H. Zhou et al., Science, eaap9859 (2018). [6] C. W. Hsu et al., Phys. Rev. Lett. 115, 223901 (2015). [7] C. W. Hsu et al., Nature Physics 13, 497 (2017).

"Topological and nonreciprocal dynamics in an optomechanical system"
Haitan Xu, Yale University
Non-Hermitian systems exhibit rich physical phenomena that open the door to qualitatively new forms of control. In this talk, I will introduce our recent work on topological and nonreciprocal dynamics in a non-Hermitian optomechanical system. Specifically, we realized topological energy transfer between nearly degenerate modes by adiabatically encircling an exceptional point, a singularity of the complex spectrum (a minimal two-mode example is written out below, after the quantum molecular dynamics listing). We also demonstrated that this energy transfer is non-reciprocal: a given topological operation can only transfer energy in one direction. We have extended the topological and nonreciprocal dynamics to highly non-degenerate modes by exploiting a generic form of nonlinearity, which should allow these effects to be exploited in a very wide range of physical systems. In addition, we realized nonreciprocal dynamics by optomechanical interference.

"Quantum Molecular Dynamics of Strongly Correlated Electron Materials"
Gia-Wei Chern, UVA - Department of Physics
I will present a new formulation of quantum molecular dynamics for strongly correlated materials. Our novel scheme enables the study of the dynamical behavior of atoms and molecules with strong electron correlations. In particular, our scheme is based on the efficient Gutzwiller method that goes beyond the conventional mean-field treatment of the intra-atomic electron repulsion and captures crucial correlation effects such as band narrowing and electron localization. We use Gutzwiller quantum molecular dynamics to investigate the Mott metal-insulator transition in the liquid phase of a single-band metal and uncover intriguing structural and transport properties of the atoms. I will also discuss future plans for large-scale dynamical simulations of strongly correlated systems.
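As a minimal illustration of the exceptional point invoked in the optomechanics listing above (a generic two-coupled-mode example, not the specific device of that talk): take two modes with equal frequency $\omega_0$, coupling $g$, and unequal damping rates $\gamma_{1,2}$,
$$H=\begin{pmatrix}\omega_0-i\gamma_1 & g\\ g & \omega_0-i\gamma_2\end{pmatrix},\qquad \lambda_\pm=\omega_0-\frac{i(\gamma_1+\gamma_2)}{2}\pm\sqrt{g^2-\frac{(\gamma_1-\gamma_2)^2}{4}}.$$
At $g=|\gamma_1-\gamma_2|/2$ the square root vanishes and the two complex eigenvalues together with their eigenvectors coalesce; that degeneracy of a non-Hermitian matrix is the exceptional point, and slowly encircling it in parameter space is what produces the topological, direction-dependent energy transfer described in the abstract.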
"A Rare and Prolific r-process Event Preserved in an Ultra-Faint Dwarf Galaxy" Alexander Ji , Carnegie Observatories [Host: Xiaochao Zheng] The heaviest elements in the periodic table are synthesized through the rapid neutron-capture process (r-process), but the astrophysical site producing these elements has been a long-standing conundrum. Ultra-faint dwarf galaxies contain a simple fossil record of early chemical enrichment that provide an ideal laboratory to investigate the origin of r-process elements. Previous measurements found very low levels of neutron-capture elements in ultra-faint dwarfs, preferring supernovae as the r-process site. I present high-resolution chemical abundances of nine stars in the recently discovered ultra-faint dwarf Reticulum II, which display extremely enhanced r-process abundances 2-3 orders of magnitude higher than the other ultra-faint dwarfs. Stars with such extreme r-process enhancements are only rarely found in the Milky Way halo. The r-process abundances imply that the neutron-capture material in Reticulum II was synthesized in a single prolific event that is incompatible with r-process yields from ordinary core-collapse supernovae but consistent with a neutron star merger. Together with the recent gravitational wave observations of a neutron star merger and its electromagnetic afterglow, it is now clear that neutron star mergers dominate cosmic production of r-process elements. "Electron hydrodynamics in solid-state physics" Thomas Scaffidi , University of California, Berkeley Wolfgang Pauli called solid-state physics "the physics of dirt effects", and this name might appear well-deserved at first sight since transport properties are more often than not set by extrinsic properties, like impurities. In this talk, I will present solid-state systems in which electrons behave hydrodynamically, and for which transport properties are instead set by intrinsic properties, like the viscosity. This new regime of transport opens the way for a "viscous electronics", and provides a new angle to study how quantum mechanics can constrain and/or enrich hydrodynamics. "A programmable quantum computer based on trapped ions" Norbert Linke , Joint Quantum Institute, University of Maryland, and NIST Quantum computers can solve certain problems more efficiently than any classical computer. Trapped ions are a promising candidate for realizing such a system. We present a modular quantum computing architecture comprised of a chain of 171Yb+ ions with individual Raman beam addressing and individual readout [1]. We use the transverse modes of motion in the chain to produce entangling gates between any qubit pair. This creates a fully connected system which can be configured to run any sequence of single- and two-qubit gates, making it in effect an arbitrarily programmable quantum computer that does not suffer any swap-gate overhead [2]. Recent results from different quantum algorithms on five and seven ions will be presented [3,4], including a quantum error detection protocol that fault-tolerantly encodes a logical qubit [5]. I will also discuss current work and ideas to scale up this architecture. [1] S. Debnath et al., Nature 563:63 (2016). [2] NML et al., PNAS 114 13:3305 (2017). [3] C. Figgatt et al., Nat. Communs. 8, 1918 (2017). [4] NML et al., arXiv:1712.08581 (2017) [5] NML et al., Sci. Adv. 3, 10 (2017). 
"Quantum sensing in a new single-molecule regime" Peter Maurer , Stanford University Quantum optics has had a profound impact on precision measurements, and recently enabled probing various physical quantities, such as magnetic fields and temperature, with nanoscale spatial resolution. Such advancements in 'quantum sensing' have brought the elusive dream of performing nuclear magnetic resonance spectroscopy (NMR) on individual biomolecules closer to reality. In my talk, I will discuss the development and application of novel quantum metrological technologies to study biological systems at a single-molecule level. I will start with a general introduction to quantum sensing, with a focus on the measurement of magnetic fields at a nanoscale. I will then show how we utilize such sensing techniques to control the temperature profile in living systems with subcellular resolution. Finally, I will provide an outlook on how quantum sensing and single-molecule biophysics can be utilized to perform NMR spectroscopy with unprecedented sensitivity, possibly down to the level of individual biomolecules. "Topological Superconductivity From Electronic Interactions" Yuxuan Wang , UIUC Topological superconductors exhibit exotic Majorana modes at the boundaries and vortices, and can provide important applications in quantum computing. In addition to usual path of "engineering" topological superconductivity with heterostructure of conventional superconductors, we show that intrinsic topological superconductivity can also be naturally realized through electron-electron interactions. Specifically, we analyze the topological superconducting state that emerges near the onset of an inversion-breaking electronic order. Other than topological superconductivity, we show that the system has an enhanced U(1)xU(1) symmetry as well as a rich phase diagram. We address the relevance of our results with recent experiments in Cd2Ce2O7 and half-Heusler superconductors. We argue that important progress can be made at the intersection of topological superconductivity and unconventional superconductivity. "Imprints of complex landscapes on glassy materials" Sho Yaida , Duke University Amorphous solids are omnipresent in everyday life, from window glasses to plastics to piles of sand. Yet our understanding of their properties lags far behind that of their crystalline counterparts. Recent advances are rapidly changing the way in which we understand these materials. This talk overviews two such advances: (i) the algorithmic developments that link dramatic slowdown of glass-forming liquids to growing amorphous order, and (ii) the discovery of the critical replica-symmetry-breaking transition within solid glasses. Taken together, these results reinforce the overriding role of rugged free-energy landscapes in controlling glassiness. "Topological Quantum Chemistry" Jennifer Cano , Princeton University The past decade's apparent success in predicting and experimentally discovering distinct classes of topological insulators (TIs) and semimetals masks a fundamental shortcoming: out of 200,000 stoichiometric compounds extant in material databases, only several hundred of them are topologically nontrivial. Are TIs that esoteric, or does this reflect a fundamental problem with the current piecemeal approach to finding them? To address this, we propose a new and complete electronic band theory that highlights the link between topology and local chemical bonding, and combines this with the conventional band theory of electrons. 
We classify the possible band structures for all 230 crystal symmetry groups that arise from local atomic orbitals, and show which are topologically nontrivial. We show how our topological band theory sheds new light on known TIs, and demonstrate the power of our method to predict new TIs.

"Constraints on multiparticle entanglement"
David Meyer, University of California at San Diego
States of a multiparticle quantum system are useful for quantum information processing when they are entangled, i.e., not product states relative to the tensor product decomposition of the Hilbert space corresponding to the particles. Arbitrary entanglements between parts of a quantum system are not possible, however; they must satisfy certain "monogamy" constraints which limit how much multiple different subsystems can be entangled with one another. The standard monogamy constraints can be generalized in several ways: in this talk we'll tighten some, generalize others to higher dimensional tensor factors, and derive inequalities satisfied by symmetric sets of entanglement measures. Along the way we'll contrast the quantum results with corresponding statements about classical random variables.
https://math.ucsd.edu/people/profiles/david-meyer/

"Entanglement, chaos and order"
Xiaoliang Qi, Stanford University
In classical mechanics, chaos refers to the phenomenon that an arbitrarily small perturbation leads to a dramatic change at a later time. The analogous phenomenon in quantum mechanics, quantum chaos, is generic in many-body systems. Although chaos makes it difficult to solve the many-body problem exactly, it actually provides new knowledge about the dynamics of the system, such as thermalization. In understanding quantum chaos and thermalization, the concept of quantum entanglement plays an essential role. In this talk, I will discuss the connection between several related phenomena, including the dynamics of quantum entanglement, thermalization of isolated systems, and measures of quantum chaos. As a concrete model to study quantum chaos, I will discuss the Sachdev-Ye-Kitaev (SYK) model and its generalizations. This model provides an example of strongly correlated systems in which new kinds of "order" emerge from chaos. Entanglement dynamics in this model suggests an interesting interplay between thermalization and many-body localization. References: arXiv:1511.04021, arXiv:1609.07832

"Recent Advances on the Glass Problem"
Patrick Charbonneau, Duke University
Recent theoretical advances in the mean-field theory of glasses predict the existence, deep in the glass phase, of a novel phase transition, a so-called Gardner transition. This transition signals the emergence of a complex free energy landscape composed of a marginally stable hierarchy of sub-basins. It is also thought to be the onset of the anomalous thermal and transport properties of amorphous systems, and to ultimately lead to the unusual critical behavior at jamming. In this talk, I will present an overview of our recent theoretical and numerical advances in capturing and characterizing this novel materials feature.

"Bad Metal Behavior and Mott Quantum Criticality"
Vladimir Dobrosavljevic, Florida State University [Host: Gia-Wei Chern]
According to early ideas of Mott and Anderson, the interaction-driven metal-insulator transition (the "Mott transition") remains a sharp T=0 phase transition even in the absence of any spin or charge ordering. Should this phase transition be regarded as a quantum critical point?
To address this question, here we examine the phase diagram and transport properties of the maximally frustrated half-filled Hubbard model, in the framework of dynamical mean-field theory (DMFT). We identify a quantum Widom line (QWL) which defines the center of the corresponding quantum critical region associated with the Mott metal-insulator transition for this model. The evolution of resistivity with temperature is then evaluated along trajectories following (parallel to) the QWL, displaying remarkable scaling behavior characteristic of quantum criticality. Precisely this kind of behavior was found in very recent experiments on organic Mott systems [1,2]. In the case of the doping-driven Mott transition, we show that the mysterious bad metal behavior (T-linear resistivity around the Mott-Ioffe-Regel limit) coincides with the quantum critical region of the Mott transition.
[1] Quantum criticality of Mott transition in organic materials, Tetsuya Furukawa, Kazuya Miyagawa, Hiromi Taniguchi, Reizo Kato & Kazushi Kanoda, Nature Physics, 9 Feb. 2015; doi:10.1038/nphys3235. [2] See also: http://condensedconcepts.blogspot.com/2015/03/quantum-criticality-near-mott.html

"APS Bridge Program: Changing the Face of Physics Graduate Education"
Ted Hodapp, APS Bridge Program
In nearly every science, math, and engineering field there is a significant falloff in participation by underrepresented minority (URM) students who fail to make the transition between undergraduate and graduate studies. The American Physical Society (APS) has realized that a professional society can erase this gap by acting as a national recruiter of URM physics students and connecting these individuals with graduate programs that are eager to a) attract motivated students to their program, b) increase domestic student participation, and c) improve the diversity of their program. Now in its fifth year, the APS has placed enough students into graduate programs nationwide to eliminate this achievement gap. The program has low costs, is popular among graduate programs, and has inspired other departments to adopt practices that improve graduate admissions and student retention. This presentation will review project activities, present data that demonstrate effectiveness, and discuss future actions. This material is based upon work supported in part by the National Science Foundation under Grant No. NSF-1143070.

"Lifting the Bandwidth Limit of Optical Homodyne Measurement - A Key for Broadband Quantum Information"
Avi Pe'er, Bar Ilan University
Homodyne measurement is a cornerstone of quantum optics. It measures the fundamental variables of quantum electrodynamics - the quadratures of light, which represent the cosine-wave and sine-wave components of an optical field. The quadratures constitute the quantum optical analog of position and momentum in mechanics and obey quantum uncertainty, indicating the inherent inability to measure both simultaneously. The homodyne process, which extracts a chosen quadrature amplitude by correlating the optical field against an external quadrature reference (local oscillator, LO), forms the backbone of coherent detection in physics and engineering, and plays a central role in quantum information processing.
Homodyne can reveal non-classical phenomena, such as squeezing of the quadrature uncertainty; it is used in tomography to fully characterize quantum states of light; and it can generate non-classical states, provide local measurements for teleportation, and serve as a major detector for quantum key distribution (QKD) and quantum computing. Yet, standard homodyne suffers from a severe bandwidth limitation. While the bandwidth of optical states can easily span many THz, standard homodyne detection is inherently limited to the electrically accessible MHz to GHz range, leaving a dramatic gap between the relevant optical phenomena and the measurement capability. This gap impedes effective utilization of the huge bandwidth resource of optical states and the potential enhancement of the information throughput by several orders of magnitude with parallel processing in quantum computation, QKD and other applications of quantum squeezed light. Here we demonstrate a fully parallel optical homodyne measurement across an arbitrary optical bandwidth, effectively lifting the bandwidth limitation completely. Using optical parametric amplification, which amplifies one quadrature while attenuating the other, we measure two-mode quadrature squeezing of 1.7 dB below the vacuum level simultaneously across a bandwidth of 55 THz using a single LO - the pump. This broadband parametric homodyne measurement opens a wide window for parallel processing of quantum information.
Yaakov Shaked, Yoad Michael, Rafi Vered, Leon Bello, Michael Rosenbluh and Avi Pe'er, Physics Dept. and BINA Center for Nanotechnology, Bar Ilan University, Ramat Gan 5290002, Israel

"Tomography of the Atomic Nucleus"
Simonetta Liuti, UVA-Physics [Host: Joe Poon]
The history of our exploration of subatomic matter has witnessed a major breakthrough with every new probe being introduced. In the 1950's Hofstadter and collaborators, using elastic electron scattering, measured for the first time the electromagnetic form factors of nucleons and nuclei and provided the first information on the nuclear spatial charge and magnetization distributions. In the late 1960's and early 70's, Friedman, Kendall and Taylor, using Deep Inelastic Scattering of electrons off the nucleon, discovered its underlying quark structure as displayed in the quarks' longitudinal momentum distributions. I will discuss probes at the next frontier that will allow us to access dynamically correlated distributions in both momentum and coordinate space, the Wigner distributions, at the femtoscale. Deeply Virtual Compton Scattering, namely high energy lepton scattering off a nucleon target producing a high energy real photon and a small angle recoil proton, is one such probe. I will explain how a detailed mapping of the quarks and gluons in the nucleon and nucleus in phase space, or a phase-space tomography, besides providing for the first time images of the quark and gluon spatial distributions, is essential for understanding the so far elusive nucleon mass and spin decompositions in terms of quark and gluon components.

"All-optical Switching for Photonic Quantum Networks"
The quantum internet of the future will require device functionalities that implicitly respect fundamental facts such as that quantum information cannot be copied and cannot be measured precisely.
A quantum repeater, for example (the analog of the optical amplifier that enabled the global reach of the ubiquitous Internet connectivity we enjoy today), is yet to be demonstrated, although recent years have seen tremendous progress. Many other device functionalities (switches, routers, format converters, etc.) would also be needed that do not unnecessarily disturb or corrupt the quantum information as it flows from one node of the internet to another. In recent years, my group has engineered an all-optical quantum switch that fulfills many of the requirements for distributing quantum information in a networked environment. In this talk, I will present our motivation, design, construction, characterization, and utilization of such a switch in near-term networked quantum applications.

"Looking for New Physics with the Weak Interaction in Electron Scattering: Recent Results from Qweak and Future Perspectives"
Kent Paschke, UVA-Physics
The measurement of the violation of parity symmetry in electron scattering has proven to be a powerful technique for exploring nuclear matter and searching for new fundamental forces. In the Standard Model of particle physics, parity violation can only occur through the weak interaction. Precision measurements of this symmetry breaking can test the completeness of this description of the weak force at low energies (the measured asymmetry is written out below, after the gravitational-wave listing). I will describe the result of one such measurement - the recently completed Qweak experiment - along with the experimental challenges and triumphs. Future measurements in the field of parity-violating electron scattering will also be reviewed, including other Standard Model tests and experiments using the weak force to determine the size of a heavy atomic nucleus.

"Peeling the Atomic Onion"
Xiaochao Zheng, UVA-Physics
The word "atom" (a-tomos) originates from ancient Greek philosophers, who argued that objects can be eventually divided into discrete, small particles, beyond which matter is no longer cuttable. Our search for the answer to "What is matter made of?" has gone a long way, from the first experimental evidence of atoms in the 1800's, to Rutherford's alpha scattering on gold foils, to modern day's linear accelerators looking into the atomic nucleus. We now understand that matter is made of quarks and leptons, currently named elementary particles (objects of no size) that form the foundation of the Standard Model of Particle Physics. However, if we look back at this journey, one may wish to oppose the view of the ancient Greeks and argue that quarks and leptons cannot be the end of the story, and that our quest for peeling the atomic onion may be a timeless journey. I will discuss the frontier research in electron scattering at the GeV energy level. I will focus on parity violation in electron scattering off the proton and the neutron and the extraction of neutral-weak effective couplings between electrons and quarks, and show how such high precision measurements are now helping us venture further into the study of subatomic structure.

"Future directions in gravitational-wave detection"
Nergis Mavalvala, M.I.T. [Host: Cass Sackett]
The Laser Interferometer Gravitational-wave Observatory (LIGO) detected gravitational waves for the first time in 2015. Since then, there have been a couple more detections of binary black hole mergers. I will discuss the instruments that made these discoveries, the science so far, and plans for future improvements and upgrades to LIGO.
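For reference on the Qweak listing above, the measured quantity is the parity-violating asymmetry between right- and left-handed electron beams; schematically (the detailed kinematic factors are spelled out in the experimental papers),
$$A_{PV}=\frac{\sigma_R-\sigma_L}{\sigma_R+\sigma_L}\simeq\frac{-G_F\,Q^2}{4\sqrt{2}\,\pi\alpha}\left[Q_W^p+Q^2\,B(Q^2,\theta)\right],$$
so at small momentum transfer $Q^2$ the asymmetry gives nearly direct access to the proton's weak charge $Q_W^p$, which the Standard Model ties to the weak mixing angle; a significant deviation from that prediction would signal new physics.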
Special Colloquium and Hoxton Lecture
"The Warped Universe: the one hundred year quest to discover Einstein's gravitational waves"
In 2016, scientists announced the first ever detection of gravitational waves from colliding black holes, launching a new era of gravitational wave astrophysics. Gravitational waves were predicted by Einstein a hundred years earlier. I will describe the science, technology, and human story behind these discoveries that provide a window into some of the most violent and warped events in the Universe.

"Tune-out wavelength spectroscopy: a new technique to characterize atomic structure"
Cass Sackett, UVA-Physics
When you shine a laser on an atom, the electric field of the light induces a dipole moment, resulting in an energy shift. The dipole can be either parallel or anti-parallel to the field, depending on the frequency of the light; this corresponds to negative or positive energies. At certain frequencies, however, the induced dipole is zero. The corresponding light wavelength is called a tune-out wavelength. The locations of the various tune-out wavelengths depend on the electronic wave function of the atom, particularly the dipole matrix elements (a schematic formula is given below, after the neutrino astronomy listing). So by measuring the tune-out wavelength, the dipole matrix elements can be determined more accurately than by conventional techniques. This is useful because the dipole matrix elements are also used to relate precision atomic experiments like parity violation to fundamental particle properties like the weak mixing angle. We have developed a new technique for measuring tune-out wavelengths, which should improve our knowledge of many matrix elements by an order of magnitude or more. We hope that this will support new generations of precision atomic measurements.

"Origin of Long Lifetime of Band-Edge Charge Carriers in Organic-Inorganic Lead Iodide Perovskites"
Tianran Chen, UVA-Physics
Long carrier lifetime is what makes hybrid organic-inorganic perovskites high-performance photovoltaic materials. Several microscopic mechanisms behind the unusually long carrier lifetime have been proposed, such as the formation of large polarons, the Rashba effect, ferroelectric domains, and photon recycling. Here, we show that the screening of band-edge charge carriers by rotation of the organic cation molecules can be a major contribution to the prolonged carrier lifetime. Our results reveal that the band-edge carrier lifetime increases when the system enters from a phase with lower rotational entropy into another phase with higher entropy. These results imply that the recombination of the photo-excited electrons and holes is suppressed by the screening, leading to the formation of polarons and thereby extending the lifetime. Thus, searching for organic-inorganic perovskites with high rotational entropy over a wide range of temperature may be a key to achieving superior solar cell performance.

"Making Neutrino Astronomy Real"
John Beacom, Ohio State [Host: Craig Group]
For over 50 years, people have been discussing the promise of high-energy neutrino astronomy. For most of that time, wholesale theoretical conjecture was unmatched by even a trifling return of measured experimental facts. Then in 2013, the IceCube neutrino observatory discovered astrophysical neutrinos with energies up to fifteen orders of magnitude above those of visible light. What does this mean for understanding astrophysical sources, the properties of neutrinos, and the contents of the Universe?
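To make the tune-out condition from the listing above concrete (a schematic, standard second-order result rather than anything specific to that talk): the light shift of the ground state $|g\rangle$ is $\Delta E=-\tfrac{1}{4}\alpha(\omega)E_0^2$, with dynamic polarizability
$$\alpha(\omega)=\frac{2}{\hbar}\sum_{k}\frac{\omega_{kg}\,\left|\langle k|\hat{d}\cdot\hat{\epsilon}|g\rangle\right|^{2}}{\omega_{kg}^{2}-\omega^{2}},$$
so a tune-out wavelength is a laser frequency $\omega$ at which the positive and negative contributions from different transitions cancel and $\alpha(\omega)=0$. Its position is therefore set by ratios of dipole matrix elements, which is why measuring it pins those matrix elements down.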
"Probing Molecular Dynamics from Within using FELs" Nora Berrah , University of Connecticut Short x-ray pulses from free electron lasers (FELs) open a new regime for all scientific research. The first x-ray FEL, the Linac Coherent Light Source (LCLS) at the SLAC National Laboratory on the Stanford campus, provides intense short pulses that allow the investigation of ultrafast non-linear and multi-photon processes, including time-resolved investigations in molecules. We will report on the femtosecond response of molecules to the ultra-intense, ultrafast x-ray radiation from FELs as well as on time-resolved investigation using x-ray pump-x-ray probe techniques. "Negative resistance and other wonders of viscous electronics in graphene" Gregory Falkovich , Weizmann Institute Quantum-critical strongly correlated systems feature universal collision-dominated collective transport. Viscous electronics is an emerging field dealing with systems in which strongly interacting electrons flow like a fluid. We identified vorticity as a macroscopic signature of electron viscosity and linked it with a striking macroscopic DC transport behavior: viscous friction can drive electric current against an applied field, resulting in a negative resistance, recently measured experimentally in graphene. I shall also describe current vortices, expulsion of electric field, conductance exceeding the fundamental quantum-ballistic limit and other wonders of viscous electronics. Strongly interacting electron-hole plasma in high-mobility graphene affords a unique link between quantum-critical electron transport and the wealth of fluid mechanics phenomena. http://www.nature.com/nphys/journal/vaop/ncurrent/full/nphys3667.html https://arxiv.org/abs/1607.00986 "[CANCELED] Engineering Quantum Thermal Machines" Adolfo Del Campo , University of Massachusetts Quantum thermodynamics has emerged as an interdisciplinary research field in quantum science and technology with widespread applications. Yet, the identification of scenarios characterized by quantum supremacy -a performance without match in the classical world- remains challenging. In this talk I shall review recent advances in the engineering and optimization of quantum thermal machines. I will show that nonadiabatic many-particle effects can give rise to quantum supremacy in finite-time thermodynamics [1]. Tailoring such nonadiabatic effects by making use of shortcuts to adiabaticity, quantum heat engines can be operated at maximum efficiency and arbitrarily high output power [2]. A thermodynamic cost of these shortcuts will be elucidated by analyzing the full work distribution function and introducing a novel kind of work-energy uncertainty relation [3]. I shall close by discussing the identification of scenarios with a quantum-enhanced performance in thermal machines run over many cycles [4]. [1] J. Jaramillo, M. Beau, A. del Campo, New J. Phys. 18, 075019 (2016). [2] M. Beau, J. Jaramillo, A. del Campo, Entropy 18, 168 (2016). [3] K. Funo, J.-N. Zhang, C. Chatou, K. Kim, M. Ueda and A. del Campo, Phys. Rev. Lett, 118, 100602 (2017). [4] G. Watanabe, B. P. Venkatesh, P. Talkner and A. del Campo, Phys. Rev. Lett. 118, 050601 (2017). 
The Rotunda, Dome Room
"Storage at the Threshold: Li-ion Batteries and Beyond"
George Crabtree, Joint Center for Energy Storage Research (JCESR), Argonne National Laboratory and University of Illinois at Chicago [Host: Bellave Shivaram]
The high energy density and low cost of lithium-ion batteries have created a revolution in personal electronics through laptops, tablets, smart phones and wearables, permanently changing the way we interact with people and information. We are at the threshold of similar transformations in transportation to electric cars and in the electricity grid to renewable generation, smart grids and distributed energy resources. Many aspects of these transformations require new levels of energy storage performance and cost that are beyond the reach of Li-ion batteries. Next-generation, beyond-Li-ion batteries and their potential to meet these performance and cost thresholds will be analyzed.
George Crabtree, Elizabeth Kocs and Lynn Trahey, The energy-storage frontier: Lithium-ion batteries and beyond, MRS Bulletin 40, 1067 (2015)
Bio: George Crabtree is Director of the Joint Center for Energy Storage Research (JCESR) at Argonne National Laboratory and Professor of Physics, Electrical, and Mechanical Engineering at the University of Illinois at Chicago (UIC). He has wide experience in next-generation battery technology and integrating energy science, technology, policy and societal decision-making. He has led workshops for the Department of Energy on energy science and technology, is a member of the National Academy of Sciences and has testified before the U.S. Congress.

"A New Way to Look at the Sky"
Kirsten Tollefson, Michigan State University
The High Altitude Water Cherenkov (HAWC) Gamma-ray Observatory was completed in March 2015 and is now giving us a new view of the sky. HAWC is a continuously operating, wide field-of-view observatory sensitive to 100 GeV – 100 TeV gamma rays and cosmic rays. It is 15 times more sensitive than previous-generation extensive air shower gamma-ray instruments. It serves as a "finder" telescope and monitors the same sky as gamma-ray satellites (Fermi), gravitational-wave detectors (LIGO) and neutrino observatories (IceCube), allowing for multi-wavelength and multi-messenger observations. HAWC hopes to answer questions such as "what is dark matter?" and "where do cosmic rays come from?" by observing some of the most violent processes in our Universe. I will present highlights from HAWC's first year of operation.

"Controlling cell size and DNA replication in bacteria - insights from mathematical modeling"
Ariel Amir, Harvard University
Understanding how cells control and coordinate the various ongoing cellular processes, such as DNA replication, growth and division, is an outstanding fundamental problem in biology. Remarkably, bacterial cells may divide faster than their chromosomes replicate, implying that cells maintain multiple rounds of chromosome replication, and that tight control over DNA replication must be in place. I will show how ideas from statistical mechanics and mathematical modeling can serve as alternative "microscopes" into this problem. Our results suggest that both cell size and chromosome replication may be simultaneously regulated by following a simple control mechanism, in which, effectively, a constant volume is added between two DNA replication initiation events.
This model elucidates the experimentally observed correlations between various events in the cell cycle, and explains the exponential dependence of cell size on the growth rate, as well as recent experiments in which cell morphology is perturbed.
http://amir.seas.harvard.edu/

"Ice Fishing for Neutrinos at the South Pole"
Francis Halzen, UW-Madison
The IceCube project at the South Pole has melted eighty-six holes over 1.5 miles deep in the Antarctic icecap for use as astronomical observatories. The project recently discovered a flux of neutrinos reaching us from the cosmos, with energies more than a million times those of the neutrinos produced at accelerator laboratories. These neutrinos are astronomical messengers from some of the most violent processes in the universe: giant black holes gobbling up stars in the heart of quasars and gamma-ray bursts, the biggest explosions since the Big Bang. In a special public lecture, brought to you by the departments of physics and astronomy and by NRAO, Francis Halzen, Gregory Breit Professor and Hilldale Professor of Physics at UW-Madison and the principal investigator of IceCube, will tell the story of the IceCube telescope and discuss highlights from recent scientific results.

"Manipulating atoms with light: from spectroscopy to atomtronics"
Bill Phillips, NIST [Host: Sanjay Khatri - OSA Student Chapter]
Physicists have used light and its polarization to elucidate the internal state of atoms since the 19th century. Early in the 20th century, the momentum of light was used to change the center-of-mass motion of atoms. The latter part of the 20th century brought optical pumping, coherent laser excitation, and laser cooling and trapping as tools to manipulate both the internal and external states of atoms. Atom optics techniques like diffraction of atoms from light provided the elements needed for atom-wave interferometers. Bose-Einstein condensation created atomic samples having laser-like deBroglie-wave coherence. Now, in the 21st century, the circulation of superfluid atoms in ring-shaped structures enables "atomtronic" circuitry, an atomic analog of superconducting electric circuits. We observe persistent flow of atoms in toroidal traps, and can introduce a weak link (a kind of Josephson junction) that allows control of the quantized circulation of atoms.

"From Chirps to Jets: The extreme world of Black Holes and Neutron Stars"
Francois Foucart, Lawrence Berkeley National Lab
Black holes and neutron stars are extraordinary astrophysical laboratories. They allow us to test the laws of gravity and nuclear physics in extreme environments which cannot be reproduced on Earth. In this talk, I will discuss efforts to model these compact objects in two classes of astrophysical systems: mergers of black hole-neutron star and neutron star-neutron star binaries, and accretion disks around supermassive black holes. The first are powerful sources of gravitational waves, and emit bright electromagnetic transients. In the advanced gravitational wave detector era, they will provide us with new information about general relativity, the properties of matter above nuclear density, and the population of black holes and neutron stars. The second will soon be imaged by the Event Horizon Telescope with enough accuracy to resolve the horizons of two black holes, and to study the behavior of the nearly collisionless plasma accreting onto them.
I will in particular focus on numerical simulations using general relativistic codes, which will play a crucial role in our interpretation of these upcoming observations.

"Quantum alchemy for the 21st century: accessing new horizons of quantum many-body dynamics through periodic driving"
Mark Rudner
Recent work on topological materials has revealed a wide variety of intriguing phenomena that may arise when particles move in "non-trivial" bands. At the same time, modern advances in experimental capabilities for controlling electronic, atomic, and optical systems open new possibilities for dynamically controlling the behaviors of a range of quantum systems. In this talk I will review the basic ideas behind topological band theory, and then explain how periodic driving can be used to gain dynamical control over the topological properties of quantum matter. In the driven case, intriguing new types of robust non-equilibrium topological phenomena emerge. To illustrate, I will show how the combination of driving, topology, and interactions can bring about a new regime of universal quantized transport, and discuss potential near-term experimental realizations.
**THE RECEPTION WILL BE HELD AT 3:00 PM IN ROOM 313**

"The Past, Present, and Future of 21cm Cosmology"
Adrian Liu, UC Berkeley
In the next few years, low-frequency radio telescopes will use the 21 cm line of neutral hydrogen to make unprecedentedly large maps of our observable Universe. These will provide exquisite constraints on the properties of the first stars and galaxies. Along these lines, I will review recent results from the Precision Array for Probing the Epoch of Reionization (PAPER) experiment, which have begun to shed light on heating processes in the early universe. I will also discuss how comparing theory and observations will become difficult as one enters the regime of "big data" and theoretical models become increasingly complicated. I will describe how machine learning techniques make such comparisons computationally feasible. Finally, I will discuss the recently commenced Hydrogen Epoch of Reionization Array (HERA) experiment, including its forecasted ability to constrain fundamental parameters such as the neutrino mass. Looking to the future, I will highlight additional opportunities to constrain cosmology and particle physics using the 21 cm line.

"Progress and challenges in designing a universal Majorana quantum computer"
Torsten Karzig, Station Q, UCSB
I will discuss a promising design proposal for a scalable topological quantum computer. The qubits are envisioned to be encoded in aggregates of four or more Majorana zero modes, realized at the ends of topological superconducting wire segments that are assembled into superconducting islands with significant charging energy. Quantum information can be manipulated according to a measurement-only protocol, which is facilitated by tunable couplings between Majorana zero modes and nearby semiconductor quantum dots. The key virtue of the proposed architecture is its modular and scalable design and a natural suppression of quasiparticle poisoning by charge protection. In the second part of the talk I will comment on the importance of elevating these designs to full quantum universality by so-called magic state injection. The latter relies on a high-fidelity source of specific quantum states, and I will point out some of the ideas and challenges for providing them.
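To unpack the qubit encoding mentioned in the listing above, here is the standard textbook construction rather than the specifics of the proposal: four Majorana operators $\gamma_1,\dots,\gamma_4$ with $\gamma_i^\dagger=\gamma_i$ and $\{\gamma_i,\gamma_j\}=2\delta_{ij}$ make up two ordinary fermionic modes. Once the island's overall fermion parity is frozen by the charging energy, the remaining two-fold degeneracy stores one qubit, with, for example,
$$Z=i\gamma_1\gamma_2,\qquad X=i\gamma_2\gamma_3,\qquad Y=i\gamma_1\gamma_3$$
acting as the Pauli operators (one can check $ZX=iY$ and the cyclic relations directly from the anticommutation rules). Measuring joint parities of pairs, i.e. operators of the form $i\gamma_i\gamma_j$, is precisely the primitive that a measurement-only protocol is built from.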
"Gravitational waves from binary black holes across the spectrum" Michele Vallisneri , Jet Propulsion Laboratory, Caltech On September 14, 2015, the two LIGO detectors simultaneously observed a transient gravitational-wave signal, which was named GW150914. The signal fit very precisely the general-relativistic prediction for the inspiral, merger, and ringdown of a pair of stellar-mass black holes, with component masses greater than was thought possible in standard evolution scenarios. This was the first direct detection of gravitational waves and the first observation of a binary black-hole merger. I describe the mechanics and behind-the-scenes of the detection, and its implications for astrophysics and fundamental physics. Two additional black-hole binaries were detected in LIGO's first observing run, and more are expected from current data taking. At the low-frequency side of the gravitational-wave spectrum, signals from massive black-hole binaries are targeted by the space-based observatory LISA, now on track for launch in the early 2030s, and by pulsar-timing arrays, with a positive detection expected in ten years. I discuss the science case, prospects, and requirements of these programs. "Bosonic Symmetry Protected Topological States: Theory, Numerics, and Experimental Platform" Yi-Zhuang You , Harvard University Topological phases of matter is an active research area of condensed matter physics. Among various topics, the bosonic symmetry protected topological (BSPT) states have attracted enormous theoretical interest in the last few years. BSPT states are bosonic analogs of topological insulators. The Haldane phase of spin-1 chains is one famous example. I will talk about our recent proposal to realize two-dimensional BSPT states in the twisted bilayer graphene with strong magnetic field, as well as numerical simulations of the lattice model in various parameter regimes. The proposed BSPT state is a quantum spin Hall insulator with bosonic boundary modes only. The bosonic modes are spin and charge collective excitations of electrons. The quantum phase transition between the topological and the trivial phases happens by closing the gap of bosonic modes in the bulk, without closing the single particle gap of electrons, which is fundamentally different from all the well-known topological transitions in free fermion topological insulators. On the theory side, the phase transition is related to topics of deconfined criticality and duality of (2+1)D conformal field theories. The theoretical, numerical and experimental studies will deepen our understanding of quantum phase transitions. " Probing Extreme Gravity with Black Holes and Neutron Stars" Kent Yagi , Princeton University Black holes and neutron stars are extremely compact astrophysical objects that are produced after the death of very massive stars. Due to their large compactness and population, such compact objects offer us excellent testbeds for probing fundamental physics. In this talk, I will focus on probing extreme (strong and dynamical-field) gravity that was previously inaccessible. Regarding black hole based tests of gravity, I will explain how stringently one can probe various fundamental pillars in General Relativity with the recently-discovered gravitational wave events. 
Regarding neutron star based tests of gravity, I will use approximate universal relations ("I-Love-Q relations") among certain neutron star observables that are almost insensitive to the unknown stellar internal structure, and describe how one can probe extreme gravity by combining future gravitational wave and binary pulsar observations. I will conclude with a summary of important future directions. "Superconductivity from repulsion" Andrey Chubukov , University of Minnesota [Host: Genya Kolomeisky] In my talk, I review recent and not so recent works aiming to understand whether a nominally repulsive Coulomb interaction can by itself give rise to superconductivity. I discuss a generic scenario of the pairing by electron-electron interaction, put forward by Kohn and Luttinger back in 1965, and modern studies of the electronic mechanisms of superconductivity in the lattice systems which model cuprates, Fe-pnictides, and doped graphene. I show that the pairing in all three classes of materials can be viewed as a lattice version of Kohn-Luttinger physics, even though the pairing symmetries are different. I discuss under what conditions the pairing occurs and rationalize the need to do renormalization-group studies. I also discuss the interplay between superconductivity and density-wave instabilities. "The Art of Metal Joining and How It's Used at NASA's Michoud Assembly Facility" Renee Horton , NASA Metal joining is a controlled process used to fuse metals. There are several techniques of metal joining, of which friction stir welding is one of the more basic forms. Friction stir welding is an innovative weld process that continues to grow in use, in the commercial, defense, and space sectors. It produces high-quality and high-strength welds in aluminum alloys. The process consists of a rotating weld pin tool that plasticizes material through friction. The plasticized material is welded by applying a high weld forge force through the weld pin tool against the material during pin tool rotation. Self-reacting friction stir welding (SR-FSW) is one variation of the FSW process developed at the National Aeronautics and Space Administration (NASA) for use in the fabrication of propellant tanks and other areas used on the Space Launch System (SLS). NASA's SLS is an advanced, heavy-lift launch vehicle which will provide an entirely new capability for science and human exploration beyond Earth's orbit. The SLS will give the nation a safe, affordable and sustainable means of reaching beyond our current limits and open new doors of discovery from the unique vantage point of space. "Bouncing" Anna Ijjas , Princeton University In this talk, I will focus on cosmologies that replace the big bang with a big bounce. I will explain how, in these scenarios, the large-scale structure of the universe is determined during a contracting phase before the bounce and will describe the recent development of the first well-behaved classical (non-singular) cosmological bounce solutions. "Searching for Dark Matter in Gravitational Waves" Ilias Cholis , Johns Hopkins University The nature of dark matter is one of the most longstanding and puzzling questions in physics. With cosmological measurements we have been able to measure its abundance with great precision. Yet, what dark matter is composed of remains a mystery. In 2016, the first-ever observation of gravitational waves from the coalescence event of two black holes was achieved by the LIGO interferometers.
Together with my collaborators we recently advocated that the interactions of 30-solar-mass primordial black holes composing the dark matter could explain this event. This opens up a new window in indirect searches for dark matter. In my talk, I will discuss the various probes that can distinguish these mergers of primordial black holes from the more traditional astrophysical black hole binaries. One is through their mass spectrum, another is through cross-correlation of gravitational events with future overlapping galaxy catalogs. A third is through their contribution to the stochastic gravitational wave background. Finally, a fourth probe uses the fact that primordial black holes form binaries with highly eccentric orbits. Those will then merge on timescales that in some cases are years, days or even minutes, retaining some eccentricity in the last seconds before the merger, which can be detected by LIGO and future ground-based interferometers. "Physics of Imperfect Graphene" Eva Andrei , Rutgers University Graphene in its pristine form has transformed our understanding of 2D electron systems, leading to fundamental discoveries and to the promise of important applications. I will discuss new and surprising phenomena that emerge when the perfect honeycomb lattice of graphene is disrupted. In particular I will focus on the effects of single atom vacancies on graphene's electronic and magnetic properties as revealed by scanning tunneling microscopy and spectroscopy. These include charging the vacancy site into the supercritical regime where we observe the formation of an artificial 2D atom [1], and electrostatically controlled Kondo screening of the vacancy magnetic moment. [1] J. Mao, Y. Jiang, D. Moldovan, G. Li, K. Watanabe, T. Taniguchi, M. R. Masir, F. M. Peeters, and E. Y. Andrei, "Tunable Artificial Atom at a Supercritically Charged Vacancy in Graphene", Nature Physics (2016), doi:10.1038/nphys3665 "Physics of the Dark Energy Survey (TBC)" Marcelle Soares-Santos , Fermilab DES is an ongoing imaging sky survey, the largest such survey to date. Its main science goal is to shed light onto dark energy by making precision measurements of the expansion history and growth of structure in the universe. In this talk I present an overview of our latest results and introduce a new DES initiative: searches for optical counterparts to gravitational wave events. Special Nobel Lecture The Paramount, Room The Auditorium "The Accelerating Universe" Adam Riess , Johns Hopkins University A public lecture by Adam Riess, recipient of the 2011 Nobel Prize for the detection of the accelerating expansion of the universe using distant supernovae. The lecture will be presented at 7 pm on Wednesday, November 9, at the Paramount Theater on Charlottesville's Downtown Mall. The Physics and Astronomy Departments at the University of Virginia in partnership with the National Radio Astronomy Observatory invite the community to a special FREE public lecture by Nobel Laureate Adam Riess at The Paramount Theater on Wednesday, November 9 at 7:00PM. Prof. Riess will speak on the fascinating topic of the accelerating universe. Due to his critical contributions, Prof. Riess shared the 2011 Nobel Prize "for the discovery of the accelerating expansion of the Universe through observations of distant supernovae." The term "supernovae" refers to stars exploding at the end of their lives. The team Prof. Riess worked with used a particular kind of supernova, called a type Ia supernova, to understand the properties of distant galaxies.
His research team found that light from distant supernovae was weaker than expected ‐ this was a sign that the expansion of the Universe was accelerating. Prof. Riess will discuss the excitement of this discovery, its implications, and current ongoing work to help answer remaining questions. https://conference.phys.virginia.edu/indico/event/1/page/14-the-accelerating-universe-public-lecture-by-nobel-laureate-adam-riess "Resolution of the black hole information paradox" Samir Mathur , Ohio State Some 40 years ago Hawking found a remarkable contradiction: if we accept the standard behavior of gravity in regions of low curvature, then the evolution of black holes will violate quantum mechanics. Resolving this paradox would require a basic change in our understanding of spacetime and/or quantum theory. This paradox has found an interesting resolution through string theory. While quantum gravity is normally expected to be important only at distances of order planck length, the situation changes when a large number N of particles are involved, as for instance in the situation where we make a large black hole. Then the length scale of quantum gravity effects grows with N, altering the black hole structure to a "fuzzball"; this effect resolves the paradox. "Nuclear Weapons: Sources of Strength or Vulnerability? " Aron Bernstein , MIT [Host: Gordon Cates] An objective overview of the nuclear arms race will be presented with an emphasis on the present situation. A brief sketch of how nuclear weapons work and some ironic lessons from history will be presented. Scientists' discussions about preventing proliferation and use started in the secrecy of the Manhattan Project and continued in public during the rapid cold war buildup to the present (e.g., Bulletin of the Atomic Scientists). The central role of the nuclear non-proliferation treaty, the Iran agreement, possible pathways to nuclear conflict, and a personal view of the outlook to prevent future nuclear weapons use, including the vital role of education, will be presented. Aron Bernstein is Professor of Physics, Emeritus, MIT. His physics research has focused on experimental tests of the symmetries of the standard model (chiral anomaly and symmetry). He has followed the nuclear arms race carefully since the Cuban Missile Crisis, has taught courses on this subject, and has done research on arms control issues such as the dangers posed by the Russian and US short ballistic missile launch and warning times. He is a National Board Member of the Council for a Livable World, started by physicist Leo Szilard, which works with Congress on nuclear arms control issues. "Electron Circular Dichroism and the Origin of Life On Earth" Tim Gay , University of Nebraska We have bombarded chiral halocamphor molecules in the gas phase with low-energy (< 1 eV), longitudinally-spin-polarized electrons, and investigated dissociative electron attachment (DEA) reactions: e- + HA → H- + A, where H is a halogen atom (Br or I) and A is the residual camphor fragment. We observe that for a given target handedness, the total DEA cross section depends on the helicity of the incident electron. In the case of iodocamphor at the lowest incident electron energies, this effect can be as large as two parts in 1000. The observation of chiral sensitivity in a break-up reaction is important because, among other things, it validates the premise of the Vester-Ulbricht hypothesis regarding the origins of biological homochirality. 
The sordid history of previous attempts to demonstrate such effects will be briefly reviewed. "The Remarkable Story of LIGO's Detection of Gravitational Waves" Peter Shawhan , University of Maryland On February 11, LIGO scientists announced the direct detection of gravitational waves, confirming a century-old prediction of Einstein's general theory of relativity. This milestone was finally made possible with the incredibly sensitive Advanced LIGO detectors, combined with a certain measure of luck. This first event is already enough to investigate the properties of the source, test the theory of gravity, and project what more we can learn from future events. I will share both the scientific meaning of the discovery and some of the personal stories behind it. "30 years of high Tc: Superfluid and normal-fluid densities in the cuprate superconductors" David Tanner , University of Florida It was in April 1986 when Bednorz and Mueller of the IBM Zürich laboratories sent a paper about "possible high-Tc superconductivity" to Zeitschrift für Physik B. The resulting bombshell changed condensed-matter physics forever. Experimenters and theorists developed methods to measure and calculate in ways that were much improved over prior years. However, despite 30 years of intense study, the description of these materials remains incomplete. I'll discuss the discovery of the high Tc cuprates from the perspective of a participant. I'll then turn to what infrared spectroscopy can tell us about their properties. Measurements of optical reflectance over a wide spectral range (far-infrared to ultraviolet) for a number of cuprate families have been analyzed using Kramers-Kronig analysis to obtain the optical conductivity, σ(ω), and (by integration of the real part of the conductivity) the spectral weight of low- and mid-energy excitations. For the Kramers-Kronig analysis to give reliable results, accurate high-frequency extrapolations, based on x-ray atomic scattering functions, were used. When the optical conductivities of the normal and superconducting states are compared, a transfer of spectral weight from finite frequencies to the zero-frequency delta-function conductivity of the superconductor is seen. The strength of this delta function gives the superfluid density, ρ_s. There are two ways to measure ρ_s: from the low-energy spectral weight or from the imaginary part, σ_2(ω); both estimates show that 98% of the ab-plane superfluid density comes from low energy scales, below about 0.15 eV. Moreover, there is a notable difference between clean metallic superconductors and the cuprates. In the former, the superfluid density is essentially equal to the conduction electron density. The cuprates, in contrast, have only about 20% of the ab-plane low-energy spectral weight in the superfluid. The rest remains in finite-frequency, mid-infrared absorption. In underdoped materials the superfluid fraction is even smaller. The consequences of this observation for the electronic structure will be addressed. "How many electrons make a semiconductor nanocrystal film metallic?" Boris Shklovskii , Univ. Minnesota [Host: Eugene Kolomeisky] Films of semiconductor nanocrystals are used as novel, low-cost electronic materials for optoelectronic devices. To achieve their full potential, a better understanding of their conductivity as a function of the concentration of donors is required. So far, it is not known how many donors will make a nanocrystal film metallic.
In bulk semiconductors, the critical concentration of electrons at the metal-insulator transition is universally described by the famous Mott criterion. We show theoretically that in a dense NC film, where NCs touch each other by small facets with radius r << d, the critical concentration of electrons N at the metal-insulator transition satisfies the condition N r^3 = 1. This critical concentration is typically 100 times larger than the Mott one. In the accompanying experiments, we investigate the conduction mechanism in films of phosphorus-doped silicon nanocrystals. At the largest electron concentration achieved in our samples, which is half the predicted N, we find that the localization length of hopping electrons is close to three times the nanocrystal diameter, indicating that the film approaches the metal-insulator transition. "How to Understand Molecular Transport through Channels: The Role of Interactions" Anatoly Kolomeisky , Rice University The motion of molecules across channels and pores is critically important for understanding mechanisms of many biological, chemical, physical and industrial processes. Here we investigate the role of different types of interactions in the channel-facilitated molecular transport by analyzing exactly solvable discrete-state stochastic models. According to this approach, the channel transport is a non-equilibrium process that can be viewed as a set of coupled quasi-chemical transitions between discrete spatially separated states. It allows us to obtain a full dynamic description of the translocation through the pore, clarifying many aspects of these complex processes. We show that the strength and the spatial distribution of the molecule/channel interactions can strongly modify the particle fluxes through the system. Our analysis indicates that optimal transport is achieved when the binding sites are near the entrance or near the exit of the pore, depending on the sign of the interaction potentials. These observations allow us to explain current single-molecule experiments on the translocation of polypeptides through biological channels. We also suggest that the intermolecular interactions during the channel transport might significantly influence the overall translocation dynamics. Our explicit calculations show that the increase in the flux can be observed for some optimal interaction strengths. But the flux can also be fully suppressed under some conditions. The relevance of these results for biological systems is discussed. The physical-chemical mechanisms of these phenomena are analyzed from the microscopic point of view. Special Colloquium: Hoxton Lecture "Impact and Intrusion: the surprises and elegance of how nature arranges the texture of our lives" Sidney R. Nagel , University of Chicago Many complex phenomena are so familiar that we hardly realize that they defy our normal intuition. Examples include the anomalous flow of granular material, the long messy tendrils left by honey spooned from one dish to another, the pesky rings deposited by spilled coffee on a table after the liquid evaporates, or the common splash of a drop of liquid onto a countertop. Aside from being uncommonly beautiful to see, many of these phenomena involve non-linear behavior where the system is far from equilibrium. Although most of the world we know is beyond description by equilibrium theories, we are still only at the threshold of learning how to deal with such deep and complex behavior.
Thus, these are phenomena that can lead the inquisitive into new realms of physics. Joint Physics-Astronomy Colloquium "Seeds of Supermassive Black Holes at High Redshifts" Isaac Shlosman , University of Kentucky Detection of distant quasars at redshifts of ~6-7 provides a challenge to the standard picture of structure formation in the universe within the hierarchical framework, as the universe is less than a Gigayear old at this time. What are seeds of supermassive black holes (SMBHs) that power these luminous objects? After all, massive objects should form late in the evolution ... Did SMBHs form as a result of stellar evolution? In my talk, I will address various aspects of this problem and discuss viable and emerging alternatives to this paradigm. "Do Electrons in a Metal Have the Same Charge as Free Electrons in Vacuum?" Neil Zimmerman , NIST [Host: Jongsoo Yoon] At NIST, we have the enjoyable jobs of combining i) state-of-the-art research with ii) the fascinating search for higher and higher accuracy measurements. In this talk, I will i) introduce the SI system of units and explain why devices that can move around single electrons one-by-one are of great interest, ii) give an introduction to the nanoelectronic devices known as single electron transistors and pumps, iii) describe a high-accuracy measurements of the charge of electrons in a metal, and iv) discuss the possibility that this charge is not the same as the charge of a free electron in vacuum. Wilsdorf Hall, Room 200 "Opportunities for Collaboration at Oak Ridge National Laboratory" Ian Anderson , ORNL "Golden Era of Modern Magnetism " Prof. Chia Ling Chien , Johns Hopkins University Magnetism has been an old subject dating back to antiquity. Few could envision that modern magnetism would enter a golden era with the realization of so many new phenomena and game-changing technologies. These remarkable advances are due to spin ½ of electrons, as illustrated in this talk through several recent examples, including pure spin current phenomena, skyrmion materials, and p-wave superconductivity. Maury Hall, Room 209 "Rediscovering Pluto: a panel on NASA's recent Pluto Mission (New Horizons)" Alice Bowman and Anne Verbiscer [Host: UVA Physics and Astronomy Departments] On July 14, 2015, New Horizons, an interplanetary space probe, flew by Pluto, capturing the first high resolution photographs of Pluto's surface and atmosphere. New Horizons data has revealed stunning geology including ice volcanoes, Nitrogen ice glaciers, mysterious internal heating, and the "snakeskin" patterns among 50 other discoveries. UVA's planetary astronomer, Anne Verbiscer, will describe Pluto's amazing science phenomena, and Alice Bowman will explain the missions and engineering side of New Horizons sharing her experience as New Horizons' MOM. This event is meant for the general public, and everyone is invited. A reception will follow the talk. Sponsored by UVA Physics and Astronomy Departments. Organized by Astronomical Society at UVA and SPS A map showing the location of Maury Hall is available online at the following address: http://www.virginia.edu/webmap/popPages/55-MauryHall.html "Atom Interferometry Measurements of Atomic Polarizabilities and Tune-out Wavelengths" Alex Cronin , University of Arizona Atom interferometry, in which de Broglie waves of matter are coherently split and later recombined to make interference fringes, is a precision measurement method with applications in many fields of physics. 
In Arizona, we measured the ground-state static electric-dipole polarizabilities of Cs, Rb, and K atoms with 0.2% uncertainty using an atom beam interferometer. We also measured a tune-out wavelength for K atoms with sub-picometer uncertainty. I will discuss how these experiments use electric field gradients to induce polarizability-dependent phase shifts for atomic de Broglie waves. Our measurements provide benchmark tests for atomic structure calculations and thus test the underlying theory used to predict van der Waals forces and to interpret atomic parity non-conservation experiments. "Physics Informed Machine Learning" Michael Chertkov , LANL Machine Learning (statistical engineering) capabilities are in a phase of tremendous growth. Underlying these advances is a strong and deep connection to various aspects of statistical physics. There is also a great opportunity in pointing these tools toward physical modeling. In this colloquium I illustrate the two-way flow of ideas between physics and statistical engineering with three examples from our team at LANL. First, I review the work on structure learning and statistical estimation in power system distribution (thus physical) networks. Then I describe recent progress in constructive understanding of graph learning (using the example of the inverse Ising model), illustrating that the generic inverse task (of learning) is computationally easy in spite of the fact that the direct problem (inference or sampling) is difficult. I conclude by speculating about how macro-scale models of physics (e.g. large eddy simulations of turbulence) can be learned from micro-scale simulations (e.g. of the Navier-Stokes equations). https://sites.google.com/site/mchertkov/ "Metamagnetism - its ubiquity and universality" Bellave Shivaram , University of Virginia - Physics Dept. [Host: Vittorio Celli] The emergent universality in the nonlinear magnetic response of itinerant metamagnets will be discussed. Recent experimental work on heavy fermions, Hund's metals, and single molecule magnets will be presented. The appeal of the 'single energy scale model' developed in the context of these new measurements [(a) "Universality in the Nonlinear Magnetic Response of Strongly Correlated Metals", B.S. Shivaram, D.G. Hinks, M.B. Maple and P. Kumar, Phys. Rev. B 89, 241107 (Rapid Communication) (2014); (b) "Metamagnetism and the Fifth Order Susceptibility in UPt3", B.S. Shivaram, Brian Dorsey, D.G. Hinks and Pradeep Kumar, Phys. Rev. B 89, 161108 (Rapid Communication) (2014); (c) "High Field Ultrasound Measurements in UPt3 and the Single Energy Scale Model of Metamagnetism", B.S. Shivaram, V.W. Ulrich, P. Kumar and V. Celli, Phys. Rev. B 91, 115110 (2015)] will be critically examined. "Tailoring properties of single-layer and bilayer transition metal dichalcogenides: looking beyond graphene*" Talat Rahman , University of Central Florida Single layers of molybdenum disulfide (MoS2) and other transition metal dichalcogenides appear to be promising materials for next-generation nanoscale applications (optoelectronics and catalysis) because of their low dimensionality, an intrinsic direct band gap which typically lies in the visible spectrum, and strikingly large binding energies for excitons and trions. Several experimental groups have already reported novel electronic and transport properties which place these materials beyond graphene for device applications. MoS2 is also known to be a leading hydrodesulphurization catalyst.
Efforts are underway to further tune these properties through alloying, defects, doping, coupling to a substrate, and formation of bilayer stacks (homo- and hetero-structures). In this talk I will present results [1-3] which provide a framework for manipulating the functionality of these fascinating materials and take us closer to the goal of rational material design. My emphasis will be on properties of pure and defect-laden single layer MoS2 with and without underlying support. I will also provide rationale for the differences in the excitation energetics and ultrafast charge dynamics in single and bilayer (hetero and homo) dichalcogenides. [1] D. Sun, et al., "An MoSx Structure with High Affinity for Adsorbate Interaction," Angew. Chem. Int. Ed. 51, 10284 (2012). [2] D. Le, T. B. Rawal, and T. S. Rahman, "Single-Layer MoS2 with Sulfur-Vacancies: Structure and Catalytic Application," J. Phys. Chem. C 118, 5346 (2014). [3] A. Ramirez-Torres, V.Turkowski, and T. S. Rahman, "Time-dependent density-matrix functional theory for trion excitations: application to monolayer MoS2," Phys. Rev. B 90, 085419 (2014). "The Physics of Climate Change" Michael Mann , Pennsylvania State Univ. [Host: GPSA] I will review the basic underlying science of climate and climate change, including physically-based models of Earth's climate. I will motivate the use of a simple, zero-dimensional "Energy Balance Model" of Earth's radiative balance that can be used to estimate the global mean surface temperature of Earth. I will show how this model successfully reproduces the observed historical changes in global temperature, and how it can be used to assess various questions about future human-caused climate change. "Things that go bump in the data: QCD Puzzles, Predictions, and Prognoses" Fred Olness , Southern Methodist University The very successful Run I of the Large Hadron Collider (LHC) culminated in the discovery of the Higgs boson which was the subject of the 2013 Nobel Prize. How will we know if there are other "undiscovered" particles in the data? (And, was there a hint from CERN last month???) This will require improved calculations, and the key ingredients are: i) higher-order theoretical cross section calculations, and ii) precise Parton Distribution Functions (PDFs) that characterize the proton structure. Surprisingly, these predictions are influenced by a wide range of data including precision low-energy nuclear results. Recent theoretical developments improve our ability to address the QCD multi-scale problem and higher orders across the full kinematic range. We look at some of the topics, puzzles, and challenges that lie on the horizon, and identify areas where additional efforts are required. Colloquium: Cancelled due to snow "Building a quantum computer from the top down: massive-scale entanglement in the quantum optical frequency comb" Olivier Pfister , UVA-Physics Quantum computing offers revolutionary promises of scientific and societal importance, based on its exponential speedup of particular tasks: on the one hand, Richard Feynman's quantum simulator would allow us to tackle currently intractable quantum chemistry problems (nitrogen fixation, carbon sequestration) as well as quantum physics ones (high-Tc superconductivity); on the other hand, Peter Shor's algorithm for factoring integers would render RSA encryption obsolete. Building a practical quantum computer demands that one address the challenges of decoherence and scalability. 
While the platforms of trapped-ion qubits and of superconducting qubits have made spectacular progress in the fight against decoherence, our approach has been to tackle scalability, in particular by discovering a new "top down," rather than "bottom up," method for generating the entangled states, or "cluster" states, that enable the particular flavor of quantum computing called measurement-based, or one-way, quantum computing. Our method uses the continuous variables of light — "qumodes," rather than qubits — defined by the quadrature amplitudes of the quantized electromagnetic field emitted over the cavity modes, or quantum optical frequency comb, of a single optical parametric oscillator. We demonstrated a world-record cluster state size of 60 entangled qumodes (3 × 10^3 in progress), all simultaneously available in the frequency domain. I will also present our new proposal for generating cluster states of unlimited size by using both the time and frequency degrees of freedom. "Spatial Imaging of Quarks and Gluons in the Proton" Charles Hyde , Old Dominion University For many years, it was generally thought that spatial imaging of the proton was impossible in principle, due to the relativistic recoil of the proton whenever it is probed at momentum scales of the order of the inverse of the proton size. Two decades ago, a new set of QCD matrix elements was defined, called Generalized Parton Distributions (GPDs), that encode the spatial distributions of quarks and gluons transverse to a preferred momentum axis. It was independently discovered that the GPDs are accessible experimentally in Deep Virtual Exclusive reactions: high energy reactions of the type e p → e p γ. I will review the progress of experimental efforts to measure GPDs, and discuss the extent to which spatial distributions can be extracted from present and future data. "Discovery of Electron Neutrino Appearance from a Muon Neutrino Beam in T2K and Future Outlook for Discovery of CP Violation in Lepton Sector in DUNE at LBNF" Chang Kee Jung , SUNY at Stony Brook Matter-antimatter asymmetry is one of the most outstanding mysteries of the universe that provides a necessary condition for our own existence. There have been various attempts to solve this mystery, including the 'Baryogenesis' hypothesis. However, the B-factory experiments during the last decade showed that the observed CP-violation (CPV) in the quark sector is not big enough for baryogenesis to be a viable solution to the matter-antimatter asymmetry. This leads us to the 'Leptogenesis' hypothesis, in which CPV in the lepton sector plays a critical role in creating the matter-antimatter asymmetry at the onset of the Big Bang. Thus, experimental observation of CPV in the lepton sector could prove to be one of the most important discoveries in our understanding of the universe. In 2011, the T2K experiment published a result that provided the first indication for a non-zero $\theta_{13}$, the last unknown mixing angle in the lepton sector at that time, at the 2.5 sigma level of significance. In 2013, after analyzing two more years of data taken since 2011, the experiment reported "Observation of electron neutrino appearance from a muon neutrino beam" at the 7.3 sigma level of significance.
While neutrino oscillation has been well-established since the discovery by the Super-Kamiokande experiment in 1998, there had not been a definitive observation of neutrino oscillation in a so-called "appearance mode", and this new T2K observation is the first time an explicit appearance of one neutrino flavor (electron) from another neutrino flavor (muon) has been observed. This observation also opens the door to studying CPV in neutrinos. When incorporating recent precision measurements of $\theta_{13}$ by the reactor experiments along with other neutrino oscillation parameter measurements, T2K data show an intriguing initial result on $\delta_{CP}$, which is further corroborated by the Super-Kamiokande atmospheric neutrino results as well as the most recent results from NOvA. Ultimately, however, in order to establish unequivocal results on leptonic CPV, we need a next-generation experiment with a more powerful beam and a larger and/or higher-resolution detector. The Deep Underground Neutrino Experiment (DUNE) in the US, newly established as a truly international collaboration, is such an experiment. The physics goals of DUNE include: discovery of CPV in the lepton sector, determination of the mass hierarchy, discovery of proton decay, and observation of neutrinos from Type-II supernovae. In this talk I will introduce the rapidly developing DUNE experiment and the collaboration, and also discuss possible opportunities for participation. I will also briefly comment on the Nobel Prize in Physics 2015 as well as The Breakthrough Prize in Fundamental Physics 2016 that were given to the neutrino oscillation experiments. "Friction, magnetism and superconductivity: Are they interrelated?" Jackie Krim , North Carolina State University Studies of the fundamental origins of friction have undergone rapid progress in recent years with the development of new experimental and computational techniques for measuring and simulating friction at atomic length and time scales. The increased interest has sparked a variety of discussions and debates concerning the nature of the atomic-scale and quantum mechanisms that dominate the dissipative process by which mechanical energy is transformed into heat. Measurements of the sliding friction of physisorbed monolayers and bilayers in gaseous and liquid environments provide information on the relative contributions of electronic, magnetic, electrostatic and phononic dissipative mechanisms. The experiments will be discussed within the context of current theories of how friction originates at the atomic scale. "A "Rough" View of Friction and Adhesion" Mark Robbins , Johns Hopkins University Friction affects many aspects of everyday life and has played a central role in technology dating from the creation of fire by rubbing sticks together to current efforts to make nanodevices with moving parts. The friction "laws" we teach today date from empirical relationships observed by da Vinci and Amontons centuries ago. However, understanding the microscopic origins of these laws remains a challenge. While Amontons said friction was proportional to load and independent of area, most modern treatments assume that friction is proportional to the real area of contact where atoms on opposing surfaces are close enough to repel. Calculating this area is complicated because elastic interactions are long range and surfaces are rough on a wide range of scales. In many cases they can be described as self-affine fractals from nanometer to millimeter scales.
The talk will first show that this complex problem has a simple solution. Dimensional analysis implies a linear relation between real contact area and load that can explain both Amontons' laws and many exceptions to them. Next the talk will explain why we can't climb walls like spiderman even though the attractive interactions between atoms on our finger tips should provide enough force to support our weight. The talk will conclude by considering how forces in the contact area give rise to friction. Friction shows surprisingly counterintuitive and complex behavior in nanometer to micrometer scale contacts and only a few explanations are consistent with macroscopic measurements. "Broadband Molecular Rotational Spectroscopy for Chemical Dynamics and Molecular Structure" Brooks Pate , UVA - Chemistry [Host: Thomas Gallagher] Until about 2005, molecular rotational spectroscopy was performed using narrowband (~1 MHz) excitation of a low-pressure gas in a resonant cavity. This method offers high sensitivity for each data acquisition, but the time required to perform a spectrum scan over about 10 GHz, needed to capture the rotational spectrum, was a major limitation to applications of the technique. Advances in high-speed digital electronics have made it possible to design spectrometers that offer instantaneous, broadband (> 10 GHz) performance. During our initial work with high-speed arbitrary waveform generators and digitizers (with Tom Gallagher) we developed the method of chirped pulse Fourier transform rotational spectroscopy that uses a pulse with linear chirp to phase-reproducibly excite the gas sample. The subsequent coherent emission (free induction decay) is detected with the high-speed digitizer and the frequency domain spectrum is obtained using FFT analysis. Since the introduction of the technique in 2008 [1], the method has been applied to unimolecular reaction dynamics [2], the structures of molecular clusters [3], and the laboratory identification of molecules in the interstellar medium [4]. The technique has been extended to mm-wave spectroscopy with applications to Rydberg spectroscopy [5], chemical reaction dynamics, and analytical chemistry. The broadband technique has also enabled a new generation of molecular structure studies in the field of chirality [6] with the potential for solving significant challenges for real-time pharmaceutical manufacturing. [1] G.G. Brown, B.C. Dian, K.O. Douglass, S.M. Geyer, and B.H. Pate, "A Broadband Fourier Transform Microwave Spectrometer Based on Chirped Pulse Excitation" Rev. Sci. Instrum. 79, 053103 (2008). [2] B.C. Dian, G.G. Brown, K.O. Douglass, and B.H. Pate, "Measuring Picosecond Isomerization Dynamics via Ultra-broadband Fourier Transform Microwave Spectroscopy", Science 320, 924-928 (2008). [3] C. Pérez, M.T. Muckle, D.P. Zaleski, N.A. Seifert, B. Temelso, G.C. Shields, Z. Kisiel, and B.H. Pate, "Structures of Cage, Prism, and Book Isomers of Water Hexamer from Broadband Rotational Spectroscopy", Science 336, 897-901 (2012). [4] D.P. Zaleski, N.A. Seifert, A.L. Steber, M.T. Muckle, R.A. Loomis, J.F. Corby, O. Martinez, Jr., K.N. Crabtree, P.R. Jewell, J.M. Hollis, F.J. Lovas, D. Vasquez, J. Nyiramahirwe, N. Sciortino, K. Johnson, M.C. McCarthy, A.J. Remijan, and B.H. Pate, "Detection of E-cyanomethanimine towards Sagittarius B2(N) in the Green Bank Telescope PRIMOS Survey", Ap. J. Letters, 765, L10 (2013). [5] K. Prozument, A.P. Colombo, Y. Zhou, G.B. Park, V.S. Petrovic, S.L. Coy, and R.W. 
Field, "Chirped-pulse Millimeter-wave Spectroscopy of Rydberg-Rydberg Transitions", Phys. Rev. Lett. 107, 143001 (2011). [6] D. Patterson, M. Schnell, and J.M. Doyle, "Enantiomer-specific detection of chiral molecules via microwave spectroscopy", Nature 497, 475 (2013). "Surfing on a Plasma Wave - Can a Grand Challenge for Engineering Answer the Big Questions of Physics?" Thomas Katsouleas , Provost and Department of Physics UVA [Host: Joseph Poon] The National Academy of Engineering has identified 14 Grand Challenges for Engineering for the 21st Century, spanning human needs from sustainability to security, health and joy of living. The last category includes such topics as engineering the tools of scientific discovery. This talk will include a brief review of the NAE Grand Challenges and the response of higher education to the Challenges, and will conclude with a review of one Grand Challenge topic: the development of ultra-compact particle accelerators based on surfing at (nearly) the speed of light on waves created by lasers or particle beams in a plasma gas. The prospects of these devices as tools of scientific discovery as well as beam therapy will be discussed. "Iron Chef: recipes for building magnetic structures atom by atom" Adrian Feiguin , Northeastern University Understanding Magnetism is a complex undertaking: it relies on our knowledge of the exact position of magnetic ions in a crystal and their interactions. More important, at its core, this is fundamentally a quantum problem and requires understanding the cooperative effects of many degrees of freedom. In the past decade, we have witnessed enormous progress in experiments that consist of placing magnetic atoms at predetermined positions on substrates, and building magnetic nanostructures, one atom at a time. The electrons in the substrate mediate the interactions between the spins, and scanning tunneling microscopy allows one to study their properties. In order to understand these interactions, we rely on a theory developed decades ago by Ruderman, Kittel, Kasuya, and Yosida, dubbed "RKKY Theory", which applies when the spins are classical. The quantum nature of the electronic spin introduces more complexity, and competition with another quantum phenomena: the Kondo effect. The combined effect is non-trivial, and can only be studied by numerical means. I will describe this effect by introducing an exact mapping onto an effective one-dimensional problem that we can solve with the density matrix renormalization group method (DMRG). I will also show that for dimension d>1, Kondo physics dominates even at short distances, while the ferromagnetic RKKY state is energetically unfavorable. This may have important implications for our understanding of heavy fermion materials and magnetic semiconductors. "Hoos at Fermilab" Craig Group , University of Virginia I will review the current status of the Mu2e and NOvA experiments at Fermilab. These experiments hope to shed light on several important topics in particle physics: neutrino oscillations, lepton flavor, the matter/antimatter asymmetry of the Universe, dark matter, and others. As I describe the status and goals of these experiments I will highlight the impact of our UVA efforts. "Love triangles, quantum fluctuations and spin jam " Seung-Hun Lee , UVA-Physics When magnetic moments are interacting with each other in a situation resembling that of complex love triangles, called frustration, a large set of states that are energetically equivalent emerge. 
This leads to exotic spin states such as spin liquid and spin ice. In a paper recently published in the Proceedings of the National Academy of Sciences (PNAS), we presented evidence for the existence of a topological glassy state, which we call a spin jam, induced by quantum fluctuations. The case in point is SrCr$_{9p}$Ga$_{12-9p}$O$_{19}$ (SCGO(p)), a highly frustrated magnet, in which the magnetic Cr ions form a quasi-two-dimensional triangular system of bi-pyramids. This system has been an archetype in the search for exotic spin states. Understanding the nature of the state has been a great intellectual challenge. Our new experimental data and theoretical spin jam model provide for the first time a coherent understanding of the phenomenon. Furthermore, the findings strongly support the possible existence of purely topological glassy states. Special Colloquium: INPP Annual Lecture "The Novel World of Hadron Physics" Stanley J. Brodsky , SLAC, Stanford University [Host: Dinko Pocanic] I will survey a number of exciting new developments in hadron physics. These include: new insights into the nature of the color-confining quark potential in quantum chromodynamics; a novel application of supersymmetry to hadron physics; the relation between the parameter $\Lambda_{\overline{MS}}$, which controls high-energy interactions of quarks, and the mass of the proton; and the elimination of the renormalization scale ambiguity for perturbative QCD calculations. I will also discuss several novel experimental tests of QCD which can be performed at JLab, including: hard exclusive and diffractive reactions; flavor-dependent antishadowing of nuclear interactions; intrinsic strange- and charm-quark phenomena; the production of tetraquarks and other exotic hadronic states; and factorization-breaking lensing phenomena. OSA & SPIE joint Student Chapter at UVA "Ultracold molecules – a new frontier for quantum physics and chemistry" Jun Ye , JILA, National Institute of Standards and Technology, University of Colorado Molecules cooled to ultralow temperatures provide fundamental new insights into strongly correlated quantum systems, molecular interactions and chemistry in the quantum regime, and precision measurement. Complete control of molecular interactions, by producing a molecular gas at very low entropy near absolute zero, has long been hindered by the molecules' complex energy level structure. Recently, a range of scientific tools have been developed to enable the production of molecules in the quantum regime. Here, molecular collisions follow full quantum descriptions. Chemical reactions are controlled via the quantum statistics of the molecules, along with dipolar effects. Further, molecules can be confined in reduced spatial dimensions and their interactions precisely manipulated via external electromagnetic fields. For example, by encoding a spin-1/2 system in rotational states, we realize a spin lattice system where many-body spin dynamics are directly controlled by long-range and anisotropic dipolar interactions. These new capabilities promise further explorations of strongly interacting and collective quantum effects in exotic quantum matter. Chemistry Building, Room 402 "Inflationary Cosmology: Is Our Universe Part of a Multiverse?" Alan Guth , MIT [Host: Brad Cox] Inflationary cosmology gives a plausible explanation for many observed features of the universe, including its uniformity, its mass density, and the patterns of the ripples that are observed in the cosmic microwave background.
Beyond what we can observe, most versions of inflation imply that our universe is not unique, but is part of a possibly infinite multiverse. I will describe the workings of inflation, the evidence for inflation, and why I believe that the possibility of a multiverse should be taken seriously. "Engaging with DPRK for Science Diplomacy and World Peace" ChanMo Park , Chancellor of PUST/Former President of POSTECH As Chancellor of the Pyongyang University of Science & Technology (PUST) starting in October 2010, I have seen the slow changes in the DPRK, especially after Kim Jong Un took over power in January 2012. Globalizing the DPRK is essential for peaceful coexistence and the eventual reunification of the two Koreas. One way to achieve this goal is Science Diplomacy, in particular educating young talent and globalizing them. In this colloquium, a brief introduction to collaborative activities in science and technology between South and North Korea will be presented, followed by a more extensive presentation about the history and current status of PUST. Then the globalization efforts of PUST will be presented, to show that PUST students are being internationalized; it is hoped that they will globalize their country in the near future, which would make an important contribution to world peace. "Discovering Ultra-High Energy Cosmic Rays with your Smartphone" Mike Mulhearn , UC Davis [Host: Bob Hirosky] Cosmic rays which encounter the Earth's atmosphere produce showers of muons and high-energy photons, which can be detected using a smartphone camera. The CRAYFIS experiment was devised to observe cosmic rays at ultra-high energy (UHE) using the existing network of smartphones as a ground detector array. We'll describe our custom app, our lab measurements of smartphone efficiency, and our latest projections for the sensitivity of the CRAYFIS array to UHE cosmic rays. "A new way to image: MRI with a 10,000,000-fold increase in sensitivity" Gordon Cates , University of Virginia "Screening of charge and structural motifs in oxides" Peter Littlewood , Argonne National Laboratory and University of Chicago The boundary between metal and insulator remains a fruitful source of emergent phenomena in materials, ranging from oxides to cold atoms. Typically the insulating side of this boundary is occupied by an electronic crystal (though often disordered), and at higher temperatures a polaronic liquid or bad metal. While the paradigm Hamiltonian for this transition involves only short-range electronic correlations, in practice the transition is tuned by disorder, by screening of longer-range Coulomb forces, and by coupling to the lattice. These lectures will discuss a few of these phenomena in real oxide systems, including bulk and interface transition metal oxides. Heterostructure oxides offer the opportunity to build in electric fields by precise control of chemistry on the atomic scale, used recently to generate modulation doping of two-dimensional electron gases (2DEGs) in oxides. The origin of the 2DEG, whether in pristine or defected materials, is under debate. I will discuss the role of surface redox reactions, in particular O vacancies, as the source of mobile carriers, and also discuss their role in the switching of ferroelectricity in ultra-thin films. While electric charges can be screened by mobile carriers, the same is not true of strain fields, which have intrinsic long-range interactions that cannot be screened.
When strain fields are produced as a secondary order parameter in phase transitions, as for example in ferroelectrics, this has unexpected consequences for the dynamics of order parameter fluctuations, including the generation of a gap in what would otherwise have been expected to be Goldstone modes. In some cases, e.g. manganites and nickelates, other intra-cell modes can nonlinearly screen the order parameter, which produces a strong sensitivity of ordering to octahedral rotations, essentially a jamming transition. This is relevant for tuning entropic effects at phase transitions, perhaps to enhance electro-caloric effects. "Quantum-gas physics in orbit: prospects for microgravity Bose-Einstein condensates aboard NASA's Cold Atom Laboratory" Nathan Lundblad , Bates College Notions of geometry, topology, and dimensionality have directed the historical development of quantum-gas physics, as has a relentless search for longer-lived matter-wave coherence and lower absolute temperature. With a toolbox of forces for confinement, guiding, and excitation, physicists have used quantum gases to test fundamental ideas in quantum theory, statistical mechanics, and in recent years notions of strongly-correlated many-body physics from the condensed-matter world. Some of this work has been hampered by terrestrial gravity; levitation schemes of varying degrees of sophistication are available, as are atomic-fountain and drop-tower microgravity facilities, but the long-term free-fall environment of low-Earth orbit remains a tantalizing location for quantum-gas experiments. I will review a planned NASA microgravity program set to launch to the International Space Station in 2016. One set of experiments will explore a trapping geometry for quantum gases that is both theoretically compelling and difficult to attain terrestrially: that of a spherical or ellipsoidal shell. This trap could confine a Bose-Einstein condensate to the surface of an experimentally-controlled "bubble." Other experiments will focus on atom interferometry and few-body physics. I will also review recent terrestrial work tailoring periodic geometries for BEC toward interesting solid-state analogues. "From correlated topological insulators to iridates and spin liquids" Stephan Rachel , Dresden University of Technology The non-interacting topological insulators (TIs) have attracted great interest in the last decade. While this class of materials is today well-understood, the effect of electron-electron interactions in such systems remains in general elusive. In this talk, I will address two different aspects of interactions in 2D topological band structures: (i) In some cases, strongly interacting TI models can be used to describe the exotic magnetic properties of certain transition metal oxides. (ii) In other cases, strong interactions can drive a TI into spin-liquid phases. "Inflation, Dark Matter, and Gravity Waves" Qaisar Shafi , Bartol Institute, University of Delaware [Host: PQ Hung] The Standard Model of strong and electroweak interactions, together with Einstein's theory of general relativity, provides the basis for the highly successful hot big bang cosmology. A large body of groundbreaking cosmological observations favors an epoch of primordial inflation, during which the very early universe experienced an exponentially rapid expansion phase before transitioning to a hot, radiation-dominated ('big bang') phase.
The present universe, it is now widely accepted, is largely dominated by dark energy, whose nature is entirely mysterious, and also by dark matter, presumably consisting of some relic and still undetected elementary particle. Some aspects of inflationary cosmology will be reviewed, including its prediction concerning the existence of primordial gravity waves whose discovery would have profound implications for high energy physics and cosmology. "Uncovering the Fibonacci Phase in Z3 Parafermion Systems" Miles Stoudenmire , Perimeter Institute Recently there has been great progress in realizing platforms for topological quantum computation, with mounting evidence of the experimental observation of Majorana zero modes. However, braiding such zero modes does not yield a set of transformations sufficient to perform universal, fault tolerant computation. One way forward is to engineer systems realizing Z3 parafermion zero modes, which generalize Majorana zero modes. Coupled Z3 parafermions could hybridize into a phase supporting bulk Fibonacci anyons, a type of non-Abelian anyon that does have universal braiding statistics. Using the density matrix renormalization group (DMRG), we study a two-dimensional model of coupled Z3 parafermions. By working close to the weakly-coupled chain limit, we are able to identify the Fibonacci phase on cylinders as small as four sites in circumference then track its evolution, finding it survives even to the isotropic limit of our model on larger cylinders. We examine the extent of this phase and the wider phase diagram of our model, which turns out to harbor a second topological phase. "First-principles studies of oxide surfaces and interfaces" Andrei Malashevich , Yale University Quantum-mechanical calculations based on methods that do not require any empirical input (first-principles calculations) have become an indispensable tool in studies of materials properties. In this talk, I will focus on applications of first-principles methods to studies of perovskite oxide surfaces and interfaces. Depending on the choice of cations, oxides can have almost any desired property. I will present two examples of materials that exhibit relation between structure and electronic properties at surfaces and interfaces. First, I will discuss an interface between metallic LaNiO3 thin film and ferroelectric PbTiO3. The polar field created by a ferroelectric can be used to modulate the conductivity of a channel material. This allows one to design non-volatile electronic devices based on the ferroelectric field effect. Typically, in the ferroelectric field effect, switching the polar state of a ferroelectric changes the carrier density in the channel material. I will show that in the LaNiO3/PbTiO3 interface the conductivity of the interface changes due to changes in carrier mobility, which in turn is related to structural distortions at the interface and appearance of two-dimensional conductivity in PbTiO3 at the interface. Second, I will present a study of properties of the (001) surfaces of thin LaNiO3 films. These films show dramatic differences in conductivity depending on the surface termination (LaO vs NiO2). We find that in this case, the conductivity is related to the polar structural distortions appearing at the surfaces of films. 
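A brief editorial aside to the Stoudenmire abstract above (not material from the talk): the "Fibonacci" name comes from the fusion rule τ × τ = 1 + τ, which makes the number of ways n τ anyons can fuse to the vacuum grow as a Fibonacci number, so the quantum dimension of τ approaches the golden ratio. A minimal Python counting sketch, assuming only that fusion rule:

```python
# Count fusion channels of n Fibonacci anyons (fusion rule: tau x tau = 1 + tau).
# Track how many fusion paths end with total charge 1 (vacuum) or tau.
def fusion_paths(n):
    paths = {"1": 0, "tau": 1}          # start from a single tau anyon
    for _ in range(n - 1):              # fuse in one more tau at a time
        paths = {
            "1": paths["tau"],                  # tau x tau can give 1
            "tau": paths["1"] + paths["tau"],   # 1 x tau and tau x tau can give tau
        }
    return paths

golden_ratio = (1 + 5 ** 0.5) / 2
prev = 1
for n in range(2, 16):
    dim = fusion_paths(n)["1"]          # degeneracy with trivial total charge
    print(n, dim, dim / prev if prev else None)
    prev = dim
# The printed ratios approach the golden ratio, the quantum dimension of tau.
print("golden ratio:", golden_ratio)
```

The exponential growth of this fusion space with a non-integer base is what distinguishes non-Abelian anyons such as τ from ordinary qubit registers.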
"Phase incoherence driven melting of order parameters in cuprate high temperature superconductors and disordered charge density wave systems" Utpal Chatterjee , University of Virginia Charge density waves (CDWs) and superconductivity are canonical examples of symmetry breaking in materials. Both are characterized by a complex order parameter – namely an amplitude and a phase. In the limit of weak coupling and in the absence of disorder, the formation of pairs (electron-electron for superconductivity, electron-hole for CDWs) and the establishment of macroscopic phase coherence both occur at the transition temperature Tc that marks the onset of long-range order. But, the situation may be different at strong coupling or in the presence of disorder. We have performed extensive experimental investigations on pristine and intercalated samples of 2H-NbSe2, a CDW material with strong electron-phonon coupling, using a combination of structural (X-ray), spectroscopic (photoemission and tunnelling) and transport probes. We find that Tc(δ) is suppressed as a function of the concentration (δ) of the intercalated atoms and eventually vanishes at a critical value of δ=δc leading to quantum phase transition (QPT). Our integrated approach provides clear signatures that the phase of the order parameter becomes incoherent at the quantum/ thermal phase transition, although the amplitude remains finite over an extensive region above Tc and beyond δc. This leads to the persistence of an energy gap in the electronic spectra even though there is no long-range order, a phenomenon strikingly similar to the so-called pseudogap in completely different systems such as high temperature superconductors, disordered superconducting thin films and cold atoms. "Stiffness from disorder in frustrated quasi-two-dimensional magnets" Gia-Wei Chern , Los Alamos National Lab Frustrated magnetism has become an extremely active field of research. The concept of geometrical frustration dates back to Wannier's 1950 study of Ising antiferromagnet on the triangular lattice. This simple system illustrates many defining characteristics of a highly frustrated magnet, including a macroscopic ground-state degeneracy and the appearance of power-law correlations without criticality. In this talk I will discuss a simple generalization of the triangular Ising model, namely, a finite number of vertically stacked triangular layers. Our extensive numerical simulations reveal a low temperature reentrance of two Berezinskii-Kosterlitz-Thouless transitions. In particular, I will discuss how short-distance spin-spin correlations can be enhanced by thermal fluctuations, a phenomenon we termed stiffness from disorder. This is a generalization of the well-known order-by-disorder mechanism in frustrated magnets. I will also present an effective field theory that quantitatively describes the low-temperature physics of the multilayer triangular Ising antiferromagnet. "Jamming and the Anticrystal" Andrea Liu , University of Pennsylvania [Host: Seunghun Lee & Israel Klich] When we first learn the physics of solids, we are taught the theory of perfect crystals. Only later do we learn that in the real world, all solids are imperfect. The perfect crystal is invaluable because we can describe real solids by perturbing around this extreme limit by adding defects. But such an approach fails to describe a glass, another ubiquitous form of rigid matter. I will argue that the jammed solid is an extreme limit that is the anticrystal--an opposite pole to perfect order. 
Like the perfect crystal, it is an abstraction that can be understood in depth and used as a starting point for understanding the mechanical properties of solids with surprisingly high amounts of order. Unlike the crystal, it is also a starting point for developing mechanical metamaterials whose Poisson ratios can be tuned anywhere from the completely incompressible to the completely auxetic limit. "Massless and Massive Electrons: Relativistic Physics in Condensed Matter Systems" Vidya Madhavan , University of Illinois at Urbana-Champaign [Host: Jeffrey Teo] Electrons in free space have a well-defined mass. Recently, a new class of materials called topological insulators was discovered, where the low energy electrons have zero mass. In fact, these electrons can be described by the same massless Dirac equation that is used to describe relativistic particles travelling close to the speed of light. In this talk I will describe our recent experimental and theoretical investigations of a class of materials called Topological Crystalline Insulators (TCIs) [1]. TCIs are recently discovered materials [2,3] where topology and crystal symmetry intertwine to create linearly dispersing Fermions similar to graphene. To study this material we used a scanning tunneling microscope [3,4,5]. With the help of our high-resolution data, I will show how zero-mass electrons and massive electrons can coexist in the same material. I will discuss the conditions to obtain these zero-mass electrons as well as the method to impart a controllable mass to the particles, and show how our studies create a path to engineering the Dirac band gap and realizing interaction-driven topological quantum phenomena in TCIs. [1] L. Fu, Topological Crystalline Insulators. Phys. Rev. Lett. 106, 106802 (2011). [2] T. H. Hsieh et al., Topological crystalline insulators in the SnTe material class. Nat. Commun. 3, 982 (2012). [3] Y. Okada et al., Observation of Dirac node formation and mass acquisition in a topological crystalline insulator, Science 341, 1496-1499 (2013). [4] Ilija Zeljkovic et al., Mapping the unconventional orbital texture in topological crystalline insulators, Nature Physics 10, 572–577 (2014). [5] Ilija Zeljkovic et al., Dirac mass generation from crystal symmetry breaking on the surfaces of topological crystalline insulators, arXiv:1403.4906. "Twists in quantum magnets" David Alan Tennant , Spallation Neutron Source, Oak Ridge National Laboratory Neutrons provide the ability to look at quantum states of matter in unrivaled detail. Using quantum magnets in conjunction with magnetic fields, exotic phases of matter can be generated that are highly quantum entangled. The wave functions in these states can be probed in space and time, and in quantum critical states in particular, elaborate symmetries and strange properties like fractional quantum numbers are revealed. In this talk I will show some of the remarkable physics that can be explored and how neutron scattering can be used to investigate what is going on. CANCELED Peter Littlewood , Argonne National Laboratory "Leptogenesis" Yuval Grossman , Cornell University There are three open questions in physics which seem unrelated: Why is there only matter around us? How do neutrinos acquire their tiny masses? Why do all the particles in Nature have integer electric charge? It turns out that these open questions may be related. In this talk I will explain the questions, the connection between them, and describe the on-going theoretical and experimental efforts in understanding them.
"Hoping to get something out of nothing: Vacuum fluctuations and Newtonian (?) gravity" Ricardo Decca , IUPUI [Host: Genya Kolomeisky & Israel Klich] This talk deals with measurements of small forces at sub-micron separations. It tries to address an innocent enough question: Is Newtonian gravity valid at all distances? I will try to convey the deep sense of ignorance we still have in this topic, and describe the efforts undertaken to advance our knowledge. In particular, our experiments are sensitive to Yukawa-like corrections (i.e. interactions mediated by massive bosons) in the 0.1 to 1 micron range. It will be shown that when trying to measure the gravitational interaction at short separations (on the order of 100 nm), other forces have to be taken into account. Among them, vacuum fluctuations are the more ubiquitous ones. A brief description of how macroscopic bodies (classical objects) interact with vacuum fluctuations (a purely quantum effect) will be presented… towards developing approaches that are insensitive to them! These approaches use an engineered sample which allows to establish better constraints in Yukawa-like interactions. This is accomplished by measuring the difference in forces in configurations where vacuum fluctuations are the same, but the corrections to Newtonian gravity (if any) are not. "Fermion space charge in narrow-band gap semiconductors, Weyl semimetals and around highly charged nuclei" Genya Kolomeisky , University of Virginia The field of charged impurities in narrow-band gap semiconductors and Weyl semimetals can create electron-hole pairs when the total charge $Ze$ of the impurity exceeds a value $Z_{c}e.$ The particles of one charge escape to infinity, leaving a screening space charge. The result is that the observable dimensionless impurity charge $Q_{\infty}$ is less than $Z$ but greater than $Z_{c}$. There is a corresponding effect for nuclei with $Z >Z_{c} \approx 170$, however in the condensed matter setting we find $Z_{c} \simeq 10$. Thomas-Fermi theory indicates that $Q_{\infty} = 0$ for the Weyl semimetal, but we argue that this is a defect of the theory. For the case of a highly-charged recombination center in a narrow band-gap semiconductor (or of a supercharged nucleus), the observable charge takes on a nearly universal value. In Weyl semimetals the observable charge takes on the universal value $Q_{\infty} = Z_{c}$ set by the reciprocal of material's fine structure constant. "The Sacred Volcano, and other true stories from North Korea" Richard Stone , American Association for the Advancement of Science "Non-WIMPy Dark Matter Searches at Fermilab" William Wester , Fermilab With the observation of the Higgs Boson in July 2012, high energy physics has claimed victory in accounting for all the known particles within the Standard Model of Particle Physics. However, great and profound questions remain unanswered regarding the nature of energy, matter, space and time. Among these questions is "What is the nature of Dark Matter" that accounts for approximately 80% of the matter in the universe. Gaining popularity is to invoke the possibility that "Non-WIMPy" new particles form the dark matter in contrast to usual Weakly Interacting Massive Particles hypothesis. Novel ideas and novel experiments, often at a very small scale, are exploring large areas of previously unexplored parameter space. Plus, they are a lot of fun too! "AS I REMEMBER. 
A Walk Through My Years at Hughes Aircraft" Scott Walker , Hughes Aircraft and GM Hughes Electronics The basic theme of the talk is to emphasize the value of a physics degree when dealing with a wide spectrum of technical products and systems. Following the story line of his career as presented in his recent autobiography "As I Remember", Scott will discuss the development of a number of military systems, US and international. In a light-handed manner, he will summarize some community projects such as the Discovery Science Center in southern California, funding of a tax initiative for improved local transportation, and the formation by local businesses of the Robert Saunders scholarship for the school of engineering at UC Irvine. He will relate his physics background to the development of new semiconductor components and advanced automotive electronics, including the development of the EV-1, the first US commercial all-electric vehicle, by GM. Bio: Dr. Walker received his Ph.D. in nuclear physics from the University of Virginia in 1961. Moving to California upon graduation, he joined Hughes Aircraft Company, later to become GM Hughes Electronics. Over the next 37 years at Hughes he was given increasingly wide management responsibilities for the design and production of military systems, advanced industrial electronic components, international programs for air traffic control systems, both military and commercial, and advanced automotive electronics. Retiring in 1997 as Corporate Senior Vice President and member of the Office of the Chairman, Scott and his wife of fifty-six years now reside in Indianapolis, Indiana. "Hoo's going to help me? START to help you keep the evidence" Ralph Allen , University of Virginia, Environmental Health & Safety [Host: Rick Marshall] Accidents in academic research labs around the country have drawn attention to the lack of safety awareness that arises when researchers assume that students understand the risks in their labs. Many of the hazardous materials have become increasingly regulated, and granting agencies are now demanding evidence that environmental and safety regulations are followed. The first thing that regulators review is training records. The Office of Environmental Health and Safety has developed a program to help researchers document training and assist in improving laboratory safety. "Quantum microscopy with NV centers in diamonds" Alex Retzker , The Hebrew University of Jerusalem In recent years there has been a growing effort to develop a new type of microscopy, which is based on NV (nitrogen-vacancy) centers in diamonds. In this colloquium I will review the latest results in this field and present a few quantum-enhanced measurement schemes to image single protons using NV centers in diamond. These schemes will use ideas from quantum coherent control and quantum computing and will mainly target imaging with biological goals. "The Life and Death of a Drop: Transitions and Singularities" Sidney Nagel , University of Chicago The exhilarating spray from waves crashing onto the shore, the distressing sound of a faucet leaking in the night, and the indispensable role of bubbles dissolving gas into the oceans are but a few examples of the ubiquitous presence and profound importance of drop formation and splashing in our lives. They are also examples of a liquid changing its topology as it breaks into pieces. Although part of our common everyday experience, these changes are far from understood and reveal profound surprises upon careful investigation.
For example, in droplet fission the fluid forms a neck that becomes vanishingly thin at the point of breakup so that there is a dynamic singularity in which physical properties such as pressure diverge. Singularities of this sort often organize the overall dynamical evolution of nonlinear systems. In this lecture, I will give the life history of a drop – from its birth to its eventual demise – illustrating the passage of its existence with the scientific surprises that determine its fate. "Unknowns of energy concentrating phenomena" Seth Putterman , UCLA The path to equilibrium is not controlled by entropy production. Although entropy increases with every time step, dynamical motion can be dominated by nonlinear physical processes that spontaneously concentrate energy density. In sonoluminescence a bubble concentrates the energy of a traveling sound wave by 12 orders of magnitude to create picosecond flashes of blackbody radiation that originate in a new state of matter. When surfaces are brought into and out of contact they exchange charge: a process called tribo-electrification. This phenomenon can be so strong that the power applied to peel sticky tape is efficiently transduced into a flux of high energy electrons and x-ray photons that can expose an image in a few seconds. For a ferroelectric crystal, instabilities in the phonon spectrum lead to a spontaneous polarization that for Lithium Niobate reaches 15 million volts per cm. The temperature dependence of this field can be used to build a neutron generator based on the fusion of deuterium nuclei. These phenomena challenge a reductionist approach to the theoretical physics of emergent phenomena. The degree to which the energy density of a continuous system can be concentrated by off-equilibrium motion has not been determined by theory. For sonoluminescence, we do not know if the parameter space includes a region where an extra factor of 100 in energy density makes it possible to realize thermonuclear fusion. For triboelectrification, we do not have an ab-initio theory of charge transfer. And for ferroelectrics we do not have an ab-initio theory of the limits of spontaneous polarization which can be designed. INPP Second Annual Lecture "String Theory, Our Real World, and Higgs bosons" Gordon Kane , University of Michigan String theory is exciting because it can address most or all of the questions we hope to understand about the physical world, about the quarks and leptons that make up our world, and the forces that act on quarks and electrons to form our world, cosmology, and much more. It's nice that it provides a quantum theory of gravity too. I'll explain why string theory is testable in basically the same ways as the rest of physics, why many people including string theorists are confused about that, and how string theory is already or soon being tested in several ways, including Higgs boson physics and LHC physics. "Unveiling the order of the high temperature superconductors" Subir Sachdev , Harvard University A central mystery posed by the Cu-based high temperature superconductors has been the nature of their electronic state at low hole density. I will survey the remarkable progress made by recent experiments towards solving this mystery. The experiments show that there is a density-wave order with d symmetry. This is distinct from the d symmetry of the wavefunction of the Cooper pairs responsible for the superconductivity. I will review theories which anticipated these developments.
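To make the distinction drawn in the last abstract concrete, here is a standard parametrization (my notation, not necessarily the one used in the talk): both orders carry the same d-wave form factor, cos k_x - cos k_y, but they live in different channels,
$$\Delta_{\rm SC}(\mathbf{k}) \;\propto\; \cos k_x-\cos k_y \quad (\text{particle-particle, pairing}),\qquad
P_{\mathbf{Q}}(\mathbf{k})\;=\;\sum_\sigma\big\langle c^{\dagger}_{\mathbf{k}+\mathbf{Q}/2,\sigma}\,c_{\mathbf{k}-\mathbf{Q}/2,\sigma}\big\rangle \;\propto\; \cos k_x-\cos k_y \quad (\text{particle-hole, density wave at wavevector }\mathbf{Q}),$$
which is why the same "d symmetry" can label two physically distinct orders.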
Colloquium: Optical Society of America, UVA Student Chapter "Whither Quantum Computing?" Barry Sanders , University of Calgary [Host: Niranjan Sridhar] A working quantum computer would be revolutionary because certain problems, such as simulating quantum materials or factorization, are easily solved on a quantum computer and probably forever hard on non-quantum computers no matter how small or how fast. Quantum computing technology is at an early stage so we do not yet know which medium is best. I discuss the principles of quantum computing, technological efforts for its realization (embellished with animated films), and applications for when a quantum computer eventually works. Chemistry Building, Room 402 "Quantum Networks in Quantum Optics" H. Jeff Kimble , Caltech This talk will discuss the opportunities for the exploration of physical systems that have not heretofore existed in the natural world. "Quantum computing with hypercubes of light" Pei Wang , University of Virginia Quantum computing promises exponential speedup for particular computational tasks, such as factoring integers [1] and quantum simulation [2]. There are two main flavors of quantum computing: the circuit model and the measurement-based model---in particular, one-way quantum computing [3], which is implemented by applying measurements on an entangled resource known as a cluster state. Complicated computation tasks require the scalable generation of cluster states, which remains a formidable challenge. The Pfister lab at UVa has been working on generating scalable cluster states and has successfully built some interesting cluster states [4,5]. In this colloquium, I will first explain continuous-variable one-way quantum computing and cluster states, and then present our new proposal of a simple, "top-down" setup to generate large-size, D-hypercubic-lattice CV cluster states of more than 6000 entangled modes using D identical optical parametric oscillators (OPOs), each with a two-frequency pump [6]. These cluster states are sufficient for universal one-way quantum computation [3], and the high dimensional lattices are useful in quantum error correction based on Kitaev's surface code [7]. Our optical construction method eschews the limitations of a three-dimensional world, enabling simulation of measurements on these high-valence cluster graphs and also inviting theoretical and experimental investigations of their topological properties [8]. 1. P. W. Shor, in Proceedings, 35th Annual Symposium on Foundations of Computer Science, edited by S. Goldwasser (IEEE Press, Los Alamitos, CA, Santa Fe, NM, 1994), pp. 124-134. 2. R. P. Feynman, Int. J. Theor. Phys. 21, 467 (1982). 3. R. Raussendorf and H. J. Briegel, "A one-way quantum computer", Phys. Rev. Lett. 86, 5188 (2001). 4. M. Pysher et al., "Parallel generation of quadripartite cluster entanglement in the optical frequency comb", Phys. Rev. Lett. 107, 030505 (2011). 5. M. Chen, N. C. Menicucci, and O. Pfister, "Experimental realization of multipartite entanglement of 60 modes of the quantum optical frequency comb", arXiv:1311.2957 [quant-ph] (2013). 6. P. Wang, M. Chen, N. C. Menicucci, and O. Pfister, "Weaving quantum optical frequency combs into hypercubic cluster states", arXiv:1309.4105 [quant-ph] (2013). 7. R. Raussendorf, J. Harrington, and K. Goyal, "A fault-tolerant one-way quantum computer", Ann. Phys. (N.Y.) 321, 2242–2270 (2006). 8. T. F. Demarie, T. Linjordet, N. C. Menicucci, and G. K.
Brennen, "Detecting Topological Entanglement Entropy in a Lattice of Quantum Harmonic Oscillators", arXiv:1305.0409 [quant-ph] (2013). "Universality in the Magnetic Response of Metamagnetic Materials" Pradeep Kumar , University of Florida "Non-equilibrium statistical physics, population genetics and evolution" Marija Vucelja , Rockefeller University I will present a glimpse into the fascinating world of biological complexity from the perspective of theoretical physics. Currently the fields of evolution and population genetics are undergoing a renaissance, with the abundance of accessible sequencing data. In many cases the existing theories are unable to explain the experimental findings. The least understood aspects of evolution are intrinsically quantitative and statistical, and we are missing a suitable theoretical description. It is not clear what sets the time scales of evolution, whether for antibiotic resistance, emergence of new animal species, or the diversification of life. I will try to convey that physicists are invaluable in framing such pertinent questions. The emerging picture of genetic evolution is that of a strongly interacting stochastic system with large numbers of components far from equilibrium. In this colloquium I plan to focus on the dynamics of evolution. I will discuss evolutionary dynamics on several levels. First, on the microscopic level: an evolving population explores, over its history, only a small part of the whole genomic sequence space. Next I will coarse-grain and review evolutionary dynamics on the phenotype level. I will also discuss the importance of spatial structures and temporal fluctuations. Along the way I will point out similarities with physical phenomena in condensed matter physics, polymer physics, spin-glasses and turbulence. "Neutrinos: Masters of Surprise" Bob McKeown , Jefferson Lab [Host: Nilanga Liyanage & Gordon Cates] In the recent past the experimental study of neutrinos has yielded a series of surprising results. The very light masses and strong flavor mixing have challenged theorists and also motivated experimentalists to perform ever more sensitive experiments. In this colloquium I will discuss these developments, including the observation of the unexpectedly large mixing angle θ13 by the Daya Bay reactor neutrino experiment. Prospects for future studies, and opportunities for additional surprises, will also be discussed. "Recent Discoveries of Cosmic Ray Anomalies" Eun-Suk Seo , University of Maryland "Majorana Materializes" Jason Alicea , Caltech [Host: Paul Fendley] The 1937 theoretical discovery of Majorana fermions (particles that are their own anti-particles) has since impacted diverse problems ranging from neutrino physics and dark matter searches to the quantum Hall effect and superconductivity. This talk will survey recent revolutionary advances in the condensed matter pursuit of these elusive objects. In particular, I will discuss new ways of "engineering" Majorana platforms using exceedingly simple building blocks, along with pioneering experiments that have made impressive progress towards realizing Majorana fermions. These developments mark the first steps of a fascinating research program that could eventually overcome one of the grand challenges in the field—the synthesis of a scalable quantum computer.
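For readers who want a concrete picture of the "exceedingly simple building blocks" mentioned in the Majorana abstract above, the textbook example (not necessarily the specific platforms of the talk) is Kitaev's one-dimensional chain of spinless fermions,
$$H=\sum_{j=1}^{N-1}\Big(-t\,c_j^{\dagger}c_{j+1}+\Delta\,c_j c_{j+1}+\mathrm{h.c.}\Big)-\mu\sum_{j=1}^{N}c_j^{\dagger}c_j .$$
Writing each fermion as two Majorana operators, $c_j=\tfrac12(\gamma_{2j-1}+i\gamma_{2j})$ with $\gamma_k=\gamma_k^{\dagger}$, the special point $t=\Delta$, $\mu=0$ gives $H=it\sum_{j}\gamma_{2j}\gamma_{2j+1}$: the end operators $\gamma_1$ and $\gamma_{2N}$ drop out entirely, which is the unpaired Majorana zero mode at each end that the engineered platforms aim to realize.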
"Particle and Nuclear physics with cold neutrons" Nadia Fomin , University of Tennessee, Knoxville "Experiences with a "Jackson by Inquiry" electromagnetism course and the connection with neural networks, recent cognitive neuroscience, and modern theories of teaching/learning" Bruce Patton , The Ohio State University The electromagnetism course is often a singular experience in the education of a physics student, graduate and undergraduate. We described recent experiments to deliver the highly technical material in the electromagnetism course in an active learning studio lab format that current physics education research suggests is more optimal. Exploration of the results leads to connections with neuronal network models of the brain, modern neurophysiology and cognitive science, a simple phenomenological model of the teaching/learning process, and optimal design of the learning environment. "The Interplay Between the Top Quark and the Higgs Boson: How a discovery from a generation ago can help us understand the latest breakthrough in particle physics" Chris Neu , University of Virginia The recent discovery at the LHC of a new fundamental particle has generated a significant amount of excitement around the globe -- an excitement unmatched in particle physics since the discovery of the top quark in 1995. Given its observed decay channels, its mass and a handful of its properties, indications are that this new particle could be the long-sought Higgs boson, the particle which is purported to be the linchpin in understanding the imposition of mass to the fundamental particles. However much remains to be known -- it could be the Higgs boson predicted by the standard model or it could be something more exotic. Complete characterization of this new particle must be done in order to understand its true nature; its interactions with the top quark will play a vital role in this endeavor. Herein I describe the importance the top quark will play in studies of this new particle, and describe in detail one particularly important channel in the characterization effort: the search for production of the Higgs boson in association with top-quark pairs at CMS. "When a theorist met an experimentalist or vice versa" Israel Klich & Seunghun Lee , University of Virginia "Bloch, Landau, and Dirac: Hofstadter's Butterfly in Graphene" Philip Kim , Columbia University Electrons moving in a periodic electric potential form Bloch energy bands where the mass of electrons are effectively changed. In a strong magnetic field, the cyclotron orbits of free electrons are quantized and Landau levels forms with a massive degeneracy within. In 1976, Hofstadter showed that for 2-dimensional electronic system, the intriguing interplay between these two quantization effects can lead into a self-similar fractal set of energy spectrum known as "Hofstadter's Butterfly." Experimental efforts to demonstrate this fascinating electron energy spectrum have continued ever since. Recent advent of graphene, where its Bloch electrons can be described by Dirac fermions, provides a new opportunity to investigate this half century old problem experimentally. In this presentation, I will discuss the experimental realization Hofstadter's Butterfly via substrate engineered graphene under extremely high magnetic fields controlling two competing length scales governing Dirac-Bloch states and Landau orbits, respectively. 
Chemistry Building, Room 402 "The World According to Higgs" Chris Quigg , Fermi National Accelerator Laboratory New developments in particle physics offer a new and radically simple conception of the universe. Fundamental particles called quarks and leptons make up everyday matter, and two new laws of nature rule their interactions. Until July 4, 2012, our neat story was missing one piece, a particle called the Higgs boson. Without it, there would be no atoms, no chemistry, no liquids or solids, and no basis for life. Why did thousands of physicists devote decades to the hunt, and how does the "discovery of the century" change the way we see the world? Joint Chemistry & Physics Colloquium "Nonlinear optics at the nanoscale" Eric Mazur , Harvard University [Host: Kevin Lehmann & Brad Cox] "Current Results in Neutrino Physics" Christopher White , Illinois Institute of Technology [Host: Craig Dukes] Despite decades of research, neutrinos are still one of the least understood fundamental particles. Even with a world-wide experimental effort over the past 20 years that has started to reveal the neutrino's secrets, there is much more to be learned. One of the hopes is that a detailed study of neutrino properties will help us understand the observed asymmetry between matter and antimatter in the universe. It is also becoming clear that neutrinos play a key role in a variety of astrophysical phenomena, such as supernova explosions. I will review what is currently known about neutrinos as well as near- and longer-term experimental efforts. "Medical applications of nuclear physics" Thia Keppel , Jefferson Lab "Emergent phenomena and universality in quantum systems far from thermal equilibrium" Ehud Altman , Weizmann Institute Recent experiments with ultra-cold atomic gases and trapped ions, as well as solid-state devices such as superconducting circuits designed to manipulate qubits, are posing a new challenge for theory. As in traditional atomic physics these systems are often prepared far from equilibrium, or continuously driven by electromagnetic fields. At the same time they retain a many-body character and intricate quantum correlations, which define a new class of quantum matter. I will first review recent experimental advances in this field and then address a theoretical question: Can the complexity of quantum dynamics in these systems give rise to robust universal phenomena in spite of the non-equilibrium conditions? Joint Colloquium: Physics Department & Society of Physics Students "The Physics IQ Test" Richard Berg , University of Maryland The assembled throngs vote on the outcome of counterintuitive "brainteaser" type physics questions, which are then answered by performing simple physics demonstration experiments. One interesting result is that the average score is about the same for groups ranging from high school students to physics professors. "Superconducting Quarks: Condensed Matter in the Heavens" Mark Alford , Washington University in St. Louis In this talk I will describe the densest predicted state of matter—color-superconducting quark matter. A color superconductor is very different from an "ordinary" electrical superconductor: it occurs at ultra-high density and has a much richer phase structure because quarks come in many varieties. This form of matter may well exist in the core of neutron stars, and the search for signatures of its presence is currently proceeding.
I will give an accessible review of the features of color-superconducting quark matter, and discuss some ideas for finding it in nature. "Exploring topological states with cold atoms and photons" Eugene Demler , Harvard University I will review recent theoretical ideas and experimental realizations of topological states using ultracold atoms in optical lattices and quantum walk protocols with photons. Such systems have enabled several types of measurements that had not been possible in solid-state systems, including direct measurements of the Berry/Zak phases of Bloch bands and the observation of edge states on domain walls in the one-dimensional SSH model. I will also discuss new types of topological states in periodically modulated Floquet-Bloch bands which have been realized in photon quantum walks. "Clusters, Correlations, and Quarks: A High-Energy Perspective on Nuclei" John Arrington , Argonne National Laboratory [Host: Donal Day] While nuclei form the core of matter, an understanding of their structure in terms of their fundamental constituents, quarks and gluons, is still well out of reach due to the complex nature of quark interactions in quantum chromodynamics (QCD). Therefore, "effective" models of nuclei are needed as input for different measurements, from the simple collection of quasifree quarks used in high-energy scattering measurements to the complex shell structure studied in low-energy nuclear physics. These models are useful because of the large separation between the natural energy and distance scales associated with QCD, nuclear binding, and atomic physics. However, there are regimes where physics at vastly different scales mixes non-trivially, which can be seen in high-precision measurements or for specific, well-chosen observables. I will provide some examples of this mixing of energy scales and then focus on the overlap between the scales relevant for nuclear structure and those probed in medium- and high-energy studies of nucleon structure. High-density configurations and large virtual excitations in nuclei yield increased interplay between nuclear scales and QCD, providing opportunities for higher-energy measurements to probe details of nuclear structure and yielding phenomena where low-energy nuclear structure may impact the quark description of matter. "The Supernova Early Warning System (SNEWS)" Alec Habig , University of Minnesota Duluth SNEWS is a cooperative effort between the world's neutrino detection experiments to spread the news that a star in our galaxy has just experienced a core-collapse event and is about to become a Type II Supernova. This project exploits the time difference of a few hours between the neutrinos, which promptly escape the nascent supernova, and the photons, which originate only when the shock wave breaks through the stellar photosphere, to give the world a chance to get ready to observe such an exciting event at the earliest possible time. A coincidence trigger between experiments is used to eliminate potential local false alarms, allowing a rapid, automated alert. A new experiment which will participate in SNEWS is the Helium and Lead Observatory. HALO is a new, dedicated supernova neutrino experiment being built in SNOLAB from a combination of lead and the SNO experiment's old 3He neutron counters. It is designed to be a low-maintenance, high-livetime, and long-lived experiment to complement existing, multi-purpose neutrino detectors.
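The value of the coincidence trigger described above can be seen with a standard accidental-coincidence estimate (illustrative numbers of my own, not SNEWS's official parameters): for N detectors that each issue uncorrelated false alarms at rate r, requiring any two alarms within a time τ of each other gives an accidental rate of roughly
$$R_{\rm acc}\;\approx\;\binom{N}{2}\,2\,r^{2}\tau .$$
With, say, N = 7 detectors, r = 1 alarm per week each, and τ = 10 s, this is of order a few accidental coincidences per century, compared with the dozens of individual false alarms per year that any single detector would produce on its own.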
"Topological Band Theory and Twisted Multilayer Graphene" Gene Mele , University of Pennsylvania Topological insulators are a recently discovered quantum electronic phase of matter. This talk will give a brief overview of the known electronic phases of matter, focusing on the unique properties of topological insulators and their discovery from a careful consideration of the low energy electronic physics of single-layer graphene. Closely related topological ideas are then used to analyze the mysterious electronic behavior of a family of multilayer graphenes known as "twisted" graphenes in which a rotation of neighboring layers leads to unexpectedly rich low energy physics. "Bright Coherent Ultrafast X-Ray Beams on a Tabletop and Applications in Nano and Materials Science" Margaret Murnane , University of Colorado at Boulder [Host: Reihaneh Shahrokhshahi] Ever since the invention of the laser 50 years ago, scientists have been striving to extend coherent laser-like beams into the x-ray region of the spectrum. Very recently, the prospects for tabletop x-ray beams at wavelengths <10Å have brightened considerably. This advance is the direct result of a new ability to manipulate electrons on their natural, attosecond (10^-18s), time-scales using femtosecond lasers. In recent work we uncovered a new regime of nonlinear optics, where bright laser-like X-ray supercontinua with photon energies >1.6keV (wavelengths < 8Å) can be produced from a tabletop femtosecond laser [1]. This represents the most extreme >5001 order nonlinear optical process known. X-rays are powerful probes of the nanoworld. They penetrate thick samples and can image small objects. This talk will also highlight how ultrafast x-rays can capture the coupled motions of charges, spins, phonons and photons that underlie function on the fastest timescales. [2,3] 1. Popmintchev et al, Science 336, 1287 (2012). 2. Mathias, et al, PNAS 109, 4792 (2012). 3. Rudolf et al., Nature Commun 3, 1037 (2012). "Holograms of strings" Diana Vaman , University of Virginia "Precision studies of the nucleon" Nilanga Liyanage , University of Virginia "Fundamental physics with free neutrons" Stefan Baessler , University of Virginia "Quantum fluctuations: From the Casimir Effect to Quantum Entanglement" Israel Klich , University of Virginia In the world of quantum mechanics, nothing is certain, including the meaning of "nothing". Indeed, the Casimir effect, an attraction between two mirrors separated by vacuum, sometimes called "A force from nothing", is an example of the intricate consequences of taking quantum mechanics seriously. The Casimir effect has been in the spotlight in the last decade, as its importance beyond fundamental physics and its experimental demonstration have been realized. The effect is gaining relevance in areas as diverse as cosmology, quantum field theory, condensed matter physics, biology and nanotechnology. In this talk, I will explore the role of quantum fluctuations, and radiation matter coupling in creating this force, as well as present new results on another aspect of quantum fluctuations of great importance: that of entanglement. In particular, I will explore the entanglement between radiation and matter in a framework inspired by the Casimir effect. 
"Higgs Boson Searches at the CDF Experiment: Highlights of UVa Contributions to a Successful Search with the Full Data Set" I present the results from the CDF experiment on the direct searches for a Standard Model Higgs boson produced in p-pbar collisions at a center of mass energy of 1.96 TeV, using the data corresponding to integrated luminosity of up to 10fb-1. The searches are performed in the Higgs boson mass range from 100 to 200 GeV/c2. The dominant decay channels, H → bb and H → W W , are combined with all the secondary channels and significant analysis improvements have been recentlyimplemented to maximize the search sensitivity. UVa contributions to these analyses, their recent improvements, and to the leadership of the effort are highlighted. The results from the CDF experiment are combined with the D0 experiment, both using their full data sets, and a significant excess of data events compared to background prediction is reported. The highest local significance is 3.0 standard deviations while global significance for such an excess anywhere in the full mass range investigated is approximately 2.5 standard deviations. Both experiments at the Large Hadron Collider have recently reported excesses of greater than 5standard deviations in their searches for the Higgs boson. The complementarity and relevance of the Tevatron results are discussed in the context of this recent discovery from the LHC. "Devoting your life to scientific research does not mean that you should actually risk your life and granting as well as regulatory agencies will make sure that you don't" Ralph Allen , University of Virginia "Graphene: how electrons move and interact in the ultimate flatland" Enrico Rossi , College of William & Mary Graphene is a one atom-thick layer of carbon atoms arranged in a two-dimensional honeycomb lattice that was first realized in a laboratory in 2004. In graphene the electrons are strictly con fined to live in two dimensions and behave as massless Dirac fermions described by two-dimensional Quantum Electro Dynamics (QED), albeit with a much lower (1/300 th) speed of light and bigger (≈ 1), and tunable, fine structure constant. Due to its unique electronic structure graphene exhibits anomalous electronic properties. In this talk I will discuss the unusual transport properties of graphene and provide a theoretical explanation of the "puzzles" posed by graphene transport measurements since its discovery. I will then discuss the effect of electron-electron interactions. Most of the experiments suggest that in single layer graphene the interactions have only a quantitative effect. However, recently very high quality graphene heterostructures have been realized and the experimental measurements conducted on them suggest that in these structures the interactions can drive the electrons into novel spontaneously broken symmetry ground states. I will present our theoretical study of "hybrid" heterostructures formed by one sheet of single layer graphene and one sheet of bilayer graphene and show that in these structures the spontaneously broken symmetry ground state is 2-fold degenerate with one of the degenerate states analogous to a superfluid chiral state. The chiral nature of one of the degenerate ground states opens the possibility to observe in graphene heterostructures topologically protected midgap states analogous to Majorana modes. 
"101 Years of Superconductivity - My Contributions Therein" Bellave Shivaram , University of Virginia Starting from a brief history of its initial discovery I will trace the development/study of various classes of superconducting materials. I will cover the phenomenology in these different classes with references to microscopic theory where appropriate, and also present a concurrent description of my own experimental contributions. "The National Ignition Facility: Pathway to Energy Security and Physics of the Cosmos" Edward Moses , National Ignition Facility The National Ignition Facility (NIF), at Lawrence Livermore National Laboratory in Livermore, California, is the world's most energetic laser system. NIF is capable of producing over 1.8 MJ and 500 TW of ultraviolet light, 100 times more than any other operating laser. Completed in March 2009, it is maturing rapidly and transitioning into the world's premier high-energy-density science experimental facility, while supporting its strategic security, fundamental science, and energy security missions. By concentrating intense laser energy into target only millimeters in length, NIF can, for the first time, produce conditions emulating those found in planetary interiors and stellar environments and creating fusion energy to power our future. The extreme conditions of energy density, pressure, and temperature will enable scientists to pursue fundamental science experiments designed to address a range of scientific questions, from observing new states of matter to exploring the origin of ultrahigh-energy cosmic rays. Early experiments have been successfully completed in support of materials equations of state, materials strength, and radiation transport in extreme temperature and pressure conditions. The National Ignition Campaign, an international effort pursued on the NIF, aims to demonstrate fusion burn and generate more energy output than the laser energy delivered to the target. Achieving this ignition goal will validate the viability of inertial fusion energy (IFE) as a clean source of energy. A laser-based IFE power plant will require advances in high-repetition-rate lasers, large-scale target fabrication, target injection and tracking, and other supporting technologies. These capabilities could lead to an operational prototype IFE power plant in 10 to 15 years. LLNL, in partnership with academia, national laboratories, and industry, is developing a Laser Inertial Fusion Energy (LIFE) baseline design concept and examining technology choices for developing a LIFE prototype power plant. This talk will describe the unprecedented experimental capabilities of the NIF, its role in strategic security and fundamental science, and the pathway to achieving fusion ignition to create a clean and secure energy future. "Fundamental measurements of the proton's sub-structure using high-energy polarized proton-proton collisions " Bernd Surrow , Temple University Understanding the structure of matter in terms of its underlying constituents has a long tradition in science. A key question is how we can understand the properties of the proton, such as its mass, charge, and spin (intrinsic angular momentum) in terms of its underlying constituents: nearly massless quarks (building blocks) and massless gluons (force carriers). The strong force that confines quarks inside the proton leads to the creation of abundant gluons and quark-antiquark pairs (QCD sea). These 'silent partners' make the dominant contribution to the mass of the proton. 
Various polarized deep-inelastic scattering measurements have shown that the spins of all quarks and antiquarks combined account for only 25% of the proton spin. New experimental techniques are required to deepen our understanding of the role of gluons and the QCD sea in the proton spin. High energy polarized proton-proton (p + p) collisions at RHIC at Brookhaven National Laboratory provide a new and unique way to probe the proton spin structure using very well established processes in high-energy physics, both experimentally and theoretically. A major new tool has been established for the first time using parity-violating W boson production in polarized p + p collisions at √s = 500 GeV, demonstrating directly the different polarization patterns of different quark flavors and paving the path to study the polarization of the QCD sea. Various results in polarized p + p collisions at √s = 200 GeV constrain the degree to which gluons are polarized, suggesting that the contribution of the gluons to the spin of the proton is rather small, in striking contrast to their role in making up the mass of the proton. "A Scientific Analysis of 21st Century Environmental and Economic Challenges" Sir David King , Smith School at Oxford [Host: John T. Yates, Jr., Ian Harrison, & Brad Cox] Unprecedented improvements in human wellbeing over the nineteenth and twentieth centuries have been driven largely by developments flowing from advances in engineering, medicine, agriculture and technology, and by political and economic developments coupled to consumerism. But a necessary consequence of these successes has been an equally unprecedented growth in the global population. The twenty-first century will be dominated by the challenges posed by a mid-century population of around 9 billion people, all seeking a high standard of living. Ecosystem services, an essential element of our continued wellbeing as a species, are already under threat as our need for food production, fresh water, energy sources, minerals etc. grows exponentially to meet unfettered demand. Climate change, driven by fossil fuel usage and by deforestation, provides the biggest challenge of all, since it requires a collective response of the global population, to mitigate the effect and to manage the growing impacts upon our societies. Well-designed technological solutions are desirable and can be compatible with the continued growth of human wellbeing. The socio-political challenges in directing such a collective response are beyond anything previously managed. This may well lead to a mid-century slide into conflict caused by environmental and resource-driven challenges on a scale not previously experienced. The thesis presented here is that meeting these challenges will require a global cultural and technological transformation on much the same scale as the European Renaissance or the Industrial Revolution itself, and a clear understanding by all societies of the need to adapt and strengthen global governance procedures. Decision making at all levels will require significantly enhanced knowledge and understanding. Special Colloquium: Institute of Nuclear and Particle Physics Annual Lecture "Tales from the Darkside of Particle Physics" Bill Marciano , Brookhaven National Laboratory [Host: INPP] The "Minimal" Standard Model of particle physics is almost complete. Interesting hints of a standard Higgs Boson are starting to appear at CERN, and one can ask: is it all that remains to be discovered?
Dark matter observations suggest that an invisible universe of massive particles may exist all around us, but coupled to normal matter primarily by gravity. Can we detect dark particles and study their properties at accelerators? In this talk, I will discuss the implications of a Higgs Boson discovery and speculate on its possible connection to "dark" matter physics. In particular, properties of the "dark" photon, a hypothetical "dark" force carrier, will be described, along with ongoing and proposed experimental efforts to discover it. "The Next Generation of Nuclear Reactor Designs" Sama Bilbao Y Leon , Virginia Commonwealth University There are today over 440 commercial nuclear power reactors operating in 30 countries. They provide about 14% of the world's electricity in the form of economic, environmentally sound and reliable base-load power. In addition, 63 new nuclear reactors are currently under construction in 14 countries. But much has changed in the design of nuclear reactors since the first commercial nuclear power stations started operating in the 1950s. Modern nuclear reactors, those that will be built in the short term, achieve improvements over existing designs through small to moderate modifications, with a strong emphasis on maintaining design provenness and building upon the lessons learnt from 40 years of successful operation, to minimize technological and investment risks. At the same time, nuclear designers are already working on a new generation of nuclear reactor concepts incorporating radical conceptual changes in design approaches or system configuration in comparison with existing practice. Substantial research and development efforts, feasibility tests, as well as a prototype or demonstration plant are probably required prior to the commercial deployment of these innovative designs. This talk will provide an overview of the most recent developments in nuclear reactor design, including those using alternative fuel cycles, such as thorium. "Imaging the microscopic structure of shear thinning and thickening colloidal suspensions" Xiang Cheng , Cornell University While a simple Newtonian fluid such as water flows with a constant viscosity, many structured fluids ranging from polymer melts to surfactant solutions exhibit fascinating non-Newtonian flow behaviors including shear thinning and shear thickening. One typical example is a colloidal suspension, whose viscosity can vary by orders of magnitude depending on how quickly it is sheared. Although these non-Newtonian behaviors are believed to arise from the arrangement of suspended particles and their mutual interactions, microscopic particle dynamics in such suspensions are difficult to measure directly. Here, by combining fast confocal microscopy with simultaneous force measurements, we systematically investigate a suspension's structure as it transitions through regimes of different flow signatures. Our measurements of the microscopic single-particle dynamics unambiguously show that shear thinning results from the decreased relative contribution of entropic forces and that shear thickening arises from particle clustering induced by inter-particle hydrodynamic lubrication forces. Furthermore, we explore out-of-equilibrium structures of sheared colloidal suspensions and report a novel string phase, where particles link into log-rolling strings normal to the plane of shear.
Our techniques illustrate an approach that complements current methods for determining the microscopic origins of non-Newtonian flow behavior in complex fluids. "Spin Ice and Quantum Spin Liquid in Geometrically Frustrated Magnets" Haidong Zhou , National High Magnetic Field Lab In geometrically frustrated magnets (GFMs), the incompatibility between the interactions of the magnetic degrees of freedom in a lattice and the underlying crystal geometry leads to frustration. The massive level of degeneracy introduced by this frustration can persist to low temperatures to enhance the spin fluctuations and suppress the magnetic ordering, thereby resulting in exotic spin ground states with anomalous thermodynamics. In this talk, two GFMs will be introduced: (i) spin ice with the pyrochlore structure, in which the ground state is a short-range ordering of "two spins in, two spins out" configurations on tetrahedra following the "ice rule"; (ii) the quantum spin liquid (QSL), in which the strong quantum fluctuations of spins with small quantum number (S = 1/2 and 1) destroy the magnetic ordering and lead to a spin-liquid-like ground state. Following the introduction, we present our recent studies of the new pyrochlore materials Pr2Sn2O7 and Dy2Ge2O7, and the new QSL materials Ba3CuSb2O9 and Ba3NiSb2O9 with a triangular lattice of S = 1/2 and S = 1, respectively. "Oxide Nanoelectronics on Demand" Cheng Cen , IBM Complex oxides and their heterostructures have exhibited a great collection of novel functionalities and are considered among the most promising candidates for next-generation technological materials. At the interface formed between LaAlO3 and SrTiO3, by scanning a biased conducting atomic force microscope (AFM) tip along a programmed trajectory at room temperature, we can reversibly control the metal-insulator transition at the nanoscale. With this technique, a variety of rewritable nanoscale devices and structures have been studied. These nanostructures, which are mainly assembled from basic elements including conductive wires and dots with characteristic dimensions of just a few nanometers, show great performance as field effect transistors, nanodiodes and photodetectors. At low temperatures, a variety of electronic, spintronic and superconducting properties are observed, with enormous potential for exploitation in quantum devices. "Pseudo-spin Resolved Transport Spectroscopy of the Kondo Effect" Sami Amasha , Stanford University In strongly-correlated materials, such as high-temperature superconductors and heavy fermion compounds, electrons form many-body states with properties different from those of non-interacting electrons. A simpler and better understood example of electron correlations is the Kondo effect, which describes how the spins of conduction electrons screen the spin of a localized electron that has degenerate spin states (spin-up and spin-down in the case of a localized spin-1/2 electron). This screening generates spin correlations. Electrical transport measurements of a single quantum dot can probe Kondo physics; however, to directly access the spin correlations one needs spin-resolved measurements. We address this challenge by using the orbital states of a double quantum dot as pseudo-spin states: an electron on the left/right dot is associated with pseudo-spin up/down. When the energies of these pseudo-spin states are degenerate, Kondo screening occurs. We establish a correspondence between spin Kondo in a single dot and pseudo-spin Kondo in double dots.
We use this to show that our pseudo-spin resolved spectroscopy measurements of the Kondo state in a double dot correspond to predictions for spin-resolved spectroscopy of spin Kondo. Finally, we explore the interplay between orbital and spin degeneracy in this double dot system. "The Lead Radius Experiment PREX" Robert W. Michaels , Thomas Jefferson National Accelerator Facility The Lead Radius Experiment PREX ran in the Spring of 2010 in Hall A at the Thomas Jefferson National Accelerator Facility (JLab). The experiment measures the parity-violating asymmetry in the elastic scattering of longitudinally polarized electrons from a 208Pb nucleus at an energy of 1.06 GeV and a scattering angle of 5°. The Z boson that mediates the weak neutral interaction couples mainly to neutrons and provides a clean, model-independent measurement of the RMS radius Rn of the neutron distribution in the nucleus. This measurement is a fundamental test of nuclear structure theory, and our result establishes the existence of the neutron skin, i.e. that Rn > Rp. A precise measurement of Rn pins down the density-dependence of the symmetry energy of neutron-rich nuclear matter, which has impacts on neutron star structure, heavy ion collisions, and atomic parity violation experiments. The experiment involves all aspects of the JLab accelerator, from the polarized source to the detector, and capitalizes on JLab's unique strengths for carrying out high-precision parity experiments. In addition to the 2010 data, several technical challenges will be described, as well as prospects for future measurements at JLab from 208Pb and other nuclei such as 48Ca. "Tailoring Dirac Fermions in Molecular Graphene" Kenjiro Gomes , Stanford University The dynamics of electrons in solids is tied to the band structure created by a periodic atomic potential. The design of artificial lattices, assembled through atomic manipulation, opens the door to engineering electronic band structure and to creating novel quantum states. We present scanning tunneling spectroscopic measurements of a nanoassembled honeycomb lattice displaying a Dirac fermion band structure. The artificial lattice is created by atomic manipulation of single CO molecules with the scanning tunneling microscope on the surface of Cu(111). The periodic potential generated by the assembled CO molecules reshapes the band structure of the two-dimensional electron gas, present as a surface state of Cu(111), into a "molecular graphene" system. We characterize the band structure through Fourier transform analysis of impurity scattering maps. We tailor this new tunable class of graphene to reveal signature topological properties: an emergent mass and energy gap created by breaking the pseudospin symmetry with a Kekule bond distortion; gauge fields generated by applying atomically engineered strains; and the condensation of electrons into quantum Hall-like states and topologically confined phases. "N-polaron systems and mathematics" Lawrence Thomas , University of Virginia The polaron is a mathematical model for a "dressed" particle consisting of an electron together with its entourage of local excitations of a quantized phonon field. We will give a brief historical review of the polaron, including the analysis of its ground state by a Brownian motion functional integral and by a related variational expression.
For the case of two or more electrons, the interaction of the electrons with the phonon field gives rise to an effective attraction between electrons that causes the particles to bind together. For N electrons, N → ∞, the systems are unstable in the sense that the binding energy grows faster than linearly in N. We will discuss recent work with Frank, Lieb, and Seiringer which shows that sufficiently strong Coulomb repulsion between electrons can compensate for this binding and provide stability for polaron systems for large N. "Characterizing phase diagram of High Temperature Superconductors via Angle Resolved Photoemission Spectroscopy" Utpal Chatterjee , Argonne National Laboratory High Temperature Superconductors (HTSCs) were discovered more than 25 years ago. However, a microscopic theory of them has yet to be realized. In order to identify the mechanism behind superconductivity in these systems, we must understand the normal state from which superconductivity emerges. From our detailed Angle Resolved Photoemission Spectroscopy (ARPES) measurements on Bi2Sr2CaCu2O8+δ (BISCO 2212) HTSCs we have found that unlike conventional superconductors, where there is a single temperature scale Tc separating the normal from the superconducting state, HTSCs are associated with two additional temperature scales. One is the so-called pseudogap scale T*, below which electronic states are partially gapped, while the second one is the coherence scale Tcoh, characterizing the onset of a significant enhancement in electronic lifetime. We have observed that both T* and Tcoh change strongly with carrier concentration and they cross each other near optimal doping, i.e. the carrier concentration at which an HTSC attains its maximum Tc. Furthermore, there is an unusual phase in the normal state where the electronic excitations are gapped as well as coherent. Quite remarkably, this is the phase from which the superconductivity with maximum Tc emerges. Our experimental finding that T* and Tcoh intersect is not compatible with theories invoking a "single quantum critical point" near optimal doping; rather, it is more naturally consistent with theories of superconductivity for doped Mott insulators. "Putting the Genie Back in the Bottle: The Science of Nuclear Non-Proliferation" Jerry Gilfoyle , University of Richmond "How Green Can Algae Be? Alternative Energy from the Chesapeake Algae Project" William Cooke , College of William and Mary [Host: Tom Gallagher] "Searching for Supersymmetry at the LHC" Daniel Elvira , Fermi National Accelerator Lab Supersymmetry is a theory built on the hypothesis that there is a relation between bosons and fermions. The particle physics community finds it very compelling because it provides a solution to the mass hierarchy problem, allows a percent-level unification of gauge couplings, and predicts a particle candidate for dark matter. The Large Hadron Collider (LHC) at CERN is the best instrument we can count on at the moment to search for supersymmetric particles. It has delivered proton-proton collisions at a center of mass energy of 7 TeV since 2010. The CMS and ATLAS experiments at the LHC are expected to collect 4-5 fb-1 of data before the end of 2011 and explore a very significant fraction of the phase space associated with the simplest supersymmetric models. This talk will go over the experimental strategy for SUSY searches at the LHC, explain the techniques to evaluate the main backgrounds to potential SUSY signals, and review the most recent results.
"Peering into dark corners at Fermilab and CERN" Bob Hirosky , University of Virginia "Atomic calculations for tests of fundamental physics" Marianna Safronova , University of Delaware [Host: Kent Paschke] I will give an overview of applications of atomic calculations for atomic physics tests of fundamental physics, including the study of parity violation, search for EDM, and search for variation of fundamental constants. The goals of high-precision atomic parity violation (APV) studies are to search for new physics beyond the standard model of the electroweak interaction by accurate determination of the weak charge and to probe parity violation in the nucleus. I will discuss the current status and future prospects of atomic parity violation studies and the implications for searches for physics beyond the standard model. The recent advances in theoretical methodology that allowed to reduce theoretical uncertainty in the analysis of the cesium experiment are briefly outlined. I will also discuss recent accurate calculation of the nuclear spin-dependent parity-violating amplitude. New result still leads to the discrepancy between constraints on weak nucleon-nucleon coupling obtained from the cesium anapole moment and those obtained from other nuclear PV measurements. "Nearly perfect fluidity: From cold atoms to hot quarks and gluons" Thomas Schaefer , North Carolina State University A dimensionless measure of fluidity is the ratio of shear viscosity to entropy density. In this talk we will argue that fluidity is a sensitive probe of the strength of correlations in a fluid. We will also discuss evidence that the two most perfect fluids ever observed are also the coldest and the hottest fluid ever created in the laboratory. The two fluids are cold atomic gases (~10^(-6) K) that can be probed in optical traps, and the quark gluon plasma (~10^{12} K) created in heavy ion collisions at RHIC (Relativistic Heavy Ion Collider at Brookhaven National Laboratory). Remarkably, both fluids come close to a bound on the shear viscosity that was first proposed based on calculations in string theory, involving non-equilibrium evolution of back holes in 5 (and more) dimensions. "Electrons and Mirror Symmetry" Kent Paschke , University of Virginia "What Have We Learned from Electron Deep Inelastic Scattering?" Xiaochao Zheng , University of Virginia "Topological Insulators: From Fundamentals to Applications" Di Xiao , Oak Ridge National Lab Topological insulators are materials that have a bulk band gap like an ordinary insulator but support protected conducting states on their edge or surface. These edge/surface states are predicted to have special properties that could be useful for applications ranging from spintronics to quantum computing. In this talk, I will explain the nontrivial band topology of these materials using the Berry phase concept, review current progress on material prediction and realization, and discuss some of the applications in surface catalysis and electronics. "Diving For Treasure In Complex Data " Marvin Weinstein , Stanford University All fields of scientific research have experienced an explosion of data. It is a formidable computational challenge to analyze this data to extract unexpected patterns. Meeting this challenge will require new, advanced methods of analysis. Dynamic Quantum Clustering is such a tool. 
The algorithm, invented by David Horn (Tel Aviv University) and myself, provides a highly visual and interactive tool that allows one to explore complicated data that has unknown structure. My talk will provide a brief introduction to the distinction between supervised and unsupervised methods in data mining (clustering in particular). Then, I will, very briefly, discuss the theory of DQC. The bulk of my talk will be devoted to showing results on a data set coming from the Stanford Synchrotron Radiation Laboratory and some results from data on earthquakes in the Middle East. These examples show the power of DQC applied to data sets on which the currently most favored unsupervised data mining techniques fail to obtain any interesting results. The message will be that large, complex, data sets typically exhibit extended structures that are significant and that cannot be seen by other methods. "Statistical mechanics and dynamics of multicomponent quantum gases" Austen Lamacraft , University of Virginia "Antihydrogen Trapped" Francis Robicheaux , Auburn University Atoms made of a particle and an antiparticle are unstable, usually surviving less than a microsecond. Antihydrogen, the bound state of an antiproton and a positron, is made entirely of antiparticles and is believed to be stable. It is this longevity that holds the promise of precision studies of matter-antimatter symmetry. Low energy (Kelvin scale) antihydrogen has been produced at CERN since 2002. I will describe the experiment which has recently succeeded in trapping antihydrogen in a cryogenic Penning trap for times up to approximately 15 minutes. Colloquium: Kickoff Event for The Optical Society of America at UVA "Multi-Photon and Entangled-Photon Imaging and Lithography" Malvin Teich , Boston University [Host: Lauren Levac] Nonlinear optics, which governs the interaction of light with various media, offers a whole raft of useful applications in photonics, including multiphoton microscopy and multiphoton lithography. It also provides the physicist with a remarkable range of opportunities for generating light with interesting, novel, and potentially useful properties. As a particular example, entangled-photon beams generated via spontaneous optical parametric down-conversion exhibit unique quantum-correlation features and coherence properties that are of interest in a number of contexts, including imaging. Photons are emitted in pairs in an entangled quantum state, forming twin beams. Such light has found use, for example, in quantum optical coherence tomography, a quantum imaging technique that permits an object to be examined in section. Quantum entanglement endows this approach with a remarkable property: it is insensitive to the even-order dispersion inherent in the object, thereby increasing the resolution and section depth that can be attained. We discuss the advantages and disadvantages of a number of techniques in multiphoton and entangled-photon imaging and lithography. "The cultural and ethical world of nuclear weapons scientists" Hugh Gusterson , George Mason University The Los Alamos and Lawrence Livermore National Laboratories are the two largest employers of physicists in the country. Their primary mission is nuclear weapons science. 
Based on over two decades of studying the culture of nuclear weapons scientists as an anthropologist, the speaker discusses the values of nuclear weapons physicists, the reasons young physicists have for choosing a career in nuclear weapons design, the ethical challenges they confront, and the degree of job satisfaction they report. "Electronic Detection and Diagnosis of Health and Illness of Premature Infants" John Delos , College of William and Mary The pacemaking system of the heart is complex; a healthy heart constantly integrates and responds to extracardiac signals, resulting in highly complex heart rate patterns with a great deal of variability. In the laboratory and in some pathological or age-related states, however, dynamics can show reduced complexity that is more readily described and modeled. Reduced heart rate complexity has both clinical and dynamical significance - it may provide warning of impending illness or clues about the dynamics of the heart's pacemaking system. Here we describe simple and interesting heart rate dynamics that we have observed in premature human infants - reversible transitions to large-amplitude periodic oscillations - and we show that they give early warning of bacterial infections in premature infants, and that the appearance and disappearance of these periodic oscillations can be described by a simple mathematical model, a Hopf bifurcation. "Having your cake and seeing it too: In Situ Observation of Incompressible Mott Domains in Ultracold Atomic Gases" Cheng Chin , University of Chicago Atoms at ultralow temperatures are fascinating quantum objects, which can tunnel through barriers, repel or attract each other, and interfere like electromagnetic waves. This wavy behavior of ultracold atoms vividly illustrates the particle-wave duality discussed in modern physics. By loading repulsively interacting atoms into a regular array of tiny optical cells (called an optical lattice), we show that the wavy nature of the atoms can be completely destroyed. At the same time, the gaseous sample develops an interesting multi-layer structure with quantized density plateaus, resembling a multi-tier wedding cake. Our observation of the cake structure in ultracold gases of atoms [1] raises new prospects to investigate the dynamics and transport across a phase boundary [2] and to identify universal critical behavior in the transition regime [3]. Surprising findings along these directions will be reported. "Application of Machine Learning Methods to Genome-Wide Maps of Histone Methylations" Stefan Bekiranov , UVA Medical School The physical length of one copy of the human genome is a little over 1 meter. It is packaged into a nucleus, which is on the order of micrometers in diameter. This is achieved by wrapping the DNA around histones. In the last decade, many breakthroughs have led to the understanding that these histones control subsets of genes that are turned on or off depending on chemical modifications on their tails. They accomplish this by controlling the accessibility of proteins—responsible for turning genes on—to DNA. This accessibility can be characterized by two states: open and closed. Remarkably, over 60 different locations on these tails are subject to at least one of eight types of chemical modifications. Recently, it has been shown that many of these modifications work together to robustly turn genes on or off; however, we are at the beginning of uncovering this complex control network.
To shed light on this network, we apply computational methods, which identify statistically significant combinations, to genome-wide maps of histone modifications. We indeed find that crosstalk among these modifications is extensive and predict novel combinations, which strongly synergize in our models, for further biochemical study. National Physics Day Show "Physics professors Bob Jones, Olivier Pfister, Cass Sackett, and Steve Thornton will delight the crowd with strange and mystifying events." A Family-Oriented Event , See rockets shooting around the auditorium, balls suspended in air, curve balls flying overhead, Van de Graaff generators, skaters spinning around. You will see a bunch of fascinating things you should never do at home. We might even put someone on a bed of nails and crush a cement block on top of them. As usual, there will be plenty of surprises in store. These demonstrations will intrigue and excite both young and old, from novice to expert. Bring your family and friends, but come on time. For more information about this free public event, call 924-3781. "The Quincunx Point" Sylvester J. Gates , University of Maryland Sometimes theoretical physics problems resist resolution for decades. Endeavoring to solve such problems can lead to a new and unexpected viewpoint. Prof. Gates will describe such a problem and how trying to solve it has possibly led to a quincunx point at the five-fold overlap of art, mathematics, music, science, and perhaps... "Elementary Particles of Superconductivity" Assa Auerbach , Technion, Israel Institute of Technology Historically, two paradigms competed to explain superconductivity: (i) Bose-Einstein condensation of weakly interacting charge-2e pairs (Schafroth), and (ii) a pairing instability of the Fermi liquid (BCS). BCS theory was the unquestionable winner until the late 1980s. BCS approximations, however, have suffered major setbacks with the advent of high-temperature, short-coherence-length superconductors, such as cuprates, pnictides, and granular superconducting films. A third paradigm has offered itself: Hard Core lattice Bosons (HCB), which are experimentally realized in cold atoms on optical lattices. HCB behave less like weakly interacting bosons or fermions and (strangely) more like quantum spins. Their static correlations are very well understood by theories of quantum antiferromagnets. Recent calculations of the conductivity of Hard Core Bosons suggest a new route to understanding linear-in-temperature resistivity and other strange metallic properties above the transition temperature. "Beyond Smoke and Mirrors: Climate Change and Energy in the 21st Century" Burton Richter , Stanford University Professor Richter is the co-winner of the 1976 Nobel Prize in physics for the discovery of the J/Ψ particle, which was the first observation of a particle containing a fourth quark, named the charm quark, and was a central part of the so-called November revolution of particle physics. He has accumulated many other honors in his career, including a long tenure as the director of the Stanford Linear Accelerator Center (SLAC) from 1984 to 1999. He has also been the recipient of the E.O. Lawrence Medal, has served as president of the American Physical Society, and is a member of the National Academy of Sciences.
He presently serves on the board of directors of Scientists and Engineers for America, an organization focused on promoting sound science in American government, and is a Senior Fellow by Courtesy of the Center for Environmental Science and Policy at the Stanford Institute for International Studies. In the past several years Professor Richter has turned his attention to the central problem of the 21st century, the effect of human activity on the global climate. He has written a book with the same title as his lecture. "Modern Math in Medieval Islamic Architecture" Peter Lu , Harvard University The conventional view holds that girih (geometric star-and-polygon) patterns in medieval Islamic architecture were conceived by their designers as a network of zigzagging lines, and drafted directly with a straightedge and a compass. I will describe recent findings that, by 1200 C.E., a conceptual breakthrough occurred in which girih patterns were reconceived as tessellations of a special set of equilateral polygons (girih tiles) decorated with lines. These girih tiles enabled the creation of increasingly complex periodic girih patterns, and by the 15th century, the tessellation approach was combined with self-similar transformations to construct nearly perfect quasicrystalline patterns. Quasicrystal patterns have remarkable properties: they do not repeat periodically and have special symmetry, and they were not understood in the West until the 1970s. I will discuss some of the properties of Islamic quasicrystalline tilings, and their relation to the Penrose tiling, perhaps the best-known quasicrystal pattern. "Supermassive Black Holes and the Evolution of Galaxies" Jim Condon , NRAO The first galaxies were small condensations of baryonic matter that fell into the gravitational potentials of dark-matter halos, and larger galaxies are still being assembled from smaller ones by hierarchical merging. Black holes quickly formed and grew in their centers, and energy feedback from these supermassive black holes (SMBHs) dominated the subsequent growth and stellar composition of large galaxies, making them "red, dead, and elliptical" today. To constrain the role of SMBHs in galaxy evolution we recently measured accurate nuclear masses of six Seyfert galaxies using the Keplerian rotation curves of circumnuclear water masers observed with 0.0003 arcsec resolution. The nuclear mass densities are so high that they are consistent only with supermassive black holes, not dense star clusters. Because nearly all galaxies contain SMBHs, recently merged galaxies should contain inspiraling binary SMBHs that may merge and emit very energetic and anisotropic bursts of gravitational radiation. We recently began the first systematic search for inspiraling, binary, or recoiling SMBHs in hundreds of nearby massive galaxies. "Top Quarks at the Large Hadron Collider: In Pursuit of Truth and its Consequences" The top quark is a unique member of the collection of known fundamental particles. Its mass is exceedingly large -- nearly that of a single atom of gold -- which is remarkable given that the top quark is considered to be a point particle with no substructure. Further, the top quark decays rapidly, long before having the chance to form a bound state with other quarks. Hence, the study of top-quark decays affords a direct glimpse of the properties of the parent quark itself, allowing measurements of its mass, spin, charge and other properties.
Finally, several signatures of new phenomena accessible at particle colliders either suffer from top-quark production as a significant background or contain top quarks themselves. With the advent of the operational era of the Large Hadron Collider (LHC), the Compact Muon Solenoid (CMS) experiment has the opportunity to perform precision measurements of top-quark production and decay for the first time away from Fermilab's Tevatron collider, whose experiments produced the discovery of the top quark in 1995. In this talk I will present some of the first results of the CMS top-quark physics program, results to which members of the University of Virginia CMS group made significant contributions. "Entanglement and Entropy in many body systems" As physical systems are cooled down, their properties may no longer be described in classical terms, and we enter a quantum regime. Perhaps the most fascinating quantum property is entanglement. Recently, building on our understanding of entanglement between a few particles, many-body entanglement has received great interest in such varied fields as condensed matter, cosmology and quantum information. Indeed, the scaling of entanglement in large systems is a sensitive measure of the nature of interactions and phases. In contrast with typical thermodynamical behavior, the entanglement entropy of a subregion in a physical system often grows as its boundary area, and not as its volume. In this talk, I will describe such "area laws", their appearance and relation to quantum phase transitions. I will also discuss a yet more detailed analysis of such entanglement, known as the entanglement spectrum. Finally, I will exhibit a universal relation between entanglement and the statistics of current flowing through a quantum point contact, which provides a way to experimentally measure entanglement entropy. "Supersymmetry" Stephen Martin , Northern Illinois University Supersymmetry is a proposed symmetry of particle physics that relates fermions and bosons to each other. It makes the exciting prediction that for every known elementary particle there is a heavier "superpartner" particle waiting to be discovered. One of these superpartners may be the dark matter required by astrophysical and cosmological observations. I will explain the motivations behind supersymmetry and the predicted properties of the superpartner particles, and review indirect evidence suggesting that at least some of them are likely to be discovered at the Large Hadron Collider within the next few years. Several of the most likely possibilities for the discovery signature for superpartners will be discussed. "Memristance and Negative Differential Resistance in Transition Metal Oxides" Stan Williams , HP [Host: Stu Wolf] Memristive devices are nonlinear dynamical systems that exhibit continuous, reversible and nonvolatile resistance changes that depend on the polarity, magnitude and duration of an applied electric field. The memristive properties of metal/metal oxide/metal (MOM) materials systems were discovered in the 1960s and studied without reaching a consensus on the physical mechanism, while the theoretical foundation of memristance was derived by Chua in 1971 without his realizing that there were physical examples of this circuit property. Recent studies on the mechanism revealed that memristive switching is caused by electric field-driven motion of charged dopants that define the interface position between conducting and semiconducting regions of the film.
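A minimal way to see how dopant drift produces memristance is the linear ionic-drift toy model popularized by the HP group (stated here as a sketch for orientation, not as the full device physics covered in the talk): $$v(t)=\left(R_{\mathrm{ON}}\frac{w(t)}{D}+R_{\mathrm{OFF}}\Bigl(1-\frac{w(t)}{D}\Bigr)\right)i(t),\qquad \frac{dw}{dt}=\mu_V\,\frac{R_{\mathrm{ON}}}{D}\,i(t),$$ where $w$ is the width of the doped (conducting) region, $D$ the total film thickness, and $\mu_V$ the average dopant mobility. Integrating the second equation makes $w$, and hence the resistance, a function of the total charge that has flowed through the device, which is precisely the defining property of a memristor.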
There have also been multiple reports of current-controlled negative differential resistance (CC-NDR) in electroformed MOM devices since the early 1960s (e.g. oxides of V, Nb, Ta, Ti and Fe), and there have been a variety of proposals for the physical mechanism. Current work presents persuasive evidence that CC-NDR in these materials is due to a Joule-heating-induced metal-insulator transition (MIT). We have found that both memristance and CC-NDR coexist in many transition metal oxides, and the fact that both effects have been called "switching" has caused a great deal of confusion in the literature and prevented comprehensive understanding of these systems. I will explain the origin of both effects in titanium oxides and show some potential applications of combining the two effects in a single nanoscale device. "Sport Science and the Perfection Point" John Brenkus , ESPN [Host: Lou Bloomfield] UVA alum John Brenkus will talk about the science of sport, drawing upon his vast experience as the creator, executive producer, and host of the Emmy Award-winning show "Sport Science" on ESPN. He will also discuss his recent book "The Perfection Point," which debuted at #1 on BarnesAndNoble.com when it was released on September 1. On "Sport Science," Brenkus brings the top athletes on the planet into his state-of-the-art laboratory to uncover sports' biggest myths and mysteries by using cutting-edge technology to measure momentum, friction and the laws of gravity. Brenkus often wires himself up and steps into the line of fire against pro athletes to see how a "normal" guy stacks up against the pros. "Status of LHCb Experiment" Tomasz Skwarnicki , Syracuse University The LHCb experiment is dedicated to searches for new forces in decays of heavy flavors. I will give an introduction to its physics program. I will discuss the detector performance as measured on the first data, present first results, and make projections for the near and more distant future. Joint Chemistry-Physics Colloquium "CaF: Just Large Enough, and Ca: Even Smaller" Robert Field , MIT CaF is as "not-atom" as a diatomic molecule can be. The core-penetrating and core-nonpenetrating Rydberg states of CaF are observed by two-color Resonance Enhanced Ionization spectroscopy. The observed rovibronic energy levels are input to an energy- and internuclear-distance-dependent Multichannel Quantum Defect Theory fit model. The fitted quantum defect matrix, μ(E,R), accounts for nearly all spectra and dynamics of CaF. A "zone of death" is observed, in which selection-rule-shattering "indirect" interactions of all Rydberg states with each other are caused by one repulsive electronic potential curve. A STIRAP-like, multiphoton, chirped pulse, millimeter wave scheme for "jumping over" this zone of death is being developed. Progress toward "pure electronic spectroscopy" and magnetic resonance-like manipulation of molecular Rydberg states requires taking a step that Arthur Schawlow would have liked, back from CaF, with its one atom too many, to the Ca atom. 5 kilo-Debye Rydberg-Rydberg transitions in Ca are directly detected by Free Induction Decay signals, rather than indirectly, via ions or UV fluorescence, in a pulsed supersonic jet.
"GEM*STAR (Green Energy-Multiplier: Sub-critical, Thermal spectrum, Accelerator-driven, Recycling Reactor)" Bruce Vogelaar , Virginia Tech [Host: Blaine Norum] The world faces serious energy issues, and while nuclear energy could in principle address base-line needs, current methods intrinsically link it to proliferation, waste, high-construction cost, and safety issues. Advances (as confirmed in the 2010 Department of Energy study) in accelerator technology (e.g. SRF at JLab) now allow neutrons to be reliably generated at low-enough cost that a reactor core with a critical mass of fissile material is no longer required. The combination obviates the historical incremental approach to nuclear energy being pursued in this country. The GEM*STAR approach to such an Accelerator Driven System (ADS) thus intrinsically breaks the links to issues which have crippled the nuclear energy option. It does this by requiring: no enrichment, no reprocessing, no critical-mass on site; and providing far deeper burning with orders-of-magnitude less releasable radioactivity in its core and resulting in far less final waste. The project will demonstrate electricity cheaper than coal, and could beneficially utilize today's LWR spent fuel producing no additional waste. Results from the recent workshop on ADS (hosted by VT and JLab) along with the new report from the DOE will be presented. GEM*STAR is a project of ADNA Corp. and the Virginia GEM*STAR Consortium (VCU, VT, JLab, UVA). "Quantum computing over the rainbow" Olivier Pfister , University of Virginia Quantum computing has attracted much attention over the past sesquidecade because it makes integer-factoring easy, even though that has been a historically (if not provably) hard mathematical problem [1]. Another major interest is the exponential speedup of quantum simulations [2]. The physical implementation of nontrivial quantum computing is an exciting, if daunting, experimental challenge, epitomized by the issues of decoherence and scalability of the quantum registers and processors. In this talk, I will present a novel scheme for realizing a scalable quantum register of potentially very large size, entangled in a "cluster" state, in a remarkably compact physical system: the optical frequency comb (OFC) defined by the eigenmodes of a single optical resonator. The classical OFC is well known as implemented by the femtosecond, carrier-envelope-phase- and mode-locked lasers which have redefined time/frequency metrology and ultraprecise measurements in recent years [3,4]. The quantum version of the OFC is then a set of harmonic oscillators, or "Qmodes," whose amplitude and phase are analogues of the position and momentum mechanical observables. The quantum manipulation of these continuous variables for one or two Qmodes is a mature field. Recently, we have shown theoretically that the nonlinear optical medium of a single optical parametric oscillator (OPO) can be engineered, in a sophisticated but already demonstrated manner, so as to entangle, in constant time, the OPO's OFC into a cluster state of arbitrary size, suitable for one-way quantum computing over continuous variables [5,6]. I will describe the mathematical proof of this result and report on our progress towards its experimental implementation at the University of Virginia. [1] P. W. Shor, "Algorithms for quantum computation: discrete logarithms and factoring," in Proceedings, 35th Annual Symposium on Foundations of Computer Science, S. Goldwasser, ed., pp. 
124–134 (IEEE Press, Los Alamitos, CA, Santa Fe, NM, 1994). [2] R. P. Feynman, "Simulating Physics With Computers," Int. J. Theor. Phys. 21, 467 (1982). [3] J. L. Hall, "Nobel Lecture: Defining and measuring optical frequencies," Rev. Mod. Phys. 78, 1279 (2006). [4] T. W. Hänsch, "Nobel Lecture: Passion for precision," Rev. Mod. Phys. 78, 1297 (2006). [5] N. C. Menicucci, S. T. Flammia, and O. Pfister, "One-way quantum computing in the optical frequency comb," Phys. Rev. Lett. 101, 130501 (2008). [6] S. T. Flammia, N. C. Menicucci, and O. Pfister, "The optical frequency comb as a one-way quantum computer," J. Phys. B 42, 114009 (2009). "Exploring the Universe with Gamma-Rays" Brian Winer , Ohio State University [Host: Chris Neu ] The most energetic phenomena in the cosmos are often revealed through their gamma-ray emissions. Observing gamma-rays up to ~100 GeV requires a space-borne observatory. The Fermi Gamma-Ray Space Telescope (FGST) was launched in June 2008 and is beginning its third year of observation in a mission that will last at least 5 years. The primary instrument on FGST is the Large Area Telescope (LAT), which is sensitive to gamma rays from ~20 MeV to over 300 GeV. The current status of the Fermi mission will be discussed along with results from a variety of astrophysical topics including the search for indirect evidence of dark matter. "Materials world under scrutiny: the view using a very powerful probe" Despina Louca , University of Virginia The emergence of unique physical properties in solids is a manifestation of the coexistence and competition of several degrees of freedom. These degrees of freedom are probed by neutrons, which provide details on the structure and dynamics. Examples of systems that will be discussed include the magnetoresistive perovskite oxides, bulk metallic alloys, and the new class of superconductors. Understanding the macroscopic functionality of these systems can be potentially very useful for industrial applications. "Science, Political Science, and Social Responsibility" J.J. Suh and Seunghun Lee , Johns Hopkins University / University of Virginia J.J. Suh, a political scientist, and S.-H. Lee, a physicist, have been working together to find out what really happened to the South Korean (SK) Navy corvette, the Cheonan, that sank on March 26, 2010 in the Yellow Sea near the sea border with North Korea. On May 20, after almost two months of investigation, the SK-appointed Joint Investigation Group concluded that the Cheonan had been destroyed by a North Korean torpedo. Our close examination of the JIG's evidence, however, shows that its conclusion is scientifically untenable and that the integrity of some of its scientific data has been compromised. This episode clearly illustrates the need for interaction and collaboration between social science and natural science experts when science gets entangled with politics, as it often does in this technologically ever-developing world. "The Jefferson Lab Program on Inclusive and Semi-Inclusive Deep Inelastic Scattering" Sebastian Kuhn , Old Dominion University [Host: Don Crabb] Nucleons (protons and neutrons) play a dual role as the building blocks of atomic nuclei (which constitute nearly all of the mass visible around us) and as stable systems bound by the fundamental strong force of Quantum ChromoDynamics (QCD). When studied with the most powerful microscopes (accelerators) on Earth, nucleons appear as a chaotic jumble of a nearly infinite number of "partons" (quarks, antiquarks and gluons).
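The "resolution" referred to here can be made precise with the standard deep-inelastic-scattering variables (textbook definitions, quoted for orientation rather than taken from the talk): $$Q^2\equiv -q^2=-(k-k')^2,\qquad x_B\equiv\frac{Q^2}{2P\cdot q},$$ where $k$ and $k'$ are the four-momenta of the incoming and scattered electron, $P$ is the nucleon four-momentum, and $q=k-k'$. Larger $Q^2$ probes the nucleon on shorter distance scales, of order $\hbar c/\sqrt{Q^2}$, while in the parton model $x_B$ is the fraction of the nucleon's momentum carried by the struck quark.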
However, at the more moderate resolution available at Jefferson Lab, a simpler picture emerges: the quantum numbers of the nucleon are due to just three "valence" quarks which carry a large fraction of its energy-momentum, plus a few quark-antiquark pairs and gluons. One of the main research programs at Jefferson Lab is a detailed study of the distribution of these partons in position and momentum space, and of their intrinsic spins. Deep inelastic scattering (DIS), where relatively large momentum and energy are transferred from a scattered electron to the struck nucleon, is a primary tool to unravel this "medium resolution" structure of the nucleon. Additional information becomes available when one detects part of the final-state debris as well as the scattered electron (semi-inclusive DIS). In my talk, I will give some examples of experiments at Jefferson Lab that employ these tools, and explain what we can learn from them. "Beauty is only skin deep; probing thin film and membrane structure by neutron reflection" Chuck Majkrzak , NIST Over the course of the last two decades, neutron reflectometry has become established as an important structural probe of thin films and multilayered composites, most notably of hydrogenous and magnetic materials. As an introduction, the basic principles and typical applications of neutron reflectometry are briefly reviewed. Examples of neutron reflectometry studies of thin film systems of interest in condensed matter physics, chemical physics, and biophysics are presented. In particular, the scattering length density (SLD) depth profile along the surface normal, averaged over the in-plane directions, can be deduced from specular neutron reflectivity measurements (wavevector transfer Q normal to the surface). The SLD profile, in turn, is directly related to the corresponding material composition distribution. Under favorable conditions, specular neutron reflectometry can resolve variations in the compositional depth profile on a length scale of the order of a nanometer for a thin film having a single unit repeat, whereas for a periodic multilayered system, the spatial resolution can approach an Angstrom. For specular neutron reflection, the complex reflection amplitude or phase associated with an "unknown" segment of a composite film structure can be determined exactly, using reference segments, and a subsequent direct inversion can be performed, thereby ensuring, in principle, a unique result [1]. Thus, the phase-sensitive neutron reflection/inversion process results in a real-space picture without fitting or any adjustable parameters. We will discuss how, because of the one-to-one correspondence between the complex reflection amplitude and the SLD, phase-sensitive NR can be viewed, in effect, as being equivalent to a real-space imaging process, one in which the inversion computation plays a role analogous to that of the brain, for instance, in interpreting the optical image of an object focused on the retina of the eye [2]. In performing phase-sensitive reflectivity measurements in practice, what ultimately limits the accuracy and spatial resolution of the depth profile are the maximum range of Q attainable and the statistical uncertainty in the measured reflected intensities. These effects can be analyzed quantitatively [3], and we will consider the spatial resolution currently possible as well as what can be reasonably expected in the future with more advanced neutron sources and instrumentation (e.g., employing polychromatic beams at continuous sources).
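For reference, in specular reflection the wavevector transfer normal to the surface is fixed by the grazing angle $\theta$ and neutron wavelength $\lambda$ through the standard kinematic relation (not specific to any particular instrument discussed here): $$Q=\frac{4\pi}{\lambda}\sin\theta,$$ so the accessible Q range, and with it the real-space resolution of the SLD profile (roughly $\Delta z\sim 2\pi/Q_{\max}$), is limited by how large an angle and how short a wavelength can be used while still collecting adequate reflected intensity.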
Finally, we will critically examine a possible alternative approach to performing neutron reflectivity measurements, which involves the quantum phenomenon of "Interaction Free Measurement" (IFM) of the type first proposed by Dicke [4] and realized in rudimentary fashion by Kwiat et al. with visible light [5]. The scheme utilized by Kwiat et al. purportedly optimizes the efficiency for performing an IFM of the reflectivity (or transmission) by application of the quantum Zeno effect (which requires polarized photons or neutrons) within an interferometer. "The Search for the Heisenberg-Schwinger Effect: Nonperturbative Pair Production from Vacuum" Gerald Dunne , University of Connecticut The Heisenberg-Schwinger effect is the non-perturbative production of electron-positron pairs when an external electric field is applied to the quantum electrodynamical (QED) vacuum. The inherent instability of the quantum vacuum in an electric field was one of the first non-trivial predictions of QED, but the effect is so weak that it has not yet been directly observed. However, new developments in ultra-high intensity lasers come tantalizingly close to opening a new window on this unexplored extreme ultra-relativistic regime. This necessitates a fresh look at both experimental and theoretical aspects of the Heisenberg-Schwinger effect. I review the basic physics of the problem and describe some recent theoretical ideas aimed at making this elusive effect observable, by careful shaping of laser pulses. This is an example of an emerging new field using ultra-intense lasers to probe fundamental problems in particle physics, gravity and quantum field theory. "Applied string theory -- from gravitational collapse to quark-gluon liquids" Paul Chesler , M.I.T. A remarkable result from heavy ion collisions at the Relativistic Heavy Ion Collider is that shortly after a collision, the medium produced behaves as a nearly ideal liquid. The system is very dynamic and evolves from a state of two colliding nuclei to a liquid in a time roughly equivalent to the time it takes light to cross a proton. Understanding the mechanisms behind the rapid approach to a liquid state is a challenging task. In recent years string theory has emerged as a powerful tool to study non-equilibrium phenomena, mapping the (challenging) dynamics of quantum systems onto the dynamics of classical gravitational systems. The creation of a liquid in a quantum theory maps onto the classical process of gravitational collapse and black hole formation. I will describe how one can use techniques borrowed from numerical relativity in astrophysics to study processes which mimic the dynamics of heavy ion collisions. "Casimir effect due to a single boundary as a manifestation of the Weyl problem" The Casimir self-energy of a boundary is ultraviolet-divergent. In many cases the divergences can be eliminated by methods such as zeta-function regularization or through physical arguments (ultraviolet transparency of the boundary would provide a cutoff). Using the example of a massless scalar field theory with a Dirichlet boundary we explore the relationship between such approaches, with the goal of better understanding the origin of the divergences. We are guided by the insight due to Dowker and Kennedy (1978) and Deutsch and Candelas (1979), that the divergences represent measurable effects that can be interpreted with the aid of the theory of the asymptotic distribution of eigenvalues of the Laplacian first discussed by Weyl.
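For orientation, Weyl's asymptotic law for the Dirichlet Laplacian on a two-dimensional region of area $A$ and boundary length $L$ (the case relevant to the two-dimensional results quoted below) reads $$N(\lambda)\sim\frac{A}{4\pi}\,\lambda-\frac{L}{4\pi}\,\sqrt{\lambda}+\cdots,\qquad \lambda\to\infty,$$ where $N(\lambda)$ counts the eigenvalues of $-\nabla^2$ below $\lambda$; it is this smooth, purely geometrical growth of the spectrum that reappears as the cutoff-dependent "Weyl terms" of the Casimir self-energy discussed next.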
In many cases the Casimir self-energy is the sum of cutoff-dependent (Weyl) terms having geometrical origin, and an "intrinsic" term that is independent of the cutoff. The Weyl terms make a measurable contribution to the physical situation even when regularization methods succeed in isolating the intrinsic part. Regularization methods fail when the Weyl terms and intrinsic parts of the Casimir effect cannot be clearly separated. Specifically, we demonstrate that the Casimir self-energy of a smooth boundary in two dimensions is a sum of two Weyl terms (exhibiting quadratic and logarithmic cutoff dependence), a geometrical term that is independent of cutoff, and a non-geometrical intrinsic term. As by-products we resolve the puzzle of the divergent Casimir force on a ring and correct the sign of the coefficient of linear tension of the Dirichlet line predicted in earlier treatments. "Particle Physics – The Exciting Times at Fermilab" Rob Roser , FNAL There has never been a more exciting time in Particle Physics. The Tevatron scientists are currently mining huge data samples and expect to double the sample yet again before the current run is through. Meanwhile, the intensity frontier effort is ramping up as Fermilab readies itself for life beyond the energy frontier. In my talk, I will discuss some of the exciting physics results that are currently coming out of the Tevatron program and discuss the future plans of the lab "The Race for the Higgs Boson (A Tevatron Perspective)" R. Craig Group , Fermilab I will begin by motivating the Higgs boson as an important piece of the Standard Model of particle physics that has yet to be experimentally verified. I will then give a short review of high energy colliders and particle detectors and will describe the challenges of discovering a Higgs boson with these machines. I will summarize the status at the Tevatron Collider at Fermilab and the Large Hadron Collider at CERN and portray the excitement at these two labs as the race to discover the Higgs boson tightens up. "Low-Background Searches for Rare Events: The MAJORANA Neutrinoless Double-Beta Decay Experiment, and the CLEAN/DEAP Dark Matter Search" Victor Gehman , Los Alamos National Laboratory Rare event searches will have a profound impact on the search for physics beyond the Standard Model in the coming years. This is particularly true in searches for neutrinoless double-beta decay and dark matter, and we will discuss one experiment of each type. The MAJORANA experiment will search for neutrinoless double-beta decay in 76Ge by constructing an array of HPGe detectors in ultra-clean electro-formed copper cryostats deep underground. Recent advances in HPGe detector technology, particularly the development of P-type Point Contact (PPC) detectors present excellent new opportunities in identifying and reducing backgrounds to the double-beta decay signal. The CLEAN/DEAP collaboration is fielding MiniCLEAN, a 400-kg, single-phase detector capable of being filled with either liquid neon or argon. MiniCLEAN uses a spherical geometry to maximize light yield and pulse shape analysis techniques to identify nuclear recoil signals and reject electron recoil backgrounds. Careful attention is being paid to reducing the contamination of detector surfaces by environmental radon gas. We will present an overview and highlight recent R&D progress of both experimental programs. 
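As background to why a half-life measurement translates into neutrino physics, the commonly used relation between the neutrinoless double-beta decay rate and the effective Majorana mass (quoted in its standard schematic form for light-neutrino exchange, not as the collaboration's own sensitivity estimate) is $$\Bigl[T^{0\nu}_{1/2}\Bigr]^{-1}=G^{0\nu}\,\bigl|M^{0\nu}\bigr|^{2}\,\frac{\langle m_{\beta\beta}\rangle^{2}}{m_e^{2}},\qquad \langle m_{\beta\beta}\rangle=\Bigl|\sum_i U_{ei}^{2}\,m_i\Bigr|,$$ where $G^{0\nu}$ is a phase-space factor, $M^{0\nu}$ is the nuclear matrix element, and $U_{ei}$, $m_i$ are the neutrino mixing-matrix elements and masses; longer half-life limits therefore translate into tighter bounds on $\langle m_{\beta\beta}\rangle$.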
"Dark Energy: Taking Sides" Rocky Kolb , University of Chicago Dark energy appears to be the dominant component of the present mass-density of the Universe, yet there is no persuasive theoretical explanation for its existence or magnitude. While the simplest explanation might be Einstein's cosmological constant, there are other possibilities, including dynamical dark energy, modification of general relativity, or back reactions of inhomogeneities. After framing the dark-energy problem, I will discuss possible theoretical solutions, as well as an observational program to study the properties of dark energy. "The MuLan Experiment: Measuring the Muon Lifetime to 1ppm" Kevin Lynch , Boston University The Standard Model of Particle and Nuclear physics makes thousands of successful predictions, based on roughly 20 experimentally determined input parameters. Studies on the Electroweak frontier in particular require extremely precise values for a subset of those parameters, including the Fermi Constant. I will describe the MuLan experiment, which has measured the muon lifetime with unprecedented part per million accuracy, improving our knowledge of the Fermi Constant by a factor of 20. I will describe the physics motivation for the measurement, emphasize the subtle design and analysis challenges of a measurement on the precision frontier, and discuss both our published results and current progress towards our ultimate physics goals. "Entering an Era of Precision Neutrino Physics" Mitchell Soderberg , Yale University The discovery just over a decade ago that neutrinos can change identities by oscillating between flavors was a revolutionary change to the Standard Model description of particle physics. This discovery implies that neutrinos are not massless, and that they could play a crucial role in answering some of the most fundamental questions in particle physics, such as whether the observed matter-antimatter asymmetry in the universe can be attributed to CP violating neutrino interactions. Many experiments are currently attempting to solve the remaining mysteries of neutrino behavior, but this is a challenging task due to the elusive nature of these particles. Liquid Argon Time Projection Chambers (LAr TPCs) are ideally suited for the study of neutrino interactions thanks to their precision detection capabilities that make them the modern day equivalent of bubble chambers. In this talk I will motivate the compelling questions in neutrino physics and introduce the LAr TPC technique, highlighting recent work in the development of this technology, including discussion of the ArgoNeuT (Argon Neutrino Test) test-beam project and the MicroBooNE experiment. Finally, I will discuss preliminary ideas for the ultimate experiment that could be conducted at the Deep Underground Science and Engineering Laboratory (DUSEL) in South Dakota as part of a world-class U.S. neutrino program that is currently being planned. "Searches for a Standard Model Higgs Boson at the Collider Detector at Fermilab" Jennifer Pursley , University of Wisconsin In the standard model of particle physics, the Higgs mechanism is theorized to explain the broken symmetry of the electromagnetic and weak forces by giving mass to the W and Z gauge bosons. One consequence of this theory is the existence of another massive elementary particle, called the Higgs boson. While this theory of electroweak symmetry breaking was first introduced in the 1960's, the Higgs boson has yet to be observed experimentally and the theory remains unproven. 
Finding the Higgs boson is currently one of the primary goals of the Fermilab Tevatron collider and the Large Hadron Collider at CERN. In this colloquium I will start with a brief overview of the standard model of particle physics, the role played by the Higgs mechanism, and previous searches for a Higgs boson. Then I will introduce the Fermilab particle accelerator complex and the Collider Detector at Fermilab experiment, and discuss my own research searching for this elusive piece of the standard model. My focus is on the search for a high-mass Higgs boson, which primarily decays to two W bosons. Although we have not yet discovered a Higgs boson, at the Tevatron we are narrowing the possibilities. Within a few years we should know whether the standard model Higgs boson exists, or if we need a new solution. "Novel magnetism in ultracold atomic gases" "Strings and QCD" "QCD in five dimensions" Chris Dawson , University of Virginia "Why does the (free) neutron decay (?)" "Meeting Future Energy Demand Through Unconventional Technology" Kambiz Safinya , Schlumberger Research Crude oil production forecasts point to a drop of 40 M b/d of conventional oil by 2030. Although the financial and economic crisis drove global energy demand lower in 2009, for the first time since 1981 on any significant scale, demand will resume its long-term upward trend once the economic recovery gathers pace. By 2030, world primary energy demand is forecast to be around 45% higher than today – this is like adding two more United States to world consumption. There is therefore a drive to develop alternative energy sources as well as unconventional hydrocarbon reserves to replace the lost production from conventional reservoirs. Given that conservative estimates of heavy oil reserves approach 6 trillion barrels, and that heavy oil production today is approaching 10% of world production, it is reasonable to suppose that a significant percentage of the production shortfall would be filled through the production of heavy oil. These facts and the significant increase in average crude oil price since the turn of the century have led to an increased level of interest in these types of reservoirs. It is also true that, due to the nature of heavy oil, while the reserves are significant, the recoverable reserves are around 5%-7%. The challenge is therefore to develop technologies that can significantly increase the recovery factors of heavy oil reservoirs in an environmentally acceptable manner. This talk will focus on the current approach adopted by industry and the technologies which will be required to address the challenges stated here. "Nondispersing Rydberg Wavepackets" Tom Gallagher , University of Virginia As first pointed out by Schrödinger, it is possible to make a "classical" atom, one in which the electron moves in an orbit around the nucleus, by creating superpositions of stationary quantum eigenstates. In quantum terms the probability has a moving spatial maximum. The idea lay dormant until the mode-locked laser allowed the creation of atomic (and molecular) wavepackets. Such wavepackets usually disperse, that is, they lose their spatial localization after a few orbits. Dispersion can be prevented by applying a weak microwave field at the orbital frequency. The microwave field phase-locks the electron's orbital motion, and by altering the microwave field it is possible to alter the electron's orbit.
For example, increasing or decreasing the microwave frequency increases or decreases the orbital frequency, and changing the microwave polarization from linear to circular produces a circular orbit. "Detecting Gravitational Waves (and doing other cool physics) with Millisecond Pulsars" Scott Ransom , NRAO The first millisecond pulsar was discovered in 1982. Since that time their use as highly accurate celestial clocks has improved continually, so that they are now regularly used to measure a variety of general relativistic effects and to probe topics in basic physics, such as the equation of state of matter at supra-nuclear densities. One of their most exciting uses, though, is in the current North American (NANOGrav) and international (International Pulsar Timing Array) efforts to directly detect nanohertz-frequency gravitational waves, most likely originating from the ensemble of supermassive black hole binaries scattered throughout the universe. In this talk I'll describe how we are using an ensemble of pulsars to try to make such a measurement, how we could make a detection within the next 5-10 years, and how we get a wide variety of very interesting secondary science from the pulsars in the meantime. "Nonequilibrium thermodynamics at the microscale" Christopher Jarzynski , Univ. of Maryland [Host: Austen Lamacraft] "Nanotube & Graphite based electronics" Keith Williams , University of Virginia "Entropy in Quantum Information Theory and Condensed Matter Physics" Matthew Hastings , Station Q, UCSB While entropy was introduced in thermodynamics to describe heat engines, its applications have spread to widely different areas. I will talk about recent research on two such problems. The first is a problem in information theory: how much information can we send over a noisy communication channel, given that the world is described by quantum mechanics? I will explain the so-called "additivity conjecture", which was a proposed way to calculate the communication capacity of such a channel, and I will explain my recent result disproving this conjecture, showing that we can use entanglement to boost communication capacity. The second problem is in quantum systems far from equilibrium. Here I will describe how entropy can arise from quantum entanglement, and I will discuss novel simulation algorithms and future experiments probing the relaxation back to local thermal equilibrium. "Energy: No Such Thing as a Free Lunch" Eric Prebys , Fermilab Mankind has had a long obsession with the quest for limitless or virtually limitless sources of energy. This quest did not stop with the advent of modern physics, but much of it moved out of the realm of science and into the realm of pseudo-science. Today, "free energy" is a thriving, multi-million dollar business. It involves a colorful cast of characters that range from the sincerely self-deluded to outright charlatans. The fact that their claims are greeted with such credulity by both the public and the news media has profound implications for the general state of scientific understanding in our society. "Dark Energy and the Hubble Constant" Dark energy dominates the expansion of the universe and will determine its ultimate fate. The best complement to cosmic microwave background data for constraining the nature of dark energy is an accurate measurement of the current expansion rate (Hubble constant).
The goal of the Megamaser Cosmology Project is to measure the Hubble constant by using the Green Bank Telescope and the Very Long Baseline Array to discover and image 22 GHz water masers orbiting the nuclei of Seyfert galaxies. We can show that these compact nuclei contain supermassive black holes, not just dense clusters of stars, and determine their masses. In the past year we improved our measurement of the angular-size distance to the galaxy UGC 3789, imaged four more masing galaxies, and derived a preliminary estimate for the Hubble constant. "Exploring the Nature of Matter: Jefferson Lab and its plans" Hugh Montgomery , Director of JLab Thomas Jefferson National Accelerator Facility (Jefferson Lab) is one of the premier facilities for nuclear and hadronic physics in the world. With high luminosity and high polarization continuous wave electron beams, the 6 GeV physics program has produced exciting results during the past decade. Currently the laboratory is executing an upgrade of the accelerator from 6 GeV to 12 GeV: this project was recommended as the top priority in the most recent US nuclear physics long-range plan. The upgrade, which also includes changes to the experimental facilities, will open new avenues of investigation. Beyond this upgrade Jefferson Lab is preparing the case for a future Electron Ion Collider. " E = mc^2, High energy and intensity opens windows on the world" Young-kee Kim , Deputy Director, Fermilab/University of Chicago The profound discovery of Einstein a century ago, that particles can both be made from energy and disappear back into energy, inspires the experiments that provide our knowledge of the smallest building blocks of matter. The experiments, done at enormous energy and intensity frontier accelerators, have led to a consistent theory of the origins of our world up to a certain point. However, at an energy scale not far above what we can attain at existing accelerators, this picture is predicted to break down. Moreover, the theory of the very small is intimately connected to cosmology -- the ultimate cause and structure of our universe. Cosmological observations again point to the need for a new theory in this energy range. In this colloquium, I will trace out the path from where we are and what we need to do to take the next step towards understanding the nature of space and time. The discovery of new particles or new laws at energy and intensity frontier accelerators will open up windows on this world. "Superconductivity at the Dawn of the Iron Age" Zlatko Tesanovic , Johns Hopkins University Recent discovery of iron-based high temperature superconductors hints at a new pathway to the room temperature superconductivity. The new materials feature FeAs layers instead of the signature CuO2 planes of much-studied cuprate superconductors. The antiferromagnetism also appears to be involved, although the d-electrons in FeAs seem considerably more mobile than their cuprate cousins. This high mobility, facilitated by a large overlap between atomic orbitals of Fe and As, plays a crucial role in warding off Hund's rule and the large local moment magnetism of Fe ions, the archrival of superconductivity. I will present a pedagogical review of the current status of the field, highlighting similarities and differences between iron pnictides and cuprates, and emphasizing the importance of the multiband nature of magnetism and superconductivity in these new materials. 
"Academic Fraud and a Calculus of Death" George Gollin , University of Illinois [Host: Craig Dukes ] For a price, it is possible to acquire unearned academic degrees from non-existent universities that market diplomas over the internet. The most sophisticated of these diploma mill cartels, based in Spokane, Washington, used the turmoil in Western Africa to foster the illusion of recognition and accreditation by the Republic of Liberia. But these credentials were obtained through payments to government officials, and were no more legitimate than the supporting web of fake diplomatic missions, schools, accreditors, and credential evaluators created by the "Saint Regis University" group. Their operation spanned at least eighteen states and twenty-two countries, and their stable of degree mills included over seventy non-existent schools selling degrees in medicine, nursing, nuclear and aeronautical engineering, addiction counseling, and special education, among other fields. Falsely identifying herself as a Liberian official, the principal owner of St. Regis wrote to the University of Illinois in 2003 threatening legal action over information I had posted to a university web page. The resulting brawl led to a multi-agency federal criminal investigation: prosecutors indicted the owners and staff of St. Regis for mail fraud, wire fraud, money laundering, and bribery of foreign officials in late 2005. All eight defendants pled guilty; five began serving prison terms in late 2008. This is a serious issue. The investigation revealed an alarming mix of consumer protection, public safety, and national security issues raised by the activities of the Saint Regis group. In addition, the delay in Liberia's recovery from two decades of civil war, due to the corrupting influences of the St. Regis organization, convolves with Liberia's infant mortality rate in a ghastly calculus of death. And we now see a next-generation diploma mill, having learned from St. Regis' mistakes, attacking the higher education systems in the two African nations immediately to the west of Darfur. We are beginning to make progress. New federal legislation intended to begin the long process of obliterating the diploma mill industry is a direct result of the St. Regis case. Several states have also drafted new laws, or otherwise tightened their oversight of degree providers. But it is an international problem of great complexity, and we are slow to respond. I will tell you stories, all of which are true. "The Quantum Mechanics of Global Warming" Brad Marston , Brown University Quantum mechanics plays a crucial, albeit often overlooked, role in our understanding of the Earth's climate. In this talk three well known aspects of quantum mechanics are invoked to present a simple physical picture of what may happen as the concentrations of greenhouse gases such as carbon dioxide continue to increase. Historical and paleoclimatic records are interpreted with some basic astronomy, fluid mechanics, and the use of fundamental laws of physics such as the conservation of angular momentum. I conclude by discussing some possible ways that theoretical physics might be able to contribute to a deeper understanding of climate change. 
"Non-Abelian anyons: New particles for less than a billion" Kirill Shtengel , UC Riverside The notion of quantum topological order has been a subject of much interest recently, in part because it falls outside of the well-established Landau paradigm whereby states of matter are classified according to their broken symmetries. Topologically ordered phases cannot be described by any local order parameter, yet they have many peculiar properties clearly distinguishing them from the conventionally disordered phases. For example, in two dimensions, they may support anyonic excitations - the quasiparticles that are neither bosons nor fermions. Moreover, anyons with *non-Abelian* braiding statistics are expected to occur, particularly in the fractional quantum Hall regime. Interesting in their own right, such systems may also provide a platform for topological quantum computation. Interferometric experiments are likely to play a crucial role in both determining the non-Abelian nature of these states and in their potential applications for quantum computing. I will discuss solid state interferometers designed to detect such non-Abelian quasiparticle statistics. Should these experiments succeed, such interferometers could also become key elements in a topological quantum computer. "High Temperature Superconductivity - After 23 years, where are we at? " Mike Norman , Argonne National Laboratory The field of high temperature cuprate superconductivity remains as controversial as ever. Although certain matters have been settled, for instance the symmetry of the order parameter, there is no accepted microscopic framework for describing these materials. This might seem surprising given their relatively simple electronic structure, but the issues involved touch some of the most fundamental ones facing physics - in particular the problem of how to properly treat strong correlations between electrons. In this talk, I will discuss the progress that has been made, but also the many issues that will have to be resolved before we can say that we have "solved" the cuprate problem. "Quantum Manipulation of Neutral Atoms Without Forces" Thad Walker , University of Wisconsin Interactions between pairs of Rydberg atoms can be so strong that the energy level structure of one atom is dramatically altered by the presence of a second atom 10 microns away. This "Rydberg blockade" is predicted to allow conditional quantum manipulation of individual atoms based on the quantum state of a distant neighboring atom. When successful, the resulting entanglement process occurs without the atoms experiencing any significant interatomic forces. I will describe experiments at the University of Wisconsin that demonstrate blockade-conditioned coherent evolution of a single Rb atom based on the quantum state of a second atom 11 microns away. Extensions of these ideas to deterministic single atom and single photon sources with atomic ensembles will be presented. "Beyond E=mc^2: Using Rare Particle Decays to Probe the Energy Frontier" Craig Dukes , University of Virginia Although there is great excitement in particle physics these days, with the advent of the Large Hadron Collider upon us and the great discoveries we hope it will bring, for the first time in some seventy years there are no plans for any new accelerators to take us to the next energy regime. 
So we will need to look for tiny indirect signs such as rare particle decays in order to find out what may be lurking beyond what we can directly produce in collisions at particle accelerators. There is a long history of such searches for new physics, a history that predates particle physics itself. I will show how such searches will probe mass scales unobtainable by any conceivable particle accelerator and describe the types of accelerators and experiments that are being planned, in particular a very high-sensitivity search for lepton flavor violation in muon decays.
MEC, Room 205
"Graphene-Based Electronics" Nathan Guisinger , Argonne [Host: Keith Williams]
"FRIB: A New Accelerator Facility for the Production of Radioactive Beams" Richard York , MSU
The 2007 Long Range Plan for Nuclear Science had as one of its highest recommendations the "construction of a Facility for Rare Isotope Beams (FRIB), a world-leading facility for the study of nuclear structure, reactions, and astrophysics. Experiments with the new isotopes produced at FRIB will lead to a comprehensive description of nuclei, elucidate the origin of the elements in the cosmos, provide an understanding of matter in the crust of neutron stars, and establish the scientific foundation for innovative applications of nuclear science to society." A heavy-ion driver linear accelerator (linac) will be used to provide stable beams of >200 MeV/u at beam powers up to 400 kW that will be used to produce rare isotopes. Experiments can be done with rare isotope beams at velocities similar to the linac beam, at near zero velocities after stopping in a gas cell, or at intermediate (0.3 to 10 MeV/u) velocities through reacceleration. An overview of the science and the design proposed for implementation on the campus of Michigan State University, leveraging the existing infrastructure, will be presented.
"Studying strong and electroweak interactions using electron scattering at Jefferson Lab"
I will present two research topics of Jefferson Lab: The first topic is focused on a planned precision measurement of the parity violating asymmetry in e-2H deep inelastic scattering (PVDIS). This asymmetry is sensitive to the electroweak neutral coupling $C_{2q}$ of the Standard Model. The experiment (E08-011) has been approved to run from November to December 2009. I will present the progress in the preparation of E08-011, in particular the development of a new fast-counting DAQ system. The second topic is on the extraction of double and single-target spin asymmetries of pion electro-production using JLab Hall B(CLAS)/EG4 data. We expect to extract these asymmetries in the very low $Q^2$ region, $Q^2<0.1~(\mathrm{GeV}/c)^2$. These data will provide important inputs to global analyses of the nucleon resonance structure. Preliminary results using a 3 GeV beam and an NH$_3$ target will be presented.
"The Quantum Spin Hall Effect and Topological Band Theory" Charlie Kane , U. Penn
A topological insulator is a material with a bulk excitation gap generated by the spin orbit interaction, which is topologically distinct from an ordinary insulator. This distinction - characterized by a topological invariant - necessitates the existence of gapless metallic states on the sample boundary, which have important implications for electronic transport. In two dimensions, the topological insulator is a quantum spin Hall insulator, which is a close cousin of the integer quantum Hall state.
In this talk we will outline our theoretical discovery of this phase and describe two recent experiments in which the signatures of this effect have been observed. (1) Transport experiments on HgTe/HgCdTe quantum wells have demonstrated the existence of the edge states predicted for the quantum spin Hall insulator. (2) Photoemission experiments on the semiconducting alloy Bi$_{1-x}$Sb$_x$ have observed the signature of the gapless surface states predicted for a three dimensional topological insulator. We will close by arguing that the proximity effect between an ordinary superconductor and a 3D topological insulator leads to a novel two dimensional interface state which may provide a new venue for realizing proposals for topological quantum computation.
"The study of neutron quantum states in the Earth's gravitational field"
I will discuss the discovery and characterization of gravitationally bound neutron states. In the previous experiments, the lowest neutron quantum states in the gravitational potential were distinguished and characterized by a measurement of their spatial extent. The future detection of resonant transitions between these neutron quantum states with the help of the GRANIT spectrometer (under construction) promises to give further and more precise information. Here, transitions between different quantum states induced by RF pulses shall be observed. These measurements are not only demonstrations of standard quantum mechanics. I will discuss applications of these measurements in the search for spin-dependent short-range interactions.
"Cavity optomechanics" Pierre Meystre , University of Arizona
Recent experimental advances in laser cooling have brought macroscopic oscillators closer than ever before to operating in the quantum regime. Fundamental interest in this frontier lies in the fact that quantum mechanics has never been tested at such a macroscopic scale, particularly with respect to counter-intuitive effects such as superposition and entanglement. From a more practical point of view, mechanical oscillators operating in the quantum regime offer considerable promise as sensors whose precision is fundamentally restricted by quantum mechanics. The talk will present a broad review of the basic principles of the laser cooling of opto-mechanical cantilevers, and then turn to a discussion of some possible applications in the coherent control of atomic and molecular systems.
"Novel Physics with Frozen-Spin Polarized Solid Hydrogen" Andy Sandorfi , JLab
"Quantum-limited measurements: One physicist's crooked path from quantum optics to quantum information" Carl Caves , University of New Mexico
Quantum information science has changed our view of quantum mechanics. Originally viewed as a nag, whose uncertainty principles restrict what we can do, quantum mechanics is now seen as a liberator, allowing us to do things, such as secure key distribution and efficient computations, that could not be done in the realistic world of classical physics. Yet there is one area, that of quantum limits on high-precision measurements, where the two faces of quantum mechanics remain locked in battle. Using my own career as a convenient backdrop, I will trace the history of quantum-limited measurements, from the use of nonclassical light to improve the phase sensitivity of an interferometer, to the modern perspective on how quantum entanglement can be used to improve measurement precision, and finally to how to do quantum metrology without entanglement.
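For context on the quantum limits mentioned in the last abstract (a standard textbook relation, not a result specific to the talk): in phase estimation with $N$ independent, unentangled photons the uncertainty is bounded by the standard quantum limit, while suitably entangled probes can in principle reach the Heisenberg limit,
$$\Delta\phi_{\mathrm{SQL}}\sim\frac{1}{\sqrt{N}},\qquad\Delta\phi_{\mathrm{Heisenberg}}\sim\frac{1}{N},$$
which is the sense in which entanglement can improve measurement precision.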
"SCIENTIFIC CHALLENGES IN HYDROGEN STORAGE: BREAKTHROUGHS AT UVa" I will describe results of recent experiments at UVa which have revealed that hydrogen storage upto 14 wt.% can be achieved. This is a world record for hydrogen uptake. I will also review the significant scientific challenges that remain and discuss possible solutions. Related work in other laboratories will be discussed as well. "Magnetic field-induced phase transition in a quantum gapped system: is the Bose-Einstein condensation concept useful?" Seunghun Lee , University of Virginia "Physics –Fundamentals for Business" Mark Adams , Vice President, ITT Corporation [Host: Bascom Deaver] Mr. Adams discusses a number of poignant experiences as an undergraduate Physics major at UVA and traces how these lessons have been foundational in his approach to building businesses throughout his career. The technical and operational challenges of remaking a failed $7B company with annual losses exceeding $1B are described from the perspective of a closet physicist. Mr. Adams relates his physics inspired approaches – ranging from the futile to the fruitful – to creating an organization that supports over 300,000 subscribers in over 100 countries worldwide. He also discusses the physics behind his second business startup, which has grown to over a hundred professionals with locations in three states. "Bending Back the Light: The science of negative refraction" Costas Soukoulis , Ames Lab [Host: Michael Fowler] "Detecting Cosmic Messengers with Antarctic Balloon Flights " Cosmic rays bring us information about physical processes that accelerate particles to relativistic energies, the effects of those particles in driving dynamical processes in our Galaxy, and the distribution of matter and fields in interstellar space. These cosmic messengers can far exceed the energies produced by man-made particle accelerators on Earth. Balloon-borne instruments configured with particle detectors are flown in Antarctica to study cosmic-ray origin, acceleration and propagation. They are also used to explore a possible supernova acceleration limit and to search for exotic sources such as dark matter and antimatter. Our on-going efforts with balloon-borne experiments will be presented and challenges of extending precision measurements to highest energy practical will be discussed. "The Ancient Science of Violinmaking" Oded Kishony , Charlottesville, Violinmaker "Neutrinos, and the dark side of the light fermions" Alexander Kusenko , UCLA The past decade has been marked by some remarkable discoveries in the neutrino physics: the particles once believed to be massless have turned out to be massive and have shown evidence of lepton family number violation, as well as other interesting phenomena. While this is exciting, the future may hold even more dramatic discoveries, the hints for which begin to appear in astrophysics and cosmology. The observed neutrino masses imply the existence of some yet undiscovered "right-handed" states, which can be very massive and unreachable, but which can also be light enough to constitute the cosmological dark matter and to account for a number of astrophysical phenomena, from supernova asymmetries and the pulsar kicks to the peculiarities in the reionization and formation of the first stars. I will review the recent progress in neutrino physics, as well as the clues that may lead to future discoveries. 
"Photon Wave Mechanics and Spin-Orbit Interaction in Single Photons" Michael Raymer , University of Oregon We often use the term "photon" in reference to individual quantum objects, or particles of light, rather than as excitations of the electromagnetic field. Yet, quantum mechanics textbooks contain no satisfactory wave equation for the photon wave function. I review the analog of the Dirac equation for a photon, which completely describes the evolution of the photon's quantum wave function in coordinate space. Single photons carry orbital angular momentum as well as spin angular momentum. When a single photon travels in a multimode optical fiber, its spin and orbital angular momenta interact, modifying the shape of the photon wave function as it travels. Close analogy of this behavior can be found with that of an electron in a cylindrical potential, in spite of the fact that a photon has no magnetic moment. We are carrying out related experiments to illustrate the usefulness of the photon wave function concept. "Interaction between a molecular magnet monolayer and a metallic surface" Kyungwha Park , Virginia Tech Over the past decade, molecular magnets or single-molecule magnets have drawn considerable attention due to observed magnetic quantum tunneling and interference and a possibility of using them for information storage or devices. There have been so far significant efforts to build and characterize thin films or monolayers of single-molecule magnets on surfaces or single-molecule magnets bridged between electrodes. However, there is need to understand changes of the properties of single-molecule magnets in those environments using atomic-scale simulations. In this regard, we simulate, within density-functional theory, a nanostructure in which prototype Mn12 molecules are adsorbed via a thiol group onto a gold surface. Based on a supercell calculation, we investigate how much charge and spin are transferred between a Mn12 molecule and the metal surface. In addition, we compare the electronic structure and magnetic properties of the nanostructure with those of an isolated Mn12 molecule in the absence and presence of spin-orbit interaction. Joint Astronomy-Physics-NRAO Colloquium "Is the search for the origin of the highest energy cosmic rays over?" Alan Watson , Leeds University, England This question can now be asked because of two results obtained using data recorded at the Pierre Auger Observatory. It has been established, at the 6-sigma level, that the flux of the highest energy cosmic rays is suppressed at energies beyond 5 x 10 19 eV and that above this energy an anisotropy in the arrival directions of the particles is apparent. The arrival directions appear to be associated with sources within the GZK horizon (z ~ 0.018 or 75 Mpc). From these observations it seems probable that we have observed the long-sought Greisen-Zatsepin-Kuzmin effect, demonstrating that ultra-high energy cosmic rays are of extragalactic origin. It is also probable that these particles are protons, thus offering the possibility of insights into features of particle physics at centre-of-mass energies 30 times greater than will be reached at the LHC. Preliminary conclusions from studies of detailed features of extensive air showers suggest that extrapolations from Tevatron energies may not be what have been anticipated hitherto. Much further work remains to be done. 
Chemistry Building, Room 402
"The Birth of Cosmic Ray Astronomy on the Argentine Pampas" Alan Watson , University of Leeds, United Kingdom [Host: Physics Department]
"Dark Energy and Cosmic Sound" Daniel Eisenstein , University of Arizona
I present galaxy clustering results from the Sloan Digital Sky Survey that reveal the signature of acoustic oscillations of the photon-baryon fluid in the first million years of the Universe. The scale of this feature can be computed, and hence the detection in the galaxy clustering serves as a standard ruler, giving a geometric distance to a redshift of 0.35. I will discuss the implications of this measurement for the composition of the universe, including dark energy and spatial curvature. I will close with a more general discussion of SDSS-III, a new collaborative project that will feature a large redshift survey aimed at refining the acoustic oscillation distance scale to 1% as well as surveys for extrasolar planets and the structure of the Milky Way.
"Nucleon Form Factors...50 Years Later" John Arrington , Argonne National Lab
The structure of the proton and neutron can be expressed in terms of the electric and magnetic form factors, which can be measured from elastic electron-proton scattering. Fifty years ago, the first electron scattering measurements of the proton form factors started the process of mapping out the distribution of charge and magnetization of the proton. Four decades of measurements gave us a simple picture of the nucleon, but our understanding was severely limited by the experimental techniques and theoretical understanding. The last ten years have provided several new experimental and theoretical techniques, giving us a much clearer picture of nucleon structure, and providing a few surprises along the way.
Wilsdorf Hall, Room Atrium
"Nanoscale Assembly with DNA" Ned Seeman , NYU
"Probing Nucleons Inside Nucleus" Seonho Choi , Seoul National University
The interior world of the nucleus is still a mystery in nuclear physics. While it is well known that the nucleus is made of nucleons, their properties inside the nucleus are still a big puzzle. There has been a series of experiments to probe the nucleons inside the nucleus. However, the results are still controversial. One main remaining question is regarding the Coulomb Sum Rule (CSR). The colloquium will cover the basic concept of probing the microscopic world with high-energy electron beams, the key issues of the CSR problem and the recent, new experiment at Jefferson Lab to study the CSR problem.
"Fermilab's race for the Higgs boson" Benjamin Kilminster , Ohio State University
One of the most important mysteries in our understanding of the universe is how elementary particles acquire mass. Our best explanation for this requires the existence of a particle called the Higgs boson, which has not yet been directly observed. Particle physicists at Fermilab, near Chicago, are currently capable of producing and detecting Higgs bosons from collisions of matter and antimatter at very high energies. I will explain what exactly these physicists are looking for, and present the experimental challenges involved in a few particular methods for differentiating Higgs bosons from other background processes. Finally, I will discuss future prospects for Higgs boson discovery at Fermilab, as well as the discovery potential of future experiments.
"W Bosons and b Quarks at the Tevatron: Understanding the Haystack to Help Find the Needle" Christopher Neu , University of Pennsylvania Particle physics is at the threshold of an exciting new era. A crucial experimental pursuit is the search for and observation of the Higgs boson, a prominent missing piece in the widely successful standard model of the fundamental world. Searches at the Tevatron proton-antiproton collider in Illinois are closing in on the Higgs, while experiments at the new Large Hadron Collider in Switzerland are scheduled to begin operations later this year. One of the main signatures for the Higgs contains a W boson and one or more b quarks. However, this signature is shared by more common electroweak and strong processes that have not been determined precisely by experiment until now. Herein I will present a new measurement by CDF of W boson and b quark production. This measurement will contribute to improvements in the theoretical models, and I will discuss how this result can be used to sharpen searches for the Higgs and for physics beyond the standard model at both the Tevatron and the Large Hadron Collider. "The Quest for the SM Higgs" Sabine Lammers , Columbia University The Standard Model predicts the existence of one final particle, the Higgs Boson, which is the physical manifestation of spontaneous symmetry breaking as a mechanism for electroweak symmetry breaking, and is responsible for the masses of the known gauge bosons. Without the Higgs, the Standard Model is certainly incorrect or at least incomplete. We are at a precipice in the study of particle physics today because the answer to the question of the existence of the Higgs is about to be revealed. Constraints from precision LEP electroweak data indicate that the Higgs is light, making it within reach of observation by modern high energy particle colliders. I will discuss the state-of-the-art searches for the Standard Model Higgs Boson at the Tevatron and the plans for searches at the LHC. In particular, I will highlight the search techniques that are relevant at each collider and how Higgs searches at the LHC can benefit from knowledge gained at the Tevatron. "A More Accurate Measurement of Pion to Positron Decay" Marvin Blecher , Virginia Tech "Searching for Physics Beyond the Standard Model with Neutrinos" Zelimir Djurcic , Columbia University Although there has been tremendous progress over the past decade, many basic properties of neutrinos are still unknown and the possibility of future surprises remains strong. Recent neutrino experiments have conclusively observed that neutrinos have non-zero masses and that neutrinos change from one flavor to another. The MiniBooNE experiment at Fermilab recently presented its first neutrino oscillation results, where no significant excess of events was observed at higher energies, but a sizeable excess of events was observed at lower energies. The lack of a significant excess at higher energies allowed MiniBooNE to rule out simple 2-neutrino oscillations as an explanation of the LSND signal; however, the excess at lower energies is presently unexplained. Other data sets, including the NuMI, antineutrino, and SciBooNE data, should allow the collaboration to determine whether the lower-energy excess is due to background or to new physics. "Life, the Universe, and Electroweak Symmetry Breaking" Andrew Askew , Florida State University One of the largest remaining questions in particle physics is the mechanism by which the W and Z bosons gain their mass. 
In the Standard Model of Particle Physics, this electroweak symmetry breaking occurs via the Higgs mechanism, though this remains experimentally unverified. I will give an overview of this question and then concentrate on how diboson production and kinematics can give us information about this symmetry breaking. Experimental studies of boson pairs produced at the Tevatron and observed at the D0 experiment will be presented, ending with prospects for further study at the LHC.
"Protein Folding: Energy, Entropy, and Prion Diseases" Bernard Gerstman , Florida International University [Host: Art Brill]
Living systems are the epitome of self-organized complexity. The self-organization occurs on all scales, from the molecular up to the organismal level. The machines responsible for maintaining organization are protein molecules that receive energy and convert it to work. However, protein molecules themselves must self-organize into highly specific shapes. The folding of proteins is a self-organizing process in which a long chain heteropolymer in a disorganized configuration spontaneously changes its shape to a highly organized structure in milliseconds. I explain how the energy and entropy landscape of protein chains is shaped to allow self-organization. I also show how these principles can be used in molecular-level investigations of protein-protein interactions that lead either to beneficial dimerization or to disastrous, disease-producing and potentially fatal protein aggregation.
"The Deep Puzzle of High-Temperature Superconductivity" T. Egami , University of Tennessee
It is already 21 years since high-temperature superconductivity (HTSC) in the cuprates was discovered by Müller and Bednorz. At the beginning many theoreticians, including several Nobel Laureates, claimed they knew the answer. Even today, they keep claiming so, while they acknowledge that they actually do not know how to solve the problem theoretically. In the meantime experimentalists have succeeded in making impressive improvements to their capabilities, and we now know the remarkable details of cuprate physics and the HTSC phenomena. What emerged from the vast amounts of experimental results is the realization that while the existing theories can describe parts of the observed phenomena, something fundamental appears to be lacking from the theory. The puzzle may be deeper than people prefer to admit. In my view one of the most fundamental problems is that the transition from the Mott-Hubbard insulator, due to strong electron-electron interaction, to the Fermi-liquid state is an abrupt one, while any mean-field approximation makes it falsely continuous. In this talk I discuss evidence from neutron scattering experiments that this transition involves nano-scale phase separation, reflecting the discontinuity of the transition, and how this conflict could contribute to the HTSC phenomena.
"Designer atoms: Engineering Rydberg atom wavepackets using pulsed electric fields" Barry Dunning , Rice University
Advances in experimental technique now allow application to Rydberg atoms of pulsed unidirectional electric fields, termed half-cycle pulses (HCPs), whose characteristic times are much less than the classical electron orbital period. In this limit each HCP simply delivers an impulsive momentum transfer or "kick" to the excited electron.
A number of protocols for controlling and manipulating Rydberg atom wavepackets using carefully tailored sequences of HCPs will be described, with emphasis on the production of quasi one-dimensional and near circular Rydberg states, on navigating electron wavepackets in phase space, and on studying reversible and irreversible dephasing using electric dipole echoes. Insights provided by this work into classical-quantum correspondence, physics in the ultra-fast ultra-intense regime, and decoherence in mesoscopic quantum systems will be discussed.
"Entanglement in real magnets" Gabriel Aeppli , University College, London [Host: Seung-Hun Lee]
Quantum entanglement is well known to have consequences for optics and atomic physics, but is less recognized as impacting the properties of solids. Three examples where entanglement matters for real magnets are described: a dilute rare earth fluoride (Nature 425, 48), a transition metal oxide chain (Science 317, 1049), and a layered organometallic compound (PNAS 104, 15264).
"Physics with top quarks" Reinhard Schwienhorst , MSU [Host: Bob Hirosky]
Experimental particle physics has reached a threshold that promises new and exciting insight into the fundamental structure of matter and the origin of particle masses in coming years. Due to its large mass, the top quark plays a key role in this quest for a deeper understanding of nature. We are currently learning a lot about the top quark through measurements at the Fermilab Tevatron. At the LHC at CERN, which starts in 2008, the top quark will become a probe for new physics and a tool for understanding mass generation. I will present our current understanding of the top quark and discuss its role in finding new physics at the Tevatron and the LHC.
"Measurement of the $\pi^0$ Lifetime: Probing the QCD Axial Anomaly" Aron Bernstein , Massachusetts Institute of Technology
The $\pi^0$ lifetime has been measured with significantly improved accuracy at Jefferson Lab using the Primakoff effect. This was achieved by careful control of all of the experimental parameters and included auxiliary measurements of the Compton effect and pair production. This measurement is a test of a prediction based on the QCD axial anomaly plus few-percent chiral corrections which are proportional to the mass difference of the up and down quarks. The basic physics, and a comparison of theory and experiment, will be presented in the context of spontaneous chiral symmetry breaking in QCD, some of its physical consequences, and other experimental tests.
"Creating a Quark Gluon Plasma with Heavy Ion Collisions" David Hofman , University of Illinois Chicago
It has now been seven years since a new era in relativistic heavy ion research began with the first beams at the Relativistic Heavy Ion Collider (RHIC). The primary goal of this effort was to heat a small volume of space so high that normal matter, comprised of protons and neutrons, dissolves into their constituent parts, the quarks and gluons, thus possibly creating a quark gluon plasma and perhaps even providing a window into how the universe may have looked in the first microseconds of its birth. In this talk, I will review the motivation and foundations for this endeavor, discuss several discoveries since RHIC began, explore a few of the more recent measurements, and look forward to what the very exciting and promising future will bring, especially in light of the startup of the new Large Hadron Collider at CERN.
"Natural Nuclear Reactor in Oklo" Alex Meshik , Washington University, St. Louis Natural nuclear reactors were probably abundant on Earth about 2 billion years ago, but so far only 17 have been found in Equatorial Africa, just a few miles apart from each other. We will talk about how these natural reactors were predicted, searched for and discovered, and how the major characteristics of these reactors have been determined. Then we will show how isotope analyses of fission xenon led to realization of the operational mode of natural reactors and understanding of why the reactors did not explode just after they reached criticality.Finally, we will consider some physical, environmental and geochemical implications of this fascinating natural phenomenon. "Interplay of disorder and interactions in two dimensions" Sergey Kravchenko , Northeastern University The discovery of the metal-insulator transition (MIT) in two-dimensional electron systems challenged the veracity of one of the most influential conjectures in the physics of disordered electrons, which states that "in two dimensions, there are no true metals"; no matter how weak the disorder, electrons would be trapped and unable to conduct a current. However, that theory did not account for electron-electron interactions. Recently, we have investigated the interplay between interactions and disorder near the MIT using simultaneous measurements of electrical resistivity and magnetoconductance. It turns out that both the resistance and interaction amplitude exhibit a fan-like spread as the MIT is crossed. From these data we have constructed a resistance-interaction flow diagram of the MIT that clearly reveals a quantum critical point that separates the metallic state, stabilized by interactions, from the insulating state, where disorder prevails. The metallic side of this diagram is quantitatively described by the recent renormalization group theory (Punnoose and Finkelstein, Science 310, 289 (2005)) without any fitting parameters. "JLab Scientific and Technological Advances with Commonwealth of Virginia Universities" Ganapati Myneni , JLab The Continuous Electron Bean Accelerator Facility (CEBAF) at Jefferson Lab in Newport News was established by the Department of Energy as a result of the initiatives from the faculty of Physics at the University of Virginia. Initially the design called for room temperature copper accelerator structures. However, the first director of CEBAF chose Superconducting Radio Frequency (SRF) Technology for the acceleration of the high quality electron beams. This led to many world class scientific and technological advances at JLab including the core SRF and 2 K refrigeration systems. In this presentation I would like to narrate the development of single crystal large grain niobium technology for the benefit of SRF accelerator cavities including the Ganni 2 K refrigeration cycle for the efficient cooling of these accelerator structures. Further recent innovations and evolution of 10 - 50 MeV beam test facility, efficient design of cryomodules and compact THz sources are also discussed. In addition the plans of bringing all these scientific and technological advances for the benefit of the commonwealth of Virginia Universities under the umbrella of UVa are also explained. 
"A Century of Photo Physics: Mitchell Memorial Colloquium" The fascinating history of photography actually extends back more than one millennium, with pre-modern chemical photography finally catching hold around the 1820s through the pioneering work of Niépce and his subsequent collaboration with Daguerre. Viable silver emulsions were developed shortly thereafter by Talbot and others, but it was not until the 1880's that Eastman introduced prototype, flexible films familiar to modern photographers. At the turn of the last century, Eastman's silver halide films had already revolutionized the art world, opened new doors in optical spectroscopy, and established an entirely new mode of journalism. However, the underlying physical process itself was not understood until the late 1930s, when Mott and Gurney published their theory of latent image formation. Until that point, photographic capabilities were still severely limited because latent images were not stable, and emulsions were still quite slow. J.W. Mitchell established a more comprehensive theory of latent image formation that laid the foundation for improvement. His important contributions defined a turning point in modern film photography, and helped to bring high-performance emulsions to the market, where they have dominated for a half century and are still preferred by many professional photographers today. This talk will provide a visual review of the past century of photography, providing examples of daguerreotypes, cyanotypes, kalotypes, and modern silver halide photographs in the context of their role in science, art, and journalism. I will also present a brief survey of recent developments in digital image capture and discuss my expectations for advances in the near future. This memorial colloquium is given in recognition of the contributions J.W. Mitchell, emeritus Professor of Physics at UVa. "Can a solid be "superfluid" ?" Moses Chan , Penn State University Abstract: At temperatures below 2.176K, liquid He-4 enters into a superfluid state and flows without any friction. The onset of superfluidity is associated with Bose-Einstein condensation where the He-4 atoms, which are bosons, condensed into a single momentum state and acquire quantum mechanical coherence over macroscopic distances. Recent torsional oscillator measurements of solid helium confined in porous media [1,2] and in bulk form [3,4] found evidence of non-classical rotational inertia indicating superfluid behavior below 0.2K. These measurements have been replicated in four other laboratories. Specific heat results will also be discussed. This work is done in collaboration with Eunseong Kim, Tony Clark, Xi Lin and Josh West and it is supported by the (U.S.) National Science Foundation. "Strongly-Coupled Plasmas and Gauge/String Duality" Larry Yaffe , University of Washington The quark-gluon plasma produced in relativistic heavy ion collisions has been found to behave like a low viscosity fluid whose properties are very different from those of a weakly interacting gas of quarks and gluons. It is an example of a strongly coupled, strongly correlated system, for which perturbative approximation techniques are not adequate. However, it is now understood that certain 3+1 dimensional gauge theories, similar to QCD, may be exactly reformulated as string theories in higher dimensions --- and this "gauge/string duality" is easiest to use in the strongly coupled limit of the gauge theory. 
Under this duality, properties of a high temperature, strongly coupled plasma are directly related to gravitational dynamics around 4+1 dimensional black holes. Using this duality, it is possible to compute, reliably, dynamical properties such as viscosity, energy loss of heavy particles, and emission spectra in certain strongly coupled gauge theory plasmas. This talk will describe this progress and discuss its applicability to the quark-gluon plasma produced in current and upcoming experiments.
"An Ultrafast Quantum Camera - Observing and Controlling Molecular Dynamics in Real Time" Thomas Weinacht , SUNY Stony Brook
Ultrafast laser pulses allow us to 'take pictures' of atoms and molecules on their natural timescales ($\sim 10^{-14}$ s). They can also be used to exert very strong and controlled forces, allowing us to direct the dynamics of the system they interact with. I will describe a series of experiments which aim to control and measure the wave function for a molecule as it dissociates. The ultimate aims of our efforts are to use shaped laser pulses as 'photonic reagents' and to make 'molecular movies', which depict the evolution of the molecular wave function as a function of time.
"The CMS Experiment at the CERN Large Hadron Collider" Dr. Daniel Green , Fermi National Accelerator Lab
The US is heavily involved in the Compact Muon Solenoid (CMS) experiment at the CERN Large Hadron Collider (LHC). This new facility is explicitly designed to successfully search for the Higgs boson and generally to search for new symmetries of Nature such as Supersymmetry. The status of the LHC accelerator and the CMS experiment will be discussed as well as studies of the physics potential of CMS.
"A New Search on Neutron Electric Dipole Moment" Haiyan Gao , Duke University
A new experiment is being planned to search for the neutron Electric Dipole Moment (EDM) with an unprecedented sensitivity. The proposed search aims at a two orders of magnitude improvement over the current experimental limit. A search for a non-zero value of the neutron EDM is a direct search for time reversal symmetry (T) violation. It provides a unique insight into CP violation because of the CPT theorem. The Standard Model (SM) prediction for the neutron EDM is below the current experimental limit by six orders of magnitude. However, many proposed models of the electroweak interaction which are extensions beyond the SM predict much larger values of the neutron EDM. The new experiment has the potential to reduce the acceptable range of predictions by two orders of magnitude. Furthermore, if new sources of CP violation are present in nature beyond the Standard Model and are relevant to hadronic systems, this experiment offers a unique opportunity to measure a non-zero value of the neutron EDM. The current understanding of baryogenesis suggests that other sources of CP violation might exist in nature beyond the Standard Model and beyond what has been observed so far. To explain the baryon number asymmetry in the universe through the grand unified theory or electroweak baryogenesis, substantial New Physics in the CP violation sector is required. In this talk, I will discuss this new experiment following a brief review of previous neutron EDM experiments.
"Topological defects in nanomagnets" Oleg Tchernyshyov , Johns Hopkins
The interplay of local and long-range forces in ferromagnets leads to the formation of mesoscopic domains with sharp boundaries (domain walls).
The physics changes drastically when the magnet size becomes smaller than the width of a domain wall. In submicron magnets the magnetization forms intricate smooth patterns that involve the more exotic topological defects: integer and fractional vortices, skyrmions, merons, and magnetic monopoles. I will describe recent experiments with these entities and our attempts to describe their static and dynamic properties.
"From the Big Bang to the Nobel Prize and Beyond" John Mather , Goddard Space Flight Center
The Cosmic Background Explorer (COBE) satellite, proposed in 1974 and launched by NASA in 1989, measured the cosmic microwave and infrared background radiation from the Big Bang and everything that happened later. The COBE team made three key measurements: the spectrum of the cosmic microwave background radiation (CMBR) matches a blackbody within 50 ppm (rms), the CMBR is anisotropic, with 10 ppm variations on a 7° angular scale, and the cosmic infrared background from previously unknown objects is as bright as all the known classes of galaxies. The first measurement confirmed the Hot Big Bang theory with unprecedented accuracy, the second is interpreted as representing quantum mechanical fluctuations in the primordial soup and the seeds of cosmic structure and the basis for the existence of galaxies, and the third is still not fully understood. I will describe the project history, the team members, the hardware and data processing, the major results, and their implications for science, and end with the outlook for future progress with new background measurements and large telescopes such as the James Webb Space Telescope. I will show recent progress on building the JWST, with illustrations of the key technologies.
"Quantum simulations and quantum computation with atoms in optical lattices" David Weiss , Penn State University
I will review the physics of 1D Bose gases, show how we experimentally implement them, and describe experiments that confirm the longstanding exact theory across all coupling regimes. I will also describe quantum Newton's cradles, which are out-of-equilibrium 1D gases that act unlike any other many-body system. Finally, I will show how we image 3D arrays of hundreds of single atoms, an important step on the way to making a neutral atom quantum computer.
"The physics of nanoelectronic devices" Avik Ghosh , University of Virginia
Nanoscale conductors, such as ultrasmall molecular wires, allow us to test our understanding of fundamental non-equilibrium transport physics, as well as explore new device possibilities. I will start with a generic treatment of current flow through a single energy level, and then generalize to include realistic bandstructure models and a full quantum kinetic theory of current flow. This allows us to interpolate between semi-empirical models that provide quick physical insights, and 'first-principles' models with no adjustable parameters. Using this formalism, we can quantitatively explain various experimental features and fundamental performance limits of molecular electronics. In the above treatments, we treat electrons as weakly interacting, operating in the 'mean field limit'. However, ultra-short molecules are unique in that they often possess large electronic and vibronic correlation energies with prominent experimental signatures. Strong correlation requires a completely different transport approach in the molecular many-body Fock space that accounts for non-perturbative interactions.
I will show that many features such as negative differential resistance, Coulomb blockade, hysteretic switching and random-telegraph noise can be understood in terms of the dynamics of such many-body levels and their state filling under bias. A lot of the applications of nanoelectronics could involve bridging the mean-field and strongly correlated regimes, where the theory becomes particularly challenging. For instance, the tunable quantum coupling of current flow in present-day silicon transistors with engineered molecular adsorbates could lead to devices operating on completely novel principles.
"The Evolution from BCS to Bose-Einstein Condensation: Superfluidity in Metals, Neutron Stars, Nuclei, and Ultra-Cold Atoms" Carlos Sa de Melo , Georgia Tech
Superfluidity is a very interesting phenomenon that has been found in metals, neutron stars, nuclei and more recently in ultra-cold atoms. For a given metal, neutron star, or nucleus there is essentially "zero" tunability of the particle density or interaction strength, and thus superfluid properties cannot be controlled at the turn of a knob. However, in ultra-cold Fermi atoms the interaction strength and the particle density can be tuned to change superfluid properties qualitatively and quantitatively. This tunability allows for the study of the evolution from BCS (weak coupling) superfluidity of large Cooper pairs to Bose-Einstein condensation (strong coupling) superfluidity of tightly bound molecules. I will discuss the BCS to BEC evolution in s-wave and p-wave angular momentum channels, and will conclude that this evolution is just a crossover phenomenon for s-wave, while a quantum phase transition takes place for the p-wave case.
"Molecular Electronics - Past, Present and Future"
"New magnetic twists for multiferroicity" Sang-Wook Cheong , Rutgers University
"New Ideas in Neutrino Physics" Dan Kaplan , Illinois Institute of Technology [Host: E. Craig Dukes]
The existence of neutrinos -- neutral, massless, almost-noninteracting counterparts of the electron -- was first proposed in 1930, in response to apparently incomprehensible experimental results. Neutrinos have been a puzzle ever since! One indicator of their importance is the unusually large number of Nobel prizes awarded for neutrino work, the most recent in 2002. A brief account of the neutrino story will lead to a discussion of current issues in neutrino physics, including the intriguing possibility that neutrino interactions explain the existence of all matter in the universe. Techniques for the future study of neutrino physics will be described.
"N/A" Thanksgiving Recess , N/A [Host: N/A]
"Searching for the mechanism of electroweak symmetry breaking" Csaba Csaki , Cornell University [Host: P.Q. Hung]
The standard model of particle physics has been very successful at explaining all collider experiments to date. However, it does not give a well-motivated explanation for why the electroweak symmetry should be spontaneously broken. Recently several new possible theories have been suggested to cure this shortcoming. I describe the motivations and the consequences of some of these new theories, including large and warped extra dimensions, Higgsless and little Higgs models.
"Potential Room Temperature Superconductivity in Metallic Nanoclusters" Vladimir Kresin , LBL
Superconductivity is a peculiar state of matter which is manifested in such diverse fields as solid state physics, nuclear physics, astrophysics, biology, etc.
In this talk we focus on small metallic nanoclusters ($N\approx 10^2$–$10^3$, where $N$ is the number of free carriers) which contain delocalized electrons. These electrons form shells similar to those in atoms or nuclei. It turns out that under special, but perfectly realistic conditions, superconducting pairing is very strong and can lead to high values of $T_c$. We have shown that for realistic sets of parameters one can observe very high values of $T_c$ ($T_c\approx 10^2$ K) as well as a strong modification of the energy spectrum. Nanoclusters should form a new family of high temperature superconductors and, in principle, it should be possible to raise $T_c$ up to room temperature. We have proposed specific experiments aimed at detecting this phenomenon (e.g. spectroscopy and magnetic properties). This phenomenon is quite promising for the creation of high-$T_c$ superconducting tunneling networks.
"Conditional measurements in cavity QED" Luis Orozco , University of Maryland
One of the striking differences between the classical world and the quantum world is the measurement process. This opens interesting possibilities to study how a quantum system evolves after a measurement. We have implemented a cavity QED system, where an atom or a few atoms interact with a single mode of the electromagnetic field. This interaction is such that a quantum fluctuation, the emission of a single photon, is a large event. We are studying, by conditional measurements, the dynamics of the cavity QED system as it returns to steady state after a fluctuation and can now relate this to some of its intrinsic properties such as entanglement.
"Almost everything that you'd like to know about frustrated magnets"
"Magnetically induced electronic states in two-dimensional superconductors" Jongsoo Yoon , University of Virginia
"The Plasma Physics of Quark-Gluon Plasma (a theorist's perspective)" Peter Arnold , University of Virginia
"Jefferson Lab Hall A: A neutron spin structure program" Nilanga Liyanage , University of Virginia
"An Atom Interferometer Using Bose-Einstein" Cass Sackett , University of Virginia
"A Bose Condensate in an Optical Lattice: cold atoms meet solid state"
An atomic-gas Bose-Einstein Condensate, placed in the periodic light-shift potential of an optical standing wave, exhibits many features that are similar to the familiar problem of electrons moving in the periodic potential of a solid-state crystal lattice. Among the differences are that the BEC represents a wavefunction whose coherence extends over the entire lattice, with what is essentially a single quasi-momentum, and that the lattice potential can be turned on and off or accelerated through space. Experiments that are not easily done with solids are often straightforward with optical lattices, sometimes with surprising results.
"Nonclassical Light and Glauber's Theory of Optical Coherence" Howard Carmichael , University of Auckland, New Zealand
The year 2005 celebrated the seminal contributions of Albert Einstein to physics, including his treatment of the photoelectric effect and his introduction of the quantum of light. The same year saw Roy Glauber awarded the Nobel Prize in Physics -- "for his contribution to the quantum theory of optical coherence". My talk will explore the connections between Glauber's and Einstein's work, while at the same time posing the question... in what sense, exactly, does light act as a particle and not a wave?
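A concrete handle on the closing "particle or wave" question, in Glauber's own framework (standard quantum optics, added here for context): the normalized second-order coherence at zero delay,
$$g^{(2)}(0)=\frac{\langle \hat a^{\dagger}\hat a^{\dagger}\hat a\hat a\rangle}{\langle \hat a^{\dagger}\hat a\rangle^{2}},$$
satisfies $g^{(2)}(0)\ge 1$ for any classical field, whereas a single-photon state gives $g^{(2)}(0)=0$; observed antibunching, $g^{(2)}(0)<1$, is therefore a distinctly nonclassical, particle-like signature of light.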
"The National Science Foundation, One Particle Physicist's Experience" Randy Ruchti , Notre Dame University and NSF During a recent term of service on the High Energy Physics Advisory Panel (HEPAP), which jointly advises DOE and NSF on particle physics matters, the speaker was persuaded of the importance of direct participation by active research scientists in the process of federal funding for research and education programs. This view has motivated a temporary term of service by the speaker at the National Science Foundation. The presentation will provide a view of how the NSF conducts its business in Elementary Particle Physics, from the perspective of an university-based experimentalist and faculty member serving as a visiting program officer. "Topological Quantum Computation" Paul Fendley , University of Virginia "The Unusual Symmetry of Ferroelectricity in Incommensurate Magnets" Brooks Harris , University of Pennsylvania The coupling between electric and magnetic properties in condensed matter systems is usually very weak. In part this may be viewed as being a result of the fact that the electric and magnetic fields exhibit different symmetries which do not naturally couple to one another. Here I discuss a class of materials which display a very unusual phase transition in which magnetic ordering and the development of ferroelectricity occur simultaneously. This coupling has drawn great interest recently mainly due to various experimental results for this fascinating coupling whereby magnetic ordering induces erroelectricity. My main objective is to understand the phenomonology of this magnetoelectric coupling via a Landau expansion whose consequences depend crucially on the symmetry properties of the magnetic order and the consistency of this order with ferroelectric ordering. Even this simple phenomenological theory explains a number of nontrivial ferroelectric properties which have been observed. I discuss briefly the advantages of such a symmetry analysis versus specific microscopic models. "The Ocean Tides: Myth and Truth from Galileo to GPS" Vittorio Celli , University of Virginia [Host: Steve Thornton] It is widely believed, and taught in Physics courses, that two high tides of equal magnitude occur daily. In reality, the tides are a complicated sloshing of the oceans with three main periods: M2 (12.82h) due to the Moon, S2 (12h) due to the Sun, and K1 (23.93h) due to both and to the tilt of the Earth's axis. Following Newton, one can compute the magnitude of the tidal forces, but an understanding of tide dynamics, based on the work of Laplace and Lord Kelvin, is incomplete even today. Over most coastlines M2 is dominant, but in New Orleans, for instance, there is only one high tide each day. In the North Atlantic, the M2 tide runs up the European coast and down the American coast, circling a mid-Ocean point of zero amplitude. This "amphidromic" behavior is seen in many basins, and is due to the Coriolis force acting on tidal currents. Thus, the ocean tides are a direct proof of the Earth's rotation, as Galileo maintained. In fact, his kinematic theory of the "ebb and flow of the waters", based on the Copernican motions of Earth, Sun and Moon, is basically correct, although incomplete. An accurate global picture of tidal amplitudes (but not yet of tidal currents) has been obtained by GPS satellites, and is in turn relevant to space age science and technology. "Experiments With Polarized 3He at the Mainz Microtron (MAMI)" Daniela C. 
Rohe , University of Basel, Switzerland Polarized ³He is an interesting target for nuclear physics experiments due to its particular spin structure, which allows its use as a polarized neutron as well as a polarized proton target. Further, the nucleus is simple enough that exact solutions of its wave function and the reaction channels are available. On the other hand, all important interactions between the three nucleons are present and can be studied. Polarization experiments open up new degrees of freedom and find a wide field of application due to their particular sensitivity as well as due to the advantages of asymmetry measurements in general. In this talk I will discuss polarized target technology and will explain the technique and installation used to polarize ³He for the nuclear physics target at MAMI. The emphasis of the talk is on the results achieved so far at MAMI with polarized ³He. Their purpose is twofold: to test the reliability of the theoretical description of ³He and to measure the electric form factor of the neutron. An outlook on ongoing and future research will be given. "What Have We Learned from Polarized Deep Inelastic Scattering?" Xiaochao Zheng , MIT Since the 1980s, developments in polarized electron sources and polarized target techniques have brought the experimental study of the nucleon into a new era: the spin structure of the nucleon has been explored with polarized electron scattering. Now twenty years have passed. What have we learned from the data? Do they agree with predictions from quantum chromodynamics (QCD), the theory for strong interactions? And what about predictions from constituent quark models? I will start with an introduction to the study of hadron structure using lepton deep inelastic scattering and give an overview of world data and what we have already learned about nucleon structure. Then I will present results from a precision experiment completed at Jefferson Lab on the neutron spin in the valence quark region, and discuss the future of this measurement. The last 10 minutes of the talk will be devoted to a different topic: using polarized electron scattering to test the electro-weak Standard Model and hadronic structure, and introducing the PV-DIS program that is just being launched at Jefferson Lab. The talk will be given at a non-expert level. "Using Parity Violation to Probe Strange Quarks in the Nucleon" Kent Paschke , University of Massachusetts The basic nuclear building block of our day-to-day world, the nucleon, is well described in terms of quarks of only two varieties: the up and down quarks. However, the nucleon is more complex than the apparent success of the constituent quark model would imply. One example of this complexity is the possible role of the strange quark in the nucleon. Precision measurements of parity violation in electron scattering, a symmetry violation which is forbidden under the electromagnetic interaction but allowed by the weak force, can be used to disentangle the contributions of strange quarks from other components of the nucleon electric and magnetic structure. I will report new results on the most precise measurement to date of parity violation in electron-nucleon scattering, from the HAPPEX collaboration at Thomas Jefferson National Accelerator Facility, and discuss implications for the question of strange quarks in the nucleon. "The Design, Growth, Discovery and Characterization of Novel Intermetallic Compounds" Paul C.
Canfield , Ames Laboratory and Department of Physics and Astronomy, Iowa State University In this talk I will review the motivations as well as the means for the design, growth or search for novel materials. I will provide examples of what physics you can pursue, ranging from superconductivity in MgB₂, to the spin-glass state in rare earth based quasicrystals, to field induced quantum criticality in Yb-based intermetallics. The emphasis will be on the joy of, and tools for, discovery. "Can Quarks in a Polarized Nucleon Tell Left from Right ?" Xiaodong Jiang , Rutgers University In the strong interaction, which follows the parity-conserving theory of Quantum Chromodynamics (QCD), can quarks in a polarized nucleon manage to tell left from right ? For "collinear quarks" in a longitudinally polarized nucleon, the answer is simply NO. However, when a nucleon's spin is oriented transverse to its momentum, quarks inside can figure out left from right through their transverse spin distributions (transversity) and through their angular motions. Recent spin physics experiments from HERMES at DESY and COMPASS at CERN have revealed such an amazing behavior of quarks for the first time, leaving us with even more questions. Two upcoming Jefferson Lab experiments are designed to provide more answers as to how exactly u- and d-quarks tell left from right in a transversely polarized nucleon. "Recent Results and Future Prospects in Neutrino Physics" Peter Shanahan , Fermi National Accelerator Laboratory More than 40 years ago, a Nobel Prize-winning experiment showed that neutrinos come in distinct flavors: neutrinos created in association with muons produced only muons when they interacted, and not electrons. Over the past decade, however, a series of experiments have established that the flavor of a neutrino does indeed change with time. The most likely explanation of this phenomenon is neutrino flavor oscillation, requiring a finite neutrino mass and therefore an extension of the Standard Model of Particle Physics. Related physics at energies far beyond direct experimental reach may well explain the preponderance of matter over antimatter in the universe. The impact of accelerator-based experiments on our understanding of neutrino masses and flavor will be discussed, with an emphasis on current and anticipated experiments at Fermilab. "News from the Energy Frontier" Yuri Gershtein , Florida State University It is an exciting time for particle physics. Fermilab's Tevatron, currently the highest-energy accelerator, has delivered more than 1 fb⁻¹ to the experiments (CDF and DZero). In just over a year, the Large Hadron Collider (LHC) at CERN will turn on, moving the energy frontier by almost an order of magnitude - an event the likes of which we have not seen in almost three decades. I will talk about the fundamental questions that are addressed by doing physics at the energy frontier, present some new results from the DZero experiment and describe the status and prospects of the CMS detector at the LHC. "Nucleon Structure Studies with Polarized Photons and Polarized Nucleons" Oscar Rondon-Aramayo , UVA The quark and gluon structure of the nucleons (protons and neutrons) was established by illuminating atomic nuclei with high energy unpolarized real and virtual photons. The interactions between quarks follow "scaling" rules that were also established with unpolarized photons. With polarized photons it is possible to explore the nucleon structure even further.
Polarized photons have been used to determine that quarks carry only 1/3 of the spin, but the distribution of spin among types ("flavors") of quarks is still under study. And the "missing" spin carriers are still being investigated. The interactions between quarks and gluons have barely been explored experimentally. Polarized photons can also uncover the details of those interactions and relate them to calculations based on Quantum Chromodynamics - QCD, the fundamental theory of strong interactions. There is an extensive program of nucleon structure studies with polarized photons and polarized nuclear targets at Jefferson Lab with the goal of answering some of these and other related questions. Highlights of the Hall C component of this program will be presented. "Whither Particle Physics" Tom Ferbel , DOE/University of Rochester "Adventures at the Terascale" Sally Dawson , Brookhaven National Laboratory Exciting opportunities are in store for particle physics over the coming decade, with new tools and experiments poised to explore the frontiers of high energy, the smallest distance scales, and processes of great rarity. Einstein's dream of a unification of all forces will be tested at new energy scales and with greater precision than ever before. The Large Hadron Collider at CERN will begin the exploration of higher energy scales than have been tested previously, and a possible future high-energy lepton collider will continue our explorations. "The MESSENGER Mission to Mercury: Science and Status" Ralph McNutt , Johns Hopkins University "The Center for Nanophase Materials Sciences" Linda Horton , ORNL The Center for Nanophase Materials Sciences is the newest user facility at Oak Ridge National Laboratory. Located adjacent to the Spallation Neutron Source, the CNMS is one of 5 nanoscience user facilities being built by the Department of Energy. CNMS is open to scientists and engineers for research to understand the phenomena that control the properties of nanoscale materials. CNMS emphasizes synthesis and characterization, including neutron scattering and electron microscopy. One important capability is a 10,000 sq ft nanofabrication clean room facility. CNMS will also integrate theory and modeling with the experimental program, a critical aspect of the research. The presentation will discuss the capabilities of the new facility, the scientific program, and opportunities for research and collaboration. Joint Astronomy-Physics Colloquium "Preliminary Results on the Nature of the Dark Energy from the ESSENCE Supernova Survey" Christopher Stubbs , Harvard University The discovery of the accelerating expansion of the Universe provides clear evidence of physics beyond the standard model. Our current challenge is figuring out what it means! I will describe the initial results we have obtained in the ESSENCE supernova survey. This project was designed to detect 200 type Ia supernovae in the redshift range 0.2 < z < 0.8, with the goal of measuring the equation of state parameter of the Dark Energy. We are paying particular attention to potential sources of systematic errors that might afflict the measurement, and I will describe some of the steps we are taking to both control and quantify these effects.
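As a rough guide to how the equation-of-state parameter w enters such a supernova measurement, recall the textbook luminosity-distance relation for a flat universe (this is an editorial aside with assumed fiducial parameters, not part of the ESSENCE abstract):
$$d_L(z)=\frac{c\,(1+z)}{H_0}\int_0^z\frac{dz'}{\sqrt{\Omega_m(1+z')^3+(1-\Omega_m)(1+z')^{3(1+w)}}},\qquad \mu=5\log_{10}\frac{d_L}{10\ \mathrm{pc}}.$$
Comparing measured distance moduli $\mu$ of type Ia supernovae at 0.2 < z < 0.8 with this prediction constrains w; for $\Omega_m\approx 0.3$, shifting w from $-1$ to $-0.9$ changes $\mu$ at $z\approx 0.5$ by only a few hundredths of a magnitude, which is why the control of systematic errors stressed above matters so much.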
"Exploding Stars, Neutrinos, and Nucleosynthesis" Gail McLaughlin , North Carolina State University The subject of supernovae is a unique combination of many different branches of physics and there are different ways in which we can probe the inner workings of these objects. Beyond examining light curves from the explosion, one can study nucleosynthesis products and neutrino spectra. The discovery of a whole new type of supernova, one which creates a gamma ray burst, has created a new frontier in research on neutrinos and element synthesis. I will discuss the role neutrinos play in determining whether the heaviest elements, such as uranium and thorium, are produced in these environments. "Nuclear Spin-Electron Spin Interactions in the Three-Atom System H2N" Art Brill , UVA H2N has one unpaired electron and three nuclei of non-zero spin. The four H2N isotopes from 1H, 2H, 14N and 15N have corresponding sets of hyperfine interactions. Measurements of these constrain calculations of electronic wavefunctions and energies, and provide basic knowledge for application to more complex systems. Nuclear spin-state mixing arises from the off-diagonal elements of the nuclear energy matrix, e.g. Mxx ≡ σκ 〈ψ|Σ (Skzx2kn/r5kn + Sk'zx2k'n/r5k'n|Ψ〉 (Airne and Brill, Phys. Rev.A 63 052511). The principle hyperfine A-values can be expressed in terms of the M's, e.g. Azz = AFermi - (4/3σ)( Mxx + Myy - 2 Mzz), thereby simplifying the energy matrices. In the absence of nuclear spin-state mixing (i.e. each state pure mI) there are, e.g. 10 epr transitions in D215N and 15 in D214N, all ΔmI = 0 fully allowed. In the presence of mixing there are 243 in D215N and 729 in D214N, with large differences in probability among transitions. Because of numerous, at least partially allowed, overlapping transitions, useful information can be obscured in H2N magnetic resonance spectra. Research is required to arrive at effective experimental conditions. The wide range of transition probabilities will cause H2N resonances to exhibit a corresponding range of microwave power saturation behavior. Simulations display remarkable effects which call for experimental verification by employing a wide range of powers. The nuclear Zeeman interaction (proportional to B) perturbs both the energy and state mixing of nuclear levels, thereby affecting the separation and probability of resonances. Of special interest are the fields Bcross at which pairs of hyperfine levels draw closest. A spectrometer with microwave frequency scanning at fixed B would be useful for centers like H2N in which on-diagonal hyperfine energy matrix elements depend significantly upon B. "Physics and the CMS Detector at the CERN Large Hadron Collider" Roger Rusach , University of Minnesota In 2007 a new proton-proton collider, the LHC, will turn on and a whole new energy domain will become accessible to experiment. Indications of what we might observe come from current measurements in experiments in high-energy physics, astrophysics and cosmology. We will discuss what problems in physics might be resolved with data from the LHC, describe how the detectors work and what are the special challenges associated with building a detector of the scale required for this energy region. "Glassy Metals – Complexity Made Simpler " Joe Poon , UVA Although ubiquitous in nature and technology, the microscopic study of liquids and glasses lags far behind that of crystals and quasicrystals. 
This is because liquids and glasses do not exhibit long-range order, which frustrates theoretical description. To date, the common approaches for modeling the dynamics and glass transition of liquids are based on the potential energy landscape paradigm. Theoretical approaches such as the mode-coupling theory and replica method, although successful in advancing our understanding of the dynamics and thermodynamics of the liquid-glass transition, have not provided specific predictions of the important parameters of the glassy state. Recently, a simple complementary model based on atomic-level fluctuations in the amorphous network has been successfully applied to the computation of these parameters. The latter approach may also provide a pathway to a more general microscopic understanding of liquids and glasses. The rest of this talk will focus on glassy metals as futuristic metals with certain promising and enabling properties. "Oxide-Semiconductor Materials for Quantum Computation" Jeremy Levy , University of Pittsburgh Quantum computers, as yet undeveloped, are believed to be able to efficiently solve strategically important problems like number factorization, database search, and the Schrodinger equation itself. The staggering potential of these and other applications has led to a worldwide race to build the first working quantum computer. The state of experimental quantum computation is primitive--neither quantum bits (qubits) nor quantum gates (qugates) have been demonstrated in a scalable form. In this talk, I will give an overview of the new field of quantum information science and technology, and will describe a proposal to create a quantum information processor using ferroelectrically coupled electron spins in silicon. This approach combines the latest advances in nanostructure and heterostructure design, ultrafast optical control, measurement science and signal processing. Progress toward these goals, pursued within the Center for Oxide-Semiconductor Materials for Quantum Computation (COSMQC), will be described. This work is supported by DARPA QuIST through ARO contract number DAAD-19-01-1-0650. "Probing the Geodynamo" Peter Olson , Johns Hopkins University - Earth and Planetary Sciences "Clusters: a route to study stability at nanometer scale" Catherine Brechnigac "The Importance of Spin in Particle Physics" Gary Goldstein , Tufts University [Host: Simonetta Liuti] "The CMS Experiment" Sarah Eno , University of Maryland This will be a joint Math/Physics/History colloquium. "All Was Light: Isaac Newton's Revolutions" Mordechai Feingold , Caltech "The Chemistry of the Universe" William Klemperer , Harvard University "Charge Particle Radiography for National Security" Chris Morris , Los Alamos National Laboratory Intermediate energy protons are being used for very fast (flash) radiography. Proton beams have been shown to provide a flexible time format, excellent position resolution, and adjustable contrast for a wide range of high-explosive-driven experiments. These experiments are playing an increasingly important role in the nuclear stockpile stewardship program. An outgrowth of this work has been the development of cosmic ray radiography for cargo and vehicle inspection. An overview of charged-particle radiography and its uses for national security applications will be presented.
"Understanding The Columbia Shuttle Accident and NASA's Challenges Posed by Discovery" Doug Osheroff , Stanford University On 1 February 2003 space shuttle Columbia broke up during re-entry over the plains of East Texas. The speaker was a member of the board appointed to investigate that disaster. It was ultimately found that the physical cause of the accident was a piece of thermally insulating foam that struck the leading edge of the left wing during launch. This foam had a density of just 1/30th the density of water, yet it created a hole estimated to be approximately 25 cm square, which allowed superheated gases to enter the wing on re-entry, consuming the interior of the wing in a matter of a few minutes. The final report showed that NASA had that such foam strikes had occurred before, but continued to fly in the face of clear and persistent danger. The speaker will also discus the organizational aspects of this accident, many of which are common to all large organizations, and the future of the program in light of Discovery's foam shedding. "The Spallation Neutron Source: A Powerful Tool for Materials Research" Thom Mason , Director, Spallation Neutron Source - Oak Ridge National Laboratory The wavelengths and energies of thermal and cold neutrons are ideally matched to the length and energy scales in the materials that underpin technologies of the present and future: ranging from semiconductors to magnetic devices, composites to biomaterials and polymers. The Spallation Neutron Source will use an accelerator to produce the most intense beams of neutrons in the world when it is complete in 2006. The project is being built by a collaboration of six U.S. Department of Energy laboratories. It will serve a diverse community of users drawn from academia, industry, and government labs with interests in condensed matter physics, chemistry, engineering materials, biology, and beyond. [Coffee will be served in Room 205 at 3:30 PM] "Towards a Quantum Laboratory on a Chip" Professor Theodor Hansch , Max Planck Institute for Quantum Optics "Tidbits About Qubits: Spin Computation in Nanostructures" Sankar Das Sarma , University of MarylandCondensed Matter Theory Center - I will provide an introduction to the emerging field of spintronics and spin qubits in this talk. Active control of carrier spin in nanostructures of semiconductors and other electronic materials is projected to lead to new device functionalities in the future. In particular, it may be possible to envision memory and logic operations being carried out on the same 'spintronic' chip. I will discuss various aspects of fundamental physics related to this new research area of spin electronics with the particular emphasis on localized electron spins in semiconductor nanostructures, such as GaAs quantum dots and P donors in Si. A revolutionary possibility in the (perhaps, far) future is using the natural two-level quantum dynamics of electron spin to create robust quantum bits ('qubits') which could be used to carry out solid state quantum information processing or quantum computation. I will discuss in details the questions of entanglement, decoherence, quantum error correction, and quantum gates in semiconductor nanostructure-based solid state spin quantum computer architectures, critically discussing from a theoretical perspective the current status of the field and the prospects for carrying out large-scale quantum computation using solid state spin qubits. 
Aspects of fundamental spin physics in the solid state environment will be emphasized in this talk. This research has been supported by LPS, ARDA, NSA, ARO, DARPA, and ONR. Please see http://www.physics.umd.edu/cmtc for the relevant publications. "In Search of New Physics: The Clues From Charm" Marina Artuso , Syracuse The study of the interactions between the fundamental building blocks of matter is a critical component of our understanding of the history of the universe and its dynamics. My talk will describe how our experimental study of charm quark decays may test key features of our present understanding of these interactions, and, possibly, open a window towards new physics. The experimental data discussed are taken at the CESR electron-positron collider. "The Asymmetry Between Matter and Anti Matter - or - How to Know if it is Safe to Shake an Alien's Hand?" Klaus Hon , Ohio State University Most of us have looked at the spectacular pictures taken by the Hubble Space Telescope. Galaxies, nebulae, supernovae -- but there is something peculiar about these images. Wherever we look in space we see only matter. No significant quantities of anti-matter have been found. Since we believe equal amounts of matter and anti-matter were originally produced, we must conclude that there is an asymmetry between particle and anti-particle decays. In the laboratory, however, nature always seems to obey the particle-antiparticle symmetry, with one known exception. Almost 40 years ago a small difference was found in the neutral kaon system. But the nature of this system made it extremely difficult for both theorists and experimentalists to extract a clear picture of this effect. For years there has been great hope in the particle physics community that a large matter-antimatter asymmetry could be observed in a new system - the weak decays of massive B mesons. The past decade has seen a vigorous experimental effort to produce the large quantities of B mesons required to discover the cause of this asymmetry. Particle accelerators have been upgraded and new detectors constructed. As we enter the Golden Age of B physics, nearly a billion B meson decays have been recorded by these experiments. I will review some of the old questions that have been answered and discuss some of the new puzzles that have been uncovered. "The Search For the Exotic 5 Quark Baryons" Marco Mirazita , INFN, Laboratori Nazionali di Frascati All the well-established particles can be classified using the constituent quark model as quark-antiquark states for mesons and 3-quark states for baryons. However, QCD does not forbid the existence of more complicated internal structures. All the states with quark content different from quark-antiquark or 3-quark are called "exotic". Exotic particles have been searched for for many years, but no positive results were found until 2003, when several experimental groups reported the first evidence (albeit with low statistical significance) for an exotic pentaquark state, the Theta+(1540). On the other hand, several other experiments did not find positive evidence for this state, thus suggesting that, if the Theta+ exists, it should be a really exotic particle. After these first experimental results, several laboratories planned new high-statistics experiments, such as those performed and presently under analysis at Jefferson Laboratory. The aim of these experiments is first of all to confirm the existence of the Theta+(1540), and then to establish its properties unambiguously.
In this talk, a review of the experimental situation will be given, along with a discussion of what we need in order to conclude that the first exotic baryon has been discovered. "Production of Microscopic Black Holes by Cosmic Rays" Al Shapere , University of Kentucky Cosmic ray events may create black holes if extra dimensions exist and are sufficiently large. In particular, neutrino cosmic rays may produce black holes deep in the atmosphere, initiating characteristic quasi-horizontal showers at a rate far above the standard model rate. The fact that no such showers have been observed to date places an upper bound on the size of these extra dimensions. Continued nonobservation of such events over the next few years would improve these bounds significantly, and sharply limit the rate of black hole production at the LHC. On the other hand, if black hole mediated showers are observed in the next few years, they could provide the first experimental evidence for extra dimensions, string theory, and the formation and decay of microscopic black holes. "High-Temperature Superfluidity in Ultra-Cold Fermi Gases" J. E. Thomas , Duke University An optically-trapped Fermi gas of ⁶Li atoms becomes strongly interacting when it is tuned to a Feshbach scattering resonance. Such a gas is predicted to be a very high temperature superfluid - the transition temperature is a large fraction of the Fermi energy. I will describe experimental evidence for superfluidity which arises in anisotropic expansion of the gas, in the heat capacity, and in collective damping. These cold Fermi gases provide desktop analogs of exotic, strongly-interacting fermions in nature, from high temperature superconductors and neutron stars to quark-gluon plasmas. "Planetary Models From the Middle Ages" E. Paschos , University of Dortmund, Germany A small and compact article from AD 1300 describes models for the planets and the moon. It proposes epicyclic theories which deviate from Ptolemy's Almagest. The colloquium reviews the models and their accuracy, then compares them with Arabic models of that time as well as with Newtonian theory. It also demonstrates how scientific knowledge was preserved in the Middle Ages and was transmitted to Italy to spark the beginning of the Copernican Revolution. "Acoustic Inertial Confinement Nuclear Fusion - Status and Challenges" Rusi P. Taleyarkhan , Purdue University Energetic bubble implosions can generate sonoluminescence (SL) light flashes along with extreme states of compression and temperature. In cavitation experiments with chilled deuterated acetone, neutron and tritium nuclear emissions were detected, indicative of thermonuclear fusion. The neutron emissions were time-correlated with SL light emission. The gamma ray emissions were delayed as would be expected from neutron slowing down and capture. Control experiments with normal acetone did not result in tritium activity or neutron emissions. Fusion was observed during experiments in which the nanoscale nucleation of bubbles was induced in chilled deuterated acetone using a pulsed neutron generator as well as with an isotope neutron source. Video images clearly indicate the existence of complex bubble clusters when bubble fusion occurs, and also the formation of comet-like structures which were detrimental to bubble nuclear fusion.
Hydrodynamic shock code simulations have supported the experimental findings and indicate temperatures during implosion in the 10⁸ K range along with Gbar shock pressures in the imploding bubbles within bubble clusters, but not in single bubble environments. Recent results of experiments will be presented along with discussions related to key technical challenges concerning modeling and experimentation. "NANOMACHINES: From Atomic Lattice Gears to Cystic Fibrosis" Rich Superfine , University of North Carolina The promise of nanotechnology will be realized through the interplay of new tools and the appreciation of the lessons from biological systems. The challenge of nanomachines ranges from the understanding of the interactions between atomic scale systems to the harnessing of the force generation capabilities of biological systems. We are developing a suite of tools for nanoscale science including the combination of force measurement and manipulation systems in conjunction with scanning probe, electron and optical microscopy. For the basic elements of nanomachines, we have studied gears, springs and electrical contacts of carbon nanotubes. Through the study of carbon nanotube dynamics we have observed that atomic lattices can act like gears in promoting the rolling of nanotubes. Most recently, we have begun a study of nanotubes as torsional springs, have measured the torsional spring constants in freely suspended paddles and have observed strain hardening in individual nanotubes. Finally, biology has developed its own nanomachines and microfluidic systems that include beating cilia to produce flow and complex closed-loop feedback mechanisms. We have begun to study this system within a cell culture using a new 3D manipulation system, and will discuss our early results in quantifying the forces applied by beating cilia and studies of the resulting flow. Joint Colloquium: Physics-Astronomy. "Life, the Universe, and Nothing: The Future of Life in an Ever-Expanding Universe" Lawrence Krauss , Case Western Reserve University In this talk, I will ruminate on the future of the Universe itself, and also on the future of life within it, using as my starting point recent observations in cosmology. I will first discuss why the Universe we appear to inhabit is the worst of all possible universes, as far as considerations of the quality and quantity of life are concerned. Then, I will describe how fundamental aspects of the way in which we teach cosmology, in particular the relation between geometry and destiny, have been forever altered by recent discoveries. Finally, I will address the fascinating question of whether life might be eternal in an eternally expanding universe. The answer to this question appears to hinge on issues of basic physics, in particular on issues of quantum mechanics and computation, which may determine whether life is ultimately analogue or digital. "Photon-ion Collisions and Molecular Clocks" C. L. (Lew) Cocke , Kansas State University The timing of molecular rearrangements can be followed in the time domain on a femtosecond scale by using momentum imaging techniques. Three examples will be discussed: First, the diffraction of electrons ejected from the K-shell of one of the atomic constituents of the molecule takes a "picture" of the molecule, and the correlation between the momentum vector of the photoelectron and the subsequent fragmentation pattern is used to estimate the time delay which accompanies the latter process.
Second, the kinetic energy release of proton pairs from the double ionization of hydrogen by fast laser pulses is timed using the 2.7 fs optical cycle as a clock. The mechanisms of rescattering, sequential and enhanced ionization are clearly identified in the momentum spectra. Pump-probe experiments allow us to follow the simultaneous propagation of coherently launched wave packets in different exit channels. Third, the operation of rescattering double ionization in the case of nitrogen and oxygen molecules will be discussed. The use of rescattering to probe the structure of the outer orbitals in molecules will be demonstrated. "Gravitational Waves as a Tool to Investigate Neutron Star Structure" Alessandro Drago , Universita' degli Studi di Ferrara The new generation of gravitational wave detectors, including in particular laser interferometers such as LIGO, is now becoming fully operational. This will offer the possibility of confirming the existence of the waves predicted by General Relativity, and it will also provide the nuclear and astrophysics communities with a new tool to investigate the inner structure of compact stellar objects. "Electronic Liquid Crystals: Novel Phases of Electrons in Two Dimensions" Alan Dorsey , University of Florida There is growing experimental evidence that electrons confined to two dimensions (in a semiconductor heterostructure, for instance) at low temperatures and high magnetic fields can display a plethora of partially ordered phases which have the same symmetries as classical liquid crystal phases, such as nematics and smectics. I will review the experimental evidence for these novel quantum phases of matter, discuss several analogous classical systems, and motivate some of the theoretical models for these "quantum Hall liquid crystals". "Random Walks with a Zooplankton" Frank Moss , University of Missouri St. Louis [Host: Acar Isin] Theories of swarming and pattern formation have recently become of interest to engineers, chemists and physicists. Interesting examples are offered by various self-propelled biological agents both in simulations and in reality. But well-defined swarming experiments in the lab using real biological agents have been problematic up to now due to size limitations of the animal groups or lack of precise knowledge of the agent-agent or agent-medium interactions. We present the results of lab experiments with the zooplankton Daphnia, or "water flea", intermediate in size and complexity between bacteria and birds or fish, for example. Our experiments are compared to predictions of the "Active Brownian Particle" theory developed by a group at Humboldt University in Berlin. Daphnia show the entire range of the theoretically predicted behaviors, from single-agent to collective motions of swarms, and can be observed to perform a fascinating bio-hydrodynamic vortex under certain conditions. "Hidden Dimensionality in Frustrated Magnets and Complex Superconductors" Joel Moore , Berkeley The idea that the true dimensionality of a system may differ from its superficial dimensionality appears in many areas of modern theoretical physics. A central theme of recent research in correlated electrons is that two- and three-dimensional materials can, in some cases, show exotic physics familiar from one spatial dimension. Quantum phenomena typically restricted to one dimension, like exact self-duality and a vortex-mediated (Kosterlitz-Thouless) phase transition, can appear in dimensions d ≥ 2 as well.
We discuss specific examples of this "dimensional reduction" that are based upon four-spin interactions generated in frustrated magnets and in effective descriptions of some superconductors. "The Science and Sociology of Pentaquarks" Thomas Cohen , University of Maryland "A Physicist Approach to Complex Problems" Professor Shmuel Nussinov , Tel Aviv University "The Physics of Confined DNA" Jane' Kondev , Brandeis DNA in viruses and in cells is packed in spaces much smaller than its natural size. This state of confinement places interesting constraints on a variety of biological processes DNA is involved in, such as viral infection, gene expression, and recombination. Quantitative experimentation using techniques such as laser tweezers, cryo-electron microscopy and fluorescence spectroscopy has recently begun to probe in detail the confined state of DNA, both in living cells and in the test tube. In this talk I will describe this emerging experimental landscape and outline the theoretical challenges it poses. The particular examples I will focus on will be provided by DNA packing in viruses and gene regulation in bacteria. "New Opportunities in Neutrino Oscillation Physics" Milind Diwan , Brookhaven National Lab I will describe the remarkable new observations that have transformed our knowledge of the neutrino in the past few years. For over 70 years we knew very little about these particles because they are so difficult to detect. Now a new consistent picture has emerged about their basic properties. We can now ask new fundamental questions that might bridge the gap between our knowledge of the quarks and the leptons. "Primal Scream- Sounds From the Infant Universe" Mark Whittle , University of Virginia Astronomy Cosmology's extraordinary development shows no signs of slowing down. With the evolution of the Universe's average properties now fairly well understood, the focus has switched to the evolution of perturbations -- how an extremely smooth infant Universe changes into an extremely lumpy old Universe, with galaxies strewn to the horizon. Remarkably, the roots of present day structure can be traced back to sound waves in the early Universe. Even more remarkable, the power spectrum of the sound shows a fundamental and harmonics, as if the Universe were a kind of primitive musical instrument. This talk aims to unpack the relatively new subject of "Big Bang Acoustics", using reproductions of the primordial sound as a vehicle for discussing the physics of that remote time. It turns out that, as with many vibrating objects, the nature of the sound reveals much about the nature of the object as well as the nature of the stimulus. "The Dilute, Cold Bose Gas: A truly quantum-mechanical many-body problem" Elliott Lieb , Princeton University [Host: E. Kolomeisky] The peculiar quantum-mechanical properties of the ground states of Bose gases that were predicted in the early days of quantum mechanics have been verified experimentally relatively recently. The mathematical derivation of these properties from Schroedinger's equation has also been difficult, but progress has been made in the last few years (with R. Seiringer, J-P. Solovej and J. Yngvason) and this will be reviewed.
For the low density gas with finite range interactions these properties include the leading order term in the ground state energy, the validity of the Gross-Pitaevskii description in traps, Bose-Einstein condensation and superfluidity in traps, and the transition from 3-dimensional behavior to 1-dimensional behavior as the cross-section of the trap decreases. The latter is a highly quantum-mechanical phenomenon. For the charged Bose gas at high density, the leading term in the energy found by Foldy in 1961 for the one-component gas and Dyson's conjecture of the N^{7/5} law for the two-component gas have also been verified. These results help justify Bogolubov's 1947 theory of pairing in Bose gases. "Unraveling the mysteries in complex oxides by neutron scattering" Seunghun Lee , National Institute of Standards and Technology Neutron scattering is one of the most powerful tools for studying magnetic and structural properties of solids. It has made seminal contributions in a wide range of fields in condensed matter physics and material science, from high-Tc superconductivity and colossal magnetoresistance to quantum magnetism. In this talk, I will begin by introducing the basic principles of elastic and inelastic neutron scattering techniques. I will then describe a few exemplary neutron scattering results from high-Tc superconductors and quantum magnets that were crucial to understanding the physics of these systems. Ela Barbaris , Northeastern University "High Energy Physics - On the Ground, Underground, and in Space" Dr. Kathleen Turner , Dept of Energy, Office of Science, Office of High Energy Physics The Office of High Energy Physics' mission is to explore the fundamental nature of matter, energy, space and time. The core of the program centers on investigations of elementary particles and the interactions between them using high energy particle accelerators. In order to fully explore the science, experiments are also done on the ground, underground and in space. The DOE HEP program provides about 90 percent of the federal support for high energy physics research in the U.S. and involves over 2,450 researchers at over 100 universities and 8 laboratories. The current High Energy Physics experimental program will be described, along with a look towards possibilities for the future. "Quantum Computers and Atom-Scale Electronics in Silicon" John R. Tucker , Department of Electrical and Computer Engineering - University of Illinois at Urbana-Champaign Over the past ten years, my colleague T.-C. Shen and I have developed a process for patterning individual phosphorus donors and self-ordered arrays into silicon with atomic resolution. This technique is now employed by the Australian Centre for Quantum Computer Technology and ourselves in efforts to build a silicon quantum computer. Our current research is focused on developing planar single-electron transistors to probe the quantum states of individual P donor 'qubits' inside the silicon crystal. Thus far, we have demonstrated electron wave interference across a 10 nm-linewidth Aharonov-Bohm ring. Prospects for realizing a silicon quantum computer will be outlined, along with additional thoughts on future nanoelectronics and transport experiments. "Atom Interferometry using Bose-Einstein condensates" Cass Sackett , UVA One of the chief applications envisioned for Bose-Einstein condensation is atom interferometry, in which the wave-like nature of a condensate is used to full advantage.
A plethora of uses can be imagined, ranging from inertial sensing to probing surfaces. However, a variety of practical and fundamental obstacles must be overcome before condensate interferometry can be competitive with other techniques, even in a research setting. I will discuss our current understanding of these problems and some possible solutions, and report on progress in our experimental effort to build a condensate interferometer. "Measuring the Information Velocity in Fast- and Slow-Light Media" Daniel J. Gauthier , Duke University - Fitzpatrick Center for Photonics and Communication Systems By all accounts, modern science and engineering has a very good understanding of how to use pulses of light to communicate information. It is, after all, the basis for one of the world's biggest and fastest-growing industries. And yet, the fundamental question of how fast information travels remains unanswered. The engineering community, starting with the seminal work by Shannon, has studied information rates, but has essentially ignored the question of the velocity of information. The physics community, initially prompted by an apparent challenge to Einstein's special theory of relativity, has been debating the issue off and on for almost 100 years. Surprisingly, the issue remains unresolved. There is no clear definition of the information velocity because there is only a vague understanding of where information is contained on a waveform. I will review the information velocity debate and present a technique for experimentally measuring the velocity of information for the case where the group velocity of a pulse of light vastly exceeds the speed of light in vacuum (a so-called "fast-light medium") or is much slower than the speed of light in vacuum (a "slow-light medium"). Our research suggests that the information velocity is equal to the speed of light in vacuum, independent of the characteristics of the medium.
A tutorial on this topic, including links to recent publications, can be found at: http://www.phy.duke.edu/research/photon/qelectron/proj/infv/ "Superconductivity in 2-dimension" Jongsoo Yoon , UVA Superconductivity occurring in two dimensions has been understood in the framework of Kosterlitz-Thouless theory, and the nature of the transition is very different from that in three dimensions. We present our recent data on superconducting properties of ultra-thin tantalum films, and compare with predictions based on the Kosterlitz-Thouless theory. Breakdown of superconductivity near the critical current and new findings on the phenomenon will also be discussed. "Precision exploration of neutron spin structure at Jefferson Lab Hall A" Nilanga Liyanage , UVA Spin structure functions provide basic information about the spin of the quark distributions inside the nucleon. Experimental understanding of the nucleon spin in the kinematic region where the three basic ("valence") quarks dominate the nucleon wave function is still rather poor. Jefferson Lab, with its high-quality, high-polarization continuous electron beam and state-of-the-art polarized nucleon targets in each of its three experimental halls, is ideally suited for spin structure measurements in the valence region. An experimental program is underway at Jefferson Lab to measure the spin structure of the nucleon in the valence region with unprecedented precision. The planned upgrade of the Jefferson Lab CEBAF accelerator to 12 GeV will significantly increase the accessible kinematic range and the precision of these measurements. In this presentation I will give an overview of the neutron spin physics program at Jefferson Lab Hall A. I will also describe new experimental opportunities that will become possible in Hall A with the arrival of the 12 GeV beam. "Fun with Fermions: Exploring and Manipulating a Fermi Gas of Atoms" Debbie Jin , Univ. of Colorado "Superconductors of a Different Stripe: Charge Inhomogeneity and Superconductivity in Copper Oxides" John Tranquada , Brookhaven National Lab. [Host: D. Louca] The standard model of electronic structure in solids is founded on the notion that electrons inevitably delocalize. In contrast, strong Coulomb repulsion in certain transition-metal oxide compounds can cause electron localization, resulting in the so-called "Mott-insulator" state. Cuprate superconductors consist of electronically-doped Mott insulators. Much of the continuing controversy over how to understand the cuprates concerns the issue of whether one can apply more or less conventional concepts of delocalized electrons, or whether radical new concepts are necessary. I will present experimental evidence, especially from neutron scattering, that the competition between kinetic and Coulomb energies leads to spatial inhomogeneities of charge carriers and antiferromagnetic correlations. It is possible that dynamic inhomogeneities are essential to achieving superconductivity at high temperature. "Conditional Dynamics and quantum feedback; an experiment in cavity QED" Luis Orozco , U. of Maryland Quantum systems that are strongly coupled have fluctuations that are larger than the average value of their steady state. When the fluctuation is a single photon, as is the case in cavity QED, the return to the steady state after the detection of a single photon follows conditional dynamics measurable with quantum optical correlations.
The conditional dynamics can be modified, via quantum feedback, based on a single quantum and the knowledge of the conditional state. Work performed in collaboration with J. E. Reiner, W. P. Smith, M. L. Terraciano, and H. M. Wiseman with support from NSF and NIST of the USA. "Quantum Coherence in Magnets" Collin Broholm , Johns Hopkins University Magnetic materials are typically found in one of two qualitatively different states: thermally disordered at high temperatures or spin ordered at low temperatures. In this talk I describe a third distinct state of an interacting spin system: quantum ordered magnetism. I present neutron scattering data that provide evidence for quantum order in zero, one, two, and three-dimensional spin systems. La₄Cu₃MoO₁₂ contains spin trimers that develop quantum order at low temperature, where each trimer becomes a composite spin-1/2 degree of freedom [1]. Y₂BaNiO₅ is an antiferromagnetic spin-1 chain with an extensive one-dimensional Haldane ground state. I present scattering data that provide clear evidence for long-range coherence in the absence of conventional spin order [2]. (C₄H₁₂N₂)Cu₂Cl₆ (PHCC) is a frustrated bi-layer antiferromagnet with interactions that span a two-dimensional plane. I show that there are coherent triplet excitations and argue that competing interactions favor quantum order over spin order [3]. Cu₂(C₅H₁₂N₂)₂Cl₄ (CuHpCl) has a cooperative singlet ground state and was initially thought to be a spin ladder. However, neutron scattering data show that it is in fact a three-dimensional frustrated system with quantum order [4]. Apart from describing and comparing the low temperature quantum ordered states in these pure systems, I shall also touch on the fascinating effects of impurities [5] and the field-driven quantum phase transitions that can be accessed experimentally in several of these systems [6]. [1] Y. Qiu, C. Broholm, S. Ishiwata, M. Azuma, M. Takano, R. Bewley, and W. J. L. Buyers, cond-mat/0205018. [2] Guangyong Xu, J. F. DiTusa, T. Ito, H. Takagi, K. Oka, C. Broholm and G. Aeppli, Phys. Rev. B 54, R6827 (1996). [3] M. B. Stone, I. A. Zaliznyak, Daniel H. Reich, and C. Broholm, Phys. Rev. B 64, 144405 (2001). [4] M. B. Stone, J. Rittner, Y. Chen, H. Yardimci, D. H. Reich, C. Broholm, D. V. Ferraris, and T. Lectka, Phys. Rev. B 65, 064423 (2002). [5] M. Kenzelmann, G. Xu, I. A. Zaliznyak, C. Broholm, J. F. DiTusa, G. Aeppli, T. Ito, K. Oka, and H. Takagi, Phys. Rev. Lett. 90, 087202 (2003). [6] Y. Chen, Z. Honda, A. Zheludev, C. Broholm, K. Katsumata, and S. M. Shapiro, Phys. Rev. Lett. 86, 1618 (2001). "The BEC transition temperature of dilute gases: a not-so-simple problem in statistical mechanics" Peter Arnold , UVA The phase transition temperature for Bose-Einstein condensation of a three-dimensional ideal gas of bosons at fixed density is something that every physicist learns to calculate in graduate school, if not before. Amusingly, the first correction to that result, from arbitrarily weak interactions, is sufficiently challenging that only now is there beginning to appear some theoretical agreement on its magnitude, roughly 80 years after Einstein computed the ideal gas result. "Entangled Photons for Quantum Information: 101 uses for a Schroedinger cat" Paul Kwiat , U of Illinois, Urbana-Champaign We have developed a means of producing entangled pairs of photons, using the process of spontaneous parametric downconversion in a novel two-crystal geometry.
The quality of the source has enabled us to produce states of unparalleled purity, while the brightness has permitted an extreme violation of Bell's inequalities. Furthermore, the source is tunable, and we have been able to produce for the first time non-maximally entangled states, and states of arbitrary purity. The result is the capability to produce (almost) any two-photon quantum (polarization) state. Such states have application to such problems in quantum information as quantum cryptography, quantum teleportation, and quantum cooking. Physics and Materials Science Joint Colloquium "A new spin on electronics - spintronics" Dr. Stuart A. Wolf , University of Virginia, and DARPA at Arlington, VA [Host: Joe Poon and James Groves] Until very recently, the spin of the electron was ignored in mainstream electronics. The discovery of the giant magnetoresistance (GMR) effect in magnetic multilayers in 1988 and the subsequent development of sensors based on it began a transformation that will soon provide new paradigms for electronics for the new millennium. This talk will concentrate on the evolution of the DARPA spin electronics or spintronics project. The motivation, the science and the remarkable prospects for the future will be described in some detail. "Where does the Standard Model come from" Qaisar Shafi , Bartol Institute The Standard Model (SM) provides a highly successful description of strong, weak and electromagnetic interactions at present energies. In combination with Einstein's general relativity, it helps lay the foundation of another successful theory, the hot big bang cosmology. Some recent attempts to go beyond this theoretical framework will be discussed, necessitated in part by some exciting experimental discoveries, namely neutrino oscillations, the existence of non-baryonic dark matter, CMB anisotropy, etc. "The Quantum Hall Bilayer: A New Superfluid" Herb Fertig , University of Kentucky Superfluids and superconductors are known to possess a unique stiffness related to the phase of their ground-state wavefunctions. Under appropriate circumstances, double layer quantum Hall systems possess an analogous stiffness that may be understood in terms of a condensation of particle-hole pairs. The relation between these systems has motivated both theoretical and experimental efforts to find properties in the bilayer quantum Hall system usually associated with superfluids. Most prominently, an effect reminiscent of Josephson tunneling has been observed in experiments with high quality samples, although there is considerable dissipation whose origin is not understood. Using a renormalization group analysis and results from Langevin dynamics simulations, we demonstrate that the likely source of dissipation is vortices in the phase degree of freedom. Vortex pairs are shown to have a very unusual thermal deconfinement transition in this system, and can also be broken apart at low temperature by disorder. In the latter case, simulations show the system possesses properties reminiscent of a glassy state which qualitatively account for many of the experimental observations. "Envisioning Particles and Interactions" Professor Chris Quigg , Fermi National Accelerator Laboratory I will present a new way to envision the particles and interactions: a pair of interpenetrating tetrahedra that we might call the double simplex, in homage to the double helix that has just celebrated its fiftieth anniversary.
Any chart or mnemonic device should be an invitation to narrative and a spur to curiosity, and that is what I intend for the double simplex. My goal is to represent what we know is true, what we hope might be true, and what we don't know--in other terms, to show the connections that are firmly established, those we believe must be there, and the open issues. I want also to express the spirit of play, of successive approximations, that animates the way scientists work. "How a doctor of particle physics found happiness working with 'real doctors'" Prof. John Malko , Emory University [Host: Oscar Rondon] SPECIAL PHYSICS AND ASTRONOMY COLLOQUIUM "The First Stars in the Universe" Tom Abel , Pennsylvania State Univ. Recent progress in our ability to follow numerically the formation of the first objects in the universe predicts the first stars to be massive and to form in isolation. They are copious producers of UV radiation and begin to reionize the intergalactic medium. The single currently allowed model for structure formation turns out to be able to match all aspects of the most recent accurate measurements of the cosmic microwave background radiation. In this talk we highlight our new understanding of the physics of the formation of the first stars, their lives, their remnants and their impact on subsequent structure formation. "Nanotechnology, nanotubes and molecules as tinker toys" Sean Washburn , University of North Carolina at Chapel Hill Nanotechnology holds many promises for the future and many (possibly insurmountable) challenges before the promises can be implemented. Carbon nanotubes with their superb mechanical and electrical properties are a canonical example of both of these aspects. The possibility of assembling them into designed forms as new materials or into nanometer-scale mechanical and electrical devices might lead to improved strengths, speeds, etc. Some elementary experiments indicate that while the promise is still great, the barriers to implementing such nano-devices are still ahead of us. The methods of the experiments already have shown that many academic disciplines and new techniques will be involved in realizing these improvements. Some examples of such efforts will be reviewed. "The Fascination of Neutrino Oscillations: Their discovery and their future study" Leslie Camilleri , Fermilab/CERN Anomalies have been observed in both solar and atmospheric neutrinos. How the study of these anomalies has led to the discovery of neutrino oscillations will be summarized. Many experiments are now being built to further our understanding of these oscillations. These experiments, and the next generation of experiments being planned to complete our understanding of how neutrinos mix and what their mass spectrum is, will be described. "Interactions and Disorder in Quantum Dots: A New Large-g Approach" Ganpathy Murthy , University of Kentucky Understanding the combined effects of disorder and interactions in electronic systems has emerged as one of the most challenging theoretical problems in condensed matter physics. It turns out that one can solve this problem non-perturbatively in both disorder and interactions in the regime when the system is finite (as in a quantum dot) but its dimensionless conductance g under open-lead conditions is large. This regime is experimentally interesting for the statistics of Coulomb Blockade in quantum dots and persistent currents in rings threaded by a flux.
First some RG work will be described which shows that a disordered quantum dot with Fermi liquid interactions can be in one of two phases; one controlled by the so-called Universal Hamiltonian and another regime where interactions become large. These two are separated in the infinite-g limit by a second-order phase transition. I will show how to solve for the strong-coupling phase, which is characterized by a Fermi surface distortion, by a large-N approximation (where N=g is in fact large for realistic systems). Predictions will be presented for finite but large g for the statistics of the Coulomb Blockade peak spacings and other correlators. Finally, the relationship of these results to puzzles in persistent currents in mesoscopic rings will be presented.
"A new look at rare pion and muon decays" Dinko Pocanic, University of Virginia [Host: Eugene Kolomeisky] Pion and muon, the lightest unstable particles, were discovered more than fifty years ago, and have been well studied since. However, over time the Standard Model (SM) of elementary particles and interactions has become so successful that for several key pion and muon properties its predictions are far less uncertain than the best available measurements, primarily those concerning the particles' rare decay modes. Thus, slight deviations from the SM predictions can provide valuable clues to new physics outside of the current SM. In its first phase, the PIBETA experiment has measured accurately several such rare decays at PSI, the Swiss meson facility. The talk will focus on the motivation, experimental apparatus, method, and the unexpected first results of these measurements.
"The Illusive Bose Metal" Philip Phillips, UIUC Cooper pairs (bosons) are thought to exist in two quite distinct ground states: 1) localized in a Mott insulator or 2) condensed in a superconductor. However, recent experiments on 2D insulator-superconductor transitions indicate that there may be a third possibility: a metal with a finite resistivity at zero temperature. I will review the standard theoretical framework used to understand the insulator-superconductor transition, the recent experimental results and I will show quite generally how bosons lacking phase coherence can form a metal in the presence of disorder rather than an insulating phase. The metallic state is rather weird, however. The phase degrees of freedom are glassy. At the heart of the metallic state is the dissipation inherent in the glassy state. Bosons moving in such a glassy environment fail to localise because no true ground state exists.
"Nuclear Spin Relaxation, Dispersion, and Intermolecular Exploration" Robert Bryant, UVA - Chemistry
"Experimental evidence for hadronic deconfinement in pbar-p collisions at 1.8 TeV" Rolf Scharenberg, Purdue University - (E-735 Collaboration) [Host: Ken Nelson] We have measured deconfined hadronic volumes, 4.4 < V < 13.0 fm^3, produced by a one-dimensional (1D) expansion. These volumes are directly proportional to the charged particle pseudorapidity densities 6.75 < dN_c/dη < 20.2. The hadronization temperature is T = 179.5±5(syst) MeV. Using Bjorken's 1D model, the hadronization energy density is ε = 1.10±0.26(stat) GeV/fm^3, corresponding to an excitation of 24.8±6.2(stat) quark–gluon degrees of freedom.
"Gender and Physics: a Hard Look at a Hard Science" Amy Bug, Swarthmore College
"Prospects for Quantum Computation" David DiVincenzo, IBM A "standard model" for the physical implementation of a quantum computer was laid out some years ago.
It indicated a set of capabilities that had to be achieved to make quantum processing possible: 1) systems with well-characterized qubits must be constructed. 2) These qubits should be initializable to the "0" state. 3) It must be possible to control the one- and two-qubit Hamiltonian of the system, so that unitary quantum logic gates are enacted. 4) Decoherence and imprecision of gate operations must be kept very low. 5) Reliable measurements of the quantum state of individual qubits must be possible. In this talk I will indicate progress towards these goals, after first reviewing why we want to do quantum computation. "Tabletop probes for TeV physics: searches for the electric dipole moment of the electron" David DeMille , Yale Remarkably, the virtual exchange of exotic heavy particles--such as those predicted to exist in supersymmetric and grand unified theories-- can lead to observable effects in ordinary matter. This talk will describe a set of experiments searching for such an effect: namely, a permanent electric dipole moment along the spin of the electron. The most sensitive experiments of this type are already sensitive to new physics at the TeV scale, and set important limits on possible extensions to the standard model. I will report on our progress in developing a new technique, which promises several orders of magnitude improvement in sensitivity. " 0.7-anomaly in Quantum Point Contacts A" Konstantin Matveev , Duke University A remarkable property of one-dimensional conductors is the quantization of their resistance in units of Planck constant divided by the square of the elementary charge. This effect is well understood and readily observed in low-temperature experiments with relatively short one-dimensional conductors called the quantum point contacts. A puzzling feature of the transport through such contacts was reported a few years ago, when it was discovered that at somewhat higher temperatures the conduction drops to about 0.7 of its quantized value. This phenomenon, often referred to as the 0.7-anomaly, has been studied extensively in the last few years. I will discuss the latest experimental data and the theoretical attempts at understanding this effect. "Physics at DZERO: Exploring the Microscopic Structure of the Universe" Jerry Blazey , NIU To explore the microscopic structure of the universe very energetic beams of submicroscopic particles and complicated detectors, such as the DZERO detector, are required. These huge machines, built by graduates students, physicists, and engineers, have the potential to explain the origins of mass and to explore extra spatial dimensions. The technology behind these investigations and their current state will be described. Prof. Blazey serves as spokesman for the D-Zero Collaboration at Fermilab and director of NICADD, the Northern Illinois Center for Accelerator and Detector Development. "Tunable Interactions in Ultracold Bose and Fermi Gases - Solitons to Superfluids" Randy Hulet , Rice University Bose-Einstein condensation of ultracold atomic gases, first achieved only seven years ago, has lead to remarkable demonstrations of matter wave phenomena. One of the most compelling aspects of ultracold atoms is the experimental ability to alter the strength and even the sign of the interactions between atoms using magnetically tuned "Feshbach resonances". We have exploited this tunability to create matter wave solitons composed of Bose-Einstein condensates of lithium atoms [1]. A similar experiment was performed in Paris [2]. 
Soliton waves arise when a nonlinearity exactly compensates for wavepacket dispersion. This compensation enables a soliton to propagate without spreading. Solitons are observed in a variety of physical systems, including water waves, plasma waves, and optical pulses, to name but a few. The nonlinearity in ultracold atoms arises from their interactions. By changing the interactions from repulsive to attractive, the condensate is observed to form a multi-soliton "train" of up to 15 individual solitons. The solitons maintain their size and shape for a propagation time of up to 3 s. Adjacent solitons are observed to interact repulsively. We are also pursuing the possibility of creating Cooper pairs of fermionic 6Li atoms, which would be an atom analog of superconductivity, in the gas phase. The necessary attraction would again be generated using a Feshbach resonance, which could enable the first exploration of superconductivity in the strong coupling regime. [1] K.E. Strecker, G.B. Partridge, A.G. Truscott, R.G. Hulet, Nature 417, 150 (2002). [2] L. Khaykovich et al., Science 296, 1290 (2002).
"Superconducting Schrodinger's Cat and its Application to Quantum Computing" Siyuan Han, Department of Physics and Astronomy, University of Kansas [Host: B. Shivaram] Since the beginning days of quantum mechanics the possibility of having coherent superposition of macroscopic quantum states, e.g., Schrodinger's Cat, has stimulated much theoretical debate. The idea can actually be tested out experimentally in superconducting electronic devices called Josephson junctions (JJs) and SQUIDs. I'll show that when sufficiently isolated from environments a current-biased JJ is a very well characterized and controllable macroscopic quantum system and that Rabi oscillations can be utilized to create coherent superposition of macroscopic quantum states. In a recent experiment, we have succeeded in placing a JJ in the superposition of its ground (alive) and excited (dead) states and observing its time evolution as it oscillates coherently between the alive and dead states of the junction [Y. Yu et al., Science 296, p889 (May 2002)]. The coherence time, estimated from the exponentially decaying amplitude of the oscillations, is about 5 μs, which is very promising for quantum computing using the phase qubits (JJs) or flux qubits (SQUIDs).
"Global Data Grids for Data Intensive Science" Professor Paul Avery, University of Florida
"Black Holes at Future Colliders and Beyond" Greg Landsberg, Brown University If the scale of quantum gravity is as low as a TeV, as was proposed by Arkani-Hamed, Dimopoulos, and Dvali a few years ago, one of the most dramatic manifestations of this fact would be copious production of miniature black holes at CERN's LHC accelerator, qualifying the latter as a black-hole factory. These rapidly evaporating black holes could serve as sensitive probes of quantum gravity effects, topology of extra dimensions, and as a laboratory to produce new particles with the mass ~100 GeV. I'll discuss the black hole production and decay mechanisms at future colliders and the opportunities of cosmic ray detectors in observing black holes in ultra-high-energy cosmic ray collisions. Using the Higgs boson as an example, I'll demonstrate that it can be found in the decays of black holes as early as in the first hour of operation of the LHC, even with incomplete detectors.
"The Role of Clusters in the Design of Nano-Scale Systems" Prof.
Puru Jena , Virginia Commonwealth University [Host: Joseph Poon/Louis Bloomfield] Atomic clusters consisting of a few to a few thousand atoms constitute a new phase of matter intermediate between atoms and solids. Unlike conventional nanostructured materials, the size and composition of these clusters can be controlled one atom at a time. The properties of such clusters brought about by their large surface-to-volume ratio, unique geometry, low dimensionality and reduced coordination, exhibit novel behavior quite unlike that in the bulk. For example, metallic elements can be made to form ionic bonds while nonmagnetic and anti-ferromagnetic materials can become ferromagnetic or ferrimagnetic. This talk will introduce the principles for designing these clusters and discuss a concept where clusters can be viewed as super-atoms - adding a third dimension to the periodic table. Recent experimental evidence to support this idea will be presented. Examples of cluster assembled materials will include high-energetic materials involving Al(MnO4)3, alkali metal clusters isolated in zeolites, transition metal clusters supported on organic and metallic substrates, and manganese-oxide clusters passivated by acetate ligands. Ultimately the properties of crystals composed of clusters as the building blocks will be discussed. It is hoped that the synergy between theory and experiment will lead to the synthesis of cluster assembled materials with unique and tailored properties, thus creating new opportunities in materials science at the dawn of the new millennium. "A Singular Potential:from Theorist's Toy to Experimental Realization" Sidney A. Coon , NSF and New Mexico State University [Host: S. Liuti] The inverse square potential (V(r)~1/r**2), first studied by Cote, a contemporary of Isaac Newton, is an interesting potential for nonrelativistic quantum mechanics. It lies on the edge of the line dividing potentials which can be treated in the familiar manner and those which are singular. Singular potentials have been studied for a long time because they can be regarded as models for nonrenormalizable field theories, and, more recently, as an element of the new paradigm of effective field theory methods in nuclear physics. In this talk, I will demonstrate the mathematics of the 1/r**2 potential, including the anomalous (quantum mechanical) breaking of scale symmetry and a rigorous treatment of absorption ("fall to the center"). Correct mathematics leads to a quantum mechanical understanding of the formation of anions (electrons bound by the dipole moment of a polar molecule) and of a very recent dedicated experimental study of this potential in the context of manipulation of cold atoms. "Building Nucleons and Nuclei from Quarks and Glue: Early Results from the Research Program at Jefferson Lab" Larry Cardman , JLab [Host: T. Gallagher] "Spontaneous evolution from a cold Rydberg gas to an ultra cold plasma" Thomas Gallagher , University of Virginia "Long baseline neutrino oscillation experiments: why and how" A. Marchionni , Fermi Lab [Host: S. Conetti] The evidence for neutrino oscillations from the SuperKamiokande experiment still leaves several open questions. The present program of long baseline neutrino oscillation experiments will address these issues. The ongoing K2K experiment and the future JHF facility in Japan, the programs in preparation in the United States (MINOS) and in Europe (CNGS) will be reviewed. 
MINOS (Main Injector Neutrino Oscillation Search) will be operating at the beginning of 2005 over a baseline of 735 km from FERMILAB (Illinois) to Soudan (Minnesota). Status and goals of the MINOS experiment will be reported in detail. "High-performance dielectric thin films for science and technology" Dr. Bruce Van Dover , Agere Systems , Murray Hill NJ Ultrahigh-density dynamic random acess memory, hyperscaled field-effect transistors, and field-effect-induced superconductivity at 117 K in fullerenes are examples where high-performance thin film dielectrics play a pivotal role in science and technology. In the past, only a small set of materials (SiO2, Al2O3, (Ba,Sr)TiO3, etc.) have been considered for these structures. We have assessed a wide range of dielectric systems using a high-throughput, composition-spread approach. This has lead to the discovery and development of dielectrics with extremely high performance, as well as the identification of unexpected physics by careful investigation of systematic trends. I will discuss the scientific and technological issues, our approach to discovery, and the interesting materials and materials physics we have uncovered. "Running Out of Time: Why Elephants Don't Gallop" Julian Noble , University of Virginia Newtonian physics implies that running is impossible for sufficiently large animals. There are two main factors that influence this: 1. An animal's strength/weight ratio decreases with size, hence a sufficiently large animal will be liable to injury if it attempts a gallop. 2. The time required for an animal to move its limbs increases with size, but the time an animal can remain in the air (while running) does not scale with linear dimension. Therefore there is some size beyond which an animal has "run out of time" and cannot take advantage of a running gait. These aspects of the biomechanics of locomotion bear on the interesting questions of determining the speeds of extinct species, as well as how varying gravity affects locomotion. "Physicists and Industry in the 21st Century: Who, What, How" N.O. Lipari , Lipari Int'l Consulting [Host: V. Celli] The on-going global economic transformations require that industries strongly focus on innovation, time to market, quality and cost in the introduction of new products. The trend in each industry is to focus on core competencies and obtain the additional resources from external alliances and partnerships with Universities and Government. "Coopetition" is emerging as the most effective approach for technology transfer, i.e. the path from idea to products. This requires a totally different and much more pervasive role of the physicist. In addition to the scientific skills, the scientist needs interdisciplinary and communication skills in order to successfully interact in the industrial environment. Examples will be given. Specific suggestions for the role of the university in forming the physicists with the proper requirements for the modern industry will be discussed. "New States of Matter in the Quantum Hall and BEC Regimes " Kareljan Schoutens , University of Amsterdam "Lattice Effects and Jahn-Teller Fluctuations in Crystals" In many systems i.e. magnetoresistive and superconducting oxides, the atomic structure couples strongly to the electronic degrees of freedom. In CMR crystals, Jahn-Teller effects are strongly related to the metal-insulator transition, for instance. 
The manganites are one example where the JT distortions are static and are important ingredients to the polaron lattice formation. In cuprates, when static distortions are present it usually means superconductivity is killed, while dynamic effects prevailing in the SC phase are sometimes too fast to observe. (La/Sr)CoO3 serves as a prototype for studying the crossover from static to dynamic effects. With the pair density function analysis and inelastic S(Q,w) measurements, it was determined that dynamical JT fluctuations induce a distorted atomic structure. The S(Q,w) clearly shows the presence of localized phonon modes most likely due to JT excitations, while the local structure transforms to an unusually glassy state that is intermediate to the manganites and cuprates. "Light: Time Meets Frequency" Jun Ye , JILA "Making Sense of the New Cosmology" Michael Turner , University of Chicago Cosmology is in its most exciting period of discovery yet. Over the past five years we have determined the basic features of the Universe -- spatially flat; accelerating; composed of 1/3rd a new form of matter, 2/3rds a new form of energy, with some ordinary matter and neutrinos; and apparently born from a burst of rapid expansion during which quantum noise was stretched to astrophysical size seeding cosmic structure. Now we have to make sense of this: What is the dark matter particle? What is the nature of the dark energy? Why this mixture? How did the matter -- antimatter asymmetry arise? What is the underlying cause of inflation (if it occurred)? If we succeed in making sense of our Universe, this will truly be remembered as a Golden Age. "Quantum information with quantum fields: creation and entanglement of twin beams of light" The concept of quantum information can be seen as stemming from the fascinating idea of putting quantum mechanics to practical use as such, and not only as the theory behind, in particular, microscopic physics. Because of the latter, it is sometimes believed that experimental efforts in quantum information only involve exquisite control over nanoscale entities such as single atoms or single photons (see last week's colloquium for a beautiful illustration). This is not rigorously true, as qubits can also be implemented using exquisitely controlled macroscopic entities, such as optical fields of milliwatt power. In this talk, I will present our endeavor to create bright entangled light sources suitable for quantum teleportation and quantum error correction, as well as our contribution to the theoretical understanding of such problems. "Optically probing and controlling a single quantum dot" Daniel Gammon , Naval Research Lab. [Host: O. Pfister] "New Dimensions in Probing the Structure and Function of Matter: Concepts, Techniques and Technologies" Swapan Chattopadhyay , JLAB We will explore various concepts, techniques and technologies for producing ultrashort pulses of electrons and photons of all energies and colors from the femtosecond to the attosecond duration and beyond for breakthrough research in physics, chemistry, life and information sciences "Results from the Sudbury Neutrino Observatory" Andrew Hime , Los Alamos The Sudbury Neutrino Observatory (SNO) is a heavy water, imaging Cerenkov detector operating 6800 feet underground in the Creighton Nickel Mine in Ontario, Canada. 
With its heavy water target, SNO has the unique capability to detect and separate three distinct 8B solar neutrino signals through the charged current (CC), neutral current (NC), and elastic scattering (ES) channels. By comparing the solar neutrino flux deduced from the CC interaction (sensitive only to electron neutrinos) with that deduced from the NC or ES interactions (sensitive to all active neutrino flavors), SNO can make a unique study of the solar neutrino deficit and a model independent test for neutrino oscillations. Results from the pure D2O phase of SNO will be presented along with their implications for elementary particle physics, astrophysics, and cosmology. "A Layman's Guide To M-Theory" Michael Duff , University of Michigan Superunification of the fundamental interactions underwent a major paradigm shift in 1984 when eleven-dimensional supergravity was knocked off its pedestal by ten-dimensional superstrings. 1995 witnessed another shift of equal proportions, however, when superstrings were themselves superseded by ``M-theory'', a non-perturbative theory which describes extended objects with two dimensions (supermembranes) and five dimensions (superfivebranes), which subsumes all five consistent string theories and whose low-energy limit is, ironically, eleven-dimensional supergravity. "The Long Way From Strings To Large Extra Dimensions " Mariano Quiros , Istituto de Estructura de la Materia (CSIC), Madrid, Spain In the first part of the talk I will review the main ideas going from string models to the possibility of low string scales and large extra dimensions. In particular the subjects of bosonic and fermionic strings (IIA, IIB, heterotic and type I/I'), T-duality and D-branes, will be covered. In the second half of the talk I will describe the different scales which can appear in the various string constructions and provide the experimental constraints on transverse (gravitational) and longitudinal (gauge) dimensions using gravitational and collider data. "Building a quantum computer atom by atom" Chris Monroe , University of Michigan [Host: Robert Jones] A quantum computer can store and process quantum superpositions of numbers. This parallelism leads to an exponential speedup over conventional computers for certain algorithms. However, the prospects for constructing a quantum computer are highly speculative, owing to the extremely fragile nature of quantum superpositions. A quantum computer is nothing more than a smaller (and more humane) version of Schroedinger's Cat, and if one is ever built, it will strongly impact both computer science and fundamental quantum mechanics. A leading physical candidate for a quantum computer is a collection of individual trapped atoms, controlled and manipulated with optical fields. Experiments are reported in this context, including the demonstration of simple quantum logic gates and the controlled generation of entangled quantum states. The outlook for future quantum computing with atoms or alternative technologies will be discussed. The Llewellyn G. Hoxton Lecture Please not time and place Chemistry Building , Room 402 "The Universe of the Elementary Particles" Gerald 't Hooft , University of Utrecht [Host: Department of Physics] "How does God Play Dice? (Speculations about Quantum Mechanics at the Planck scale)" G. 't Hooft , University of Utrecht, Netherlands [Host: P. K. 
Kabir] Attempts to arrive at consistent theories combining Quantum Mechanics with General Relativity not only require new concepts of space, time and matter, such as the ideas that lead to Superstring Theory, D-brane theory and M-theory, but they may also require a reconsideration of what Quantum Mechanics itself really is about. Although completely deterministic scenarios appear to be ruled out by the Bell inequalities, it is nevertheless worth-while to investigate a set-up where we start with a deterministic theory and add to this the notion of information loss. Although models proposed so-far all show deficiencies of some sort which makes them unrealistic for describing the real world, these models do show how chaotic phenomena in a deterministic theory might be suspected to lie at the basis of the quantum nature of our world. "Entropy in the Solid State" Michael Widom , Carnegie Mellon University Equilibrium states of matter balance the thermodynamic tendency to minimize energy with the simultaneous need to maximize their entropy. Depending on the temperature, different equilibrium states may occur representing different tradeoffs between energy and entropy, leading potentially to a multiplicity of solid state phases. The phase diagram of a superalloy and of the element Pu illustrate the importance of entropy residing in modes of atomic vibrations. Additional examples will be given of quasicrystal- and glass-forming alloys in which the entropy resides instead in novel discrete configurational degrees of freedom. "Understanding Flight" David F. Anderson , Fermi National Accelerator Laboratory Through the years the explanation of flight has become mired in misconceptions that have become dogma. Wolfgang Langewiesche, the author of "Stick and Rudder" (1944) got it right when he wrote: "Forget Bernoulli's Theorem". A wing develops lift by diverting (from above) a lot of air. This is the same way that a propeller produces thrust and a helicopter produces lift. Newton's three laws and a phenomenon called the Coanda effect explain most of it. With an understanding of the real physics of flight, many things become clear. Inverted flight, symmetric wings, and the flight of insects are obvious. It is easy to understand the power curve, high-speed stalls, and the effect of load and altitude on the power requirements for lift. The contribution of wing aspect ratio on the efficiency of a wing, and the true explanation of ground effect will also be discussed. "Hyperfine physics - from the hydrogen atom to hemoglobin" Arthur S. Brill , University of Virginia Hyperfine physics deals with interactions between electron and nuclear spins. Measurements of these interactions provide information about the electronic structure of paramagnetic sites in molecules and crystals. Examples will be presented and briefly discussed of the roles of such measurements in atomic, biological, condensed matter molecular and nuclear physics. "From the QCD Phase Diagram to Heavy Ion Collisions and Back" Krishna Rajagopal , MIT I describe some of the things we think we know about the physics of a hot quark-gluon plasma and the phase transition between the stuff of the big bang and ordinary hadronic matter. The questions I will pose motivate people to collide heavy ions at relativistic energies. I will give two examples of how we may use measurements made in these experiments to map the QCD phase diagram, and hence to study the condensed matter physics of QCD. 
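The "Understanding Flight" abstract above explains lift as the downward deflection of a large mass of air rather than as a consequence of Bernoulli's theorem. A minimal numerical sketch of that Newtonian picture follows; every number in it is an illustrative assumption of mine (airspeed, span, deflection velocity), not a figure from the talk, and the "one wingspan of air" rule of thumb is only an order-of-magnitude device.
```python
# Order-of-magnitude estimate of lift as the rate of downward momentum
# given to the air a wing diverts (Newton's second and third laws).
# All numbers are illustrative assumptions, not values from the talk.
import math

rho = 1.2            # air density near sea level, kg/m^3
v = 60.0             # airspeed, m/s
span = 10.0          # wingspan, m

# Rule-of-thumb assumption: the wing works on a stream tube roughly one
# wingspan in diameter.
area = math.pi * (span / 2.0) ** 2    # cross-section of diverted air, m^2
mdot = rho * v * area                 # mass of air processed per second, kg/s

dv = 2.0             # assumed mean downward velocity imparted to that air, m/s
lift = mdot * dv     # force = rate of change of momentum, N

print(f"air processed: {mdot:,.0f} kg/s, lift: {lift / 1000:.1f} kN "
      f"(~{lift / 9.8:,.0f} kg of aircraft supported)")
```
The point of the exercise matches the abstract's claim: tonnes of air per second, deflected slightly downward, are enough to support a light aircraft, with no appeal to Bernoulli's theorem.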
"TheRole of Clusters in the Design of Nano-Scale Systems" Puru Jena , Virginia Commonwealth University , Richmond, VA - Department of Physics [Host: Lou Bloomfield and Joe Poon] "A Problem in Atmospheric physics: Stratospheric ozone depletion" J. Elkins , National Oceanic and Atmospheric Administration Since 1987, almost all countries have signed the Montreal Protocol to control substances that cause depletion of the ozone layer. One of the successes of the Protocol has been the dramatic decrease in emissions of methyl chloroform, a metal degreaser that has been responsible for the decline of total equivalent chlorine in the atmosphere. However, chlorofluorocarbon (CFC-12), a common refrigerant, and the halons, fast-acting fire extinguishing agents, are still increasing in the atmosphere even though production ceased for the developed countries in 1996. This research talk will discuss ground-based and airborne measurements and their implication for the future ozone depletion. Preliminary results from a recent field campaign operated on the Trans-Siberian Railway will also be presented. Special Colloquium-Please note special time "A Study of Atmospheric Neutrinos with the Super-Kamiokande Detector" Professor James Stone , Boston University and Department of Energy The observation of flavor oscillations in atmospherically produced neutrinos by the Super-Kamiokande Experiment represents the first indication of massive neutrinos and new physics beyond the Standard Model of Particle Physics. Neutrino physics in the context of oscillations will be discussed and a detailed description of the Super-Kamiokande detector will be presented. The latest experimental results on proton decay, atmospheric and long baseline neutrino studies will be shown. "The Fixed-Target Charmed Road to Understanding Hadrons" Dr. Jeff Appel , Fermilab Measurements involving charm quarks tell us about the nature and details of light hadrons. This talk will summarize how and what we are learning from charm fixed-target experiments about the usual ground-state hadrons, and about scalar resonances such as the sigma, kappa, and f_0's which have had uncertain histories so far. "Formation and Trapping of Ultracold Molecules by Photoassociation" Pierre Pillet , Laboratoire Aime Cotton "The Baryon Junction and High-Energy Nuclear Collisions" Brian Cole , Columbia University [Host: C. Dukes] In the 1970's Veneziano suggested the existence of a set of diagrams in Regge theory that could allow the baryon number to be "extracted" from a baryon in a single step in high-energy hadronic interactions. Because no experimental evidence for these diagrams was found, the idea was largely forgotten. However, in recent years it has been resurrected and re-cast in terms of the so-called "baryon junction" a (possible) non-perturbative topological defect in the gluon fields within the baryon. In this picture, the junction plays the role of "bookkeeper" for baryon number conservation in high-energy collisions. Current theoretical models suggest that diagrams involving the exchange of the junction only become important in hadronic (e.g. p-p) collisions at collider energies. However, the junction may become active at much lower energies in nuclear collisions due to the multiple interaction of the incident nucleons. 
I will use results from a new generation of experiments studying proton-collisions at the Brookhaven National Laboratory AGS and CERN SPS accelerators to illustrate the possible role played by the junction in the "stopping" of the protons and in the abundant production of strange baryons and the production of anti-baryons. I will then discuss the possibility that the junction may be responsible for some of the anomolous results obtained from fixed-target heavy-ion experiments at the CERN SPS that were recently argued to provide evidence for quark-gluon plasma formation in high-energy nuclear collisions. I will discuss future studies of junction physics in fixed-target proton-nucleus experiments and in proton-proton and proton-nucleus collisions at the Relativistic Heavy Ion Collider. I will finish by highlighting some recent speculation that novel states of matter formed from "meshes" of junctions and anti-junctions may be created in heavy-ion collisions at RHIC. "How QCD Works" Hank Thacker , University of Virginia Mathematically, the interaction between quarks and gluons is remarkably similar to the electromagnetic interaction of electrons and photons. But unlike QED, QCD has an essentially nonperturbative structure, as exhibited most strikingly by the absolute confinement of quarks, which represents a fundamental property of the QCD vacuum (complete screening of color charge). Another property of QCD, chiral symmetry breaking, is also a statement about the vacuum, i.e. that it is full of quark-antiquark pairs (analogous to Cooper pairs in BCS theory). Chiral symmetry breaking and quark confinement are probably related phenomena, but the connection is poorly understood. I will discuss recent lattice calculations which have begun to expose the structure of the QCD vacuum. "III-Nitride Micro- and Nano-Structures and Devices" Professor Hongxing Jiang , Kansas State [Host: E. Kolomeisky and J. Poon] Advances in materials research and novel structure designs have brought the dimensions of photonic devices to the scales of the wavelength of the light they emit, transmit, and detect. In this realm, quantum nature of light dominates, enabling more efficient and fast devices. In this talk, the fabrication and optical studies of micron and wavelength-scale photonic structures, including micro-cavities and micro-size light emitters, based on III-nitride wide bandgap semiconductors will be presented. Our recent work on sub-micron photonic structures prepared by e-beam lithography and plasma etching will be discussed. Potential applications of III-nitride micro- and nano-photonics in efficient energy conversion and optical communications will be also be summarized. "Quantum Entanglement as a Resource for Communication" William Wootters , Williams College Quantum mechanical objects can exhibit correlations with one another that are fundamentally at odds with the paradigm of classical physics; one says that the objects are "entangled." In the past few years, entanglement has come to be studied not only as a marvel of nature but also as a potential resource, particularly as a resource for certain unusual kinds of communication. 
This talk reviews three proposed communication schemes based on entanglement: (i) dense coding, which is the effective doubling of the information-carrying capacity of a quantum particle through prior entanglement with a particle at the receiving end; (ii) teleportation, in which a quantum state is transferred from one particle to another over a distance, apparently without traversing the intervening space; and (iii) the efficient pooling of classical data, in which separated participants arrive at a conclusion faster because they share entanglement. These three schemes highlight three distinct ways in which entanglement can enhance communication. "Using the World Wide Web for Physics Teaching and Learning: Exploring Where Pedagogy and Technology Meet" Evelyn Patterson , U. S. Air Force Academy The explosion of World Wide Web technology over the past several years has spurred the development of an ever-increasing number of web-based teaching and learning materials and techniques. Web technology is being used to support student-teacher, student-student, and teacher-teacher communications, often providing communications channels and possibilities not possible previously. At the same time, physics education research continues to provide more insight about how, why, and the extent to which our students do-- and don't-- learn physics. Can our research-based understanding of how students learn and the new unprecedented power of communications lead us to improved courses and programs? This talk will survey the spectrum of ways in which the web is being used by the physics education community to promote physics teaching and learning. It will also introduce and discuss a unique mix of pedagogy and the web technology, the "Just-in-Time Teaching" (JiTT) strategy, now being implemented by over 120 faculty at more than 60 institutions across the country and in Canada and Europe." "Recent news from the vacuum? The Muon g-2 Experiment at Brookhaven" B. Lee Roberts , Boston University Since the experiments of Stern and Gerlach, magnetic moments of "elementary" particles have been important in our quest to understand subatomic physics. A brief review of the history and foundations of this field will be given as an introduction to the muon g-2 experiment at the Brookhaven AGS. This experiment, E821, has recently reported a new result with a relative accuracy of 1.3 ppm, which is larger than the theoretical (Standard Model) value by 2.6 standard deviations. The physics context of this measurement, the experiment, and the analysis leading to this new result will be presented. "First results from the Relativistic Heavy Ion Collider" Jamie Nagle , Columbia University "Nanoscale ordering in soft materials near surfaces and interfaces" Prof. Pulak Dutta , Northwestern Univ. A material is 'soft' if its structure, and thus its properties, can change in response to very weak stimuli; hence the current interest in using soft materials for switching, sensing, etc. One way to induce structures that do not occur otherwise is with the help of a surface or interface. This talk will give some examples of the use of synchrotron radiation to look at how molecules self-organize near surfaces and soft-hard interfaces. Our studies of Langmuir monolayers (including their use as templates for inorganic nucleation), self-assembled films, and normal liquids near interfaces will be described. 
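The Wootters abstract above lists dense coding as a way to double the information-carrying capacity of a quantum particle using prior entanglement. The textbook mechanism behind that claim (a standard result, summarized here for reference rather than taken from the talk) is that applying one of the four Pauli operations $I$, $\sigma_x$, $\sigma_z$, $\sigma_x\sigma_z$ to one half of a shared Bell pair maps it onto one of the four mutually orthogonal Bell states
$$|\Phi^{\pm}\rangle=\frac{1}{\sqrt{2}}\bigl(|00\rangle\pm|11\rangle\bigr),\qquad |\Psi^{\pm}\rangle=\frac{1}{\sqrt{2}}\bigl(|01\rangle\pm|10\rangle\bigr),$$
so sending that single qubit lets the receiver, who holds the other half, distinguish four messages -- two classical bits per transmitted qubit.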
"Neutron Spin Structure Function Measurements at Jefferson Lab" Nilanga Liyanage , Jefferson Laboratory Spin structure functions provide basic information about the spin of the quark distributions inside the nucleon. Measurements at high energy laboratories have provided precision spin structure function data at low values of xbj. However, there is little precision data at low and moderate values of momentum transfer and high values of xbj . This is especially true for the neutron due to the absence of a free neutron target. Polarized ND3 , NH3 and 3He targets at Jefferson lab combined with its high-polarization continuous electron beam have provided the opportunity to make high precision neutron spin structure measurements in the high xbj region. As examples of high precision measurements possible at Jefferson lab, I will describe two planned neutron spin structure measurements, one in the deep inelastic region and the other in the resonance region, using the Hall A polarized 3He target. The deep inelastic measurement will provide the first precision test of predictions for the virtual photon asymmetry An1 in the valence quark region. The measurement in the resonance region, combined with the DIS measurement will provide a first test of quark-hadron duality for spin structure of the neutron Joint Physics/Biology Colloquium "Ecological dynamics of multispecies communities" Alan McKane , Department of Physics -University of Manchester, UK [Host: Tim Newman] Many theoretical physicists with a background in non-equilibrium statistical mechanics are becoming interested in exploring mathematical models of ecosystems. The reason for this is clear when one realizes that such models typically involve a large number of individuals interacting according to simple rules and that the ultimate aim is to compute coarse-grained or long-time behavior. The ingredients of these models include population dynamics, predator-prey interactions, competitive effects, speciation and immigration. Two examples will be discussed. One is a model which describes the evolution of food webs using adaptive dynamics and the other, a stochastic model of species-rich ecosystems which makes predictions concerning the form of the species abundance distribution. "Quantum Confinement of Electrons and Phonons in Single Wall Carbon Nanotubes" Prof. A T. Johnson, Jr. , Univ. of Pennsylvania Single wall carbon nanotubes are a fascinating set of nanomaterials whose unique physical properties reflect the effect of quantum confinement on the electronic and phonon energy spectrum. Electron waves confined to the cylindrical tube wall obey periodic boundary conditions. Their energy spectrum consists of a set of one-dimentional subbands, making nanotubes metals or semiconductors depending on the precise wrapping of the constituent graphene sheet. I will discuss functional nanotube devices we have made, including field effect transistors, diodes, and highly conducting electrical interconnects. Nanotube sound waves (phonons) also experience quantum size effects. This makes nanotubes incredibly stiff, and may enable mechanical composites or nano-mechanical systems. We recently measured the effect of the quantized phonon spectrum on the specific heat of nanotubes as well as their thermal conductivity. Our results support theoretical predictions that nanotubes have an extremely high thermal conductivity, perhaps the highest of any known material. 
"The Musical Score, the Fundamental Theorem of Algebra, and the Measurement" Rick Trebino , Georgia Tech [Host: Louis Bloomfield] To measure an event in time requires a shorter one. As a result, the development of a technique to measure ultrashort laser pulses--less than 10-12 seconds long and the shortest events ever created--has been particularly difficult. We have, however, recently developed a simple method for fully characterizing these events, that is, for measuring a pulse's intensity and phase vs. time. This method relies on two seemingly unrelated ideas: the concept of the musical score and the fact that the Fundamental Theorem of Algebra fails in two dimensions. Specifically, an optical analog of a musical score of the pulse is produced by measuring its spectrogram. And the mathematics involved is equivalent to the two-dimensional phase-retrieval problem--a problem that is solvable only because the Fundamental Theorem of Algebra fails in two dimensions. We call the method Frequency-Resolved Optical Gating (FROG), and it is simple, rigorous, intuitive, and general. It can measure pulses in all spectral ranges, on a single-shot basis, and over a wide range of energies. FROG has been used to measure pulses as short as 4.5 femtoseconds (4.5 x 10-15 sec), and it can measure two pulses simultaneously. More recently, we have shown that FROG can be used in conjunction with spectral interferometry to measure essentially arbitrary pulses with as little as zeptojoules of energy (less than one photon!) on a multi-shot basis. "Women Becoming Mathematicians: The Doctoral Classes of 1940-1959" Margaret Murray , Department of Mathematics, Virginia Tech I give a report on an oral history-based study of the approximately 200 women who earned Ph.D.'s in mathematics from American colleges and universities during the years 1940-1959. I focus in some detail on the following questions: How did the women of this generation develop their mathematical interests and ambitions? Which individuals and institutions were particularly supportive of their mathematical goals? What obstacles to professional success did they encounter as they tried to build careers in mathematics? How did they balance the competing demands of career and personal life? How did they strike a balance between teaching, research, and service to the profession? What lessons can contemporary mathematicians, male and female, learn from the experiences of this generation? Special Atomic Colloquium - Please note special time "Quantum Entanglement and Quantum Teleportation" Yanhua Shih , Univ. of Maryland Baltimore County " Why do We Think Neutrinos Have Mass? And What's Next? " Boris Kayser , National Science Foundation We explain why the evidence for nonzero neutrino masses is compelling. Then, we turn to the questions about neutrinos raised by the presence of their nonzero masses. These questions include: How many different neutrinos are there? How much do they weigh? Is each neutrino identical to its antiparticle? How will we answer questions like these? "Superconductivity in a New Family of Heavy-Fermion Compounds" Joe Thompson , LANL [Host: Shivaram] The discovery of superconductivity in CeCu2Si2 nearly 20 years ago was totally unexpected and contradicted fundamental tenants of the well-established BCS theory of superconductivity. 
Instead of the magnetic moment carried by Ce3+ suppressing superconductivity, as expected from BCS, the presence of Ce was essential for superconductivity and responsible for increasing the effective mass of the electrons participating in superconductivity by orders of magnitude; hence, heavy-fermion superconductivity. As we now know, CeCu2Si2 was the first example of superconductivity mediated by antiferromagnetic spin fluctuations, which also may be the dominant pairing mechanism in high-temperature superconductors, and other parallels between heavy-fermion and cuprate superconductivity are emerging. Recently, we have discovered a new family of heavy-fermion materials, CeMIn5 (M = Rh, Co, and Ir), in which superconductivity appears at temperatures higher than in any other heavy-fermion system. These materials form in a quasi-2D structure, which makes an analogy with the cuprates' magnetism and superconductivity appealing. Though much remains to be learned about their properties, this new family appears to be quite interesting and provocative.
"Theory of de Broglie Waveguides" Professor Marvin Girardeau, University of Arizona Several experimental groups have recently succeeded in constructing quasi-one-dimensional (1D) atom waveguides and loading them with Bose-Einstein condensates of ultracold atomic vapors. An important motivation for such studies is the goal of constructing atomic de Broglie wave beam splitters and interferometers for ultrasensitive detection of very weak accelerations and gravitational perturbations. This talk will discuss the many-body Schrodinger dynamics of 1D systems of impenetrable bosons, which is exactly soluble via an exact mapping from an ideal Fermi gas to a strongly interacting Bose gas of impenetrable point particles. After description of some completed work on such systems in 1D toroidal geometries and harmonic traps, some work in progress will be described, concerned with a generalization to a model of a de Broglie beam splitter/interferometer using two coupled waveguides.
Astronomy Building, Room 201 "Zeroing in on cosmological parameters" Max Tegmark, University of Pennsylvania - Physics Dept. [Host: T. X. Thuan] I describe the sharp constraints on cosmological parameters placed by recent measurements of the cosmic microwave background, distant supernovae, galaxy clustering, etc., and how different types of measurements allow powerful cross-checks to be made. I also comment on outstanding puzzles in the emerging cosmological "standard model" and upcoming measurements that may resolve them.
"Quantum cryptography" Richard Hughes, Los Alamos National Laboratories Quantum cryptography, or more accurately quantum key distribution (QKD), uses single-photon transmissions to generate the shared, secret random number sequences, known as cryptographic keys, which are used to encrypt secret communications. Appealing features of QKD are that its security is based on principles of quantum physics and attempted eavesdropping can be detected. (Heisenberg's uncertainty principle ensures that an adversary can neither successfully tap the key transmissions, nor evade detection, because eavesdropping raises the key error rate above a threshold value.) I shall describe two quantum cryptography systems, based on the transmission of non-orthogonal single-photon states to generate shared key material, at Los Alamos.
In one experiment we are generating key material over a 48-kilometer optical fiber path, and in the other by transmitting photons over a 1.6-km atmospheric path in daylight. In both cases, key material is built up using the transmission of a single photon per bit of an initial secret random sequence. A quantum-mechanically random subset of this sequence is identified, becoming the key material after a data reconciliation stage with the sender. The atmospheric results show that QKD could be used for surface-to-satellite transmissions.
"History of the two-fluid model and Bose-Einstein condensation" Laszlo Tisza, MIT
"Quantum criticality in the high temperature superconductors" Professor Subir Sachdev, Yale University I discuss the phases and critical points of quantum antiferromagnets in two dimensions and their relationship to the physical properties of the high temperature superconductors. Non-magnetic impurities are argued to be a sensitive probe of the wavefunction of the electron spins: I will describe recent experiments on such impurities and the theoretical insights they have provided on the interplay of antiferromagnetism and superconductivity.
Joint Chemistry/Physics Colloquium "Collective excitations and Multidimensional optical spectroscopies of dendrimers and biomolecules" Shaul Mukamel, University of Rochester - Chemistry Dept. [Host: Ian Harrison]
Joint Physics and Engineering Physics Colloquium "Pure-Electron Plasma Experiments" Joel Fajans, University of California at Berkeley [Host: J. Dorning/Joseph Poon] Plasmas made only of electrons are remarkably stable and manipulatable. They are ideal for studying basic plasma physics, two-dimensional fluid dynamics, and nonlinear dynamics. I will discuss some basic plasma results, including a demonstration that like charges can attract rather than repel, and that sometimes mountaintops are just as stable as valley floors. Next I will describe some fluid results like the instability shown in the figure below. Finally I will discuss autoresonance, a very basic and general phenomenon which occurs in nonlinear oscillator systems.
"Superfluidity in low dimensions: beyond the mean-field theory" Eugene Kolomeisky, University of Virginia The Gross-Pitaevskii approximation is a long-wavelength theory widely used to describe a variety of properties of dilute Bose condensates, in particular trapped alkali gases. In this talk I will show that for short-ranged repulsive interactions this theory fails in one and two spatial dimensions, and appropriate low-dimensional modifications will be proposed. The new theory has a universal character, and some of its implications such as density profiles in confining potentials, superfluidity, solitons, and self-similar solutions will be discussed.
"The Search for Non-Newtonian Gravity" E. Fischbach, Purdue University [Host: Rogers Ritter] Ongoing attempts to unify the known fundamental forces lead to the suggestion that there may exist new gravity-like forces in nature. These would arise from the exchange of new light bosonic quanta among the constituents of ordinary matter, and would produce apparent deviations from the predictions of Newtonian gravity. The suggestion of such a "fifth force" in 1986 has led to a broadened view of the interaction of gravity and other known and hypothetical forces, and has helped to stimulate a large number of new experiments to search for weak long-range forces.
This talk will review both the theoretical motivation for such new forces, and the experimental results that have been obtained to date. More recently newer string-inspired theories have suggested the presence of additional macroscopic forces acting over sub-millimeter distances. Detecting such forces presents special challenges-both theoretical and experimental- for reasons that I will discuss. "On the road to measure CP violation and test the Standard Model:Observation of hadronic b --> u transitions" Yongsheng Gao , Harvard University CP violation is one of the great mystery of the universe. The major motivation of the first-generation B factories is to measure CP violations in the B meson system, especially the three CKM angles Alpha, Beta and Gamma. I'll present the first observation of hadronic b --> u transitions (B --> Pi+Rho-, Pi+Rho0, Pi+Pi-) which will be very important for the future measurements of the CKM angles Alpha and Gamma. Measuring Alpha and Gamma using these decay modes at the first-generation B factories will be discussed, along with a future outlook of B physics "Elastic Electron Deuteron Scattering: Past, Present and Future" Prof. Gerassimos (Makis) Petratos , Kent State University This talk will present a review of elastic electron scattering off the simplest nucleus, the deuteron. The elastic scattering process has long been a crucial tool in understanding the internal structure and dynamics of the nuclear two-body system. Studies of the deuteron form factors, measured in elastic scattering, offer unique opportunities to test both the conventional meson-nucleon "standard model" that describes the deuteron electromagnetic structure, and "nuclear chromodynamics" predictions of perturbative Quantum Chromodynamics based on the underlying quark-gluon substructure of the deuteron. A review of both the theoretical framework and of past (SLAC) and recent (JLab) measurements of the deuteron form factors will be presented "Searching for new particles in a high-energy neutrino beam: New results from Fermilab" Eric Zimmerman , Columbia University "Supernova - Gamma Ray Burst Connection" Roger Chevalier , University of Virginia "Dealing with Regional Conflict: Spin-Charge Inhomogeneity in Superconducting Cuprates and CMR Manganites." Takeshi Egami , University of Pennsylvania The discovery of high-temperature superconductivity was a double shock to the condensed matter physics community. Not only the critical temperature was so outrageously high (until then 30 K was considered to be the theoretical maximum), but magnetism appeared to be intimately involved, while for a long time magnetism had been considered to be incompatible with superconductivity. It then became the holy grail of theoreticians to overcome this apparent paradox, and various high-wire-act theories have been proposed. In the meantime, experimental data are accumulating that suggest a more conventional method of avoiding regional conflict between the spin and charge, by segregation. However, just as the social and international problems complete segregation simply defers the problem and does not solve it. Oxides are far ahead of us, and appear to have reached an intelligent solution. We discuss the results of recent inelastic and elastic neutron scattering measurements on cuprates and manganites, and speculate what this solution might be. 
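The Fischbach abstract above describes searches for new gravity-like forces mediated by light bosons. Such a "fifth force" is conventionally parametrized as a Yukawa correction to the Newtonian potential,
$$V(r)=-\frac{G\,m_1 m_2}{r}\Bigl(1+\alpha\,e^{-r/\lambda}\Bigr),\qquad \lambda=\frac{\hbar}{m_b c},$$
where $\alpha$ measures the strength relative to gravity and the range $\lambda$ is set by the mass $m_b$ of the exchanged boson. Experiments of the kind described in the talk translate null results into excluded regions of the $(\alpha,\lambda)$ plane; the string-inspired sub-millimeter searches correspond to $\lambda$ below roughly a millimeter.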
"The Observation of Fractional Charge" Paul Fendley , Univerisy of Virginia - Physics "Breaking a one-dimensional chain: fracture in 1 + 1 dimensions" The breaking rate of an atomic chain stretched at zero temperature by a constant force can be calculated in a quasiclassical approximation by finding the localized solutions ("bounces") of the equations of classical dynamics in imaginary time. We show that this theory is related to the critical cracks of stressed solids, because the world lines of the atoms in the chain form a two-dimensional crystal, and the bounce is a crack configuration in (unstable) mechanical equilibrium. Thus the tunneling time, Action, and the breaking rate in the limit of small forces are determined by the classical results of Griffith. For the limit of large forces we give an exact bounce solution that describes the quantum fracture and classical crack close to the limit of mechanical stability. This limit can be viewed as a critical phenomenon for which we establish a Levanyuk-Ginzburg criterion of weakness of fluctuations, and propose a scaling argument for the critical regime. The post-tunneling dynamics is understood by the analytic continuation of the bounce solutions to real time. "New Techniques For Nanoscale Fabrication And Characterization" Robert Hull , Dept. Materials Science and Engineering, UVA The gallium focused ion beam produces highly collimated (10 nm - 1(mu)m) beams of high energy (3 - 30 kV) ions. These beams may be used as nanoscale "scalpels" to micromachine virtually any material by direct sputtering of the target surface. Combined with ion-beam induced deposition from organic vapors, this provides unique capabilities for sub 100 nm fabrication of three dimensional structures. I will describe how these capabilities form the basis of a new "nanoprinting" technology, for deep sub-micron pattern definition over planar and curved surfaces. In addition, imaging and spectroscopy in the focused ion beam system enables new routes for three-dimensional characterization and visualization of microscale structures. During sputtering by the primary beam, large numbers of secondary electrons and ions are produced, which may be used to form images of the sputtered surface. By concatenating images of surfaces at different depths during the sputtering process, three-dimensional reconstructions of the structure may be generated. These reconstructions can contain up to 107 independent pixels of information. Furthermore, using a quadrupole mass spectrometer, element-specific images may be obtained. These techniques enable "miroscopy in the third dimension" which can be of immediate and powerful impact in understanding material microstructure. "Aggregation Kinetics in Gelation, Traffic, Wealth, and other Everyday Phenomena" Prof. Sid Redner , Boston University In aggregation, clusters meet and irreversibly merge so that their average size grows continuously with time. This process describes, for example, making of jello and yogurt, raindrop formation in clouds, and the mass distribution of stars. I will present an elementary overview of cluster evolution in such aggregating systems. I begin by outlining the mean-field theory of aggregation and showing how scaling provides basic insights into long-time behavior. I will then discuss the intriguing relation between the cluster-size distribution and the first-passage probability of a random walk. Finally, I will discuss recent applications to traffic clustering and the distribution of wealth. 
"Neutrino Mass--Experimental Results from Super-Kamiokande" Jim Stone , Boston University/Department of Energy "Materials with Open Structures as Novel Thermoelectrics" Professor Ctirad Uher , University of Michigan "Cosmic Microwave Background Fluctuations: A Probe of Cosmology" David Spergel , Princeton University Observations of the microwave background are a powerful probe of the physics of the early universe and of cosmological parameters. Over the past few years, there has been a dramatic improvement in the quality of data. The current observations are consistent with a flat universe with a cosmological constant in which inflation produced the primordial fluctuations. Next year, NASA plans to launch MAP, a satellite that will make precision measurements of microwave background fluctuations. With these measurements, we will be able to test our basic cosmological paradigm. If correct, we can then use these observations to measure the basic cosmological parameters to high precision. "The Atacama Large Millimeter Array: Imaging Cosmic Dawns" Alwyn Wootten , National Radio Astronomy Observatory The Atacama Large Millimeter Array (ALMA), a project of the National Radio Astronomy Observatory and the European Southern Observatory, will be built over the coming decade in Northern Chile. ALMA will be a revolutionary telescope, operating at millimeter and submillimeter wavelengths and comprised of an array of individual antennas each 12 meters in diameter that work together to make precision images of astronomical objects. The goal of the ALMA Project is an array of 64 antennas that can be positioned as needed over an area 10 kilometers in diameter so as to give the array a zoom-lens capability. ALMA will image the universe with unprecedented sensitivity and sharpness at millimeter and submillimeter wavelengths. The energy density of radiation from both the Milky Way and from the diffuse extragalactic background peaks in the submillimeter. Aside from Cosmic Microwave Background photons, submillimeter photons are the most abundant photons in the Universe. Detailed imaging at these wavelengths will be a major step for astronomy, making it possible to study the origins of galaxies, stars and planets. To add a speaker, send an email to [email protected] Include the seminar type (e.g. Colloquia), date, name of the speaker, title of talk, and an abstract (if available). [Please send a copy of the email to [email protected].]
Coupling magnetic and plasmonic anisotropy in hybrid nanorods for mechanochromic responses
Zhiwei Li (ORCID: 0000-0002-1489-4506), Jianbo Jin, Fan Yang, Ningning Song & Yadong Yin (ORCID: 0000-0003-0218-3042)
Subjects: Metamaterials, Nanophotonics and plasmonics, Synthesis and processing
Mechanochromic response is of great importance in designing bionic robot systems and colorimetric devices. Unfortunately, compared to mimicking motions of natural creatures, fabricating mechanochromic systems with programmable colorimetric responses remains challenging. Herein, we report the development of unconventional mechanochromic films based on hybrid nanorods integrated with magnetic and plasmonic anisotropy. Magnetic-plasmonic hybrid nanorods have been synthesized through a unique space-confined seed-mediated process, which represents an open platform for preparing next-generation complex nanostructures. By coupling magnetic and plasmonic anisotropy, the plasmonic excitation of the hybrid nanorods could be collectively regulated using magnetic fields. It facilitates convenient incorporation of the hybrid nanorods into polymer films with a well-controlled orientation and enables sensitive colorimetric changes in response to linear and angular motions. The combination of unique synthesis and convenient magnetic alignment provides an advanced approach for designing programmable mechanochromic devices with the desired precision, flexibility, and scalability.
Mechanochromic materials that exhibit reversible and predictable color changes have broad applications in mechanical sensors, security devices, bionic robots, and smart windows1. Most current systems rely on photonic structures2,3,4, fluorescence5, and plasmonic resonance6,7; they are limited to providing colorimetric responses to simple deformations under stretching and pressing and also lack the flexibility of large-scale programmable device fabrication. Therefore, engineering mechanochromic responses to complex perturbations remains a challenge, although it is highly desirable in many real-world applications that involve both linear and angular perturbations, such as rotation, bending, and twisting.
The spatially differentiated photon-electron resonance of anisotropic plasmonic nanostructures8,9 offers excellent opportunities to achieve these colorimetric responses and has enabled a variety of fascinating applications such as diffraction-unlimited optics10, laser writing11, negative/zero-index metamaterials12, optical modulator13, and photothermal conversion14,15. Almost all of these explorations are, however, based on units that are either fabricated on solid substrates by advanced lithography and electrochemical self-assembly16,17,18,19 or carefully chosen from those chemically synthesized and then randomly deposited on substrates20,21,22. These methods produce anisotropic plasmonic nanostructures with fixed orientation relative to substrates and therefore lack flexibility for active tuning of plasmonic excitation for complex mechanochromic responses. Selective excitation of multiple plasmonic nanorods has been achieved via incorporation into liquid crystals (LCs)23,24, where orientational control could be realized by applying electric fields. Such a system may find applications in electrochromic displays25,26, but also shares the limitations of conventional LC devices. Through mechanical stretching of polymer matrices or masked metal evaporation, colloidal plasmonic nanoparticles were also made into oriented arrays to display polarization-dependent coloration27,28,29. Such systems, however, have limited flexibility in precise orientational control in the exact locations and matrices desired for fabricating complex mechanochromic devices. Magnetic–plasmonic hybrid nanostructures represent a class of smart nanomaterials for precise orientational control, and have been exploited in biomimetics, bioimaging, sensing, and information encryption30,31,32,33. They have been produced by co-assembly of plasmonic and magnetic nanomaterials34,35,36, and significant improvement is still desired in the dimensional control, structural stability, and the precision of alignment for designing mechanochromic films with predictable color changes37. Here, we report the development of programmable mechanochromic films with precise colorimetric responses to a number of mechanical perturbations by preparing magnetic–plasmonic hybrid nanorods through an unconventional colloidal synthesis approach. The plasmonic nanorods are grown alongside the magnetic ones through a seed-mediated process confined within highly permeable polymer shells, producing compact hybrid nanorods with perfect structural alignment, coupled magnetic–plasmonic properties, and excellent colloidal stability. This versatile approach represents an open platform that allows the design of a wide range of high-quality complex nanostructures. Using Fe3O4/Au hybrid nanorods as the active components, we demonstrate that the coupled magnetic and plasmonic anisotropy can enable efficient control of their orientation and subsequently the plasmonic excitation through magnetic means, which is confirmed by simulation and analytical solution derived from bra-ket notation. Based on the clear orientation–excitation correlation and conventional lithography, we magnetically align hybrid nanorods along the desired directions in the defined locations of the polymer matrices and further develop plasmonic films with pre-designed mechanochromic responses under various mechanical perturbations. Synthesis of magnetic–plasmonic hybrid nanostructures The space-confined seed-mediated growth of magnetic-plasmonic hybrid nanorods is depicted in Fig. 1a. 
FeOOH nanorods (120 nm × 20 nm, Supplementary Fig. 1a) were synthesized by a high-temperature hydrolysis reaction38,39 and then reduced to Fe3O4 by a polyol process with the protection of a silica shell (Fig. 1b)40,41. The blocking temperature and the magnetic anisotropy constant of Fe3O4@SiO2 nanorods were found to be 190 K (below RT) and 1.8 kJ m−3 (one order lower than the magnetocrystalline anisotropy constant), indicating superparamagnetism with a dominant shape anisotropy (Supplementary Fig. 1). After immobilizing Au seeds through electrostatic interaction42, the nanorods were overcoated with a layer of resorcinol-formaldehyde (RF) resin43, whose cross-linking was enhanced by further heating at 100 °C. Meanwhile, the silica interlayer was etched by a base, producing a magnetic nanorods@voids@RF nanostructure with small Au seeds dispersed homogeneously within the RF shells (Fig. 1c). Fig. 1: Synthesis and characterization of magnetic-plasmonic hybrid nanostructures. a Scheme of the confined growth towards magnetic-plasmonic hybrid nanorods. In the last step of the scheme, Fe3O4 nanorod and RF shell are removed to clarify the concave structure of the Au nanorod. TEM images of nanorods after SiO2 coating (b), RF coating (c), seeded growth with 15 µL (d), 25 µL (e) of the precursor. f TEM image showing hybrid nanorods with two typical configurations (left: side by side; right: overlapped). g HAADF and EDS mapping images of the hybrid structures. h The cross-sectional line profile of element distribution. i The real-time extinction spectra of cAuNRs with a time interval of 15 s. j Dependence of peak positions of surface plasmonic resonance and aspect ratios of cAuNRs on the volume of the precursor. The reaction kinetics is controlled by adding different amounts of precursors as indicated. Error bars represent the standard deviations from the measurement of ten hybrid nanorods in TEM images. The seeded growth of Au was carried out using our previously developed procedure44. Interestingly, once a small amount of HAuCl4 was added, only one isotropic Au nanoparticle formed within each RF shell (Supplementary Fig. 2a) due to Ostwald ripening, which involves the initial dissolution of smaller seeds by oxidative etching by I−/O2 and then re-deposition onto the larger seeds45. Further growth induced a unique concave structure along the long axis of Au nanorods (denoted as cAuNRs hereafter)46. Depending on the amount of added precursors, the seeds could grow progressively into cAuNRs with highly uniform size, shape, and perfect parallel alignment to the Fe3O4 nanorods (Supplementary Fig. 1d, e). Details of the unique concave structures are shown in Fig. 1f with two typical orientations (side-by-side and overlapped configurations) of cAuNRs. The hybrid nanostructure is further confirmed by element mapping (Fig. 1g) and energy-dispersive X-ray spectroscopy (EDS) analysis (Fig. 1h), with the latter clearly revealing side-by-side (left) and overlapping (right) configurations. The growth was isotropic initially and then switched to anisotropic mode, with longitudinal plasmon modes appearing at a longer wavelength and red-shifting continuously to 880 nm (Fig. 1i). While both transverse and longitudinal peaks became stronger as the seeded growth proceeded, the peak due to the surface concave structure appeared at 630 nm (see Supplementary Discussion I). With more precursors, the growth became faster, resulting in cAuNRs with larger aspect ratios (Fig. 1j; Supplementary Fig. 2e).
By manipulating the reaction kinetics, we could produce cAuNRs with different aspect ratios (up to 3.8) and correspondingly control their longitudinal resonance wavelengths. Bra-ket notation of plasmonic excitation To understand the plasmonic excitation of anisotropic nanostructures under linearly polarized light, we first derived the analytical equations of plasmonic excitation based on bra-ket notation. Figure 2a shows an arbitrary configuration of cAuNRs, whose orientation can be mathematically expressed by a ket, |α, Ɵ > . Under z-polarized light (Fig. 2b), the bra-ket notation of orientation state, ׀S > , of cAuNRs in Fig. 2a is expressed as AL׀α, Ɵ > + AT | 90o+ α, Ɵ > , where the first and second terms determine the longitudinal and transverse excitation, respectively. Exciting plasmon resonance of cAuNRs under polarized light could be interpreted as polarizer operator (P, |z > < z|) operating on the corresponding ket (Supplementary Fig. 4): $$\left| {{{A}} > = {P}} \right|\psi _{\mathrm{L}} > + {P}|\psi _{\mathrm{T}} > = {A}_{\mathrm{L}}{\mathrm{cos}}\alpha \left| {{z} > - {A}_{\mathrm{T}}{\mathrm{sin}}\alpha } \right|{z} > $$ where ALcosα and ATsinα represent longitudinal and transverse excitation coefficients, respectively. The resulted ket, |z > , indicates that the resonance happens along the z direction. Given an arbitrary orientation ket, the expectation value of excitation can be derived as follows by using the bra-ket theorem: $$< {A}(\alpha ,\theta ) > = < \psi _{\mathrm{L}}\left| {P} \right|\psi _{\mathrm{L}} > + < \psi _{\mathrm{T}}\left| {P} \right|\psi _{\mathrm{T}} > = {I}_{\mathrm{L}}{\mathrm{cos}}^2\alpha + {I}_{\mathrm{T}}{\mathrm{sin}}^2\alpha$$ It predicts that the expectation value of excitation is only dependent on the azimuthal angle, α (see Supplementary Methods II). We used ratiometric data processing to quantify the correlation between excitation states and azimuthal angle, α, which helps to eliminate signal fluctuation and backgrounds: $$\frac{{ < {E}_{\mathrm{L}}\left( {{\alpha }},{\theta} \right) > - < {E}_{\mathrm{L}}\left( {90^{\mathrm{o}},{\theta}} \right) > }}{{ < {E}_{\mathrm{L}}\left( {0^{\mathrm{o}},{\theta}} \right) > - < {E}_{\mathrm{L}}\left( {90^{\mathrm{o}},{\theta}} \right) > }} = {\mathrm{cos}}^2{\alpha}$$ $$\frac{{ < {E}_{\mathrm{T}}\left( {{\alpha }},{\theta} \right) > - < {E}_{\mathrm{T}}(0^{\mathrm{o}},{\theta}) > }}{{ < {E}_{\mathrm{T}}\left( {90^{\mathrm{o}},{\theta}} \right) > - < {E}_{\mathrm{T}}(0^{\mathrm{o}},{\theta}) > }} = {\mathrm{sin}}^2{\alpha}$$ where E is transverse or longitudinal extinction at given orientations. Fig. 2: Optical tunability of colloidal cAuNRs. a Schematic illustration of cAuNRs under the orientational state |α, Ɵ > with respect to the polarization of light. b Tuning plasmonic extinction of cAuNRs under polarized light and the corresponding mathematical interpretation by bra-ket notation. c, d Digital images of cAuNRs dispersions under normal (c) and polarized light (d). In both c and d, the colloidal dispersions from left to right correspond to spectra in Supplementary Fig. 2f (from bottom to top). e, f Tuning the extinction of cAuNRs under normal (e) and polarized light (f) for samples of highlighted columns in (c) and (d), respectively. e, f share the same y axis. The spectra were measured with an angle step of 15o. g Correlation between the excitation modes and orientational states of cAuNRs, with the fine spectra tunability shown in f. 
The abbreviations, Numer, Sim, and Exp, represent numerical, simulation, and experimental results, respectively. Error bars represent the standard deviations from three experimental measurements. h, i Summary of L-mode to T-mode ratio of different dispersions in c and d achieved by varying α under normal (h) and polarized light (i). The azimuth angle, Ɵ, was set at 90o. P and k are the polarization and wave vector of the incident light, respectively. Tuning plasmonic excitation by magnetic fields We first studied the orientation-dependent plasmonic excitation of cAuNRs by measuring the extinction of their colloidal dispersions in different magnetic fields. The perfect parallel alignment between cAuNRs and Fe3O4 nanorods, enabled by our unique synthesis, facilitated convenient magnetic control of the plasmon resonance of cAuNRs. Under an ordinary light (Fig. 2c), both transverse and longitudinal modes were excited (Supplementary Figs. 10, 11), and consequently, the solutions appeared gray. Under polarized light, selective excitation of transverse or longitudinal mode could be achieved by magnetically aligning cAuNRs to |90o, 90o > and |0o, 90o > , respectively (Fig. 2d; Supplementary Movies 1, 2). At |90o, 90o > , only the transverse mode at 525 nm was excited, and the solution of cAuNRs with different ARs was red (middle panel in Fig. 2d). At |0o, 90o > , the color of the solution turned from red to blue, green and finally yellow (bottom panel in Fig. 2d) due to the selective excitation of longitudinal modes and their continuous red-shift. The extinction spectra of cAuNRs with an aspect ratio of 2.5 are shown in Supplementary Fig. 2e, f under normal and polarized light, respectively. As shown in Fig. 2g and Supplementary Fig. 9, the dependence of plasmonic excitation on α from experimental measurements and simulation is consistent with the analytical solution of Eq. (2). To quantitatively describe the color brightness of the dispersions, we calculated the ratio between longitudinal (EL) and transverse extinction (ET) as f(α) = EL(α)/ET(α). As summarized in Fig. 2h, the highest contrast under ordinary light is 1.25, which is of high consistency with the gray/brown color in the aqueous solutions (bottom panel in Fig. 2c). Under polarized light, the factor can be modulated in a much broader range, from 0.5 to 2.7 (Fig. 2i), giving rise to obvious color changes in the aqueous solutions once aligning the cAuNRs from y axis to z axis via a magnetic field (Fig. 2c). Programmable mechanochromic response to stress The dependence of plasmonic excitation of cAuNRs on their orientation offers a reliable tool for fabricating mechanochromic devices. To this end, we prepared a cAuNRs/polymer composite film with nanorods aligned along a given direction by UV-curing an aqueous dispersion of cAuNRs and acrylamide under a uniform magnetic field (see Supplementary Discussion IV). The parallel alignment of cAuNRs with the magnetic fields was confirmed by the good agreement between the measured and theoretical extinction of cAuNRs in the films (Supplementary Fig. 13). The SEM images in Supplementary Fig. 14 demonstrated the well-defined orientational order of cAuNRs in the polymer matrices, and further statistical analysis revealed a narrow normal distribution of cAuNRs orientation along the directions of magnetic fields (SD = 2.8o). 
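The orientation dependence described above can be illustrated with a short numerical sketch. The block below (my own illustration, using assumed peak extinctions and an assumed constant background rather than the paper's measured data) evaluates the cos²/sin² law of Eq. (2), shows how the ratiometric processing of Eqs. (3)-(4) removes the background, and computes a brightness ratio analogous to f(α) = EL(α)/ET(α); it also evaluates the derivative of cos²α, which anticipates the sensitivity argument in the next section.

```python
# A small numerical sketch (assumed illustrative values, not the paper's measured data) of the
# orientation dependence derived above: <E(alpha)> = I_L*cos^2(alpha) + I_T*sin^2(alpha) for the
# longitudinal mode, the ratiometric normalization of Eqs. (3)-(4), and the longitudinal-to-
# transverse ratio f(alpha) = E_L/E_T used to describe color contrast.
import numpy as np

I_L, I_T = 1.0, 0.4                       # hypothetical peak extinctions of the two modes
alpha = np.radians(np.arange(0, 91, 15))  # azimuthal angles from 0 to 90 degrees

E_L = I_L*np.cos(alpha)**2 + 0.05         # longitudinal extinction plus a constant background
E_T = I_T*np.sin(alpha)**2 + 0.05         # transverse extinction plus a constant background

# Ratiometric processing (Eq. (3)): the background and scale drop out, leaving cos^2(alpha).
ratiometric = (E_L - E_L[-1]) / (E_L[0] - E_L[-1])
print(np.allclose(ratiometric, np.cos(alpha)**2))   # True

# Brightness contrast f(alpha) = E_L/E_T (values depend on the assumed I_L, I_T, background).
print(np.round(E_L/E_T, 2))

# Sensitivity of the longitudinal mode to small angle changes: d(cos^2 alpha)/d(alpha) = -sin(2*alpha),
# which is largest in magnitude near 45 degrees -- consistent with the [30, 60] degree design window
# discussed in the next section.
print(np.round(-np.sin(2*alpha), 2))
```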
Inspired by the precise alignment of cAuNRs with the applied magnetic fields, we then proposed a simple method to prepare mechanochromic films with optimal and predictable colorimetric responses to stress. As illustrated in the upper panel of Fig. 3a, cAuNRs with a pre-designed orientation (׀α, 90o >) will be re-configured to another alignment (׀α + Δα, 90o >) with small changes in azimuth angle in response to a unidirectional pressing or stretching. An ideal mechanochromic film is expected to exhibit a substantial change in its optical properties when experiencing a minimal change in its azimuth angle during mechanical perturbation. Therefore, we started by calculating the first derivative of the longitudinal excitation of cAuNRs to optimize the mechanochromic sensitivity. As plotted in Fig. 3a, it approaches maximum and then decreases dramatically as α increases from 0o to 90o, suggesting that [30o, 60o] (slope threshold of 0.9) is the optimal range for engineering mechanochromic film with high sensitivity. To verify this hypothesis, we prepared three plasmonic films, in which cAuNRs were aligned randomly, 0o, and 30o to the surface normal. When the films were subjected to various pressures, their plasmonic excitation was monitored in-situ in real-time (Supplementary Fig. 16). The embedded cAuNRs tended to rotate to a horizontal position due to the elastic deformation of the polymer under vertical pressures. In Fig. 3b, as the pressure increased from 0 to 67.7 kPa, the intensity of longitudinal modes of cAuNRs gradually increased. Interestingly, the change of longitudinal excitation (ΔE) under 30o was significantly larger than that of 0o or random orientation (Fig. 3c). This observation is consistent with our theoretical prediction of sensitivity and experimentally verifies the proposed working principle for designing highly sensitive mechanochromic films. We further investigated the mechanochromic response of the films under stretching (see Supplementary Discussion V). The extinction spectra were systematically measured by stretching the film along different directions relative to the rod orientation. Under unidirectional strains (ε), the film elongates along the axial direction and narrows due to the Poisson effect. Therefore, the cAuNRs realign to the axial direction and produce traceable changes in their extinction spectra (Supplementary Fig. 18). In the three representative films in Fig. 3d, the longitudinal and transverse modes were gradually enhanced and suppressed, respectively, when the strain increased to 30%. Figure 3e reveals a linear correlation between ΔE and ε, the fitting slopes of which are highly dependent on the initial alignment (α) of cAuNRs. As summarized in the inset of Fig. 3e, the film exhibits anisotropic mechanochromic responses during stretching. More specifically, we observe the maximum slope at 45o, and when α deviates from this angle, the slope decays to a negligible value (Fig. 3f). The dependence on α can be predicted by the first derivative of the longitudinal excitation in Fig. 3a, indicating the general applicability of the mechanochromic film for stress sensing. By utilizing the nonlinear dependence of colorimetric response on α, we prepared a film with programmable mechanochromic responses to stress. Differential chromatic responses could be produced by patterning cAuNRs with different orientations within the film. In the specific example shown in Fig. 
3g and Supplementary Movie 3, cAuNRs were magnetically aligned vertically (90o) in the red stars, and 45o in other parts to the stretching direction. During stretching, the ΔE of cAuNRs in the red stars was negligible while it gradually increased with strains in other areas, resulting in enhanced longitudinal plasmonic excitation and a complementary blue color. Therefore, the plasmonic film exhibited a changing contrast as the strain increased, providing a highly sensitive and vivid colorimetric response. Fig. 3: Programming the mechanochromic response by magnetic alignment. a The design principles of mechanochromic response of plasmonic films upon pressing and stretching. b Extinction spectra of the plasmonic films under different pressures with cAuNRs aligned along 30o to the surface normal. c Intensity changes of longitudinal modes when the plasmonic films were subject to different pressures. Insets: cross-section view of the plasmonic films. d Extinction spectra of the plasmonic films under different strains with cAuNRs aligned along 75o (left panel), 45o (middle panel), and 30o (right panel) to the stretching direction. Spectra were measured as strain (ε) increased from 0% to 30% with a step of 1%. Insets: top view of the plasmonic films during stretching. e Summary of intensity changes of cAuNRs under different strains. Arrows in the inset indicate the slopes of the mechanochromic response. f Anisotropic mechanochromic response of the plasmonic films enabled by magnetic alignment. The abbreviation, Numer, indicates the numerical results. g Orientation-dependent mechanochromic response of plasmonic films enabled by magnetically aligning cAuNRs along pre-designated directions. The top view of the plasmonic film is illustrated in the left panel to show the alignment of cAuNRs. Error bars in c and e represent the standard deviations from three experimental measurements. Motion-active plasmonic films In addition to simple pressing and stretching, we further demonstrate the versatility of the magnetic alignment approach for preparing mechanochromic devices with programmable colorimetric responses to linear rotation, bending, and nonlinear twisting. Figure 4a illustrates the alignment of cAuNRs along 45o out of plasmonic films, notated as |45o, 90o > under y-polarization. When the film was rotated by 45o, the orientation of cAuNRs became 0o. Consequently, the films turned to red as only the transverse mode of cAuNRs was excited (Fig. 4b). At −45o, the plasmonic excitation of cAuNRs transited to |90o, 90o > . The complementary green color of the longitudinal mode was observed in the films. In contrast to uniform color changes upon rotating, bending induced different colors at the two ends of the film due to the separation of the excitation states. For example, bending the film by 45o downward realigned cAuNRs into vertical,|0o, 90o > , and horizontal, |90o, 90o > , orientations, which further induced selective transverse (left end) and longitudinal (right end) excitation correspondingly and exerted uniform red and green colors at the two ends (Fig. 4c; Supplementary Movie 4). To confirm the predicted plasmon modes against rotation, we measured the extinction spectra at various rotation angles (Supplementary Fig. 19a, b). As α increased, transverse extinction was enhanced while longitudinal excitation was suppressed. The relative extinction was derived by Supplementary Eq. (18) and plotted against the transverse-mode angle (ɸT) in Fig. 
4d, which further confirms the trigonometric prediction of the bra-ket theorem (Eq. (2)). An excellent agreement was also found between the derived ɸT–α correlation from Supplementary Eq. (19) and theoretical prediction (Supplementary Fig. 19c), which explicitly demonstrated the linear nature of rotation. Notably, one may expect a similar dependence of the mode angle on the bending or rotation angle because bending essentially induces opposite rotation effects on the two ends of the film (Supplementary Fig. 20). Fig. 4: Motion-active plasmonic films. a Schematics of the specific arrangement of cAuNRs in the plasmonic film. b Top views of the plasmonic films under different rotation angles. c Top views of the plasmonic films under different bending angles. d The transverse excitation of cAuNRs under different transverse phase angles (ɸT). e Schematics showing the in-plane 45° arrangement of cAuNRs inside the plasmonic film at the top (top) and side view (bottom). f Schematics of left-handed twist (top) and right-handed twist states (bottom). The twisting angle is set at 540°. g Digital images of the plasmonic film at initial (top panel), left-handed (middle panel), and right-handed twisting states (bottom panel). The polarization direction and orientation of cAuNRs are illustrated by red and black arrows, correspondingly. h Dependence of transverse excitation on localized rotation angle and y-coordinates by analyzing the superposition of intensity of transverse and longitudinal resonances to the overall lineshape. Insets: a picture of the twisted plasmonic film and the helical configuration of cAuNRs. i CD spectra of pure polymer and plasmonic films under twisted configuration. Error bars in d and h represent the standard deviations from three experimental measurements. By comparing rotating and bending, a critical principle became clear: symmetry-breaking along the active axis of mechanical perturbations induces separation of plasmon modes of cAuNRs during that motion (Supplementary Fig. 21). In order to verify this hypothesis, we have constructed a three-dimensional model with cAuNRs aligned 45° within the films (top-view in Fig. 4e). For both left- and right-handed helices formed upon 540° twisting, the aligned cAuNRs were re-configured into a helical form (Fig. 4f). Our further interpretation of the twisting perturbation revealed that the helical configuration was induced by a localized rotation effect. More specifically, twisting the film along its long axis implied localized rotational perturbations with a continuously increased rotating angle (γ), which can be visualized by configuring the three-dimensional orientation of representative rods (highlighted by the red dashed rectangle in Fig. 4e). The rods tend to rotate around the y axis by the twisting perturbation (Supplementary Fig. 22a). In this case, whereas the angle between their orientation and y axis remained at 45°, γ varied as a function of positions, whose dependence can be described as ω*y/L, where ω is the twisting angles, y is the coordinates of one arbitrary position in the films, and L represents the total length of the film (Supplementary Fig. 22b). In the experiment, the alignment of cAuNRs was achieved by applying magnetic fields at the designated angle followed by UV fixation. The successful alignment was evidenced by the uniform red and green color across the films under perpendicular and parallel polarization, respectively (Fig. 4g). 
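As a sketch of the local-rotation picture just described, the block below implements a simplified geometric model (my own assumption, not the paper's Supplementary Eqs. (21)-(22)): a rod initially at 45° in the film plane is rotated about the film's long axis (taken as y) by the local twist angle γ(y) = ω·y/L, and its coupling to in-plane polarized light (taken along x) is approximated by the cos² projection of Eq. (2), ignoring out-of-plane and refractive effects.

```python
# A geometric sketch of the local-rotation picture described above (a simplification under stated
# assumptions, not the paper's supplementary equations): the rod direction is rotated about the
# twist axis y by gamma(y) = omega*y/L, and the longitudinal coupling to x-polarized light is
# approximated by (u . x_hat)^2.
import numpy as np

def rod_direction(gamma):
    """Unit vector of a rod initially at 45 deg in the x-y film plane, rotated about y by gamma."""
    u0 = np.array([np.sin(np.pi/4), np.cos(np.pi/4), 0.0])
    R_y = np.array([[ np.cos(gamma), 0.0, np.sin(gamma)],
                    [ 0.0,           1.0, 0.0          ],
                    [-np.sin(gamma), 0.0, np.cos(gamma)]])
    return R_y @ u0

omega = np.radians(540)                  # total twist, as in the 540-degree demonstration
L = 1.0                                  # film length (arbitrary units)
y = np.linspace(0, L, 7)                 # positions along the film
gamma = omega * y / L                    # local rotation angle at each position

# Approximate longitudinal coupling to x-polarized light: (u . x_hat)^2 = 0.5*cos^2(gamma),
# which oscillates with a 180-degree period, so colored segments alternate along the film.
coupling = np.array([rod_direction(g)[0]**2 for g in gamma])
print(np.round(np.degrees(gamma)))       # 0, 90, 180, ... local rotation along the film
print(np.round(coupling, 3))             # maxima every 180 degrees, nodes in between
```

In this simplified picture the coupling depends on γ through cos²γ rather than linearly, which is consistent with the nonlinear dependence of the mode angle on the local rotation angle noted below.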
When the film was twisted, its initial uniform color turned into alternating red and green segments. Due to the symmetric orientation of cAuNRs against the x–y plane, the same changes of plasmon modes were observed during left-handed and right-handed twisting, thus inducing a handedness-independent mechanochromic response. When the polarizer was rotated by 90° in the x–y plane, the initial colors of the twisted film turned to the opposite ones. To understand the intrinsic dependence of optical properties on twisting, we measured the space-resolved extinction spectra of twisted films (Supplementary Fig. 23), whose transverse mode was extracted and substituted into Eq. (4). The resulted extinction–local rotation correlation was plotted in Fig. 4h, which followed exactly the profiles predicted by Supplementary Eq. (21). The plasmon resonance of cAuNRs gradually switched from longitudinal mode to transverse mode as the twisting propagated from 0° to 180° inside the helical film. The helical configuration of cAuNRs was evidenced by the appearance of significant circular dichroism (CD) signals in Fig. 4i, and further confirmed by the localized surface electric fields excited at 800 nm and Poynting vectors excited at 630 nm, which represented the strength of longitudinal and transverse resonance, respectively (Supplementary Figs. 24, 25). We examined the quantitative correlation of γ–ɸT from two independently measured quantities, the y-coordinates of the films and the transverse extinction (Supplementary Fig. 23b, c), and found that such dependence could be well predicted by the theoretical calibration curve calculated from Supplementary Eq. (22) (Supplementary Fig. 23d). During each 180° twisting, one node was formed, which separated two regions with orthogonally aligned cAuNRs as evidenced by the clear color contrast in Supplementary Movie 5. More importantly, twisting exerted nonlinear perturbations to the excitation states of cAuNRs because the dependence of transverse-mode angle (ɸT) on local rotation angle (γ) was nonlinear. More complex colorimetric responses to mechanical motions can be programmed by patterning differently aligned cAuNRs at different locations of the films. As shown in Fig. 5a, the alignment of cAuNRs in the rhombus and background regions was 45° to the top and 45° to the bottom, respectively. Primary gray/brown was observed as both transverse and longitudinal modes were excited in the two regions. When the film was rotated left-hand to 30° and 45°, the rhombus area turned blue, and the background appeared red. Interestingly, the colors switched when the film was rotated to −30° and −45°. The plasmonic excitation of cAuNRs in the two regions diverged from each other as rotation increased, thereby exhibiting the pre-designed images with high contrast. In the case of bending (Fig. 5b), the asymmetric mechanochromic response was observed in the regions separated by the bending axis due to the opposite effect of bending to the plasmonic excitation of cAuNRs with the same orientation in the two regions. Thanks to its solution processability, the fabrication can be easily scaled up to produce centimeter-sized films with programmable mechanochromic responses to bending and rotating (Supplementary Fig. 26 and Supplementary Movie 4). Fig. 5: Mechanochromic devices. Top views of the plasmonic films under (a) rotation and (b) bending. 
The numbers below the images in (a) indicate the longitudinal mode angles (ɸL) of cAuNRs embedded in the regions as indicated by the same color of the angles. In (a), the plasmonic film was rotated left-handed. In (b), negative bending angles (i and ii) indicate bending backward while positive angles (iv and v) indicate bending forward. c Scheme showing the mechanochromic pressure sensor. d The cAuNRs are aligned 45o to the surface normal in the butterfly patterns (middle panel). When subjected to pressure, the top plasmonic film expands upward or downward and exhibits asymmetric mechanochromic response in the two wings of the butterfly due to the excitation of different plasmon modes of cAuNRs. We further demonstrate the versatility of the system by constructing a mechanochromic film with readable and asymmetric colorimetric responses to pressure change in an air chamber (Fig. 5c). The chamber was made by polysiloxane and glass, with the top opening sealed by a plasmonic film containing cAuNRs uniformly aligned 45° to the surface normal in a "butterfly" pattern. At ambient pressure, the flat film was gray because both transverse and longitudinal modes were excited (Fig. 5d). When air was injected into the chamber, a positive pressure pushed the top layer outward and displayed an asymmetric colorimetric response in the two wings, showing blue on the left and red on the right. In contrast, the color switched in the two wings under negative pressure when air was extracted from the chamber. While this device may find potential use as a simple colorimetric pressure indicator, more complex patterns can be designed based on the convenient magnetic alignment to provide readable colorimetric responses that allow qualitative estimation of the applied pressure (Supplementary Fig. 27, Supplementary Movie 7). In this work, we report a direct colloidal synthesis method to prepare hybrid magnetic-plasmonic nanostructures with well-defined morphologies and physical properties. On the basis of the unique properties of the hybrid nanorods, we have further proposed a reliable method to prepare mechanochromic films with controllable colorimetric responses towards linear and nonlinear mechanical motions and deformations. The plasmonic excitation of Au nanorods can be conveniently tuned by a magnetic field, producing selective excitation of plasmon modes in both the colloidal dispersions and polymer matrices. This synthetic method combines the advantages of conventional confined growth with the flexibility of structural engineering of nanomaterials, which is expected to produce a number of hybrid nanostructures by simply changing the initial templates and chemical components of the secondary metals. The enhanced colloidal stability of hybrid magnetic-plasmonic nanorods enables us to magnetically align them along pre-designed directions within polymer films. The asymmetric alignment of anisotropic plasmonic nanostructures about the active axis of external mechanical stimuli induces the excitation of different resonance modes, producing readable color changes. The incorporation of hybrid nanostructures and their magnetic alignment are compatible with the current fabrication processes of soft actuators, robots, and biomimetic systems. 
In addition, the contactless, fast, and reversible magnetic interactions allow colloidal nanoparticles to be efficiently aligned and patterned in polymer matrices, making them potentially useful for various applications, such as displays, sensors and actuators, anti-counterfeiting devices, and biomimetic systems with simultaneous shape and color changes. Synthesis of FeOOH nanorods In total, 10.8 g of FeCl3·6H2O was dissolved in 400 mL of deionized water and heated to 87 °C in an oven for 18 h. After that, FeOOH precipitated at the bottom, and the supernatant was discarded. The precipitate was washed in deionized water three times at 11,000 rpm for 15 min and finally dispersed in 40 mL of DI water. Silica coating on FeOOH nanorods and reduction To form a uniform SiO2 coating, FeOOH was modified by PAA first. Typically, 216 mg of PAA (MW ~1800) was dissolved in 600 mL of DI water, and 10 mL of FeOOH aqueous dispersion was added afterward. The solution was magnetically stirred overnight. FeOOH was recovered by centrifugation and washed with DI water three times at 11,000 rpm for 15 min. The PAA-modified FeOOH was dispersed in 12 mL of DI water. For silica coating, 4 mL of FeOOH dispersion was concentrated into 2 mL and added to 40 mL of ethanol, followed by 250 µL of ammonium solution (28%). In all, 125 µL of TEOS was added for 4-nm SiO2 coating, or 250 µL of TEOS was added twice with a 1-h interval for 8-nm SiO2 coating. To achieve 12-nm SiO2 coating, 2 mL of FeOOH dispersion was added into 20 mL of ethanol followed by 250 µL of ammonium solution and 150 µL of TEOS twice with a 1-h interval. The mixture was magnetically stirred overnight. Afterward, FeOOH@SiO2 was centrifuged out at 14,500 rpm for 10 min and washed with ethanol once and water three times. FeOOH@SiO2 was reduced to magnetic nanorods in DEG at 220 °C. In total, 15 mL of DEG was heated to 220 °C under nitrogen, to which 250 µL of FeOOH@SiO2 aqueous solution was injected. The reduction was kept for 6 h under nitrogen protection. The final product was washed with ethanol and water three times and dispersed in 12 mL of ethanol. APTES modification Typically, the dispersion of magnetic nanorods in ethanol was added into 50 mL of ethanol. Then it was heated to 80 °C and 200 µL of APTES was added quickly. The surface modification usually took 5 h under nitrogen protection. Afterward, the product was washed with ethanol four times and dispersed in 12 mL of ethanol. The protonation of the amino group rendered the nanorods positively charged, which were capable of attracting the negatively charged Au seeds through electrostatic interaction. Au seed preparation The Au seeds were prepared according to a previously reported method42. To 45 mL of Milli-Q water, 12 µL of THPC and 250 µL of NaOH (2 M) were added. After 5 min, 2 mL of HAuCl4 (1%) was added. The solution was covered by foil and stirred overnight. Afterward, it was stored at 4 °C as the stock seed solution. Au seed attachment and PVP modification In all, 3 mL of magnetic nanorods in ethanol was centrifuged and washed with DI water twice. It was dispersed in 5 mL of DI water and added into 10 mL of Au seed stock solution. The mixture was stirred for about 1 h. Negatively charged Au seeds were attached to the surface of magnetic nanorods through electrostatic interaction. Excess Au seeds were discarded after centrifugation at 145 rpm for 10 min. Afterward, 5 mL of DI water was added to disperse the solids. It was then transferred to 10 mL of PVP solution (20 mg mL−1, MW = 10,000) under sonication.
The mixture was stirred overnight at room temperature. RF coating Excess PVP was removed by centrifugation at 14,500 rpm for 10 min. The solids were washed with DI water twice and finally dispersed in 28 mL of DI water. In total, 13 mg of resorcinol (R) and 18 µL of formaldehyde (F) were added into that dispersion sequentially. After that, the mixture was heated to 50 °C, and then 100 µL of ammonium solution (2.8%) was added quickly. The reaction was kept at 50 °C for 2 h and then heated to 100 °C. The condensation at 100 °C took 5 h. Meanwhile, the SiO2 shell was etched completely, forming a gap between the magnetic nanorods and the RF shell, with Au seeds inside, due to the weakly alkaline conditions. The final product was washed with Milli-Q water three times and dispersed in 2 mL of Milli-Q water as the seed solution. Confined growth of cAuNRs The seeded growth was carried out based on work reported previously44. Typically, for growth of concave AuNRs to the full length, chemicals were added into 2 mL of Milli-Q water in the following sequence: 500 µL of PVP (20 mg mL−1, MW = 10,000), 33 µL of KI (0.2 M), 33 µL of AA (0.1 M), 5 µL of HAuCl4 (0.25 M), and finally 25 µL of seed solution. The growth usually took ~5 min. By decreasing the amounts of AA, KI, and HAuCl4 proportionally, concave AuNRs with different aspect ratios were achieved (Fig. 1g, h; Supplementary Fig. 5). Preparation of plasmonic films To prepare the plasmonic films, 250 mg of AM, 14 mg of BIS, and 3 µL of 2-hydroxy-2-methylpropiophenone were dissolved in 1 mL of DEG. While AM is the monomer of the polymer, BIS and 2-hydroxy-2-methylpropiophenone act as a cross-linking agent and photoinitiator (PI), respectively. In all, 1 mL of cAuNRs colloidal dispersion was centrifuged at 9000 rpm for 3 min, and the supernatant was removed. Then, 200 µL of the precursor solution was added and sonicated for ~15 s to fully disperse the colloidal nanoparticles. To prepare the solid films, the solution containing magnetic–plasmonic nanorods was sandwiched between glass slides with a spacer (~1 mm), which was exposed to UV light (254 nm) for 1 min. For lithography, a photomask was first placed above the cover glass before UV irradiation. After the first exposure, the mask was removed, and different magnetic fields with pre-designed directions were applied, followed by another UV exposure. The sequence of magnetic alignment and UV exposure could be programmed to control the alignment of magnetic–plasmonic nanorods in specific locations. Experimentally, magnetic alignment was achieved by placing the precursor solution into the center of two identical permanent magnets to ensure uniform, parallel alignment. The field strength was measured to be 25 mT (250 G). Before UV irradiation, magnetic alignment was allowed to equilibrate for ~10 s, and magnetic fields were not removed during polymerization. To prepare the mechanochromic devices, PDMS films were used. In a typical process, silicone elastomer curing agent and silicone elastomer base were thoroughly mixed with a mass ratio of 1:10. The mixture was cured at 60 °C for 2 h47. The linear polarization effect and the selective excitation of plasmon modes shown in this work can be described by bra-ket notation in mathematics. Referring to Supplementary Fig. 4, the light is incident along the y axis and polarized along the z axis. The linear polarizer can be denoted as $$P = |z > < z|,$$ where z is the polarization direction. In the experiment, the cAuNRs were rotated in the x–y and y–z planes by magnetic control.
Specifically, in the spherical coordinate, the angle between z axis and the long axis of nanorods is defined as α, while it is defined as Ɵ for the angle between the x axis and the long axis of projection of Au nanorods in the x–y plane. In this scenario, the state of longitudinal and transverse mode of plasmon resonance shall be denoted as |α, Ɵ > , |90o + α, Ɵ > . The state of plasmon resonance of concave AuNRs under arbitrary orientation is simply the sum of individual mode multiplied by a magnitude term, AL for longitudinal mode and AT for transverse mode. Each term can be expressed as a linear combination of three orthogonal eigenvectors, ׀x > , ׀y > and ׀z > . Therefore, for nanorods, the state function of plasmonic extinction, simply the sum of state function of individual plasmon resonance mode (transverse and longitudinal), can be denoted as $$|\varPhi > = |\varPhi _{\mathrm{L}} > + |\varPhi _{\mathrm{T}} > = {A}_{\mathrm{L}}|{\alpha},{\theta} > + {A}_{\mathrm{T}}|90^o + \alpha ,\theta > $$ As discussed above, ׀α, Ɵ > = sinαcosƟ׀x > + sinαsinƟ׀y > + cosα׀z > . The kets, ׀x > , ׀y > and ׀z > , satisfy orthonormality condition. AL and AT are the maximum extinction coefficient of longitudinal and transverse excitation, respectively. Since the longitudinal excitation of hybrid nanorods is parallel to the long axis, its orientation state is the same as that of the rods. In the case of transverse modes, however, an angle of 90o is added to α as transverse modes are perpendicular to the orientation of nanorods. Accordingly, their maximum extinction intensity in the spectra can be described as AL2 and AT2. When z-polarized light passes through the solution to be measured, the longitudinal and transverse mode will be excited and features several absorption peaks in the UV–Vis spectra. These processes can be mathematically interpreted as the polarizer operator, P, acting on the state of nanorods, |\(\varPhi\)>, and the generated new ket defines the excited state of plasmon resonance. The total absorbance is derived as: $$\left| A {> } \right. = \, \left.P \right|\varPhi _{\mathrm{L}} > + P|\varPhi _{\mathrm{T}} > \\ = \, |z > < z|({A}_{\mathrm{L}}|{\alpha},{\theta} > + {A}_{\mathrm{T}}|90^o + \alpha ,\theta > ) \\ = \, \left| {z > < z} \right|\left( {A}_{\mathrm{L}}{\mathrm{sin}}\alpha {\mathrm{cos}}\theta |x > + {A}_{\mathrm{L}}{\mathrm{sin}}\alpha {\mathrm{sin}}\theta \left| {y > + {A}_{\mathrm{L}}{\mathrm{cos}}\alpha } \right|z > \right. \\ + \left. {A}_{\mathrm{T}}{\mathrm{sin}}(\alpha + 90^o){\mathrm{cos}}\theta |x > + {A}_{\mathrm{T}}{\mathrm{sin}}(\alpha + 90^o){\mathrm{sin}}\theta |y > + {A}_{\mathrm{T}}{\mathrm{cos}}(\alpha + 90^o)|z > \right) \\ = \, {A}_{\mathrm{L}}{\mathrm{sin}}\alpha {\mathrm{cos}}\theta \left| {z > < z} \right|x > + {A}_{\mathrm{L}}{\mathrm{sin}}\alpha {\mathrm{sin}}\theta \left| {z > < z\left| {y > + {A}_{\mathrm{L}}{\mathrm{cos}}\alpha } \right|z > < z} \right|z > \\ + {A}_{\mathrm{T}}{\mathrm{cos}}\alpha {\mathrm{cos}}\theta \left| {z > < z} \right|x > + {A}_{\mathrm{T}}{\mathrm{cos}}\alpha {\mathrm{sin}}\theta \left| {z > < z} \right|y > - {A}_{\mathrm{T}}{\mathrm{sin}}\alpha \left| {z > < z} \right|z > \\ = \, {A}_{\mathrm{L}}{\mathrm{cos}}\alpha |z > - {A}_{\mathrm{T}}{\mathrm{sin}}\alpha |z> $$ As seen here, the absorbance is characterized by two terms with the same phase but different coefficients. Whereas the first one defines the longitudinal mode, the second indicates the transverse one. 
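The projection just derived, and the expectation value computed in the next subsection, can be checked symbolically. The sketch below (my own verification, not code from the paper) writes the orientation kets as ordinary 3-vectors in the {|x>, |y>, |z>} basis and applies the polarizer operator P = |z><z|.

```python
# A minimal SymPy sketch that verifies the bra-ket algebra above: projecting the longitudinal and
# transverse state vectors onto P = |z><z| gives (A_L cos(alpha) - A_T sin(alpha))|z>, and the
# expectation value is A_L^2 cos^2(alpha) + A_T^2 sin^2(alpha).
import sympy as sp

alpha, theta, A_L, A_T = sp.symbols('alpha theta A_L A_T', real=True)

# Orientation ket |a, t> written as a 3-vector in the {|x>, |y>, |z>} basis.
ket = lambda a, t: sp.Matrix([sp.sin(a)*sp.cos(t), sp.sin(a)*sp.sin(t), sp.cos(a)])
phi_L = A_L * ket(alpha, theta)              # longitudinal state, |alpha, theta>
phi_T = A_T * ket(alpha + sp.pi/2, theta)    # transverse state, |90 deg + alpha, theta>

z = sp.Matrix([0, 0, 1])
P = z * z.T                                  # polarizer operator |z><z|

proj = sp.trigsimp(P*phi_L + P*phi_T)        # excited state after the polarizer acts
expectation = sp.trigsimp((phi_L.T*P*phi_L + phi_T.T*P*phi_T)[0])

print(proj.T)        # expected: Matrix([[0, 0, A_L*cos(alpha) - A_T*sin(alpha)]])
print(expectation)   # expected: A_L**2*cos(alpha)**2 + A_T**2*sin(alpha)**2
```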
The physical meaning of ket, ׀z > , is that the excitation of plasmon resonance is along the z axis for both modes. ALcosα and ATsinα are absorption efficiency for the longitudinal and transverse modes. If α is 0, which means the AuNRs are parallel to the z axis, the second term will be zero. The only longitudinal mode is excited, and the absorption efficiency reaches maximum, AL, indicating the highest intensity for the longitudinal mode. Deriving expectation value of excitation Bearing this in mind, we show, in the following, how the bra-ket notation helps to describe the selective excitation of plasmon resonance of nanostructures under linearly polarized light. The expectation value of the polarization operator, P, for an orientational state, \(\varPhi\), of cAuNRs is the sum of two terms, transverse and longitudinal modes and can be described mathematically as: $$< A(\alpha ,\theta ) > = \, < {A}_{\mathrm{L}}(\alpha ,\theta ) > + < {A}_{\mathrm{T}}(\alpha ,\theta ) > \\ = \, < \varPhi _L\left| {\mathrm{P}} \right|\varPhi _{\mathrm{L}} > + < \varPhi _{\mathrm{T}}\left| {\mathrm{P}} \right|\varPhi _{\mathrm{T}} > \\ = \, < \varPhi _{\mathrm{L}}|{A}_{\mathrm{L}}{\mathrm{cos}}\alpha |z > + < \varPhi _{\mathrm{T}}| - {A}_{\mathrm{T}}{\mathrm{sin}}\alpha |z > \\ = \, \left( {{A}_{\mathrm{L}}{\mathrm{sin}}\alpha {\mathrm{cos}}\theta < x{\mathrm{|}} + {A}_{\mathrm{L}}{\mathrm{sin}}\alpha {\mathrm{sin}}\theta < y| + {A}_{\mathrm{L}}{\mathrm{cos}}\alpha < z|} \right){A}_{\mathrm{L}}{\mathrm{cos}}\alpha |z > \\ + \left( {{A}_{\mathrm{T}}{\mathrm{cos}}\alpha {\mathrm{sin}}\theta < x} \right. \left. { \, + \, {A}_{\mathrm{T}}{\mathrm{cos}}\alpha {\mathrm{cos}}\theta < y| - {A}_{\mathrm{T}}{\mathrm{sin}}\alpha < z|} \right)\left( { - {A}_{\mathrm{T}}{\mathrm{sin}}\alpha |z > } \right)\\ = \, {A}^2_{\mathrm{L}}{{\mathrm{cos}}^2\alpha + {A}_{\mathrm{T}}^2}{\mathrm{sin}}^2\alpha$$ The expectation value of absorption, or experimental measurable, is solely dependent on the azimuthal angle, α. IL and IT represent the maximum intensity of longitudinal and transverse modes. For α ϵ [0o, 90o], the expectation value of plasmonic excitation of either transverse or longitudinal mode intrinsically follows: $$< A\left( {\alpha ,\theta } \right) > + < A\left( {90^o - \alpha ,\theta } \right) > = 2 < A(45^o,\theta ) > $$ This derived symmetry relation helps to deduce the expectation of plasmonic excitations. Simulation of excited states under different orientation angles The simulated spectra of cAuNRs under various orientations are computed by the finite-element frequency-domain method (Comsol Multiphysics). To explicitly model component diversity and the structural complexity, morphologies and size are all derived from TEM images. The shape of initial AuNRs is derived as a flat rod terminated with two half ellipsoids (Supplementary Fig. 5) because the morphology of cAuNRs observed from TEM images is neither perfect rod nor an ideal ellipsoid. Basically, the two ellipsoids on the ends can refine the sharpness and make the model more feasible. To create the concavity in the model, we removed another cutting rod with a radius of 10 nm was built atop the AuNRs, and the overlay of the two. As shown in Supplementary Fig. 5, the sharp edge around the concave was further rounded by another rod with a radius of 50 nm to mimic the smooth surface of cAuNRs as evidenced by TEM images in Supplementary Fig. 2d. Magnetic nanorod was modeled as a cylinder with a large aspect ratio and two spherical ends. 
The overall size is 110 nm in length and 20 nm in width. The material domains were defined separately to endow them basic physical properties, like relative permittivity or refractive index. The interpolation functions for the complex refractive index of gold is taken from the Optical Materials Database. The refractive index of the surrounding is set as 1.5 to mimic the presence of the polymer shell. A plane electromagnetic wave propagates along the x axis, and is polarized in the z axis. The wavelength of the electromagnetic wave was swept from 400 nm to 1000 nm with a 10-nm step. The absorption cross-section was computed by integrating power loss density over the volume of Au and Fe3O4 nanorods. Scattering cross-section was defined as integral of the dot product of surface normal vector and Poynting vector over the close surface of both Au and Fe3O4 nanorods. The extinction cross-section is simply the sum of the two. After computing the scattering field by solving Maxwell's equations, the electric field norm and Poynting vector at a specific wavelength were plotted to visualize the localized surface plasmon resonance and scattering around the concave structure. The extinction, absorption, and scattering cross-section were plotted in Supplementary Fig. 9a–c, respectively. All data generated and analyzed during this study are included in this published article (and its Supplementary Information files) and are available from the corresponding author on reasonable request. Li, Z. & Yin, Y. Stimuli‐responsive optical nanomaterials. Adv. Mater. 31, 1807061 (2019). Chan, E. P., Walish, J. J., Urbas, A. M. & Thomas, E. L. Mechanochromic photonic gels. Adv. Mater. 25, 3934–3947 (2013). Ge, J. & Yin, Y. Responsive photonic crystals. Angew. Chem. Int. Ed. 50, 1492–1522 (2011). Li, Z. & Yin, Y. Creating chameleon-like smart actuators. Matter 1, 550–551 (2019). Chi, Z. et al. Recent advances in organic mechanofluorochromic materials. Chem. Soc. Rev. 41, 3878–3896 (2012). Han, X., Liu, Y. & Yin, Y. Colorimetric stress memory sensor based on disassembly of gold nanoparticle chains. Nano Lett. 14, 2466–2470 (2014). Kim, Y. et al. Reconfigurable chiroptical nanocomposites with chirality transfer from the macro-to the nanoscale. Nat. Mater. 15, 461–468 (2016). Jiang, N., Zhuo, X. & Wang, J. Active plasmonics: principles, structures, and applications. Chem. Rev. 118, 3054–3099 (2017). Millstone, J. E. et al. Observation of a quadrupole plasmon mode for a colloidal solution of gold nanoprisms. J. Am. Chem. Soc. 127, 5312–5313 (2005). Gramotnev, D. K. & Bozhevolnyi, S. I. Plasmonics beyond the diffraction limit. Nat. Photon 4, 83 (2010). Kristensen, A. et al. Plasmonic colour generation. Nat. Rev. Mater. 2, 1–14 (2016). Grigorenko, A. et al. Nanofabricated media with negative permeability at visible frequencies. Nature 438, 335–338 (2005). Nicholls, L. H. et al. Ultrafast synthesis and switching of light polarization in nonlinear anisotropic metamaterials. Nat. Photon 11, 628–633 (2017). Maity, S. et al. Spatial temperature mapping within polymer nanocomposites undergoing ultrafast photothermal heating via gold nanorods. Nanoscale 6, 15236–15247 (2014). Maity, S., Wu, W.-C., Tracy, J. B., Clarke, L. I. & Bochinski, J. R. Nanoscale steady-state temperature gradients within polymer nanocomposites undergoing continuous-wave photothermal heating from gold nanorods. Nanoscale 9, 11605–11618 (2017). Zhu, X., Vannahme, C., Højlund-Nielsen, E., Mortensen, N. A. & Kristensen, A. Plasmonic colour laser printing. Nat. 
Nanotechnol. 11, 325 (2016). Wurtz, G. A. et al. Designed ultrafast optical nonlinearity in a plasmonic nanorod metamaterial enhanced by nonlocality. Nat. Nanotechnol. 6, 107 (2011). Ellenbogen, T., Seo, K. & Crozier, K. B. Chromatic plasmonic polarizers for active visible color filtering and polarimetry. Nano Lett. 12, 1026–1031 (2012). Shen, Y. et al. Plasmonic gold mushroom arrays with refractive index sensing figures of merit approaching the theoretical limit. Nat. Commun. 4, 1–9 (2013). Shafiei, F. et al. Plasmonic nano-protractor based on polarization spectro-tomography. Nat. Photon 7, 367 (2013). Shafiei, F. et al. A subwavelength plasmonic metamolecule exhibiting magnetic-based optical Fano resonance. Nat. Nanotechnol. 8, 95 (2013). Yang, S. et al. Feedback-driven self-assembly of symmetry-breaking optical metamaterials in solution. Nat. Nanotechnol. 9, 1002–1006 (2014). Liu, Q. et al. Self-alignment of plasmonic gold nanorods in reconfigurable anisotropic fluids for tunable bulk metamaterial applications. Nano Lett. 10, 1347–1353 (2010). Liu, Q., Yuan, Y. & Smalyukh, I. I. Electrically and optically tunable plasmonic guest–host liquid crystals with long-range ordered nanoparticles. Nano Lett. 14, 4071–4077 (2014). Zhang, Y., Liu, Q., Mundoor, H., Yuan, Y. & Smalyukh, I. I. Metal nanoparticle dispersion, alignment, and assembly in nematic liquid crystals for applications in switchable plasmonic color filters and E-polarizers. ACS Nano 9, 3097–3108 (2015). Franklin, D. et al. Polarization-independent actively tunable colour generation on imprinted plasmonic surfaces. Nat. Commun. 6, 1–8 (2015). Dirix, Y., Bastiaansen, C., Caseri, W. & Smith, P. Oriented pearl‐necklace arrays of metallic nanoparticles in polymers: a new route toward polarization‐dependent color filters. Adv. Mater. 11, 223–227 (1999). Wilson, O., Wilson, G. J. & Mulvaney, P. Laser writing in polarized silver nanorod films. Adv. Mater. 14, 1000–1004 (2002). de León, A. G. et al. Method for fabricating pixelated, multicolor polarizing films. Appl. Opt. 39, 4847–4851 (2000). Wang, X. et al. Anisotropically shaped magnetic/plasmonic nanocomposites for information encryption and magnetic-field-direction sensing. Research 2018, 7527825 (2018). Jung, I., Ih, S., Yoo, H., Hong, S. & Park, S. Fourier transform surface plasmon resonance of nanodisks embedded in magnetic nanorods. Nano Lett. 18, 1984–1992 (2018). Jung, I. et al. Fourier transform surface plasmon resonance (FTSPR) with gyromagnetic plasmonic nanorods. Angew. Chem. 130, 1859–1863 (2018). Li, Z., Yang, F. & Yin, Y. Smart materials by nanoscale magnetic assembly. Adv. Funct. Mater. 30, 1903467 (2019). Wang, M. & Yin, Y. Magnetically responsive nanostructures with tunable optical properties. J. Am. Chem. Soc. 138, 6315–6323 (2016). Wang, M. et al. Magnetic tuning of plasmonic excitation of gold nanorods. J. Am. Chem. Soc. 135, 15302–15305 (2013). Zhang, M. et al. High-strength magnetically switchable plasmonic nanorods assembled from a binary nanocrystal mixture. Nat. Nanotechnol. 12, 228 (2017). Wang, M., He, L., Xu, W., Wang, X. & Yin, Y. Magnetic assembly and field‐tuning of ellipsoidal‐nanoparticle‐based colloidal photonic crystals. Angew. Chem. Int. Ed. 54, 7077–7081 (2015). Piao, Y. et al. Wrap–bake–peel process for nanostructural transformation from β-FeOOH nanorods to biocompatible iron oxide nanocapsules. Nat. Mater. 7, 242–247 (2008).
The authors are grateful for the financial support from the U.S. National Science Foundation (CHE-1808788). Yin also thanks the UCR Academic Senate for providing the Committee on Research (CoR) Grant. Acknowledgment is also made to the Central Facility for Advanced Microscopy and Microanalysis at UCR for help with TEM analysis. Department of Chemistry, University of California, Riverside, CA, 92521, USA: Zhiwei Li, Jianbo Jin, Fan Yang, Ningning Song & Yadong Yin. Y.Y. proposed this work and the experimental design. J.J. and Z.L. developed 3D model and ran the numerical simulations. Z.L. proposed the numerical solution based on bra-ket notation. F.Y. acquired the SEM images and helped with taking optical pictures. N.S. characterized and analyzed the magnetic properties. Z.L. and Y.Y. prepared the paper. All authors contributed to the discussion of the results. Correspondence to Yadong Yin. Peer review information Nature Communications thanks the anonymous reviewer(s) for their contribution to the peer review of this work. Li, Z., Jin, J., Yang, F. et al. Coupling magnetic and plasmonic anisotropy in hybrid nanorods for mechanochromic responses. Nat Commun 11, 2883 (2020). https://doi.org/10.1038/s41467-020-16678-8
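As a side note on the simulation methods described above: the three optical cross-sections are related by simple integrals over field-solver outputs. The sketch below is a generic illustration, not the model used in the paper; the arrays power_loss_density, cell_volumes, poynting_flux_normal, facet_areas, and the incident irradiance are all hypothetical quantities assumed to come from any frequency-domain solver.

```python
# Generic illustration (not the model used in the paper): how the absorption,
# scattering, and extinction cross-sections described in the Methods relate.
# The arrays are hypothetical outputs of a frequency-domain field solver:
# power-loss density per mesh cell, cell volumes, and the scattered Poynting
# flux (already dotted with the outward normal) per surface facet.
import numpy as np

def absorption_cross_section(power_loss_density, cell_volumes, incident_irradiance):
    """Volume integral of the resistive power loss, normalized by incident irradiance (m^2)."""
    absorbed_power = np.sum(power_loss_density * cell_volumes)      # W
    return absorbed_power / incident_irradiance                     # m^2

def scattering_cross_section(poynting_flux_normal, facet_areas, incident_irradiance):
    """Closed-surface integral of the scattered Poynting flux, normalized by incident irradiance (m^2)."""
    scattered_power = np.sum(poynting_flux_normal * facet_areas)    # W
    return scattered_power / incident_irradiance                    # m^2

def extinction_cross_section(c_abs, c_sca):
    """Extinction is simply the sum of absorption and scattering."""
    return c_abs + c_sca
```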
CommonCrawl
Hyperdense Object
If a planet has a core of super-dense material, how might that affect the geological composition of that planet? Specifically: imagine a lunar-sized planet, with earth-like geology and density. If the center of that planet is a very small mass of super-dense unobtanium that cannot collapse in on itself (think neutron star density; this super-dense material is about 97.3% of Earth's mass compressed into the size of a shower stall), how would that affect the surrounding layers of this planet? Would the normally earth-dense layers just flatten against the core and the rest of it become super-dense as well? Would this planet be able to hold liquid water and an atmosphere? reality-check physics gravity geology TCAT117
You'd have no atmosphere, and almost no liquid water. Right, the atmosphere. The gravity of the planet will be quite strange, considering pretty much all of the gravitational material will be at the very centre. You'll have to ask someone who knows more about those things than me, though. My concern would be that, without a large rotating core, you have no magnetic field protecting the planet. That immediately puts you somewhere in the vicinity of a Mars-like scenario, regardless of anything else. Solar wind will strip away your atmosphere, and simple mathematics will show that you don't have that much to lose. The volume of your planet is small, so it won't take long, astronomically speaking, for you to lose your lighter elements. This will drop the temperature and pressure, losing any water that you have on the surface. Secondly, you specified that you have a lunar-sized planet, but 97.3% of the mass is at the centre, in a shower-stall-sized block. Compared to even the Moon, that's absolutely tiny. Even if we assume a generous stall of maybe $3m^3$, that means you have practically all of your mass in...wait for it...a fraction of about $1.37\times 10^{-19}$ of your planet's volume. That's negligible. The remaining 2.7% of Earth's mass is about $1.6\times10^{23} kg$, which means that the density of everything other than your core is, on average, about $7363kg/m^3$. That doesn't sound too bad; it's about the average density of cast iron, and it's more than Earth, which is about $5515kg/m^3$. Consider that this planet is way smaller though. Again, I don't know what this would do to a planet, but it would be weird; it obviously doesn't happen in nature. The first thing that comes to mind is gravity: this planet would have, at the surface, a gravitational pull of $132m/s^2$. That's about $13.5g$, so any human, and most animals, would be dead at the surface. No large organisms can survive that. Most plants on Earth can't either. On the other hand, it might help you keep an atmosphere for longer. Lastly, you specified that the unobtanium magically doesn't collapse in, but you didn't say anything about the other way around. Neutron star matter is kept that way through truly enormous gravity, and our planet doesn't come close to that. Without supergravity to keep it in check, the core of the planet will explode. A teaspoon of neutron star matter will produce about $10^{27}J$ of energy as it decays; you have way more. Your planet will last maybe a few minutes before it's vapourised. With the amount of matter you have, it'll produce the approximate energy output of the Sun for 14 straight days. There'll be nothing left of your planet and anything surrounding it.
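The figures quoted in this answer (roughly 132 m/s² at the surface, a cast-iron-like shell density, and a core density near 2e24 kg/m³) follow from standard constants; a minimal back-of-the-envelope sketch to reproduce them, with the 3 m³ stall volume taken from the question:

```python
# Back-of-the-envelope check of the figures quoted above: ~97.3% of Earth's
# mass packed into a 3 m^3 "shower stall" at the centre of a Moon-sized body.
import math

G       = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24    # kg
R_MOON  = 1.737e6     # m
G_EARTH = 9.81        # m/s^2, for converting to "g"

core_mass   = 0.973 * M_EARTH
shell_mass  = 0.027 * M_EARTH                    # the ordinary rock that is left
moon_volume = 4.0 / 3.0 * math.pi * R_MOON**3

surface_gravity = G * M_EARTH / R_MOON**2        # ~132 m/s^2
shell_density   = shell_mass / moon_volume       # ~7.3e3 kg/m^3, cast-iron-like
core_density    = core_mass / 3.0                # ~2e24 kg/m^3

print(f"surface gravity ~ {surface_gravity:.0f} m/s^2 ({surface_gravity / G_EARTH:.1f} g)")
print(f"mean shell density ~ {shell_density:.0f} kg/m^3")
print(f"core density ~ {core_density:.1e} kg/m^3")
```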
Serenical
If you have a moon that stays between the planet and its sun (period of moon orbiting planet is same as planet orbiting star), the moon could have a magnetic field and atmosphere to shield the planet. If the planet has internal (geothermal) heat, then you might still have liquid water. – SRM Jun 9 '18 at 4:06
Maybe. I don't know how this would work as a moon though. Moons can get a bit bigger than Earth's, at least in our solar system, but an Earth-mass object might behave differently. – Serenical Jun 9 '18 at 7:34
Doesn't have to be massive, just needs to block solar wind. A really undense moon with a decent mag field. – SRM Jun 9 '18 at 8:13
No, this planet in the question would be massive. I'm not too well versed in astrophysics but I don't think a celestial body of Earth's weight is too common as a moon. Besides, this would produce almost no magnetic field, I think. Then there's the fact that it would vapourise after maybe 5 minutes. – Serenical Jun 9 '18 at 11:20
Gravity doesn't just depend on overall mass: it depends on distance as well. A small object with the same mass as a large object will have a much higher surface gravity. If I recall, the formula for surface acceleration is $GM/{r^2}$, where G is the gravitational constant, M is the planetary mass, and r is the radius of the planet. So when r gets a lot smaller, the gravity goes up a lot. With most of the mass at the centre, the uneven distribution won't matter so much, but the planet is weirdly dense. Earth is the densest in our solar system and you have that beaten by a lot. – Serenical Jun 9 '18 at 22:54
I loved Serenical's answer and upvoted it. You should too. In addition to it, think about this: A lunar-sized object with earth-like gravity would experience: Serious gravitic shear along both the curvature of its surface and radially from the center. In other words, anything bigger than an amoeba trying to live on it would likely be torn apart. I don't think the non-unobtainium mass will collapse (I could be wrong about this; if I am, it will be a glorious little star for a brief period of time), but it will be superheated magma. Do you remember what happens when ice skaters pull their arms in? Yup, they spin faster. The rotational speed of the lunar mass will be tremendous. With all the forces at play, I should think the non-unobtainium mass (all of it) would be a boiling liquid. No solid surface. In fact, it might vaporize into a gas dwarf planet. The problem is that it's a whole lotta mass in a very small space. It's not conducive to stability.
Gravitic Shear: Compared to the force of gravity, the radius of the Earth is large enough that you can conceptualize the surface as a flat plane. As people walk along that plane, the force of gravity is statistically the same everywhere your feet, torso, and head pass. But you're seriously reducing the radius. Now the force is concentrated closer to the ground than it is higher up. I'd have to work the math to prove it (I apologize that I don't have the time), but walking would have a minor effect; falling down and getting up would have a major effect. Buildings would be unable to withstand the shear (in my opinion). By "shear" I mean substantial differences in gravitational force between two nearby regions.
Rotation: As an object's mass is drawn closer to its center, it spins faster.
Here's a passable example: think of a cylinder drawn around the skater. It begins with radius A (the outer extent of her leg). As she pulls her leg above her head, the radius shrinks to radius B and she spins faster. This is an aspect of the conservation of angular momentum. So, the lunar-sized object (smaller radius than the Earth) has the same mass as the Earth. It must spin faster. A lot faster. JBH
Thanks, JBH, can you give some more background on both parts of your answer? Specifically, how this object's rotation would be affected, and what you mean by gravitic shear. Thanks again for your time 👍 – user49466 Jun 9 '18 at 13:26
I edited my answer. – JBH Jun 13 '18 at 0:26
Let's run some numbers. Earth's mass compressed down to 3 m^3 is roughly 2e24 kg/m^3. That's 10 million times more dense than a neutron star. This would be made up of degenerate quark matter. A neutron star forms when a star that does not have enough mass to become a stellar black hole collapses. It becomes so dense its gravity overcomes the electromagnetic force keeping electrons and protons apart. They fuse together to form neutrons. The neutrons can't be compressed further because the strong nuclear force becomes repulsive at 0.7 fm. But you're 10 million times more dense than that. At your densities it's theorized that even neutrons would break down into quarks and gluons. It would also be very, very, very hot, on the order of 1e12 kelvins. With so little mass (and thus gravity) to hold itself together it would immediately blow itself apart. Let's assume none of that happens, it isn't at some cosmologically high temperature, and it doesn't blow itself apart. It's just magically really dense. There's still gravity to worry about. What's happening to the normal matter adjacent to this infernal shower stall? Newtonian gravity gives a pull of $GM/r^2$ on each kilogram of matter. $M$ is Earth's mass, 6e24 kg. $r$ is the distance from the center of mass, about 1 meter. $G$ is the gravitational constant, 6.7e−11 $\frac{N m^2}{kg^2}$. Put that together and we get 402,000,000,000,000 N on every kilogram of adjacent rock, or 4e14 N. How much force is that? It's the weight of 100 million blue whales. It's 10 million times more than the thrust of a Saturn V rocket. It's a lot. Anything adjacent to the core will be immediately flattened onto the core. Anything above that will collapse. Your planet collapses into a thin layer of degenerate matter on the surface of the shower stall.
Tidal forces
It gets worse. Gravity gets weaker with the square of the distance from the center of mass. At 2 meters from the worst shower stall in the universe, the gravitational attraction is 4 times less. At 4 meters it's 16 times less. At 8 meters it's 64 times less. This extreme force gradient is known as a tidal force and it tears things apart. Imagine an 8 meter high rock 8 meters from the core. The end closest to the core is 8 meters away and will be experiencing $\frac{4e14N}{64}$ or 6e12 N. The further end is 16 meters away and will be experiencing $\frac{4e14N}{256}$ or 1.5e12 N. The rock will be torn apart by 4.5e12 N, the force of 100,000 Saturn V rockets.
It explodes anyway
All this normal matter falling onto the dense unobtanium core and compressing will produce more energy than I care to calculate. Let's calculate it. Let's say a hunk of rock falls towards the core in this intense gravitational field. Since we're dealing with such a crazy large gravity field we'll use special relativity. You can see the equation derived here.
$$ v = c \sqrt{1 - \left[ \frac{1}{\frac{GM}{c^2} \left( \frac{1}{r_\mathrm{final}} - \frac{1}{r_\mathrm{initial}} \right) + 1} \right]^2}$$ $M$ is the mass of the shower stall, 6e24 kg. $r_\mathrm{initial}$ is the starting height above the center of mass. $r_\mathrm{final}$ is the height above the center of mass when it lands, 1 m. $c$ is the speed of light, 3e8 m/s. $v$ is its velocity when it hits the surface of the shower stall. Running the numbers, falling from just 2 meters it's going 20,000,000 m/s, or about 6.7% of the speed of light. From 4 meters it's going 8%. From 10 meters it's going 8.9%. From 100 meters it's going 9.4%, and it doesn't get much faster above that. We can calculate its kinetic energy with the special relativity formulas... $$K = (\gamma - 1) m c^2$$ $$\gamma = \frac{1}{\sqrt{1 - \frac{v^2}{c^2}}}$$ 1 kg at 0.094c will impact the core with about 4e14 J, or about 100 kilotons of TNT. So the rock surrounding the core briefly pancakes onto the core before blowing the planet apart. Schwern
I've got a bunch of problems with what's come so far: Neutron star material going boom: agree. However, it was only specified as unobtainium of that density; it might not go boom. The planet having Earth-like geology: nope, it's not going to happen. The problem is you don't have enough normal matter for radioactivity to keep the core molten. No plate tectonics. The planet erodes to basically flat and stays there. Atmosphere: nothing says unobtainium can't generate a magnetic field. However, it's not going to have an atmosphere because a world of that size can't hold onto one. It's not surface gravity that counts in holding an atmosphere, it's escape velocity. Escape velocity is determined both by surface gravity and by how fast it drops off with distance. Your Luna-sized object has nowhere near the total gravity of Earth. Gravitational shear: I'm baffled here. First, any creature that really lived in such an environment would be appropriately curved; the shear would be built in and cause no abnormal force on them. Second, while I can see there being such an effect if you were walking around on the core, this is a Luna-sized world. The horizon is a lot closer than on Earth but it's still basically flat. Loren Pechtel
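Schwern's infall speeds and impact energies above can be reproduced directly from the quoted formulas; the sketch below is purely illustrative and uses the same G, M, and c values assumed in that answer:

```python
# Reproduces the infall speeds and impact energies quoted in Schwern's answer:
# a test mass falling from rest at r_initial down to r_final = 1 m above an
# Earth-mass point core, using the velocity formula displayed above.
import math

G = 6.674e-11   # m^3 kg^-1 s^-2
M = 6e24        # kg, mass of the "shower stall" core
c = 3e8         # m/s

def impact_speed(r_initial, r_final=1.0):
    """Speed at r_final for a fall from rest at r_initial."""
    x = G * M / c**2 * (1.0 / r_final - 1.0 / r_initial) + 1.0
    return c * math.sqrt(1.0 - (1.0 / x)**2)

def kinetic_energy(v, m=1.0):
    """Relativistic kinetic energy K = (gamma - 1) m c^2."""
    gamma = 1.0 / math.sqrt(1.0 - (v / c)**2)
    return (gamma - 1.0) * m * c**2

for r0 in (2, 4, 10, 100):
    v = impact_speed(r0)
    kt_tnt = kinetic_energy(v) / 4.184e12        # 1 kt TNT ~ 4.184e12 J
    print(f"fall from {r0:>3} m: v ~ {v / c:.3f} c, 1 kg hits with ~ {kt_tnt:.0f} kt TNT")
```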
CommonCrawl
Black hole mergers from quadruples G Fragione, B Kocsis With the hundreds of merging binary black hole (BH) signals expected to be detected by LIGO/Virgo, LISA and other instruments in the next few years, the modeling of astrophysical channels that lead to the formation of compact-object binaries has become of fundamental importance. In this paper, we carry out a systematic statistical study of quadruple BHs consisting of two binaries in orbit around their center of mass, by means of high-precision direct $N$-body simulations including Post-Newtonian (PN) terms up to 2.5PN order. We found that most merging systems have high initial inclinations and the distributions peak at $\sim 90^\circ$ as for triples, but with a more prominent broad distribution tail. We show that BHs merging through this channel have a significant eccentricity in the LIGO band, typically much larger than BHs merging in isolated binaries and in binaries ejected from star clusters, but comparable to that of merging binaries formed via the GW capture scenario in clusters, mergers in hierarchical triples, or BH binaries orbiting intermediate-mass black holes in star clusters. We show that the merger fraction can be up to $\sim 3$--$4\times$ higher for quadruples than for triples. Thus even if the number of quadruples is $20\%$--$25\%$ of the number of triples, the quadruple scenario can represent an important contribution to the events observed by LIGO/VIRGO.
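The 2.5PN terms mentioned in this abstract are the radiation-reaction terms that actually drive such binaries to merge; their orbit-averaged effect is captured by the classic Peters (1964) equations. The sketch below is a generic illustration of that machinery with made-up binary parameters; it is not the authors' N-body code.

```python
# Orbit-averaged gravitational-wave decay of an isolated binary (Peters 1964),
# the secular counterpart of the 2.5PN radiation-reaction terms used in the
# direct N-body integrations described above. Binary parameters are made up.
import math

G, c, M_SUN, AU, YR = 6.674e-11, 2.998e8, 1.989e30, 1.496e11, 3.156e7

def peters_rhs(a, e, m1, m2):
    """da/dt and de/dt in SI units for semi-major axis a and eccentricity e."""
    pre = G**3 * m1 * m2 * (m1 + m2) / c**5
    dadt = -(64.0 / 5.0) * pre / (a**3 * (1 - e**2)**3.5) * (1 + 73.0/24.0*e**2 + 37.0/96.0*e**4)
    dedt = -(304.0 / 15.0) * e * pre / (a**4 * (1 - e**2)**2.5) * (1 + 121.0/304.0*e**2)
    return dadt, dedt

# Crude adaptive Euler evolution of a 30 + 30 Msun binary at 0.1 AU with e = 0.9
a, e, t = 0.1 * AU, 0.9, 0.0
m1 = m2 = 30 * M_SUN
while a > 1e7:                                   # stop when a ~ 10,000 km
    dadt, dedt = peters_rhs(a, e, m1, m2)
    dt = 0.01 * abs(a / dadt)                    # shrink a by ~1% per step
    a, e, t = a + dadt * dt, max(e + dedt * dt, 0.0), t + dt
print(f"merges after ~{t / YR:.2e} yr with residual eccentricity e ~ {e:.3f}")
```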
CommonCrawl
Resurchify: Check the Impact Index of thousands of worldwide journals.
What is Impact Factor?
The impact factor (IF), also called the Journal Impact Factor (JIF), is a metric used to evaluate the importance of a journal. It is determined by calculating the average number of citations received by the articles published in that journal within the last few years. It was proposed by the founder of the Institute for Scientific Information, Eugene Eli Garfield, and has been calculated regularly since 1975 for all the journals registered in the Journal Citation Reports (JCR). For a given year, a journal's IF is computed as the number of citations received throughout that year by the scientific articles or papers published in that journal during the two previous years, divided by the number of articles published in that journal in those two previous years. It helps to measure the relative importance of journals within particular areas and to compare journals within the same areas. The higher the JIF, the better the journal is ranked. Typically, journals with more review articles or papers are able to achieve a higher JIF.
How is it calculated?
The calculation of the IF is based on a period of two years: divide the number of times articles published in that journal were cited by the total number of citable articles. $IF_x = \frac{Citations_{x-1} + Citations_{x-2}}{Publications_{x-1} + Publications_{x-2}}$ For example, to find the impact factor of an "ABC" journal for the year 2019, we would compute: $IF_{2019}= \frac{Citations_{2018} + Citations_{2017}}{Publications_{2018} + Publications_{2017}}$ For example, if the journal has the following citations and publications values: Citations2018 = 80, Citations2017 = 60, Publications2018 = 30, Publications2017 = 40, then IF2019 = (80 + 60) / (30 + 40) = 2. This value of the IF indicates that, on average, the articles of the "ABC" journal published in the years 2017 and 2018 have received approximately 2 (two) citations each in the year 2019. It is important to note here that the 2018 IF is published in the year 2019; it cannot be computed until all publications of 2018 are processed by the indexing agency.
How do I find the impact factor of a journal?
The impact factor of a journal is calculated by dividing the number of current-year citations to the source items published in that journal during the previous two years; it is a ratio between citations and recent citable items published. You can refer either to the Journal Citation Reports (JCR) or to the Scopus® database to find the impact factor of a journal. The data from the Scopus® database can also be found at resurchify.com. You can find the impact factor of thousands of journals on this website. To search for the impact factor of any journal or conference, you can query by its title or ISSN. You can also query using the publisher's name or by subject category in the search box and then select the required journal. You can also check a detailed analysis (like the five-year average, the highest impact in the last five years, etc.) of a particular item by clicking on it. All these details will be helpful when you want to select a journal or assess its quality.
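The two-year calculation described above amounts to a one-line function; a minimal sketch using the worked example's (hypothetical) numbers for journal "ABC":

```python
# Minimal sketch of the two-year impact factor calculation described above,
# using the worked example's hypothetical numbers for journal "ABC".
def impact_factor(cites_prev1, cites_prev2, pubs_prev1, pubs_prev2):
    """IF for year X: citations received in X to items published in X-1 and X-2,
    divided by the number of citable items published in X-1 and X-2."""
    return (cites_prev1 + cites_prev2) / (pubs_prev1 + pubs_prev2)

# IF_2019 = (80 + 60) / (30 + 40) = 2.0
print(impact_factor(80, 60, 30, 40))
```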
Importance of Impact Factor
The impact factor gives an approximate idea of how prestigious a particular journal is in its field. The higher the IF of the journal, the better it is ranked. By using this metric you can evaluate and compare journals in similar subject categories to identify their importance.
What is a high impact factor?
The impact factor (IF) is a measure of the yearly average number of citations to recent articles published in a journal and is often used to compare journals of the same category: the higher the impact factor, the higher the ranking of the journal. But do not take this number as an absolute measure; it should only be used to compare journals within a single discipline. For example, an Artificial Intelligence journal's impact factor cannot be compared with that of a journal from the Management domain. One illustration comes from the year 2017: the Journal Citation Reports (JCR) database tracked impact factors for more than 12,000 journals, and only about 1.9% of the journals had a 2017 impact factor of 10 or higher, while the top 5% of journals had impact factors approximately equal to or greater than 6.
Data Source and Statistical Analysis
We perform various analyses on the data produced by SCImago. The SCImago Journal & Country Rank is an openly accessible portal which covers the journals and scientific indicators generated from the data present in the Scopus® database (Elsevier B.V.). The Scopus database includes information on more than 15,000 journals from different fields from around 4,000 publishers and also covers around 1,000 open-access journals. We perform statistical impact analysis for various journals and conferences to evaluate their impact trends. You can find here the average impact index for the last three and five years. We also present the highest (best) and lowest (worst) impact for the last few years. To show the variation of the impact data over the years, we compute and show the standard deviation. These metrics help you to better correlate and judge the impact of any particular journal or conference. We perform all the analysis on the Cites/Doc. (2 Year) metric, also known as the impact index.
Categories and Areas Covered
We cover the impact and detailed analysis of almost all the major areas and disciplines.
We covered the following categories: Endocrinology, Diabetes and Metabolism Environmental Chemistry Geology Otorhinolaryngology Small Animals Oncology (nursing) Pharmacy Pediatrics Ecology, Evolution, Behavior and Systematics Neuropsychology and Physiological Psychology Management of Technology and Innovation Pulmonary and Respiratory Medicine Mathematics (miscellaneous) Emergency Nursing Public Administration Complementary and Manual Therapy Computer Vision and Pattern Recognition Pharmaceutical Science Gerontology Health Professions (miscellaneous) Sociology and Political Science Occupational Therapy Embryology Epidemiology Water Science and Technology Drug Discovery Food Animals Anthropology Chemical Engineering (miscellaneous) Issues, Ethics and Legal Aspects Atomic and Molecular Physics, and Optics Biochemistry Visual Arts and Performing Arts Internal Medicine Artificial Intelligence Cognitive Neuroscience Critical Care Nursing Review and Exam Preparation Paleontology Radiological and Ultrasound Technology Social Sciences (miscellaneous) Astronomy and Astrophysics Pharmacology (nursing) Management Information Systems Cardiology and Cardiovascular Medicine Developmental Biology Mechanical Engineering Cultural Studies Complementary and Alternative Medicine Nature and Landscape Conservation Hardware and Architecture Analysis Aging Pharmacology Radiation Neurology (clinical) Atmospheric Science Cancer Research Analytical Chemistry Physics and Astronomy (miscellaneous) Arts and Humanities (miscellaneous) Ophthalmology Biological Psychiatry Emergency Medical Services Agricultural and Biological Sciences (miscellaneous) Equine Developmental and Educational Psychology Insect Science Global and Planetary Change Fuel Technology Nurse Assisting Urban Studies Rheumatology Nuclear and High Energy Physics Environmental Science (miscellaneous) Applied Psychology Architecture Assessment and Diagnosis Advanced and Specialized Nursing Rehabilitation Safety Research Health, Toxicology and Mutagenesis Leadership and Management E-learning Surgery Human Factors and Ergonomics Classics Nursing (miscellaneous) Health Information Management Cellular and Molecular Neuroscience Chiropractics Immunology and Microbiology (miscellaneous) Respiratory Care Earth and Planetary Sciences (miscellaneous) Toxicology Electronic, Optical and Magnetic Materials Behavioral Neuroscience Safety, Risk, Reliability and Quality LPN and LVN Dentistry (miscellaneous) Dental Hygiene Structural Biology Statistical and Nonlinear Physics Political Science and International Relations Virology Materials Chemistry Electrical and Electronic Engineering Chemistry (miscellaneous) Aerospace Engineering Strategy and Management Marketing History Environmental Engineering Veterinary (miscellaneous) Physical and Theoretical Chemistry Reviews and References (medical) Library and Information Sciences Orthopedics and Sports Medicine Transportation Anesthesiology and Pain Medicine Energy (miscellaneous) Process Chemistry and Technology Neurology Maternity and Midwifery Clinical Biochemistry Genetics (clinical) Materials Science (miscellaneous) Forestry Education Nanoscience and Nanotechnology Statistics and Probability Geometry and Topology Archeology Signal Processing Plant Science Pollution Pediatrics, Perinatology and Child Health Medicine (miscellaneous) Colloid and Surface Chemistry Business, Management and Accounting (miscellaneous) Management, Monitoring, Policy and Law Nutrition and Dietetics Logic Clinical Psychology Periodontics Oral 
Surgery Molecular Medicine Genetics Condensed Matter Physics Economic Geology Soil Science Biomaterials Neuroscience (miscellaneous) Community and Home Care Animal Science and Zoology Modeling and Simulation Life-span and Life-course Studies Reproductive Medicine Health Informatics Family Practice Mathematical Physics Information Systems Anatomy Medical and Surgical Nursing Computational Mathematics Energy Engineering and Power Technology Sports Science Geography, Planning and Development Nuclear Energy and Engineering Literature and Literary Theory Acoustics and Ultrasonics Horticulture Surfaces and Interfaces Aquatic Science Urology Information Systems and Management Developmental Neuroscience Care Planning Media Technology Electrochemistry Management Science and Operations Research Economics, Econometrics and Finance (miscellaneous) Computer Networks and Communications Public Health, Environmental and Occupational Health Polymers and Plastics Drug Guides Communication Biomedical Engineering Speech and Hearing Filtration and Separation Computer Science Applications Hematology Demography Health (social science) Instrumentation Infectious Diseases Psychiatric Mental Health Applied Microbiology and Biotechnology Cell Biology Geotechnical Engineering and Engineering Geology Mechanics of Materials Obstetrics and Gynecology Food Science Radiology, Nuclear Medicine and Imaging Pharmacology (medical) Pathology and Forensic Medicine Numerical Analysis Sensory Systems Immunology and Allergy Applied Mathematics Endocrine and Autonomic Systems Gender Studies Histology Microbiology Dermatology Space and Planetary Science Computer Graphics and Computer-Aided Design Psychiatry and Mental Health Dental Assisting Social Psychology Hepatology Computer Science (miscellaneous) Law Physical Therapy, Sports Therapy and Rehabilitation Agronomy and Crop Science Renewable Energy, Sustainability and the Environment Earth-Surface Processes Accounting Orthodontics Immunology Oncology Philosophy Museology Algebra and Number Theory Ecological Modeling Psychology (miscellaneous) Biotechnology Pharmacology, Toxicology and Pharmaceutics (miscellaneous) Tourism, Leisure and Hospitality Management Statistics, Probability and Uncertainty Religious Studies Computers in Earth Sciences Finance Microbiology (medical) Development Experimental and Cognitive Psychology Language and Linguistics Parasitology Organic Chemistry Ecology Geochemistry and Petrology Archeology (arts and humanities) Research and Theory Decision Sciences (miscellaneous) Fundamentals and Skills Inorganic Chemistry Nephrology Civil and Structural Engineering Human-Computer Interaction Automotive Engineering Building and Construction History and Philosophy of Science Linguistics and Language Engineering (miscellaneous) Molecular Biology Economics and Econometrics Waste Management and Disposal Industrial Relations Chemical Health and Safety Software Oceanography Social Work Computational Theory and Mathematics Health Policy Medical Terminology Business and International Management Physiology (medical) Ceramics and Composites Podiatry Ocean Engineering Conservation Endocrinology Industrial and Manufacturing Engineering Control and Optimization Geriatrics and Gerontology Control and Systems Engineering Spectroscopy Biochemistry (medical) Geophysics Medical Laboratory Technology Metals and Alloys Organizational Behavior and Human Resource Management Biophysics Fluid Flow and Transfer Processes Multidisciplinary Transplantation Optometry Biochemistry, Genetics and 
Molecular Biology (miscellaneous) Music Catalysis Critical Care and Intensive Care Medicine Stratigraphy Bioengineering Emergency Medicine Gastroenterology Surfaces, Coatings and Films Physiology Discrete Mathematics and Combinatorics Medical Assisting and Transcription Computational Mechanics Theoretical Computer Science
Years Covered
So far, we have covered our impact factor analysis for the years 2018, 2017, 2016, 2015, 2014, and 2013. We will soon update all our analysis for the year 2019 as well, once the data is available.
Top Journals and Conferences in the Related Fields
We also provide expert suggestions for top journals and conferences in related fields or in the same categories. This will help you to find other top journal and conference opportunities where you can submit your research paper or article to showcase the quality of your work.
Importance of Choosing Journals with a High Impact Factor, and Important Suggestions
It is always advised to submit your articles to a journal with a high impact factor in your field, to show the credibility and worthiness of your research articles and your work. It is observed that most reviewers judge the quality of your articles partly by the journals referenced in them. Therefore, it is always recommended to cite or refer to articles from the top journals (which are basically the ones having a high impact factor). Quality of Journal Matters! Powered by www.resurchify.com
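The multi-year summary statistics described under "Data Source and Statistical Analysis" above (three- and five-year averages, best and worst values, standard deviation) reduce to a few lines of code; the yearly Cites/Doc. values in this sketch are invented for illustration.

```python
# Illustration of the summary statistics described above, computed from a
# journal's yearly Cites/Doc. (2 year) values. The numbers are made up.
import statistics

cites_per_doc = {2013: 1.8, 2014: 2.1, 2015: 2.0, 2016: 2.4, 2017: 2.6, 2018: 2.9}

years = sorted(cites_per_doc)
last3 = [cites_per_doc[y] for y in years[-3:]]
last5 = [cites_per_doc[y] for y in years[-5:]]

print("3-year average :", round(statistics.mean(last3), 2))
print("5-year average :", round(statistics.mean(last5), 2))
print("highest (best) :", max(cites_per_doc.values()))
print("lowest (worst) :", min(cites_per_doc.values()))
print("std deviation  :", round(statistics.stdev(list(cites_per_doc.values())), 2))
```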
CommonCrawl
viXra.org > All Submission Categories
[33515] viXra:2001.0403 [pdf] submitted on 2020-01-19 18:48:00
Planck Length and Third Amendment to the Rayleigh-Jeans Law
Authors: S. V. Miheev
Comments: 7 Pages. It is shown that the minimum wavelength is approximately equal to the Planck length. To prove the exact equality, it is necessary to find an analytical solution of a certain integral $= 1/(8\pi)^2$. The calculations are based on: a previously obtained dependence of the temperature of zero oscillations on the maximum and minimum wavelengths of standing oscillations in the cavity; the previously calculated fraction of thermal fluctuations in the critical energy density (1/4); and the Stefan-Boltzmann law. It is assumed that thermal and zero-point vibrations are held in the cavity by their own gravitational field, and that the wavelength of these oscillations is quantized. The critical frequency is determined above which corrections arise associated with the quantization of the wavelength of oscillations. Corrections are obtained to the Rayleigh-Jeans law, the formula for the distribution of the energy density of zero-point vibrations over frequencies, the Planck formula, and the Stefan-Boltzmann law. It is shown that corrections that reduce the energy density of zero-point vibrations by 120 orders of magnitude are almost impossible to detect in laboratory experiments with thermal radiation. I apologize for spreading the false information that the minimum wavelength is significantly greater than the Planck length. My attempts to find a solution by adjusting the parameters led to errors.
Category: Quantum Gravity and String Theory
A Distributed Algorithm for Brute Force Password Cracking on n Processors
Authors: Roman Bahadursingh
A password P can be defined as a hash of x symbols. A brute-force password-cracking algorithm will go through every possible combination of symbols of length 1 to x. This form of password cracker takes O(n) time, where n is the number of possible combinations, given by roughly $s^x$ with s the number of symbols available for a password. With multiple processors, having the processors use a more decentralized approach, instead of all checking from the first candidate to the last, can greatly improve the speed of this computation: O(n/2) for two processors, O(n/3) for three processors, and O(n/np) as a generalized formula for np processors.
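A minimal sketch of the kind of keyspace partitioning this abstract describes: worker i of np tests every np-th candidate, so each worker covers roughly n/np of the combinations. The symbol set, hash function, and target below are placeholders, not the paper's actual scheme.

```python
# Generic sketch of splitting a brute-force search across `num_workers`
# processes: worker `worker_id` tests every `num_workers`-th candidate,
# covering roughly n / np of the n possible combinations. The symbol set,
# hash function, and target are placeholders, not the paper's scheme.
import hashlib
from itertools import product

SYMBOLS = "abcdefghijklmnopqrstuvwxyz0123456789"

def crack(target_hash, max_len, worker_id, num_workers):
    index = 0
    for length in range(1, max_len + 1):
        for candidate in product(SYMBOLS, repeat=length):
            if index % num_workers == worker_id:              # this worker's share
                guess = "".join(candidate)
                if hashlib.sha256(guess.encode()).hexdigest() == target_hash:
                    return guess
            index += 1
    return None

# Example: four hypothetical workers (run sequentially here) searching for "ab1"
target = hashlib.sha256(b"ab1").hexdigest()
print([crack(target, 3, i, 4) for i in range(4)])   # exactly one worker finds it
```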
This algorithm also allows for multiple processors of different clock speeds to also crack a password in more optimal time. Category: Data Structures and Algorithms The Hybrid Time Clock as a Function of Gravity Authors: Stephen H. Jarvis Comments: 15 Pages. This paper follows on from the preceding papers [1-15] in detailing how time is a function of gravity using what is termed the "hybrid time clock", in much the same way Einstein used the analogy of a clock in his special relativity theory [16]. The paper here doesn't dispute key known facts of special relativity regarding time and gravity, and of course mass, yet it does highlight new equations relevant to the general question theoretical physics is asking in regard to gravity, namely how it is related to QED (Quantum Electrodynamics). In this paper, "time" is considered to be "what a clock measures" for the lack of a better definition. The hybrid time clock is a concept developed from the hybrid time equation of the preceding papers [1-15], more specifically the most recent paper "Hybrid Time Theory: "Euler's Formula" and the "Phi-Algorithm"" [15], and is implemented here in this paper as a function of gravity. In short, the concept of the hybrid time clock is that of using a construct of measuring time, the concept of time, using a "time-clock" as a standardized construct of measurement for time. Time could indeed be the most mysterious entity of all. It may not even exist. Yet the hybrid time clock is a way to measure the concept of time for the purpose of also understanding the concept of space and associated phenomena of energy and mass and light. This paper will take the concept of the hybrid time clock and associated equation from paper 15 ([15]: p11, eq8) and present its case as "the" integral function of G (gravity) directly associating it to EM (electromagnetism) as the phi-quantum wave-function as accounted for in the preliminary papers [1-15]. Category: Mathematical Physics On the Meaning of Relativity in Contemporary Physics Authors: Daniele Sasso The paper examines the evolution of the concept of relativity from origins, in the 17th century, to this day. Doubtless the concept of relativity has undergone a substantial change that misrepresents the concept that instead expresses in its essence the fact that laws and equations of physics do not depend on the inertial reference frame and on the inertial observer. The principle of relativity hence is valid only for inertial reference frames and observers who move with constant relative motion. For all inertial observers the same laws of physics and the same equations are valid, it doesn't mean nevertheless for them also the same mathematical solutions are valid when these solutions depend on inertial velocity of a second reference frame with respect to the reference frame supposed at rest. Category: Relativity and Cosmology Neutron Spectroscopy Data Resolution Authors: George Rajna "As far as we know, this is the first published work showing an application of super resolution to neutrons. We're at the forefront of an exciting new trend that will help other neutron scattering facilities improve their own data resolution as well," said Lin. [14] Coupled with SNS, the world's most powerful pulsed accelerator-based neutron source, VENUS will be the only open research facility platform in the US to provide time-of-flight neutron imaging capabilities to users from academia and industry. 
[13] A spallation neutron source has been used by physicists in Japan to search for possible violations of the inverse square law of gravity. [12] Physicists have proposed a way to test quantum gravity that, in principle, could be performed by a laser-based, table-top experiment using currently available technology. [11] Now however, a new type of materials, the so-called Weyl semimetals, similar to 3-D graphene, allow us to put the symmetry destructing quantum anomaly to work in everyday phenomena, such as the creation of electric current. [10] Physicist Professor Chunnong Zhao and his recent PhD students Haixing Miao and Yiqiu Ma are members of an international team that has created a particularly exciting new design for gravitational wave detectors. [9] A proposal for a gravitational-wave detector made of two space-based atomic clocks has been unveiled by physicists in the US. [8] The gravitational waves were detected by both of the twin Laser Interferometer Gravitational-Wave Observatory (LIGO) detectors, located in Livingston, Louisiana, and Hanford, Washington, USA. [7] A team of researchers with the University of Lisbon has created simulations that indicate that the gravitational waves detected by researchers with the LIGO project, and which are believed to have come about due to two black holes colliding, could just have easily come from another object such as a gravaster (objects which are believed to have their insides made of dark energy) or even a wormhole. In their paper published in Physical Review Letters, the team describes the simulations they created, what was seen and what they are hoping to find in the future. [6] In a landmark discovery for physics and astronomy, international scientists said Thursday they have glimpsed the first direct evidence of gravitational waves, or ripples in space-time, which Albert Einstein predicted a century ago. [5] Scientists at the National Institute for Space Research in Brazil say an undiscovered type of matter could be found in neutron stars (illustration shown). Here matter is so dense that it could be 'squashed' into strange matter. This would create an entire 'strange star'-unlike anything we have seen. [4] The changing acceleration of the electrons explains the created negative electric field of the magnetic induction, the electromagnetic inertia, the changing relativistic mass and the Gravitational Force, giving a Unified Theory of the physical forces. Taking into account the Planck Distribution Law of the electromagnetic oscillators also, we can explain the electron/proton mass rate and the Weak and Strong Interactions. Category: Nuclear and Atomic Physics On the Polarization of Gravitational Waves Authors: Paul R. Gerber It is argued that transversal waves in the geometry of space-time, as postulated by General Relativity, violate translation symmetry within phase planes of a planar wave. Consequently, it is proposed that a corresponding test of transversality is performed, which should have become possible with the increasing number of registered events of gravitational waves. This would provide a independent, new, and essential test of General Relativity. Operation Approach of $g\star$-Closed Sets in Ideal Topological Spaces Authors: Saeid Jafari, A. Sevakumar, M. Parimala In this article we introduce $(I; \gamma)$ -$g\star$-closed sets in topological spaces and also introduce $\gamma g\star$-$T_I$-spaces and investigate some of their properties. 
Category: Topology Regularity and Normality Via $\beta \theta$-Open Sets Authors: M. Caldas, Saeid Jafari The aim of this paper is to present and study a new type of regularity and normality called $\beta \theta$-regularity and $\beta \theta$-normality, repectively by using $\beta \theta$-open sets. On the Class of Semipre-$\theta$-Open Sets in Topological Spaces Authors: M. Caldas, Saeid Jafari, T. Noiri In this paper we consider the class of $\beta \theta$-open sets in topological spaces and investigate some of their properties. We also present and study some weak separation axioms by involving the notion of $\beta \theta$-open sets. New Types of Continuous Functions Via $G \~$ \alpha$-Open Sets Authors: Saeid Jafari, A. Selvakumar In this paper , we will continue the study of related irresolute functions with $G \~$ \alpha$-open sets [6]. We introduce and study the notion of completely $G \~$ \alpha$-irresolute functions. Further, we discuss the notion of $G \~$ \alpha$-quotient functions and study some of their properties. $mi$-Open Sets and Quasi-$mi$-Open Sets in Terms of Minimal Ideal Topological Spaces Authors: Saeid Jafari, M. Parimala The purpose of this paper is to introduce a new type of open sets called mI-open sets and quasi-$mI$-open sets in minimal ideal topological spaces and investigate the relation between minimal structure space and minimal ideal structure spaces. Basic properties and characterizations related to these sets are given. Some Properties of $g\~$$\alpha$-Closed Graphs R.Devi et al. [4] introduced the concept of $g\~$$\alpha$-open sets. In this paper, we introduce and study some properties of functions with ultra $g\~$$\alpha$-closed graphs and strongly $g\~$$\alpha$-closed graphs by utilizing $g\~$$\alpha$-open sets and the eg-closure operator. Contra $g\~$$\alpha$-Continuous Functions The concept of $g\~$$\alpha$-closed sets in a topological space are introduced by R. Devi et. al. [4]. In this paper, we introduce the notion of contra $g\~$$\alpha$-continuous functions utilizing $g\~$$\alpha$-open sets and study some of its applications. On New Type of Sets in Ideal Topological Spaces Authors: A. Selvakumar, Saeid Jafari In this paper, we introduce the notion of $I_{g~\alpha}$ -closed sets in ideal topological spaces and investigate some of their properties. Further, we introduce the concept of mildly $I_{g~\alpha}$- closed sets and $I_{g~\alpha}$-normal space. The Correct Formulas of the Experiment of Michelson-Morley Authors: Louiz Akram Comments: 4 Pages. This is a better corrected work from a previous work. When the light photons bounce off a moving mirror, they react with its atoms to be reflected, consequently, the photons change their velocity vector after each impact depending on the velocity vector of the mirror and thus of the atom. The Michelson-Morley experiment was the event that changed the modern science about "the light" by leading to Lorentz and Einstein's theories about time. I made the formulas of the Michelson-Morley experiment by considering the effects of the reflection on the light and The rate (percentage) of fringe shift calculated by the formulas which I demonstrated is perfectly null. As a result not all physics must be wrong ,but only a principle that Einstein and Lorentz stated by fixing the light speed for all observers, which makes us study light in a very difficult way. 
The formulas demonstrated in this work allow us to study the light as a normal wave and to understand easily all the other light effects without being obliged to use any relativity of the time. Category: Classical Physics Nano $g\~$$\alpha$-Closed Sets in Nano Topological Spaces The basic objective of this paper is to introduce and investigate the properties of Nano $g\~$$\alpha$-closed sets in Nano topological spaces. Refutation of Riemann's Definition of Integrals for a Cheating Runner Authors: Colin James III Comments: 2 Pages. © Copyright 2020 by Colin James III All rights reserved. Disqus comments are ignored, so don't bother. See ersatz-systems.com We evaluate six equations which are not tautologous. This refutes the conjecture of Riemann's definition of integrals for a cheating runner before the integral is invoked to form a non tautologous fragment of the universal logic VŁ4. Category: Functions and Analysis $g\~$$\alpha$-Closed Sets in Terms of Grills In this paper, we define the $g\~$$\alpha$($\theta$)-convergence and $g\~$$\alpha$($\theta$)-adherence using the concept of grills and study some of their properties. $g\~$$\alpha$-Closed Sets in Topological Spaces Authors: R. Devi, A. Selvakumar, Saeid Jafari In this paper, we introduce the notion of $g\~$$\alpha$-closed sets in topological spaces and investigate some of their basic properties. Dark Energy Survey Results The Dark Energy Survey (DES) program uses the patterns of cosmic structure as seen in the spatial distribution of hundreds of millions of galaxies to reveal the nature of "dark energy," the source of cosmic acceleration. [32] Key components for the sky-mapping Dark Energy Spectroscopic Instrument (DESI), weighing about 12 tons, were hoisted atop the Mayall Telescope at Kitt Peak National Observatory (KPNO) near Tucson, Arizona, and bolted into place Wednesday, marking a major project milestone. [31] Category: Astrophysics Weak Separation Axioms Via Pre-Regular $p$-Open Sets Authors: M. Caldas, Saeid Jafari, T. Noiri, M. S. Sarsak In this paper, we obtain new separation axioms by using the notion of $(\delta; p)$-open sets introduced by Jafari [3] via the notion of pre-regular $p$-open sets [2]. $**gα$ Closed and $**gα$ Open Sets in the Digital Plane Authors: M. Vigneshwaran1, Saeid Jafari, S. E. Han Digital topology was first studied in the late 1970's by the computer analysis researcher Azriel Rosenfeld [15]. In this paper we derive some of the properties of **gα-open and **gα-closed sets in the digital plane. Moreover, we show that the Khalimsky line $(Z^{2}, K^{2})$ is not an αT_1/2*** space. Also we prove that the family of all **gα-open sets of $(Z^2, K^2)$, say $**GαO(Z^2, K^2)$, forms an alternative topology of Z2 and the topological space (Z2, $**G\alpha O(Z^2, K^2))$ is a T_1/2 space. Moreover, we derive the properties of *gα-closed and *gα-open sets in the digital plane via the singleton's points C# Application to Deal with Neutrosophic $g \alpha$-Closed Sets in Neutrosophic Topology Authors: S. Saranya, M. Vigneshwaran, Saeid Jafari In this paper, we have developed a C# application for finding the values of the complement, union, intersection and the inclusion of any two neutrosophic sets in the neutrosophic field by using .NET Framework, Microsoft Visual Studio and C# Programming Language. In addition to this, the system can find neutrosophic topology ($\tau$ ), neutrosophic $g \alpha$-closed sets and neutrosophic $g \alpha$-closed sets in each resultant screens. 
Also this computer based application produces the complement values of each neutrosophic closed sets. Brain Transplant Telescopes It will enable Australian researchers to do more ambitious research despite the increasing radio-frequency interference from radio transmitters, make more discoveries, and perhaps understand some more of the mysteries of the universe. [32] Key components for the sky-mapping Dark Energy Spectroscopic Instrument (DESI), weighing about 12 tons, were hoisted atop the Mayall Telescope at Kitt Peak National Observatory (KPNO) near Tucson, Arizona, and bolted into place Wednesday, marking a major project milestone. [31] LHCb Explores Lepton Universality The LHCb collaboration has reported an intriguing new result in its quest to test a key principle of the Standard Model called lepton universality. [40] Do the anomalies observed in the LHCb experiment in the decay of B mesons hide hitherto unknown particles from outside the currently valid and well-tested Standard Model? [39] "There is strong experimental evidence that there is indeed some new physics lurking in the lepton sector," Dev said. [38] Now, in a new result unveiled today at the Neutrino 2018 conference in Heidelberg, Germany, the collaboration has announced its first results using antineutrinos, and has seen strong evidence of muon antineutrinos oscillating into electron antineutrinos over long distances, a phenomenon that has never been unambiguously observed. [37] The Precision Reactor Oscillation and Spectrum Experiment (PROSPECT) has completed the installation of a novel antineutrino detector that will probe the possible existence of a new form of matter. [36] The MINERvA collaboration analyzed data from the interactions of an antineutrino-the antimatter partner of a neutrino-with a nucleus. [35] The inclusion of short-range interactions in models of neutrinoless double-beta decay could impact the interpretation of experimental searches for the elusive decay. [34] The occasional decay of neutrons into dark matter particles could solve a long-standing discrepancy in neutron decay experiments. [33] The U.S. Department of Energy has approved funding and start of construction for the SuperCDMS SNOLAB experiment, which will begin operations in the early 2020s to hunt for hypothetical dark matter particles called weakly interacting massive particles, or WIMPs. [32] Thanks to low-noise superconducting quantum amplifiers invented at the University of California, Berkeley, physicists are now embarking on the most sensitive search yet for axions, one of today's top candidates for dark matter. [31] Category: High Energy Particle Physics Protein Mimics Antiaging Effects Scientists are just beginning to understand the cellular processes that lead to aging and slow healing in skin cells. [23] A new method allows researchers to systematically identify specialized proteins that unpack DNA inside the nucleus of a cell, making the usually dense DNA more accessible for gene expression and other functions. [22] Bacterial systems are some of the simplest and most effective platforms for the expression of recombinant proteins. [21] Category: Physics of Biology On the Various Ramanujan Equations (Rogers-Ramanujan Continued Fractions) Linked to Some Sectors of String Theory and Particle Physics: Further New Possible Mathematical Connections Vi. 
Authors: Michele Nardelli, Antonio Nardelli In this research thesis, we have analyzed and deepened further Ramanujan expressions applied to some sectors of String Theory and Particle Physics. We have therefore described other new possible mathematical connections. On a Certain Identity Involving the Gamma Function Authors: Theophilus Agama The goal of this paper is to prove the identity \begin{align}\sum \limits_{j=0}^{\lfloor s\rfloor}\frac{(-1)^j}{s^j}\eta_s(j)+\frac{1}{e^{s-1}s^s}\sum \limits_{j=0}^{\lfloor s\rfloor}(-1)^{j+1}\alpha_s(j)+\bigg(\frac{1-((-1)^{s-\lfloor s\rfloor +2})^{1/(s-\lfloor s\rfloor +2)}}{2}\bigg)\nonumber \\ \bigg(\sum \limits_{j=\lfloor s\rfloor +1}^{\infty}\frac{(-1)^j}{s^j}\eta_s(j)+\frac{1}{e^{s-1}s^s}\sum \limits_{j=\lfloor s\rfloor +1}^{\infty}(-1)^{j+1}\alpha_s(j)\bigg)=\frac{1}{\Gamma(s+1)},\nonumber \end{align}where \begin{align}\eta_s(j):=\bigg(e^{\gamma (s-j)}\prod \limits_{m=1}^{\infty}\bigg(1+\frac{s-j}{m}\bigg)\nonumber \\e^{-(s-j)/m}\bigg)\bigg(2+\log s-\frac{j}{s}+\sum \limits_{m=1}^{\infty}\frac{s}{m(s+m)}-\sum \limits_{m=1}^{\infty}\frac{s-j}{m(s-j+m)}\bigg), \nonumber \end{align}and \begin{align}\alpha_s(j):=\bigg(e^{\gamma (s-j)}\prod \limits_{m=1}^{\infty}\bigg(1+\frac{s-j}{m}\bigg)e^{-(s-j)/m}\bigg)\bigg(\sum \limits_{m=1}^{\infty}\frac{s}{m(s+m)}-\sum \limits_{m=1}^{\infty}\frac{s-j}{m(s-j+m)}\bigg),\nonumber \end{align}where $\Gamma(s+1)$ is the Gamma function defined by $\Gamma(s):=\int \limits_{0}^{\infty}e^{-t}t^{s-1}dt$ and $\gamma =\lim \limits_{n\longrightarrow \infty}\bigg(\sum \limits_{k=1}^{n}\frac{1}{k}-\log n\bigg)=0.577215664\cdots $ is the Euler-Mascheroni constant. A Useful Result About the Zeros of Riemann's Hypothesis: Motivated by many scientific articles attacking the use of Riemann's hypothesis, I made a very useful work about it by proving that the zeros of zeta when the variable real part is smaller than 1/2 and thus of Riemann's Hypothesis aren't images of the divergent function series . In this proof, I didn't suppose that zeta is convergent, but I supposed that the zero is among the images of a given complex number, since zeta can only be a relation when it doesn't converge. Division by Zero Dzielenie Przez Zero Authors: Leszek Mazurek Comments: 49 Pages. Text in Polish, English version soon Since the beginning of time, the ban on dividing by zero has been a serious problem for all thinking people, who have found this restriction hard to accept. Moreover, intuitively it seems very artificial and unjustified. In this work, I have made the effort to analyze this problem very carefully, to try and finally understand where it comes from, why it is a problem, a limitation, and what should be done to solve it. By a very detailed analysis of multiplication and division, I have found out that these operations are the same. Combined with the selection operation they are transformations that perform the change of a pair of numbers into another pair of numbers. After the analysis, I've found out that a ratio of two numbers, that can represent such a transformation, is the most natural form of a number. I'm proposing a new understanding of fractions and explaining why 1/2 should not be equal to 2/4. I presented an explanation to the problem of why division by zero is not possible in real numbers' domain and how easy it can be understood in the domain of rational numbers. The results that I have obtained not only solve this problem but also shed a whole new light on understanding the numbers and operations of numbers. 
My work fixes one of the foundations of mathematics, it is difficult to predict all consequences which this change can bring, to everything, what was created based on this wrong foundation. Category: General Mathematics Point-Free Topological Monoids and Hopf Algebras on Locales and Frames Authors: C. Özel, P. Linker, M. Al Shumrani, Saeid Jafari In this note, we are intended to offer some theoretical consideration concerning the introduction of point-free topological monoids on the locales and frames. Moreover, we define a quantum group on locales by utilizing the Drinfeld-Jimbo group. Proof of Beal's Conjecture Authors: Nikos Mantzakouras The Beal conjecture is a number theory formulated in 1993 by the billionaire banker, Mr Andrew Beal. Mr Beal, very recently, declared a one-million-dollar award for the proof of this number theory. As at present, no proof of this conjecture has been generally found. In this article, we provided the proof for Beal conjecture in a crystal-clear systematic approach.Proof implies that we know the proof of Fermat's last theorem. Category: Number Theory Possible Solution to Hidden Variable(s) in Context of String Theory and Logic Authors: John Peel Comments: 22 Pages. Hopefully interesting - proofs cuold follow The assumption is that a length (angle etc) can vary and be constant simultaneously. This is extended to say that the infinite can be reduced to a constant - essentially a decision. This is in the context of the proposed existence of strings and their connection to curves. Proteins on a Neuron's Surface Scientists have found a new way to home in on the proteins covering a particular cell's surface. The feat offers insight into how brain cells form intricate networks during development. [34] In a recent report, Mengke Yang and colleagues at the Brain Research Instrument Innovation Center, Institute of Neuroscience, Center for Systems Neuroscience and Optical System Advanced Manufacturing Technology in China, Germany and the U.K. developed a new technique named the multiarea two-photon real-time in vitro explorer (MATRIEX). [33] Measuring optical blood flow in the resting human brain to detect spontaneous activity has for the first time been demonstrated by Wright State University imaging researchers, holding out promise for a better way to study people with autism, Alzheimer's and depression. [32] Protein Positioning Technique A research team at Kobe University has developed a method of artificially controlling the anchorage position of target proteins in engineered baker's yeast (Saccharomyces cerevisiae). [35] Scientists have found a new way to home in on the proteins covering a particular cell's surface. The feat offers insight into how brain cells form intricate networks during development. [34] In a recent report, Mengke Yang and colleagues at the Brain Research Instrument Innovation Center, Institute of Neuroscience, Center for Systems Neuroscience and Optical System Advanced Manufacturing Technology in China, Germany and the U.K. developed a new technique named the multiarea two-photon real-time in vitro explorer (MATRIEX). [33] Decoy Molecule Neutralizes Arenaviruses Dr. Hadas Cohen-Dvashi, a member of the Diskin lab, led the next stage of the research, in which she "surgically removed" the very tip of the rodent receptor to which the virus binds and engineered it onto part of an antibody. 
[36] A research team at Kobe University has developed a method of artificially controlling the anchorage position of target proteins in engineered baker's yeast (Saccharomyces cerevisiae). [35] Scientists have found a new way to home in on the proteins covering a particular cell's surface. The feat offers insight into how brain cells form intricate networks during development. [34] Heat-Insulating and Heat-Conducting Scientists at the Max Planck Institute for Polymer Research (MPI-P) in Mainz and the University of Bayreuth have now jointly developed and characterized a novel, extremely thin and transparent material that has different thermal conduction properties depending on the direction. [23] Heat pipes are devices to keep critical equipment from overheating. They transfer heat from one point to another through an evaporation-condensation process and are used in everything from cell phones and laptops to air conditioners and spacecraft. [22] Now, researchers from the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) have developed an algorithm that can discover and optimize these materials in a matter of months, relying on solving quantum mechanical equations, without any experimental input. [21] Researchers at the University of Illinois at Urbana-Champaign have developed a new technology for switching heat flows 'on' or 'off'. [20] Thermoelectric materials can use thermal differences to generate electricity. Now there is an inexpensive and environmentally friendly way of producing them with the simplest tools: a pencil, photocopy paper, and conductive paint. [19] A team of researchers with the University of California and SRI International has developed a new type of cooling device that is both portable and efficient. [18] Thermal conductivity is one of the most crucial physical properties of matter when it comes to understanding heat transport, hydrodynamic evolution and energy balance in systems ranging from astrophysical objects to fusion plasmas. [17] Researchers from the Theory Department of the MPSD have realized the control of thermal and electrical currents in nanoscale devices by means of quantum local observations. [16] Physicists have proposed a new type of Maxwell's demon-the hypothetical agent that extracts work from a system by decreasing the system's entropy-in which the demon can extract work just by making a measurement, by taking advantage of quantum fluctuations and quantum superposition. [15] Pioneering research offers a fascinating view into the inner workings of the mind of 'Maxwell's Demon', a famous thought experiment in physics. [14] Category: Thermodynamics and Energy Machine Learning Ancient Past A team of researchers affiliated with several institutions in China and two in the U.S. has developed a way to use machine learning to get a better look at the past. In their paper published in the journal Science, the group describes how they used machine learning to analyze records of the past. [28] Bioinformatics researchers at Heinrich Heine University Düsseldorf (HHU) and the University of California at San Diego (UCSD) are using machine learning techniques to better understand enzyme kinetics and thus also complex metabolic processes. [27] DNA regions susceptible to breakage and loss are genetic hot spots for important evolutionary changes, according to a Stanford study. 
[26] For the English scientists involved, perhaps the most important fact is that their DNA read was about twice as long as the previous record, held by their Australian rivals. [25] Researchers from the University of Chicago have developed a high-throughput RNA sequencing strategy to study the activity of the gut microbiome. [24] Today a large international consortium of researchers published a complex but important study looking at how DNA works in animals. [23] Asymmetry plays a major role in biology at every scale: think of DNA spirals, the fact that the human heart is positioned on the left, our preference to use our left or right hand ... [22] Scientists reveal how a 'molecular machine' in bacterial cells prevents fatal DNA twisting, which could be crucial in the development of new antibiotic treatments. [21] In new research, Hao Yan of Arizona State University and his colleagues describe an innovative DNA walker, capable of rapidly traversing a prepared track. [20] Just like any long polymer chain, DNA tends to form knots. Using technology that allows them to stretch DNA molecules and image the behavior of these knots, MIT researchers have discovered, for the first time, the factors that determine whether a knot moves along the strand or "jams" in place. [19] Category: Artificial Intelligence Correlation of the Residual Charge of an Electrolytic Capacitor with the Earth's Magnetic Field Authors: Evgeny Arsyukhin A diurnal variation of the residual charge of an electrolytic capacitor is revealed. The change in voltage between the capacitor plates correlates with high accuracy with the daily change in the strength of the Earth's quiet magnetic field. We were unable to find a description of such an effect in the literature. It may be of interest to school teachers who wish to organize simple experiments at school. Category: Geophysics Remarks on Birch and Swinnerton-Dyer Conjecture Authors: Algirdas Antano Maknickas Comments: 1 Page. These short remarks show a derivation of the Birch and Swinnerton-Dyer conjecture. As a consequence, a new constant-free equality resulting from the Birch and Swinnerton-Dyer conjecture is proposed. On the Ramanujan Mathematics Applied to Some Sectors of String Theory and Particle Physics: Further New Possible Mathematical Connections V. In this research thesis, we have analyzed and deepened further Ramanujan expressions applied to some sectors of String Theory and Particle Physics. We have therefore described new possible mathematical connections. Baryon Asymmetry from the Minimal Fractal Manifold Authors: Ervin Goldfain Baryon asymmetry represents the observed excess of matter over antimatter and is conjectured to follow from the Sakharov conditions for baryogenesis. Our brief note highlights a surprising connection between baryon asymmetry and the minimal fractality of spacetime near the Fermi scale. This connection is likely to emerge from the non-equilibrium regime of dimensional fluctuations in the early Universe. A Runner Cheating Thanks to Riemann: Comments: 2 Pages. This is a corrected version of a previous work about integrals. Pure mathematics should be applied very carefully to fields that have their own special considerations and axioms. I use the simple example of runners waiting for the start of a race and conclude, using Riemann's definition of integrals, that a runner could cheat in order to win. The demonstration in this paper is very simple, but the analogy between the proposed example and many other fields should make researchers careful when applying Riemann's definition of the integral.
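As a concrete companion to the abstract above, the following minimal sketch (our own illustration, with a made-up speed profile rather than anything from the paper) shows how Riemann's definition turns a runner's speed into a distance via left Riemann sums that converge as the partition is refined.

```python
# Illustrative only: approximate the distance covered by a runner as a left Riemann sum
# of an assumed speed profile v(t). The profile below is a made-up example, not data from
# the paper; the point is simply how Riemann's definition turns speed into distance.

def speed(t: float) -> float:
    """Hypothetical speed profile in m/s: the runner accelerates, then holds a steady pace."""
    return min(8.0, 4.0 * t)

def left_riemann_distance(t_end: float, n: int) -> float:
    """Left Riemann sum of speed over [0, t_end] with n equal subintervals."""
    dt = t_end / n
    return sum(speed(i * dt) for i in range(n)) * dt

if __name__ == "__main__":
    for n in (10, 100, 1000, 10000):
        print(n, round(left_riemann_distance(10.0, n), 3))
    # The sums approach the exact distance for this profile: 8 m during the two-second
    # ramp plus 8 m/s * 8 s = 64 m at steady pace, i.e. 72 m in total.
```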
Unificação Das Leis Gravitacional Com a Relatividade Especial (Unification of the Gravitational Laws with Special Relativity) Authors: Alberto Mananga Bifica Comments: 10 Pages. This scientific article is written in Portuguese. The combination of the gravitational laws with Special Relativity details the conditions that allow one to explain that energy, together with space-time and matter, is responsible for the gravitational interaction; gravity reaches its maximum speed v=c at the centres of mass of planets, stars and black holes, and also in the continuous process in which energy creates matter and matter transforms into electromagnetic waves that take part in the expansion of the Universe. The extreme points at which gravity reaches v=c are where the unification with electromagnetism takes place. Unified Field (Gravity and Electromagnetism): Combining the laws of Special Relativity with the kinematics and dynamics of motion yields new electromagnetic and gravitational equations that detail the conditions under which the two become a single force, and that also allow the mass, radius and electric charge of the Universe to be calculated in its present, progressive and regressive states. The electromagnetic force becomes equal to gravity when the speed of their interactions equals that of light, that is, at the centres of mass where the power of energy is greatest, in the expansion of the Universe, and in the continuous process in which energy creates matter and matter transforms into electromagnetic waves in order to interact with the exterior. The equations show that it was not a force that gave rise to the big bang (F=0), since with a single particle there could be no interaction; the origin of the big bang occurred through energy (E=mc^2) with gravitational and electromagnetic fields of action in which the mass tends to infinity at the equilibrium speed v=c; they also show that the energy waves never had a beginning and have always existed in infinite time. The equations v_e v_g = c^2 and v_e v_p = c^2 respectively show evidence that the energy wave, or light, and the gravitational and electromagnetic laws are a single entity, and also prove that atoms are only an action caused by the possibilities of wave-particle duality. Final Equations of the Unified Field (Gravity and Electromagnetism): The electromagnetic force becomes equal to gravity when the speed of their interactions equals c, the speed of light; this occurs at the centres of mass where the power of energy is greatest, in the expansion of the Universe, and in the continuous process in which energy creates matter and matter transforms into electromagnetic waves in order to interact with the exterior. The equations show that it was not a force that gave rise to the big bang (F=0), but rather energy (E=mc^2) with gravitational and electromagnetic fields of action in which the mass tends to infinity at the equilibrium speed v=c. The equation v_e v_g = c^2 shows evidence that the energy wave, or light, and the gravitational and electromagnetic laws are a single entity, and also proves that atoms are only an action caused by the possibilities of wave-particle duality, whose whole is light.
Sistema Atómico e O Segredo da Luz (Atomic System and the Secret of Light: the Energy that Carries the Waves, and the Speed and Radius of the Proton) The atom has been one of the great challenges of modern physics in the attempt to unravel its mysteries: the more we learn about the atomic world, the more uncertainties appear in our attempts to understand it. This article argues that the secret of the atom lies in its deep and unique connection with light, which allows its existence and its functioning in its possibilities of becoming a particle. The speed of light is the proportional product of the centripetal speeds of the proton and the electron, showing that they are a single entity in the Universe; that is, the equation v_e v_p = c^2 is taken to prove that light is responsible for the atom, thereby allowing the wave-particle duality of this system. Through wave-particle duality an equation was found that is the new entity of energy in the Universe. Atlantis in Auschwitz-Birkenau Authors: Leon Elshout Auschwitz was surrounded by a sea of hostile nations, while the Jews were collectively kicked into the death realm of the sheol, just as Atlantis had sunk into the sea. Auschwitz was located in the Polish region of Galicia, a name that sounds like Galilee. The Lake of Galilee has the same harp shape (Psalm 49:4) as Athanasius Kircher's Atlantis. Adolf Hitler had the spirit of the false "nazi" prince King Atlas, while King David will be the true "nashi" prince of the coming aion according to Ezekiel 37:25. Category: Religion and Spiritualism Proving Physics by Making Science Formal (2020) Authors: Alexandre Harvey-Tremblay Under the expectation that the laws of physics are revealed by the practice of science in nature, we conjectured that a mathematical formalization of that practice would carry over this expectation and would be able to produce, in a formal setting, the laws of physics as theorems of the practice. Here, we report a mathematical model formalizing the practice of science in nature. The model, which we name formal science, is constructed within the frameworks of algorithmic information theory and theoretical computer science. Formal science is a significant improvement over the informal practice of science as well as a 'clarification tool par excellence' for the foundations of physics, and as such it can derive the corpus of physics as a theorem, hence giving it a purely mathematical origin. Formal science reveals that nature and the laws that govern it, far from being arbitrary, are mathematically extremely special; nature is emergent, quite minimally, as the 'substance' that formally verifies the experiments recursively enumerated from the domain of science by the observer. After we present the model, we then begin the long program of deriving the corpus of physics using formal science as a sound, observer-centric mathematical foundation of physics that is free of physical baggage. Different Types of Propulsion Authors: Jeffrey Joseph Wolynski A short list, from memory, outlining different types of propulsion people use in the galaxy; it will be clarified and refined in the future. The purpose of this paper is to take the edge off the ridicule factor still prevalent in today's society concerning extraterrestrial visitation and extraterrestrials' interaction with the militaries and civilian populations of the world. The denial of this visitation and interaction with Earthlings by the scientific communities of the world can be ignored; their opinion is worthless. It is only the facts that matter, and the facts point to Earth being visited by people from other star systems.
The facts also point to their craft being far more advanced than our own, and they utilize nuclear propulsion in a way we haven't figured out how to do yet. Category: General Science and Philosophy Strip Configuration of the Poincaré Sphere Authors: Vincenzo Nardozza In this paper we find and discuss the strip configuration of the Poincaré sphere. Biot-Savart Law and Stokes' Theorem Authors: Eric Su Comments: 2 Pages. Note that the author does not read Disqus comments here; please respond by email. Including a list of publications is also a healthy gesture. The Biot-Savart law describes the magnetic field due to the electric current in a conductive wire. For a long straight wire, the magnetic field is proportional to I/r. The curl of the magnetic field is proportional to dI/dr. For a constant current, the curl of the magnetic field is zero. Consequently, the surface integral of the curl of the magnetic field is zero, but the line integral of the magnetic field is not. Stokes' theorem cannot be applied to the magnetic field vector generated by a constant electric current because the magnetic field is not a differentiable vector.
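The two integrals contrasted in the abstract above can be checked numerically. The sketch below is our own illustration using the textbook field B = mu0*I/(2*pi*r) of an infinite straight wire; the current value is an arbitrary assumption. It reproduces the situation the author describes: the curl of B vanishes at every point off the axis, while the line integral around a circle enclosing the wire equals mu0*I. Whether one reads this as a failure of Stokes' theorem, as the paper argues, or as the effect of the singular current density on the axis, is exactly the point under discussion.

```python
# Numeric companion (our own check) for the standard field of a long straight wire.
import math

MU0 = 4e-7 * math.pi   # vacuum permeability, T*m/A
I = 2.0                # assumed current in amperes (illustrative value)

def B(x: float, y: float):
    """Magnetic field (Bx, By) of an infinite straight wire along the z-axis."""
    r2 = x * x + y * y
    pref = MU0 * I / (2 * math.pi * r2)
    return (-pref * y, pref * x)

def circulation(radius: float, n: int = 100_000) -> float:
    """Line integral of B along a circle of the given radius centred on the wire."""
    total = 0.0
    for k in range(n):
        t = 2 * math.pi * k / n
        x, y = radius * math.cos(t), radius * math.sin(t)
        bx, by = B(x, y)
        dlx, dly = -math.sin(t), math.cos(t)          # unit tangent
        total += (bx * dlx + by * dly) * (2 * math.pi * radius / n)
    return total

def curl_z(x: float, y: float, h: float = 1e-6) -> float:
    """z-component of curl B at (x, y), by central finite differences."""
    dBy_dx = (B(x + h, y)[1] - B(x - h, y)[1]) / (2 * h)
    dBx_dy = (B(x, y + h)[0] - B(x, y - h)[0]) / (2 * h)
    return dBy_dx - dBx_dy

if __name__ == "__main__":
    print("mu0*I           =", MU0 * I)
    print("circulation     =", circulation(0.5))   # matches mu0*I for any enclosing radius
    print("curl_z off-axis =", curl_z(0.3, 0.4))   # numerically ~0 away from the wire
```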
Stellar Aberration Revisited Authors: Cameron Wong Comments: 6 Pages. Theoretically, stellar aberration can be calculated with absolute precision rather than only "approximately". It appears that the reasoning for calculating stellar aberration proposed by James Bradley cannot satisfy a large group of people, particularly those who believe that only special relativity can deliver a calculation that is accurate to the point. This means that, in case relativity happens to be invalid, the argument about the calculation of stellar aberration has not yet truly been settled in today's scientific world. The problem is that relativity does show evidence that it must lead itself to conclude a speed of light c=0 and thus refutes itself. So history still leaves open the opportunity for us to approach the calculation from a new angle. Why Bodies Made of Matter Obey Mathematical Laws Authors: Jose P Koshy Why bodies obey mathematical laws is a longstanding question in physics. Here I put forth a very simple explanation. Matter is made up of fundamental units having finite, non-interconvertible properties, and motion is one of those properties. The former implies that the integration of units into a system is adding up, and adding up follows mathematical laws. The latter implies that the system keeps changing due to motion; motion obeys mathematical laws, and so the changes in the system obey mathematical laws. Chemical Bond with Atoms The team believe that one day in future electron microscopy may become a general method for studying chemical reactions, similar to spectroscopic methods widely used in chemistry labs. [36] By breaking with conventionality, University of Otago physicists have opened up new research and technology opportunities involving the basic building block of the world—atoms. [35] A novel technique that nudges single atoms to switch places within an atomically thin material could bring scientists another step closer to realizing theoretical physicist Richard Feynman's vision of building tiny machines from the atom up. [34] Category: Quantum Physics Photo-Excited Mott Insulators Assistant Professor Ohmura Shu and Professor Takahashi Akira of the Nagoya Institute of Technology and others have developed a charge model to describe photo-excited states of one-dimensional Mott insulators under the JST Strategic Basic Research Programs. [37] The team believe that one day in future electron microscopy may become a general method for studying chemical reactions, similar to spectroscopic methods widely used in chemistry labs. [36] By breaking with conventionality, University of Otago physicists have opened up new research and technology opportunities involving the basic building block of the world—atoms. [35] Robotic Graspers Defy Gravity Scientists have developed a suction unit that can be used on rough surfaces, no matter how textured, and that has applications in the development of climbing robots and robotic arms with grasping capabilities. [12] The relationship may even unlock the quantum nature of gravity. "It is among our best clues to understand gravity from a quantum perspective," said Witten. [11] Scientists at the University of British Columbia have proposed a radical new theory to explain the exponentially increasing size of the universe. [10] Category: Condensed Matter Relativity and Absolute Laws Authors: Jean Louis Van Belle This paper adds some thoughts on relativity theory and geometry to our one-cycle photon model. We basically highlight what exactly we should think of as being relative in this model (energy, wavelength, and the related force and field values), as opposed to what is absolute (the geometry of spacetime and the geometry of the photon). Molecules Move Faster Now, writing in Physical Review Letters, Cristian Rodriguez-Tinoco and a team of the Université libre de Bruxelles (ULB) Faculty of Sciences led by Simone Napolitano show that large molecules actually move faster in the proximity of rougher surfaces at the nanometric scale. [37] The team believe that one day in future electron microscopy may become a general method for studying chemical reactions, similar to spectroscopic methods widely used in chemistry labs. [36] By breaking with conventionality, University of Otago physicists have opened up new research and technology opportunities involving the basic building block of the world—atoms. [35] Solve Equations (The Generalized Theorem) All the approximate methods mentioned, and others that exist, give some specific solutions of generalized transcendental or even polynomial equations, but they cannot resolve them completely. What we ask for when we solve a generalized transcendental or polynomial equation is to find the total number of roots, not isolated sets of roots obtained in some random or prescribed way, mainly because many categories of transcendental equations have an infinite number of solutions in the complex domain. There are particular equations, involving logarithmic or trigonometric functions, which solve particular problems in physics and mostly need the generalized solution. This is what the theory G.R.LE addresses, with the help of super-simple geometric functions, giving a very satisfactory answer to this whole complex problem. Programmable Cell Nests Using DNA, small silica particles, and carbon nanotubes, researchers of the Karlsruhe Institute of Technology (KIT) have developed novel programmable nanocomposites that can be tailored to various applications and programmed to degrade quickly and gently. [37] Scientists at the Technical University of Munich (TUM) have functionalized a simple rod-like building block with hydroxamic acids at both ends.
[36] Tiny silica bottles filled with medicine and a special temperature-sensitive material could be used for drug delivery to kill malignant cells only in certain parts of the body, according to a study published recently by researchers at the Georgia Institute of Technology. [35] An Analysis of the State-Transition Function of a Self-Reproducing Structure in Cellular Automata Space Authors: Perry W Swanborough A cellular automata structure described by J Byl (1989) self-replicates under a corresponding state-transition function. Subsequent work has established that replication of this and related structures given by other researchers is homochiral. This work describes a detailed analysis of the state-transition function for replication of J Byl's structure, so the work serves as an Appendix to accompany the preceding work viXra:1904.0225. On Various Ramanujan Formulas Applied to Some Sectors of String Theory (Open Strings) and Particle Physics: Further New Possible Mathematical Connections IV. In this research thesis, we have analyzed and deepened various Ramanujan expressions applied to some sectors of String Theory (open strings) and Particle Physics. We have therefore described further new possible mathematical connections. Piece by Piece - Electrochemical Synthesis of Individual Nanoparticles and their Performance in ORR Electrocatalysis Authors: Mathies V. Evers, Miguel Bernal, Beatriz Roldan Cuenya, Kristina Tschulik The impact of individual HAuCl4 nanoreactors is measured electrochemically, which provides operando insights and precise control over the modification of electrodes with functional nanoparticles of well-defined size. Uniformly sized micelles are loaded with a dissolved metal salt. These solution-phase precursor entities are then reduced electrochemically - one by one - to form nanoparticles (NPs). The charge transferred during the reduction of each micelle is measured individually and allows operando sizing of each of the formed nanoparticles. Thus, particles of known number and sizes can be deposited homogeneously even on nonplanar electrodes. This is demonstrated for the decoration of cylindrical carbon fibre electrodes with 25 +/- 7 nm sized Au particles from HAuCl4-filled micelles. These Au NP-decorated electrodes show great catalyst performance for the ORR (oxygen reduction reaction) even at low catalyst loadings. Hence, collisions of individual precursor-filled nanocontainers are presented as a new route to nanoparticle-modified electrodes with high catalyst utilization. Category: Chemistry Ab Initio Cyclic Voltammetry on Cu(111), Cu(100) and Cu(110) in Acidic, Neutral and Alkaline Solutions Authors: Alexander Bagger, Rosa M. Arán-Ais, Joakim Halldin Stenlid, Egon Campos dos Santos, Logi Arnarson, Kim Degn Jensen, María Escudero-Escribano, Beatriz Roldan Cuenya, Jan Rossmeisl Electrochemical reactions depend on the electrochemical interface between the catalytic surfaces and the electrolytes. To control and advance electrochemical reactions there is a need to develop realistic simulation models of the electrochemical interface, in order to understand the interface from an atomistic point of view. Here we present a method for obtaining thermodynamically realistic interface structures, a procedure to derive specific coverages, and a way to obtain ab initio simulated cyclic voltammograms. As a case study, the method and procedure are applied in a matrix study of three Cu facets in three different electrolytes.
The results are validated by a direct comparison with experimental cyclic voltammograms. The cyclic voltammograms in alkaline (NaOH) electrolyte are described by adsorbed H and OH species, while in neutral (KHCO3) electrolyte carbonate (CO3) species are present, and in acidic (KCl) electrolyte Cl species dominate. An almost one-to-one mapping is observed from simulation to experiment, giving an atomistic understanding of the interface structure of the Cu facets. The strength of an atomistic understanding of the interface under electrolyte conditions will allow realistic investigations of electrochemical reactions in future studies. Spin Currents for Advanced Electronic Devices Graphene-based van der Waals heterostructures could be used to design ultra-compact and low-energy electronic devices and magnetic memory devices, according to a study led by ICREA Prof. Sergio O. Valenzuela, head of the ICN2 Physics and Engineering of Nanodevices Group. [15] A new method that precisely measures the mysterious behavior and magnetic properties of electrons flowing across the surface of quantum materials could open a path to next-generation electronics. [14] The emerging field of spintronics aims to exploit the spin of the electron. [13] In a new study, researchers measure the spin properties of electronic states produced in singlet fission – a process which could have a central role in the future development of solar cells. [12] Scalability of Next-Generation Electronics Nebraska engineers Peter and Eli Sutter have shown that the elemental condiment can spice up a nanomaterial sandwich by putting a literal twist on the multi-layered classic. [16] Graphene-based van der Waals heterostructures could be used to design ultra-compact and low-energy electronic devices and magnetic memory devices, according to a study led by ICREA Prof. Sergio O. Valenzuela, head of the ICN2 Physics and Engineering of Nanodevices Group. [15] A new method that precisely measures the mysterious behavior and magnetic properties of electrons flowing across the surface of quantum materials could open a path to next-generation electronics. [14] Nanostructure for Structurally Colored Surfaces Structural colors appear because the imprinted pattern on a surface changes the wavelengths of light. Chinese scientists have introduced an azopolymer that allows the imprinting of nanopatterns in a novel room-temperature lithographic process. [17] Nebraska engineers Peter and Eli Sutter have shown that the elemental condiment can spice up a nanomaterial sandwich by putting a literal twist on the multi-layered classic. [16] Graphene-based van der Waals heterostructures could be used to design ultra-compact and low-energy electronic devices and magnetic memory devices, according to a study led by ICREA Prof. Sergio O. Valenzuela, head of the ICN2 Physics and Engineering of Nanodevices Group. [15] Force I. the One and Only Fundamental Interaction Authors: Tamas Lajtner The world's first thought power meter is presented. Its first result is that thought force with no electric field is able to change the current and voltage in an electric circuit. This result has consequences; the most important is the following one: there is only one fundamental interaction, because every known interaction can be explained by one ultimate fundamental interaction called Force I of All. (Pronunciation: "Force the First".) Force I is the interaction between space and matter. Thought force is one existing form of Force I, and the thought force measured here is the first evidence that Force I exists. Stability of Ice Lenses in Saline Soils Authors: S. S.
L. Peppin A model of the growth of an ice lens in a saline porous medium is developed. At high lens growth rates the pore fluid becomes supercooled relative to its equilibrium Clapeyron temperature. Instability occurs when the supercooling increases with distance away from the ice lens. Solute diffusion in the pore fluid significantly enhances the instability. An expression for the segregation potential of the soil is obtained from the condition for marginal stability of the ice lens. The model is applied to a clayey silt and a glass powder medium, indicating parameter regimes where the ice lens stability is controlled by viscous flow or by solute diffusion. A mushy layer, composed of vertical ice veins and horizontal ice lenses, forms in the soil in response to the instability. A marginal equilibrium condition is used to estimate the segregated ice fraction in the mushy layer as a function of the freezing rate and salinity. Does The Observation Process Effect On The Observed Particle Nature? Authors: Gerges Francis Tawdrous Empirical Data An Electron Or Subatomic Particle Behaves As A Particle And Not A Wave When It's Observed! Why? Because The Human Mind Realization Process Depends On (And Uses) The Light Velocity (0.3mkm/Sec) How Does The Observer Effect On The Observed Particle Nature? By Light Beam Produced By The Observer Mind Realization Process. Paper Idea Summary This paper tells a clear claim – that – Because our minds realization process depends on and uses the light velocity (0.3mkm/sec) in the human thinking process that causes the light velocity to be a contributor factor of every thing around us – simply – We See The Universe Through The Light Velocity Effect On Our Vision- That easily explains the equation E=mc2 – why any Mass Energy is defined relative to the light velocity?! Because the light velocity is found in our minds- that explains Why light velocity is constant in all frames Now Because our minds uses the light velocity (0.3mkm/sec), that enable our minds to produce light beams by the thinking process – this produced light beam can effect on any particle nature as the sun rays effect on all particles – This is the idea shortly– let's ask……"Is this idea truth?" This paper tries to prove… Gerges Francis Tawdrous +201022532292 A Theory for Gravitational Killing Cancer Cells, Lightning Mechanism and Anomalous Magnetism of Muon 2G Authors: Reginald B. Little The death of cancer cells under zero gravity or simulated zero gravity has an unknown cause. A prior theory of gravity by fractional, reversible fissing of matter and fusing of space to target is presented for explaining this mystery of gravitational killing of cancer cells. With this new theory a new math of divergent differentiations and divergent integrations are outlined to explain mysteries. By the mechanism given the variation in source gravity as computed by the new math can thereby explain effects on biology as the biology and chemistry have divergent differentiations and divergent integrations, which couple source of gravities and couple changing gravities to surrounding spaces and targets in surrounding spaces. Greater effects of gravities in nano-scales than molecular scales are reasoned as nano-sizes have mass effects and greater collectivity relative to molecular scales. The mechanism also postulates superluminosity of rare with slowing to luminous with denseness (space reversal) for explaining inertia, denseness and back and forth time reversal. The loss of inertia due to space reversal is reasoned! 
Mass to energy and vice versa dynamics are involved relativistically. By such new mechanics there are differences in denseness as the superlumes fiss to rare so surrounding rare can couple and the superluminous rare concentrates to slow so as to couple to dense. There are limits of such superluminosity as by the vast distances and the vast, composite, dense spaces of matter and the slowness (inertia). These new mechanics of composite matter/space manifest group dispersion (by new divergent calculus) as provided by hidden mechanics as by self-interacting self-deforming conformations to explain observable phase dispersed (older calculus). The observables are manifested by phase dispersions of matter and space as by new divergent calculus via constructive self interactions. The new math is contrasted with the Newtonian integrals and derivatives, which are more finite in actions and consequences whereas the divergent integrals and divergent differentiations are more infinities in actions and consequences. If dynamic infinity and count infinity, then the counting and mechanics can be as demonstrated here by many infinities or counter infinities. Billions Quantum Entangled Electrons In a new study, U.S. and Austrian physicists have observed quantum entanglement among "billions of billions" of flowing electrons in a quantum critical material. [36] Researchers at Technische Universität Darmstadt have recently demonstrated the defect-free assembly of versatile target patterns of up to 111 single-atom quantum systems. [35] Physicists at the National Institute of Standards and Technology (NIST) have teleported a computer circuit instruction known as a quantum logic operation between two separated ions (electrically charged atoms), showcasing how quantum computer programs could carry out tasks in future large-scale quantum networks. [34] Scientists have developed a topological photonic chip to process quantum information, promising a more robust option for scalable quantum computers. [33] With their insensitivity to decoherence, Majorana particles could become stable building blocks of quantum computers. [32] A team of researchers at the University of Maryland has found a new way to route photons at the micrometer scale without scattering by building a topological quantum optics interface. [31] Researchers at the University of Bristol's Quantum Engineering Technology Labs have demonstrated a new type of silicon chip that can help building and testing quantum computers and could find their way into your mobile phone to secure information. [30] Theoretical physicists propose to use negative interference to control heat flow in quantum devices. [29] Particle physicists are studying ways to harness the power of the quantum realm to further their research. [28] A fundamental barrier to scaling quantum computing machines is "qubit interference." In new research published in Science Advances, engineers and physicists from Rigetti Computing describe a breakthrough that can expand the size of practical quantum processors by reducing interference. [26] Measure Quantum Materials Experimental physicists have combined several measurements of quantum materials into one in their ongoing quest to learn more about manipulating and controlling the behavior of them for possible applications. [30] These emerging magnetic properties suggest that the dots could, indeed, have potential in quantum computing as storage and processing devices. [29] Researchers successfully integrated the systems-donor atoms and quantum dots. 
[28] A team of researchers including U of A engineering and physics faculty has developed a new method of detecting single photons, or light particles, using quantum dots. [27] Recent research from Kumamoto University in Japan has revealed that polyoxometalates (POMs), typically used for catalysis, electrochemistry, and photochemistry, may also be used in a technique for analyzing quantum dot (QD) photoluminescence (PL) emission mechanisms. [26] Researchers have designed a new type of laser called a quantum dot ring laser that emits red, orange, and green light. [25] The world of nanosensors may be physically small, but the demand is large and growing, with little sign of slowing. [24] In a joint research project, scientists from the Max Born Institute for Nonlinear Optics and Short Pulse Spectroscopy (MBI), the Technische Universität Berlin (TU) and the University of Rostock have managed for the first time to image free nanoparticles in a laboratory experiment using a highintensity laser source. [23] For the first time, researchers have built a nanolaser that uses only a single molecular layer, placed on a thin silicon beam, which operates at room temperature. [22] A team of engineers at Caltech has discovered how to use computer-chip manufacturing technologies to create the kind of reflective materials that make safety vests, running shoes, and road signs appear shiny in the dark. [21] In the September 23th issue of the Physical Review Letters, Prof. Julien Laurat and his team at Pierre and Marie Curie University in Paris (Laboratoire Kastler Brossel-LKB) report that they have realized an efficient mirror consisting of only 2000 atoms. [20] Physicists at MIT have now cooled a gas of potassium atoms to several nanokelvins-just a hair above absolute zero-and trapped the atoms within a two-dimensional sheet of an optical lattice created by crisscrossing lasers. Using a high-resolution microscope, the researchers took images of the cooled atoms residing in the lattice. [19] Three Circle Chains Arising from Three Lines Authors: Hiroshi Okumura We generalize a problem in Wasan geometry involving a triangle and its incircle and get simple relationships between the three chains arising from three lines. Category: Geometry Semiconductor Green Laser Scientists and Engineers have used surface-emitting semiconductor lasers in data communications, for sensing, in FaceID and within augmented reality glasses. [40] A study led by scientists of the Max Planck Institute for the Structure and Dynamics of Matter (MPSD) at the Center for Free-Electron Laser Science in Hamburg/Germany presents evidence of the amplification of optical phonons in a solid by intense terahertz laser pulses. [39] Femtosecond lasers are capable of processing any solid material with high quality and high precision using their ultrafast and ultra-intense characteristics. [38] Nanoscalechip Detect Temperature Now, Joohyun Lee and Il Doh of the Korea Research Institute of Standards and Science, in Daejeon, South Korea, have developed a tiny device that measures otherwise undetectable heat changes. [21] An international team of physicists, materials scientists, and mechanical engineers has confirmed the high thermal conductivity predicted in isotopically enriched cubic boron nitride, the researchers report in the advance electronic edition of the journal Science. [20] Thermoelectric materials can use thermal differences to generate electricity. 
Now there is an inexpensive and environmentally friendly way of producing them with the simplest tools: a pencil, photocopy paper, and conductive paint. [19] A team of researchers with the University of California and SRI International has developed a new type of cooling device that is both portable and efficient. [18] Thermal conductivity is one of the most crucial physical properties of matter when it comes to understanding heat transport, hydrodynamic evolution and energy balance in systems ranging from astrophysical objects to fusion plasmas. [17] Researchers from the Theory Department of the MPSD have realized the control of thermal and electrical currents in nanoscale devices by means of quantum local observations. [16] Physicists have proposed a new type of Maxwell's demon-the hypothetical agent that extracts work from a system by decreasing the system's entropy-in which the demon can extract work just by making a measurement, by taking advantage of quantum fluctuations and quantum superposition. [15] Pioneering research offers a fascinating view into the inner workings of the mind of 'Maxwell's Demon', a famous thought experiment in physics. [14] For more than a century and a half of physics, the Second Law of Thermodynamics, which states that entropy always increases, has been as close to inviolable as any law we know. In this universe, chaos reigns supreme. [13] Physicists have shown that the three main types of engines (four-stroke, twostroke, and continuous) are thermodynamically equivalent in a certain quantum regime, but not at the classical level. [12] Colloidal Quantum Dot Photodetectors In their experiment, the researchers used a technique to electronically dope the quantum dots robustly and permanently. [30] These emerging magnetic properties suggest that the dots could, indeed, have potential in quantum computing as storage and processing devices. [29] Researchers successfully integrated the systems-donor atoms and quantum dots. [28] A team of researchers including U of A engineering and physics faculty has developed a new method of detecting single photons, or light particles, using quantum dots. [27] Recent research from Kumamoto University in Japan has revealed that polyoxometalates (POMs), typically used for catalysis, electrochemistry, and photochemistry, may also be used in a technique for analyzing quantum dot (QD) photoluminescence (PL) emission mechanisms. [26] Researchers have designed a new type of laser called a quantum dot ring laser that emits red, orange, and green light. [25] The world of nanosensors may be physically small, but the demand is large and growing, with little sign of slowing. [24] In a joint research project, scientists from the Max Born Institute for Nonlinear Optics and Short Pulse Spectroscopy (MBI), the Technische Universität Berlin (TU) and the University of Rostock have managed for the first time to image free nanoparticles in a laboratory experiment using a highintensity laser source. [23] For the first time, researchers have built a nanolaser that uses only a single molecular layer, placed on a thin silicon beam, which operates at room temperature. [22] A team of engineers at Caltech has discovered how to use computer-chip manufacturing technologies to create the kind of reflective materials that make safety vests, running shoes, and road signs appear shiny in the dark. [21] In the September 23th issue of the Physical Review Letters, Prof. 
Julien Laurat and his team at Pierre and Marie Curie University in Paris (Laboratoire Kastler Brossel-LKB) report that they have realized an efficient mirror consisting of only 2000 atoms. [20] Physicists at MIT have now cooled a gas of potassium atoms to several nanokelvins-just a hair above absolute zero-and trapped the atoms within a two-dimensional sheet of an optical lattice created by crisscrossing lasers. Using a high-resolution microscope, the researchers took images of the cooled atoms residing in the lattice. [19] On Various Ramanujan Formulas Applied to Some Sectors of String Theory and Particle Physics: Further New Possible Mathematical Connections III. In this research thesis, we have analyzed and deepened various Ramanujan expressions applied to some sectors of String Theory and Particle Physics. We have therefore described further new possible mathematical connections. Ontological Math-Physics Mirror Between Noether and Planck Authors: Francis Maleval A mirror built here, by virtue of an iterative process, a geometry from a conceptual object. This dynamic, served by the Noether's theorem, generates the universal constants. Nano-Patterns Building Blocks Scientists at the Technical University of Munich (TUM) have functionalized a simple rod-like building block with hydroxamic acids at both ends. [36] Tiny silica bottles filled with medicine and a special temperature-sensitive material could be used for drug delivery to kill malignant cells only in certain parts of the body, according to a study published recently by researchers at the Georgia Institute of Technology. [35] The lab of Cheryl Kerfeld at Michigan State University has created a synthetic nano-sized factory, based on natural ones found in bacteria. [34] Quantum Detector Sensitivity One of the open questions in quantum research is how heat and thermodynamics coexist with quantum physics. [39] But one lesser-known field is also starting to reap the benefits of the quantum realm-medicine. [38] A quantum squeezing and amplification technique has been used to measure the position of a trapped ion to subatomic precision. [37] A new theoretical model involves squeezing light to just the right amount to accurately transmit information using subatomic particles. [36] The standard approach to building a quantum computer with majoranas as building blocks is to convert them into qubits. However, a promising application of quantum computing-quantum chemistry-would require these qubits to be converted again into so-called fermions. [35] Scientists have shown how an optical chip can simulate the motion of atoms within molecules at the quantum level, which could lead to better ways of creating chemicals for use as pharmaceuticals. [34] Chinese scientists Xianmin Jin and his colleagues from Shanghai Jiao Tong University have successfully fabricated the largest-scaled quantum chip and demonstrated the first two-dimensional quantum walks of single photons in real spatial space, which may provide a powerful platform to boost analog quantum computing for quantum supremacy. [33] To address this technology gap, a team of engineers from the National University of Singapore (NUS) has developed an innovative microchip, named BATLESS, that can continue to operate even when the battery runs out of energy. 
[32] Stanford researchers have developed a water-based battery that could provide a cheap way to store wind or solar energy generated when the sun is shining and wind is blowing, so that it can be fed back into the electric grid and be redistributed when demand is high. [31] Researchers at AMOLF and the University of Texas have circumvented this problem with a vibrating glass ring that interacts with light. They thus created a microscale circulator that directionally routes light on an optical chip without using magnets. [30] A Equação das Equações. A Unificadora do Tudo. A Solução de Absolutamente Tudo (The Equation of Equations: the Unifier of Everything, the Solution of Absolutely Everything) Authors: Josênio Dos Anjos Comments: 1 Page. I put my hand into God's cauldron and even watched Him flip the light switch. A car in the General Theory of Relativity travels at 100 km/h to cover 100 km; when it arrives, it turns out to have covered 109 km in 1 h. Question: where does the extra energy come from to have lengthened this journey? Worse still, it set off with no fuel in the tank. Solution: it is the road that is stretching; the car is stationary. If it starts its engine, its last drop of petrol explodes. And since the road stretched on both sides at a constant speed of 100 km/h, the impression we get is that it travelled a few extra kilometres. This solves the mystery of Dark Energy and of everything, absolutely everything. The other questions are addressed in another article. Any theory using the equation postulated here must be expressly authorized by the author. Deep Learning Real-Time Imaging Researchers have harnessed the power of a type of artificial intelligence known as deep learning to create a new laser-based system that can image around corners in real time. [28] A team of scientists at Freie Universität Berlin has developed an Artificial Intelligence (AI) method that provides a fundamentally new solution of the "sampling problem" in statistical physics. [27] Deep learning, which uses multi-layered artificial neural networks, is a form of machine learning that has demonstrated significant advances in many fields, including natural language processing, image/video labeling and captioning. [26] Single Molecule Force Spectroscopy As researchers develop clever approaches to achieve that goal, this subject alone could be a theme of another exciting symposium. [41] A team at Osaka University has created single-molecule nanowires, complete with an insulation layer, up to 10 nanometers in length. [40] Using optical and electrical measurements, a two-dimensional anisotropic crystal of rhenium disulfide was found to show opposite piezoresistive effects along two principal axes, i.e. positive along one axis and negative along the other. [39] A team of researchers from the University of Konstanz has demonstrated a new aqueous polymerization procedure for generating polymer nanoparticles with a single chain and uniform shape, which, by contrast to previous methods, involves high particle concentrations. [38] A team of researchers from China, the U.S. and Japan has developed a way to strengthen graphene-based membranes intended for use in desalination projects by fortifying them with nanotubes. [37] The team arrived at their results by imaging gold nanoparticles, with diameters ranging from 2 to 5 nanometres, via an aberration-corrected scanning transmission electron microscope. [36] Nanoparticles of less than 100 nanometres in size are used to engineer new materials and nanotechnologies across a variety of sectors.
[35] For years, researchers have been trying to find ways to grow an optimal nanowire, using crystals with perfectly aligned layers all along the wire. [34] Ferroelectric materials have a spontaneous dipole moment which can point up or down. [33] Researchers have successfully demonstrated that hypothetical particles that were proposed by Franz Preisach in 1935 actually exist. [32] Scientists from the Department of Energy's SLAC National Accelerator Laboratory and the Massachusetts Institute of Technology have demonstrated a surprisingly simple way of flipping a material from one state into another, and then back again, with single flashes of laser light. [31] Crystals with Ultrahigh Piezoelectricity Now, an international team of researchers say that cycles of AC fields also make the internal crystal domains in some materials bigger and the crystal transparent. [33] The presence of helical modes allowed them to form a new quantum device from a topological crystalline insulator known as a helical nanorod with quantized longitudinal conductance. [32] Now, researchers at MIT along with colleagues in Boston, Singapore, and Taiwan have conducted a theoretical analysis to reveal several more previously unidentified topological properties of bismuth. [31] At the heart of his field of nonlinear optics are devices that change light from one color to another-a process important for many technologies within telecommunications, computing and laser-based equipment and science. [30] Researchers from Siberian Federal University and Kirensky Institute of Physics have proposed a new design for a multimode stripline resonator. [29] In addition to helping resolve many of the technical challenges of non-line-of-sight imaging, the technology, Velten notes, can be made to be both inexpensive and compact, meaning real-world applications are just a matter of time. [28] Researchers in the Department of Physics of ETH Zurich have measured how electrons in so-called transition metals get redistributed within a fraction of an optical oscillation cycle. [27] Insights from quantum physics have allowed engineers to incorporate components used in circuit boards, optical fibers, and control systems in new applications ranging from smartphones to advanced microprocessors. [26] In a paper published August 1, 2019 as an Editors' Suggestion in the journal Physical Review Letters, scientists at JQI and Michigan State University suggest that certain materials may experience a spontaneous twisting force if they are hotter than their surroundings. [25] The technology could allow for new capabilities in quantum computing, including modems that link together many quantum computers at different locations. [24] Self-Organized Quantum Criticality Writing in Nature, researchers describe the first-time observation of 'self-organized criticality' in a controlled laboratory experiment. [32] A potential revolution in device engineering could be underway, thanks to the discovery of functional electronic interfaces in quantum materials that can self-assemble spontaneously. [31] Now, for the first time ever, researchers from Aalto University, Brazilian Center for Research in Physics (CBPF), Technical University of Braunschweig and Nagoya University have produced the superconductor-like quantum spin liquid predicted by Anderson. 
[30] Electrons in graphene, an atomically thin, flexible and incredibly strong substance that has captured the imagination of materials scientists and physicists alike, move at the speed of light and behave as if they have no mass. [29] In a series of exciting experiments, Cambridge researchers experienced weightlessness while testing graphene's applications in space. [28] Scientists from ITMO University have developed effective nanoscale light sources based on halide perovskite. [27] Physicists have developed a technique based on optical microscopy that can be used to create images of atoms on the nanoscale. [26] Researchers have designed a new type of laser called a quantum dot ring laser that emits red, orange, and green light. [25] The world of nanosensors may be physically small, but the demand is large and growing, with little sign of slowing. [24] In a joint research project, scientists from the Max Born Institute for Nonlinear Optics and Short Pulse Spectroscopy (MBI), the Technische Universität Berlin (TU) and the University of Rostock have managed for the first time to image free nanoparticles in a laboratory experiment using a high-intensity laser source. [23] For the first time, researchers have built a nanolaser that uses only a single molecular layer, placed on a thin silicon beam, which operates at room temperature. [22] A team of engineers at Caltech has discovered how to use computer-chip manufacturing technologies to create the kind of reflective materials that make safety vests, running shoes, and road signs appear shiny in the dark. [21] Laws and Loves Authors: Mesut KAVAK The Physics of Subatomic Particles and their Behavior Modeled with Classical Laws Authors: Jeff Yee, Lori Gardi Comments: 19 pages Using the physics of sound waves as a foundation, subatomic particles and their behaviors are modeled with classical mechanics to calculate the Planck energy, the electron's energy and the energy levels of the first two atoms: hydrogen and helium. Five different methods are used to calculate the energies, including spring-mass systems and wave systems, and all five are found to agree in their calculations. The Nonlinear Electromagnetic Spectrum Frequency Scale Authors: Frank H. Makinson Electromagnetic geophysical studies have been using very low electromagnetic frequencies, below 1 Hz, with dedicated transmitters and receivers for several decades. The devices that produce these frequencies are referred to as magnetotelluric transmitters and receivers. The frequencies of these devices are identified using a decimal notation: 0.1, 0.01, 0.001, 0.0001, 0.00001 Hz and below. Reading these frequency designations as successive divisions of one by ten reveals that the current EM frequency scale is nonlinear. All contemporary physical-law equations that contain frequency as a value, directly or indirectly, are based upon the assumption that the EM frequency scale is linear. On Some New Notions in Nano Ideal Topological Spaces Authors: M. Parimala, Saeid Jafari The purpose of this paper is to introduce the notion of nano ideal topological spaces and to investigate the relation between nano topological spaces and nano ideal topological spaces. Moreover, we offer some new open and closed sets in the context of nano ideal topological spaces and present some of their basic properties and characterizations.
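For readers meeting nano topological spaces for the first time, the construction that the preceding abstract extends starts from rough-set approximations: given a partition of a universe U induced by an equivalence relation R and a subset X, the nano topology is {U, empty set, L_R(X), U_R(X), B_R(X)}, built from the lower approximation, upper approximation and boundary region of X. The sketch below is our own illustration with a made-up universe and partition; the ideal-based spaces studied in the paper add further open sets on top of this.

```python
# Illustrative sketch of the standard nano-topology construction that nano *ideal*
# topological spaces build on. The universe U, the equivalence classes and the subset X
# below are arbitrary examples, not data from the paper.

def nano_topology(universe, classes, X):
    """Lower/upper approximations, boundary, and the nano topology of X w.r.t. a partition."""
    X = set(X)
    lower = set().union(*[c for c in classes if c <= X])   # classes fully inside X
    upper = set().union(*[c for c in classes if c & X])    # classes meeting X
    boundary = upper - lower
    tau = [set(universe), set(), lower, upper, boundary]   # {U, 0, L_R(X), U_R(X), B_R(X)}
    distinct = []
    for s in tau:                                          # drop coincident members
        if s not in distinct:
            distinct.append(s)
    return lower, upper, boundary, distinct

if __name__ == "__main__":
    U = {"a", "b", "c", "d"}
    classes = [{"a"}, {"b", "c"}, {"d"}]   # equivalence classes of some relation R on U
    X = {"a", "b"}                          # the subset whose nano topology we build
    lower, upper, boundary, tau = nano_topology(U, classes, X)
    print("lower:", lower)        # {'a'}
    print("upper:", upper)        # {'a', 'b', 'c'}
    print("boundary:", boundary)  # {'b', 'c'}
    print("tau_R(X):", tau)
```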
A New Scientific Discovery Authors: Yahya A. Sharif Our muscles lift our bodies very effortlessly, so I can lift up my massive 60 kg body with only my weak calf muscles when trying to pick a fruit from a tree. Twin Primes Conjecture The twin prime conjecture, also known as Polignac's conjecture, is the assertion in number theory that there are infinitely many twin primes, that is, pairs of primes that differ by 2. The first statement of the twin prime conjecture was given in 1846 by the French mathematician Alphonse de Polignac, who wrote that any even number can be expressed in infinitely many ways as the difference between two consecutive primes.
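As a concrete companion to the statement above, the short sketch below simply lists the twin prime pairs under a given bound by trial division; it only illustrates the definition and, of course, neither proves nor tests the conjecture itself.

```python
# Enumerate twin prime pairs (p, p + 2) below a bound. Purely illustrative of the
# definition in the abstract above; it says nothing about whether infinitely many exist.

def is_prime(n: int) -> bool:
    """Trial-division primality test, adequate for small bounds."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

def twin_primes(limit: int):
    """Yield all pairs (p, p + 2) of primes with p + 2 < limit."""
    for p in range(2, limit - 2):
        if is_prime(p) and is_prime(p + 2):
            yield (p, p + 2)

if __name__ == "__main__":
    print(list(twin_primes(100)))
    # [(3, 5), (5, 7), (11, 13), (17, 19), (29, 31), (41, 43), (59, 61), (71, 73)]
```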
[15] Scientists have detected a mysterious X-ray signal that could be caused by dark matter streaming out of our Sun's core. Hidden photons are predicted in some extensions of the Standard Model of particle physics, and unlike WIMPs they would interact electromagnetically with normal matter. In particle physics and astrophysics, weakly interacting massive particles, or WIMPs, are among the leading hypothetical particle physics candidates for dark matter. The gravitational force attracts the matter, causing concentration of the matter in a small space and leaving much space with low matter concentration: dark matter and energy. There is an asymmetry between the masses of the electric charges, for example the proton and the electron, which can be understood by the asymmetrical Planck Distribution Law. This temperature-dependent energy distribution is asymmetric around the maximum intensity, where the annihilation of matter and antimatter is a high-probability event. The asymmetric sides create different frequencies of electromagnetic radiation at the same intensity level, compensating each other. One of these compensating ratios is the electron-proton mass ratio. The lower-energy side has no compensating intensity level; it is the dark energy, and the corresponding matter is the dark matter. Quantum Dots Spinning These emerging magnetic properties suggest that the dots could, indeed, have potential in quantum computing as storage and processing devices. [29] Researchers successfully integrated the systems: donor atoms and quantum dots. [28] A team of researchers including U of A engineering and physics faculty has developed a new method of detecting single photons, or light particles, using quantum dots. [27] Recent research from Kumamoto University in Japan has revealed that polyoxometalates (POMs), typically used for catalysis, electrochemistry, and photochemistry, may also be used in a technique for analyzing quantum dot (QD) photoluminescence (PL) emission mechanisms. [26] Researchers have designed a new type of laser called a quantum dot ring laser that emits red, orange, and green light. [25] The world of nanosensors may be physically small, but the demand is large and growing, with little sign of slowing. [24] In a joint research project, scientists from the Max Born Institute for Nonlinear Optics and Short Pulse Spectroscopy (MBI), the Technische Universität Berlin (TU) and the University of Rostock have managed for the first time to image free nanoparticles in a laboratory experiment using a high-intensity laser source. [23] For the first time, researchers have built a nanolaser that uses only a single molecular layer, placed on a thin silicon beam, which operates at room temperature. [22] A team of engineers at Caltech has discovered how to use computer-chip manufacturing technologies to create the kind of reflective materials that make safety vests, running shoes, and road signs appear shiny in the dark. [21] In the September 23 issue of Physical Review Letters, Prof. Julien Laurat and his team at Pierre and Marie Curie University in Paris (Laboratoire Kastler Brossel-LKB) report that they have realized an efficient mirror consisting of only 2000 atoms. [20] Physicists at MIT have now cooled a gas of potassium atoms to several nanokelvins, just a hair above absolute zero, and trapped the atoms within a two-dimensional sheet of an optical lattice created by crisscrossing lasers.
Using a high-resolution microscope, the researchers took images of the cooled atoms residing in the lattice. [19] Researchers have created quantum states of light whose noise level has been "squeezed" to a record low. [18] Semiconductor Neutron Detector Researchers at Northwestern University and Argonne National Laboratory have developed a new material that opens doors for a new class of neutron detectors. [33] Transistors are tiny switches that form the bedrock of modern computing; billions of them route electrical signals around inside a smartphone, for instance. Quantum computers will need analogous hardware to manipulate quantum information. [32] "The realization of such all-optical single-photon devices will be a large step towards deterministic multi-mode entanglement generation as well as high-fidelity photonic quantum gates that are crucial for all-optical quantum information processing," says Tanji-Suzuki. [31] Researchers at ETH have now used attosecond laser pulses to measure the time evolution of this effect in molecules. [30] A new benchmark quantum chemical calculation of C2, Si2, and their hydrides reveals a qualitative difference in the topologies of core electron orbitals of organic molecules and their silicon analogues. [29] A University of Central Florida team has designed a nanostructured optical sensor that for the first time can efficiently detect molecular chirality-a property of molecular spatial twist that defines its biochemical properties. [28] UCLA scientists and engineers have developed a new process for assembling semiconductor devices. [27] A new experiment that tests the limit of how large an object can be before it ceases to behave quantum mechanically has been proposed by physicists in the UK and India. [26] Phonons are discrete units of vibrational energy predicted by quantum mechanics that correspond to collective oscillations of atoms inside a molecule or a crystal. [25] This achievement is considered as an important landmark for the realization of practical application of photon upconversion technology. [24] Considerable interest in new single-photon detector technologies has been scaling in this past decade. [23] Diabolical Quantum Emitters Diabolical points (DPs) introduce ways to study topological phase and peculiar energy dispersion. Scientists in China and partners from the United Kingdom demonstrated DPs in strongly coupled active microdisks. [30] These emerging magnetic properties suggest that the dots could, indeed, have potential in quantum computing as storage and processing devices. [29] Researchers successfully integrated the systems-donor atoms and quantum dots. [28] A team of researchers including U of A engineering and physics faculty has developed a new method of detecting single photons, or light particles, using quantum dots. [27] Recent research from Kumamoto University in Japan has revealed that polyoxometalates (POMs), typically used for catalysis, electrochemistry, and photochemistry, may also be used in a technique for analyzing quantum dot (QD) photoluminescence (PL) emission mechanisms. [26] Researchers have designed a new type of laser called a quantum dot ring laser that emits red, orange, and green light. [25] The world of nanosensors may be physically small, but the demand is large and growing, with little sign of slowing. 
[24] In a joint research project, scientists from the Max Born Institute for Nonlinear Optics and Short Pulse Spectroscopy (MBI), the Technische Universität Berlin (TU) and the University of Rostock have managed for the first time to image free nanoparticles in a laboratory experiment using a high-intensity laser source. [23] For the first time, researchers have built a nanolaser that uses only a single molecular layer, placed on a thin silicon beam, which operates at room temperature. [22] A team of engineers at Caltech has discovered how to use computer-chip manufacturing technologies to create the kind of reflective materials that make safety vests, running shoes, and road signs appear shiny in the dark. [21] In the September 23 issue of Physical Review Letters, Prof. Julien Laurat and his team at Pierre and Marie Curie University in Paris (Laboratoire Kastler Brossel-LKB) report that they have realized an efficient mirror consisting of only 2000 atoms. [20] Physicists at MIT have now cooled a gas of potassium atoms to several nanokelvins, just a hair above absolute zero, and trapped the atoms within a two-dimensional sheet of an optical lattice created by crisscrossing lasers. Using a high-resolution microscope, the researchers took images of the cooled atoms residing in the lattice. [19] Topological Crystalline Insulators The presence of helical modes allowed them to form a new quantum device from a topological crystalline insulator known as a helical nanorod with quantized longitudinal conductance. [32] Now, researchers at MIT along with colleagues in Boston, Singapore, and Taiwan have conducted a theoretical analysis to reveal several more previously unidentified topological properties of bismuth. [31] At the heart of his field of nonlinear optics are devices that change light from one color to another, a process important for many technologies within telecommunications, computing and laser-based equipment and science. [30] Researchers from Siberian Federal University and Kirensky Institute of Physics have proposed a new design for a multimode stripline resonator. [29] In addition to helping resolve many of the technical challenges of non-line-of-sight imaging, the technology, Velten notes, can be made to be both inexpensive and compact, meaning real-world applications are just a matter of time. [28] Researchers in the Department of Physics of ETH Zurich have measured how electrons in so-called transition metals get redistributed within a fraction of an optical oscillation cycle. [27] Insights from quantum physics have allowed engineers to incorporate components used in circuit boards, optical fibers, and control systems in new applications ranging from smartphones to advanced microprocessors. [26] In a paper published August 1, 2019 as an Editors' Suggestion in the journal Physical Review Letters, scientists at JQI and Michigan State University suggest that certain materials may experience a spontaneous twisting force if they are hotter than their surroundings. [25] The technology could allow for new capabilities in quantum computing, including modems that link together many quantum computers at different locations. [24] A University of Oklahoma physicist, Alberto M. Marino, is developing quantum-enhanced sensors that could find their way into applications ranging from biomedical to chemical detection. [23] Proof that there Are no Odd Perfect Numbers Authors: Kouji Takaki We have obtained the conclusion that there are no odd perfect numbers.
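As a point of reference for the claim just quoted (and only as an illustration of the definition involved, not of the paper's argument): a positive integer $n$ is called perfect when $\sigma(n)=2n$, where $\sigma(n)$ is the sum of all positive divisors of $n$, and no odd perfect number has ever been found. A minimal Python sketch that checks this for small odd numbers:

```python
def sigma(n):
    """Sum of all positive divisors of n, by trial division up to sqrt(n)."""
    total = 0
    d = 1
    while d * d <= n:
        if n % d == 0:
            total += d          # add the small divisor d
            if d != n // d:
                total += n // d  # add the paired divisor n/d
        d += 1
    return total

# Look for odd perfect numbers below 100000; the printed list is empty,
# since the perfect numbers in that range (6, 28, 496, 8128) are all even.
print([n for n in range(3, 100_000, 2) if sigma(n) == 2 * n])
```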
Representing Basic Physical Fields by Quaternionic Fields Authors: J.A.J. van Leunen Comments: 31 Pages. This is part of the Hilbert Book Model Project. Basic physical fields are dynamic fields like our universe and the fields that are raised by electric charges. These fields are dynamic continuums. Most physical theories treat these fields by applying gravitational theories or by Maxwell's equations. Mathematically these fields can be represented by quaternionic fields. Dedicated normal operators in quaternionic non-separable Hilbert spaces can represent these quaternionic fields in their continuum eigenspaces. Quaternionic functions can describe these fields. Quaternionic differential and integral calculus can describe the behavior of these fields and the interaction of these fields with countable sets of quaternions. All quaternionic fields obey the same quaternionic function theory. The basic fields differ in their start and boundary conditions. AlphaZero Rules the Quantum World The chess world was amazed when the computer algorithm AlphaZero learned, after just four hours on its own, to beat the best chess programs built on human expertise. Now a research group at Aarhus University in Denmark has used the very same algorithm to control a quantum computer. [39] Researchers have discovered that input-output maps, which are widely used throughout science and engineering to model systems ranging from physics to finance, are strongly biased toward producing simple outputs. [38] A QEG team has provided unprecedented visibility into the spread of information in large quantum mechanical systems, via a novel measurement methodology and metric described in a new article in Physical Review Letters. [37] Researchers from Würzburg and London have succeeded in controlling the coupling of light and matter at room temperature. [36] Researchers have, for the first time, integrated two technologies widely used in applications such as optical communications, bio-imaging and Light Detection and Ranging (LIDAR) systems that scan the surroundings of self-driving cars and trucks. [35] The unique platform, which is referred to as a 4-D microscope, combines the sensitivity and high time-resolution of phase imaging with the specificity and high spatial resolution of fluorescence microscopy. [34] The experiment relied on a soliton frequency comb generated in a chip-based optical microresonator made from silicon nitride. [33] This scientific achievement toward more precise control and monitoring of light is highly interesting for miniaturizing optical devices for sensing and signal processing. [32] It may seem like such optical behavior would require bending the rules of physics, but in fact, scientists at MIT, Harvard University, and elsewhere have now demonstrated that photons can indeed be made to interact, an accomplishment that could open a path toward using photons in quantum computing, if not in light sabers. [31] Optical highways for light are at the heart of modern communications. But when it comes to guiding individual blips of light called photons, reliable transit is far less common. [30] Theoretical physicists propose to use negative interference to control heat flow in quantum devices. [29] Verifying Quantum Computers Output A team of researchers from the University of Innsbruck and the Austrian Academy of Sciences has developed a way to verify the output from one quantum computer by comparing it to the output of another quantum computer.
[57] A new test to check if a quantum computer is giving correct answers to questions beyond the scope of traditional computing could help the first quantum computer that can outperform a classical computer to be realized. [56] In quantum computing, as in team building, a little diversity can help get the job done better, computer scientists have discovered. [55] Significant technical and financial issues remain towards building a large, fault-tolerant quantum computer and one is unlikely to be built within the coming decade. [54] Chemists at Friedrich Schiller University in Jena (Germany) have now synthesised a molecule that can perform the function of a computing unit in a quantum computer. [53] The research team developed the first optical microchip to generate, manipulate and detect a particular state of light called squeezed vacuum, which is essential for quantum computation. [52] Australian scientists have investigated new directions to scale up qubits, utilising the spin-orbit coupling of atom qubits, adding a new suite of tools to the armory. [51] A team of international researchers led by engineers from the National University of Singapore (NUS) have invented a new magnetic device to manipulate digital information 20 times more efficiently and with 10 times more stability than commercial spintronic digital memories. [50] Working in the lab of Mikhail Lukin, the George Vasmer Leverett Professor of Physics and co-director of the Quantum Science and Engineering Initiative, Evans is lead author of a study, described in the journal Science, that demonstrates a method for engineering an interaction between two qubits using photons. [49] Researchers with the Department of Energy's Oak Ridge National Laboratory have demonstrated a new level of control over photons encoded with quantum information. [48] Category: Digital Signal Processing Coder-Decoder for Open Science Data Authors: Domenico Oricchio I thought of a method to preserve our scientific and cultural knowledge for future generations. Crack in Universal Physics The concept of universal physics is intriguing, as it enables researchers to relate physical phenomena in a variety of systems, irrespective of their varying characteristics and complexities. [50] Stephen Wilson, a professor of materials in UC Santa Barbara's College of Engineering, works in that "long before" realm, seeking to create new materials that exhibit desirable new states. [49] A phenomenon that is well known from chaos theory was observed in a material for the first time ever, by scientists from the University of Groningen, the Netherlands. [48] Plasmonic nanostructures have been widely used for enhancing light-matter interactions due to the strong local field enhancement in deep subwavelength volumes. [47] Post-Moore Brain-Inspired Computing In their paper published this week in Applied Physics Reviews, authors Jack Kendall, of Rain Neuromorphics, and Suhas Kumar, of Hewlett Packard Labs, present a thorough examination of the computing landscape, focusing on the operational functions needed to advance brain-inspired neuromorphic computing. [22] A computer built to mimic the brain's neural networks produces similar results to that of the best brain-simulation supercomputer software currently used for neural-signaling research, finds a new study published in the open-access journal Frontiers in Neuroscience.
[21] Galactic Birthplace High-Energy Particles Nine sources of extremely high-energy gamma rays comprise a new catalog compiled by researchers with the High-Altitude Water Cherenkov (HAWC) Gamma-Ray Observatory. [33] For almost 10 years, astronomers have been studying a mysterious diffuse radiation coming from the centre of our galaxy. [32] Deep beneath a mountain in the Apennine range in Italy, an intricate apparatus searches for the dark matter of the universe. [31] A Multiverse, where our Universe is only one of many, might not be as inhospitable to life as previously thought, according to new research. [30] On Various Ramanujan Formulas Applied to Some Sectors of String Theory and Particle Physics: Further New Possible Mathematical Connections. Preopen Sets in Ideal Generalized Topological Spaces Authors: M. Caldas, M. Ganster, Saeid Jafari, T. Noiri, V. Popa, N. Rajesh The aim of this paper is to introduce and characterize the concepts of preopen sets and their related notions in ideal generalized topological spaces. On I-Open Sets and I-Continuous Functions in Ideal Bitopological Spaces Authors: M. Caldas, Saeid Jafari, N. Rajesh, F. Smarandache The aim of this paper is to introduce and characterize the concepts of I-open sets and their related notions in ideal bitopological spaces. Separation Axioms in Ideal Bitopological Spaces Authors: M. Caldas, Saeid Jafari, V. Popa, N. Rajesh The purpose of this paper is to introduce and study the notions $I$-$R_0$, $I$-$R_1$, $I$-$T_0$, $I$-$T_1$ and $I$-$T_2$ in ideal bitopological spaces. Properties of $\alpha$-Open Sets in Ideal Minimal Spaces Authors: M. Caldas, M. Ganster, Saeid Jafari, T. Noiri, N. Rajesh The purpose of this paper is to introduce and characterize the concept of $\alpha$-open sets and several related notions in ideal minimal spaces. Properties of $\beta$-Open Sets in Ideal Minimal Spaces Authors: Saeid Jafari, T. Noiri, N. Rajesh, R. Saranya In this paper, we introduce and study the class of $\beta$-open sets and other related classes of notions in ideal minimal spaces. On Upper and Lower Slightly $\delta$-$\beta$-Continuous Multifunctions Authors: Saeid Jafari, N. Rajesh In this paper, we introduce and study upper and lower slightly $\delta$-$\beta$-continuous multifunctions in topological spaces and obtain some characterizations of these new continuous multifunctions. Properties of Ideal Bitopological $\alpha$-Open Sets Authors: A. I. El-Maghrabi, M. Caldas, Saeid Jafari, R. M. Latif, A. Nasef, N. Rajesh, S. Shanthi The aim of this paper is to introduce and characterize the concepts of $\alpha$-open sets and their related notions in ideal bitopological spaces. On New Separation Axioms in Bitopological Spaces Authors: N. Rajesh, E. Ekici, Saeid Jafari The purpose of this paper is to introduce the notions $\ddot{g}$-$R_0$, $\ddot{g}$-$R_1$, $\ddot{g}$-$T_0$, $\ddot{g}$-$T_1$ and $\ddot{g}$-$T_2$ in bitopological spaces. On qI-Open Sets in Ideal Bitopological Spaces In this paper, we introduce and study the concept of qI-open sets. Based on this new concept, we define new classes of functions, namely qI-continuous functions, qI-open functions and qI-closed functions, for which we prove characterization theorems. Semiopen Sets in Ideal Bitopological Spaces Authors: M. Caldas, Saeid Jafari, N. Rajesh The aim of this paper is to introduce and characterize the concepts of semiopen sets and their related notions in ideal bitopological spaces. Some Remarks on Low Separation Axioms Via id-Sets Authors: Saeid Jafari, S. Shanthi, N.
Rajesh The purpose of this paper is to introduce some new classes of ideal topological spaces by utilizing I-open sets and study some of their fundamental properties. Some Fundamental Properties of $\beta$-Open Sets in Ideal Bitopological Spaces In this paper we introduce and characterize the concepts of $\beta$-open sets and their related notions in ideal bitopological spaces. Semiopen Sets in Ideal Minimal Spaces Authors: Saeid Jafari, N. Rajesh, R. Saranya In this paper, we present and study the concepts of semiopen sets and their related notions in ideal minimal spaces. Challenges of Gbagyi Unity In Nigeria Authors: ssor M.B Nuhu "Gbagyi Unity" is the oxymoron of the Gbagyi nation. The subject of this discourse is reasonably a prickly one. For this, I initially became nervous, to a state of incomprehension, as I pondered over the likely motivation of the conveners of this meeting. During my consideration of this highly engaging intention, I skated into a quicksand. Even though it was quite easy for me to read the orifices of the conveners, as I can attest to their patriotic experiences on issues that pertain to Gbagyi, I could not afford being unmindful of the conflicting comebacks such a discourse can provoke in the audience. Recent Advances in Meth8/VŁ4, a Modal Model Checker for Universal Logic We evaluate 738 artifacts in 3860 assertions to confirm 557 as tautology and 3303 as not (85.6%) in 1258 draft pages. On I-Open Sets and I-Continuous Functions in Ideal Minimal Spaces Authors: Saeid Jafari, T. Noiri, N. Rajesh The aim of this paper is to introduce and characterize the concepts of I-open sets and their related notions in ideal minimal spaces. Refutation of Probabilistic Reasoning Across the Causal Hierarchy We evaluate equations for axioms, propositions, and theorems as not tautologous. The last, as P([X∧Z]Y|[X∧Z]W)≡P([X]Y|[X]W), is the briefest refutation of the conjecture of probabilistic reasoning across the causal hierarchy. These results form a non tautologous fragment of the universal logic VŁ4. Rigorous Proofs for Riemann Hypothesis, Polignac's and Twin Prime Conjectures in 2020 Authors: John Yuk Ching Ting Comments: 50 Pages. Contains Rigorous Proofs for Riemann Hypothesis (and Explanations for two types of Gram points), Polignac's and Twin Prime Conjectures. Mathematics for Incompletely Predictable Problems is associated with Incompletely Predictable problems containing Incompletely Predictable entities. Nontrivial zeros and two types of Gram points in the Riemann zeta function (or its proxy Dirichlet eta function), together with prime and composite numbers from the Sieve of Eratosthenes, are Incompletely Predictable entities. Correct and complete mathematical arguments for the first key step of converting this function into its continuous format version, and for the second key step of applying Information-Complexity conservation to this Sieve, result in direct spin-offs: the first key step consists of proving the Riemann hypothesis (and explaining the two types of Gram points), and the second key step consists of proving Polignac's and the Twin prime conjectures. Nanoparticles Enter Tumors Through Cells Researchers from U of T Engineering have discovered that an active, rather than passive, process dictates which nanoparticles enter solid tumors. [23] Researchers at Oregon State University have developed an improved technique for using magnetic nanoclusters to kill hard-to-reach tumors.
[22] MIT researchers have now come up with a novel way to prevent fibrosis from occurring, by incorporating a crystallized immunosuppressant drug into devices. [21] In a surprising marriage of science and art, researchers at MIT have developed a system for converting the molecular structures of proteins, the basic building blocks of all living beings, into audible sound that resembles musical passages. [20] Inspired by ideas from the physics of phase transitions and polymer physics, researchers in the Divisions of Physical and Biological Sciences at UC San Diego set out specifically to determine the organization of DNA inside the nucleus of a living cell. [19] Scientists from the National Institute of Standards and Technology (NIST) and the University of Maryland are using neutrons at Oak Ridge National Laboratory (ORNL) to capture new information about DNA and RNA molecules and enable more accurate computer simulations of how they interact with everything from proteins to viruses. [18] The DNA molecules are chiral, which means they can exist in two forms which are mirror images, like a left and right hand. The phenomenon was dubbed "chiral induced spin selectivity" (CISS), and over the last few years, several experiments were published allegedly showing this CISS effect, even in electronic devices. [17] Chemist Ivan Huc finds the inspiration for his work in the molecular principles that underlie biological systems. [16] What makes particles self-assemble into complex biological structures? [15] Scientists from Moscow State University (MSU) working with an international team of researchers have identified the structure of one of the key regions of telomerase, a so-called "cellular immortality" ribonucleoprotein. [14] Einstein Mass-Energy Equivalence Equation $E=mc^2$ is Wrong Because It Does Not Contain Dark Matter Authors: Adrian Ferent Comments: 486 Pages. © 2014 Adrian Ferent Einstein's mass-energy equivalence equation $E=mc^2$ is wrong because it does not contain Dark Matter. Einstein in 1905 did not formulate exactly the equation $E=mc^2$, but he said: 'if a body gives off the energy L in the form of radiation, its mass diminishes by $L/c^2$'. This means that for Einstein the inertial mass of an object changes if the object absorbs or emits energy. "We do not see Dark Matter energy because at light speed the Dark Matter electron energy is not released." "The Ferent factor is the Lorentz factor where the speed of the photon is replaced by the Dark photon speed." "Ferent's Dark Matter mass-energy equivalence equation: $E = m_d \times v_p^2$." "The electron energy is the sum of the photon energy and the Dark Matter electron energy." "The particle energy $E$ is the sum of Matter energy and Dark Matter energy: $E = E_m + E_{dm}$." "Ferent's mass-energy equivalence equation: $E = mc^2 + m_d \times v_p^2$." "We do not see Dark Matter energy because at light speed the Dark Matter energy is not released." Democratize Nanopore Research Now a team of researchers at the University of Ottawa is democratizing entry into the field of nanopore research by offering up a unique tool to accelerate the development of new applications and discoveries. [24] Researchers from U of T Engineering have discovered that an active, rather than passive, process dictates which nanoparticles enter solid tumors. [23] Researchers at Oregon State University have developed an improved technique for using magnetic nanoclusters to kill hard-to-reach tumors.
[22] MIT researchers have now come up with a novel way to prevent fibrosis from occurring, by incorporating a crystallized immunosuppressant drug into devices. [21] In a surprising marriage of science and art, researchers at MIT have developed a system for converting the molecular structures of proteins, the basic building blocks of all living beings, into audible sound that resembles musical passages. [20] Inspired by ideas from the physics of phase transitions and polymer physics, researchers in the Divisions of Physical and Biological Sciences at UC San Diego set out specifically to determine the organization of DNA inside the nucleus of a living cell. [19] Scientists from the National Institute of Standards and Technology (NIST) and the University of Maryland are using neutrons at Oak Ridge National Laboratory (ORNL) to capture new information about DNA and RNA molecules and enable more accurate computer simulations of how they interact with everything from proteins to viruses. [18] The DNA molecules are chiral, which means they can exist in two forms which are mirror images, like a left and right hand. The phenomenon was dubbed "chiral induced spin selectivity" (CISS), and over the last few years, several experiments were published allegedly showing this CISS effect, even in electronic devices. [17] Chemist Ivan Huc finds the inspiration for his work in the molecular principles that underlie biological systems. [16] What makes particles self-assemble into complex biological structures? [15] Sensitive Torque Measuring Device A team of physicists at Purdue University has built the most sensitive torque measuring device ever. In their paper published in the journal Nature Nanotechnology, the team describes their new device and outline how it might be used. [25] Now a team of researchers at the University of Ottawa is democratizing entry into the field of nanopore research by offering up a unique tool to accelerate the development of new applications and discoveries. [24] Researchers from U of T Engineering have discovered that an active, rather than passive, process dictates which nanoparticles enter solid tumors. [23] Researchers at Oregon State University have developed an improved technique for using magnetic nanoclusters to kill hard-to-reach tumors. [22] MIT researchers have now come up with a novel way to prevent fibrosis from occurring, by incorporating a crystallized immunosuppressant drug into devices. [21] In a surprising marriage of science and art, researchers at MIT have developed a system for converting the molecular structures of proteins, the basic building blocks of all living beings, into audible sound that resembles musical passages. [20] Inspired by ideas from the physics of phase transitions and polymer physics, researchers in the Divisions of Physical and Biological Sciences at UC San Diego set out specifically to determine the organization of DNA inside the nucleus of a living cell. [19] Scientists from the National Institute of Standards and Technology (NIST) and the University of Maryland are using neutrons at Oak Ridge National Laboratory (ORNL) to capture new information about DNA and RNA molecules and enable more accurate computer simulations of how they interact with everything from proteins to viruses. [18] The DNA molecules are chiral, which means they can exist in two forms which are mirror images, like a left and right hand. 
The phenomenon was dubbed "chiral induced spin selectivity" (CISS), and over the last few years, several experiments were published allegedly showing this CISS effect, even in electronic devices. [17] Chemist Ivan Huc finds the inspiration for his work in the molecular principles that underlie biological systems. [16] Treatments for Obesity and Diabetes Engineered ingestible molecular traps created from mesoporous silica particles (MSPs) introduced to the gut can have an effect on food efficiency and metabolic risk factors. [24] Researchers from U of T Engineering have discovered that an active, rather than passive, process dictates which nanoparticles enter solid tumors. [23] Researchers at Oregon State University have developed an improved technique for using magnetic nanoclusters to kill hard-to-reach tumors. [22] MIT researchers have now come up with a novel way to prevent fibrosis from occurring, by incorporating a crystallized immunosuppressant drug into devices. [21] In a surprising marriage of science and art, researchers at MIT have developed a system for converting the molecular structures of proteins, the basic building blocks of all living beings, into audible sound that resembles musical passages. [20] Inspired by ideas from the physics of phase transitions and polymer physics, researchers in the Divisions of Physical and Biological Sciences at UC San Diego set out specifically to determine the organization of DNA inside the nucleus of a living cell. [19] Scientists from the National Institute of Standards and Technology (NIST) and the University of Maryland are using neutrons at Oak Ridge National Laboratory (ORNL) to capture new information about DNA and RNA molecules and enable more accurate computer simulations of how they interact with everything from proteins to viruses. [18] The DNA molecules are chiral, which means they can exist in two forms which are mirror images, like a left and right hand. The phenomenon was dubbed "chiral induced spin selectivity" (CISS), and over the last few years, several experiments were published allegedly showing this CISS effect, even in electronic devices. [17] Chemist Ivan Huc finds the inspiration for his work in the molecular principles that underlie biological systems. [16] What makes particles self-assemble into complex biological structures? [15] Ordered Nanostructures in 3-D Scientists have developed a platform for assembling nanosized material components, or "nano-objects," of very different types—inorganic or organic—into desired 3-D structures. [26] A team of physicists at Purdue University has built the most sensitive torque measuring device ever. In their paper published in the journal Nature Nanotechnology, the team describes their new device and outline how it might be used. [25] Now a team of researchers at the University of Ottawa is democratizing entry into the field of nanopore research by offering up a unique tool to accelerate the development of new applications and discoveries. [24] Researchers from U of T Engineering have discovered that an active, rather than passive, process dictates which nanoparticles enter solid tumors. [23] Researchers at Oregon State University have developed an improved technique for using magnetic nanoclusters to kill hard-to-reach tumors. [22] MIT researchers have now come up with a novel way to prevent fibrosis from occurring, by incorporating a crystallized immunosuppressant drug into devices. 
[21] In a surprising marriage of science and art, researchers at MIT have developed a system for converting the molecular structures of proteins, the basic building blocks of all living beings, into audible sound that resembles musical passages. [20] Inspired by ideas from the physics of phase transitions and polymer physics, researchers in the Divisions of Physical and Biological Sciences at UC San Diego set out specifically to determine the organization of DNA inside the nucleus of a living cell. [19] Scientists from the National Institute of Standards and Technology (NIST) and the University of Maryland are using neutrons at Oak Ridge National Laboratory (ORNL) to capture new information about DNA and RNA molecules and enable more accurate computer simulations of how they interact with everything from proteins to viruses. [18] Refutation of the Godlike Computer: Benzmüller's Nightmare Comments: 1 Page. © Copyright 2020 by Colin James III All rights reserved. Disqus comments are ignored, so don't bother. See ersatz-systems.com We evaluate the claimed definition of "positive properties" for God, which is not tautologous and refutes the conjecture that a supreme being necessarily exists by computer proof. This forms a non tautologous fragment of the universal logic VŁ4. Proof of Goldbach's Conjecture Every even integer > 2 is the sum of two prime numbers and, equivalently, each odd integer > 5 is the sum of three prime numbers, using the Sieve of Eratosthenes. A Toroidal or Disk-Like Zitterbewegung Electron? We present Oliver Consa's classical calculations of the anomalous magnetic moment of an electron, pointing out some of what we perceive to be weaker arguments, and adding comments and questions with a view to possibly arriving at a more elegant approach to the problem at hand in the future. Quantum Dot Laser Diodes Los Alamos scientists have incorporated meticulously engineered colloidal quantum dots into a new type of light-emitting diodes (LEDs) containing an integrated optical resonator, which allows them to function as lasers. [31] Tiny, easy-to-produce particles, called quantum dots, may soon take the place of more expensive single-crystal semiconductors in advanced electronics found in solar panels, camera sensors and medical imaging tools. [30] North Carolina State University researchers have developed a microfluidic system for synthesizing perovskite quantum dots across the entire spectrum of visible light. [29] Nanoparticles derived from tea leaves inhibit the growth of lung cancer cells, destroying up to 80% of them, new research by a joint Swansea University and Indian team has shown. [28] A team of researchers including U of A engineering and physics faculty has developed a new method of detecting single photons, or light particles, using quantum dots. [27] Recent research from Kumamoto University in Japan has revealed that polyoxometalates (POMs), typically used for catalysis, electrochemistry, and photochemistry, may also be used in a technique for analyzing quantum dot (QD) photoluminescence (PL) emission mechanisms. [26] Researchers have designed a new type of laser called a quantum dot ring laser that emits red, orange, and green light. [25] The world of nanosensors may be physically small, but the demand is large and growing, with little sign of slowing.
[24] In a joint research project, scientists from the Max Born Institute for Nonlinear Optics and Short Pulse Spectroscopy (MBI), the Technische Universität Berlin (TU) and the University of Rostock have managed for the first time to image free nanoparticles in a laboratory experiment using a highintensity laser source. [23] For the first time, researchers have built a nanolaser that uses only a single molecular layer, placed on a thin silicon beam, which operates at room temperature. [22] A team of engineers at Caltech has discovered how to use computer-chip manufacturing technologies to create the kind of reflective materials that make safety vests, running shoes, and road signs appear shiny in the dark. [21] Algorithms Related to the Three Typologies with Constant Angular Step (C) and Manageable (A) (L) and (Pc) Parameters. Authors: Dante Servi Description of the algorithms related to the three typologies with constant angular step (C) and manageable (A) (L) and (Pc) parameters, provided for in the list on the sheet 8/14 (and described below) of my article "How and why to use my graphic method" published on viXra.org in the geometry group at number 1910.0620 rev.(v4). To find all my articles on this subject grouped in a single page, starting from the last published, just click on author's name: Dante Servi. Connecting Dots on Dark Matter They found that this light is brighter in regions that contain a lot of matter and dimmer where matter is sparser-a correlation that could help them narrow down the properties of exotic astrophysical objects and invisible dark matter. [28] The CASPEr team is developing special nuclear magnetic resonance (NMR) techniques, each targeted at a specific frequency range and therefore at a specific range of dark-matter particle masses. [27] This week, scientists from around the world who gathered at the University of California, Los Angeles, at the Dark Matter 2018 Symposium learned of new results in the search for evidence of the elusive material in Weakly Interacting Massive Particles (WIMPs) by the DarkSide-50 detector. [26] If they exist, axions, among the candidates for dark matter particles, could interact with the matter comprising the universe, but at a much weaker extent than previously theorized. New, rigorous constraints on the properties of axions have been proposed by an international team of scientists. [25] The intensive, worldwide search for dark matter, the missing mass in the universe, has so far failed to find an abundance of dark, massive stars or scads of strange new weakly interacting particles, but a new candidate is slowly gaining followers and observational support. [24] "We invoke a different theory, the self-interacting dark matter model or SIDM, to show that dark matter self-interactions thermalize the inner halo, which ties ordinary dark matter and dark matter distributions together so that they behave like a collective unit." [23] Technology proposed 30 years ago to search for dark matter is finally seeing the light. [22] They're looking for dark matter-the stuff that theoretically makes up a quarter of our universe. [21] Results from its first run indicate that XENON1T is the most sensitive dark matter detector on Earth. [20] High Thermal Conductivity An international team of physicists, materials scientists, and mechanical engineers has confirmed the high thermal conductivity predicted in isotopically enriched cubic boron nitride, the researchers report in the advance electronic edition of the journal Science. 
[20] Thermoelectric materials can use thermal differences to generate electricity. Now there is an inexpensive and environmentally friendly way of producing them with the simplest tools: a pencil, photocopy paper, and conductive paint. [19] A team of researchers with the University of California and SRI International has developed a new type of cooling device that is both portable and efficient. [18] A Model for Matter-Antimatter Asymmetry Authors: Risto Raitio A previous preon scenario based on unbroken global supersymmetry is developed further to provide a natural physical reason for the observed matter-antimatter asymmetry. A tentative mechanism for asymmetric genesis of matter in early cosmology is proposed. With global supersymmetry made local the scenario can be extended to supergravity. Extremely Fast Quantum Calculations Transistors based on germanium can perform calculations for future quantum computers. This discovery by the team of Menno Veldhorst is reported in Nature. [30] A new benchmark quantum chemical calculation of C2, Si2, and their hydrides reveals a qualitative difference in the topologies of core electron orbitals of organic molecules and their silicon analogues. [29] A University of Central Florida team has designed a nanostructured optical sensor that for the first time can efficiently detect molecular chirality-a property of molecular spatial twist that defines its biochemical properties. [28] UCLA scientists and engineers have developed a new process for assembling semiconductor devices. [27] A new experiment that tests the limit of how large an object can be before it ceases to behave quantum mechanically has been proposed by physicists in the UK and India. [26] Phonons are discrete units of vibrational energy predicted by quantum mechanics that correspond to collective oscillations of atoms inside a molecule or a crystal. [25] This achievement is considered as an important landmark for the realization of practical application of photon upconversion technology. [24] Considerable interest in new single-photon detector technologies has been scaling in this past decade. [23] Engineers develop key mathematical formula for driving quantum experiments. [22] Physicists are developing quantum simulators, to help solve problems that are beyond the reach of conventional computers. [21] Engineers at Australia's University of New South Wales have invented a radical new architecture for quantum computing, based on novel 'flip-flop qubits', that promises to make the large-scale manufacture of quantum chips dramatically cheaper-and easier-than thought possible. [20] Quantum Loop Communication Technology Scientists from Argonne National Laboratory and the University of Chicago launched a new testbed for quantum communication experiments from Argonne last week. [40] Physicists at The City College of New York have used atomically thin two-dimensional materials to realize an array of quantum emitters operating at room temperature that can be integrated into next generation quantum communication systems. [39] Research in the quantum optics lab of Prof. Barak Dayan in the Weizmann Institute of Science may be bringing the development of such computers one step closer by providing the "quantum gates" that are required for communication within and between such quantum computers. [38] Calculations of a quantum system's behavior can spiral out of control when they involve more than a handful of particles. 
[37] Researchers from the University of North Carolina at Chapel Hill have reached a new milestone on the way to optical computing, or the use of light instead of electricity for computing. [36] The key technical novelty of this work is the creation of semantic embeddings out of structured event data. [35] The researchers have focussed on a complex quantum property known as entanglement, which is a vital ingredient in the quest to protect sensitive data. [34] Cryptography is a science of data encryption providing its confidentiality and integrity. [33] Researchers at the University of Sheffield have solved a key puzzle in quantum physics that could help to make data transfer totally secure. [32] "The realization of such all-optical single-photon devices will be a large step towards deterministic multi-mode entanglement generation as well as high-fidelity photonic quantum gates that are crucial for all-optical quantum information processing," says Tanji-Suzuki. [31] On Some Ramanujan Equations Concerning the Continued Fractions. Further Possible Mathematical Connections with Some Parameters of Particle Physics and Cosmology VI. Comments: 115 Pages. In this research thesis, we have analyzed and deepened some equations concerning the Ramanujan continued fractions. We have described further possible mathematical connections with some parameters of Particle Physics and Cosmology. Quantum Chips Computing Correctly In a step toward practical quantum computing, researchers from MIT, Google, and elsewhere have designed a system that can verify when quantum chips have accurately performed complex computations that classical computers can't. [41] Researchers at Nanyang Technological University, Singapore (NTU Singapore) have developed a quantum communication chip that is 1,000 times smaller than current quantum setups, but offers the same superior security quantum technology is known for. [40] Physicists at The City College of New York have used atomically thin two-dimensional materials to realize an array of quantum emitters operating at room temperature that can be integrated into next generation quantum communication systems. [39] Research in the quantum optics lab of Prof. Barak Dayan in the Weizmann Institute of Science may be bringing the development of such computers one step closer by providing the "quantum gates" that are required for communication within and between such quantum computers. [38] Calculations of a quantum system's behavior can spiral out of control when they involve more than a handful of particles. [37] Researchers from the University of North Carolina at Chapel Hill have reached a new milestone on the way to optical computing, or the use of light instead of electricity for computing. [36] The key technical novelty of this work is the creation of semantic embeddings out of structured event data. [35] The researchers have focussed on a complex quantum property known as entanglement, which is a vital ingredient in the quest to protect sensitive data. [34] Cryptography is a science of data encryption providing its confidentiality and integrity. [33] Researchers at the University of Sheffield have solved a key puzzle in quantum physics that could help to make data transfer totally secure. [32] Proof of the Riemann Hypothesis Comments: 20 Pages. In International Conference From Nina Ringo on Mathematics and Mechanics 16 May 2018 Abstract: The Riemann zeta function is one of Euler's most important and fascinating functions in mathematics.
By analyzing the material of Riemann's conjecture, we divide our analysis into the ζ(z) function and the proof of the conjecture, which has very important consequences for the distribution of prime numbers. The proof of the Riemann Hypothesis results from the simple logic that when two properties are combined (the equations based on Riemann's two functional equations), namely that these equations vanish, i.e. ζ(z) = ζ(1-z) = 0, and that simultaneously the Riemann function ζ(z) has the proved 1-1 property, there is no margin for Re(z) = 1/2 not to hold {because ζ(z) = ζ(1-z) = 0 and also ζ(z) and ζ(1-z) are 1-1}. This, as it stands, forces all the non-trivial roots to lie on the critical line, with a value on the real axis equal to 1/2. Teoria Y. A Teoria Unificadora do Tudo (Theory Y. The Unifying Theory of Everything) Authors: Dos Anjos, Josênio Comments: 10 Pages. 10 pag. This scientific article is written in Portuguese. THEORY Y. THE UNIFYING THEORY OF EVERYTHING. The potential difference of Energy-Space-Time and its reactions. The absurdly fantastic simplification of humanity's greatest mysteries, many considered unsolvable until now, with their countless and complex mathematical calculations that border on madness without explaining almost anything, here revealed and explained in such a simple way. From the revelation of God's soup to the flipping of the light switch. What is light, or an electromagnetic wave? It is a potential difference, that is, an accumulated energy. But contrary to what science postulates, it does not travel at 299,792,458 m/s; it is stationary. Be amazed, that's right! What moves at 299,792,458 m/s is energy-space-time. In other words, light is a fixed scratch on a spinning 3-D vinyl record: what rotates is the record, not the scratch. Starting from this discovery, all the mysteries of astrophysics, from quantum theory to the General Theory of Relativity, will be unraveled. Complex Problems at Speed of Light Many of the most challenging optimization problems encountered in various disciplines of science and engineering, from biology and drug discovery to routing and scheduling, can be reduced to NP-complete problems. [28] AMOLF researchers and their collaborators from the Advanced Science Research Center (ASRC/CUNY) in New York have created a nanostructured surface capable of performing on-the-fly mathematical operations on an input image. [27] Narimanov has gone a step further in abstracting the imaging process by only considering information transfer, independently of how that information is encoded. [26] The Research of Using Truth to Restrict Authoritative Theories (In Chinese) Authors: Ding Jian First, to put forth an argument: truth must have absoluteness and immutability, and it does not exist in reality. Then, according to whether something exists in reality, different domains of definition are distinguished, and all knowledge is divided into three parts: natural science (materialism), metaphysics (idealism) and mathematics. The contents contained in metaphysics can be called the truths, which exist only for the sake of the existence of natural science. The characteristic of the truths is that they cannot be proved by empirical methods, and can only be gradually approached by repeated practice. The principle of seeking limits in mathematics was abstracted from the physical processes of ascertaining the truths.
And mathematics runs throughout both of the natural science and metaphysics, it has helped us to break through the bondage of finite thought by the way of infinite subdivision, from the quantitative change in real space has gone deep into the qualitative change of ideal realm. It not only has achieved the unity of opposites of all the knowledge, but can also make reasoning under the premise of mutual restriction according to have the characteristic of continuity. As a result, metaphysics has been translated into that neither divorced from practices, nor just observed objective things with a one-sided, isolated and static way of thinking. This is precisely where the bright spot of the article lies. Between the truths, which can only be reasoned through the continuity of objective things in reality, and can produce logical causalities, but cannot deduce out any contradictory state, nor can there be any chronological order. For example, a rational explanation is firstly given for the disagreement of "whether matter is the first or consciousness is the first." The philosophical question of "which came the first, chicken or egg", is explained in passing. In addition, it was also found that in Einstein's special relativity there was a paradoxes, which was to use one truth (the principle of constant light velocity in vacuum) to overthrow another truth (the absoluteness of simultaneity). After inspection, it is determined that the value c of light speed in vacuum has been replaced with the value v of real light speed. Here lay Einstein's mistake precisely. At last, his "principle of relativity" has been modified rationally. Category: History and Philosophy of Physics Infrared Silicon Photonics In a new report published on Scientific Reports, Milan M. Milošević and an international research team at the Zepler Institute for Photonics and Nanoelectronics, Etaphase Incorporated and the Departments of Chemistry, Physics and Astronomy, in the U.S. and the U.K. Introduced a hyperuniform-disordered platform to realize near-infrared (NIR) photonic devices to create, detect and manipulate light. [31] Researchers at the University of Bristol's Quantum Engineering Technology Labs have demonstrated a new type of silicon chip that can help building and testing quantum computers and could find their way into your mobile phone to secure information. [30] Theoretical physicists propose to use negative interference to control heat flow in quantum devices. [29] Particle physicists are studying ways to harness the power of the quantum realm to further their research. [28] A fundamental barrier to scaling quantum computing machines is "qubit interference." In new research published in Science Advances, engineers and physicists from Rigetti Computing describe a breakthrough that can expand the size of practical quantum processors by reducing interference. [26] Optical Resonators In the quantum realm, under some circumstances and with the right interference patterns, light can pass through opaque media. [34] Researchers at the Technion-Israel Institute of Technology have constructed a first-of-its-kind optic isolator based on resonance of light waves on a rapidly rotating glass sphere. [33] The micro-resonator is a two-mirror trap for the light, with the mirrors facing each other within several hundred nanometers. 
[32] "The realization of such all-optical single-photon devices will be a large step towards deterministic multi-mode entanglement generation as well as high-fidelity photonic quantum gates that are crucial for all-optical quantum information processing," says Tanji-Suzuki. [31] Researchers at ETH have now used attosecond laser pulses to measure the time evolution of this effect in molecules. [30] A new benchmark quantum chemical calculation of C2, Si2, and their hydrides reveals a qualitative difference in the topologies of core electron orbitals of organic molecules and their silicon analogues. [29] A University of Central Florida team has designed a nanostructured optical sensor that for the first time can efficiently detect molecular chirality-a property of molecular spatial twist that defines its biochemical properties. [28] UCLA scientists and engineers have developed a new process for assembling semiconductor devices. [27] A new experiment that tests the limit of how large an object can be before it ceases to behave quantum mechanically has been proposed by physicists in the UK and India. [26] Phonons are discrete units of vibrational energy predicted by quantum mechanics that correspond to collective oscillations of atoms inside a molecule or a crystal. [25] Super Cold Memory Storage Scientists at the Department of Energy's Oak Ridge National Laboratory have experimentally demonstrated a novel cryogenic, or low temperature, memory cell circuit design based on coupled arrays of Josephson junctions, a technology that may be faster and more energy efficient than existing memory devices. [34] Just like their biological counterparts, hardware that mimics the neural circuitry of the brain requires building blocks that can adjust how they synapse, with some connections strengthening at the expense of others. [33] Open Letter to Physicists Authors: Robert Yusupov Comments: 6 Pages. Open Letter to Physicists © Robert Yusupov 14.01.2020 This is a fiery appeal and appeal to the entire physical community for recognition of the "Theory of Nature" by Robert Yusupov, a free researcher, dialectical materialist, communist. For 7 years now, the physical power of the Russian Federation, headed by the Physical Sciences Division of the Russian Academy of Sciences, has been unable to really, objectively, honestly and impartially evaluate the "Theory of Nature" for scientific significance and consistency. Physical power in the Russian Federation has completely degraded! Physical power in the Russian Federation has completely lost its scientific vigilance and scientific scent! Physical power in the Russian Federation has proved its professional unsuitability! Physical power in the Russian Federation reached full scientific impotence! Physical power in the Russian Federation has proved its complete unscrupulousness and unconsciousness! Physical power in the Russian Federation has gone blind and does not see that TN is a Revolution in physics! Physical power in the Russian Federation has become a brake on scientific progress, primarily in physics! It's time to change this rotten power! Открытое письмо физикам Comments: 7 Pages. Открытое письмо физикам © Robert Yusupov 14.01.2020 Это пламенное обращение и воззвание ко всему физическому сообществу на предмет признания «Теории Природы» Юсупова Роберта, свободного исследователя, диалектического материалиста, коммуниста. 
Физическая власть РФ во главе с ОФН РАН вот уже 7 лет никак не может реально, объективно, честно и непредвзято оценить «Теорию Природы» на предмет научной значимости и состоятельности. Физическая власть в РФ деградировала абсолютно! Физическая власть в РФ полностью потеряла научную бдительность и научный нюх! Физическая власть в РФ доказала свою профессиональную непригодность! Физическая власть в РФ дошла до полной научной импотенции! Физическая власть в РФ доказала свою полную беспринципность и несознательность! Физическая власть в РФ ослепла и не видит, что ТП – это Революция в физике! Физическая власть в РФ стала тормозом научного прогресса и в первую очередь в физике! Пора менять эту гнилую власть! A Project for Village Empowerment Authors: Acharya Sennimalai Kalimuthu Comments: 21 Pages. NA In this project, a number of new economic, political and administrative reforms have been proposed. Space, Time and Nature of Physical Interactions (Text in Polish) Authors: Kajetan Młynarski Abstract The work uses non-standard concepts and calculations based on the concept of identity (i.e. not based on set theory as a basic theory). This made it possible to obtain several potentially interesting results. For example: the number of dimensions of a space is not an assumption, it is determined by a theorem. Also a theorem indicates the existence of a maximum speed having properties analogous to c. According to the presented theory, there is only one fundamental interaction. Depending on the parameters (distance, rotation), it has properties similar to known fundamental interactions. In the final part I propose some experiments checking the obtained results. Fourth Power Algorithm Using Polynomials Authors: Zeolla Gabriel This document develops and demonstrates the discovery of a new potentiation algorithm that works absolutely with all the numbers using the formula of the square of a binomial, trinomial, tetranomial and pentanomial Category: Algebra Influential Electrons Quantum Relationship A team of physicists has mapped how electron energies vary from region to region in a particular quantum state with unprecedented clarity. [47] Observation of Spin-Charge Deconfinement in Fermionic Hubbard Chains"), they used a so-called quantum simulator. [46] From raindrops rolling off the waxy surface of a waterlily leaf to the efficiency of desalination membranes, interactions between water molecules and water-repellent "hydrophobic" surfaces are all around us. [45] It takes a Decision to Decide if Decidability is True or False Authors: Manfred U.E.Pohl Turning a discarded (Descartes) Coordinate-System into a new (Newtonian) Coordinate-System - [pi / pi :=1 ; pi := c := Meter / Second] - Written to the FXQi Essay Contest on "Undecidability, Uncomputability, and Unpredictability" Relation of Accelerations in Two Inertial Frames in Special Relativity Theory Authors: Sangwha Yi Comments: 4 Pages. Thank you for reading In special relativity theory, we discover the relation of inertial frames' accelerations. In this theory, we can understand general state of the relation of inertial frames' accelerations Twin Paradox Solution (Language German). Authors: Genrih Leonidovich Arutyunov Twin Paradox solution (language German). Das Zwillingsparadoxon Beweis und Lösung zum ersten Mal seit 110 Jahren. Refutation of the Zorn Lemma We evaluate theorems for the Kuratowski-Zorn lemma and Zorn's lemma for which both are not tautologous. This refutes the Zorn lemma to form a non tautologous fragment of the universal logic VŁ4. 
Hybrid Time Theory: "Euler's Formula" and the "Phi-Algorithm" In this series of papers on the golden ratio algorithm (phi-algorithm) for time [1-14], the concept of time as the phi-algorithm has been the core focus of topic. Through the development of the papers, the idea of the phi-algorithm for time seeking to define pi has resulted in the development a vast field of ideas covering what is perceived and measured of the natural physical world, containing equations that fit all observed data regarding the field forces for mass and energy and associated constants thereof. These results took shape from the initial premise of defining space as a "0" 3-d construct associated to time, "time" as a concept that is tagged to a basic logical construct of consciousness, namely the features of time-before, time-now, and time-after, and how that process of time from time-before to time-after can fit with the basic feature of 0-space in time-now. It was proposed that time would propagate from any point in 0-space as a spherical wavefront at a fixed rate. Now, in this paper, that assumption will be fully addressed, namely the assumption of time seeking to conform to a spherical wavefront and thus to the notion of the value of pi. This will be achieved by first addressing how words should be most precisely and efficiently used to explain the notion of space and time in detailing a just as efficient and precise use of numbers describing the process of linking a 0 reference in space to an infinite reference. Then, that description will uphold the findings of papers 1-14 [1-14], most importantly the final premise reached in paper 14 regarding natural radioactive decay. There it shall be explained exists an associated equation for time, not explicitly the phi-algorithm for time, yet an associated algorithm of its own explained in Euler's formula. The aim here in this paper is to then propose a compromise between two time-theories as a general equation for time involving the value of "Euler's number", "phi", and "pi". Inertial Motion of the Quantum Self-Interacting Electron Authors: Peter Leifer Attempts represent the self-interacting quantum electron as the cyclic motion on the stable attractor has been discussed. This motion subjects quantum inertia principle expressed by the parallel transported energy-momentum generator along a closed geodesic in the space of the unlocated quantum states $CP(3)$ . The affine gauge potential in the complex projective state space (similar to the Higgs potential) seriously deforms the Jacobi fields in the vicinity of the ``north pole". It was assumed that the divergency of the Jacobi field may be compensated by the fields of the Poincar\'e generators representing EM-like ``field shell" of the electron in the dynamical spacetime. Thereby, the spacetime looks as ultimately deprecated in the role of the ``container of matter" and it appears as the accompanied to the quantum electron functional space (dynamical spacetime). Meanwhile, the dynamics of the self-interacting electron is essentially non-linear and deterministic. Protein Sestrin Benefits Michigan Medicine researchers studying a class of naturally occurring protein called Sestrin have found that it can mimic many of exercise's effects in flies and mice. [38] Researchers have developed a way to prop up a struggling immune system to enable its fight against sepsis, a deadly condition resulting from the body's extreme reaction to infection. 
[37] An interdisciplinary team of scientists from KU Leuven, the University of Bremen, the Leibniz Institute of Materials Engineering, and the University of Ioannina has succeeded in killing tumour cells in mice using nano-sized copper compounds together with immunotherapy. [36] Herpes Simplex Viruses An Italian research team has refined the history and origins of two extremely common pathogens in human populations, herpes simplex virus type 1 and type 2. [39] Michigan Medicine researchers studying a class of naturally occurring protein called Sestrin have found that it can mimic many of exercise's effects in flies and mice. [38] Researchers have developed a way to prop up a struggling immune system to enable its fight against sepsis, a deadly condition resulting from the body's extreme reaction to infection. [37] An interdisciplinary team of scientists from KU Leuven, the University of Bremen, the Leibniz Institute of Materials Engineering, and the University of Ioannina has succeeded in killing tumour cells in mice using nano-sized copper compounds together with immunotherapy. [36] Asymptotic Safety, Black-Hole Cosmology and the Universe as a Gravitating Vacuum State Authors: Carlos Castro A model of the Universe as a dynamical homogeneous anisotropic self-gravitating fluid, consistent with Kantowski-Sachs homogeneous anisotropic cosmology and Black-Hole cosmology, is developed. Renormalization Group (RG) improved black-hole solutions resulting from Asymptotic Safety in Quantum Gravity are constructed which explicitly $remove$ the singularities at $t = 0$. Two temporal horizons at $ t _- \simeq t_P$ (Planck time) and $ t_+ \simeq t_H$ (Hubble time) are found. For times below the Planck time $ t < t_P$, and above the Hubble time $ t > t_H$, the components of the Kantowski-Sachs metric exhibit a key sign $change$, so the roles of the spatial $z$ and temporal coordinates $ t$ are $exchanged$, and one recovers a $repulsive$ inflationary de Sitter-like core around $ z = 0$, and a Schwarzschild-like metric in the exterior region $ z > R_H = 2 G_o M $. Therefore, in this fashion one has found a dynamical Universe $inside$ a Black Hole whose Schwarzschild radius coincides with the Hubble radius $ r_s = 2 G_o M = R_H$. For these reasons we conclude by arguing that our Universe could be seen as a Gravitating Vacuum State inside a Black-Hole. The Research of Using Truth to Restrict Authoritative Theories If Background em Radiation Forms a Locality Relative to Which em Waves Propagate Their Speed/wavelength Energy Mix Then the Time Dilation Theory is not Needed. Authors: Iain Smith Observations that the speed of electromagnetic waves are reliably measured to travel at the speed of light "c" relative to the observer and proved to be independent of the emitters relative speed have resulted in the theory of the dilation of time as a practical interpretation of the theory of special and general relativity. This papers alternative theory suggests background EM radiation provides the locality that EM waves latch onto and set their speed/wavelength mix relative to. As such back ground radiation will exist locally to all emitters and observers of EM waves this would explain the observations that currently force the time dilation theory. 
In turn, if time dilation does not exist then the correlation between extended atomic half lives and speed are in fact an observation of a transfer of energy to the atoms stores during accelerating events they have experienced rather than proof of the rate of time slowing down at speed. This theory is applied to the results of experiments bouncing laser pulses off a reflector on the moon as observational confirmation. Metals Trace History of Galaxies Astronomers have cataloged signs of nine heavy metals in the infrared light from supergiant and giant stars. [34] The data comprising this image were gathered by the Wide Field Camera 3 aboard the NASA/ESA Hubble Space Telescope. [33] For almost 10 years, astronomers have been studying a mysterious diffuse radiation coming from the centre of our galaxy. [32] Deep beneath a mountain in the Apennine range in Italy, an intricate apparatus searches for the dark matter of the universe. [31] One-Dimensional Quantum Divorce Observation of Spin-Charge Deconfinement in Fermionic Hubbard Chains"), they used a so-called quantum simulator. [46] From raindrops rolling off the waxy surface of a waterlily leaf to the efficiency of desalination membranes, interactions between water molecules and water-repellent "hydrophobic" surfaces are all around us. [45] The ever-more-humble carbon nanotube may be just the device to make solar panels—and anything else that loses energy through heat—far more efficient. [44] When traversing a solid material such as glass, a light wave can deposit part of its energy in a mechanical wave, leading to a color change of the light. [43] On Some Ramanujan Equations Concerning the Continued Fractions. Further Possible Mathematical Connections with Some Parameters of Particle Physics and Cosmology V. In this research thesis, we have analyzed and deepened some equations concerning the Ramanujan continued fractions. Further possible mathematical connections with some parameters of Particle Physics and Cosmology. Proteome State in Live Cells Australian scientists have developed a molecular probe that senses the state of the proteome—the entire set of the proteins—by measuring the polarity of the protein environment. [39] Michigan Medicine researchers studying a class of naturally occurring protein called Sestrin have found that it can mimic many of exercise's effects in flies and mice. [38] Researchers have developed a way to prop up a struggling immune system to enable its fight against sepsis, a deadly condition resulting from the body's extreme reaction to infection. [37] The Fuzzy Probabilities Authors: Antoine Balan Comments: 1 page, written in english We propose to introduce a measure in the theory of fuzzy sets, calling this notion the fuzzy probabilities. Response to the Postmodernisms' of the Age: Contemporary Issues of Notability Authors: Miguel A. Sanchez-Rey Postmodernisms' of the Age is a contemporary issue of notability. Understanding & Exploring -> [ Mandelbrot Algorithms+AI+QRNG Concepts+Hard Problem Concepts based on Python & Haskell ] – A Short Communication. Authors: Nirmal Tej Kumar Comments: 5 Pages. Short Communication [ PART A ] - Python Medical Image Processing & Electron Microscopy Image Processing Informatics Using Python/LLVM. [ PART B ] - Haskell Exploring a JIT Compiler with Haskell and LLVM in the Context of Medical Image Processing & Electron Microscopy Image Processing Software R&D Using Mandelbrot Algorithms. Модель униполярного заряда Authors: Киров Comments: 1 Page. 
Russ lang Определена модель униполярного заряда Unipolar Charge Model Authors: Kirov Unipolar charge model determined From Deriving Mass-Energy Equivalence with Classical Physics to Mass-Velocity Relation and Charge-Velocity Relation of Electrons Authors: JianFei Chen There are controversies on mass-velocity relation and charge of moving electrons, which are related with mass-energy equivalence. Author thought that mass-energy equivalence was expressing the energy relation of bodies and space as mass. In this paper author proposed relative kinetic energy $E_{rk}=mv^2$ to explain this relation. With relative kinetic energy theory, author inferred the equations for mass-energy equivalence and mass-velocity relation. While analyzing the electron acceleration movement, author found the reasons for the two unreal equation of mass-velocity relation, and determined the equations of mass-velocity relation and charge-velocity relation of electrons. These determinations are of important significance for relativity theory and electrodynamics, even for superconducting research. [[email protected], [email protected]] Energy Efficient Computers and Smartphones With enhanced properties such as greater strength, lighter weight, increased electrical conductivity and chemical reactivity, nanomaterials (NMs) are widely used in areas like ICT, energy and medicine. [35] Scientists at the Okinawa Institute of Science and Technology Graduate University (OIST) have developed a light-based device that can act as a biosensor, detecting biological substances in materials; for example, harmful pathogens in food samples. [34] A tightly focused, circularly polarized spatially phase-modulated beam of light formed an optical ring trap. [33] Scientists at Tokyo Institute of Technology proposed new quasi-1-D materials for potential spintronic applications, an upcoming technology that exploits the spin of electrons. [32] They do this by using "excitons," electrically neutral quasiparticles that exist in insulators, semiconductors and in some liquids. [31] Researchers at ETH Zurich have now developed a method that makes it possible to couple such a spin qubit strongly to microwave photons. [30] Quantum dots that emit entangled photon pairs on demand could be used in quantum communication networks. [29] Researchers successfully integrated the systems-donor atoms and quantum dots. [28] A team of researchers including U of A engineering and physics faculty has developed a new method of detecting single photons, or light particles, using quantum dots. [27] Recent research from Kumamoto University in Japan has revealed that polyoxometalates (POMs), typically used for catalysis, electrochemistry, and photochemistry, may also be used in a technique for analyzing quantum dot (QD) photoluminescence (PL) emission mechanisms. [26] Researchers have designed a new type of laser called a quantum dot ring laser that emits red, orange, and green light. [25] The world of nanosensors may be physically small, but the demand is large and growing, with little sign of slowing. [24] Do US Sanctions On Iran Work? Survival in The Modern World Authors: Syed Noman Ali Shah "There is no doubt that the United states will not achieve success with this new plot against Iran." Iran's President Hassan Rouhani on US new sanctions. My research is composed Iran's survival in isolation after a hard tension by the west for 40 years. How Iran is going to manage it people through its policies and diplomacy nationally and internationally. 
In today's globalized world how, Iran react towards sanctions and focuses on its nuclear weapons production. Relations with Russia as alternate ally after US and supporting of militant groups inside the middle east to extend her regime. Shall different engagements at a time lead Iran into hard situation like in Yemen, Syria, Iraq, supporting Hammas and Hezbollah which are black-listed by the international community. On one side Iran is facing sanctions from the world like import-export shut down, no travelling by sea and air, no foreign transactions, steel and iron industry which the biggest contributing sector in the Iran's economy is and parallel to these issues Iran is financially and militarily supporting internationally these groups. Gold Atoms Peculiar Pyramidal Shape Freestanding clusters of 20 gold atoms take the shape of a pyramid, researchers have discovered. [30] The multimodal nanoscience approach to studying quantum physics phenomena is, they say, a "technological leap for how scientists can explore quantum materials to unearth new phenomena and guide future functional engineering of these materials for real-world applications." [29] Researchers have developed a three-dimensional dynamic model of an interaction between light and nanoparticles. [28] Scientists from ITMO University have developed effective nanoscale light sources based on halide perovskite. [27] Physicists have developed a technique based on optical microscopy that can be used to create images of atoms on the nanoscale. [26] Researchers have designed a new type of laser called a quantum dot ring laser that emits red, orange, and green light. [25] The world of nanosensors may be physically small, but the demand is large and growing, with little sign of slowing. [24] In a joint research project, scientists from the Max Born Institute for Nonlinear Optics and Short Pulse Spectroscopy (MBI), the Technische Universität Berlin (TU) and the University of Rostock have managed for the first time to image free nanoparticles in a laboratory experiment using a highintensity laser source. [23] For the first time, researchers have built a nanolaser that uses only a single molecular layer, placed on a thin silicon beam, which operates at room temperature. [22] A team of engineers at Caltech has discovered how to use computer-chip manufacturing technologies to create the kind of reflective materials that make safety vests, running shoes, and road signs appear shiny in the dark. [21] In the September 23th issue of the Physical Review Letters, Prof. Julien Laurat and his team at Pierre and Marie Curie University in Paris (Laboratoire Kastler Brossel-LKB) report that they have realized an efficient mirror consisting of only 2000 atoms. [20] Boosting Cell's Antibacterial Properties Researchers have developed a way to prop up a struggling immune system to enable its fight against sepsis, a deadly condition resulting from the body's extreme reaction to infection. [37] An interdisciplinary team of scientists from KU Leuven, the University of Bremen, the Leibniz Institute of Materials Engineering, and the University of Ioannina has succeeded in killing tumour cells in mice using nano-sized copper compounds together with immunotherapy. [36] Johns Hopkins researchers report that a type of biodegradable, lab-engineered nanoparticle they fashioned can successfully deliver a "suicide gene" to pediatric brain tumor cells implanted in the brains of mice. 
[35]
Relativity, Light and Photons This paper adds some thoughts on relativity theory and geometry to our one-cycle photon model. We basically highlight what we should think of as being relative in this model (energy, wavelength, and the related force/field values), as opposed to what is absolute (the geometry of spacetime and the geometry of the photon). We also expand our photon model somewhat by introducing an electromagnetic vector combining electric and magnetic fields. Finally, we add a discussion on how we can think about photon-electron interactions and polarization.
Discrete & Continuous Dynamical Systems - S, September 2020, 13(9): 2561-2573. doi: 10.3934/dcdss.2020138
Controllability analysis of nonlinear fractional order differential systems with state delay and non-instantaneous impulsive effects
Baskar Sundaravadivoo, Department of Mathematics, Alagappa University, Karaikudi-630 004, India
Received: November 2018. Revised: April 2019. Early access: November 2019. Published: September 2020.
Abstract. This manuscript studies the controllability of non-instantaneous impulsive Volterra-type fractional differential systems with state delay. By employing an appropriate Grammian matrix with the assistance of the Laplace transform, necessary and sufficient conditions for the controllability of non-instantaneous impulsive Volterra-type fractional differential equations are derived using an algebraic approach and the Cayley-Hamilton theorem. A distinctive feature of this manuscript is that non-instantaneous impulses are incorporated into a fractional order dynamical system with state delay and its controllability is analyzed, a combination not yet treated in the available literature. Two illustrative examples involving non-instantaneous impulses in the fractional dynamical system are provided, demonstrating the validity and efficacy of the obtained criteria.
Keywords: Fractional differential equations, Caputo fractional derivative, non-instantaneous impulses, Laplace transformation, Mittag-Leffler function.
Mathematics Subject Classification: 93B05, 34A08.
Citation: Baskar Sundaravadivoo. Controllability analysis of nonlinear fractional order differential systems with state delay and non-instantaneous impulsive effects. Discrete & Continuous Dynamical Systems - S, 2020, 13 (9): 2561-2573. doi: 10.3934/dcdss.2020138
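The abstract appeals to the Laplace transform and a Grammian matrix without reproducing the underlying formulas. Purely as background (a standard result for the linear, delay-free, non-impulsive Caputo case, added here for orientation and not quoted from this paper), the system $^{C}D^{\alpha}x(t)=Ax(t)+Bu(t)$ with $0<\alpha<1$ and $x(0)=x_0$ transforms and solves as
$$\begin{aligned} X(s)&=s^{\alpha-1}(s^{\alpha}I-A)^{-1}x_0+(s^{\alpha}I-A)^{-1}BU(s),\\ x(t)&=E_{\alpha}(At^{\alpha})\,x_0+\int_0^t (t-\tau)^{\alpha-1}E_{\alpha,\alpha}\bigl(A(t-\tau)^{\alpha}\bigr)B\,u(\tau)\,d\tau, \end{aligned}$$
where $E_{\alpha}$ and $E_{\alpha,\alpha}$ are the one- and two-parameter Mittag-Leffler functions. A controllability Grammian is then built from the input-to-state kernel $(t-\tau)^{\alpha-1}E_{\alpha,\alpha}(A(t-\tau)^{\alpha})B$ and its transpose; the exact form used in the paper, which must additionally account for the state delay and the non-instantaneous impulses, may differ.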
Fundamental Approaches to Software Engineering (FASE 2016), pp. 179-196. Part of the Lecture Notes in Computer Science book series (LNCS, volume 9633).
Cut Branches Before Looking for Bugs: Sound Verification on Relaxed Slices
Jean-Christophe Léchenet, Nikolai Kosmatov, Pascale Le Gall
Abstract. Program slicing can be used to reduce a given initial program to a smaller one (a slice) which preserves the behavior of the initial program with respect to a chosen criterion. Verification and validation (V&V) of software can become easier on slices, but require particular care in presence of errors or non-termination in order to avoid unsound results or a poor level of reduction in slices. This article proposes a theoretical foundation for conducting V&V activities on a slice instead of the initial program. We introduce the notion of relaxed slicing that remains efficient even in presence of errors or non-termination, and establish an appropriate soundness property. It allows us to give a precise interpretation of verification results (absence or presence of errors) obtained for a slice in terms of the initial program. Our results have been proved in Coq.
1 Introduction
Context. Program slicing was initially introduced by Weiser [32, 33] as a technique allowing to decompose a given program into a simpler one, called a program slice, by analyzing its control and data flow. In the classic definition, a (program) slice is an executable program subset of the initial program whose behavior must be identical to a specified subset of the initial program's behavior. This specified behavior that should be preserved in the slice is called the slicing criterion. A common slicing criterion is a program point l. For the purpose of this paper, we prefer this simple formulation to another criterion (l, V) where a set of variables V is also specified. Informally speaking, program slicing with respect to the criterion l should guarantee that any variable v at program point l takes the same value in the slice and in the original program. Since Weiser's original work, many researchers have studied foundations of program slicing (e.g. [4, 5, 6, 8, 11, 14, 20, 26, 27, 28]). Numerous applications of slicing have been proposed, in particular, to program understanding, software maintenance, debugging, program integration and software metrics. Comprehensive surveys on program slicing can be found e.g. in [9, 29, 30, 35]. In recent classifications of program slicing, Weiser's original approach is called static backward slicing since it simplifies the program statically, for all possible executions at the same time, and traverses it backwards from the slicing criterion in order to keep those statements that can influence this criterion. Static backward slicing based on control and data dependencies is also the purpose of this work.
Goals and Approach. Verification and Validation (V&V) can become easier on simpler programs after "cutting off irrelevant branches" [13, 15, 17, 22].
Our main goal is to address the following research question: (RQ) Can we soundly conduct V&V activities on slices instead of the initial program? In particular, if there are no errors in a program slice, what can be said about the initial program? And if an error is found in a program slice, does it necessarily occur in the initial program? We consider errors determined by the current program state such as runtime errors (that can either interrupt the program or lead to an undefined behavior). We also consider a realistic setting of programs with potentially non-terminating loops, even if this non-termination is unintended. So we assume neither that all loops terminate, nor that all loops do not terminate, nor that we have a preliminary knowledge of which loops terminate and which loops do not. Dealing with potential runtime errors and non-terminating loops is very important for realistic programs since their presence cannot be a priori excluded, especially during V&V activities. Although quite different at first glance, both situations have a common point: they can in some sense interrupt normal execution of the program preventing the following statements from being executed. Therefore, slicing away (that is, removing) potentially erroneous or non-terminating sub-programs from the slice can have an impact on soundness of program slicing. While some aspects of (RQ) were discussed in previous papers, none of them provided a complete formal answer in the considered general setting (as we detail in Sects. 2 and 6 below). To satisfy the traditional soundness property, program slicing would require to consider additional dependencies of each statement on previous loops and error-prone statements. That would lead to inefficient (that is, too large) slices, where we would systematically preserve all potentially erroneous or non-terminating statements executed before the slicing criterion. Such slices would have very limited benefit for our purpose of performing V&V on slices instead of the initial program. This work proposes relaxed slicing, where additional dependencies on previous (potentially) erroneous or non-terminating statements are not required. This approach leads to smaller slices, but needs a new soundness property. We state and prove a suitable soundness property using a trajectory-based semantics, and show how this result can justify V&V on slices by characterizing possible verification results on slices in terms of the initial program. The proof has been formalized in the Coq proof assistant [7] and is available in [1]. The Contributions of this work include: a comprehensive analysis of issues arising for V&V on classic slices; the notion of relaxed slicing (Definition 6) for structured programs with possible errors and non-termination, that keeps fewer statements than it would be necessary to satisfy the classic soundness property of slicing; a new soundness property for relaxed slicing (Theorem 1); a characterization of verification results, such as absence or presence of errors, obtained for a relaxed slice, in terms of the initial program, that constitutes a theoretical foundation for conducting V&V on slices (Theorems 2, 3); a formalization and proof of our results in Coq. Paper Outline. Section 2 presents our motivation and illustrating examples. The considered language and its semantics are defined in Sect. 3. Section 4 defines the notion of relaxed slice and establishes its main soundness property. Next, Sect. 
5 formalizes the relationship between the errors in the initial program and in a relaxed slice. Finally, Sects. 6 and 7 present the related work and the conclusion with some future work.
2 Motivation and Running Examples
Errors and Assertions. We consider errors that are determined by the current program state, including runtime errors (division by zero, out-of-bounds array access, arithmetic overflows, out-of-bounds bit shifting, etc.). Some of these errors do not always interrupt program execution and can sometimes lead to an (even more dangerous) undefined behavior, such as reading or writing an arbitrary memory location after an out-of-bounds array access in C. Since we cannot take the risk to overlook some of these "silent runtime errors", we assume that all threatening statements are annotated with explicit assertions assert(C) placed before them, that interrupt the execution whenever the condition C is false. This assumption will be convenient for the formalization in the next sections: possible runtime errors will always occur in assertions. Such assertions can be generated syntactically (for example, by the RTE plugin of the Frama-C toolset [21] for C programs). For instance, line 10 in Fig. 1a prevents division by zero at line 11, while line 13 makes explicit a potential runtime error at line 14 if the array a is known to be of size N. In addition, the assert(C) keyword can be also used to express any additional user-defined properties on the current state.
Most previous applications of slicing to debugging used slices in order to better understand an already detected error, by analyzing a simpler program rather than a more complex one [8, 29, 30]. Our goal is quite different: to perform V&V on slices in order to discover yet unknown errors, or show their absence (cf. (RQ)). The interpretation of absence or presence of errors in a slice in terms of the initial program requires solid theoretical foundations.
Fig. 1 (not reproduced): (a) A program computing in two ways the average of elements of a given array a of size N whose only nonzero elements can be at indices \(\{0,k,2k,\dots \}\), and its two slices: (b) w.r.t. line 18, and (c) w.r.t. line 20.
Classic Soundness Property. Let p be a program, and q a slice of p w.r.t. a slicing criterion l. The classic soundness property of slicing (cf. [6, Definition 2.5] or [28, Slicing Th.]) can be informally stated as follows.
Property 1. Let \(\sigma \) be an input state of p. Suppose that p halts on \(\sigma \). Then q halts on \(\sigma \) and the executions of p and q on \(\sigma \) agree after each statement preserved in the slice on the variables that appear in this statement.
This property was originally established for classic dependence-based slicing for programs without runtime errors and only for executions with terminating loops: nothing is guaranteed if p does not terminate normally on \(\sigma \). Let us show why this property does not hold in presence of potential runtime errors or non-terminating loops.
Illustrating Examples. Figure 1a presents a simple (buggy) C-like program that takes as inputs an array a of length N and an integer k (with \(0\leqslant \) k \(\leqslant 100\), \(0\leqslant \) N \(\leqslant 100\)), and computes in two different ways the average of the elements of a.
We suppose that all variables and array elements are unsigned integers, and all elements of a whose index is not a multiple of k are zero, so it suffices to sum array elements over the indices multiples of k and to divide the sum by N. The sum is computed twice (in s1 at lines 3–8 and in s2 at lines 9–16), and the averages avg1 and avg2 are computed (lines 17–20) and compared (lines 21–22). We assume that necessary assertions with explicit guards (at lines 5, 10, 13, 17, 19) are inserted to prevent runtime errors. Figure 1b shows a (classic dependence-based) slice of this program with respect to the statement at line 18. Intuitively, it contains only statements (at lines 1, 3, 4, 6, 7, 18) that can influence the slicing criterion, i.e. the values of variables that appear at line 18 after its execution. In addition, we keep the assertions to prevent potential errors in preserved statements. Similarly, Fig. 1c shows a slice with respect to line 20, again with protecting assertions.
Fig. 2 (not reproduced): Errors, non-termination (\(\circlearrowleft \)) and normal termination (—) of programs of Fig. 1 for some inputs.
Figure 2 summarizes the behavior of the three programs of Fig. 1 on some test data. The elements of a do not matter here. Suppose we found an error at line 17 in slice (b) provoked by test datum \(\sigma _4\). Program (a) does not contain the same error: it fails earlier, at line 13. We say that the error at line 17 in slice (b) is hidden by the error at line 13 of the initial program. Similarly, test datum \(\sigma _5\) provokes an error at line 17 in slice (b) while this error is hidden by an error at line 10 in (a). In fact, the error at line 17 cannot be reproduced on the initial program, so we say that it is totally hidden by other errors. For slice (c), detecting an error at line 10 on test datum \(\sigma _5\) would allow us to observe the same error in (a). However, if this error in slice (c) is also provoked by test datum \(\sigma _3\), this test datum does not provoke any error in (a) because the loop at line 4 does not terminate. We say that this error is (partially) hidden by a non-termination of the loop at line 4.
For instance, to ensure that the executions of program (a) and slice (b) activated by test datum \(\sigma _4\) agree on all statements of slice (b), line 13 should be preserved in slice (b). That would result (by transitivity of dependencies) in keeping e.g. the loop at line 12 and lines 9–11 in slice (b) as well. Similarly, the loop at line 4 should be kept in slice (c) to avoid disagreeing executions for test datum \(\sigma _3\). The slices can become much bigger in this approach. In this paper we propose relaxed slicing, an alternative approach that does not require to keep all loops or error-prone statements that can be executed before the slicing criterion, but ensures a weaker soundness property. We demonstrate that the new soundness property is sufficient to justify V&V on slices instead of the initial program. In particular, we show that reasons (i) and (ii) above are the only possible reasons of a hidden error, and investigate when the absence of errors in slices implies the absence of errors in the initial program. 3 The Considered Language and Its Semantics Language. In this study, we consider a simple WHILE language (with integer variables, fixed-size arrays, pure expressions, conditionals, assertions and loops) that is representative for our formalization of slicing in presence of runtime errors and non-termination. The language is defined by the following grammar: $$\begin{aligned} Prog \quad {:}{:=} \quad&Stmt^*&\\ Stmt \quad {:}{:=} \quad&l:\texttt {skip}\ |&\\&l : x=e\ |&\\&\texttt {if}\ (l:b)\ Prog\ \texttt {else}\ Prog\ |&\\&\texttt {while}\ (l:b)\ Prog \ |&\\&l : \texttt {assert}\ (b, l')&\end{aligned}$$ where \(l, l'\) denote labels, e an expression and b a boolean expression. A program (Prog) is a possibly empty list of statements (Stmt). The empty list is denoted \(\lambda \), and the list separator is "; ". We assume that the labels of any given program are distinct, so that a label uniquely identifies a statement. Assignments, conditions and loops have the usual semantics. As its name suggests, \(\texttt {skip}\) does nothing. The assertion \(\texttt {assert}(b, l')\) stops program execution in an error state (denoted \(\varepsilon \)) if b is false, otherwise execution continues normally. As said earlier, we assume that assertions are added to protect all threatening statements. The label \(l'\) allows us to associate the assertion with another statement that should be protected by the assertion (e.g. because it could provoke a runtime error). An assertion often protects the following line (like in Fig. 1, where the protected label is not indicated). Two simple cases however need more flexibility (cf. Fig. 3). Some assertions have to be themselves protected by assertions when they contain a threatening expression. Figure 3a gives such an example where, instead of creating three assertions pointing to l, assertions l1 and l2 point to l, and assertion l3 points to another assertion l1. Figure 3b (inspired by the second loop of Fig. 1) shows how assertions with explicit labels can be used to protect a loop condition from a runtime error. The arrows in Fig. 3 indicate the protected statement. Assertions can be also added by the user to check other properties than runtime errors. If the user does not need to indicate the protected statement, they can choose for \(l'\) either the label l of the assertion itself or any label not used elsewhere in the program. User-defined assertions should be also protected against errors by other assertions if necessary. 
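To make the labelled-assertion mechanism more concrete, here is a small sketch written in the C-like concrete syntax used above (our own illustration with made-up labels and variable names; it is not the paper's Fig. 3). One assertion placed before the loop and another at the end of the loop body both point to the label l of the loop condition, so that every evaluation of the condition N / k is protected against division by zero, while a separate assertion protects the array access:

  la : i = 0;
  lb : s = 0;
  l0 : assert (k != 0, l);        // protects the first evaluation of the loop condition
  while (l : i <= N / k) {
    l1 : assert (i * k < N, l2);  // protects the array access at l2
    l2 : s = s + a[i * k];
    l3 : i = i + 1;
    l4 : assert (k != 0, l)       // protects the next evaluation of the loop condition
  }

Each assert carries its own label and names the label of the statement it protects; this is exactly the information that the assertion dependence introduced in Sect. 4 exploits.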
Fig. 3 (not reproduced): Two special cases of assertions.
Semantics. Let p be a program. A program state is a mapping from variables to values. Let \(\varSigma \) denote the set of all valid states, and \(\varSigma _\varepsilon =\varSigma \cup \{\varepsilon \}\), where \(\varepsilon \) is the error state. Let \(\sigma \) be an initial state of p. The trajectory of the execution of p on \(\sigma \), denoted \(\mathcal {T} \llbracket p \rrbracket \sigma \), is the sequence of pairs \(\langle (l_1,\sigma _1)\dots (l_k,\sigma _k)\dots \rangle \), where \(l_1,\dots ,l_k,\dots \) is the sequence of labels of the executed instructions, and \(\sigma _i\) is the state of the program after the execution of instruction \(l_i\). \(\mathcal {T}\) can be seen as a (partial) function $$\begin{aligned} \mathcal {T} : Prog \rightarrow \varSigma \rightarrow Seq(L \times \varSigma _\varepsilon ) \end{aligned}$$ where \(Seq(L \times \varSigma _\varepsilon )\) is the set of sequences of pairs \((l,\sigma )\in L \times \varSigma _\varepsilon \). Trajectories can be finite or (countably) infinite. A finite subsequence at the beginning of a trajectory T is called a prefix of T. The empty sequence is denoted \(\langle \,\rangle \). Let \(\oplus \) be the concatenation operator over sequences. For a finite trajectory T, we denote by \(LS_\sigma (T)\) the last state of T (i.e. the state component of its last element) if \(T\ne \langle \,\rangle \), and \(\sigma \) otherwise. The definition of \(T_1 \oplus T_2\) is standard if \(T_1\) is finite. If \(T_1\) is infinite or ends with the error state \(\varepsilon \), then we set \(T_1 \oplus T_2 = T_1\) for any \(T_2\) (and even if \(T_2\) is not well-defined, in other words, \(\oplus \) performs lazy evaluation of its arguments).
We denote by \(\mathcal {E}\) an evaluation function for expressions, that is standard and not detailed here. For any (pure) expression e and state \(\sigma \in \varSigma \), \(\mathcal {E} \llbracket e \rrbracket \sigma \) is the evaluation of expression e using \(\sigma \) to evaluate the variables present in e. The error state is only reached through a failed \(\texttt {assert}\). Thanks to the assumption that all potentially failing statements are protected by assertions, we do not need to model errors in expressions or other statements: errors always occur in assertions. We also suppose for simplicity that all variables appearing in p are initialized in any initial state of p, that ensures the absence of expressions that cannot be evaluated due to an uninitialized variable. These assumptions slightly simplify the presentation without loss of generality for our purpose: loops and errors (in assertions) are present in the language.
Figure 4 gives the inductive definition of \(\mathcal {T}\) for any valid state \(\sigma \in \varSigma \). The definitions for a loop and a conditional rely on the notation \((v \rightarrow T_1, T_2)\) also defined in Fig. 4. For any state \(\sigma \), variable x and value v, \(\sigma [x \leftarrow v]\) denotes \(\sigma \) overridden by the association \(x \mapsto v\). Notice that in the definitions for a sequence and a loop, it is important that \(\oplus \) does not evaluate the second parameter when the first trajectory is infinite or ends with the error state since the execution of the remaining part is not defined in this case. Thus \(\varepsilon \) can appear only once at the very end of a trajectory.
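As a small worked illustration of these definitions (our own toy example with made-up labels, not taken from the paper), consider the program \(p_0\) = l1 : x = 1; l2 : assert(x > 1, l3); l3 : x = 2, executed from a valid state \(\sigma \). The assignment at l1 contributes the pair \((l_1,\sigma [x \leftarrow 1])\); the assertion at l2 then fails, so the trajectory ends in the error state and l3 is never executed: \(\mathcal {T} \llbracket p_0 \rrbracket \sigma = \langle (l_1,\sigma [x \leftarrow 1]),(l_2,\varepsilon )\rangle \). The lazy behaviour of \(\oplus \) described above is what prevents anything from being appended after \(\varepsilon \).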
Trajectory-based semantics of the language (for a valid state \(\sigma \in \varSigma \)) We illustrate these definitions on slice (b) of Fig. 1, denoted \(p_b\). For every initial state \(\sigma \) of \(p_b\) and unsigned integer i, we define \(\sigma ^i\) = \(\sigma [s_1 \leftarrow (i\cdot a[0]\ {\text {mod}} M_u)]\), where \(M_u\) denotes the maximal representable value of an unsigned integer. Then the trajectory on \(\sigma _3\) is infinite, while the trajectory on \(\sigma _5\) leads to an error: 4 Relaxed Program Slicing 4.1 Control and Data Dependences Let L(p) denote the set of labels of program p. Let us consider here a more general slicing criterion defined as a subset of labels \(L_0\subseteq L(p)\), and construct a slice with respect to all statements whose labels are in \(L_0\). In particular, this generalization can be very useful when one wants to perform V&V on a slice with respect to several threatening statements. In this work we focus on dependence-based slicing, where a dependence relation \(\mathcal {D}\subseteq L(p)\times L(p)\) is used to construct a slice. We write \(l\xrightarrow [p]{\mathcal {D}}l'\) to indicate that \(l'\) depends on l according to \(\mathcal {D}\), i.e. \((l,l')\in \mathcal {D}\). The definitions of control and data dependencies, denoted respectively \(\mathcal {D}_c\) and \(\mathcal {D}_d\), are standard, and given following [6]. Definition 1 (Control Dependence \(\mathcal {D}_c\)). The control dependencies in p are defined by \(\mathtt{if}\) and \(\mathtt{while}\) statements in p as follows: $$\begin{aligned} \begin{array}{c} {for \ any \ statement \ }{} \texttt {if }\ (l: b)\ q\ \texttt {else }\ r {\ and \ } l' \in L(q)\cup L(r), {\ we \ define \ } l \xrightarrow [p]{\mathcal {D}_c} l';\\ {for \ any \ statement \ }{} \texttt {while }\ (l: b)\ q {\ and \ } l' \in L(q), {\ we \ define \ } l \xrightarrow [p]{\mathcal {D}_c} l'. \end{array} \end{aligned}$$ For instance, in Fig. 1a, lines 5–7 are control-dependent on line 4, while lines 13–15 are control-dependent on line 12. To define data dependence, we need the notion of (finite syntactic) paths. Let us denote again by \(\oplus \) the concatenation of paths, extend \(\oplus \) to sets of paths as the set of concatenations of their elements, and denote by "\(*\)" Kleene closure. (Finite Syntactic Paths). The set of finite syntactic paths \(\mathcal {P}(p)\) of a program p is inductively defined as follows: For a given label l, let \(\mathop {\mathrm {def}}\nolimits (l)\) denote the set of variables defined at l (that is, \(\mathop {\mathrm {def}}\nolimits (l)=\{v\}\) if l is an assignment of variable v, and \(\emptyset \) otherwise), and let \(\mathop {\mathrm {ref}}\nolimits (l)\) be the set of variables referenced at l. If l designates a conditional (or a loop) statement, \(\mathop {\mathrm {ref}}\nolimits (l)\) is the set of variables appearing in the condition; other variables appearing in its branches (or loop body) do not belong to \(\mathop {\mathrm {ref}}\nolimits (l)\). We denote by \(\mathop {\mathrm {used}}\nolimits (l)\) the set \(\mathop {\mathrm {def}}\nolimits (l) \cup \mathop {\mathrm {ref}}\nolimits (l)\). (Data Dependence \(\mathcal {D}_d\)). Let l and \(l'\) be labels of a program p. 
We say that there is a data dependency \(l \xrightarrow [p]{\mathcal {D}_d} l'\) if \( \mathop {\mathrm {def}}\nolimits (l)\ne \emptyset \) and \(\mathop {\mathrm {def}}\nolimits (l) \subseteq \mathop {\mathrm {ref}}\nolimits (l')\) and there exists a path \(\pi =\pi _1 l \pi _2 l' \pi _3 \in \mathcal {P}(p)\) such that for all \(l'' \in \pi _2\), \(\mathop {\mathrm {def}}\nolimits (l'') \ne \mathop {\mathrm {def}}\nolimits (l)\). Each \(\pi _i\) may be empty. For instance, in Fig. 1b, line 18 is data-dependent on line 1 (with \(\pi =1,3,4,17,18\)) and on line 6 (with \(\pi =1,3,4,5,6,7,4,17,18\)), while line 6 is data-dependent on lines 1, 3, 6 and 7. A slice of p is expected to be a quotient of p, that is, a well-formed program obtained from p by removing zero, one or more statements. A quotient can be identified by the set of labels of preserved statements. Notice that when a conditional (or a loop) statement is removed, it is removed with all statements of its both branches (or its loop body) to preserve the structure of the initial program in the quotient. Given a dependence relation \(\mathcal {D}\) and \(L_0\subseteq L(P)\), the slice based on \(\mathcal {D}\) w.r.t. \(L_0\) will be also identified by the set of labels of preserved statements. The following lemma justifies the correctness of the definitions of slices given hereafter. We denote by \(\mathcal {D}^{*}\) the reflexive transitive closure of \(\mathcal {D}\), and by \((\mathcal {D}^*)^{-1}(L_0)\) the set of all labels \(l'\in L(p)\) such that there exists \(l \in L_0\) with \(l' \xrightarrow [p]{\mathcal {\mathcal {D}^{*}}} l\). Lemma 1 Let \(L_0\subseteq L(P)\). If \(\mathcal {D}\) is a dependence relation on p such that \(\mathcal {D}_c \subseteq \mathcal {D}\), then \((\mathcal {D}^*)^{-1}(L_0)\) is the set of labels of a (uniquely defined) quotient of p. Lemma 1 can be easily proven by structural induction. It allows us to define a slice as the set of statements on which the statements in \(L_0\) are (directly or indirectly) dependent. (Dependence-based Slice). Let \(\mathcal {D}\) be a dependence relation on p such that \(\mathcal {D}_c \subseteq \mathcal {D}\), and \(L_0\subseteq L(P)\). A dependence-based slice of p based on \(\mathcal {D}\) with respect to \(L_0\) is the quotient of p whose set of labels is \((\mathcal {D}^*)^{-1}(L_0)\). A classic dependence-based slice of p with respect to \(L_0\) is based on \(\mathcal {D}=\mathcal {D}_c \cup \mathcal {D}_d\). 4.2 Assertion Dependence and Relaxed Slices Soundness of classic slicing for programs without runtime errors or non-terminating loops can be expressed by Property 1 in Sect. 2. As we illustrated, to generalize this property in presence of runtime errors and for non-terminating executions one would need to add additional dependencies and systematically preserve in the slice all potentially erroneous or non-terminating statements executed before (a statement of) the slicing criterion. We propose here an alternative approach, called relaxed slicing, where only one additional dependency type is considered. (Assertion Dependence \(\mathcal {D}_a\)). For every assertion \(l : \texttt {assert }\ (b, l')\) in p with \(l,l'\in L(p)\), we define an assertion dependency \(l \xrightarrow [p]{\mathcal {D}_a} l'.\) (Relaxed Slice). A relaxed slice of p with respect to \(L_0\) is the quotient of p whose set of labels is \((\mathcal {D}^*)^{-1}(L_0)\), where \(\mathcal {D}=\mathcal {D}_c \cup \mathcal {D}_d\cup \mathcal {D}_a\). For instance, in Fig. 
1a, there would be an assertion dependence of each threatening statement on the corresponding protecting assertion (written on the previous line). Therefore both slices (b) and (c) of Fig. 1 (in which we artificially preserved assertions in Sect. 2) are in fact relaxed slices where assertions are naturally preserved thanks to the assertion dependence. Assertion dependence brings two benefits. It ensures that a potentially threatening instruction is never kept without its protecting assertion. At the same time, an assertion can be preserved without its protected statement, that is quite useful for V&V that focus on assertions: slicing w.r.t. assertions may produce smaller slices if we do not need the whole threatening statement. For example, a relaxed slice w.r.t. the assertion at line 17 would contain only this unique line. Notice that a relaxed slice does not require to include potentially erroneous or non-terminating statements that can prevent the slicing criterion from being executed (like in [18, 25, 26]). For example, slice (b) does not include the potential error at line 13, and slice (c) does not include the loop of line 4. 4.3 Soundness of Relaxed Slicing We cannot directly compare the trajectory of the original program with a slice, since it may refer to statements and variables not preserved in the slice. We use projections of trajectories that reduce them to selected labels and variables. (Projection of a State). The projection of a state \(\sigma \) to a set of variables V, denoted \(\sigma {\downarrow }V\), is the restriction of \(\sigma \) to V if \(\sigma \ne \varepsilon \), and \(\varepsilon \) otherwise. (Projection of a Trajectory). The projection of a one-element sequence \(\langle (l, \sigma )\rangle \) to a set of labels L, denoted \(\langle (l, \sigma ) \rangle {\downarrow }L\), is defined as follows: $$\begin{aligned} \langle (l, \sigma )\rangle {\downarrow }L = {\left\{ \begin{array}{ll} \langle (l, \sigma {\downarrow }\mathop {\mathrm {used}}\nolimits (l))\rangle &{}\text {if } l \in L,\\ \langle \,\rangle &{}\text {otherwise.} \end{array}\right. } \end{aligned}$$ The projection of a trajectory \(T = \langle (l_1, \sigma _1) \dots (l_k, \sigma _k) \dots \rangle \) to L, denoted \(\mathop {\mathrm {Proj}}\nolimits _L(T)\), is defined element-wise: \( \mathop {\mathrm {Proj}}\nolimits _L(T) = \langle (l_1, \sigma _1)\rangle {\downarrow }L \,\oplus {} \ldots \oplus \, \langle (l_k, \sigma _k)\rangle {\downarrow }L \,\oplus \dots \, .\) We can now state and prove the soundness property of relaxed slices. Theorem 1 (Soundness of a Relaxed Slice). Let \(L_0\subseteq L(p)\) be a slicing criterion of program p. Let q be the relaxed slice of p with respect to \(L_0\), and \(L=L(q)\) the set of labels preserved in q. 
Then for any initial state \(\sigma \in \varSigma \) of p and finite prefix T of \(\mathcal {T}\llbracket p \rrbracket \sigma \), there exists a prefix \(T'\) of \(\mathcal {T}\llbracket q \rrbracket \sigma \), such that: $$\begin{aligned} \mathop {\mathrm {Proj}}\nolimits _L(T) = \mathop {\mathrm {Proj}}\nolimits _L(T') \end{aligned}$$ Moreover, if p terminates without error on \(\sigma \), \(\mathcal {T}\llbracket p \rrbracket \sigma \) and \(\mathcal {T}\llbracket q \rrbracket \sigma \) are finite, and $$\begin{aligned} \mathop {\mathrm {Proj}}\nolimits _L(\mathcal {T}\llbracket p \rrbracket \sigma ) = \mathop {\mathrm {Proj}}\nolimits _L(\mathcal {T}\llbracket q \rrbracket \sigma ) \end{aligned}$$ Let \(\sigma \in \varSigma \), \(\mathcal {T}\llbracket p \rrbracket \sigma =\langle (l_1, \sigma _1) (l_2, \sigma _2)\dots \rangle \), and \(\mathcal {T}\llbracket q \rrbracket \sigma = \langle (l'_1, \sigma '_1)(l'_2, \sigma '_2) \dots \rangle \). Let \(T= \langle (l_1, \sigma _1) \dots (l_i, \sigma _i) \rangle \) be a finite prefix of \(\mathcal {T}\llbracket p \rrbracket \sigma \). By Definition 8, the projections of \(\mathcal {T}\llbracket q \rrbracket \sigma \) and T to \(L=L(q)\) have the following form $$\begin{aligned} \begin{array}{rcl} \mathop {\mathrm {Proj}}\nolimits _L(\mathcal {T}\llbracket q \rrbracket \sigma ) &{}=&{} \langle \, (l'_1, \sigma '_1 {\downarrow }\mathop {\mathrm {used}}\nolimits (l'_1)) (l'_2, \sigma '_2 {\downarrow }\mathop {\mathrm {used}}\nolimits (l'_2)) \dots \,\rangle , \\ \mathop {\mathrm {Proj}}\nolimits _L(T) &{}=&{} \langle \, (l_{f(1)}, \sigma _{f(1)} {\downarrow }\mathop {\mathrm {used}}\nolimits (l_{f(1)})) \,\dots \, (l_{f(j)}, \sigma _{f(j)} {\downarrow }\mathop {\mathrm {used}}\nolimits (l_{f(j)})) \,\rangle , \end{array} \end{aligned}$$ where \(j \le i\) and f is a strictly increasing function. Let us denote by k the greatest natural number such that \(k \le j\) and such that the prefix of \(\mathcal {T}\llbracket q \rrbracket \sigma \) of length k exists and satisfies \((\mathop {\mathrm {Proj}}\nolimits _L(T))^k = \mathop {\mathrm {Proj}}\nolimits _L((\mathcal {T}\llbracket q \rrbracket \sigma )^k)\), where we denote by \(U^k\) the prefix of length k for any trajectory U. Let \(T' = \langle (l'_1, \sigma '_1) \dots (l'_k, \sigma '_k) \rangle \) be the prefix \((\mathcal {T}\llbracket q \rrbracket \sigma )^k\). By Definition 8 we have $$\begin{aligned} \mathop {\mathrm {Proj}}\nolimits _L(T') = \langle \, (l'_1, \sigma '_1 {\downarrow }\mathop {\mathrm {used}}\nolimits (l'_1)) \,\dots \, (l'_k, \sigma '_k {\downarrow }\mathop {\mathrm {used}}\nolimits (l'_k)) \,\rangle . \end{aligned}$$ Since \((\mathop {\mathrm {Proj}}\nolimits _L(T))^k = \mathop {\mathrm {Proj}}\nolimits _L(T')\), for any \(m=1,2,\dots ,k\) we have \(l'_m=l_{f(m)}\) and \(\sigma '_m {\downarrow }\mathop {\mathrm {used}}\nolimits (l'_m)=\sigma _{f(m)}{\downarrow }\mathop {\mathrm {used}}\nolimits (l_{f(m)})\). Set \(\sigma _0 = \sigma '_0 = \sigma \). Let us prove that \(k = j\). We reason by contradiction and assume that \(k < j\). By maximality of k, there can be three different cases: \(\mathcal {T}\llbracket q \rrbracket \sigma \) is of size k, or \(l'_{k+1}\) exists, but \(l'_{k+1} \ne l_{f(k+1)}\), or \(l'_{k+1}\) exists, \(l'_{k+1} = l_{f(k+1)}\), but \(\sigma '_{k+1} {\downarrow }\mathop {\mathrm {used}}\nolimits (l'_{k+1}) \ne \sigma _{f(k+1)} {\downarrow }\mathop {\mathrm {used}}\nolimits (l_{f(k+1)})\). 
Since \(l'_k = l_{f(k)}\), cases 1 and 2 can be only due to a diverging evaluation of a control flow statement (i.e. \(\texttt {if}\), \(\texttt {while}\) or \(\texttt {assert}\)) situated in the execution of p between \(l_{f(k)}\) and \(l_{f(k+1)-1}\). If such a statement occurs at label \(l'_k = l_{f(k)}\), its condition would be evaluated identically in both executions since \(\sigma '_{k} {\downarrow }\mathop {\mathrm {used}}\nolimits (l'_{k}) = \sigma _{f(k)} {\downarrow }\mathop {\mathrm {used}}\nolimits (l_{f(k)})\). The first non-equal label \(l_{f(k+1)}\) cannot be part of the body of some non-preserved \(\texttt {if}\) or \(\texttt {while}\) statement between \(l_{f(k)}+1\) and \(l_{f(k+1)-1}\) in p by definition of control dependence (cf. Definition 1). Finally, the divergence cannot be due to an \(\texttt {assert}\) in p between \(l_{f(k)+1}\) and \(l_{f(k+1)-1}\) either, because a passed \(\texttt {assert}\) has no effect, while a failing \(\texttt {assert}\) would make it impossible to reach \(l_{f(k+1)}\) in p. Thus a divergence leading to cases 1 and 2 is impossible. In case 3, the key idea is to remark that \(\sigma '_{k} {\downarrow }\mathop {\mathrm {ref}}\nolimits (l'_{k+1}) = \sigma _{f(k+1)-1} {\downarrow }\mathop {\mathrm {ref}}\nolimits (l_{f(k+1)})\). Indeed, assume that there is a variable \(v \in \mathop {\mathrm {ref}}\nolimits (l'_{k+1})=\mathop {\mathrm {ref}}\nolimits (l_{f(k+1)})\) such that \(\sigma '_{k}(v) \ne \sigma _{f(k+1)-1}(v)\). The last assignment to v in the execution of p before its usage at \(l_{f(k+1)}\) must be preserved in q because of data dependence (cf. Definition 3), so it has a label \(l'_{u}=l_{f(u)}\) for some \(1\leqslant u \leqslant k.\) By definition of k, the state projections after this statement were equal: \(\sigma '_{u} {\downarrow }\mathop {\mathrm {used}}\nolimits (l'_{u}) = \sigma _{f(u)} {\downarrow }\mathop {\mathrm {used}}\nolimits (l_{f(u)})\), so the last values assigned to v before its usage at \(l_{f(k+1)}\) were equal, that contradicts the assumption \(\sigma '_{k}(v) \ne \sigma _{f(k+1)-1}(v)\). This shows that all variables referenced in \(l_{f(k+1)}\) have the same values, so the resulting states cannot differ, and case 3 is not possible either. Therefore \(k=j\), and \(T'\) satisfies \(\mathop {\mathrm {Proj}}\nolimits _L(T)=\mathop {\mathrm {Proj}}\nolimits _L(T')\). If p terminates without error on \(\sigma \), by the first part of the theorem we have a prefix \(T'\) of \(\mathcal {T}\llbracket q\rrbracket \sigma \) such that \(\mathop {\mathrm {Proj}}\nolimits _L(\mathcal {T}\llbracket p\rrbracket \sigma )=\mathop {\mathrm {Proj}}\nolimits _L(T')\). If \(T'\) is a strict prefix of \(\mathcal {T}\llbracket q\rrbracket \sigma \), this means as before that a control flow statement executed in p causes the divergence of the two trajectories. By hypothesis, there are no failing assertions in the execution of p, therefore it is due to an \(\texttt {if}\) or a \(\texttt {while}\). By the same reasoning as in cases 1, 2 above we show that its condition must be evaluated in the same way in both trajectories and cannot lead to a divergence. Therefore, \(T' = \mathcal {T}\llbracket q \rrbracket \sigma \). \(\square \) 5 Verification on Relaxed Slices In this section, we show how the absence and the presence of errors in relaxed slices can be soundly interpreted in terms of the initial program. Let q be a relaxed slice of p and \(\sigma \in \varSigma \) an initial state of p. 
If the preserved assertions do not fail in the execution of q on \(\sigma \), they do not fail in the execution of p on \(\sigma \) either. Let us show the contrapositive. Assume that \(\mathcal {T} \llbracket p \rrbracket \sigma \) ends with \((l, \varepsilon )\) where \(l\in L(q)\) is a preserved assertion. Let \(L=L(q)\). From Theorem 1 applied to \(T = \mathcal {T} \llbracket p \rrbracket \sigma \), it follows that there exists a finite prefix \(T'\) of \(\mathcal {T} \llbracket q \rrbracket \sigma \) such that \(\mathop {\mathrm {Proj}}\nolimits _{L}(T) = \mathop {\mathrm {Proj}}\nolimits _{L}(T')\). The last state of \(\mathop {\mathrm {Proj}}\nolimits _{L}(T')\) is \(\varepsilon \), therefore the last state of \(T'\) is \(\varepsilon \) too. It means that \(\varepsilon \) appears in \(\mathcal {T} \llbracket q \rrbracket \sigma \), and by definition of semantics (cf. Sect. 3) this is possible only if \(\varepsilon \) is its last state. Therefore \(\mathcal {T} \llbracket q \rrbracket \sigma \) ends with \((l, \varepsilon )\) as well. \(\square \) The following theorem and corollary immediately follow from Lemma 2. Let q be a relaxed slice of p. If all assertions contained in q never fail, then the corresponding assertions in p never fail either. Corollary 1 Let \(q_1,\ \dots ,\ q_n\) be relaxed slices of p such that each assertion in p is preserved in at least one of the \(q_i\). If no assertion in any \(q_i\) fails, then no assertion fails in p. The last result justifies the detection of errors in a relaxed slice. Let q be a relaxed slice of p and \(\sigma \in \varSigma \) an initial state of p. We assume that \(\mathcal {T} \llbracket q \rrbracket \sigma \) ends with an error state. Then one of the following cases holds for p: \((\dagger )\)\(\mathcal {T} \llbracket p \rrbracket \sigma \) ends with an error at the same label, or \((\dagger \dagger )\)\(\mathcal {T} \llbracket p \rrbracket \sigma \) ends with an error at a label not preserved in q, or \((\dagger \dagger \dagger )\)\(\mathcal {T} \llbracket p \rrbracket \sigma \) is infinite. Let \(L=L(q)\) and assume that \(\mathcal {T}\llbracket q \rrbracket \sigma \) ends with \((l,\varepsilon )\) for some preserved assertion at label \(l\in L\). We reason by contradiction and assume that \(\mathcal {T}\llbracket p \rrbracket \sigma \) does not satisfy any of the three cases. Then two cases are possible. First, \(\mathcal {T}\llbracket p \rrbracket \sigma \) ends with \((l',\varepsilon )\) for another preserved assertion at label \(l'\in L\) (with \(l'\ne l\)). Then reasoning as in the proof of Lemma 2 we show that \(\mathcal {T}\llbracket q \rrbracket \sigma \) ends with \((l',\varepsilon )\) as well, that contradicts \(l'\ne l\). Second, \(\mathcal {T}\llbracket p \rrbracket \sigma \) is finite without error. Then the second part of Theorem 1 can be applied and thus \(\mathop {\mathrm {Proj}}\nolimits _L(\mathcal {T}\llbracket p \rrbracket \sigma ) = \mathop {\mathrm {Proj}}\nolimits _L(\mathcal {T}\llbracket q \rrbracket \sigma )\). This is contradictory since \(\mathcal {T}\llbracket q \rrbracket \sigma \) contains an error (at label \(l\in L\)) and \(\mathcal {T}\llbracket p \rrbracket \sigma \) does not. \(\square \) For instance, consider the example of Fig. 1 with \(0<\)k\(\leqslant 100\), \(0<\)N\(\leqslant 100\). In this case we can prove that slice (b) does not contain any error, thus we can deduce by Theorem 2 that the assertions at lines 5 and 17 (preserved in slice (b)) never fail in the initial program either. 
If in addition we replace N/k by (N-1)/k at line 11 of Fig. 1, we can show that neither of the two slices of Fig. 1 contains any error. Since these slices cover all assertions, we can deduce by Corollary 1 that the initial program is error-free. Theorem 3 shows that despite the fact that an error detected in q does not necessary appear in p, the detection of errors on q has a precise interpretation. It can be particularly meaningful for programs supposed to terminate, for which a non-termination within some time \(\tau \) is seen as an anomaly. In this case, detection of errors in a slice is sound in the sense that if an error is found in q for initial state \(\sigma \), there is an anomaly (same or earlier error, or non-termination within time \(\tau \)) in p whose type can be easily determined by running p on \(\sigma \). It can be noticed that a result similar to Theorem 3 can be established for non-termination: if \(\mathcal {T} \llbracket q \rrbracket \sigma \) is infinite, then either \((\dagger \dagger )\) or \((\dagger \dagger \dagger )\) holds for p. 6 Related Work Weiser [34] introduced the basics of intraprocedural and interprocedural static slicing. A thorough survey provided in [30] explores both static and dynamic slicing and compares the different approaches. It also lists the application areas of program slicing. More recent surveys can be found at [9, 29, 35]. Foundations of program slicing have been studied e.g. in [4, 5, 6, 8, 11, 14, 20, 26, 27, 28]. This section presents a selection of works that are most closely related to the present paper. Debugging and Dynamic Slicing. Program debugging and testing are traditional application domains of slicing (e.g. [2, 19, 33]) where it can be used to better understand an already detected error, to prioritize test cases (e.g. in regression testing), simplify a program before testing, etc. In particular, dynamic slicing [8] is used to simplify the program for a given (e.g. erroneous) execution. However, theoretical foundations of applying V&V on slices instead of the initial program (like in [13, 22]) in presence of errors and non-termination, that constitute the main purpose of this work, have been only partially studied. Slicing and Non-terminating Programs. A few works tried to propose a semantics preserved by classic slicing even in presence of non-termination. Among them, we can cite the lazy semantics of [11], and the transfinite one of [16], improved by [24]. Another semantics proposed in [6] has several improvements compared to the previous ones: it is intuitive and substitutive. Despite the elegance of these proposals, they turn out to be unsuitable for our purpose because they consider non-existing trajectories, that are not adapted to V&V techniques, for example, based on path-oriented testing like in [13, 15]. Ranganath, et al. [26] provides foundations for the slicing of modern programs, i.e. programs with exceptions and potentially infinite loops, represented by control flow graphs (CFG) and program dependence graphs (PDG). Their work gives two definitions of control dependence, non-termination sensitive and non-termination insensitive, corresponding respectively to the weak and strong control dependences of [25] and further generalized for any finite directed graph in [14]. [26] also establishes the soundness of classic slicing with non-termination sensitive control dependence in terms of weak bisimulation, more adapted to deal with infinite executions. 
Their approach requires to preserve all loops, that results in much bigger slices than in relaxed slicing. Amtoft [4] establishes a soundness property for non-termination insensitive control dependence in terms of simulation. Ball and Horwitz [5] describes program slicing for arbitrary control flow. Amtoft and Ball [4, 5] state that an execution in the initial program can be a prefix of that in a slice, without carefully formalizing runtime errors. Our work establishes a similar property, and in addition performs a complete formalization of slicing in presence of errors and non-termination, explicitly formalizes errors by assertions and deduces several results on performing V&V on slices. Slicing in Presence of Errors. Harman, et al. [18] notes that classic algorithms only preserve a lazy semantics. To obtain correct slices with respect to a strict semantics, it proposes to preserve all potentially erroneous statements through adding pseudo-variables in the \(\mathop {\mathrm {def}}\nolimits (l)\) and \(\mathop {\mathrm {ref}}\nolimits (l)\) sets of all potentially erroneous statements l. Our approach is more fine-grained in the sense that we can independently select assertions to be preserved in the slice and to be considered by V&V on this slice. This benefit comes from our dedicated formalization of errors with assertions and a rigorous proof of soundness using a trajectory-based semantics. In addition, we make a formal link about the presence or the absence of errors in the program and its slices. Harman and Danicic [17] uses program slicing as well as meaning-preserving transformations to analyze a property of a program not captured by its own variables. For that, it adds variables and assignments in the same idea as our assertions. Allen and Horwitz [3] extends data and control dependences for Java program with exceptions. In both papers, no formal justification is given. Certified Slicing. The ideas developed in [4, 26] were applied in [10, 31]. Wasserrab [31] builds a framework in Isabelle/HOL to formally prove a slicing defined in terms of graphs, therefore language-independent. Blazy, et al. [10] proposes an unproven but efficient slice calculator for an intermediate language of the CompCert C compiler [23], as well as a certified slice validator and a slice builder written in Coq [7]. The modeling of errors and the soundness of V&V on slices were not specifically addressed in these works. To the best of our knowledge, the present work is the first complete formalization of program slicing for structured programs in presence of errors and non-termination. Moreover, it has been formalized in the Coq proof assistant on a representative structured language, that provides a certified program slicer and justifies conducting V&V on slices instead of the initial program. In many domains, modern software has become very complex and increasingly critical. This explains both the growing efforts on verification and validation (V&V) and, in many cases, the difficulties to analyze the whole program. We revisit the usage of program slicing to simplify the program before V&V, and study how it can be performed in a sound way in presence of possible runtime errors (that we model by assertions) and non-terminating loops. 
Rather than preserving more statements in a slice in order to satisfy the classic soundness property (stating an equality of whole trajectory projections), we define smaller, relaxed slices where only assertions are kept in addition to classic control and data dependences, and prove a weaker soundness property (relating prefixes of trajectory projections). It allows us to formally justify V&V on relaxed slices instead of the initial program, and to give a complete sound interpretation of presence or absence of errors in slices. First experiments with \({\textsc {Sante}}\) [12, 13], where all-path testing is used on relaxed slices to confirm or invalidate alarms initially detected by value analysis, show that using relaxed slicing allowed to reduce the program in average by 51 % (going up to 97 % for some examples) and accelerated V&V in average by 43 %. The present study has been formalized in Coq for a representative programming language with assertions and loops, and the results of this paper (as well as many helpful additional lemmas on dependencies and slices) were proved in Coq, providing a certified correct-by-construction slicer for the considered language [1]. This Coq formalization represents an effort of 8 person-months of intensive Coq development resulting in more than 10,000 lines of Coq code. Future work includes a generalization to a wider class of errors, an extension to a realistic programming language and a certification of a complete verification technique relying on program slicing. Another research direction is to precisely measure the reduction rate and benefits for V&V of relaxed slicing compared to slicing approaches systematically introducing dependencies on previous loops and erroneous statements. In an ongoing work in DEWI project, we apply relaxed slicing for verification of protocols of wireless sensor networks. Temporal errors (e.g. use-after-free in C) cannot be directly represented in this way. Formally, using the notation introduced hereafter in the paper (cf. Definition 8), their projections are equal: \(\mathop {\mathrm {Proj}}\nolimits _L(\mathcal {T}\llbracket p \rrbracket \sigma ) = \mathop {\mathrm {Proj}}\nolimits _L(\mathcal {T}\llbracket q \rrbracket \sigma )\). By formal definitions of Sect. 4, one easily checks that line 18 is data-dependent on line 6, that is in turn data-dependent on lines 1,3,7 and control-dependent on line 4. Part of the research work leading to these results has received funding for DEWI project (www.dewi-project.eu) from the ARTEMIS Joint Undertaking under grant agreement No. 621353. The authors thank Omar Chebaro, Alain Giorgetti and Jacques Julliand for many fruitful discussions and earlier work that lead to the initial ideas of this paper. Many thanks to the anonymous reviewers for lots of very helpful suggestions. Formalization of relaxed slicing (2016). http://perso.ecp.fr/~lechenetjc/slicing/ Agrawal, H., DeMillo, R.A., Spafford, E.H.: Debugging with dynamic slicing and backtracking. Softw. Pract. Exper. 23(6), 589–616 (1993)CrossRefGoogle Scholar Allen, M., Horwitz, S.: Slicing java programs that throw and catch exceptions. In: PEPM 2003, pp. 44–54 (2003)Google Scholar Amtoft, T.: Slicing for modern program structures: a theory for eliminating irrelevant loops. Inf. Process. Lett. 106(2), 45–51 (2008)MathSciNetCrossRefzbMATHGoogle Scholar Ball, T., Horwitz, S.: Slicing programs with arbitrary control-flow. In: Fritzson, P.A. (ed.) AADEBUG 1993. LNCS, vol. 749, pp. 206–222. 
Springer, Heidelberg (1993)CrossRefGoogle Scholar Barraclough, R.W., Binkley, D., Danicic, S., Harman, M., Hierons, R.M., Kiss, A., Laurence, M., Ouarbya, L.: A trajectory-based strict semantics for program slicing. Theor. Comp. Sci. 411(11–13), 1372–1386 (2010)MathSciNetCrossRefzbMATHGoogle Scholar Bertot, Y., Castéran, P.: Interactive Theorem Proving and Program Development. Springer, Heidelberg (2004)CrossRefzbMATHGoogle Scholar Binkley, D., Danicic, S., Gyimóthy, T., Harman, M., Kiss, Á., Korel, B.: Theoretical foundations of dynamic program slicing. Theor. Comput. Sci. 360(1–3), 23–41 (2006)MathSciNetCrossRefzbMATHGoogle Scholar Binkley, D., Harman, M.: A survey of empirical results on program slicing. Adv. Comput. 62, 105–178 (2004)CrossRefGoogle Scholar Blazy, S., Maroneze, A., Pichardie, D.: Verified validation of program slicing. CPP 2015, 109–117 (2015)Google Scholar Cartwright, R., Felleisen, M.: The semantics of program dependence. In: PLDI (1989)Google Scholar Chebaro, O., Cuoq, P., Kosmatov, N., Marre, B., Pacalet, A., Williams, N., Yakobowski, B.: Behind the scenes in SANTE: a combination of static and dynamic analyses. Autom. Softw. Eng. 21(1), 107–143 (2014)CrossRefGoogle Scholar Chebaro, O., Kosmatov, N., Giorgetti, A., Julliand, J.: Program slicing enhances a verification technique combining static and dynamic analysis. In: SAC (2012)Google Scholar Danicic, S., Barraclough, R.W., Harman, M., Howroyd, J., Kiss, Á., Laurence, M.R.: A unifying theory of control dependence and its application to arbitrary program structures. Theor. Comput. Sci. 412(49), 6809–6842 (2011)MathSciNetCrossRefzbMATHGoogle Scholar Ge, X., Taneja, K., Xie, T., Tillmann, N.: DyTa: dynamic symbolic execution guided with static verification results. In: the 33rd International Conference on Software Engineering (ICSE 2011), pp. 992–994. ACM (2011)Google Scholar Giacobazzi, R., Mastroeni, I.: Non-standard semantics for program slicing. High. Order Symbolic Comput. 16(4), 297–339 (2003)CrossRefzbMATHGoogle Scholar Harman, M., Danicic, S.: Using program slicing to simplify testing. Softw. Test. Verif. Reliab. 5(3), 143–162 (1995)CrossRefGoogle Scholar Harman, M., Simpson, D., Danicic, S.: Slicing programs in the presence of errors. Formal Aspects Comput. 8(4), 490–497 (1996)CrossRefzbMATHGoogle Scholar Hierons, R.M., Harman, M., Danicic, S.: Using program slicing to assist in the detection of equivalent mutants. Softw. Test. Verif. Reliab. 9(4), 233–262 (1999)CrossRefGoogle Scholar Horwitz, S., Reps, T., Binkley, D.: Interprocedural slicing using dependence graphs. In: PLDI (1988)Google Scholar Kirchner, F., Kosmatov, N., Prevosto, V., Signoles, J., Yakobowski, B.: Frama-C: a software analysis perspective. Formal Asp. Comput. 27(3), 573–609 (2015)MathSciNetCrossRefGoogle Scholar Kiss, B., Kosmatov, N., Pariente, D., Puccetti, A.: Combining static and dynamic analyses for vulnerability detection: illustration on heartbleed. In: Piterman, N., et al. (eds.) HVC 2015. LNCS, vol. 9434, pp. 39–50. Springer, Heidelberg (2015). doi: 10.1007/978-3-319-26287-1_3 CrossRefGoogle Scholar Leroy, X.: Formal verification of a realistic compiler. Commun. ACM 52(7), 107–115 (2009)CrossRefGoogle Scholar Nestra, H.: Transfinite semantics in the form of greatest fixpoint. J. Log. Algebr. Program. 78(7), 573–592 (2009)MathSciNetCrossRefzbMATHGoogle Scholar Podgurski, A., Clarke, L.A.: A formal model of program dependences and its implications for software testing, debugging, and maintenance. IEEE Trans. Softw. Eng. 
16(9), 965–979 (1990)CrossRefGoogle Scholar Ranganath, V.P., Amtoft, T., Banerjee, A., Hatcliff, J., Dwyer, M.B.: A new foundation for control dependence and slicing for modern program structures. ACM Trans. Program. Lang. Syst. 29(5) (2007). Article number (27)Google Scholar Reps, T.W., Yang, W.: The semantics of program slicing and program integration. In: TAPSOFT (1989)Google Scholar Reps, T.W., Yang, W.: The semantics of program slicing. Technical report, University of Wisconsin (1988)Google Scholar Silva, J.: A vocabulary of program slicing-based techniques. ACM Comput. Surv. 44(3), 12 (2012)CrossRefzbMATHGoogle Scholar Tip, F.: A survey of program slicing techniques. J. Prog. Lang. 3(3), 121–189 (1995)Google Scholar Wasserrab, D.: From formal semantics to verified slicing: a modular framework with applications in language based security. Ph.D. thesis, Karlsruhe Inst. of Techn (2011)Google Scholar Weiser, M.: Program slicing. In: ICSE (1981)Google Scholar Weiser, M.: Programmers use slices when debugging. Commun. ACM 25(7), 446–452 (1982)CrossRefGoogle Scholar Weiser, M.: Program slicing. IEEE Trans. Softw. Eng. 10(4), 352–357 (1984)CrossRefzbMATHGoogle Scholar Xu, B., Qian, J., Zhang, X., Wu, Z., Chen, L.: A brief survey of program slicing. ACM SIGSOFT Softw. Eng. Notes 30(2), 1–36 (2005)CrossRefGoogle Scholar
Cite this paper as: Léchenet J.-C., Kosmatov N., Le Gall P. (2016) Cut Branches Before Looking for Bugs: Sound Verification on Relaxed Slices. In: Stevens P., Wąsowski A. (eds.) Fundamental Approaches to Software Engineering (FASE 2016). Lecture Notes in Computer Science, vol. 9633. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-662-49665-7_11
Express Integer as Sum of Four Squares

This is kind of a follow-up to the question I posted here about expressing integers as the sum of two squares. Is there a similar general method for expressing integers as the sum of four squares? I believe Lagrange's Four-Square Theorem states that all positive integers are expressible as the sum of four squares of integers, but how do you find these numbers? As an example, consider the value $1638$. How can we find the four squares? — Benzne_O

See also cs.stackexchange.com/q/68501/755, stackoverflow.com/q/41524508/781723, mathoverflow.net/q/259152/37212, math.stackexchange.com/q/366673/14578, math.stackexchange.com/q/483101/14578, mathoverflow.net/q/110239/37212. – D.W.

Similar to the Brahmagupta–Fibonacci two-square identity, Euler has a four-square identity which involves the sum of 4 squares: $$(a_1^2+a_2^2+a_3^2+a_4^2)(b_1^2+b_2^2+b_3^2+b_4^2) =\\ \quad(a_1b_1 - a_2b_2 - a_3b_3 - a_4b_4)^2 + (a_1b_2+a_2b_1+a_3b_4-a_4b_3)^2 +(a_1b_3 - a_2b_4 + a_3b_1 + a_4b_2)^2 + (a_1b_4 + a_2b_3 - a_3b_2 + a_4b_1)^2$$ Factor $1638$ as a product of small factors you know how to represent as sums of 4 squares. Repeatedly applying the formula will allow you to represent $1638$ itself as a sum of 4 squares. For example, let's say we have factored $1638$ as $2\cdot 3^2 \cdot 7 \cdot 13$; we have: $$\begin{align} & 2\cdot 3^2 \cdot 7 \cdot 13\\ = & (1^2+1^2+0^2+0^2)(1^2+1^2+1^2+0^2)^2(2^2+1^2+1^2+1^2)(3^2+2^2+0^2+0^2)\\ = & (0^2 + 2^2 + 1^2 + 1^2)(1^2+1^2+1^2+0^2)(2^2+1^2+1^2+1^2)(3^2+2^2+0^2+0^2)\\ = & ((-3)^2 + 1^2 + 2^2 + 2^2)(2^2+1^2+1^2+1^2)(3^2+2^2+0^2+0^2)\\ = & ((-11)^2+(-1)^2+2^2 + 0^2)(3^2+2^2+0^2+0^2)\\ = & ((-31)^2 + (-25)^2 + 6^2 + 4^2)\\ \end{align}$$ This gives you a non-trivial representation of $1638$ as $31^2 + 25^2 + 6^2 + 4^2$. In general, there are many representations of a number as a sum of 4 squares. There is a theorem: The total number of representations of a positive integer $n$ as the sum of four squares, representations that differ only in order and sign being counted as distinct, is eight times the sum of the divisors of $n$ that are not multiples of $4$. The above representation is only $1$ out of $8 \sum_{d\mid 1638, 4 \nmid d} d = 34944$ ways of representing $1638$ as a sum of 4 squares. — achille hui

$1638=2\cdot3^2\cdot7\cdot13=(1^2+1^2)3^2(2^2+1^2+1^2+1^2)(3^2+2^2)$ Now use your technique for taking the product of two sums of two squares to a sum of two squares.

Just as the Gaussian integers are a modern method to prove every prime not congruent to $3$ (mod $4$) is a sum of two squares (since $\mathbb{Z}[i]$ is a Euclidean domain), a similar method works for the so-called Hurwitz quaternions $\mathbb{H} = \{ \frac{a+bi + cj +dk}{2}: a,b,c,d \in \mathbb{Z}, a \equiv b \equiv c \equiv d ({\rm mod} 2) \}$. $\mathbb{H}$ is not a commutative ring, but behaves sufficiently like a Euclidean ring that a similar proof shows that every prime $p$ is a sum of $4$ integer squares. A full proof can be found in I.N. Herstein's "Topics in Algebra", but here is an outline: given any odd prime $p \in \mathbb{N}$, we can express $-1$ as a sum of two squares in $\mathbb{Z}/p\mathbb{Z}$. This means that there are integers $a,b$ such that $p \mid (a^{2}+b^{2}+1)$. This means that $p$ is not an irreducible element in $\mathbb{H}$, and this leads to the fact that $p = x^{2}+y^{2}+z^{2}+w^{2}$ for integers $x,y,z,w$.
Once we know that every prime has such an expression, it follows (as noted in other answers and comments) that every positive integer is a sum of $4$ integer squares. However, it should be noted that, in practice, this may not be the most efficient way to express a given positive integer as a sum of $4$ integer squares. — Geoff Robinson

Hint: Using the same condition as your previous question: if you can express two numbers which add up to 1638 each as a sum of two squares, you are through. :) — Inceptio

This formula seems wrong. Try letting $a_1=b_1$, $a_2=b_2$, $a_3=b_3$, $a_4=b_4$. The result should be $(a_1^2+a_2^2+a_3^2+a_4^2)^2$. Euler's four-square identity says that the product of two numbers, each of which is a sum of four squares, is itself a sum of four squares. Specifically: $$(a_1^2+a_2^2+a_3^2+a_4^2)(b_1^2+b_2^2+b_3^2+b_4^2)= (a_1 b_1 + a_2 b_2 + a_3 b_3 + a_4 b_4)^2 + (a_1 b_2 - a_2 b_1 + a_3 b_4 - a_4 b_3)^2 + (a_1 b_3 - a_2 b_4 - a_3 b_1 + a_4 b_2)^2 + (a_1 b_4 + a_2 b_3 - a_3 b_2 - a_4 b_1)^2$$ — Ivor lloyd
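The two ingredients used in the answers above, Euler's identity for composing representations and a direct search for a given value, are easy to turn into code. The sketch below is ours (the function names do not come from any of the answers); the brute-force routine is perfectly adequate for a number as small as $1638$, and euler_four_square uses the sign convention of the first answer.

```python
from math import isqrt

def euler_four_square(a, b):
    """Combine four-square representations a and b of two numbers into a
    four-square representation of their product (Euler's identity)."""
    a1, a2, a3, a4 = a
    b1, b2, b3, b4 = b
    return (a1*b1 - a2*b2 - a3*b3 - a4*b4,
            a1*b2 + a2*b1 + a3*b4 - a4*b3,
            a1*b3 - a2*b4 + a3*b1 + a4*b2,
            a1*b4 + a2*b3 - a3*b2 + a4*b1)

def four_squares(n):
    """Brute-force search for x >= y >= z >= w >= 0 with x^2+y^2+z^2+w^2 = n."""
    for x in range(isqrt(n), -1, -1):
        for y in range(min(x, isqrt(n - x*x)), -1, -1):
            r = n - x*x - y*y
            for z in range(min(y, isqrt(r)), -1, -1):
                w2 = r - z*z
                w = isqrt(w2)
                if w*w == w2 and w <= z:
                    return (x, y, z, w)
    return None

print(four_squares(1638))                          # (40, 6, 1, 1)
c = euler_four_square((1, 1, 0, 0), (1, 1, 1, 0))  # compose 2 = 1+1 and 3 = 1+1+1
print(c, sum(t*t for t in c))                      # (0, 2, 1, 1) 6
```

Running the composition step reproduces the intermediate representation $(0^2+2^2+1^2+1^2)$ of $2\cdot 3=6$ from the worked example above.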
Plant Methods
A low-cost and open-source platform for automated imaging
Max R. Lien, Richard J. Barker, Zhiwei Ye, Matthew H. Westphall, Ruohan Gao, Aditya Singh, Simon Gilroy & Philip A. Townsend (ORCID: 0000-0001-7003-8774). Plant Methods volume 15, Article number: 6 (2019)
Remote monitoring of plants using hyperspectral imaging has become an important tool for the study of plant growth, development, and physiology. Many applications are oriented towards use in field environments to enable non-destructive analysis of crop responses due to factors such as drought, nutrient deficiency, and disease, e.g., using tram, drone, or airplane mounted instruments. The field setting introduces a wide range of uncontrolled environmental variables that make validation and interpretation of spectral responses challenging, and as such lab- and greenhouse-deployed systems for plant studies and phenotyping are of increasing interest. In this study, we have designed and developed an open-source, hyperspectral reflectance-based imaging system for lab-based plant experiments: the HyperScanner. The reliability and accuracy of HyperScanner were validated using drought and salt stress experiments with Arabidopsis thaliana. A robust, scalable, and reliable system was created. The system was built using open-sourced parts, and all custom parts, operational methods, and data have been made publicly available in order to maintain the open-source aim of HyperScanner. The gathered reflectance images showed changes in narrowband red and infrared reflectance spectra for each of the stress tests that were evident prior to other visual physiological responses and exhibited congruence with measurements using full-range contact spectrometers. HyperScanner offers the potential for reliable and inexpensive laboratory hyperspectral imaging systems. HyperScanner was able to quickly collect accurate reflectance curves on a variety of plant stress experiments. The resulting images showed spectral differences in plants shortly after application of a treatment but before visual manifestation. HyperScanner increases the capacity for spectroscopic and imaging-based analytical tools by providing more access to hyperspectral analyses in the laboratory setting. Remote monitoring of plant physiology and biochemistry holds enormous potential for understanding plant growth and development in settings ranging from the laboratory to the field [1]. Reflectance spectroscopy is rapidly emerging as a highly effective and practical approach for the rapid, non-destructive estimation of a wide variety of chemical, biophysical, and metabolic plant traits in living tissue [2]. The technique uses variations in leaf optical properties that arise from the interaction of light and chemical bonds [3, 4]. For example, measurements of absorbance and reflectance features in the visible spectrum and out into the infrared (~ 400 to 2500 nm) have been used to directly estimate foliar structure, plant chemical composition, water content, and metabolic status [5,6,7,8,9]. Some spectral features are known to be associated with specific chemical or stress responses, such as the detection of plant physiological stress using the photochemical reflectance index [10,11,12,13]. These spectra remain a rich resource of information yet to be mined for plant studies and phenotyping, with the potential for many additional features of plant physiology and chemistry to be extracted [14, 15].
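As a concrete example of the kind of narrowband feature mentioned above, the photochemical reflectance index is conventionally computed from reflectance at 531 nm and 570 nm. The sketch below is ours and runs on placeholder data rather than real spectra; it simply shows how such an index can be pulled out of a sampled reflectance curve.

```python
import numpy as np

def band(wavelengths, reflectance, target_nm):
    """Reflectance value at the band closest to target_nm."""
    i = int(np.argmin(np.abs(np.asarray(wavelengths) - target_nm)))
    return reflectance[i]

def pri(wavelengths, reflectance):
    """Photochemical reflectance index: (R531 - R570) / (R531 + R570)."""
    r531 = band(wavelengths, reflectance, 531)
    r570 = band(wavelengths, reflectance, 570)
    return (r531 - r570) / (r531 + r570)

# Placeholder spectrum sampled every 5 nm between 400 and 1000 nm.
wl = np.arange(400, 1001, 5)
refl = np.random.default_rng(0).uniform(0.05, 0.5, size=wl.size)
print(pri(wl, refl))
```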
While such spectroscopic imaging techniques offer huge potential, they also raise many practical issues that currently limit applications [1]. For example, scheduling collections for multiple spectroscopic measurements across many samples and over many time points is often logistically difficult and even prohibitively time consuming, especially in the field where variable light conditions affect measurements [16]. These become important issues for the analysis of plant responses which tend to change rapidly in response to environmental or biotic stressors (requiring time-course data collection) and also to vary widely between different plant species or genotypes (requiring sampling from many individuals). To alleviate these setbacks, researchers and private companies have developed machines to automate most if not all of the imaging process [17,18,19,20,21,22]. The widest use of hyperspectral imaging is from sensors mounted on drones or small aircraft [23, 24]. Recently, stationary machines have also been created to scan non-moving fields or targets [1, 17, 25, 26]. In these machines, the imaging instrument is typically mounted to an optimal point and a motor-driven axis moves the plants that need to be scanned, or vice versa. The machine is controlled by software that accepts user input such as positional and instrument control commands. Although these machines can produce highly informative data from multiple samplings due to the non-invasive nature of hyperspectral imaging, there are significant drawbacks that have severely limited their accessibility to many plant researchers [27]. The main constraints with stationary imaging machines are high cost, often complex construction, and relatively large size, which all impact application in the laboratory setting [28]. For example, the LemnaTec Gmbh Lab Scanalyzer has a robust design and features a flexible arsenal of sensors, but its cost is prohibitive to most researchers [29]. Similarly, the Field Scanalyzer is suitable for field-based research, but it is even more expensive, requires a team of people to build, and can only be used in a large crop setting [18]. Custom machines have been built by researchers to provide equivalent non-invasive analysis, but a lack of information and technical details make these platforms relatively inaccessible and difficult to reproduce by most groups. In addition, differences in design for each platform mean that comparisons of results between studies can be harder to make and/or reproduce [6, 30,31,32,33]. However, with the advent of open-sourced, or so-called "maker" electronics and parts, researchers can now build these types of machines more easily. High quality, inexpensive, and well-documented tools are becoming increasingly available. Low-cost electronics and customizable materials (such as 3D printed parts) have given rise to a unique set of novel lab hardware [34, 35]. In addition, the nature of open-source materials makes sharing these new inventions easier as well [36, 37]. In this study, we have created HyperScanner: a non-invasive, lab-based system for hyperspectral imaging (Fig. 1a). Although most commercial spectrometers are not open-sourced, the HyperScanner platform itself is based entirely on other open-sourced systems and products that are also affordable. We combined an already existing open-source Computer Numerical Control (CNC) machine, the X-Carve, with custom software to create HyperScanner [38]. 
If one already has a preferred imaging instrument, the cost of the HyperScanner platform (not including the imaging spectrometer) totals less than 3000 USD. a Photo of the HyperScanner. b Modified mobile scanning head for a flat leaf experiment, equipped with the Headwall Photonics Nano Hyperspec. c Red–green–blue image of an Arabidopsis plant with a visual representation of hyperspectral data. d Visible and near infrared spectra of control and saline treatment Arabidopsis 1 day after the stress point HyperScanner's hardware and software are designed with the flexibility to tailor it to specific experiment protocols with minimal commitment of effort and time. The large scanning area allows for many plant samples, with a current capacity of about ~ 20 standard seed trays, to be studied simultaneously. A scan of two trays each containing 18 Arabidopsis thaliana plants takes approximately 5 min. In addition to HyperScanner's versatility, the design is fully modular: any part can be reengineered for a different sensor or experiment [39, 40]. For example, the dimensions of the instrument mount can be changed and 3D printed again to house a different instrument (Fig. 1b). Presently, the system is equipped with a Headwall Photonics (Bolton, MA, USA) Nano Hyperspec (Nano) Visible and Near Infrared (VNIR, 400–1000 nm) detector but has the flexibility to integrate other imaging modalities to provide an even deeper set of structural and chemical data to monitor plant performance (Fig. 1b, c). Further, the aim of this project was not only to create a low-cost and lab-based imaging machine, but also to provide documentation on its operation and construction with the goal of making this type of machine more accessible to plant researchers. Details regarding technical information, construction, and software are discussed in the methods section. We validated HyperScanner for plant research by imaging Arabidopsis plants experiencing drought or saline stress, as both are easily imposed and controlled environmental stresses [41]. Along with the readily available tools to quantify plant health by means of Red–Green–Blue (RGB) photography, Arabidopsis was chosen as the test subject because of the extensive literature characterizing its responses to a wide range of environmental conditions [42, 43]. This broad background of knowledge allows us to place insights from HyperScanner's spectral data into the broader context of physiological, biochemical, and molecular changes already characterized under those conditions. The HyperScanner was able to identify spectral shifts in the plant before any physiological harm could be detected using RGB photography [44] (Fig. 1d). If applied in the field, HyperScanner would allow the horticulturist or agronomist to amend the environment or use stress resistant varieties to ensure robust crop yields [45, 46]. HyperScanner proved to be a consistent and reliable tool that is able to collect reflectance data. The construction of HyperScanner consisted of low-cost and open-source materials, which resulted in a modular design. This approach allows the system to be modified and customized to support many different kinds of sensors and experiments. In our case, we optimized the position of the light mounts so that they provided effective lighting for an Arabidopsis experiment (Fig. 1b). Our custom software ensured that the operation of HyperScanner was not only reliable but also intuitive for the user. 
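The text does not reproduce the HyperScanner control software itself, but since the platform is built on the X-Carve, which is typically driven by a grbl controller accepting G-code over a serial link, a generic motion sketch looks roughly like the following. Everything here is an assumption for illustration only: the serial port name, feed rate, and tray coordinates are placeholders, and camera triggering is omitted because it is instrument-specific.

```python
import time
import serial  # pyserial

PORT = "/dev/ttyACM0"       # placeholder; depends on how the controller enumerates
ROW_Y_MM = [0, 150, 300]    # hypothetical Y positions of plant trays
SWEEP_X_MM = 400            # hypothetical sweep length of one scan line

def send(cnc, line):
    """Send one G-code line and read grbl's one-line reply ('ok' or 'error:n')."""
    cnc.write((line + "\n").encode())
    return cnc.readline().decode().strip()

with serial.Serial(PORT, 115200, timeout=2) as cnc:
    time.sleep(2)                 # allow the controller to finish its reset banner
    cnc.reset_input_buffer()
    send(cnc, "G21")              # millimetres
    send(cnc, "G90")              # absolute positioning
    for y in ROW_Y_MM:
        send(cnc, f"G0 X0 Y{y}")                 # rapid move to the start of a row
        send(cnc, f"G1 X{SWEEP_X_MM} F1000")     # steady sweep while the imager records
```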
We grew the plants for 19 days in preparation for the stress period and hyperspectral analysis and used daily time-lapse photography with an 8MP Raspberry Pi Camera (RGB imaging; Raspberry Pi camera V2; Adafruit, New York, NY, USA). Differences in plant morphology were observed by performing this time-lapse photography of plant growth and applying the Phenotiki analysis package [42] (Fig. 2a). a Representative images of wild type Col-0 Arabidopsis responding to drought and 500 mM NaCl stress. Plants were grown for 19 days, before applying stress treatments. Images were analyzed using the Phenotiki image analysis software: b Rosette perimeter, c rosette diameter, and d rosette area (b–d mean ± SE, n = 18 replicates). Bars represent points significantly different from control, t-test, p < 0.05 On day 19, water was added to the control plants, the salt stress sample was given 2 L of 500 mM NaCl solution as an osmotic and ionic stress, and the drought sample was allowed to dry out by withholding watering from this time point onwards. The plants' growth environment (weight, moisture level and temperature) was monitored from day 13 until the final scan, and the data is available in Additional file 1. Analysis of plant morphology is presented in Fig. 2b–d. The investigation of morphological traits showed a reduction in growth rates of the drought samples and a gradual decline of plant size in the saline group (that correlates with the bleaching of the leaves) (Fig. 2a). Visual indicators of stress response represented by trends in reduction in rosette diameter, perimeter, and area were detectable 1–2 days after the stress point (Fig. 2a). Statistical analysis of these parameters showed a significant difference 1–2 days after the saline treatment and 4–6 days after the drought period began (Fig. 2b–d). On day 23, significant effects from saline stress on all traits occurred but only a significant effect on leaf area due to drought could be seen. Drought effects on leaf diameter and perimeter were significant on days 24 and 25, respectively. Thus, using conventional morphometric analysis it was possible to see a difference on day 24 for the drought and day 21 for the saline stress. Hyperspectral scanning began on day 20 (1 day after the stress point) and continued until day 26. Radiance images were converted to absolute reflectance and vector normalized. Pixels containing plants were extracted and sample pixels (n = 2000) were used for further analysis. Reflectance curves on day 20 revealed significant contrasts between the stress and control groups and provided a rapid indication that the plants were experiencing either drought or saline stress. Reflectance data measured on day 20 are presented as Normalized Difference Spectral Index (NDSI, comparing all wavelength pairs) heatmaps and wavelength by wavelength t-tests in Fig. 3. NDSI correlation (r-value) heatmaps indicate statistical trends in all treatment combinations (Fig. 3a). The NDSI heatmaps show narrowband sensitivity to drought at ~ 700 to 900 nm, and saline addition at ranges ~ 500 to 650 nm, ~ 700 to 720 nm, and ~ 800 to 900 nm. Differences between the drought and salinity are present at ~ 580 to 650 nm and ~ 750 to 800 nm. Although each sample combination exhibited unique trends, significant effects on wavelengths in the red-edge and near-IR ranges are present in all sample groups. Analysis of reflectance data on day 20. a NDSI correlation representations between each combinations of treatments. 
b Wavelength by wavelength t-tests for each treatment combination. Each solid line is the log (base 10) of the resulting p values. Each dashed line corresponds to the value showing statistical significance, log_10 (p = 0.05): values below the line indicate significance Wavelength by wavelength p values from t-tests on day 20 reflectance data between treatments are presented in Fig. 3b. A log (base 10) plot of the p values illustrates wavelengths in the reflectance spectrum with the greatest significance for differentiating treatments (values below the dotted lines on Fig. 3b). The resulting p values denote the specific wavelengths that changed due to stress. Each sample combination exhibited major significance in the 650–700 nm range and wavelengths past the red edge inflection point (~ 700 nm), and minor significance in the 500–650 nm range. Drought samples exhibited significant effects broadly in the near infrared, related to leaf/plant structure. Salinity samples exhibited narrower features, particularly at red wavelengths (related to effects on chlorophyll), at the red-edge (720 nm) due to stress, and in several locations in the near infrared due to impacts on leaf structure. As well, p values between the two stress treatments reveal differences in the patterns that all plant samples experienced. Notably, the two treatments exhibited significantly different responses in the green and red wavelengths, at the red-edge (greater effects of salinity in green, red and near-infrared) and very significant differences at 770 nm. While each treatment showed significant effects relative to the control at longer near-infrared wavelengths (> 800 nm), the two treatments were less distinguishable from each other at these wavelengths, pointing to the utility of broadband versus narrow spectral data. Significant differences between all sample groups could be observed on day 20, i.e., several days before the morphological RGB analysis was able to do so. Figure 4 presents trends in the stress plants from day 20 to day 26 using reflectance ratios derived from the hyperspectral imagery, based on significant wavelengths identified in Fig. 3. This enables visualization of changes in plant stress as it progresses by date. Ratios specific to the stress type were calculated with representative wavelengths, 782 nm/544 nm to compare drought stress with the control and 676 nm/743 nm to compare the salinity stress with the control. The calculated ratios were interpolated into each pixel of representative plant images on days 20, 23, 24, 25, and 26. Differences with the control are seen on day 20 in each stress type. Generally, ratios increased over time in stress samples and little to no change was seen in the control. In contrast to the day 20 analysis (Fig. 3), this method allowed us to observe spectral shifts in the spatial domain as well as across a period of time. a Spectral response of plants due to drought stress. Leaf reflectance ratio of 782 nm and 544 nm (i.e., R782/R544) as: trend in leaf subsets (mean ± SD), time series of representative control and drought stress plants. b Spectral response of plants due to saline stress. Ratio-metric comparison (R676/R743) of leaf reflectance as: trend in leaf subsets (mean ± SD), time series of representative control and drought stress plants. The wavelengths chosen for ratios are confirmed by the significant relationships shown in Fig. 
3 To assess the radiometric fidelity of the Nano measurements, we compared reflectance from a flat leaf of a control plant made with an Analytical Spectral Devices full-range contact spectrometer and calibrated light source (ASD; FS3 350-2500; Analytical Spectral Devices, Boulder, CO, USA) with the Nano image of the same plant. The comparison demonstrated the congruence between measurements from the two types of instruments (Additional file 2), indicating that the measurement setup for the HyperScanner with the Nano (i.e., light source, calibration panels) is sufficient to match leaf level reflectance from a higher Signal-to-Noise Ratio (SNR) instrument such as the ASD. Plants experience a range of abiotic stresses in natural ecosystems, and in the context of an agroecosystem, environmental stresses can lead to reductions in growth rate and altered vegetative and reproductive development, which often plays out as being detrimental to crop yields. We mimicked environmental stresses common to agroecosystems within our controlled environment [47] to ask how well the HyperScanner could be used to rapidly monitor plant responses to these challenges. As the plants sustained the effects of drought or salinity, visual symptoms of the stress appeared (Fig. 2a). Monitoring plant growth by means of conventional RGB photography coupled to statistical analysis of the morphometric data extractable from these images allowed us to define a point when the physiological effects from the stress were detectable as being statistically significant alterations in leaf growth rates from the control. Thus, analysis of rosette diameter, perimeter, and leaf area indicated that the drought samples' growth rates were not statistically significant until day 23, i.e., 4 days after imposition of drought by cessation of watering (Fig. 2). The addition of the 500 mM saline solution affected the plants more rapidly than the drought stress. The salinity caused the plants to exhibit chlorosis of the leaves [48] and statistical significance on leaf expansion was again seen on day 22. In both treatments, statistical significance of plant growth responses to stress application was detected several days after the stress application. This delay is likely due to the limitations of visual analysis of parameters, such as growth, on detecting the earliest responses to stress that are likely to be through alterations in gene expression and plant biochemistry [49]. If a farmer or horticulturist were to be solely relying on analysis of RGB photography to assess stress conditions in the field, significant reduction in plant size and yield would be inevitable as these would be tightly linked to the changes being used to detect stress response from the imaging data. By using hyperspectral data, the amount of information available to a researcher dramatically increases [50, 51], and so, the RGB analysis in Fig. 2 does not contain the vast amount of information that the hyperspectral imaging can potentially provide. Thus, statistical analysis from the hyperspectral data provided information on plant stress being statistically significant well before alterations in growth were detected from the RGB data. The NDSI analysis in Fig. 3a reveals effects on the plants 1 day after the stress period began. When compared to the control, the drought stress was most significant in red edge to near IR bands (~ 700 to 900 nm). 
The heatmap of the salt and control revealed more significant trends (both magnitude and quantity), reinforcing that the saline stress was likely affecting the plants to a greater degree (at least in terms of changes in hyperspectral signal) than the drought stress (Fig. 2). The salt and drought NDSI show that the two types of stress have comparable effects on Arabidopsis 1 day after stress. Red-edge and near-IR wavelengths show significance in both treatments when compared to the control, but not in their own comparison, indicating that the two samples were affected in the same range of wavelengths and to similar amounts. On the other hand, correlations in the 580–650 nm and 750–800 nm ranges are present in the third NDSI and are not seen in comparisons with the control, which suggests that the stress samples changed differently in these ranges. Analyses of p values in Fig. 3b reveal changes along certain wavelengths in treatments and confirm our hypotheses drawn from the NDSI heatmaps in Fig. 3a. Significant p values resulting from the comparison of 580–640 nm between the two stress treatments indicate that these changes are unique to each stress type (Fig. 3b), namely through greater effects on reflectance in the saline treatment potentially due to differences in changes in relative pools of accessory (non-chlorophyll) pigments between the treatments. Similarly, wavelengths between 600 and 650 nm changed in both stresses when compared to the control, but when the stresses are compared against each other, major statistical differences are present, indicating that the effect on the (most likely) chlorophyll absorption in the red was much stronger with the saline treatment. As well, significance in the 700–800 nm range affirms that the NDSI correlations from that range are indeed unique to the stress type, with a greater impact on red-edge reflectance (an indicator of overall plant health) in the saline treatment. T-tests between the stress types allow for powerful conclusions to be made that are not available with comparisons to the control. Not only can change be seen between stresses and control groups, but the change relative to different stresses can be analyzed. Figure 3 also suggests that certain response mechanisms were employed by the plants to each stress condition. The drought samples experienced a shift in reflectance at the ~ 520 to 530 nm wavelengths compared to the control. This corresponds to the location of the photochemical reflectance index band at 531 nm [10], which has been shown to relate strongly to plant xanthophyll cycle pigment pools that change in response to stress [11]. In contrast, the ~ 520 to 530 nm band was not significantly changed in the saline stress, which confirms that the associated physiological change was likely only experienced by the drought stress. Similarly, the saline stress saw shifts in the sub-500 nm range that were not present in the drought samples, perhaps due to effects on chlorophyll-b and carotenoids that have strong absorptance features in the blue [52]. Similar observations can be drawn on different wavelengths. In each treatment, the most significant changes are seen after 700 nm and into the near-IR range. In the 750–800 nm range, both stress sample's reflectance shifted compared to the control, and the drought shifted more than the salinity. Analysis between the treatments reinforce this idea, as t-tests on those wavelengths resulted in the smallest p values. 
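The NDSI heatmaps and wavelength-by-wavelength t-tests discussed above (Fig. 3) can be reproduced with standard array tools. The sketch below is a minimal illustration rather than the authors' code: it assumes the sampled reflectance spectra for two treatments are available as NumPy arrays of shape (n_pixels, n_bands), with n = 2000 pixels per treatment as in the text, and it uses a point-biserial correlation with the treatment label as one reasonable (assumed) choice for the r-values in the heatmaps.

```python
import numpy as np
from scipy import stats

def ndsi(spectra, i, j):
    """Per-pixel NDSI for bands i and j: (R_i - R_j) / (R_i + R_j)."""
    ri, rj = spectra[:, i], spectra[:, j]
    return (ri - rj) / (ri + rj)

def ndsi_r_heatmap(spectra_a, spectra_b):
    """r-value for every band pair: point-biserial correlation between the
    treatment label (0 = a, 1 = b) and the per-pixel NDSI (unoptimized loop)."""
    n_bands = spectra_a.shape[1]
    labels = np.r_[np.zeros(len(spectra_a)), np.ones(len(spectra_b))]
    both = np.vstack([spectra_a, spectra_b])
    r = np.zeros((n_bands, n_bands))
    for i in range(n_bands):
        for j in range(n_bands):
            if i == j:
                continue  # NDSI is identically zero on the diagonal
            r[i, j] = stats.pointbiserialr(labels, ndsi(both, i, j)).correlation
    return r

def bandwise_log10_p(spectra_a, spectra_b):
    """Welch's t-test at each wavelength; returns log10(p) per band (Fig. 3b style)."""
    _, p = stats.ttest_ind(spectra_a, spectra_b, axis=0, equal_var=False)
    return np.log10(p)

# Example usage with (2000, 270) reflectance arrays sampled on day 20:
# logp = bandwise_log10_p(control_pixels, drought_pixels)
# significant_bands = logp < np.log10(0.05)  # below the dashed line in Fig. 3b
```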
In addition, bands in the near-IR range were significant in both the drought and salinity samples; however, the t-test between the two stresses shows only a small degree of significance, demonstrating that the plants' reflectance changed in a similar manner. Visualization of the hyperspectral imagery in Fig. 4 offers the capacity to track the progression of change in reflectance-based plant experiments. Because the response wavelength ratios can be visualized on the plant itself, spatial trends can be analyzed over time along with spectral trends. For instance, the effects on the drought ratio were initially noticeable at the base of the plant but spread through the stem and then to the leaves along the vasculature (Fig. 4a). Interestingly, the older salt-stressed leaves experienced a reflectance change before the younger leaves (Fig. 4b). Such analyses are not possible if one only considers purely spectral data. HyperScanner is easy and inexpensive to build and suitable for many varied plants and experiments. Any plant can be scanned as long as it fits into the scan area and is not taller than the height of the instrument; moreover, the height of the instrument, the scanning speed, and the scanning routes can all be changed to accommodate different species of plants. For example, in addition to Arabidopsis, we have successfully used HyperScanner on much larger cotton plants. Along with being able to support many different plants, HyperScanner can support different sensors, and indeed expansion to incorporate multiple parallel imaging modalities is a core concept for the HyperScanner. Thus, the 80/20 rail system combined with printing custom mounts allows for the rapid integration of new sensors. In this study, only the Nano was mounted as a detector in order to efficiently test the feasibility of HyperScanner. One notable current limitation is that the Nano is operated through the manufacturer's software. This makes starting the scans slightly more cumbersome. The full integration of the Nano into our software will improve the quality of operation. We are also currently implementing other sensors, including a laser rangefinder to detect and compensate for different plant heights [27]. Arabidopsis is very flat and not variable enough in height for a rangefinder to have been relevant for this study, but this will be an important addition for larger, more 3-dimensional plants. A parallel thermal imaging system will allow for the assessment of changes in the critical parameters of transpiration and photosynthetic capacity, and a laser scanner will further enable the assessment of changes in foliar structure and biomass across future experiments [6, 14, 27, 53]. The integration of these imaging systems will be documented in future work using the HyperScanner. Perhaps the greatest potential for HyperScanner is in full automation of control and analysis [54]. The control system is currently being transformed from open-loop to closed-loop (i.e., using internal sensors and feedback) control. In addition, an exciting area for future development is to incorporate neural networks for plant classification [55]. The combination of HyperScanner and neural networks will allow for even more rapid acquisition and classification of reflectance data [28, 56]. Automated stress detection via a neural network could allow a researcher to maintain healthy samples with minimal interference. Indeed, the complete automation of the HyperScanner will result in an extremely efficient system [25].
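The per-pixel ratio maps shown in Fig. 4 (R782/R544 for drought, R676/R743 for salinity) reduce to simple band arithmetic on the reflectance cube. A minimal sketch follows; the cube layout, the wavelength-lookup helper, and the optional plant mask are assumptions made for illustration, not the authors' processing chain.

```python
import numpy as np

def band_index(wavelengths, target_nm):
    """Index of the band whose center is closest to target_nm.
    wavelengths: 1-D array of band centers in nm (270 values for the Nano)."""
    return int(np.argmin(np.abs(np.asarray(wavelengths) - target_nm)))

def ratio_map(cube, wavelengths, num_nm, den_nm, plant_mask=None):
    """Per-pixel reflectance ratio R_num / R_den for one scan.
    cube: (rows, cols, bands) relative reflectance; plant_mask: optional boolean
    (rows, cols) array selecting plant pixels (background is left as NaN)."""
    num = cube[:, :, band_index(wavelengths, num_nm)]
    den = cube[:, :, band_index(wavelengths, den_nm)]
    ratio = np.divide(num, den, out=np.full_like(num, np.nan), where=den != 0)
    if plant_mask is not None:
        ratio = np.where(plant_mask, ratio, np.nan)
    return ratio

# Drought ratio for a given scan date:  drought_map = ratio_map(cube, wl, 782, 544)
# Salinity ratio for the same scan:     salt_map    = ratio_map(cube, wl, 676, 743)
```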
The HyperScanner system was designed to collect hyperspectral data with minimal human effort while keeping the system accessible to researchers, affordable, and available for the addition of more instruments. The software, data, and other relevant files are publicly available within the article. In multiple experiments, we were able to measure absolute reflectance in Arabidopsis stress experiments. The data from HyperScanner showed spectral differences at an earlier point during the stress than visual observations and identified differences in stress responses between two treatments. HyperScanner can be used for improved detection of plant stress and holds a high potential to be a commonplace method for studying plants in many research settings. Arabidopsis growth environment A controlled growth chamber was used to grow plants adjacent to the HyperScanner. Six 1020 seed trays (Greenhouse MegaStore, Danville, IL, USA) each with 18 Arabidopsis Col-0 plants growing in potting soil were maintained at 22 °C with an 18:6 h day/night cycle (100 µmol/m2/s−1, from 4 foot fluorescent lights). Pot weights and RGB pictures of the plants were taken daily. Plant phenotypic response was analyzed from Raspberry Pi camera images using Phenotiki [42]. Hyperspectral scans were taken each day for 7 days starting at 20 days post germination. Two trays were used as control samples and were kept with constant water availability by adding 500 mL every day. The remaining four trays were split into 2 treatments. Two trays were not watered from day 19 onwards to impose drought stress. The two other trays were used for salt stress experiment by adding 2 L of 500 mM NaCl. To mitigate a possible border effect, edge plants should be excluded from data analysis and uniform lighting across the plant tray should be ensured. Overview of CNC and HyperScanner HyperScanner is based on CNC technology [57]. CNC is an automation of machine tools that utilizes computer control, smart sensors, and stepper motors. In CNC, the control computer executes sequential commands which are calculated based on user inputs and the machine's physical properties; smart sensors provide necessary information used to execute the computer's control algorithms. Stepper motors control the physical position of the tools. The computer-based control of CNC allows processes to be predetermined and also enables the recalibration of computer commands based on external changes. This digitization results in an automated and high-precision system. HyperScanner can precisely and accurately move to a point (X, Y, Z) with a user-chosen speed. The detector (in our case, the Nano Hyperspec line scanner, see below) is mounted to a central point which can move along the X, Y, and Z axes. The central point moves over the scan area underneath which facilitates the scanning of the plants. Our software gives the user the ability to intuitively control the movement of the machine in real time, create pre-planned scanning routes, and execute those routes. Nano Hyperspec Imaging Spectrometer The Nano is a line scanner (also known as pushbroom scanner) designed for the VNIR range (400–1000 nm) [58]. It consists of 640 spatial bands (pixels) and 270 spectral bands. The spectral bands are spaced at 2.2 nm/pixel. The Nano weighs 0.544 kg and has built-in mounting points, making it extremely suitable for the HyperScanner. Technical specifications of the HyperScanner HyperScanner was built to achieve a large scan area, variable speed, and precise movement control. 
HyperScanner features a scan area of 2.1 m². Due to the constraints imposed by the operating speed of the Nano, HyperScanner runs at a scanning speed of 1 cm/s; however, this may vary when using other detectors, and so the speed can be adjusted within the HyperScanner interface. Additionally, as the Nano is a line scanner, the length of the scan line is proportional to the height of the Z axis. The height can be optimized to a line that covers the sample dimensions. For instance, a line of ~ 8 cm was used in this experiment to cover the length of each pot. Table 1 lists the relevant technical values. Table 1 HyperScanner's technical specifications Construction of the HyperScanner HyperScanner was built using the 80/20 aluminum rail system, and the construction is based on X-Carve's existing system [38]. Although detailed building instructions can be found on X-Carve's website, many adjustments were made for our purposes [40, 59]. 3D models of HyperScanner are presented in Fig. 5a and Additional file 3. Four 1.8 m rails are placed vertically at each corner of the machine to support eight additional 1.8 m rails, which are used to create two 1.8 m by 1.8 m horizontal sections. The bottom section supports removable platforms on which plant trays or pots can be scanned. The top section supports the stepper motor movement system. Two stepper motors are mounted to metal side plates that are attached to the top section support rails (Fig. 5a, Additional file 3). The side plates, with the motors attached, are fitted with wheels that allow them to slide along two of the top section support rails (Y axis). The X axis rail sits between the side plates. A metal gantry with wheels moves across this rail and also supports the linear actuator (Z axis). The X and Y axes operate with a belt drive and the Z axis operates with a worm drive. The Nano and lights are mounted at the bottom of the linear actuator using custom-made mounts. The custom-made mounts and additional support brackets were designed in SolidWorks, and each mount was specifically designed to match the existing hardware. The mounts were printed using an open-source 3D printer with PLA filament (LulzBot TAZ 5; Aleph Objects, Inc., Loveland, CO, USA). All SolidWorks files are available for download at https://doi.org/10.7910/DVN/9DLR7S. The bill of materials (Additional file 4) lists all materials necessary for the construction of the HyperScanner; tools are not listed. a Isometric view of the HyperScanner: A, Y axis rails; B, X axis rail; C, Z axis linear actuator; D, Nano Hyperspec; E, plant trays; F, top section; G, bottom section. Diagrams representing the control flow of the HyperScanner and CNC machines: b open-loop control and c closed-loop control. d Data flow from the user to HyperScanner: (1) user draws a scanning route in embedded web form. (2) Path waypoints are sent as ASCII to Python 3 script via CEFPython. (3) Python 3 script converts ASCII coordinates to binary tuples for Arduino instruction set. (4) Binary tuples are transmitted from host computer to Arduino via UART connection. (5) Instructions are enqueued in a FIFO in Arduino memory as they arrive from UART. (6) Instructions are dequeued as the previous instruction completes, and AccelStepper functions are invoked. (7) AccelStepper library computes the signal required to move the stepper motors.
(8) Signal sent to gShield over the Arduino GPIO ports Control and wiring The control algorithms that most CNC machines implement can be classified into two categories: open-loop control and closed-loop (feedback) control. In open-loop control, the process is linear, meaning that the computer simply accepts inputs from the operator and outputs a signal to control the system. Figure 5b shows the fundamental open-loop control design [57]. In closed-loop control, sensors are placed in the system to feed back information, which enables a higher degree of automation (Fig. 5c). HyperScanner is open-loop controlled. HyperScanner consists of four stepper motors and one linear actuator. The X axis is controlled by two motors; the Y axis is controlled by one motor; the Z axis is controlled by one motor and a linear actuator that allows for precise vertical positioning. Although open-loop control is currently implemented in HyperScanner, we intend to integrate feedback control to achieve a more robust and fully automated control mechanism. HyperScanner's basic wiring topology is included in Additional file 5. The computer accepts user input and transcribes it to stepper motor movements, which follows the same control structure in Fig. 5b. User input is translated from the computer to the Arduino and then to the gShield through custom software. The gShield is a stepper motor driver: once the gShield receives correctly notated commands, the appropriate level of power is driven to the motors. The Arduino is powered by the computer through a Universal Serial Bus (USB) connection that also doubles as the serial connection. The stepper motors, gShield, Nano, and lights are powered by a digital-control direct current power supply. A dedicated power source has not been implemented, as the weight and power load on the linear actuator is constantly changing due to the addition of various devices. Thus, the power supply needs to be constantly adjusted. Custom light mounts were 3D printed along with the Nano mount. The mounts are fitted to 20 W halogen bulbs (MR11; Simba Lighting, Torrance, CA), and the bulbs are powered close to the maximum power rating. The lights are mounted in parallel to the Nano's scan line so maximum even lighting is achieved. The mounts are connected to the end of the Z axis so that the lighting environment does not change while the scanner is moving. Additionally, the mounts are connected to 80/20 rails that slide along the Z axis linear actuator so that their height from the samples can be adjusted. Due to the indoor setting, artificial lighting was one of the most important considerations when building HyperScanner. The lighting and light mounts were reiteratively modified until a desirable configuration was achieved, resulting in even and consistent illumination throughout each of the scans. The interchangeability and adjustability of the lights' power and position made this an easy task. A set of software tools were developed to support the operation of the HyperScanner, both for direct interfacing with the machine's hardware and for higher-level user functions. These tools, named Ardupy, have been made publicly available on the University of Wisconsin EnSpec organization's Github page (https://github.com/EnSpec/Plant_CNC_Controller) as well as on Zenodo (https://doi.org/10.5281/zenodo.1406721) [60]. Ardupy manages the conversion of movement instructions received over the Uno's USB serial port into appropriate gShield instructions. 
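Items (3) and (4) of the data-flow caption above (binary 3-tuples sent to the Arduino over UART) can be sketched in a few lines of host-side Python. The byte order, packet layout, and command codes below are assumptions made only for illustration; the actual instruction set is the one described in Table 2 (referenced below).

```python
import struct
import serial  # PySerial, the module used by the host-side software

def pack_instruction(opcode, value, parameter):
    """Encode one instruction as three 32-bit signed integers
    (assumed layout: little-endian opcode, value, parameter)."""
    return struct.pack('<iii', opcode, value, parameter)

def send_instructions(port, instructions, baudrate=115200):
    """Write a sequence of (opcode, value, parameter) tuples to the Arduino.
    The Arduino side enqueues them in its FIFO and executes them in order."""
    with serial.Serial(port, baudrate=baudrate, timeout=1) as link:
        for opcode, value, parameter in instructions:
            link.write(pack_instruction(opcode, value, parameter))

# Hypothetical example: move the X axis to 4000 steps at 800 steps/s, then Y.
# send_instructions('/dev/ttyACM0', [(1, 4000, 800), (2, 4000, 800)])
```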
Direct interfacing with the gShield is handled by the Arduino AccelStepper library [61]. AccelStepper manages the General-Purpose Input/Output (GPIO) outputs that are required to move a stepper motor to an absolute position with a given speed. User control of the AccelStepper library is achieved through a custom instruction set established between a host computer and the Arduino over a Universal Asynchronous Receiver/Transmitter (UART) serial connection. The instructions are encoded as 3-tuples of 32-bit integers that specify AccelStepper values and parameters. A full description of the instruction set is provided in Table 2. To prevent incoming instructions from interrupting the execution of previously received ones, instructions are enqueued in a First-In, First-Out (FIFO) as they are received via UART and dequeued when a previous instruction completes. In the current iteration of the software, this FIFO is implemented with the Arduino QueueList library [62]. A diagram of the data flow between the host computer and CNC machine is provided in Fig. 5d. Table 2 Instruction set supported by Arduino software In addition, the Graphical User Interface (GUI) Ardupy-GUI, was developed to facilitate the creation of scanning paths. Ardupy-GUI enables an intuitive approach to scanning. Rather than only entering values via a console, Ardupy-GUI consists of two user-friendly menus: a control panel that provides direct access to each of the instructions above; a route plotter that allows the user to draw scanning paths via click-and-drag or keyboard inputs (Additional file 6). The route plotter generates sequences of instructions based on the created paths. CEFPython, a set of Python 3 bindings for the Chromium Embedded Framework, was selected as the backend for the GUI, as it allows fast development of cross-platform graphical applications [63, 64]. CEFPython provides an interface between an embedded instance of a web browser which handles the display of the GUI and a Python 3 script which handles communication between the host computer and HyperScanner's Arduino. The JavaScript library jQuery is used to bind user actions in the GUI to function calls in the Python 3 script that backs the GUI [65]. The route drawing tool is implemented in d3.js, a library which provides efficient manipulation of scalable vector graphic images [66]. The backend Python 3 script generates binary instructions from the textual data entered into the GUI and transmits them to HyperScanner's Arduino over UART via the PySerial module [67]. The operation of HyperScanner is simple, after creating scanning routes and tuning instrument parameters (Additional file 7). First, the Nano is attached to the mount and connected to the computer. Scan height, speed, and line length are calculated separately, based on the Nano's field of view and integration time. This experiment used an integration time of 14 ms. Once the power supply and lights are turned on and Ardupy has been launched, the Nano is moved to a white panel and calibrated. A scanning route is chosen, and the plant samples are placed in the correct positions according to the selected route. After the initial setup, the Nano is set to capture, and the route is executed. One scan of two plant trays (18 plants each) takes less than five minutes. After the scan is completed, files can be transferred off of the Nano or different routes and plants can be loaded. Note that a scan of the white panel should be included for image processing. 
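The white-panel scan mentioned above feeds directly into the processing described next: each pixel's radiance is divided, wavelength by wavelength, by the panel's reference radiance, and the panel is also used to screen out noisy bands. The sketch below is an illustration with NumPy, not the ENVI workflow actually used; the CV threshold and the use of a per-band median for the reference spectrum are simplifying assumptions.

```python
import numpy as np

def white_reference(panel_pixels):
    """Reference radiance spectrum from the spectralon panel region.
    panel_pixels: (n_panel_pixels, n_bands) radiance. The paper takes the scan
    line with the maximum median radiance; a per-band median over the panel
    region is used here as a simple stand-in."""
    return np.median(panel_pixels, axis=0)

def to_reflectance(radiance_cube, dn_white):
    """Rfl[i, j, k] = DN[i, j, k] / DN_white[k], wavelength by wavelength."""
    return radiance_cube / dn_white[None, None, :]

def low_cv_bands(panel_pixels, threshold=0.05):
    """Boolean mask of bands whose coefficient of variation on the white panel
    is below a chosen threshold (the threshold value is an assumption)."""
    cv = panel_pixels.std(axis=0) / panel_pixels.mean(axis=0)
    return cv < threshold
```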
After all the plants are scanned, the images can be processed. One element of operation requiring attention during equipment setup is that the belt drives must be checked to ensure they do not need to be re-tensioned. Correct belt tension is needed for smooth movement of the scanner, so this check is important, although re-tensioning is rarely necessary. Hyperspectral scans were processed with ENVI 5.0 (Exelis Visual Information Solutions, Inc., Boulder, CO, USA). Each image scan includes a calibrated 99% reflectance spectralon panel (Labsphere, North Sutton, NH, USA), which was used to calculate reflectance and estimate the noise level of the hyperspectral image. Coefficient of Variation (CV) was used as the criterion for high-SNR wavelength selection. After calculating the along-track CV of the white panel image, wavelengths ranging from 477.1 to 903.42 nm, which yielded low CV, were retained for further analysis. The white reference radiance spectrum was estimated for the whole image from the vertical scan line of the spectralon panel that had the maximum median radiance. Every plant pixel was used for analysis by delineating regions of interest within ENVI. The spatial and spectral edges of the hyperspectral image cube were excluded from analysis to minimize smile and keystone effects (e.g., cross-track variation in wavelength centers) [68]. Relative reflectance is calculated as: $$Rfl\left[ {i, j, k} \right] = \frac{{DN_{i,j,k} }}{{DN_{white,k} }}$$ where i and j correspond to the row and column of a pixel and k is the wavelength of the pixel. $DN_{i,j,k}$ is the radiance of each pixel and $DN_{white,k}$ is the radiance spectrum of the reference white panel. The calculation is done on a wavelength-by-wavelength basis. NDSI Normalized difference spectral indices were calculated for each pair of the Nano's 270 spectral bands. The difference in reflectance for a pair of bands (e.g., i and j) is divided by their sum, as in the following: $$NDSI\left[ {i, j} \right] = \frac{{band_{i} - band_{j} }}{{band_{i} + band_{j} }}$$ For each pair, indices were calculated for a sample of n = 2000 pixels. Statistical tests were then done on each NDSI combination and heatmaps were generated with the resulting statistical data. Abbreviations: Nano: Headwall Photonics Nano Hyperspec; VNIR: visible and near infrared; RGB: red–green–blue; NDSI: Normalized Difference Spectral Index; ASD: analytical spectral devices; SNR: signal-to-noise ratio; GPIO: general-purpose input/output; UART: universal asynchronous receiver/transmitter; FIFO: first-in, first-out; GUI: graphical user interface; CV: coefficient of variation. References: Pieruschka R, Poorter H. Phenotyping plants: genes, phenes and machines. Funct Plant Biol. 2012;39:813–20. Fiorani F, Schurr U. Future scenarios for plant phenotyping. Annu Rev Plant Biol. 2013;64:267–91. Curran PJ. Remote sensing of foliar chemistry. Remote Sens Environ. 1989;30:271–8. Ustin SL, Gitelson AA, Jacquemoud S, Schaepman ME, Asner GP, Gamon JA, et al. Retrieval of foliar information about plant pigment systems from high resolution spectroscopy. Remote Sens Environ. 2009;113:S67–77. Matsuda O, Tanaka A, Fujita T, Iba K. Hyperspectral imaging techniques for rapid identification of Arabidopsis mutants with altered leaf pigment status. Plant Cell Physiol. 2012;53:1154–70. Humplík JF, Lazár D, Husičková A, Spíchal L. Automated phenotyping of plant shoots using imaging methods for analysis of plant stress responses—a review. Plant Methods. 2015;11:1–10.
Serbin SP, Singh A, Desai AR, Dubois SG, Jablonski AD, Kingdon CC, et al. Remotely estimating photosynthetic capacity, and its response to temperature, in vegetation canopies using imaging spectroscopy. Remote Sens Environ. 2015;167:78–87. Singh A, Serbin SP, McNeil BE, Kingdon CC, Townsend PA. Imaging spectroscopy algorithms for mapping canopy foliar chemical and morphological traits and their uncertainties. Ecol Appl. 2015;25:2180–97. Asner GP, Martin RE, Anderson CB, Knapp DE. Quantifying forest canopy traits: imaging spectroscopy versus field survey. Remote Sens Environ. 2015;158:15–27. Gamon JA, Serrano L, Surfus JS. The photochemical reflectance index: an optical indicator of photosynthetic radiation use efficiency across species, functional types, and nutrient levels. Oecologia. 1997;112:492–501. Garbulsky MF, Peñuelas J, Gamon J, Inoue Y, Filella I. The photochemical reflectance index (PRI) and the remote sensing of leaf, canopy and ecosystem radiation use efficiencies: a review and meta-analysis. Remote Sens Environ. 2011;115:281–97. Suárez L, Zarco-Tejada PJ, González-Dugo V, Berni JAJ, Fereres E. The photochemical reflectance index (PRI) as a water stress indicator in peach orchards from remote sensing imagery. Acta Hortic. 2012;962:363–70. Sarlikioti V, Driever SM, Marcelis LFM. Photochemical reflectance index as a mean of monitoring early water stress. Ann Appl Biol. 2010;157:81–9. Kuska MT, Mahlein AK. Aiming at decision making in plant disease protection and phenotyping by the use of optical sensors. Eur J Plant Pathol. 2018;5:5. https://doi.org/10.1007/s10658-018-1464-1. Wahabzada M, Mahlein AK, Bauckhage C, Steiner U, Oerke EC, Kersting K. Plant phenotyping using probabilistic topic models: uncovering the hyperspectral language of plants. Sci Rep. 2016;6:1–11. Kolukisaoglu Ü, Thurow K. Future and frontiers of automated screening in plant sciences. Plant Sci. 2010;178:476–84. Walter A, Studer B, Kölliker R. Advanced phenotyping offers opportunities for improved breeding of forage and turf species. Ann Bot. 2012;110:1271–9. Virlet N, Sabermanesh K, Sadeghi-Tehran P, Hawkesford MJ. Field scanalyzer: an automated robotic field phenotyping platform for detailed crop monitoring. Funct Plant Biol. 2017;44:143–53. Li L, Zhang Q, Huang D. A review of imaging techniques for plant phenotyping. Sensors (Switzerland). 2014;14:20078–111. Arvidsson S, Pérez-Rodríguez P, Mueller-Roeber B. A growth phenotyping pipeline for Arabidopsis thaliana integrating image analysis and rosette area modeling for robust quantification of genotype effects. New Phytol. 2011;191:895–907. Fanourakis D, Briese C, Max JFJ, Kleinen S, Putz A, Fiorani F, et al. Rapid determination of leaf area and plant height by using light curtain arrays in four species with contrasting shoot architecture. Plant Methods. 2014;10:1–11. Tisné S, Serrand Y, Bach L, Gilbault E, Ben Ameur R, Balasse H, et al. Phenoscope: an automated large-scale phenotyping platform offering high spatial homogeneity. Plant J. 2013;74:534–44. Liebisch F, Kirchgessner N, Schneider D, Walter A, Hund A. Remote, aerial phenotyping of maize traits with a mobile multi-sensor approach. Plant Methods. 2015;11:9. Berni JAJ, Zarco-Tejada PJ, Suárez L, González-Dugo V, Fereres E. Remote sensing of vegetation from UAV platforms using lightweight multispectral and thermal imaging sensors. Int Arch Photogramm Remote Sens Spat Inform Sci. 2009;38:6. Atkinson JA, Pound MP, Bennett MJ, Wells DM. 
Uncovering the hidden half of plants using new advances in root phenotyping. Curr Opin Biotechnol. 2019;55:1–8. Gamon JA, Cheng Y, Claudio H, MacKinney L, Sims DA. A mobile tram system for systematic sampling of ecosystem optical properties. Remote Sens Environ. 2006;103:246–54. Araus JL, Cairns JE. Field high-throughput phenotyping: the new crop breeding frontier. Trends Plant Sci. 2014;19:52–61. Minervini M, Scharr H, Tsaftaris SA. Image analysis: the new bottleneck in plant phenotyping. IEEE Signal Process Mag. 2015;32:126–31. LemnaTec. https://www.lemnatec.com/. Accessed 20 Oct 2018. Pereyra-Irujo GA, Gasco ED, Peirone LS, Aguirrezábal LAN. GlyPh: a low-cost platform for phenotyping plant growth and water use. Funct Plant Biol. 2012;39:905–13. Jansen M, Gilmer F, Biskup B, Nagel KA, Rascher U, Fischbach A, et al. Simultaneous phenotyping of leaf growth and chlorophyll fluorescence via Growscreen Fluoro allows detection of stress tolerance in Arabidopsis thaliana and other rosette plants. Funct Plant Biol. 2009;36:902–14. Bergsträsser S, Fanourakis D, Schmittgen S, Cendrero-Mateo MP, Jansen M, Scharr H, et al. HyperART: non-invasive quantification of leaf traits using hyperspectral absorption-reflectance-transmittance imaging. Plant Methods. 2015;11:1–17. Granier C, Aguirrezabal L, Chenu K, Cookson SJ, Dauzat M, Hamard P, et al. PHENOPSIS, an automated platform for reproducible phenotyping of plant responses to soil water deficit in Arabidopsis thaliana permitted the identification of an accession with low sensitivity to soil water deficit. New Phytol. 2005;169:623–35. Nuñez I, Matute T, Herrera R, Keymer J, Marzullo T, Rudge T, et al. Low cost and open source multi-fluorescence imaging system for teaching and research in biology and bioengineering. PLoS ONE. 2017;12:1–21. Green JM, Appel H, Rehrig EM, Harnsomburana J, Chang JF, Balint-Kurti P, et al. PhenoPhyte: a flexible affordable method to quantify 2D phenotypes from imagery. Plant Methods. 2012;8:1–12. Dobrescu A, Scorza LCT, Tsaftaris SA, McCormick AJ. A "Do-It-Yourself" phenotyping system: measuring growth and morphology throughout the diel cycle in rosette shaped plants. Plant Methods. 2017;13:1–12. Open-source Lab. http://www.appropedia.org/Open-source_Lab. Accessed 20 Oct 2018. X-Carve. https://www.inventables.com/technologies/x-carve. Accessed 22 Oct 2018. Katz R. Design principles of reconfigurable machines. Int J Adv Manuf Technol. 2007;34:430–9. Mehrabi MG, Ulsoy AG, Koren Y, Heytler P. Trends and perspectives in flexible and reconfigurable manufacturing systems. J Intell Manuf. 2002;13:135–46. Julkowska MM, Klei K, Fokkens L, Haring MA, Schranz ME, Testerink C. Natural variation in rosette size under salt stress conditions corresponds to developmental differences between Arabidopsis accessions and allelic variation in the LRR-KISS gene. J Exp Bot. 2016;67:2127–38. Minervini M, Giuffrida MV, Perata P, Tsaftaris SA. Phenotiki: an open software and hardware platform for affordable and easy image-based phenotyping of rosette-shaped plants. Plant J. 2017;90:204–16. Katori T, Ikeda A, Iuchi S, Kobayashi M, Shinozaki K, Maehashi K, et al. Dissecting the genetic control of natural variation in salt tolerance of Arabidopsis thaliana accessions. J Exp Bot. 2010;61:1125–38. Römer C, Wahabzada M, Ballvora A, Pinto F, Rossini M, Panigada C, et al. Early drought stress detection in cereals: simplex volume maximisation for hyperspectral image analysis. Funct Plant Biol. 2012;39:878–90. Yao R, Yang J, Wu D, Xie W, Gao P, Jin W. 
Digital mapping of soil salinity and crop yield across a coastal agricultural landscape using repeated electromagnetic induction (EMI) surveys. PLoS ONE. 2016;11:1–20. Roy SJ, Negrão S, Tester M. Salt resistant crop plants. Curr Opin Biotechnol. 2014;26:115–24. Hu Y, Schmidhalter U. Drought and salinity: a comparison of their effects on mineral nutrition of plants. J Plant Nutr Soil Sci. 2005;168:541–9. Apse MP, Aharon GS, Snedden WA, Blumwald E. Salt tolerance conferred by overexpression of a vacuolar Na+/H+ antiport in Arabidopsis. Science. 1999;285:1256–8. Chaerle L, Van Der Straeten D. Imaging techniques and the early detection of plant stress. Trends Plant Sci. 2000;5:495–501. Schimel DS, Asner GP, Moorcroft P. Observing changing ecological diversity in the Anthropocene. Front Ecol Environ. 2013;11:129–37. Thompson DR, Boardman JW, Eastwood ML, Green RO. A large airborne survey of Earth's visible-infrared spectral dimensionality. Opt Express. 2017;25:9186. Parida AK, Das AB. Salt tolerance and salinity effects on plants: a review. Ecotoxicol Environ Saf. 2005;60:324–49. Chaerle L, Pineda M, Romero-Aranda R, Van Der Straeten D, Barón M. Robotized thermal and chlorophyll fluorescence imaging of pepper mild mottle virus infection in Nicotiana benthamiana. Plant Cell Physiol. 2006;47:1323–36. Xu XW, Newman ST. Making CNC machine tools more open, interoperable and intelligent—a review of the technologies. Comput Ind. 2006;57:141–52. Taghavi Namin S, Esmaeilzadeh M, Najafi M, Brown TB, Borevitz JO. Deep phenotyping: deep learning for temporal phenotype/genotype classification. Plant Methods. 2018;14:1–14. Vijayarangan S, Sodhi P, Kini P, Bourne J, Sun H, Poczos B, et al. High-throughput robotic phenotyping of energy sorghum crops. F Serv Robot. 2017;5:1–14. Bollinger JG, Duffie NA. computer control of machines and processes. 1st ed. Boston: Addison-Wesley Longman Publishing Co., Inc.; 1988. Headwall Photonics Hyperspectral Imaging Sensors. http://www.headwallphotonics.com/hyperspectral-sensors. Accessed 20 Oct 2018. X-Carve Instructions. http://x-carve-instructions.inventables.com/1000mm/. Accessed 24 Oct 2018. Zenodo Archive of Ardupy. https://zenodo.org/record/1406721. Accessed 30 Aug 2018. McCauley, M. AccelStepper Library for Arduino. http://www.airspayce.com/mikem/arduino/AccelStepper/. Accessed 20 Oct 2018. Chatzikyriakidis, E. QueueList Library for Arduino. https://playground.arduino.cc/Code/QueueList. Accessed 20 Oct 2018. Tomczak, C. CEF Python. https://github.com/cztomczak/cefpython. Accessed 20 Oct 2018. Greenblatt, M. Chromium embedded framework. https://bitbucket.org/chromiumembedded/cef. Accessed 20 Oct 2018. The jQuery Foundation. https://jquery.org/. Accessed 20 Oct 2018. Bostock, M. d3.js Library. https://d3js.org/. Accessed 20 Oct 2018. Liechiti, C. PySerial module. https://pythonhosted.org/pyserial/. Accessed 20 Oct 2018. Mouroulis P, Green RO, Chrien TG. Design of pushbroom imaging spectrometers for optimum recovery of spectroscopic and spatial information. Appl Opt. 2000;39:2210. ML constructed the machine, ran the experiments, and drafted the manuscript. RG designed parts in SolidWorks and created technical drawings. MW wrote the custom software. ZY processed the images and provided the resulting data. RB and SG designed the plant stress experiments. AS conceptualized the machine and helped with initial development of analysis techniques. PT provided instrumentation and technical support for the project, and oversaw the quantitative analyses of spectral data. 
PT, RB, SG all contributed to the writing and analyses. All authors read and approved the final manuscript. This work was performed in the Environmental Spectroscopy Lab at the University of Wisconsin. Thanks are due to Katie Gold for helping to create the NDSI heatmaps and Ting Zheng for discussing methods of statistical analysis. Clayton Kingdon provided early guidance on prototyping the system. Thanks also to Amanda Gevens and Paul Bethke for securing preliminary financial support for the effort. The datasets supporting the conclusions of this article are available in the Cyverse repository (https://de.cyverse.org/de/?type=data&folder=/iplant/home/elytas/experiment_repository). The 3D printable model files are available on the Harvard Dataverse (https://doi.org/10.7910/DVN/9DLR7S). Ardupy is available on our Github (https://github.com/EnSpec/Plant_CNC_Controller) and archived on Zenodo (https://doi.org/10.5281/zenodo.1406721). All of the authors consent to the publication of this article. No human subjects were involved in this study. Funding for this research was provided to PT by the Wisconsin Potato Industry Board, USDA-NIFA-SCRI Grant 2011-51181-30629, the College of Agricultural and Life Sciences at UW-Madison, Hatch award WIS01874 and NASA Grant NNX15AN25G. SG and RB were supported by NASA Grants NNX14AT25G and NNX17AD52G, and NSF award IOS-1557899. Russell Labs, University of Wisconsin-Madison, 1630 Linden Drive, Madison, WI, 53706, USA Max R. Lien, Zhiwei Ye, Matthew H. Westphall, Ruohan Gao & Philip A. Townsend Birge Hall, University of Wisconsin-Madison, 430 Lincoln Drive, Madison, WI, 53706, USA Richard J. Barker & Simon Gilroy Frazier Rogers Hall, 1741 Museum Road, Gainesville, FL, 32611, USA Aditya Singh Max R. Lien Richard J. Barker Zhiwei Ye Matthew H. Westphall Ruohan Gao Simon Gilroy Philip A. Townsend Correspondence to Philip A. Townsend. Additional file 1. Daily tray moisture, tray weight, and room temperature measurements from the controlled Arabidopsis growth environment. Line plot of soil conductance percentage as a proxy for soil humidity. Area line plot showing tray weight as a proxy for water content. Line plot displaying the mean temperature (22˚C ± 2˚C). Reflectance curves obtained by the ASD and Nano from one control Arabidopsis plant. Three additional views of the HyperScanner. A bill of materials listing the supplies used to construct Hyperscanner. Does not list tools. A basic diagram of HyperScanner's wiring scheme. Each arrow represents a wired connection: A, external user input; B, returned data from the Nano; C, Nano control signal; D, positional data from Ardupy; E, gShield motor driver power; F, stepper motor power; G, Nano power; H, power supply; I, Arduino Uno; J, gShield motor driver; K, stepper motors. Screenshots of Ardupy's manual control and route planner menus. Users can control HyperScanner with manual control or through creating paths by click-and-dragging waypoints on the map panel. A flowchart describing HyperScanner's operational procedure. Lien, M.R., Barker, R.J., Ye, Z. et al. A low-cost and open-source platform for automated imaging. Plant Methods 15, 6 (2019). https://doi.org/10.1186/s13007-019-0392-1 Imaging spectroscopy Arabidopsis
Advanced Algebra: Digital Second Edition, Chapter IV. Homological Algebra
Anthony W. Knapp
Books By Independent Authors, 2016: 166–261 (2016)
https://doi.org/10.3792/euclid/9781429799928-4
This chapter develops the rudiments of the subject of homological algebra, which is an abstraction of various ideas concerning manipulations with homology and cohomology. Sections 1–7 work in the context of good categories of modules for a ring, and Section 8 extends the discussion to abelian categories. Section 1 gives a historical overview, defines the good categories and additive functors used in most of the chapter, and gives a more detailed outline than appears in this abstract. Section 2 introduces some notions that recur throughout the chapter—complexes, chain maps, homotopies, induced maps on homology and cohomology, exact sequences, and additive functors. Additive functors that are exact or left exact or right exact play a special role in the theory. Section 3 contains the first main theorem, saying that a short exact sequence of chain or cochain complexes leads to a long exact sequence in homology or cohomology. This theorem sees repeated use throughout the chapter. Its proof is based on the Snake Lemma, which associates a connecting homomorphism to a certain kind of diagram of modules and maps and which establishes the exactness of a certain 6-term sequence of modules and maps. The section concludes with proofs of the crucial fact that the Snake Lemma and the first main theorem are functorial. Section 4 introduces projectives and injectives and proves the second main theorem, which concerns extensions of partial chain and cochain maps and also construction of homotopies for them when the complexes in question satisfy appropriate hypotheses concerning exactness and the presence of projectives or injectives. The notion of a resolution is defined in this section, and the section concludes with a discussion of split exact sequences. Section 5 introduces derived functors, which are the basic mathematical tool that takes advantage of the theory of homological algebra. Derived functors of all integer orders $\geq0$ are defined for any left exact or right exact additive functor when enough projectives or injectives are present, and they generalize homology and cohomology functors in topology, group theory, and Lie algebra theory. Section 6 implements the two theorems of Section 3 in the situation in which a left exact or right exact additive functor is applied to an exact sequence. The result is a long exact sequence of derived functor modules. It is proved that the passage from short exact sequences to long exact sequences of derived functor modules is functorial. Section 7 studies the derived functors of $\mathrm{Hom}$ and tensor product in each variable. These are called $\mathrm{Ext}$ and $\mathrm{Tor}$, and the theorem is that one obtains the same result by using the derived functor mechanism in the first variable as by using the derived functor mechanism in the second variable. Section 8 discusses the generalization of the preceding sections to abelian categories, which are abstract categories satisfying some strong axioms about the structure of morphisms and the presence of kernels and cokernels.
Some generalization is needed because the theory for good categories is insufficient for the theory for sheaves, which is an essential tool in the theory of several complex variables and in algebraic geometry. Two-thirds of the section concerns the foundations, which involve unfamiliar manipulations that need to be internalized. The remaining one-third introduces an artificial definition of "member" for each object and shows that familiar manipulations with members can be used to verify equality of morphisms, commutativity of square diagrams, and exactness of sequences of objects and morphisms. The consequence is that general results for categories of modules in homological algebra requiring such verifications can readily be translated into results for general abelian categories. The method with members, however, does not provide for constructions of morphisms member by member. Thus the construction of the connecting homomorphism in the Snake Lemma needs a new proof, and that is given in a concluding example.
First available in Project Euclid: 19 June 2018
Digital Object Identifier: 10.3792/euclid/9781429799928-4
Rights: Copyright © 2016, Anthony W. Knapp
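For orientation, the 6-term exact sequence that the Snake Lemma produces (referred to in the description of Section 3 above) is usually stated as follows: given a commutative diagram with exact rows $A \to B \to C \to 0$ and $0 \to A' \to B' \to C'$ and vertical maps $a$, $b$, $c$, there is a connecting homomorphism $\partial$ making $$\ker a \longrightarrow \ker b \longrightarrow \ker c \xrightarrow{\;\partial\;} \operatorname{coker} a \longrightarrow \operatorname{coker} b \longrightarrow \operatorname{coker} c$$ exact. This display is added here only as a reminder of the standard statement and is not part of the original abstract.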
iThink Biology
A3 Urbanscapes: A view of the city as an organism and our connection with nature
A3.1 Introduction
Capacities taught in this chapter:
- Scientific process: How do we study an urban ecosystem? How do taxa adapt to urban environments?
- Scientific tools: GIS: How can simple tools be used for mapping land changes?
- Reading and interpreting: Is the city an organism?
- Bridging science, society and the environment: Citizen science: how can citizen observations detect environmental changes?
- Quantitative skills: Is urban population growth the same everywhere?
biota: The plants and animals that live in a specific place, habitat, or time.
As students of biology it is difficult for us to think of a city as a biological landscape or an ecosystem. We associate ecosystems with regions that are considered more natural, such as mountains (Western Ghats chapter), forests (Figs chapter) or wetlands (Waterscapes chapter). However, ecosystems can also be defined as places where biota are concentrated and have many interactions. Cities and urban areas with their dense concentration of humanity are also places where biota are concentrated and interact with each other, so they also qualify as ecosystems. Urban areas assume importance when we consider the fact that over 55% of the human population now lives in urban areas, a number that will grow to 68% by 2050.1 This means that for the first time in human history, most humans live in a built environment rather than a natural one. This built environment is a unique landscape created by humans on the Earth's surface, but one that harbours all the features of the ecosystems that we studied in the chapters on Western Ghats and Waterscapes, as you will see. This human-created landscape is especially present in India, as over 50% of India's population will live in urban areas by 2030. India will also have seven megacities (population > 10 000 000) by the end of the decade.2 This point is highlighted in Figure A3.1a where we see urban population growth in Indian cities from 1950 to 2025.3 Study Figure A3.1a and follow the increase in the size of the red dots to see how our urban population has grown. The satellite image of India at night from 2012 (Figure A3.1b) shows that it is possible to 'trace' the country using the brightness of artificial light as a proxy for urbanisation.
social-ecological justice: Social justice refers to the fair distribution of economic, political and environmental benefits, such as quality health services, the right to vote, or access to natural resources. Ecological justice refers to the protection of the ecosystem against the negative impacts of human activities. Together, this approach to justice recommends a fair participation in decision-making and a recognition of the ecological services provided by nature. See also: ecosystem services.
The projected urbanisation for India will continue a huge shift in lifestyle for many people. If we are to create healthy and happy communities, we need to understand the urban landscape to plan for the accommodation of all these human beings. In the coming decade, people's first and sustained experience of nature will be in an urban environment. How do we plan our cities so that humans develop a deep and lasting relationship with nature, a relationship that is built on principles of social and ecological justice? For this to happen it is important for humans to connect with and appreciate nature.
As biologists we therefore need to understand how we can protect and increase natural spaces in cities to provide people with meaningful contact with the natural world. Figure A3.1a Urban population growth in India (red dots) 1950 to 2025. With copyright permission from Femke Reitsma. Figure A3.1b Satellite image of India at night (2012). NASA Earth Observatory. This chapter adopts multiple perspectives in examining a city. The first perspective is to consider a city to be an organism. Can the city really be understood as an organism, one that breathes, consumes and excretes? How can seeing the city as a living creature help us in understanding our interaction with it? Another perspective is the one given by the field of urban ecology. Urban ecology considers cities to be valid field sites and studies ecological processes within them in much the same way as we would in more natural areas. We will use two lenses to study the natural processes in a city: urban ecology straddles the disciplines of biology and geography, and each has its own way of understanding ecological concepts. While we do not explicitly speak about these differences, keep them in mind when you are reading this section. Another aspect of cities is the tremendous landscape change caused by built-up areas, roads, and other infrastructure. What effects do these activities have on the natural world? Firstly, fragmentation of natural habitats into smaller and isolated pieces leads to loss of biodiversity. Secondly, human activities cause problems such as air, soil and water pollution, over-extraction of groundwater, and urban heat island effects. While there are technical and structural solutions to some of these problems (such as more efficient internal combustion engines or better public transport), many of them can only be implemented by protecting (and providing) natural habitats for plants and animals in urban areas. Technological progress has fuelled the rapid pace of urbanisation globally. Satellite technology has helped us to map the Earth's land and water surfaces in greater detail than ever before. This can be seen in Figure A3.2, where urban growth in Bengaluru city between 1990 and 2015 is shown dynamically by overlaying satellite images on a map of Bengaluru. This visualisation allows for an immediate understanding of the unprecedented pace of the city's growth. Figure A3.2 Urban growth in Bengaluru between 1990 and 2015. The grey regions at the centre are part of the original city. The red is the urban sprawl seen in all directions by 2015. World Resources Institute (WRI) India, using EC-JRC and GHSL, India's Environmental Challenges in 10 Images. Satellite imagery has revolutionised the study of how land use has changed over several decades. This has become possible because longitudinal, high resolution data of much of the Earth's surface is available. The ability to understand data and use it for research and planning purposes has become increasingly important in ecology. Through the example of an urbanscape, we illustrate how geographic information systems (GIS) help ecologists. Over the last 30 years, GIS has influenced land use policy and it is now part of the standard toolkit of landscape biologists, urban planners, public health professionals, and so on. You can begin your GIS journey by doing Exercise A3.1. The benefits gained by human beings from the natural environment. To highlight the importance of preserving nature, biologists have formulated the concept of ecosystem services. 
Ecosystem services try to determine the benefits that nature provides to humans. We know that plants, animals, fungi and microbes perform important services, such as providing cleaner air and water, and are vital to human health. We also derive tremendous cultural and spiritual benefits from nature. This chapter introduces you to ecosystem services and uses a simple exercise to calculate the value of these services in an urban context. Through the concept of ecosystem services, we try to understand how biologists have framed the debate around conserving nature. As the world rapidly urbanises, such debates will become increasingly strident, and as students, we might have to come up with newer ideas to help people understand the value of nature better. Finally, we study the impact of human activity in urban landscapes on other organisms. While cities are dominated by one species, namely Homo sapiens, we also share the space with thousands of other species, including trees, animals, fungi, and most importantly, soil protists and bacteria. How do these biota interact with a completely new environment, and does the built environment create challenges for these creatures? Increasingly, biologists find that species living in such close proximity interact in interesting ways, making cities a completely new habitat that is worth studying. In many cases, plants and animals have shown interesting adaptations to survive (and in some cases thrive) in this landscape.4 Through this section we will begin our discovery of the non-human inhabitants that make up a city and find out how the urban landscape can affect a species. A3.2 Urban ecosystems Reading and interpreting Scientific process Scientific tools Quantitative skills India and Indians have a complex relationship with nature. We have a long tradition of reverence for nature, seen in our religious texts and hymns, and celebrated in festivals and songs. We all seem to love plants and animals, irrespective of our socioeconomic background. For instance, ecologist Harini Nagendra has documented gardens both in middle-class homes and in slum neighbourhoods of Bengaluru where people grow plants for a variety of reasons (aesthetic, cultural, or for sustenance).5 Urban Indians leave food outside temples and other places of worship for animals and birds, including ants, rats, dogs, pigeons and crows. Simultaneously, this caring attitude sits comfortably alongside a curious indifference to the destruction of nature that we see in modern India. Since economic liberalisation in 1991, economic growth in India has led to rapid urbanisation. Cities have expanded outwards at an exponential pace. This urban sprawl has placed tremendous pressure on the villages and natural spaces surrounding cities because of land conversion, resource extraction and waste dumping. In some cases, urban sprawl has extended into protected areas such as national parks, places that are meant to enjoy the highest protection of the law of the land. Urbanisation of rural and semi-rural areas has led to environmental pollution (such as chemical leachates in groundwater), biodiversity loss, and fragmentation of habitats. Further, within the core city areas, green space is diminishing as lakes, parks and other open spaces are converted to housing developments and built–up infrastructure. Study Figure A3.3 and think about how many of these scenes are familiar to you, where you live. Scenes of nature in the city. Figure A3.3 Scenes of nature in the city. a. 
Nishant D, from Wikimedia Commons, CC BY-SA 3.0. b. Kaustubh Rau. c. Nimish Subramaniam. d. Indi Samarajiva, from Flickr, CC BY 2.0. a. High rise apartments coexisting with informal settlements and green spaces in Mumbai This aerial view of Mumbai is a good representation of almost all emerging cities in India. While concrete buildings are sprawled in all directions, we find patches of trees and grass in various corners of the area. b. A street situated within an older neighbourhood in Bengaluru Often, you will find that despite the congested placement of buildings, trees find a way to thrive. In Bengaluru, one can even find rare instances of houses designed in ways that accommodate trees, instead of cutting them down. c. A lake in Bengaluru surrounded by high-rise apartments In Bengaluru, lakes are a popular spot around which walkways, cycle routes and benches are built. d. The view from Charminar in Hyderabad Even in streets as busy as this one, you will find patches of green growing onward and upward! Critical thinking Reading and interpreting. Based on your personal experience, would you agree or disagree with the statement that Indians appreciate nature while turning a blind eye to its destruction? Think through examples based on your own direct observations. Reflect on your own and your friends' attitudes and values with respect to nature and human needs. Global North The group of countries largely in the northern hemisphere of the globe that include the world's richest and most industrialised societies. Continents such as Europe and North America are considered to be part of the Global North. Due to the rapid pace of urbanisation, Indian cities have lost green space at an astonishing rate and now have lower levels of green space than their counterparts in the Global North. Figure A3.4 shows the amount of green space in terms of square metres per person in several Indian cities. The World Health Organization recommends a minimum of 9 m2 per person. Only Bhopal meets this requirement, with other cities falling far below the recommended area.6 Figure A3.4 Estimated per capita green space in square metres for Indian metros (2014). Adapted from Tripathi, NG, and Bedi, P. 'Digital Earth for Manipulating Urban Greens towards Achieving a Low Carbon Urban Society', IOP Conference Series: Earth and Environmental Science 18, no. 1 (2014): 012157, doi: 10.1088/1755-1315/18/1/012157. As India has urbanised and become more 'modern', its cities have lost natural spaces. Is there space in our cities for plants and animals, or should we only think about the human inhabitants and their needs? As biologists we need to build specialised knowledge of the need to preserve nature. In the following sections we will use two different ways to understand cities to see if these change our outlook. Urban metabolism: the city as an organism We tend to think of a city in terms of its physical infrastructure, such as concrete buildings, roads, and the electricity network. But are there ways to think about a city as a dynamic and functioning whole? For instance, could we think of a city as a cell, doing all the processes that a cell does to stay alive? urban metabolism A model used to analyse how resources are used and energy flows within an urban system like a city. 
Thinking about a city as an organism has its roots amongst nineteenth century sociologists and biologists, starting with Karl Marx and Friedrich Engels.7 In its contemporary avatar, this concept is termed urban metabolism and is widely used by urban planners and environmentalists to understand and plan cities. Critical thinking Bridging science, society and the environment Reading and interpreting Do you agree that a city can be considered a living organism? Justify the decision you make. Consider all the life processes that differentiate a living organism from non-living matter. Which life processes apply to a city? Which processes do not apply to a city? Urban metabolism tries to map the inputs and outputs of a city, treating it very much like an organism that eats, metabolises and excretes. Inputs into a city can include energy, food, and material goods, while outputs are wastes (solid, liquid and gaseous) and products that humans use. The concept of urban metabolism enables us to think of what the city will need to sustain itself in a healthy manner. Figure A3.5a shows a simple diagram of urban metabolism for a city. Currently most cities follow a linear model with inflows of food, materials and energy, and outflows of waste and manufactured products. ecological footprint A method used to measure how much land and water area is required to fulfil our demand for consumption of resources and generation of waste. We can also model a city on metabolic pathways that are circular, renewing and in some cases self-correcting. Circular pathways are less wasteful and reduce the city's ecological footprint. An example of using this framework is shown in Figure A3.5b for the city of Mumbai. Studying the figure, we see that water forms a major part of the inflow into the city. We notice that water output is 80% of water input.8 If we consider that all the water leaving the city is sewage (most of it untreated), then this brings new focus onto the problem of water usage and waste-water treatment. Releasing such large quantities of sewage into the creeks, estuaries and mangroves of Mumbai causes massive ecological damage. It is also a lost opportunity for the city to recycle its wastewater for internal use. Recycling sewage water would conserve natural water sources and place less strain on the water table. 1 PJ or petajoule = 10¹⁵ joule, which is equivalent to the energy released in a severe thunderstorm. 1 GJ or gigajoule = 10⁹ joule. Mumbai's energy consumption in 2013 was 272 PJ. Using a population estimate of 21 million in 2013, per capita energy consumption in Mumbai is about 13 GJ. This is low by Global North standards, but typical for a country like India which is not (yet) highly industrialised. Most of the energy used in Mumbai is electricity generated by coal-burning power stations, and fossil fuels that power industries and households. Transport also has a significant energy requirement. Carbon dioxide gas is the waste output of power stations, vehicles and biomass burning. Mumbai can reduce its current output of carbon dioxide emissions of 22 megatonnes (MT) by turning to renewable energy sources and increased public transport. This will improve the city's metabolism. Examining a city's demand on natural resources to provide water and energy encourages us to think of ways to reduce the demand. Figure A3.5a Schematic of linear urban metabolism showing inputs and outputs. Figure A3.5b Metabolic analysis of Mumbai. Adapted from Reddy, BS, 'Metabolism of Mumbai: Expectations, Impasse and the Need for a New Beginning', Indira Gandhi Institute of Development Research, Mumbai (January 2013).
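The per capita figures quoted above are simple ratios that you can check yourself. Here is a minimal sketch in Python, using only the numbers given in the text; the per capita CO₂ value is just the quotient of the quoted emissions and population, not a figure reported in the study:

```python
# Back-of-the-envelope check of Mumbai's per capita energy use and CO2
# emissions, using the figures quoted in the text (272 PJ of energy,
# 22 megatonnes of CO2, population of about 21 million in 2013).

PJ = 1e15   # joules in a petajoule
GJ = 1e9    # joules in a gigajoule

energy_2013_joules = 272 * PJ
co2_2013_tonnes = 22e6
population_2013 = 21e6

per_capita_energy_gj = energy_2013_joules / population_2013 / GJ
per_capita_co2_tonnes = co2_2013_tonnes / population_2013

print(f"Per capita energy use: {per_capita_energy_gj:.0f} GJ")          # ~13 GJ
print(f"Per capita CO2 emissions: {per_capita_co2_tonnes:.1f} tonnes")  # ~1.0 tonne
```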
Exercise A3.1 is intended to guide you to think about a city as a living cell or organism. As you work through it, reflect on any changes in your perspective about your city (and neighbourhood). Exercise A3.1 How is a city like an organism? Scientific process Reading and interpreting Use the table below to write down the characteristics of a city and a cell or organism. A few points are filled in to get you started. Can you think of more points of similarity than those given in the table?
Characteristic | Cell/organism | City
Form | Defined form with growth in specific developmental stages. Form is controlled by DNA. | Form is not defined and growth is unplanned. (Exceptions are planned cities like Chandigarh.) Do cities have units like cells in an organism?
Inputs: Energy | Autotrophs trap energy from the sun in photosynthesis. Heterotrophs obtain energy by eating or decomposing other organisms. |
Inputs: Water | Natural sources of water such as rivers, lakes and rainfall are used for habitat and body requirements. |
Inputs: Materials | Animals need materials for housing. They use locally available resources for this purpose. Plants require soil for anchoring and nutrients. | What materials does a city require? Where are they sourced from?
Outputs: Waste (organic and inorganic) | |
Regulation | Homeostasis: temperature and metabolic balance (fluids, pH, blood pressure, and so on) are maintained. |
Similarities and differences between a city and a living organism. Consider your surroundings, which can be your home, apartment complex or the neighbourhood in which you live. Determine the inflows and outflows of your locality. How does energy come into the system and flow out of it? What is the waste emissions profile of the system? How is a city NOT like an organism or cell? Urban ecosystems energy flow The flow of energy through the various components of a system. nutrient cycling The flow of nutrients between the biotic and abiotic components of an ecosystem; nutrients are taken from the environment and pass through living organisms, only to be released eventually back into the environment. The second way to think about a city is as an ecosystem. While the concept of an ecosystem applied to a city seems paradoxical, cities possess all the factors that define an ecosystem. We typically describe an ecosystem using three aspects: abiotic and biotic components, energy flow and nutrient cycling. More recently, socioeconomic aspects have also been included in the analysis of an ecosystem. Socioeconomic factors are particularly important in an urban context. Figure A3.6 shows a large, urbanised area with a variety of housing (apartments and slums) with natural elements like green spaces and water bodies. How can we use the concepts of a generalised ecosystem to understand an urban ecosystem? Figure A3.6 An urban ecosystem in India. Extra reading Components of an urban ecosystem Abiotic components: Temperature, rain, wind, light, soil, minerals, topography. Artificial light, sound (noise) and air quality are important abiotic factors in a city. Built-up areas and paved surfaces define the topography of a city. Soils and topography are important when planning new settlements. Biotic components: Producers: green plants that are hardy (many exotic species) and can survive in urban settings, along with human-created green spaces that can have native and exotic species; human beings also provide food as waste to sustain many non-human inhabitants in the ecosystem, so could be termed producers.
Consumers: humans and other animals such as rodents, feral cats, dogs, birds and insects; all are more generalists than specialists. Decomposers: bacteria, fungi and protozoans break down waste food and sewage; feral birds and animals scavenge waste in Indian cities and contribute to the breakdown of waste. Energy flow: how does energy flow through different trophic levels? Normally there are three to four trophic levels with the sun as the primary energy source. All species are interconnected, so it is better to think of this as a web. A city can have similar trophic levels, but humans are at the top of the pyramid. Humans artificially sustain certain species, such as plants in parks and gardens, birds, dogs and rodents. Energy sources other than the sun are available, including electricity, natural gas and fuel for motor vehicles, all dependent on fossil fuels. Nutrient cycles: movement and exchange of organic and inorganic matter: There are several nutrient cycles such as water, carbon, nitrogen and phosphorus. Nutrients flow into and out of the system and are renewed periodically. All cycles are balanced in a properly working ecosystem. These cycles can be out of balance due to human activity, for example, sewage that flows into rivers and lakes can cause severe damage to these water bodies, leading to algal blooms and fish kill. Industrial and vehicular emissions affect the carbon and nitrogen cycle. Sewage treatment works help to break down human waste and recycle water and nutrients. Socioeconomic context: Natural ecosystem: villages depend on the natural ecosystem for several resources, including extracting timber, minor forest produce, and grazing, ideally without affecting it. Urban: complex interactions between government, industry and civil society that decide how the ecosystem develops. Currently economic growth and migration patterns are major drivers of India's cities. Case study: Analysing the city of Surat as an urban ecosystem Using the components of an ecosystem described in the extra reading as a guide, we will study the city of Surat as an urban ecosystem in India. Surat is India's eighth largest city and is going through a tremendous expansion phase. By one account, it will be the fastest growing city in the world between 2019 and 2035.9 It is also one of the cities in the Government of India's Smart Cities programme. These developments make it an interesting example for us to use in understanding urban ecosystems. A bend in the river: different views of Surat city as an urban ecosystem. Figure A3.7 A bend in the river: different views of Surat city as an urban ecosystem. a. Google Maps, 2021. b. Rahul Bhadane, Wikimedia commons, CC-BY-SA 4.0. c. Rahul Bhadane, Wikimedia commons, CC-BY-SA 4.0 . a. A satellite image of Surat city in which the outline represents the limits of the city Such satellite images make the bend of the Tapi river look prominent b. An aerial view of Surat showing dense urbanisation near the Tapi river While this is an image of a concrete jungle, notice small patches of green in between the buildings c. Tapi river front under construction River fronts are becoming increasingly popular and offer residents of busy cities an opportunity to be closer to natural environments Table A3.1 is a datasheet for the city of Surat. Using the same categories in this table, we will see how the city is performing as an ecosystem. The region where fresh water from rivers and streams meets the salt water from the ocean. 
The area from which rainfall drains into a river. Abiotic: Temperature range (average): 22–33°C with >300 days of sunlight. Yearly rainfall (average): 1202.8 mm. Average humidity: 60% (range 41–85%). Topography: Surat has a flat landscape with an average elevation of 13 metres (so it is almost at sea level). The floodplains of the Tapi meet the coastal plains near the sea. Soils: Substratum of basaltic lava flows covered with black loamy to clayey soils. Surat district is at the north-western edge of the Deccan plateau, which explains the basalt layer. Surat lies on the banks of the Tapi river. The river fans out into a delta with mangroves where it meets the Arabian Sea. This estuary (river-delta) is an important component of the urban ecosystem. In 2006, Surat suffered a severe flood with 80% of the city being under water. This was the third major flood since 1994, caused by release of waters from the Ukai dam on the Tapi river, 100 km upstream of the city. Urbanisation and a reduction in the catchment area are partly responsible for the floods. The city has instituted disaster management plans and a flood control network to account for future extreme weather events. Biotic: Human population 4.46 million (2011 census). Within the city limits the fauna is typical of urban areas. The river and estuary harbour many aquatic and amphibian species. Flora is primarily planted by humans and is a mix of native and exotic species. A 2013 Gujarat Forest Department study recorded low tree density (8.6 per hectare) in Surat city. A 2014 study recorded that space for gardens and recreational areas is 3% (2.34 km2) of the total urbanised area. The green space per capita value for Surat is 0.5 m2 (Figure A3.4). The human population increased by 63% between the 2001 and 2011 censuses. (Estimated population in 2021 is 7.5 million.) A 68 hectare Biodiversity Park is being constructed on both banks of the Tapi river. This is meant to target areas used for illegal garbage dumping, sewage outflows and unplanned construction. Energy flow: Terrestrial and aquatic food webs. Human activity drives both these trophic webs with sewage and garbage being major contributors. Loss of vegetation leads to reduction in primary productivity. The riverine environment harbours many aquatic species and threatened bird species such as the lesser flamingo (Phoeniconaias minor). Nutrient cycling: Carbon: Perturbations due to human activities, mainly from fossil fuel emissions. Nitrogen and phosphorus cycle: Not enough information. Water cycle: While water table levels are high (availability at 5–10 m) over-extraction has seen ingress of saline water into wells. The city government has calculated per capita carbon emissions at 4.46 tonnes. This value is lower than other Indian cities but far higher than the all-India value of ~2 tonnes. The city has a bus rapid transit system to reduce transport-associated carbon emissions. Human activity (waste dumping, sewage flow and over-extraction) are all affecting the water cycle. There are plans to convert 80% of biodegradable waste into compost. Hub for diamond processing, textiles and industries. The city is part of the Vapi-Ankleshwar industrial belt. Hazira liquified natural gas (LNG) port is nearby. Migration for jobs is a major cause of population growth. Surat is expected to grow at 9.5% per year and will continue to urbanise rapidly. Table A3.1 Surat city urban ecosystem datasheet. Table A3.1 reveals several interesting features about Surat. 
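Before turning to those features, here is one quick arithmetic check we can do on the datasheet itself: the per capita green space figure follows directly from the garden area and the census population quoted above. A minimal sketch; the 2021 value is only an illustration of what happens if green space stays fixed while the estimated population grows, not a number from the datasheet:

```python
# Checking the per capita green space entry in Table A3.1 for Surat.
# 2.34 km2 of gardens and recreational areas (2014 study), divided by the
# 2011 census population of 4.46 million.

green_space_m2 = 2.34e6        # 2.34 km2 expressed in square metres
population_2011 = 4.46e6       # 2011 census
population_2021 = 7.5e6        # estimated 2021 population (from the text)

per_capita_2011 = green_space_m2 / population_2011
per_capita_2021 = green_space_m2 / population_2021  # assumes no change in green space

print(f"Per capita green space, 2011: {per_capita_2011:.2f} m2")     # ~0.5 m2, as in Figure A3.4
print(f"Same green space, 2021 population: {per_capita_2021:.2f} m2")
```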
It is clear that the Tapi river and its associated estuary are a dominant feature of the landscape and the city has grown around the river. Tidal inflows influence the river for several kilometres upstream to form a brackish water delta which has unique flora and fauna typical of mangrove forests. During winter months, many migratory birds visit the estuary and the city to feed on the diatoms, copepods and crustaceans that thrive in the brackish water. Water flow depends on the Ukai dam 100 km upstream of the city. As with many rivers surrounded by urban settlements, the Tapi suffers from sewage inflow and garbage dumping, affecting aquatic and human life. The city government has reduced sewage flow by constructing a citywide sewage system with treatment plants. A 68 hectare Biodiversity Park is under construction on the river banks to rejuvenate biodiversity. However, the projected population growth means that urbanisation will continue for the near future. It is not clear how this urbanisation is to be balanced with protecting the fragile estuary and river system, increasing biodiversity, and reducing environmental pollution. land use/land cover Land use is the purpose for which land is being used, such as agriculture. Land cover is the surface cover on the ground, such as urban infrastructure or bare soil. When used together, this term indicates the categorisation of both human activities and natural processes within a particular time frame. Table A3.2 shows the changes to land use/land cover characteristics of Surat between 1998 and 2016.10 The city has used open land for urban growth. Vegetation has reduced, a trend that has accelerated in the last decade. As with many cities in India, there is a dearth of basic data to make clearer projections and recommendations. It is possible to generate data through environmental surveys of flora, fauna, soil and water, and as biology students, these are clear ways for us to be involved in deciding the city's future. The category 'bare soil' in Table A3.2 is a misnomer, since these are open spaces with ground cover and scrub vegetation that harbour biodiversity and serve as groundwater recharge areas.
Land use/land cover type | % change between 1998 and 2016
Water bodies | −9.4
Vegetation | −6.0
Built-up | +46
Bare soil | −44
Table A3.2 Land use/land cover change between 1998 and 2016 in Surat City. Mukherjee, F and Singh, D, 'Assessing Land Use-Land Cover Change and its Impact on Land Surface Temperature Using LANDSAT Data: A Comparison of Two Urban Areas in India', Earth Systems and Environment 4, no. 2 (2020): 385–407, doi: 10.1007/s41748-020-00155-9. Critical thinking Quantitative skills Reading and interpreting What is the total loss of vegetation areas between 1998 and 2016? Compare the total loss of vegetation areas with the increase in built-up areas. Why do you think they do not balance? Study Table A3.2 and read the sidenote before you answer this question. The case study of Surat provides us with a way to analyse urban spaces through an ecosystem framework. What have we learned by examining the case study? The city has to balance the competing demands of preserving the riverine ecosystem and the economic impetus that drives the human inhabitants. A healthy ecosystem is intricately linked with human health and wellbeing, so ultimately it is to our benefit to preserve and protect nature. The effect of ecosystem degradation can be asymmetric, depending upon your socioeconomic status.
Very often the communities most affected by degradation are those that are economically and socially most vulnerable. Can you do a similar analysis for your city or neighbourhood? To research different aspects of your ecosystem you can combine your personal observations with reliable information retrieved from the Internet, as shown in the chapter on Waterscapes. Land use change Land use change is the process by which human activities transform the natural landscape.11 Dramatic land use change has been a defining feature of Indian cities in recent years. Fuelled by economic growth, cities have expanded rapidly, adding new housing developments, industrial hubs and roads. Natural ecosystems have rapidly degraded as cities have expanded outwards. Green spaces have become fragmented, leading to loss of biodiversity. Land use changes also perturb nutrient cycles, for instance, urbanisation affects the carbon and nitrogen cycle through industrial and vehicular emissions and loss of topsoil. Therefore, it is important to map and understand land use changes for making decisions about cities. Figure A3.8 shows land use changes for the National Capital Territory of Delhi.12 We see an increase in built-up land with the city expanding in all directions (note especially urbanisation in the north-west and south-west parts of the National Capital Territory). We see an accompanying decrease in 'fallow' land. Fallow land is arable land that is not being used for growing crops. It often harbours tremendous biodiversity. Typically, urban planning bodies and municipalities see fallow land as places for expansion, when in fact they need to be preserved for their inherent biological or agricultural value. Figure A3.8 Changes to the land use patterns of the National Capital Territory of Delhi between 2001 and 2017. Follow the red to see the increase in built-up areas. Adapted from Pramanik, S and Punia, M. 'Land Use/Land Cover Change and Surface Urban Heat Island Intensity: Source–Sink Landscape-Based Study in Delhi, India'. Environment, Development and Sustainability 22, no. 8 (2020): 7331–7356, doi: 10.1007/s10668-019-00515-0. Critical thinking Reading and interpreting What aspects do we need to consider when comparing satellite images of an area, as shown in Figure A3.8? Think about scale and season. Why do you think vegetation cover (green areas) expanded in the time period shown? Check if the city has a strong environmental culture. Why do you think built-up areas increased in a north-westerly direction? Reflect on factors that could affect a city's growth, such as land type, proximity to services, and ease of conversion. Geographic information systems (GIS) activity How do we find out data such as those shown in Figure A3.2 and Figure A3.8? Is it possible for students to generate such data? The answer is yes. Remote sensing using satellites has revolutionised mapping across the Earth. Access to public image databases allows us to map land use changes on our computers. We can generate maps that help us understand ecosystem changes and provide enhanced data to city planners, environmentalists, and civil society to make better land use decisions. Most Indian cities lack such maps at a neighbourhood scale, and with land parcels being ill-defined, open spaces can easily be converted into built-up areas. Using simple tools like Google Earth and ImageJ, students can generate maps of their neighbourhoods that will help concerned citizens, architects, town planners, and city commissioners. 
Let us do a simple exercise that will allow us to generate such maps. Although we have chosen a simple exercise, it is possible to add layers of complexity to land use maps and examine different categories of land use. Exercise A3.2 Identifying Land Use Change using Google Earth Scientific process Quantitative skills A series of annotated screenshots (Ayushi Biswas) accompanies this exercise, showing each stage: opening Google Earth, searching for a location, the image properties and toolbar elements (circled in red), a historical image from 2004, the ruler icon and measurement toolbox, drawing and naming polygons, switching to a 2021 image, marking human settlements, and overlaying the polygons from the two years (yellow and red). The steps are:
1. Download Google Earth Pro and open the application on your device.
2. Search for your location using the search bar (see the box outlined in red). We have chosen Kada Agrahara, a village in Karnataka.
3. Find the view of the same image at different time scales using the 'Show Historical Imagery' option in the toolbar.
4. Slide the cursor to the required year, avoiding any image with cloud cover or blurry images (we have chosen an image from 2004).
5. Click the ruler icon and go to the polygon option. Use the polygon tool to mark out an area of interest. In the image we have chosen to demarcate a human settlement.
6. Note the area value given by the ruler for the demarcated region and save the polygon, giving it a name that describes the demarcated area. Note down the area shown in the measurement tab; the units for area can be chosen in the dropdown menu.
7. Similarly, using the ruler and polygon option, a different area can be measured. (In the image a much larger area has been chosen to be measured.)
8. Choose an image from a different year and repeat steps 5 and 6. In the example we have chosen the year 2021. Calculate the area of the new polygon and note it down.
9. Using the values for 2004 and 2021 we can calculate the change in a particular land use type. In the example we have chosen human settlements, but it could also be possible to look at changes to agricultural land, forest land and water bodies.
It is possible to overlay previously saved polygons for the same image series. Total area, and area for human settlements, for the years 2004 and 2021 can be obtained from the Polygon properties in the Measurements option.
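To finish the exercise, here is a minimal sketch of the final calculation. The area values are placeholders, not measurements from Kada Agrahara; substitute the polygon areas you read off the Measurements tab for your own location:

```python
# Final step of Exercise A3.2: change in a land use type between two years.
# The two areas below are hypothetical placeholders; replace them with the
# polygon areas you measured in Google Earth.

settlement_area_2004_m2 = 18_500.0   # hypothetical polygon area, 2004 image
settlement_area_2021_m2 = 41_200.0   # hypothetical polygon area, 2021 image

change_m2 = settlement_area_2021_m2 - settlement_area_2004_m2
percent_change = 100 * change_m2 / settlement_area_2004_m2

print(f"Change in settlement area: {change_m2:.0f} m2")
print(f"Percentage change since 2004: {percent_change:.1f}%")
```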
The percentage change can then be calculated, as in the sketch above. Why do you think we emphasised the role of nature in the city in the previous two sections? One could argue that cities should be for humans, and areas outside cities should be for all the non-human inhabitants of the planet. One line of thought argues that urbanisation is good for the planet. With a dense concentration of settlements, there is greater efficiency in delivery of materials and services, leaving larger undeveloped areas of natural vegetation and wildlife. Wouldn't that work in the interest of both humans and non-humans? One reason that it doesn't work is that urbanisation takes over spaces that were previously other kinds of natural ecosystems such as forests or grasslands. Urban development displaces the original inhabitants of that ecosystem. For example, Mumbai is built on what were originally marshes. Some aspects of the original ecosystem are incorporated into the city, for example Mumbai retains remnants of the original marshes and mangroves along its coastline. If we value the lives of other species, then we need to preserve as much of their original habitat as still exists. A second reason is that nature is beneficial to the wellbeing of humans. Urban areas need a significant amount of nature within their boundaries for a healthy, functioning society. Think of nature as an environmental service provider that is necessary for human existence. The concept of nature as an environmental service provider was formulated and discussed by several biologists from the 1970s onwards. Ecologist Gretchen Daily and her colleagues gave it the name 'ecosystem services'.13 The rationale was that attaching monetary values to all the services that nature provides to humans enables activists to convince decision makers to preserve nature. Arguments about the intrinsic value of nature (conservation for its own sake) or the ethics of preserving all life on Earth (all species have an equal right to live) have been less successful. Ecosystem services are now defined as products or processes of natural systems that directly or indirectly benefit humans or enhance social welfare. Examples include erosion control provided by trees and grasses whose roots hold the soil, temperature regulation by forests, and pollination services by bees and other insects. Figure A3.9 Categories of ecosystem services as formulated by the Millennium Ecosystem Assessment, 2005. Millennium ecosystem assessment, Ecosystems and human well-being: Synthesis (Washington, DC: Island Press, 2005) and TEEB Manual for Cities: Ecosystem Services in Urban Management, The Economics of Ecosystems and Biodiversity (2011), accessed 3 November 2021. primary productivity The rate at which organic compounds are produced from the conversion of environmental energy. There are four categories of ecosystem services: Provisioning: material goods obtained from nature that are necessary for human survival. Examples are food, water, wood, fibre and medicines. Regulating: regulation of processes, such as climate and water purification. Examples are pest control by birds, flood control by forests, and water purification by reed beds in wetlands. Cultural: non-material benefits of ecosystems such as educational, heritage and spiritual benefits from sacred groves and recreational gardens, for example. Supporting: processes on a regional to global scale such as soil formation, biodiversity maintenance, nutrient cycling, and primary productivity.
Critical thinking Bridging science, society and the environment The idea of ecosystem services is one way of thinking about nature in urban areas. Are there other lenses through which we can view nature? Think about ethics, attitudes and values as applied to the natural environment as opposed to human needs and wants. How does one go about assessing ecosystem services in urban areas? We will do an exercise using a recent study from West Bengal.14 The authors, Das and Das, studied the impacts of urbanisation on ecosystem services through land use/land cover (LULC) changes for Old Malda. Old Malda is a small town located about 300 km north of Kolkata, near the Bangladesh border. Satellite data from four years (1990, 2000, 2010 and 2017), spanning 27 years, were used. Attempt the exercise that follows to calculate the ecosystem services value for Old Malda in this period. Exercise A3.3 Ecosystem services The authors used the following steps to calculate the ecosystem services value for Old Malda. Step 1: Based on satellite imagery, define land cover types under four categories.
Land cover type | Equivalent biome | Value coefficient (US$ ha⁻¹ yr⁻¹)
Built-up (areas with houses and other buildings) | Urban | 0
Agricultural land (areas with crop production) | Crop land | 92
Vegetation (green space) | Forest | 969
Water body (blue space: open water) | Rivers, lakes, ponds | 8498
Calculating ecosystem services values for Old Malda. Das, M and Das, A, 'Dynamics of Urbanization and its Impact on Urban Ecosystem Services (UESs): A Study of a Medium Size Town of West Bengal, Eastern India', Journal of Urban Management 8, no. 3 (2019): 420–434, doi: 10.1016/j.jum.2019.03.002. The value coefficient (VC) signifies the monetary value provided by the particular land cover type. The value has been assigned based on earlier studies. Therefore, agricultural land provides goods and services that are equivalent to 92 US dollars per hectare per year. Note the high VC of forests and water bodies (green and blue spaces). Step 2: Estimate the ecosystem services value (ESV) by calculating the area (A) of each land type and multiplying it by the value coefficient. The area of each land cover type is estimated from the satellite data. The GIS activity that you did earlier can be used for estimating the area of each land cover type. Sum all the ESV values to get the total value of ecosystem services. \[\text{ESV}_{\text{water body}} = \text{A}_{\text{water body}} \times \text{VC}_{\text{water body}}\] \[\text{ESV}_{\text{total}} = \text{ESV}_{\text{built-up}} + \text{ESV}_{\text{agricultural}} + \text{ESV}_{\text{vegetation}} + \text{ESV}_{\text{water body}}\] Use the equations to calculate the ESV for each land cover type and the total ESV for 1990 and 2017, using the table below. (A worked sketch of this calculation appears after the note on the Kappa coefficient below.)
Land cover type | Area (hectare), 1990 | Area (hectare), 2017 | Ecosystem services value (USD), 1990 | Ecosystem services value (USD), 2017
Built-up | 212 | 313 | |
Agricultural land | 315 | 235 | |
Vegetation | 309 | 214 | |
Water body | 119 | 194 | |
Calculating changes to ecosystem services for Old Malda for 1990 and 2017. Exercise A3.3 shows that it is possible to calculate ecosystem services value for any ecosystem if we have access to data. Das and Das used several checks to ensure that their calculations and interpretations were correct. These were: Calculate the Kappa coefficient (K) to determine the level of agreement between the image data and field reality. This needs a reference data set for which you know the land cover type. The K value ranges between 0 and 1, with a value >0.85 indicating strong agreement between the two data sets.
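Here is the worked sketch referred to in the exercise: a minimal Python version of Step 2, using the areas and value coefficients from the two tables above, so you can check your own answers (the authors' remaining checks continue below):

```python
# Step 2 of Exercise A3.3: ecosystem services value (ESV) for Old Malda.
# Areas (hectares) and value coefficients (US$ per hectare per year) are
# taken from the tables above.

value_coefficients = {"built-up": 0, "agricultural": 92,
                      "vegetation": 969, "water body": 8498}
areas_1990 = {"built-up": 212, "agricultural": 315,
              "vegetation": 309, "water body": 119}
areas_2017 = {"built-up": 313, "agricultural": 235,
              "vegetation": 214, "water body": 194}

def total_esv(areas):
    """ESV of each land cover type is area x value coefficient; sum over types."""
    return sum(areas[lc] * value_coefficients[lc] for lc in areas)

esv_1990 = total_esv(areas_1990)
esv_2017 = total_esv(areas_2017)
change = esv_2017 - esv_1990

print(f"Total ESV 1990: {esv_1990:,} USD")   # 1,339,663 USD
print(f"Total ESV 2017: {esv_2017:,} USD")   # 1,877,598 USD
print(f"Change: {change:,} USD ({100 * change / esv_1990:.0f}%)")  # 537,935 USD, ~40%
```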
Based on statistical analysis, the authors confirmed that using the value coefficient calculated for a global biome was valid for land cover types in Old Malda, since they weren't exact matches (for example, the forest biome value was used for all land classified as vegetation, although it may not necessarily be forest). It is possible to calculate the ESV for each ecosystem service category (provisioning, regulating, and so on) to determine how much each contributes to each land cover type. As our calculation shows, the total ecosystem services value of Old Malda increased by about 40% (537 935 USD or about 4 crore rupees). The table you completed in Exercise A3.3 shows changes in all land cover types between 1990 and 2017. Built-up areas and water bodies increased while agricultural and vegetation land cover decreased. (Das and Das do not comment on the 63% increase in area for water bodies.) As always, the devil is in the details, and we should only look at Exercise A3.3 as an exercise in calculating ecosystem services value, and not focus too much on the individual numbers. Similar studies from around the world have shown that natural spaces have shrunk dramatically, leading to reductions in the 'value' of the urban landscape by drastic amounts. Das and Das claim that this has had severe impacts on human health and wellbeing. Reflect critically on the assumptions on which the ecosystems services concept is based. Are you adversely affected by lack of contact with nature? Are you aware of declining human health and wellbeing that can be ascribed to urbanisation? Are you aware of species that have benefitted from urbanisation? In this section we learned about two ways of understanding the city as an environment. One can be thought of as a metaphor, with the city being treated as a living organism. We saw that this can be a powerful way to describe and quantify the city's dynamic processes, although it has some limitations. It omits the biotic components of the city, but is a good way to look at the level and impact of human activity. The second way to understand a city is the ecosystems approach which takes an ecological perspective. We used the case study of Surat city to map the components of an urban ecosystem and the effects of urbanisation on abiotic and biotic components. It is likely that both approaches are valid and can be used together for studying a city. We also saw the importance of mapping land use changes, and learned how to use simple image analysis tools for studying changes to land use. You should use this activity as a starting point for more detailed analysis. Finally, through the concept of ecosystem services, we looked at how we can calculate the value of nature's services. The combination of GIS and ecosystem services value analysis can provide valuable data on your neighbourhood or city, and you should try to generate such data. A3.3 Adapting to the city Reading and interpreting Scientific process From nature's perspective, cities are an explosion of a new and unusual type of environment dominated by a single species, Homo sapiens. Humans are well adapted to their urban landscapes, but what of the other species we share our cities with? Before we study the effects of urbanisation on other species, let's define certain terms that allow us to talk about this subject. Synurbisation is the acclimatisation or adjustment of wild animal species to an urban environment. Leopards in the Sanjay Gandhi National Park in Mumbai are an example of this phenomenon. 
Synanthropy happens when wild or undomesticated species live in close association with humans and benefit from this association. We have many such examples in India, including dogs, cats, crows, pigeons and monkeys.15 In the last few decades, many studies have explored synurbic attributes in plants and animals, particularly emphasising species' responses to the growing levels of air, water, noise, and light pollution. Globally, studies have shown that air and water pollution can impede respiratory and reproductive capabilities in animals, and in plants, polluted air can dramatically reduce photosynthetic capabilities and enzyme activities. The impact varies among species, for example mangoes are more tolerant of air pollution than jackfruit, lemon and neem.16 Melatonin is a chemical in our bodies that regulates our internal clock. A lux is the illumination that falls on a 1 m2 area that is 1 metre away from the flame of a candle. One example of this is that light pollution reduces melatonin production in humans as well as animals (Figure A3.10). We see that even exposure to very low nocturnal light levels can suppress melatonin in a range of species. Note that an illuminance of 100 lux is equivalent to a very dark day, as we would experience if the sky was covered by dark rain clouds. Light pollution can also change migratory patterns in birds and nesting behaviour in sea turtles, endangering these species. circadian clock A natural process that occurs within organisms, regulating body functions with the 24 hour cycle. This internal clock is usually synchronised with natural light and temperature conditions and regulates sleep and hunger. Artificial light can also change the circadian clock. The change reduces chlorophyll and nitrogen content in some plants, making them more susceptible to pests and diseases.17 Similarly, noise pollution, apart from affecting humans, can change reproductive cycles and behaviour in birds, bats, insects, and very vulnerable taxa such as frogs. Human activity that leads to various kinds of pollution can have far reaching effects on plants and animals. In the next section we will examine the effects of air pollution on a creature we wouldn't normally consider to be much affected by this problem, the Asian honey bee (Apis dorsata). Figure A3.10 Relationship between the number of studies and lowest illuminance of nocturnal light reported in those studies to suppress melatonin for different vertebrate groups. Humans are the most intensively studied and show melatonin suppression across a wide range of illuminance. Adapted from Grubisic, M et al., 'Light Pollution, Circadian Photoreception, and Melatonin in Vertebrates', Sustainability 11, no. 22, art. 6400 (2019), doi: 10.3390/su11226400. Critical thinking Quantitative skills How do we understand the graph shown in Figure A3.10? First study both axes and note that a logarithmic scale is used. This means there is a wide variation in the number of studies and the range of illuminance. Secondly, study the blue bars shown in the graph and discuss what they mean. Thirdly, consider what is being shown in the graph. Why are researchers studying the effect of light on melatonin production in different animals? Case study: Air pollution and giant honeybees Geetha Thimmegowda et al. 
from the National Centre of Biological Sciences (NCBS), Bengaluru (India), recently studied the effect of air pollution on giant honeybees (Apis dorsata).18 Specifically, they researched how different levels of air pollution affect the morphology and physiology of honeybees. The researchers selected four different neighbourhoods in Bengaluru city that had different levels of air pollution: rural, low-polluted, moderately-polluted, and highly-polluted (Figure A3.11a). The particulate matter at each site was measured using a portable pollution monitoring device (Smart Air Quality Monitor with Laser Sensor). Particulate matter is a mixture of solid particles and liquid droplets found in the air. We will see later why they chose to study particulate matter. The researchers performed the study over three years, and at each site, they chose one plant species, Tecoma stans, (a flowering ornamental plant present at all the sites) for bee observation and collection. They kept collected bees in a cage for observation and measurement of physiological parameters such as heart rate. Critical thinking Scientific process The authors in this study compared four different sites to test if air pollution affected the honeybees. What did they use as a control? If your answer is a rural site, can the rural area be considered pollution-free? Do we always need a control, especially in a study of this type? If a control doesn't exist, can it be designed? See the scientific process in the Western Ghats Chapter. Figure A3.11 Effects of pollution on honey bees. a. Study site locations in Bengaluru city with levels of pollution in inset. b. Apis dorsata. c. Distribution of suspended particulate matter in the four sites. d. Number of bees counted at each site. e. Percentage of bee survival in different levels of pollution. Thimmegowda, GG et al., 'A Field-Based Quantitative Analysis of Sublethal Effects of Air Pollution on Pollinators', Proceedings of the National Academy of Sciences 117, no. 34 (2020): 20653–20661, doi: 10.1073/pnas.2009074117. Exercise A3.4 Impact of pollution on honeybees Reading and interpreting Quantitative skills Interpret the data shown in Figure A3.11. Compare the levels of suspended particulate matter in the four study sites. Compare the number of honeybees counted at each site. Compare the survival rate of bees from the four sites. What would you conclude about the effect of particulate matter in the air on bee numbers and survival rates? The researchers found that particulate matter increased from rural to more polluted neighbourhoods. The number of bees collected and the percentage of bees that survived decreased from the rural area towards more polluted neighbourhoods (Figure A3.11d and Figure A3.11e). Next, the researchers investigated why air pollution might affect bee survival. They measured the percentage of body area covered with particulate matter from pollution. Examining the wings, antennae and hind legs of the bees, they found particulate matter on bee bodies to be significantly higher in the areas with high pollution compared to the rural site (Figure A3.12). The authors also found that air pollution affected the bees physiologically by lowering their respiration rates. Figure A3.12 Micrographs showing the effect of low (left panel) and high (right panel) levels of pollution on different parts of the giant honeybee. This study shows us that pollution can have far-reaching impacts on all organisms, not just humans. 
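If you want to organise your answers to Exercise A3.4, a short sketch of the kind of comparison it asks for is given below. The counts are purely illustrative placeholders, not data from the Thimmegowda et al. study; read the real values off Figure A3.11c–e:

```python
# Organising the comparison asked for in Exercise A3.4.
# All counts below are illustrative placeholders, NOT data from the study.

bees_counted = {"rural": 120, "low-polluted": 100,
                "moderately-polluted": 70, "highly-polluted": 45}
bees_surviving = {"rural": 96, "low-polluted": 70,
                  "moderately-polluted": 42, "highly-polluted": 18}

for site in ["rural", "low-polluted", "moderately-polluted", "highly-polluted"]:
    survival_pct = 100 * bees_surviving[site] / bees_counted[site]
    print(f"{site}: {bees_counted[site]} bees counted, {survival_pct:.0f}% survived")
```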
Remember that the effects of pollution are not restricted to urbanscapes; its spillover effects can be felt in the rural and natural outskirts as well. While living with humans might be advantageous for some species, it is detrimental to others. This section examined how species other than humans are adjusting (or not) to human-made novel environments. Owing to the opening of novel ecological niches, many native species might find it challenging to synurbanise into an urbanscape. Different types of human-generated pollutants have influenced the morphology, physiology, and behaviour of many species beyond repair, and we discussed a case study of the effect of pollution on honeybees. A3.4 Cities and public health Scientific process Quantitative skills Bridging science, society and the environment Due to the oversupply of human food, food wastage and inefficient food disposal, most Indian cities are dealing with monumental problems of garbage scattered throughout the city. Garbage in the city is unsightly and has serious ecological consequences. It attracts vermin and pests, as well as stray animals, including dogs, cats, monkeys, mongooses, and sometimes jackals along the fringes of the cities. Let's take the example of dogs. The streets of Indian cities have many stray dogs. Stray dogs are domestic dogs that have bonded with humans and then been abandoned on the streets without direct human supervision or care. If stray dogs breed, their puppies never bond with humans and become feral dogs. Stray dogs in a typical Indian city. Figure A3.13 Stray dogs in a typical Indian city. a. Hil Vanderwaal, Wikimedia Commons, CC BY 3.0. b. Pranjal Nath, Wikimedia commons, CC-BY-SA 4.0. c. Loganathan R, Wikimedia commons, CC-BY-SA 3.0 a. Free ranging dogs are found around restaurants and other places where food is available It is common to find that many municipalities or urban centres ban visitors from feeding stray dogs. b. A group of stray dogs on a street Such a sight is common to any Indian city c. Municipal workers catching stray dogs in Tamil Nadu, India Formal complaints made by residents often lead to such measures taken by the municipality Unregulated and unmanaged waste disposal in the city, along with limited sterilisation and population control efforts, lead to an increase in stray animals in urban areas. In India, stray dog populations in cities have increased exponentially. Currently, Indian cities are known to harbour around 35–40 million stray and feral dogs, an increase of 17% since 2016.19 These dogs have many effects on urban ecosystems. Case study: Human-stray dog interactions In their study, Chandran and Azeez researched whether human activities could be correlated with feral dog populations in Indian metropolitan areas.20 Secondary data were collected from ten fast-growing Indian cities: Ahmedabad, Bengaluru, Chennai, Coimbatore, Delhi, Hyderabad, Mumbai, Pune, Surat and Thiruvananthapuram. Secondary data are pre-existing data collected from various sources, such as government census records, municipal reports, and newspaper articles. Figure A3.14a shows that the human population of a city seems to correlate well with the amount of municipal solid waste in each city. This makes sense, as an increase in the number of humans leads to an increase in the amount of waste those humans generate (assuming waste disposal systems are similar across cities). Figure A3.14b plots the dog population and number of humans bitten by dogs in a city. 
We can predict that these two variables are positively correlated. However, Figure A3.14b shows that this is not the case. For instance, Mumbai with a relatively low dog population (95 000) recorded high dog bite numbers (~80 000), while Hyderabad has the highest number of dogs (500 000) but a low number of dog bites per year (~50 000). Figure A3.14 a. Population and amount of municipal solid waste for ten Indian cities. b. Stray dog population and number of dog bites per year for ten Indian cities. Adapted from Chandran, R and Azeez, PA. 'Stray Dog Menace: Implications and Management'. Economic and Political Weekly 51, no. 48 (2016): 58–65. Why are the stray dog population and number of dog bites in a city not positively correlated? Based on your experience of stray dogs, could you come up with hypotheses? Also think about how data were collected for this study. The authors studied other variables. The strongest correlation seen was that between the amount of municipal solid waste that a city generates and dog bites in a year in that city. To visualise the relationship, compare the blue line for municipal solid waste generation in Figure A3.14a with the blue line for dog bites in Figure A3.14b. The shapes of the two lines are similar. One aspect that is clear from this study is the sheer number of stray dogs in our cities and the potential for human–dog conflict that this brings about. This has led to diverse opinions, with animal activists calling for humane ways to control dog populations and citizens whose lives are affected calling for more stringent measures. The conflict has sometimes led people to take extreme measures such as mass unauthorised culling of stray dog populations by poisoning. Case study: Stray dog–wildlife interactions The impact of stray dogs on wildlife is another serious concern resulting from rising dog populations. Researchers at the Ashoka Trust for Research in Ecology and the Environment (ATREE) have studied the impact of stray dogs on native wildlife species in many different landscapes over several decades. In 2017, ATREE reported on a country-wide survey which showed that dogs attack almost 80 wild species, including 31 threatened species, and four species that are critically endangered according to the IUCN Red List.21 Although 32% of these attacks happened when humans used dogs for illegal hunting, 45% of attacks were by stray dog packs working independently. Jackals, foxes, hares, rodents and livestock are some of the species commonly killed by dogs (Figure A3.15). Similar impacts have been found in other countries like the UK and the USA, where stray cats have been found to cause severe impacts on native bird and invertebrate species. As citizens, it is crucial for us to be aware of how the unregulated population increase of one species could affect other species and the ecosystem as a whole. It is essential for us to be aware of the consequences of the knowledge gap that exists among scientists, social activists, city officials, and the general public. Figure A3.15 The number of vertebrate species observed to be negatively impacted by domestic dogs. Adapted from Home, C, Vanak, A and Bhatnagar, Y. 'Canine Conundrum: Domestic Dogs as an Invasive Species and Their Impacts on Wildlife in India'. Animal Conservation 21, no. 4 (2017): 275–282, doi: 10.1111/acv.12389. Despite the 'stray dog menace' situation in cities, stray dogs could also benefit cities by providing ecosystem services. Each species has an ecological role or function in an ecosystem.
Stray dogs' central role in the urban ecosystem is as a carnivore and a scavenger. If you remove the top predators from this food web, the prey species increase in population. Although stray dogs attack a large variety of species, most of their prey consists of rodents (classified as vermin)22 as well as cats and other organisms. They also play a role as scavengers by eating food discarded with garbage, road-kill, and animals that have died natural deaths. trophic cascade An ecological imbalance produced in an ecosystem by the addition or removal of top predators, which in turn affects the relative populations of species in that ecosystem. This top-down control is known as the trophic cascade regulation of an ecosystem. If a top-level carnivore is eradicated from the system, there is a ripple effect (a cascade of reactions) in the trophic system's lower levels. Hence, vermin species such as rats will increase in a city with extremely unregulated and scattered resources. An increase in vermin may lead to the spread of diseases such as plague. Research Highlight Leopards and the balance of nature in a city Let's investigate the example of one of the fastest-growing metro cities in India and the world, Mumbai. Mumbai has the largest population of stray dogs globally (estimated at 96 000 animals), which has risen due to human tolerance and the hundreds of tons of food waste that accumulate within the city. However, Mumbai is also close to Sanjay Gandhi National Park, which houses approximately 35 leopards that often venture into the city. A very interesting analysis has shown that the top carnivore plays a crucial role in regulating the balance of trophic levels even in an urbanscape. Based on the current statistics on dog bites and rabies, Braczkowski et al.'s study highlighted fascinating findings.23 The researchers showed that leopards (the top carnivore in this urban area) could avert up to 4000 medical treatments and save up to 90 human lives per year. The researchers estimated that in monetary terms, this benefit could reach as high as US$ 200 000 per year, reflecting an important ecosystem service provided by leopards. With this realisation about the intricacies of the food web and how all organisms are interconnected in this web of life, be aware of this 'balance of nature'24 and the role we play in it.25 In this section, we saw how an urban landscape works similarly to a natural ecosystem and how one trophic level might have a cascading effect on the lower trophic levels in a city. Taking the example of stray dogs, we looked at how a top predator might have positive and negative roles within an ecosystem. On the one hand, an increase in unregulated and scattered urban resources (garbage) leads to an increase in stray dog populations with negative impacts on human health and wildlife. On the other hand, as a top carnivore in a city, dogs might be regulating the population of other species, particularly vermin, that might otherwise lead to an imbalance in the ecosystem and adversely affect human health. A3.5 Nature relatedness Scientific tools Bridging science, society and the environment Do we have a relationship with nature? Human biological evolution has followed the same process of natural selection as all other species. As with all species, we are biologically shaped by nature, a fundamental fact which evokes humility. The word 'nature' can have various meanings.26 It was first adopted from the Latin word 'natura', meaning 'to be born'.
Currently, urban ecosystem researchers are trying to understand the extent of humans' inherent relationship with nature through 'nature relatedness'. The concept of nature relatedness draws on Wilson's (1984) biophilia hypothesis, in which he argued that because humans evolved in nature, we have an innate need to connect with the natural world surrounding us.27 Your parents or grandparents will tell you how they connected to nature in their youth. Before the advent of TV and electronic devices, children had a more personal experience with nature, becoming aware of its beauty as well as its dangers. It now feels as if children in the 1970s and 1980s were like the 'last children in the woods' as defined by author Richard Louv.28 Industrialisation of our society increased from the late 1980s. Urban areas expanded, resulting in the fastest rates of land use changes that our country had ever experienced. The changing land use patterns led to nature fast disappearing from our backyards. Many of today's generation of children experience nature second-hand through TV and digital media. Some of these children find nature boring and a waste of their time. Many people have lost their connection with nature. Fast growth and urbanisation have been emerging as global health issues of the twenty-first century. Urbanscapes are becoming hubs of chronic communicable and non-communicable diseases and physical and mental disorders, made worse by living without a connection with nature, and having a lack of compassion towards it. Ancient philosophers, saints and modern gurus have claimed that nature has healing properties. We can now prove their claims scientifically. Recently, researchers have started examining peoples' perspectives and their experiences with nature, and the positive effects interacting with nature has on health outcomes.29 ecological economists People who study the trans-disciplinary field of ecological economics in which the embeddedness of the economy within the ecosystem is acknowledged and limits to economic growth are discussed. Studies in psychology emphasise the importance of nature for our mental wellbeing. Nature in the backyard with more vegetation cover and bird abundance is associated with lower prevalence of depression, anxiety and stress.30 Ecological economists in the UK have estimated the cost of anxiety and mood disorders to be 187.4 billion British pounds per year for Europe alone. Children diagnosed with ADHD (attention deficit hyperactivity disorder) significantly benefit from close contact with nature. In contrast, children with less nature closeness or nature relatedness may suffer from 'nature-deficit disorder'. Researchers have proposed a 'nature dosage' as a prescription or guidance for how frequently people need to engage with nature, and what types or characteristics of nature need to be incorporated into cities for the best health outcomes.31 qualitative index A tool used in social science research in which a single score or measure summarises the values of more than one qualitative variable such as an attitude or emotion. What determines nature relatedness? To understand the nature-deficit problem in urban areas, Nisbet et al. introduced a qualitative index or a self-report scale that measures one's nature related attributes.32 They found that stronger nature relatedness is associated with greater happiness and ecologically sustainable behaviour, while disconnection from nature is detrimental to human and environmental health. 
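Self-report scales like Nisbet's are usually summarised by averaging the 1–5 ratings after 'reversing' negatively worded statements, so that a higher score always means stronger nature relatedness. Below is a minimal sketch of that kind of scoring; which statements are treated as reverse-keyed, and the example ratings, are illustrative assumptions and not part of the scale description given in this chapter:

```python
# Sketch of scoring a 1-5 nature relatedness questionnaire.
# 'reverse_keyed' holds the 0-based positions of negatively worded statements
# (an illustrative choice, not the official scoring key): their ratings are
# flipped so that 5 always means "more nature related".

def nature_relatedness_score(ratings, reverse_keyed):
    adjusted = [6 - rating if i in reverse_keyed else rating
                for i, rating in enumerate(ratings)]
    return sum(adjusted) / len(adjusted)   # mean score between 1 and 5

# Example: ten hypothetical answers; statements 2 and 10 (indices 1 and 9)
# are taken to be negatively worded.
example_ratings = [5, 2, 4, 4, 3, 5, 4, 5, 4, 1]
print(nature_relatedness_score(example_ratings, reverse_keyed={1, 9}))  # 4.3
```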
The theoretical background of nature relatedness draws on Wilson's biophilia hypothesis, where he argued that because humans evolved in nature, we have an innate need to connect with the natural world surrounding us.27 The following are two fun activities to help you understand what Wilson meant. The first activity will help you to evaluate your nature relatedness. The second activity will help you to connect better with nature.

Exercise A3.5 Examining our nature relatedness
Time required: 8–10 minutes

This activity examines the nature-related attributes of your personality. For each of the following statements, please rate the extent to which you agree, using the scale from 1 to 5 as indicated in the table. Please respond as you really feel, rather than how you think 'most people' feel. This will help you to determine where you lie on the scale of nature relatedness. You can test this with yourself first and then with your friends and family to see how nature-related the people around you are. As suggested by the researchers, stronger nature relatedness is associated with greater happiness and ecologically sustainable behaviour, while a disconnection from nature brings about harmful consequences for both human and environmental health.

Nature relatedness scale: 1 = Strongly disagree, 2 = Disagree a little, 3 = Neither agree nor disagree, 4 = Agree a little, 5 = Agree strongly

Statements you need to rate:
I enjoy being outdoors, even in unpleasant weather. _________
Some species are just meant to die out or become extinct. _________
Humans have the right to use natural resources any way we want. _________
My ideal vacation spot would be a remote, wilderness area. _________
I always think about how my actions affect the environment. _________
I enjoy digging in the earth and getting dirt on my hands. _________
My connection to nature and the environment is a part of my spirituality. _________
I am very aware of environmental issues. _________
I take notice of wildlife wherever I am. _________
I don't often go out in nature. _________
Nothing I do will change problems in other places on the planet. _________
I am not separate from nature, but a part of nature. _________
The thought of being deep in the woods, away from civilisation, is frightening. _________
My feelings about nature do not affect how I live my life. _________
Animals, birds and plants should have fewer rights than humans. _________
Even in the middle of the city, I notice nature around me. _________
My relationship to nature is an important part of who I am. _________
Conservation is unnecessary because nature is strong enough to recover from any human impact. _________
The state of non-human species is an indicator of the future for humans. _________
I think a lot about the suffering of animals. _________
I feel very connected to all living things and the Earth. _________

Exercise A3.6 Nature awareness booster
Time required: 20–30 minutes

This activity encourages you to notice and experience nature and increases your observation skills. It involves a nature walk and recording some information. Divide your class or study circle into groups and follow the steps in sequential order.
Take a walk to the nearest natural area that is accessible to you. It could be a park or an open space in town or some farmland.
Take a sheet of paper and pen with you and note what you observe. Or, use the figures below to check whether you come across one or more of these types of organisms or elements.
Note other things that you observe, such as sounds, smells or how things feel when you touch them.
[Figure: Organisms and elements found in nature.]
Take a break for five minutes and present what you experienced to the other groups.
Repeat the activity, but this time with the following questions:

A plant
Where is the plant found? How big is the plant? What is the colour and texture of the soil that it is found in? What is the size and shape of the leaf? Feel the leaf and describe its texture. Are there any flowers on the plant? If yes, what colour?

An insect
What does the bug look like? Where did you see the bug? Were any other bugs/insects present with the bug? How big is the bug?

A bird
What colour is the bird? Where did you see it? What was the bird doing? Was it alone or was it with other birds?

A water body
Where is the water located? Estimate how big the water body is. Is this area shady or not?

A type of habitat
What does the habitat look like? Estimate how many species occupy this habitat. What name would you give this habitat? For example, pond, forest, or anything else?

A caterpillar
How big is the caterpillar? What colour is it? Describe where it is found, for example, tree branch, or leaf or flower. Is the caterpillar doing any activity?

A spider
How big is the spider? What colour is it? Where is it found? On a tree branch, or leaf, or flower? What is the spider doing? Is it seen with other spiders?

Sounds and smells
Describe the sounds you hear, for example, melodious, raucous, repetitive, intrusive. Where do you think they are coming from? Describe the smell. Would you call it a strong smell or a very mild one? Do you like the smell? Where do you think it is coming from?

Discuss your notes from your first and second observations. Do you think you were more observant when you had questions?
Recreating and sharing your observations can often deepen your own understanding of them. Can your group think of a way to express your observations in a creative way? You could perform a short play, paint a mural, write a poem, or even choreograph a dance recital!

In this section we considered whether increasing urbanisation and development and the digital age have meant that people are more disconnected from nature than they used to be. We looked at studies that indicate that living without a connection with nature, and having a lack of compassion towards it, contribute to physical and mental health problems, and that spending time interacting with nature and exploring it can be of great benefit to our health, and to that of the environment. We examined our nature relatedness and carried out an activity to boost our connection with nature.

A3.6 Quiz

Question A3.1 Choose the correct answer(s)
Urban areas can be characterised as ecosystems because:
They are artificial environments made of steel and concrete.
They incorporate all the components that characterise an ecosystem.
They have biotic and abiotic components, nutrient cycles and energy flow.
They do not have large animals.
Feedback: Cities are made from steel and concrete, but they incorporate ecosystem characteristics such as abiotic and biotic components, energy flow and nutrient cycling. An ecosystem is characterised by abiotic and biotic components, energy flow and nutrient cycling, and all these features are present in an urban environment. An urbanised system has biotic and abiotic components, nutrient cycles and energy flow. There is no requirement for an ecosystem to contain large animals.

By 2030, ___% of India's population will live in urban areas.
Feedback: While the Indian population is fast urbanising, we will not have such a great proportion of the population in urban areas. The percentage of Indians living in urban areas was recorded as 31% of the total population in the 2011 census. This will increase by 2030 due to rapid urbanisation. It is expected that over 50% of the population will be urban by 2030. This is considerably lower than the 31% urbanised in the 2011 census.

Estimate what percentage of global land is defined as built-up. (This was not discussed in the chapter.)
Feedback: While urban areas are expanding, only about 1% of global land is classified as built-up. This figure is higher than the global percentage. This figure is much higher than the global percentage. This percentage is closer to the correct answer of 1%, but still too high.

Which of the following processes is NOT a legitimate ecosystem service?
pollination by insects
water purification by soil
fuel in the form of oil and natural gas
changes to primary productivity
Feedback: Pollination by insects makes our agri-intensive societies possible. This is a natural process that regulates the amount of dirt in water. This is a genuine ecosystem service that runs our modern industrialised society. This is part of a natural cycle where plant primary productivity can change due to climatic variations.

The effects of urbanisation on bees have been the following:
The bee population is higher in rural areas than in areas of high pollution.
Bees from heavily polluted areas have shorter lifespans than bees from less polluted areas.
Bees from highly polluted areas have more particulate matter on their bodies than bees from less polluted areas.
There was no correlation between bee populations and the degree of urbanisation.
Feedback: The bee population was indeed higher in rural areas than areas of high pollution. Pollution did adversely affect bee survival. Micrographs confirmed that more particulate matter was present on bees from highly polluted areas than on bees from less polluted areas. The authors found an inverse relationship between bee populations and the degree of urbanisation.

In a trophic cascade, energy lost via heat means that tertiary consumers have access to ___% of the energy trapped by producers.
Feedback: This is the percentage of energy transferred between trophic levels. This is the percentage of energy lost in transfer between trophic levels. This is the percentage of the sun's energy that is converted to food by producers. This is the percentage of energy trapped by producers that reaches tertiary consumers.

Data from the study of dog bites in metropolitan areas by Chandran and Azeez is shown in the table.

Name of metro | Dog bites in a year | Number of organisations doing animal birth control
Mumbai | 90 000 | 4
Delhi | 80 000 | 12
Hyderabad | 52 264 | 2
Ahmedabad | 41 141 | 1
Chennai | 38 454 | 17
Bengaluru | 19 066 | 0
Surat | 14 478 | 0
Pune | 12 731 | 2
Coimbatore | 12 000 | 1
Thiruvananthapuram | 10 000 | 1

Select statements that you think accurately reflect the relationship between the two parameters.
Dog bites in a year correlate positively with the number of organisations doing animal birth control.
Dog bites in a year correlate negatively with the number of organisations doing animal birth control.
There is no correlation between dog bites in a year and the number of organisations doing animal birth control.
Bengaluru and Surat have lower dog bites in a year because they have no organisations doing animal birth control.
Feedback: The data show no correlation between the two parameters. The data show no pattern associating number of dog bites with level of dog birth control. The data do not support this because Coimbatore and Thiruvananthapuram have lower dog bites and each has one organisation.

Biophilia as defined by the ecologist Wilson has the following characteristics:
It is a concept that has been experimentally proven.
It is an affinity that all human beings have with the natural world.
It is innate and genetically determined.
It is a hypothesis about the natural world.
Feedback: Biophilia is a hypothesis put forward by Wilson, that is, it has not been experimentally proven. This is the central concept of Wilson's hypothesis. Wilson believes that this is an innate property of human beings. It is still a hypothesis.

A3.7 References

1. United Nations, World Urbanization Prospects 2018: Highlights (ST/ESA/SER.A/421), Department of Economic and Social Affairs, Population Division (2019). ↩
2. United Nations, World Urbanization Prospects: The 2014 Revision, Highlights (ST/ESA/SER.A/352), Department of Economic and Social Affairs, Population Division (2014). ↩
3. Nagendra, H, Sudhira, HS, Katti, M and Schewenius, M, 'Sub-Regional Assessment of India: Effects of Urbanization on Land Use, Biodiversity and Ecosystem Services', in Urbanization, Biodiversity and Ecosystem Services: Challenges and Opportunities: A Global Assessment, eds T Elmqvist et al., pp. 65–74 (Dordrecht: Springer, 2013), doi: 10.1007/978-94-007-7088-1_6. ↩
4. McDonnell, MJ and Hahs, AK, 'Adaptation and Adaptedness of Organisms to Urban Environments', Annual Review of Ecology, Evolution, and Systematics 46, no. 1 (2015): 261–280, doi: 10.1146/annurev-ecolsys-112414-054258. ↩
5. Nagendra, H, Nature in the City: Bengaluru in the Past, Present, and Future (Oxford University Press, 2016), doi: 10.1093/acprof:oso/9780199465927.001.0001. ↩
6. World Health Organization, Urban Planning, Environment and Health: From Evidence to Policy Action (2010), accessed 24 February 2021. ↩
7. Kennedy, C, Cuddihy, J and Engel‐Yan, J, 'The Changing Metabolism of Cities', Journal of Industrial Ecology 11, no. 2 (2007): 43–59, doi: 10.1162/jie.2007.1107. ↩
8. Reddy, BS, 'Metabolism of Mumbai: Expectations, Impasse and the Need for a New Beginning', Indira Gandhi Institute of Development Research, Mumbai (January 2013). ↩
9. The Economic Times, 'Surat to Be World's Fastest Growing City during 2019–35: Report', 7 December 2018, accessed 15 April 2021. ↩
10. Mukherjee, F and Singh, D, 'Assessing Land Use-Land Cover Change and its Impact on Land Surface Temperature Using LANDSAT Data: A Comparison of Two Urban Areas in India', Earth Systems and Environment 4, no. 2 (2020): 385–407, doi: 10.1007/s41748-020-00155-9. ↩
11. Paul, BK and Rashid, H, 'Land Use Change and Coastal Management', in Climatic Hazards in Coastal Bangladesh, eds BK Paul and H Rashid, pp. 183–207 (Boston: Butterworth-Heinemann, 2017), doi: 10.1016/B978-0-12-805276-1.00006-5. ↩
12. Pramanik, S and Punia, M, 'Land Use/Land Cover Change and Surface Urban Heat Island Intensity: Source–Sink Landscape-Based Study in Delhi, India', Environment, Development and Sustainability 22, no. 8 (2020): 7331–7356, doi: 10.1007/s10668-019-00515-0. ↩
13. Daily, GC et al., 'The Value of Nature and the Nature of Value', Science 289, no. 5478 (2000): 395–396, doi: 10.1126/science.289.5478.395. ↩
14. Das, M and Das, A, 'Dynamics of Urbanization and its Impact on Urban Ecosystem Services (UESs): A Study of a Medium Size Town of West Bengal, Eastern India', Journal of Urban Management 8, no.
3 (2019): 420–434, doi: 10.1016/j.jum.2019.03.002. ↩
15. Luniak, M, 'Synurbization: Adaptation of Animal Wildlife to Urban Development', in Proceedings of the 4th International Symposium on Urban Wildlife Conservation, pp. 50–55 (University of Arizona, 2004). ↩
16. Kuddus, M, Kumari, R and Ramteke, P, 'Studies on Air Pollution Tolerance of Selected Plants in Allahabad City, India', Journal of Environmental Research and Management 2, no. 3 (2011): 42–46. ↩
17. Singhal, RK, Kumar, M and Bose, B, 'Eco-Physiological Responses of Artificial Night Light Pollution in Plants', Russian Journal of Plant Physiology 66, no. 2 (2019): 190–202, doi: 10.1134/S1021443719020134. ↩
18. Thimmegowda, GG et al., 'A Field-Based Quantitative Analysis of Sublethal Effects of Air Pollution on Pollinators', Proceedings of the National Academy of Sciences 117, no. 34 (2020): 20653–20661, doi: 10.1073/pnas.2009074117. ↩
19. Times of India, 'Why Stray Dogs Divide India like Nothing Else', 15 March 2021, accessed 15 April 2021. ↩
20. Chandran, R and Azeez, PA, 'Stray Dog Menace: Implications and Management', Economic and Political Weekly 51, no. 48 (2016): 58–65. ↩
21. Home, C, Vanak, A and Bhatnagar, Y, 'Canine Conundrum: Domestic Dogs as an Invasive Species and Their Impacts on Wildlife in India', Animal Conservation 21, no. 4 (2017): 275–282, doi: 10.1111/acv.12389. ↩
22. Forest, D, 'Street Dogs Keep the Developing World from Going to the Rats', Animal People (1 July 2001). ↩
23. Braczkowski, AR, O'Bryan, CJ, Stringer, MJ, Watson, JEM, Possingham, HP and Beyer, HL, 'Leopards Provide Public Health Benefits in Mumbai, India', Frontiers in Ecology and the Environment 16, no. 3 (2018): 176–182, doi: 10.1002/fee.1776. ↩
24. Pimm, SL, The Balance of Nature? Ecological Issues in the Conservation of Species and Communities (Chicago: University of Chicago Press, 1991). ↩
25. Simberloff, D, 'The "Balance of Nature": Evolution of a Panchreston', PLOS Biology 12, no. 10 (2014): e1001963, doi: 10.1371/journal.pbio.1001963. ↩
26. Ducarme, F and Couvet, D, 'What Does "Nature" Mean?', Palgrave Communications 6, no. 14 (2020), doi: 10.1057/s41599-020-0390-y. ↩
27. Wilson, EO, Biophilia (Cambridge: Harvard University Press, 1984). ↩ ↩2
28. Louv, R, Last Child in the Woods: Saving Our Children from Nature-Deficit Disorder (London: Atlantic Books, 2013). ↩
29. Burkhardt, MA, 'Healing Relationships with Nature', Complementary Therapies in Nursing and Midwifery 6, no. 1 (2000): 35–40, doi: 10.1054/ctnm.1999.0438. ↩
30. Cox, DTC et al., 'Doses of Neighborhood Nature: The Benefits for Mental Health of Living with Nature', BioScience 67, no. 2 (2017): 147–155, doi: 10.1093/biosci/biw173. ↩
31. Shanahan, DF, Bush, R, Gaston, KJ, Lin, BB, Dean, J, Barber, E and Fuller, RA, 'Health Benefits from Nature Experiences Depend on Dose', Scientific Reports 6, no. 1 (2016): 28551, doi: 10.1038/srep28551. ↩
32. Nisbet, EK, Zelenski, JM and Murphy, SA, 'The Nature Relatedness Scale: Linking Individuals' Connection with Nature to Environmental Concern and Behavior', Environment and Behavior 41, no. 5 (2009): 715–740, doi: 10.1177/0013916508318748. ↩
Asian Pacific Journal of Cancer Prevention Asian Pacific Organization for Cancer Prevention (아시아태평양암예방학회) The Asian Pacific Journal of Cancer Prevention is a monthly electronic journal publishing papers in all areas of cancer control. It is indexed on PubMed (impact factor for 2014: 2.514) and the scope is wide-ranging: including descriptive, analytical and molecular epidemiology; experimental and clinical histopathology/biology of preneoplasias and early neoplasias; assessment of risk and beneficial factors; experimental and clinical trials of primary preventive measures/agents; screening approaches and secondary prevention; clinical epidemiology; and all aspects of cancer prevention education. All of the papers published are freely available as pdf files downloadable from www.apjcpcontrol.org, directly or through PubMed, or obtainable from the first authors. The APJCP is financially supported by the UICC Asian Regional Office and the National Cancer Center of Korea, where the Editorial Office is housed. Volume 16, Issue 3 (2015) Clinicopathological Factors and Gastric Cancer Prognosis in the Iranian Population: a Meta-analysis Somi, Mohammad Hossein;Ghojazadeh, Morteza;Bagheri, Masood;Tahamtani, Taraneh 853 https://doi.org/10.7314/APJCP.2015.16.3.853 Background: Gastric cancer is the most common cancer in the Iranian population. The aim of this study was to determine the effect of clinicopathological factors on prognosis by meta-analysis. Materials and Methods: A literature search was conducted using MEDLINE, EMBASE, the Cochrane Library and Persian databases for studies published until February 2011. Prospective follow-up studies with multivariate analysis of overall survival of patients with gastric cancer were included in this review. The data were analyzed using CMA 2. Publication bias was checked by funnel plot and data are shown as forest plots. Results: From a total of 63 articles, 14 retrospective studies which examined 5 prognostic factors and involved 10,500 patients were included. Tumor size (>35mm) was the main significant factor predicting an unfavorable prognosis for patients with gastric cancer (RR=1.829, p<0.001), followed by presence of distant metastases (RR=1.607, p<0.001), poor differentiation (RR=1.408, p<0.001) and male sex (RR=1.194, p<0.001). Lymph node metastases (RR=1.058, p=0.698) and moderate differentiation (RR=0.836, p=0.043) were not statistically significant as prognostic factors. Conclusions: This meta-analysis suggests that tumor size >35mm, poor differentiation, presence of distant metastasis and male gender are strongly associated with a poor prognosis in Iranian patients with gastric cancer. Single Life Time Cytological Screening in High Risk Women as an Economical and Feasible Approach to Control Cervical Cancer in Developing Countries Like India Misra, Jata Shankar;Srivastava, Anand Narain;Das, Vinita 859 In view of funding crunches and inadequate manpower in cytology in developing countries like India, single lifetime screening for cervical cancer has been suggested. In this study, an attempt was made to target single lifetime screening at the most vulnerable groups of women, so as to make it more effective for early detection. Cytological data were derived from the ongoing routine cervical cytology screening program for women attending the Gynaecology Out Patient Department of Queen Mary's Hospital of K.G. Medical University, Lucknow, India during a span of 35 years (April 1971 - December 2005).
Cervical smears in a total of 38,256 women were cytologically evaluated. The frequencies of squamous intraepithelial lesions of cervix (SIL) and carcinoma cervix were found to be 7.0% and 0.6%, respectively, in the series. Predisposing factors related to cervical carcinogenesis were analyzed in detail to establish the most vulnerable groups of women for single life time screening. The incidence of SIL and carcinoma cervix was found to be maximal in women above the age of 40 years irrespective of parity and in multiparous women (with three or more children) irrespective of age. The incidence of cervical cytopathologies was significantly higher in symptomatic women, the frequency of SIL being alarmingly higher in women complaining of contact bleeding and that of carcinoma cervix in older women with postmenopausal bleeding. It is consequently felt that single life time screening must include the three groups of women delineated above. Such selective screening appears to be the most economical, cost effective and feasible approach to affordably control the menace of cervical cancer in developing countries like India. Anti-cancer Properties of a Sesquiterpene Lactone-bearing Fraction from Artemisia khorassanica Rabe, Shahrzad Taghizadeh;Emami, Seyed Ahmad;Iranshahi, Mehrdad;Rastin, Maryam;Tabasi, Nafise;Mahmoudi, Mahmoud 863 Background: Artemisia species are important medicinal plants throughout the world. The present in vitro study, using a sesquiterpene lactone-bearing fraction prepared from Artemisia khorassanica (SLAK), sought to investigate anti-cancer properties of this plant and elucidate potential underlying mechanisms for the effects. Materials and Methods: Anti-cancer potential was evaluated by toxicity against human melanoma and fibroblast cell lines. To explore the involved pathways, pattern of any cell death was determined using annexin-V/PI staining and also the expression of Bax and cytochrome c was investigated by Western blotting. Results: The results showed that SLAK selectively caused a concentration-related inhibition of proliferation of melanoma cells that was associated with remarkable increase in early events and over-expression of both Bax and cytochrome c. Conclusions: The current experiment indicates that Artemisia may have anti-cancer activity. We anticipate that the ingredients may be employed as therapeutic candidates for melanoma. Microvessel Density as a Prognostic Factor in Ovarian Cancer: a Systematic Review and Meta-analysis He, Lei;Wang, Qiao;Zhao, Xia 869 Background: The prognostic value of microvessel density (MVD), reflecting angiogenesis, detected in ovarian cancer is currently controversial. Here we performed a meta-analysis of all relevant eligible studies. Materials and Methods: A comprehensive search of online PubMed, Medline, EMBASE and Sciencedirect was performed to identify all related articles. The search strategy was designed as 'microvessel density', 'ovarian cancer', 'ovarian neoplasm', 'CD34' and 'angiogenesis'. Results: The studies were categorized by author/year, number of patients, FIGO stage, histology, cutoff value for microvessel density, types of survival analysis, methods of hazard rations (HR) estimation, HR and its 95% confidence interval (CI). Combined hazard ratios suggested that high MVD was associated with poor overall survival (OS) and progression-free survival (PFS), with HR and 95% CIs of 1.84 (1.33-2.35) and 1.36 (1.06-1.66), respectively. 
Subgroup analysis showed that high MVD detected by CD34 was relevant for OS [HR=1.67 (1.36-2.35)], but not MVD detected with other antibodies [HR=2.11 (0.90-3.31)]. Another subgroup analysis indicated that high MVD in patients without pre-chemotherapy, but not with pre-chemotherapy, was associated with OS [HR=1.88 (1.59-2.18) and HR=1.70 (-0.18-3.59)]. Conclusions: The OS and PFS with high MVD were significantly poorer than with low MVD in ovarian cancer patients. However, high MVD detected by CD34 seems to be more associated with survival for patients without pre-chemotherapy. Knockdown of Med19 Suppresses Proliferation and Enhances Chemo-sensitivity to Cisplatin in Non-small Cell Lung Cancer Cells Wei, Ling;Wang, Xing-Wu;Sun, Ju-Jie;Lv, Li-Yan;Xie, Li;Song, Xian-Rang 875 Mediator 19 (Med19) is a component of the mediator complex, which is a coactivator for DNA-binding factors that activate transcription via RNA polymerase II. Accumulating evidence has shown that Med19 plays important roles in cancer cell proliferation and tumorigenesis. The involvement of Med19 in sensitivity to the chemotherapeutic agent cisplatin was here investigated. We employed RNA interference to reduce Med19 expression in human non-small cell lung cancer (NSCLC) cell lines and analyzed their phenotypic changes. The results showed that after Med19 siRNA transfection, expression of Med19 mRNA and protein was dramatically reduced (p<0.05). Meanwhile, impaired growth potential, arrested cell cycle at G0/G1 phase and enhanced sensitivity to cisplatin were exhibited. Apoptosis and caspase-3 activity were increased when cells were exposed to Med19 siRNA and/or cisplatin. The present findings suggest that Med19 facilitates tumorigenic properties of NSCLC cells and knockdown of Med19 may be a rational therapeutic tool for lung cancer cisplatin sensitization. Prognostic Factors, Treatment and Outcome in a Turkish Population with Endometrial Stromal Sarcoma Donertas, Ayla;Nayki, Umit;Nayki, Cenk;Ulug, Pasa;Gultekin, Emre;Yildirim, Yusuf 881 Purpose: To analyze treatment modalities and prognostic factors in patients with Stage I-II endometrial stromal sarcoma (ESS). Materials and Methods: Twenty-four patients (nineteen with low-grade ESS [LGESS] and five with high-grade ESS [HGESS]) were assessed retrospectively in terms of general characteristics, prognostic factors, treatment methods and survival. Results: Twenty patients were at Stage I and three were at Stage II. The stage of one patient could not be determined. With respect to age and comorbidity, no statistically significant difference in disease-free survival (DFS) was found (p=0.990; p=0.995). However, DFS was significantly shorter in Stage II than Stage I patients (p=0.002). It was also significantly shorter in HGESS patients than in LGESS patients (p=0.000). There were no statistically significant differences in the overall survival (OVS) times of patients with respect to age at diagnosis and comorbid disease (p=0.905; p=0.979), but OVS was significantly shorter in patients with HGESS (p=0.00) and Stage II disease (p=0.001). No statistically significant difference was found with respect to OVS between patients who received radiotherapy (RT) and those who did not receive RT (p=0.055). It was not statistically possible to include other treatment modalities in the analysis because of the small sample size. Conclusions: Grade and stage of a tumour were found to be the most important prognostic factors.
It was not possible to determine the optimal surgical method and the effect of adjuvant treatment since the number of cases was insufficient. Matrix Metalloproteinase-2 -1306 C>T Gene Polymorphism is Associated with Reduced Risk of Cancer: a Meta-analysis Haque, Shafiul;Akhter, Naseem;Lohani, Mohtashim;Ali, Arif;Mandal, Raju K. 889 Matrix metalloproteinase-2 (MMP2) is an endopeptidase, mainly responsible for degradation of extracellular matrix components, which plays an important role in cancer disease. A single nucleotide polymorphism (SNP) at -1306 disrupts a Sp1-type promoter site. The results from the published studies on the association between MMP2 -1306 C>T polymorphism and cancer risk are contradictory and inconclusive. In the present study, a meta-analysis was therefore performed to evaluate the strength of any association between the MMP2 -1306 C>T polymorphism and risk of cancer. We searched all eligible studies published on association between MMP2 -1306 C>T polymorphism and cancer risk in PubMed (Medline), EMBASE and Google Scholar online web databases until December 2013. Genotype distribution data were collected to calculate the pooled odds ratios (ORs) and 95% confidence intervals (95%CIs) to examine the strength of the association. A total of 8,590 cancer cases and 9,601 controls were included from twenty nine eligible case control studies. Overall pooled analysis suggested significantly reduced risk associated with heterozygous genotype (CT vs CC: OR=0.758, 95%CI=0.637 to 0.902, p=0.002) and dominant model (TT+CT vs CC: OR=0.816, 95%CI=0.678 to 0.982, p=0.032) genetic models. However, allelic (T vs C: OR=0.882, 95%CI=0.738 to 1.055, p=0.169), homozygous (TT vs CC: OR=1.185, 95%CI=0.825 to 1.700, p=0.358) and recessive (TT vs CC+CT: OR=1.268, 95%CI=0.897 to 1.793, p=0.179) models did not show any risk. No evidence of publication bias was detected during the analysis. The results of present meta-analysis suggest that the MMP2 -1306 C>T polymorphism is significantly associated with reduced risk of cancer. However, further studies with consideration of different populations will be required to evaluate this relationship in more detail. Association of the PTEN IVS4 (rs3830675) Gene Polymorphism with Reduced Risk of Cancer: Evidence from a Meta-analysis Mandal, Raju K.;Akhter, Naseem;Irshad, Mohammad;Panda, Aditya K.;Ali, Arif;Haque, Shafiul 897 PTEN (phosphatase and tensin homologue), as a tumor suppressor gene, plays a significant role in regulating cell growth, proliferation, and apoptosis. Results from published studies for association between the PTEN IVS4 I/D (rs3830675) polymorphism and cancer risk are inconsistent and inconclusive. We therefore conducted a meta-analysis to evaluate the potential association between PTEN IVS4 I/D polymorphism and risk of cancer in detail. We searched PubMed (Medline) and EMBASE web databases to cover all relevant studies published until December 2013. The meta-analysis was carried out and pooled odds ratios (ORs) and 95% confidence intervals (95%CIs) were used to appraise the strength of association. A total of 1,993 confirmed cancer cases and 3,200 controls were included from six eligible case-control studies. 
Results from overall pooled analysis suggested a significant effect of the PTEN IVS4 I/D polymorphism and cancer risk in all genetic models, i.e., allele (I vs D: OR=0.743, 95%CI=0.648 to 0.852, p=0.001), homozygous (II vs DD: OR=0.673, 95%CI=0.555 to 0.816, p=0.001), heterozygous (ID vs DD: OR=0.641, 95%CI=0.489 to 0.840, p=0.001), dominant (II+ID vs DD: OR=0.626, 95%CI=0.489 to 0.802, p=0.001) and recessive (II vs DD+ID: OR=0.749, 95%CI=0.631 to 0.889, p=0.001). Significant publication bias was detected during the analysis. The present meta-analysis suggests that the PTEN IVS4 I/D polymorphism is significantly associated with reduced risk of cancer. However, future larger studies with other groups of populations are warranted to clarify this association. Effect of Hormone Therapy on Long-term Outcomes of Patients with Human Epidermal Growth Factor Receptor 2-and Hormone Receptor-Positive Metastatic Breast Cancer: Real World Experience in China Du, Feng;Yuan, Peng;Wang, Jia-Yu;Ma, Fei;Fan, Ying;Luo, Yang;Xu, Bing-He 903 Background: Among human epidermal growth factor receptor 2 (HER2)-positive breast cancer, more than half are also hormone receptor (HR)-positive. Although HR is a predictive factor for the efficacy of hormone therapy, there are still some uncertainties in regard to the effects on patients with HR-positive and HER2-positive metastatic breast cancers due to the potential resistance to hormone therapy caused by co-expression of HR and HER2. There are no clinical trials directly comparing the efficacy of hormonal therapy with chemotherapy. Materials and Methods: To examine the real-world effect of hormone therapy on patients with HR-positive and HER2-positive metastatic breast cancers, a cross-sectional study of a representative sample of the Chinese population was conducted. The study included 113 patients who received first-line and second-line palliative treatment between 2005 and 2010 in the Cancer Institute and Hospital, Chinese Academy of Medical Science. The effect of hormone therapy on overall survival (OS) was studied. Results: The patients who received hormone therapy (n=51) had better overall survival in contrast to those who received chemotherapy with anti-HER2 therapy (n=62) in first- or second-line treatment. The difference was of borderline statistical significance (51.8m vs 31.9m, p=0.065). In addition, the effect of hormone therapy did not differ significantly with other prognostic factors, including age (${\leq}50$ years or >50 years), disease free survival (${\geq}2$ years or < 2 years) and site of metastasis (visceral or bone/soft tissue). On multivariate analysis, administration of hormone therapy was associated with a trend toward a favorable prognosis (p=0.148, HR=0.693, 95%CI 0.422-1.139). Age more than 50 years was the sole independent harmful prognostic factor (p<0.001, HR=2.797, 95%CI 1.676-4.668). Conclusions: Our data suggest that hormonel therapy may improve outcomes of the patients with ER-positive and HER2-positive metastatic breast cancer. Vitamin B2 Intake and the Risk of Colorectal Cancer: a Meta-Analysis of Observational Studies Liu, Yan;Yu, Qiu-Yan;Zhu, Zhen-Li;Tang, Ping-Yi;Li, Ke 909 Background: A systematic review and meta-analysis of observational studies evaluated the association of intake of vitamin B2 with the incidence of colorectal cancer. Materials and Methods: Relevant studies were identified in MEDLINE via PubMed (published up to April 2014). 
We extracted data from articles on vitamin B2 and used multivariable-adjusted odds ratio (OR) and a random-effects model for analysis. Results: We found 8 articles meeting the inclusion criteria (4 of cohort studies and 4 of case-control studies) and a total of 7,750 colorectal cancer cases were included in this meta-analysis. The multivariable-adjusted OR for pooled studies for the association of the highest versus lowest vitamin B2 intake and the risk of colorectal cancer was 0.83 (95% confidence interval [95%CI]:0.75,0.91). We performed a sensitivity analysis for vitamin B2. If we omitted the study by Vecchia et al., the pooled OR was 0.86 (95%CI, 0.77,0.96). Conclusions: This is the first meta-analysis to study links between vitamin B2 and colorectal cancer. We found vitamin B2 intake was inversely associated with risk of colorectal cancer. However, further research and large sample studies need to be conducted to better validate the result. Evaluation of Antitumor and Antioxidant Activity of Sargassum tenerrimum against Ehrlich Ascites Carcinoma in Mice Patra, Satyajit;Muthuraman, Meenakshi Sundaram;Prabhu, A.T.J. Ram;Priyadharshini, R. Ramya;Parthiban, Sujitha 915 Context: In the last half century, discovering, developing and introducing of clinical agents from marine sources have seen great successes, with examples including the anti-cancer compound trabectedin. However, with increasing need for new anticancer drugs, further exploration for novel compounds from marine organism sources is strongly justified. Objective: The major aim of this study was to evaluate the antitumor and antioxidant potential of Sargassum tenerrimum J.Agardh (Sargassaceae) on Ehrlich ascites carcinoma (EAC) in Swiss albino mice. Materials and Methods: An ethanol extract of S. tenerrimum (EEST) from whole algae was used to evaluate cytotoxicity followed by in vivo assessment of toxicity, using biochemical parameters including hepatic and non-hepatic enzymes. Antioxidant properties were examined in animals bearing EAC treated with daily oral administration of 100-300 mg/kg extract suspension. Results: Antitumor effects of EEST in EAC bearing mice was observed with LD50 1815 mg/kg. Parameters like body weight, tumor volume, packed cell volume, tumor cell count, mean survival time and increase in life span in animals in the EAC bearing animals treated with EEST 300 mg/kg was comparable with control group. Significant differences were also seen with changes in total protein content, hepatic enzymes contents, MDA level, and free radical scavenging enzymes in untreated vs. EEST treated group animals. Conclusions: Evaluation of antioxidant enzymes and hepatic enzymes in the EAC animal model treated with EEST exhibited similar effects as the positive control drug 5-flurouracil. S. tenerrimum extracts contain effective antioxidants with significant antitumor activity. Could the Platelet-to-Lymphocyte Ratio be a Novel Marker for Predicting Invasiveness of Cervical Pathologies? Kose, Mesut;Celik, Fatih;Kose, Seda Kayman;Arioz, Dagistan Tolga;Yilmazer, Mehmet 923 Purpose: To determine whether the preoperative platelet to lymphocyte ratio (PLR) could predict invasiveness of cervical pathologies. Materials and Methods: Patients with preinvasive and invasive diseases were reviewed retrospectively, over a nine-year period, 2005-2014. The pathological records and completed blood counts of the patients were collected and recorded in the SPSS program. Patients were divided in two groups, preinvasive and invasive. 
Results: The median PLR was significantly higher in the invasive group than in the preinvasive group (p=0.03). There was a correlation between invasion of cervical cancer and white blood cell count, red cell distributing width (RDW), neutrophil-lymphocyte ratio (NLR), and PLR. Conclusions: This study showed that patients with uterine cervical cancer may present with leukocytosis, increased RDW, NLR and PLR. These cheap and easily available parameters, especially PLR, may provide useful information about the invasiveness of cervical lesions. Level of Awareness of Cervical and Breast Cancer Risk Factors and Safe Practices among College Teachers of Different States in India: Do Awareness Programmes Have an Impact on Adoption of Safe Practices? Shankar, Abhishek;Rath, G.K.;Roy, Shubham;Malik, Abhidha;Bhandari, Ruchir;Kishor, Kunal;Barnwal, Keshav;Upadyaya, Sneha;Srivastava, Vivek;Singh, Rajan 927 Background: Breast and cervical cancers are the most common causes of cancer mortality among women in India, but actually they are largely preventable diseases. Although early detection is the only way to reduce morbidity and mortality, there are limited data on breast and cervical cancer knowledge, safe practices and attitudes of teachers in India. The purpose of this study is to assess the level of awareness and impact of awareness programs in adoption of safe practices in prevention and early detection. Materials and Methods: This assessment was part of a pink chain campaign on cancer awareness. During cancer awareness events in 2011 at various women colleges in different parts in India, a pre-test related to cervical cancer and breast cancer was followed by an awareness program. Post-tests using the same questionnaire were conducted at the end of the interactive session, at 6 months and 1 year. Results: A total of 156 out of 182 teachers participated in the study (overall response rate was 85.7 %). Mean age of the study population was 42.4 years (range- 28-59 yrs). There was a significant increase in level of knowledge regarding cervical and breast cancer at 6 months and this was sustained at 1 year. Adoption of breast self examination (BSE) was significantly more frequent in comparison to CBE, mammography and the Pap test. Magazines and newspapers were sources for knowledge regarding screening tests for breast cancer in more than 60% of teachers where as more than 75% were educated by doctors regarding the Pap test. Post awareness at 6 months and 1 year, there was a significant change in alcohol and smoking habits. Major reasons for not doing screening test were found to be ignorance (50%), lethargic attitude (44.8%) and lack of time (34.6%). Conclusions: Level of knowledge of breast cancer risk factors, symptoms and screening methods was high as compared to cervical cancer. There was a significant increase in level of knowledge regarding cervical and breast cancer at 6 months and this was sustained at 1 year. Adoption of BSE was significantly greater in comparison to CBE, mammography and the Pap test. To inculcate safe practices in lifestyle of people, awareness programmes such as pink chain campaign should be conducted more widely and frequently. Prognostic Significance of Preoperative Anemia, Leukocytosis and Thrombocytosis in Chinese Women with Epithelial Ovarian Cancer Chen, Ying;Zhang, Lei;Liu, Wen-Xin;Liu, Xiang-Yu 933 Malignant tumors are often accompanied by increased risk of hematological abnormalities. 
However, few studies have reported any prognostic impact of preoperative thrombocytosis, leukocytosis and anemia in epithelial ovarian cancer (EOC). This study aimed to investigate preoperative hematological parameters for anemia, leukocytosis and thrombocytosis in relation to established prognostic factors and survival in EOC cases. A total of 816 Chinese women treated for EOC were retrospectively included in the study, focusing on the relationship between preoperative hemoglobin, leukocyte and platelet counts, and a panel of clinicopathologic characteristics and outcome. Preoperative anemia was present in 13.4%, leukocytosis in 16.7% and thrombocytosis in 22.8%. Additionally, EOC patients with low differentiation grade, advanced stage, lymph node (LN) metastasis, residual disease ${\geq}1cm$, ascites volume >1,000ml, serum cancer antigen 125 (CA125) >675U/ml, and disease recurrence had a higher prevalence of preoperative anemia, leukocytosis and thrombocytosis (all p<0.05). Moreover, older or postmenopausal EOC patients had a higher prevalence of thrombocytosis (28.7% vs 17.3% and 26.0% vs 17.7%, respectively). Furthermore, in a Cox proportional hazard model, thrombocytosis was an independent factor for progression-free survival (PFS) and overall survival (OS) (p<0.001). In conclusion, preoperative anemia, leukocytosis or thrombocytosis in EOC patients is closely associated with a more malignant disease phenotype and poorer prognosis. Significantly, thrombocytosis may independently predict disease-specific survival for EOC patients. Semaphoring mAb: a New Guide in RIT in Inhibiting the Proliferation of Human Skin Carcinoma Liu, Yuan;Ma, Jing-Yue;Luo, Su-Ju;Sun, Chen-Wei;Shao, Li-Li;Liu, Quan-Zhong 941 Semaphorin is a transmembrane receptor which participates in many cytokine-mediated signal pathways that are closely related to the angiogenesis, occurrence and development of carcinoma. The present study was designed to assess the effect of monoclonal antibody (mAb) guided radioimmunotherapy (RIT) on skin carcinoma and investigate the potential mechanisms. Semaphorin mAb was obtained from mice (Balb/c) and purified with an rProtein A column; purity, concentration and activity were tested with SDS-PAGE and indirect ELISA; specificity and expression on the cutaneous carcinoma cell line and tissue were tested by Western blotting; morphology change was assessed by microscopy. MTT assays and colony inhibition tests were carried out to test the influence on the proliferation of tumor cells; Western blotting was also carried out for expression of apoptosis-associated (caspase-3, Bax, Bcl-2) and proliferation-related (PI3K, p-Akt, Akt, p-ERK1/2, ERK1/2) proteins and to analyse changes in signal pathways (PI3K/Akt and MEK/ERK). The purity of the purified semaphorin mAb was 96.5% and the titer was about $1{\times}10^6$. Western blotting showed that the semaphorin mAb gave specific binding bands with semaphorin b1b2 protein, B16F10 cells and A431 cells at 39KDa, 100KDa and 130KDa, respectively. Positive expression was detected in both the cutaneous carcinoma cell line and tissue, and it was mostly located in cell membranes. MTT assays revealed dose-related and time-related inhibitory effects of the semaphorin mAb on A431 and B16F10 cells. Colony inhibition tests also showed dose-related inhibitory effects. Western blotting demonstrated the expression of apoptosis- and proliferation-related proteins and changes in signal pathways.
In conclusion, we demonstrated that semaphorin is highly expressed on tumor cell surfaces and that RIT with a semaphorin mAb has an effect in inhibiting proliferation and accelerating apoptosis of tumor cells. Oral Concentrated Grape Juice Suppresses Expression of NF-kappa B, TNF-α and iNOS in Experimentally Induced Colorectal Carcinogenesis in Wistar Rats de Lima Pazine Campanholo, Vanessa Maria;Silva, Roseane Mendes;Silva, Tiago Donizetti;Neto, Ricardo Artigiani;Paiotti, Ana Paula Ribeiro;Ribeiro, Daniel Araki;Forones, Nora Manoukian 947 The aim of this study was to evaluate the effects of grape juice on colon carcinogenesis induced by azoxymethane (AOM) and expression of NF-kB, iNOS and TNF-${\alpha}$. Methods: Forty male Wistar rats were divided into 7 groups: G1, control; G2, 15 mg/kg AOM; G3, 1% grape juice 2 weeks before AOM; G4, 2% grape juice 2 weeks before AOM; G5, 1% grape juice 4 weeks after AOM; G6, 2% grape juice 4 weeks after AOM; G7, 2% grape juice without AOM. Histological changes and aberrant crypt foci (ACF) were studied, while RNA expression of NF-kB, TNF-${\alpha}$ and iNOS was evaluated by qPCR. Results: The number of ACF was higher in G2, and G4 presented a smaller number of crypts per focus than G5 (p=0.009) and G6. Small ACF (1-3) were more frequent in G4 compared to G2, G5 and G6 (p=0.009, p=0.009 and p=0.041, respectively). RNA expression of NF-kB was lower in G3 and G4 compared to G2 (p=0.004 and p=0.002, respectively). A positive correlation was observed between TNF-${\alpha}$ and NF-kB gene expression (p=0.002). In conclusion, the administration of 2% grape juice before AOM reduced crypt multiplicity, attenuating carcinogenesis. Lower expression of NF-kB was observed in animals exposed to grape juice for a longer period of time, regardless of concentration. KRT13, FAIM2 and CYP2W1 mRNA Expression in Oral Squamous Cell Carcinoma Patients with Risk Habits Hartanto, Firstine Kelsi;Karen-Ng, Lee Peng;Vincent-Chong, Vui King;Ismail, Siti Mazlipah;Mustafa, Wan Mahadzir Wan;Abraham, Mannil Thomas;Tay, Keng Kiong;Zain, Rosnah Binti 953 Background: Expression of KRT13, FAIM2 and CYP2W1 appears to be influenced by risk habits, thus exploring the associations of these genes in oral squamous cell cancer (OSCC) with risk habits, clinico-pathological parameters and patient survival may be beneficial in identifying relevant biomarkers with different oncogenic pathways. Materials and Methods: cDNAs from 41 OSCC samples with and without risk habits were included in this study. Quantitative real-time PCR was used to analyze KRT13, FAIM2 and CYP2W1 in OSCC. The housekeeping gene (GAPDH) was used as an endogenous control. Results: Of the 41 OSCC samples, KRT13 was down-regulated in 40 samples (97.6%), while FAIM2 and CYP2W1 were down-regulated in 61.0% and 48.8%, respectively. Overall, there were no associations between KRT13, FAIM2 and CYP2W1 expression and risk habits, selected socio-demographic and clinico-pathological parameters, or patient survival. Conclusions: Although this study was unable to show significance, there were some tendencies in the associations of KRT13, FAIM2 and CYP2W1 expression in OSCC with selected clinico-pathological parameters and survival.
Estimation of Leucine Aminopeptidase and 5-Nucleotidase Increases Alpha-Fetoprotein Sensitivity in Human Hepatocellular Carcinoma Cases Abouzied, Mekky Mohammed;Eltahir, Heba M.;Fawzy, Michael Atef;Abdel-Hamid, Nabil Mohie;Gerges, Amany Saber;El-Ibiari, Hesham Mohmoud;Nazmy, Maiiada Hassan 959 Purpose: To find parameters that can increase alpha-fetoprotein (AFP) sensitivity and so help in accurate diagnosis and rapid management of hepatocellular carcinoma (HCC), as AFP has limited utility in distinguishing HCC from benign hepatic disorders because of its high false-positive and false-negative rates. Materials and Methods: Serum levels of AFP, 5'-nucleotidase (5'-NU) enzyme activity and leucine aminopeptidase (LAP) enzyme activity were measured in 40 individuals. Results: LAP and 5'-NU were elevated in HCC at p<0.001. Pearson correlation coefficients showed that changes in AFP exhibited positive correlation with both 5'-NU and LAP (p<0.001). The complementary use of LAP alone with AFP resulted in an increase in the sensitivity of AFP in detecting HCC from 75% to 90%. The complementary use of both LAP and 5'-NU with AFP resulted in an increased sensitivity of AFP in detecting HCC from 75% to 95%. Conclusions: LAP and 5'-NU can be determined in HCC patients in combination with AFP to improve its sensitivity and decrease false negative results. Mechanistic Studies of Cyclin-Dependent Kinase Inhibitor 3 (CDKN3) in Colorectal Cancer Yang, Cheng;Sun, Jun-Jun 965 Colorectal cancer is one of the most severe subtypes of cancer, and has the highest propensity to manifest as metastatic disease. Because of the lack of knowledge of events that correlate with tumor cell migration and invasion, few therapeutic options are available. The current study aimed to explore the mechanism of colorectal cancer in the hope of identifying an ideal target for future treatment. We first discovered the pro-tumor effect of a controversial cell cycle regulator, cyclin-dependent kinase inhibitor 3 (CDKN3), which is highly expressed in colorectal cancer, and the possible related signaling pathways, by bioinformatics tools. We found that CDKN3 had remarkable effects in suppressing colorectal cancer cell proliferation and migration, and in inducing cell cycle arrest and apoptosis in a colorectal cancer cell line, SW480 cells. Our study, for the first time, provided consistent evidence showing overexpression of the cell cycle regulator CDKN3 in colorectal cancer. The in vitro studies in SW480 cells revealed a unique role of CDKN3 in regulating the cellular behavior of colorectal cancer cells, and implied the possibility of targeting CDKN3 as a novel treatment for colorectal cancer. Prognostic Factors Influencing Clinical Outcomes of Malignant Glioblastoma Multiforme: Clinical, Immunophenotypic, and Fluorescence in Situ Hybridization Findings for 1p19q in 816 Chinese Cases Qin, Jun-Jie;Liu, Zhao-Xia;Wang, Jun-Mei;Du, Jiang;Xu, Li;Zeng, Chun;Han, Wu;Li, Zhi-Dong;Xie, Jian;Li, Gui-Lin 971 Malignant glioblastoma multiforme (GBM) is the most malignant brain tumor and, despite recent advances in diagnostics and treatment, prognosis remains poor. In this retrospective study, we assessed the clinical and radiological parameters, as well as fluorescence in situ hybridization (FISH) of 1p19q deletion, in a series of cases. A total of 816 patients with GBM who received surgery and radiation between January 2010 and May 2014 were included in this study.
Kaplan-Meier survival analysis and Cox regression analysis were used to find the factors independently influencing patient progression free survival (PFS) and overall survival (OS). Age at diagnosis, preoperative Karnofsky Performance Scale (KPS) score, KPS score change at 2 weeks after operation, neurological deficit symptoms, tumor resection extent, maximal tumor diameter, involvement of eloquent cortex or deep structure, involvement of brain lobe, Ki-67 and MMP9 expression level and adjuvant chemotherapy were statistically significant factors (p<0.05) for both PFS and OS in the univariate analysis. Cox proportional hazards modeling revealed that age ${\leq}50$ years, preoperative KPS score ${\geq}80$, KPS score change after operation ${\geq}0$, involvement of single frontal lobe, deep structure involvement, low Ki-67 and MMP9 expression and adjuvant chemotherapy were independent favorable factors (p<0.05) for patient clinical outcomes. Knowledge and Awareness about Breast Cancer and its Early Symptoms among Medical and Non-Medical Students of Southern Punjab, Pakistan Noreen, Mamoona;Murad, Sheeba;Furqan, Muhammad;Sultan, Aneesa;Bloodsworth, Peter 979 Breast cancer is the leading cause of morbidity and mortality globally but has an even more significant impact in developing countries. Pakistan has the highest prevalence among Asian countries. A general lack of public awareness regarding the disease often results in late diagnosis and poor treatment outcomes. The literacy rate of the Southern Punjab (Pakistan) is low compared to its Northern part. It is therefore vital that university students and especially medical students develop a sound knowledge about the disease so that they can spread awareness to others who may be less educated. This study therefore considers current knowledge and understanding about the early signs of breast cancer amongst a study group of medical and non-medical university students of the Southern Punjab, Pakistan. A cross-sectional descriptive analysis of the university students was carried out using a self-administered questionnaire to assess their awareness of breast cancer from March to May 2014. A total of 566 students participated in this study, out of which 326 were non-medical and 240 were from a medical discipline. Statistical analysis was carried out using Graph Pad Prism Version 5 with a significance level set at p<0.05. The mean age of the non medical and medical participants was 23 (SD 2.1) and 22 (SD 1.3) years, respectively. Less than 35% students were aware of the early warning signs of the breast cancer development. Knowledge of medical students about risk factors was significantly better than the non medical ones, but on the whole was insufficient. Our study indicated that knowledge regarding breast cancer was generally insufficient amongst the majority of the university students (75% non-medical and 55% medical) of Southern Punjab, Pakistan. This study highlights the need to formulate an awareness campaign and to organize conferences to promote breast cancer awareness among students in this region. 
Efficacy and Tolerability of Weekly Docetaxel, Cisplatin, and 5-Fluorouracil for Locally Advanced or Metastatic Gastric Cancer Patients with ECOG Performance Scores of 1 and 2 Turkeli, Mehmet;Aldemir, Mehmet Naci;Cayir, Kerim;Simsek, Melih;Bilici, Mehmet;Tekin, Salim Basol;Yildirim, Nilgun;Bilen, Nurhan;Makas, Ibrahim 985 Background: Docetaxel, cisplatin, 5-fluorouracil (DCF) given every three weeks is an effective, but palliative regimen and significantly toxic especially in patients who have a low performance score. Here, we aimed to evaluate the efficacy and tolerability of a weekly formulation of DCF in locally advanced and metastatic gastric cancer patients. Materials and Methods: 64 gastric cancer patients (13 locally advanced and 51 metastatic) whose ECOG (Eastern Cooperative Oncology Group) performance status (PS) was 1-2 and who were treated with at least two cycles of weekly DCF protocol as first-line treatment were included retrospectively. The weekly DCF protocol included $25mg/m^2$ docetaxel, $25mg/m^2$ cisplatin, and 24 hours infusion of $750mg/m^2$ 5-fluorouracil, repeated every week. Disease and patient characteristics, prognostic factors, treatment response, grade 3-4 toxicity related to treatment, progression free survival (PFS) and overall survival (OS) were evaluated. Results: Of the patients, 41 were male and 23 were female; the median age was 63 (29-82) years. Forty-one patients were ECOG-1 and 23 were ECOG-2. Of the total, 81.2% received at least three cycles of chemotherapy. Partial response was observed in 28.1% and stabilization in 29.7%. Overall, the disease was controlled in 57.8% whereas progression was noted in 42.2%. The median time to progression was 4 months (95%CI, 2.8-5.2 months) and median overall survival was 12 months (95%CI, 9.2-14.8 months). The evaluation of patients for grade 3-4 toxicity revealed that 10.9% had anemia, 7.8% had thrombocytopenia and 10.9% had neutropenia. Non-hematologic toxicity included renal toxicity (7.8%) and thrombosis (1.6%). Conclusions: In patients with locally advanced or metastatic gastric cancer who were not candidates for DCF administered every-3-weeks, a weekly formulation of DCF demonstrated modest activity with minimal hematologic toxicity, suggesting that weekly DCF is a reasonable treatment option for such patients. Side Population Cell Level in Human Breast Cancer and Factors Related to Disease-free Survival Jin, C.G.;Zou, T.N.;Li, J.;Chen, X.Q.;Liu, X.;Wang, Y.Y.;Wang, X.;Che, Y.H.;Wang, X.C.;Sriplung, Hutcha 991 Side population (SP) cells have stem cell-like properties with a capacity for self-renewal and are resistant to chemotherapy and radiotherapy. Therefore the presence of SP cells in human breast cancer probably has prognostic value. Objective: To investigate the characteristics of SP cells and identify the relationship between the SP cells levels and clinico-pathological parameters of the breast tumor and disease-free survival (DFS) in breast cancer patients. Materials and Methods: A total of 122 eligible breast cancer patients were consecutively recruited from January 1, 2006 to December 31, 2007 at Yunnan Tumor Hospital. All eligible subjects received conventional treatment and were followed up for seven years. Predictors of recurrence and/or metastasis and DFS were analyzed using Cox regression analysis. Human breast cancer cells were also obtained from fresh human breast cancer tissue and cultured by the nucleic acid dye Hoechst33342 with Verapami. 
Flow cytometry (FCM) was employed to isolate the cells of SP and non-SP types. Results: In this study, SP cells were identified using flow cytometric analysis with Hoechst 33342 dye efflux. Adjusted for age, tumor size, lymph nodal status, histological grade, the Cox model showed a higher risk of recurrence and/or metastasis positively associated with the SP cell level (1.75, 1.02-2.98), as well as with axillary lymph node metastasis (2.99, 1.76-5.09), pathology invasiveness type (1.7, 1.14-2.55), and tumor volume doubling time (TVDT) (1.54, 1.01-2.36). Conclusions: The SP cell level is independently associated with tumor progression and clinical outcome after controlling for other pathological factors. The axillary lymph node status, TVDT and the status of non-invasive or invasive tumor independently predict the prognosis of breast cancer. Expression of PGDH Correlates with Cell Growth in Both Esophageal Squamous Cell Carcinoma and Adenocarcinoma Yang, Guo-Tao;Wang, Juan;Xu, Tong-Zhen;Sun, Xue-Fei;Luan, Zi-Ying 997 Esophageal cancer represents the fourth most common gastrointestinal cancer and generally confers a poor prognosis. Prostaglandin-producing cyclo-oxygenase has been implicated in the pathogenesis of esophageal cancer growth. Here we report that prostaglandin dehydrogenase, the major enzyme responsible for prostaglandin degradation, is significantly reduced in expression in esophageal cancer in comparison to normal esophageal tissue. Reconstitution of PGDH expression in esophageal cancer cells suppresses cancer cell growth, at least in part through preventing cell proliferation and promoting cell apoptosis. The tumor suppressive role of PGDH applies equally to both squamous cell carcinoma and adenocarcinoma, which enriches our understanding of the pathogenesis of esophageal cancer and may provide an important therapeutic target. Verification of the Correlation between Progression-free Survival and Overall Survival Considering Magnitudes of Survival Post-progression in the Treatment of Four Types of Cancer Liu, Li-Ya;Yu, Hao;Bai, Jian-Ling;Zeng, Ping;Miao, Dan-Dan;Chen, Feng 1001 https://doi.org/10.7314/APJCP.2015.16.3.1001 PDF KSCI Background: With development and application of new and effective anti-cancer drugs, the median survival post-progression (SPP) is often prolonged, and the role of the median SPP on surrogacy performance should be considered. To evaluate the impact of the median SPP on the correlation between progression-free survival (PFS) and overall survival (OS), we performed simulations for treatment of four types of cancer, advanced gastric cancer (AGC), metastatic colorectal cancer (MCC), glioblastoma (GBM), and advanced non-small-cell lung cancer (ANSCLC). Materials and Methods: The effects of the median SPP on the statistical properties of OS and the correlation between PFS and OS were assessed. Further, comparisons were made between the surrogacy performance based on real data from meta-analyses and simulation results with similar scenarios. Results: The probability of a significant gain in OS and HR for OS was decreased by an increase of the SPP/OS ratio or by a decrease of observed treatment benefit for PFS. Similarly, for each of the four types of cancer, the correlation between PFS and OS was reduced as the median SPP increased from 2 to 12 months. 
Except for ANSCLC, for which the median SPP was equal to the true value, the simulated correlation between PFS and OS was consistent with the values derived from meta-analyses for the other three kinds of cancer. Further, for these three types of cancer, when the median SPP was controlled at a designated level (i.e., < 4 months for AGC, < 12 months for MCC, and <6 months for GBM), the correlation between PFS and OS was strong; and the power of OS reached 34.9% at the minimum. Conclusions: PFS is an acceptable surrogate endpoint for OS under the condition of controlling SPPs for AGC, MCC, and GBM at their limit levels; a similar conclusion cannot be made for ANSCLC. Impact of Bilateral Breast Cancer on Prognosis: Synchronous Versus Metachronous Tumors Ibrahim, Noha Y.;Sroor, Mahmoud Y.;Darwish, Dalia O. 1007 Background: The clinical significance of bilateral breast cancer is unclear and its influence on prognosis is controversial. Materials and Methods: Between 2005 and 2009 we identified 110 cases of bilateral breast cancer (BBC) ; 49 patients had synchronous (duration between the occurrence of carcinoma in both breasts was less than 12 months) and 61 had metachronous (duration was more than one year with no ipsilateral local recurrence). We compared the patient characteristics including age, menopausal status, clinical stage, tumor size, histological classification, lymph node status, and hormone receptor and Her-2 status. We also compared the treatment given and overall and disease free survival (DFS) of both groups. Results: Synchronous cases tend to present more aggressively than metachronous cases and age at first presentation adversely affects survival. The 5 year overall survival was 78.7% for metachronous and 60% for synchronous. Patients with positive hormonal status had better five year disease free survival in metachronous compared to synchronous cases, at 76% and 63%, respectively. Age at first presentation >45years had better DFS (65%) compared to those with age ${\leq}45$ years (52%) at 5 years follow up. Conclusions: Patients with synchronous breast cancer may have worse prognosis. Young age and hormone receptor negative were risk factors in our study. Close follow up and early detection of contralateral breast cancer is mandatory. Plasma Phosphoproteome and Differential Plasma Phosphoproteins with Opisthorchis Viverrini-Related Cholangiocarcinoma Kotawong, Kanawut;Thitapakorn, Veerachai;Roytrakul, Sittiruk;Phaonakrop, Narumon;Viyanant, Vithoon;Na-Bangchang, Kesara 1011 This study was conducted to investigate the plasma phosphoproteome and differential plasma phosphoproteins in cases of of Opisthorchis viverrini (OV)-related cholangiocarcinoma (CCA). Plasma phosphoproteomes from CCA patients (10) and non-CCA subjects (5 each for healthy subjects and OV infection) were investigated using gel-based and solution-based LC-MS/MS. Phosphoproteins in plasma samples were enriched and analyzed by LC-MS/MS. STRAP, PANTHER, iPath, and MeV programs were applied for the identification of their functions, signaling and metabolic pathways; and for the discrimination of potential biomarkers in CCA patients and non-CCA subjects, respectively. A total of 90 and 60 plasma phosphoproteins were identified by gel-based and solution-based LC-MS/MS, respectively. Most of the phosphoproteins were cytosol proteins which play roles in several cellular processes, signaling pathways, and metabolic pathways (STRAP, PANTHER, and iPath analysis). 
The absences of serine/arginine repetitive matrix protein 3 (A6NNA2), tubulin tyrosine ligase-like family member 6, and biorientation of chromosomes in cell division protein 1-like (Q8NFC6) in the plasma phosphoproteome were identified as potential biomarkers for the differentiation of healthy subjects from patients with CCA and OV infection. To differentiate CCA from OV infection, the absence of both serine/threonine-protein phosphatase 2A 56 kDa regulatory subunit beta isoform and coiled-coil domain-containing protein 126 precursor (Q96EE4) was then applied. A combination of these 5 phosphoproteins may provide new alternative choices for CCA diagnosis.

Impact of Age, Tumor Size, Lymph Node Metastasis, Stage, Receptor Status and Menopausal Status on Overall Survival of Breast Cancer Patients in Pakistan Mahmood, Humera;Faheem, Mohammad;Mahmood, Sana;Sadiq, Maryam;Irfan, Javaid 1019 Background: Survival of breast cancer patients depends on a number of factors which are not only prognostic but also predictive. A number of studies have been carried out worldwide to find out the prognostic and predictive significance of different clinicopathological and molecular variables in breast cancer. This study was carried out at the Nuclear Medicine, Oncology and Radiotherapy Institute (NORI), Islamabad, to find out the impact of different factors on overall survival of breast cancer patients coming from Northern Pakistan. Materials and Methods: This observational retrospective study was carried out in the Oncology Department of NORI Hospital. A total of 2,666 patients were included. Data were entered into SPSS 20. Multinomial logistic regression analysis was performed to determine associations of different variables with overall survival. P values <0.05 were considered significant. Results: The mean age of the patients was 47.6 years, 49.5% being postmenopausal. Some 1,708 were ER positive and 1,615 were PR positive, while HER2/neu oncogene positivity was found in 683. A total of 1,237 presented with skin involvement and 426 had chest wall involvement. Some 1,663 had >5 cm tumors. Lymph node involvement was detected in 2,131. Overall survival was less than 5 years in 669 patients, only 324 surviving for more than 10 years, and in the remainder overall survival was in the range of 5-10 years. Conclusions: Tumor size, lymph node metastases, receptor status, HER2/neu positivity, skin involvement, and chest wall involvement have significant effects, whereas age and menopausal status have no significant effect, on overall survival of breast cancer patients in Pakistan.

Short Low Concentration Cisplatin Treatment Leads to an Epithelial Mesenchymal Transition-like Response in DU145 Prostate Cancer Cells Liu, Yi-Qing;Zhang, Guo-An;Zhang, Bing-Chang;Wang, Yong;Liu, Zheng;Jiao, Yu-Lian;Liu, Ning;Zhao, Yue-Ran 1025 Background: Prostate cancer is one of the main causes of cancer death, and drug resistance is the leading reason for therapy failure. However, how this occurs is largely unknown. We therefore aimed to study the response of DU145 cells to cisplatin. Materials and Methods: DU145 prostate cancer cells were treated with a low dose of cisplatin for 24 h, and cell viability and number were determined by MTT assay and trypan blue exclusion assay, respectively. Real-time polymerase chain reaction (PCR) was used to assess responses to cisplatin treatment. Results: After 24 h, 2 μg/ml treatment did not result in a significant reduction in cell viability or number. However, it led to enhanced cancer cell invasiveness.
E-cadherin mRNA was reduced, and vimentin, Snail, Slug, metalloproteinase 9 (MMP9) mRNA expression increased significantly, a feature of epithelial-mesenchymal transition (EMT). Conclusions: Short time low concentration cisplatin treatment leads to elevated invasiveness of DU145 cancer cells and this is possibly due to EMT. Expression of Glypican-3 is Highly Associated with Pediatric Hepatoblastoma: a Systemic Analysis Xiong, Xiao-Li;Qin, Huan;Yan, Su-Qi;Zhou, Li-Shan;Chen, Peng;Zhao, Dong-Chi 1029 Objective: Glypican-3 (GPC3) is reported to be an oncofetal protein that is a useful diagnostic immunomarker for hepatoblastoma. However, the results are not inclusive. This study systemically investigated the association between expression of GPC3 and pediatric hepatoblastoma. Methods: Clinical studies evaluating the association were identified using a predefined search strategy. GPC3 immunohistochemistry was applied in the pathological diagnosis of hepatoblastoma using the monoclonal antibodies with formalin-fixed and paraffin-embedded specimens. Positive predictive rates for the association between expression of GPC3 and pediatric hepatoblastoma were calculated. Results: Specimens from four clinical studies which including 134 patients with pediatric hepatoblastoma tested by GPC3 immunohistochemistry were considered eligible for inclusion. Systemic analysis showed that, in all patients, pooled positive predictive rate of the association between expression of GPC3 and pediatric hepatoblastoma was 95.5% (128/134). Conclusion: This systemic analysis suggests that the expression of glypican-3 is highly associated with the diagnosis of pediatric hepatoblastoma. SRD5A2 Gene Polymorphisms and the Risk of Benign Prostatic Hyperplasia but not Prostate Cancer Choubey, Vimal Kumar;Sankhwar, Satya Narayan;Carlus, S. Justin;Singh, Anand Narayan;Dalela, Divakar;Thangaraj, Kumarasamy;Rajender, Singh 1033 Background: Testosterone, a primary androgen in males, is converted into its most active form, dihydrotestosterone (DHT), by $5{\alpha}$-reductase type 2 (encoded by the SRD5A2 gene) in the prostate. DHT is necessary for prostatic growth and has five times higher binding affinity than testosterone for androgen receptors. We hypothesized that polymorphic variations in the SRD5A2 gene may affect the risk of benign prostatic hyperplasia and prostate cancer. Materials and Methods: We analyzed SRD5A2 gene polymorphisms in 217 BPH patients, 192 PCa cases, and 171 controls. Genotyping was undertaken using direct DNA sequencing. Genotype data were compared between cases and controls using a Chi square statistical tool. Results: We found that the A49T locus was monomorphic with 'AA' genotype in all subjects. At V89L locus, the presence of 'VV' showed a marginally significant correlation with increased BPH risk (p=0.047). At the $(TA)_n$ locus, longer TA repeats were found to be protective against BPH (p=0.003). However, neither of these polymoprhisms correlated with the risk of PCa. Conclusions: We conclude that A49T is monomorphic in the study population, VV marginally correlates with BPH risk, and longer $(TA)_n$ repeats are protective against BPH. None of these polymorphisms affect the risk of PCa. 
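The SRD5A2 study above compares genotype frequencies between cases and controls with a chi-square test and reports odds-ratio style risk estimates. That kind of comparison can be reproduced on any small count table; the following Python sketch uses scipy with invented counts (not the study's data) purely to show the mechanics:

import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

# hypothetical genotype counts at the V89L locus
# rows: BPH cases, controls; columns: VV, VL, LL
genotypes = np.array([[90, 95, 32],
                      [55, 85, 31]])
chi2, p, dof, expected = chi2_contingency(genotypes)
print(f"chi-square = {chi2:.3f}, p = {p:.4f}, dof = {dof}")

# collapsing to a 2x2 table (VV versus VL+LL) gives an odds ratio for the VV genotype
vv_table = np.array([[90, 127],    # cases: VV, VL+LL
                     [55, 116]])   # controls: VV, VL+LL
odds_ratio, p_vv = fisher_exact(vv_table)
print(f"OR = {odds_ratio:.2f}, p = {p_vv:.4f}")

A marginal p-value around 0.05 in such a 2x2 comparison is what the abstract describes as "marginally significant" for the VV genotype.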
Genotypes of Hepatitis C Virus in Relapsed and Non-respondent Patients and their Response to Anti-Viral Therapy in District Mardan, Khyber Pakhtunkhawa, Pakistan Akhtar, Noreen;Bilal, Muhammad;Rizwan, Muhammad;Khan, Muhammad Asif;Khan, Aurangzeb 1037 Hepatitis C is a blood-borne infectious disease of the liver, caused by a small, enveloped, positive-sense single-stranded RNA virus called the hepatitis C virus (HCV). HCV belongs to the Flaviviridae family and has 6 genotypes and more than 100 subtypes. It is estimated that 185 million people are infected with HCV worldwide and 5% of these are in Pakistan. The study was designed to evaluate the different genotypes of HCV circulating in District Mardan and to assess the response of these genotypes to different anti-viral regimens. In this study 3,800 patients were exposed to interferon alfa-2a plus ribavirin treatment for 6 months and subjected to real-time PCR to check the viral response. Among these, 3,677 (97%) patients showed no detectable HCV RNA, while 123 (3%) patients (non-responders) remained positive for HCV RNA. Genotyping of their samples showed that most of them belonged to the 3a genotype. Non-responders (123) and relapsed (5) patients were subjected to PEG-interferon and ribavirin therapy for the next 6 months, which resulted in elimination of HCV RNA from 110 patients. The genotypes of the samples that remained resistant to anti-viral treatment were 3b, 2a, 1a and 1b. Furthermore, viral RNA from 6 patients remained un-typed while 4 patients showed mixed infections. HCV was found to be more resistant to antiviral therapy in females as compared to males. The age group 36-45, in both females and males, was found to be most affected by infection. In general, 3a is the most prevalent genotype circulating in District Mardan and the best anti-viral therapy is PEG-interferon plus ribavirin, but it is common practice that, due to the high cost, patients receive interferon alfa-2a plus ribavirin, with consequent resistance in 3% of patients given this treatment regimen.

Expression of Cox-2 and Bcl-2 in Paget's Disease of the Breast Alikanoglu, Arsenal Sezgin;Yildirim, Mustafa;Suren, Dinc;Tutus, Birsel;Kaya, Vildan;Topal, Cumhur Selcuk;Keser, Sevinc;Karadayi, Ayse Nimet;Kapucuoglu, Fatma Nilgun;Ayva, Sebnem;Gunduz, Seyda 1041 Background: Paget's disease (PD) is a rare form of intraepithelial adenocarcinoma that involves breast and extramammary tissues. It is often associated with ductal carcinoma in situ and/or invasive ductal cancer. Molecular pathways that play a role in the development of Paget's disease are still unclear. Expression patterns of Cox-2 and Bcl-2 were therefore assessed. Materials and Methods: Patients with a histopathological diagnosis of Paget's disease were included in this study. Patient files were analysed retrospectively. Results: Invasive cancer was diagnosed in 35 (76.1%) of the patients, 7 (15.2%) had ductal carcinoma in situ and 4 (8.7%) patients had no associated neoplasm. Twenty-four (52.2%) patients showed COX-2 expression in Paget cells whereas no expression was seen in 22 (47.8%) patients. No relation was found between COX-2 expression and the lesion underlying Paget's disease (p=0.518). Bcl-2 expression in Paget cells was found positive in 12 (26.1%) and negative in 27 (58.7%) cases. There was no relation between Bcl-2 expression and the lesion accompanying Paget's disease (p=0.412). No relation was observed between COX-2 expression and Bcl-2 expression (p=0.389).
Conclusions: In breast cancer, COX-2 expression is associated with poor prognostic factors. As COX-2 expression increases, the tendency to metastasize also increases. In our study we found significantly high COX-2 expression in Paget's disease of the breast. We suggest that COX-2 expression and inflammatory processes may play a role in the pathogenesis of Paget's disease of the breast.

Pathological Investigation of Vertebral Tumor Metastasis from Unknown Primaries - a Systematic Analysis Zhang, Yan;Cai, Feng;Liu, Liang;Liu, Xiao-Dong 1047 Background: This systematic analysis was conducted to investigate the pathological diagnosis of vertebral tumor metastasis with unknown primaries. Methods: Clinical studies conducted to pathologically investigate vertebral tumor metastasis were identified using a predefined search strategy. Pooled diagnosis (PD) of each pathological confirmation was calculated. Results: For vertebral tumor metastasis, 5 clinical studies which included 762 patients were considered eligible for inclusion. Systematic analysis suggested that, for all patients with vertebral tumor metastasis, the dominant PD was pathologically confirmed as lung cancer in 21.7% (165/762), breast cancer in 26.6% (203/762) and prostate cancer in 19.2% (146/762). Other diagnoses that could be confirmed in this cohort of patients included, for example, lymphoma, multiple myeloma and renal cancer. Conclusions: This systematic analysis suggested that breast, lung and prostate lesions could be the most common pathological types of cancer for vertebral tumor metastasis from unknown primaries, and other common diagnoses could include lymphoma, multiple myeloma and renal cancer.

Polymorphisms in Genes of the De Novo Lipogenesis Pathway and Overall Survival of Hepatocellular Carcinoma Patients Undergoing Transarterial Chemoembolization Wu, You-Sheng;Bao, Deng-Ke;Dai, Jing-Yao;Chen, Cheng;Zhang, Hong-Xin;Yang, YeFa;Xing, Jin-Liang;Huang, Xiao-Jun;Wan, Shao-Gui 1051 Aberrant expression of genes in the de novo lipogenesis (DNL) pathway has been associated with various cancers, including hepatocellular carcinoma (HCC). Single nucleotide polymorphisms (SNPs) of DNL genes have been reported to be associated with prognosis of some malignancies. However, the effects of SNPs in DNL genes on overall survival of HCC patients receiving transarterial chemoembolization (TACE) treatment are still unknown. In the present study, nine SNPs in three genes (ACLY, ACACA and FASN) of the DNL pathway were genotyped using the Sequenom iPLEX genotyping system in a hospital-based cohort of 419 HCC patients treated with TACE, and their associations with HCC overall survival were evaluated by Cox proportional hazard regression analysis under three genetic models (additive, dominant and recessive). Although we did not find any significant results in the total analysis (all p>0.05), our stratified data showed that SNP rs9912300 in the ACLY gene was significantly associated with overall survival of HCC patients with lower AFP level, and SNP rs11871275 in the ACACA gene was significantly associated with overall survival of HCC patients with higher AFP level. We further identified significant interactions between AFP level and SNP rs9912300 or rs11871275 in the joint analysis. In conclusion, our data suggest that genetic variations in genes of the DNL pathway may be a potential biomarker for predicting the clinical outcome of HCC patients treated with TACE.
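The DNL-pathway study above evaluates each SNP under additive, dominant and recessive genetic models before entering it into a Cox model of overall survival. How a genotype column is recoded under the three models is easy to show; the following short Python sketch uses an invented SNP and allele labels, since the abstract does not give them:

import pandas as pd

# hypothetical genotypes for one SNP; 'G' is assumed to be the minor allele
geno = pd.Series(["AA", "AG", "GG", "AG", "AA", "GG"], name="rs_example")
copies = geno.str.count("G")                 # 0, 1 or 2 copies of the minor allele

models = pd.DataFrame({
    "genotype": geno,
    "additive": copies,                      # additive model: 0/1/2
    "dominant": (copies >= 1).astype(int),   # dominant model: carrier vs non-carrier
    "recessive": (copies == 2).astype(int),  # recessive model: homozygote vs the rest
})
print(models)

Each recoded column would then enter a Cox proportional hazards model of overall survival, in the same way as in the survival-analysis sketch given earlier for the glioblastoma study.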
Systematic Review of Available Guidelines on Fertility Preservation of Young Patients with Breast Cancer Haddadi, Mahnaz;Muhammadnejad, Samad;Sadeghi-Fazel, Fariba;Zandieh, Zahra;Rahimi, Gohar;Sadighi, Sanambar;Akbari, Parya;Mohagheghi, Mohammad-Ali;Mosavi-Jarrahi, Alireza;Amanpour, Saeid 1057 Background: Since the survival rate of breast cancer patients has improved, harmful effects of new treatment modalities on fertility of the young breast cancer patients has become a focus of attention. This study aimed to systematically review and critically appraise all available guidelines for fertility preservation in young breast cancer patients. Materials and Methods: Major citation databases were searched for treatment guidelines. Experts from relevant disciplines appraised the available guidelines. The AGREE II Instrument that includes 23 criteria in seven domains (scope and purpose of the guidelines, stakeholder involvement, rigor of development, clarity, applicability, editorial independence, and overall quality) was used to apprise and score the guidelines. Results: The search strategy retrieved 2,606 citations; 72 were considered for full-text screening and seven guidelines were included in the study. There was variability in the scores assigned to different domains among the guidelines. ASCO (2013), with an overall score of 68.0%, had the highest score, and St Gallen, with an overall score of 24.7%, had the lowest scores among the guidelines. Conclusions: With the promising survival rate among breast cancer patients, more attention should be given to include specific fertility preservation recommendations for young breast cancer patients. Cognitive Behavioral Therapy in Breast Cancer Patients - a Feasibility Study of an 8 Week Intervention for Tumor Associated Fatigue Treatment Eichler, Christian;Pia, Multhaupt;Sibylle, Multhaupt;Sauerwald, Axel;Friedrich, Wolff;Warm, Mathias 1063 Background: Tumor associated fatigue (TAF) or cancer related fatigue (CRF) is not a new concept. Nonetheless, no real headway has been made in the quantitative analysis of its successful treatment via cognitive behavioral therapy. Since 20 to 30% of all breast cancer patients suffer from anxiety and/or depression within the first year of their diagnosis, this issue needs to be addressed and a standard treatment protocol has to be developed. This study focused on developing a simple, reproducible and short (8 weeks) protocol for the cognitive behavioral therapy support of tumor associated fatigue patients. Materials and Methods: Between the year 2011 and 2012, 23 breast cancer patients fulfilled the diagnosis TAF requirements and were introduced into this study. Our method focused on a psycho-oncological support group using a predetermined, highly structured and reproducible, cognitive behavioral therapy treatment manual. Eight weekly, 90 minute sessions were conducted and patients were evaluated before and after this eight session block. Tumor fatigue specific questionnaires such as the multidimensional fatigue inventory (MFI) as well as the hospital anxiety and depression scale (HADS) were used in order to quantitatively evaluate patient TAF. Results: Of the 23 patients enrolled in the study, only 7 patients fulfilled the TAF diagnostic criteria after the psycho-oncological group treatment. This represents a 70% reduction in diagnosable tumor associated fatigue. The HADS analysis showed a 33% reduction in patient anxiety as well as a 57% reduction in patient depression levels. 
The MFI scores showed a significant reduction in 4 of the 5 evaluate categories. With the exception of the "mental fatigue" MFI category all results were statistically significant. Conclusions: This study showed that a highly structured, cognitive behavioral therapy group intervention will produce significant improvements in breast cancer patient tumor associated fatigue levels after only 8 weeks. Mortality Determinants in Colorectal Cancer Patients at Different Grades: a Prospective, Cohort Study in Iran Ahmadi, Ali;Mosavi-Jarrahi, Alireza;Pourhoseingholi, Mohamad Amin 1069 Background: Colorectal cancer (CRC) is an important cause of mortality and morbidity in many communities worldwide. This population based study was conducted to assess determinants of colorectal mortality in Iranian patients. Materials and Methods: A cohort of 1,127 cases of confirmed colorectal cancer registered in a population based registry covering 10 referral hospital in Tehran, Iran, were followed for five years. Information about tumor characteristics, smoking status and family history were collected at base line and survival status were followed every six months by contacting patient or next of kin (if patients died during the follow-up). The cause of death for each case was validated by verbal autopsy and referring to patient medical records at the time of death. The data were analyzed by Stata software using univariate and multivariate analysis (Cox regression). In building the model a p value of less than 5% was considered as significant. Results: The age at diagnosis was $53.5{\pm}14$ years. Sixty one percent were male. Colorectal mortality among the patients was 96.9 person-years among men and 83 person-years among women. Seventy five percent of patients lived for 2.72 years, 50% for 5.83, and 25% for 13 years after the diagnosis of colorectal cancer. The age at diagnosis was significantly different between men and women (p<0.03). Higher tumor grade predicted higher death rate; the adjusted hazard ratios were 1.79 (95%CI, 0.88-3.61), 2.16 (95%CI, 1.07-4.37), and 3.1 (95%CI, 1.51-6.34) for grades II, III, and IV respectively when they were compared with grade I as reference. Ethnicity, marital status, family history of cancer, and smoking were related to survival with different degrees of magnitude. Conclusions: Among many factors related to survival among the colorectal patients, tumor grade and smoking showed the highest magnitudes of association. Effect of CXCR4 and CD133 Co-expression on the Prognosis of Patients with Stage II~III Colon Cancer Li, Xiao-Feng;Guo, Xiao-Guang;Yang, Yong-Yan;Liu, Ai-Yong 1073 Background: To explore the relationship between CXCR4, CD133 co-expression and clinicopathological features as well as prognosis of patients with phase II~III colon cancer. Materials and Methods: Forty-nine paraffin-embedded samples of tumor tissue and epithelial tissue adjacent to cancer were collected from patients with colon cancer undergoing radical surgery in Baotou Cancer Hospital from January, 2010 to June, 2011. CXCR4 and CD133 expression was detected using immunohistochemistry and its relationship with clinicopathological features and the 3-year survival rate was analyzed. Results: In the tumor tissue and colonic epithelial tissue adjacent to cancer, the positive expression rates of CXCR4 were respectively 61.2% (30/49) and 8.16% (4/49), while those of CD133 being 36.7% (18/49) and 6.12% (3/49). 
CXCR4 and CD133 expression in tumor tissue was not related to patient age, gender, primary focal sites, tumor size, TNM staging, histological type, tumor infiltration depth or presence or absence of lymphatic metastasis, but CXCR4 and CD133 co-expression was associated with TNM staging and lymphatic metastasis. The 3-year survival rate of patients with CXCR4 and CD133 co-expression was 27.3% (3/11), and that of the remainder was 76.3% (29/38), the difference being significant (χ²=7.0206, p=0.0081). Conclusions: CXCR4 and CD133 co-expression may be a risk factor for poor prognosis of patients with stage II~III colon cancer.

RASAL1 Attenuates Gastric Carcinogenesis in Nude Mice by Blocking RAS/ERK Signaling Chen, Hong;Zhao, Ji-Yi;Qian, Xu-Chen;Cheng, Zheng-Yuan;Liu, Yang;Wang, Zhi 1077 Recent studies have suggested that the RAS protein activator like-1 (RASAL1) functions as a tumor suppressor in vitro and may play an important role in the development of gastric cancer. However, whether or not RASAL1 suppresses tumor growth in vivo remains to be determined. In the present study, we investigated the role of RASAL1 in gastric carcinogenesis using an in vivo xenograft model. A lentiviral RASAL1 expression vector was constructed and utilized to transfect the human poorly differentiated gastric adenocarcinoma cell line, BGC-823. RASAL1 expression levels were verified by quantitative real-time RT-PCR and Western blotting analysis. Then, we established a nude mouse xenograft model using BGC-823 cells either over-expressing RASAL1 or not. After three weeks, the results showed that over-expression of RASAL1 led to a significant reduction in both tumor volume and weight compared with the other two control groups. Furthermore, in xenograft tissues the increased expression of RASAL1 in BGC-823 cells caused decreased expression of p-ERK1/2, a downstream molecule in the RAS/RAF/MEK/ERK signaling pathway. These findings demonstrated that over-expression of RASAL1 could inhibit the growth of gastric cancer by inactivation of the RAS/RAF/MEK/ERK pathway in vivo. This study indicates that RASAL1 may attenuate gastric carcinogenesis.

Compliance with Smoke-Free Policies in Korean Bars and Restaurants in California: a Descriptive Analysis Irvin, Veronica L.;Hofstetter, C. Richard;Nichols, Jeanne F.;Chambers, Christina D.;Usita, Paula M.;Norman, Gregory J.;Kang, Sunny;Hovell, Melbourne F. 1083 Background: Compliance with California's smoke-free restaurant and bar policies may be more a function of social contingencies and less a function of legal contingencies. The aims of this study were: 1) to report indications of compliance with smoke-free legislation in Korean bars and restaurants in California; 2) to examine the demographic, smoking status, and acculturation factors associated with who smoked indoors; and 3) to report social cues in opposition to smoking among a sample of Koreans in California. Materials and Methods: Data were collected by telephone surveys administered by bilingual interviewers between 2007 and 2009, and included California adults of Korean descent who visited a Korean bar or restaurant in a typical month (N=2,173, 55% female). Results: 1% of restaurant-going participants smoked inside while 7% observed someone else smoke inside a Korean restaurant. Some 23% of bar-going participants smoked inside and 65% observed someone else smoke inside a Korean bar. Presence of ashtrays was related to indoor smoking in bars and restaurants.
Among participants who observed smoking, a higher percentage observed someone ask a smoker to stop (17.6%) or gesture to a smoker (27.0%) inside Korean restaurants (N=169) than inside Korean bars (n=141, 17.0% observed verbal cue and 22.7% observed gesture). Participants who smoked inside were significantly younger and more acculturated than participants who did not. Less acculturated participants were significantly more to likely to be told to stop smoking. Conclusions: Ten years after implementation of ordinances, smoking appears to be common in Korean bars in California. Knowledge Regarding Early Detection of Cancer among Romanian Women having Relatives with Cancer Lotrean, Lucia Maria;Ailoaiei, Roxana;Popa, Monica;de Vries, Hein 1091 Cancers can be detected in early stages through awareness of suspicious symptoms or by specific actions undertaken by individuals or participation in medical checks or screening programmes. The present research had three objectives: to assess the knowledge of Romanian women who have relatives with cancer with regard to cancer symptoms and detection methods; to identify socio-demographics factors influencing their level of knowledge; provide information regarding the attitudes of women from the study regarding medical help-seeking in case of any symptom which might be associated with cancer. This cross-sectional study was performed in an oncological hospital from Cluj-Napoca, Romania. It involved 160 women aged 18-70 years, who had relatives with cancer. An anonymous questionnaire was filled in by the participants. The results showed that around 10% of the study sample recognized all the 8 listed symptoms associated with cancer and all the 7 listed methods for cancer detection. The results of the linear regression analyses show that the level of knowledge regarding both symptoms and methods for detection was higher among younger women (B=-0.390, p<0.01, respectively B=-0.260; p<0.01), among those living in urban areas (B=0.872, p<0.01, respectively B=0.676; p<0.01) and those having higher educational level (B=0.883, p<0.001, respectively B=0.536; p<0.001). The majority of the participants agreed with the importance of looking for medical help within weeks up to one month in case that a symptom which might be associated with cancer was observed. The study underlines the necessity that much more information should be given to women who have relatives with cancer about what they can do to detect cancer in an early stage. This is especially needed for older women, women living in rural areas and women having a lower educational level. Knowledge and Beliefs of Malaysian Adolescents Regarding Cancer Al-Naggar, Redhwan Ahmed;Jillson, Irene Anne;Abu-Hamad, Samir;Mumford, William;Bobryshev, Yuri V. 1097 Background: Few studies have explored the knowledge and attitudes of adolescents toward cancer prevention and treatment. This lack of research and its potential utility in the development of new educational initiatives and screening methods, or the reconstruction of existing ones, provided the impetus for this study. The primary research aim was to assess secondary school student knowledge of cancer and determine whether or not they possessed basic knowledge of cancer symptoms, risk factors, and treatments and to determine the relationship between cancer knowledge and key demographic factors. 
Materials and Methods: The Management and Science University conducted a cross-sectional study analyzing responses through cross-tabulation with the socio-demographic data collected. Results: The findings of our quantitative analysis suggest that Malaysian youth generally possess moderate knowledge about cancer. Quantitative analyses found that socioeconomic inequalities and bias in education are important factors contributing to cancer awareness, prevention, and treatment among Malaysian adolescents. Conclusions: The findings indicate that Malaysian youth generally possess moderate knowledge about cancer, but the current deficiencies in initiatives directed at cancer awareness continue to hinder improvement in the prevention of cancer among Malaysian adolescents.

Ginsenoside Rh2 Differentially Mediates microRNA Expression to Prevent Chemoresistance of Breast Cancer Wen, Xu;Zhang, He-Da;Zhao, Li;Yao, Yu-Feng;Zhao, Jian-Hua;Tang, Jin-Hai 1105 Chemoresistance is the most common cause of chemotherapy failure during breast cancer (BCA) treatment. It is generally known that the mechanisms of chemoresistance in tumors involve multiple genes and multiple signaling pathways; if appropriate drugs are used to regulate the mechanisms at the gene level, it should be possible to effectively reverse chemoresistance in BCA cells. It has been confirmed that chemoresistance in BCA cells can be reversed by ginsenoside Rh2 (G-Rh2). Preliminary studies by our group identified some drug-resistance-specific miRNAs. Accordingly, we proposed that G-Rh2 could mediate drug-resistance-specific miRNAs and their corresponding target genes through the gene regulatory network; this could cut off the drug-resistance process in tumors and enhance treatment effects. G-Rh2 and breast cancer cells were used in our study. Through pharmaceutical interventions, we could explore how G-Rh2 inhibits chemotherapy resistance in BCA, and analyze its impact on related miRNAs and target genes. Finally, we aim to reveal the anti-resistance molecular mechanisms of G-Rh2 from a different angle, namely miRNA-mediated chemoresistance signals among cells.

Pathologic Response During Chemo-radiotherapy and Variation of Serum VEGF Levels Could Predict Effects of Chemo-Radiotherapy in Patients with Esophageal Cancer Yu, Jing-Ping;Lu, Wen-Bin;Wang, Jian-Lin;Ni, Xin-Chu;Wang, Jian;Sun, Zhi-Qiang;Sun, Su-Ping 1111 Background: To investigate the relationship between pathologic tumor response to concurrent chemo-radiotherapy and variation of serum VEGF in patients with esophageal cancer. Materials and Methods: Forty-six patients with esophageal cancer who were treated with concurrent chemo-radiotherapy were enrolled. Endoscopic and pathologic examination was conducted before and four weeks afterwards. Serum levels of VEGF were documented before, four weeks into, and after chemo-radiotherapy. The relationship between pathologic response and the variation of serum VEGF levels and its influence on prognosis were investigated. Results: Serum VEGF levels decreased remarkably during and after chemo-radiotherapy in patients whose pathologic response was severe (F=5.393, 4.587, p<0.05). There were no statistically significant differences in serum VEGF levels before, during and after chemo-radiotherapy for patients whose pathologic response was moderate or mild. There were 18 (85.7%), 7 (53.8%) and 6 patients (50.0%) whose serum VEGF level dropped in the severe, moderate and mild groups, respectively, with significant differences among these groups (p=0.046).
Two-year survival rates of patients with severe, moderate and mild pathologic response were 61.9%, 53.8% and 33.3% respectively, and no statistically significant difference in OS was found between the severe and mild groups (p=0.245). Conclusions: Tumor pathologic response during chemo-radiotherapy and the changes in serum VEGF level could predict the curative effects of chemo-radiotherapy in patients with esophageal cancer.

Incidence of Cisplatin-Induced Nephrotoxicity and Associated Factors among Cancer Patients in Indonesia Prasaja, Yenny;Sutandyo, Noorwati;Andrajati, Retnosari 1117 Background: Cisplatin is still used as a first-line medication for solid tumors. Nephrotoxicity is a serious side effect that can decrease renal function and restrict applicable doses. This research aimed to obtain the profile of cisplatin-induced nephrotoxicity and its associated factors in adult cancer patients at Dharmais National Cancer Hospital (DNCH). Materials and Methods: The design was cross-sectional with data obtained from patient medical records. We retrospectively reviewed adult cancer patients treated with cisplatin ≥60 mg/m² for at least four consecutive chemotherapy cycles from August 2011 to November 2013. The nephrotoxicity criterion was renal function decline characterized by creatinine clearance <60 ml/min using the Cockcroft-Gault (CG) equation. Results: Eighty-eight subjects received at least four chemotherapy cycles of cisplatin. The prevalence of cisplatin nephrotoxicity was 34.1%. Symptoms could be observed after the first cycle of chemotherapy, and the degree of renal impairment was higher with increased numbers of cycles (r=-0.946, r²=89.5%). Factors that affected the decline of renal function were patient age (p=0.008, OR=3.433, 95%CI=1.363-8.645) and hypertension (p=0.026, OR=2.931, 95%CI=1.120-7.670). Conclusions: Cisplatin nephrotoxicity occurred in more than one-third of patients after the fourth cycle of chemotherapy and worsened after each cycle despite preventive strategies such as hydration. The decline of renal function induced by cisplatin ≥60 mg/m² was affected by age and hypertension.

Lack of Influence of the ACE1 Gene I/D Polymorphism on the Formation and Growth of Benign Uterine Leiomyoma in Turkish Patients Gultekin, Guldal Inal;Yilmaz, Seda Gulec;Kahraman, Ozlem Timirci;Atasoy, Hande;Dalan, A. Burak;Attar, Rukset;Buyukoren, Ahmet;Ucunoglu, Nazli;Isbir, Turgay 1123 Uterine leiomyomas (ULM) are benign tumors of the smooth muscle cells of the myometrium. They represent a common health problem and are estimated to be present in 30-70% of women of reproductive age. Abnormal angiogenesis and vascular-related growth factors have been suggested to be associated with ULM growth. The angiotensin-I converting enzyme (ACE) has been related to several tumors. The aim of this study was to identify a possible correlation between ULM and the ACE I/D polymorphism, and to evaluate whether the ACE I/D polymorphism could be a marker for early diagnosis and prognosis. ACE I/D was amplified with specific primer sets recognizing genomic DNA from ULM patients (n=72) and control volunteers (n=83), and amplicons were separated on agarose gels. The observed genotype frequencies were in agreement with Hardy-Weinberg equilibrium (χ²=2.162, p=0.339). There was no association between allele frequencies and study groups (χ²=0.623, p=0.430 for the ACE I allele; χ²=0.995, p=0.339 for the ACE D allele).
In addition, there were no significant differences between ACE I/D polymorphism genotype frequencies and ULM range in size and number ($X^2=1.760;$ p=0.415 for fibroid size, $X^2=0.342;$ p=0.843 for fibroid number). We conclude that the ACE gene I/D polymorphism is not related with the size or number of ULM fibroids in Turkish women. Thus it cannot be regarded as an early diagnostic parameter nor as a risk estimate for ULM predisposition. High Quality Tissue Miniarray Technique Using a Conventional TV/Radio Telescopic Antenna Elkablawy, Mohamed A.;Albasri, Abdulkader M. 1129 Background: The tissue microarray (TMA) is widely accepted as a fast and cost-effective research tool for in situ tissue analysis in modern pathology. However, the current automated and manual TMA techniques have some drawbacks restricting their productivity. Our study aimed to introduce an improved manual tissue miniarray (TmA) technique that is simple and readily applicable to a broad range of tissue samples. Materials and Methods: In this study, a conventional TV/radio telescopic antenna was used to punch tissue cores manually from donor paraffin embedded tissue blocks which were pre-incubated at $40^{\circ}C$. The cores were manually transferred, organized and attached to a standard block mould, and filled with liquid paraffin to construct TmA blocks without any use of recipient paraffin blocks. Results: By using a conventional TV/radio antenna, it was possible to construct TmA paraffin blocks with variable formats of array size and number ($2-mm{\times}42$, $2.5-mm{\times}30$, $3-mm{\times}24$, $4-mm{\times}20$ and $5-mm{\times}12$ cores). Up to $2-mm{\times}84$ cores could be mounted and stained on a standard microscopic slide by cutting two sections from two different blocks and mounting them beside each other. The technique was simple and caused minimal damage to the donor blocks. H&E and immunostained slides showed well-defined tissue morphology and array configuration. Conclusions: This technique is easy to reproduce, quick, inexpensive and creates uniform blocks with abundant tissues without specialized equipment. It was found to improve the stability of the cores within the paraffin block and facilitated no losses during cutting and immunostaining. Primary Thyroid Lymphoma: Multi-Slice Computed Tomography Findings Li, Xu-Bin;Ye, Zhao-Xiang 1135 Background: The objective of this study was to investigate the MSCT characteristics of PTL in order to enhance the awareness of this uncommon entity among both clinicians and radiologists. Materials and Methods: The clinicopathological data and MSCT images of 27 patients with PTL were retrospectively reviewed. The MSCT appearances were classified into three types: type 1, solitary nodule surrounded by normal thyroid tissue; type 2, multiple nodules in the thyroid, and type 3, enlarged thyroid glands with a reduced attenuation with or without peripheral thin hyperattenuating thyroid tissue. Results: The patients were enrolled in the study with a mean age of 68 years (range, 51-86years) and compression symptoms or enlarged cervical lymph nodes at diagnosis. Hashimoto's thyroiditis was in 20 patients. All patients had non-Hodgkin lymphoma of B-cell in origin, including 22 cases of diffuse large B-cell lymphoma (DLBCL) and 5 of low-grade B-cell lymphoma of mucosa-associated lymphoid tissue (MALT). For MSCT appearance, type 1 pattern was observed in 2 patients, type 2 in 8, and seventeen type 3 in 17. 
The lesions occurred in more than one lobe, with a mean maximal transverse diameter of 6.9 cm and an ill-defined margin. Most tumors showed a homogeneous attenuation equal to that of surrounding muscles before contrast and obvious enhancement after contrast. Cervical lymph node involvement and invasion of the trachea and/or esophagus were mainly observed in patients with DLBCL. Conclusions: PTL should be clinically considered in elderly patients presenting with a history of Hashimoto's thyroiditis and cervical lymphadenopathy. The MSCT characteristics of PTL include a mass diffusely affecting more than one thyroid lobe, attenuation equal to that of muscle before contrast, and obvious enhancement after contrast. DLBCL, the most common histological subtype of PTL, is associated with a higher invasive tendency.

Quantitative Assessment of the Diagnostic Role of CDH13 Promoter Methylation in Lung Cancer Zhong, Yun-Hua;Peng, Hao;Cheng, Hong-Zhong;Wang, Ping 1139 In order to explore the association between cadherin 13 (CDH13) gene promoter methylation and lung carcinoma (LC) risk, we carried out a meta-analysis by searching PubMed and Web of Science. Ultimately, 17 articles were identified and analyzed with STATA 12.0 software. Overall, we found a significant relationship between CDH13 promoter methylation and LC risk (odds ratio=6.98, 95% confidence interval: 4.21-11.56, p<0.001). Subgroup analyses further revealed that LC risk was increased for individuals carrying methylated CDH13 compared with those with unmethylated CDH13. Hence, our study identified a strong association between CDH13 gene promoter methylation and LC and highlighted a promising potential for CDH13 methylation in LC risk prediction.

Genetic Susceptibility to Oral Cancer due to Combined Effects of GSTT1, GSTM1 and CYP1A1 Gene Variants in Tobacco Addicted Patients of Pashtun Ethnicity of Khyber Pakhtunkhwa Province of Pakistan Zakiullah, Zakiullah;Ahmadullah, Ahmadullah;Khisroon, Muhammad;Saeed, Muhammad;Khan, Ajmal;Khuda, Fazli;Ali, Sajid;Javed, Nabila;Ovais, Muhammad;Masood, Nosheen;Khalil, Nasir Khan;Ismail, Mohammad 1145 Associations of GSTT1, GSTM1 and CYP1A1 gene variants with the risk of developing oral cancer were evaluated in this study. A case-control study was conducted in the Pashtun population of Khyber Pakhtunkhwa province of Pakistan, in which 200 hospital-based oral cancer cases and 151 population-based healthy controls exposed to similar environmental conditions were included. Sociodemographic data were obtained and blood samples were collected with informed consent for analysis. GSTM1 and GSTT1 were analysed by a conventional PCR method, while a specific RT-PCR method was used to detect CYP1A1 polymorphisms. Results were analyzed with a conditional logistic regression model in SPSS version 20. The study shows that patients with either GSTM1 or GSTT1 null genotypes have a significantly higher risk of oral cancer (adjusted odds ratios (OR): 3.019 (1.861-4.898) and 3.011 (1.865-4.862), respectively), which further increased when either one or both null genes were present in combination (adjusted OR: 3.627 (1.981-6.642) and 9.261 (4.495-19.079), respectively).
CYP1A1 rs4646903 gene variants individually showed a weak association (OR: 1.121, 0.717-1.752); however, the presence of GSTM1 and/or GSTT1 null genotypes further increased the association (adjusted ORs: 4.576 (2.038-10.273), 5.593 (2.530-12.362) and 16.10 (3.854-67.260) for GSTM1/GSTT1 null with CYP1A1 wild type, GSTM1/GSTT1 either null with CYP1A1 variant alleles, and all three gene polymorphisms combined, respectively). Our findings suggest that the presence of GSTM1 and/or GSTT1 null genotypes along with variant alleles of CYP1A1 may be risk alleles for oral cancer susceptibility in the Pashtun population.

Mutation Detection of E6 and LCR Genes from HPV 16 Associated with Carcinogenesis Mosmann, Jessica P.;Monetti, Marina S.;Frutos, Maria C.;Kiguen, Ana X.;Venezuela, Raul F.;Cuffini, Cecilia G. 1151 Human papillomavirus (HPV) is responsible for one of the most frequent sexually transmitted infections. The first phylogenetic analysis was based on an LCR region fragment. Nowadays, 4 variants are known: African (Af-1, Af-2), Asian-American (AA) and European (E). However, the existence of sub-lineages of the European variant has been proposed, with specific mutations in the E6 and LCR sequences possibly related to persistent viral infections. The aim of this study was a phylogenetic analysis of HPV16 sequences from endocervical samples from Córdoba, in order to detect the circulating lineages and analyze the presence of mutations that could be correlated with malignant disease. The phylogenetic analysis determined that 86% of the samples belonged to the E variant, 7% to Af-1 and the remaining 7% to Af-2. The most frequent mutation in the LCR sequences was G7521A, in 80% of the analyzed samples; it affects the binding site of a transcription factor that could contribute to carcinogenesis. In the E6 sequences, the most common mutation was T350G (L83V), detected in 67% of the samples and associated with increased risk of persistent infection. The high detection rate of the European lineage correlated with patterns of human migration. This study emphasizes the importance of recognizing circulating lineages, as well as the detection of mutations associated with high-grade neoplastic lesions that could be correlated with the development of carcinogenic lesions.

Common Misconceptions and Future Intention to Smoke among Secondary School Students in Malaysia Caszo, Brinnell;Khair, Muhammad;Mustafa, Mohd Habbib;Zafran, Siti Nor;Syazmin, Nur;Safinaz, Raja Nor Intan;Gnanou, Justin 1159 Background: The prevalence of smoking among secondary school children has remained unchanged over the last 3 decades even though awareness regarding the health effects of smoking is increasing. Common misconceptions about smoking and parental influence could be factors influencing future intentions to smoke among these students. Hence, we looked at common misconceptions as well as student perceptions about their future intention to smoke among Form 4 students in Shah Alam, Malaysia. Materials and Methods: This study was conducted by distribution of a questionnaire developed as part of the Global Youth Tobacco Survey to Form 4 students in 3 schools in Shah Alam. Results: Prevalence of smoking (current smokers) was 7.5%. Almost half of the children came from families where one or both parents smoked, and a third of the parents had not discussed the consequences of smoking with them.
A large number of students were classified as "triers" as they had tried smoking and were unsure of whether they would not be smoking in the future. Contrary to our expectations, students generally felt smoking did make one feel more uncomfortable and helped one to reduce body weight. Most students seemed to be aware of the ill-effects of smoking on health. They felt they had received adequate information from school regarding the effects on smoking on health. Conclusions: Our study showed that even though Form 4 students in Shah Alam were knowledgeable about ill-effects of smoking and were taught so as part of their school curriculum, the prevalence of smoking was still high. Students in the "trier group" represent a potential group of future smokers and strategies targeting tobacco control may be aimed at tackling these vulnerable individuals. Efforts are also needed to help educate secondary school children about common misconceptions and dispel myths associated with cigarette smoking. Malignant Transformation Rate and P53, and P16 Expression in Teratomatous Skin of Ovarian Mature Cystic Teratoma Zhu, Hai-Li;Zou, Zhen-Ning;Lin, Pei-Xin;Li, Wen-Xia;Huang, Ye-En;Shi, Xiao-Xin;Shen, Hong 1165 Objective: To investigate the incidence of malignant transformation and P53 and P16 expression in teratomatous skin of ovarian mature cystic teratoma. Materials and Methods: Data on ovarian teratoma specimens in nearly 10 years were reviewed. P53 and P16 expression were detected by immunohistochemistry in 25 cases of teratomatous skin of ovarian mature cystic teratoma, 20 cases of squamous cell carcinoma and 2 cases of squamous cell carcinoma originated from teratomatous skin. Results: Of 1913 cases of ovarian mature cystic teratoma in nearly 10 years, only two cases of squamous cell carcinoma were found in teratomatous skin, with malignant transformation rate of 0.1045%. P53 expression was detected in 2 cases squamous cell carcinoma originated from teratomatous skin and P16 overexpression in one. There were no expressions of P53 and P16 in 25 cases of teratomatous skin of ovarian mature cystic teratoma. Of 20 cases of squamous cell carcinoma P53 overexpression (positive rate of 55%) was detected in 11 cases, P16 overexpression (positive rate of 35%) in 7 cases. The positive rates of P53 and P16 expression in squamous cell carcinomas were significantly higher than that in the teratomatous skins (p< 0.001, p= 0.002). Conclusions: There was low risk of malignant transformation in teratomatous skin of ovarian mature cystic teratoma which can be explained by lower P53 and P16 expressionin teratomas than that in squamous cell carcinoma. Steroidal Saponins from Paris polyphylla Induce Apoptotic Cell Death and Autophagy in A549 Human Lung Cancer Cells He, Hao;Sun, Yan-Ping;Zheng, Lei;Yue, Zheng-Gang 1169 Background: Paris polyphylla (Chinese name: Chonglou) had been traditionally used for a long time and shown anti-cancer action. Based on the previous study that paris polyphylla steroidal saponins (PPSS) induced cytotoxic effect in human lung cancer A549 cells, this study was designed to further illustrate the mechanisms underlying. Materials and Methods: The mechanisms involved in PPSS-induced A549 cell death were investigated by phase contrast microscopy and fluorescence microscopy, flow cytometry and western blot analysis, respectively. Results: PPSS decreased the proportion of viable A549 cells, and exposure of A549 cells to PPSS led to both apoptosis and autophagy. 
Apoptosis was due to activation of caspase-8 and caspase-3, as well as cleavage of PARP, and autophagy was confirmed by up-regulation of Beclin 1 and the conversion of LC3 I to LC3 II. Conclusions: PPSS was able to induce lung cancer A549 cell apoptosis and autophagy in vitro, the results underlining the possibility that PPSS could be a potential candidate for intervention against lung cancer. Matrix Metalloproteinase-9 -1562T Allele and its Combination with MMP-2 -735 C Allele are Risk Factors for Breast Cancer Rahimi, Zohreh;Yari, Kheirolah;Rahimi, Ziba 1175 Background: Expression of matrix metalloproteinases (MMPs) is up-regulated in human cancers. The aim of the present study was to investigate the role of the MMP-9 C-1562T polymorphism and its interaction with the MMP-2 C-735T polymorphism in susceptibility to breast cancer in a population from Western Iran with a Kurdish ethnic background. Materials and Methods: The study sample of 205 individuals consisted of 101 breast cancer patients and 104 healthy subjects. MMP-9 C-1562T and MMP-2 C-735T variants were identified using the polymerase chain reaction-restriction fragment length polymorphism (PCR-RFLP) method. Results: In 67.4% of the studied patients, breast cancer developed in the third and fourth decades of life. The frequency of the MMP-9 T allele was 17.3% in patients and 10.1% in controls. The presence of the T allele significantly increased the risk of breast cancer, by 1.87-fold [OR=1.87 (95% CI 1.05-3.33, p=0.035)]. The frequency of the MMP-9 CT+TT genotype tended to be higher in those patients with a family history of cancer in first-degree relatives (36.8%) than in those without a family history (28.3%, p=0.37). We observed an interaction between the MMP-9 -1562 T allele and the MMP-2 -735 C allele that significantly increased the risk of breast cancer [OR=1.42 (95% CI 1.02-1.98, p=0.036)]. Conclusions: The present study demonstrated that the MMP-9 C-1562T polymorphism, alone and in combination with the MMP-2 C-735T polymorphism, increased the risk of breast cancer and might be a useful biomarker for identifying women at risk of developing breast cancer. This study also revealed that in most women from Western Iran breast cancer presents in the third and fourth decades of life. BreastLight Apparatus Performance in Detection of Breast Masses Depends on Mass Size Shiryazdi, Seyed Mostafa;Kargar, Saeed;Taheri-Nasaj, Hossein;Neamatzadeh, Hossein 1181 Background: Accurate measurement of breast mass size is fundamental for treatment planning. We evaluated the performance of the BreastLight apparatus in detection of breast masses with this in mind. Materials and Methods: From July 2011 to September 2013, a total of 500 women referred to a mammography unit in Yazd, Iran, for screening were recruited to this study. Performance of BreastLight in detecting breast masses with regard to their size, measured with clinical breast examination (CBE), mammography and sonography, was assessed. Sonographic and mammographic examinations were performed according to breast density in two groups of women, younger (n=105) and older (n=395) than 30 years. Size correlations were performed using Spearman rho analysis. Differences between mass size as assessed with the different methods (mammography, sonography, and clinical examination) and the BreastLight detection were analyzed using the chi-square test for trend.
Results: Performance of the BreastLight in detection of lesions smaller than or equal to 1 cm, as assessed by CBE, mammography and sonography, was 4.4%, 7.7% and 12.5%, and for masses larger than 4 cm was 65%, 100% and 57.1%, respectively. The performance of BreastLight in detection was significantly increased with larger masses (p<0.001). Conclusions: We conclude that clinical measurement of breast cancer size is as accurate as that from mammography or ultrasound. Accuracy can be improved by the use of a simple formula combining both clinical and mammographic measurements. Which One is More Effective, Filgrastim or Lenograstim, During Febrile Neutropenia Attack in Hospitalized Patients with Solid Tumors? Sonmez, Ozlem Uysal;Guclu, Ertugrul;Uyeturk, Ummugul;Esbah, Onur;Turker, Ibrahim;Bal, Oznur;Budakoglu, Burcin;Arslan, Ulku Yalcintas;Karabay, Oguz;Oksuzoglu, Berna 1185 Background: Chemotherapy-induced febrile neutropenia (FN) with solid tumors causes mortality and morbidity at a significant rate. The purpose of this study was to compare the effects of filgrastim and lenograstim started with the first dose of antibiotics in hospitalized patients diagnosed with FN. Materials and Methods: Between February 2009 and May 2012, 151 patients diagnosed with FN were evaluated retrospectively. In those considered appropriate for hospitalization, suitable antibiotic therapy with granulocyte colony stimulating factors was started within the first 30 minutes, after completing the necessary examinations in accordance with FEN guide recommendations. Results: In this study, 175 febrile neutropenia attacks in 151 patients were examined. Seventy-three of the patients were male and 78 were female. The average age was 53.6 and 53.6 years, respectively. The most common solid tumor was breast carcinoma, in 38 (25%). One hundred and five FN patients (58%) had received granulocyte colony stimulating factors as primary prophylaxis. Conclusions: While studies comparing both drugs generally involve treatments started for prophylaxis, this study compared the treatment given during the febrile neutropenia attack. Compared to lenograstim, filgrastim shortens the duration of hospitalization during a febrile neutropenia attack by facilitating faster recovery in patients with solid tumors. Expression of Epidermal Growth Factor-like Domain 7 is Increased by Transcatheter Arterial Embolization of Liver Tumors Li, Zhi;Ni, Cai-Fang;Zhou, Jin;Shen, Xiao-Chun;Yin, Yu;Du, Peng;Yang, Chao 1191 Background: Epidermal growth factor-like domain multiple 7 (EGFL7), recently identified as a secreted protein regulated by oxygen exposure, plays a critical role in promoting metastasis of hepatocellular carcinoma (HCC). Transcatheter arterial embolization (TAE) is widely used for treatment of HCC, resulting in hypoxia in tumors and surrounding liver tissues. Accordingly, we proposed the hypothesis that there could be a relationship between expression of EGFL7 and response to TAE. Materials and Methods: We established a rabbit VX2 liver tumor model using a percutaneous puncture technique guided by computed tomography. TAE and sham embolization were performed and the results were confirmed by MRI 3 weeks after inoculation. We investigated EGFL7 expression in the two groups at 6 hours and 3 days after intervention by means of immunohistochemistry and Western blotting.
Results: Immunohistochemical staining demonstrated that the levels of EGFL7 protein significantly increased in the TAE-treated tumors compared with the control group at 6 hours (P=0.031) and 3 days (P=0.020) after intervention. Meanwhile, the relative EGFL7 protein level detected in the TAE group was also up-regulated compared with the control group at 6 hours (P=0.020) and 3 days (P=0.024) after intervention. Conclusions: This study reveals an increase of EGFL7 expression in rabbit VX2 liver tumors after TAE. The role of EGFL7 in HCC, especially its biological behavior after TAE, needs further investigation. Hypermethylation of TET1 Promoter Is a New Diagnostic Marker for Breast Cancer Metastasis Sang, Yi;Cheng, Chun;Tang, Xiao-Feng;Zhang, Mei-Fang;Lv, Xiao-Bin 1197 Breast cancer metastasis is a major cause of cancer-related death in women. However, markers for diagnosis of breast cancer metastasis are rare. Here, we reported that TET1, a tumor suppressor gene, was downregulated and hypermethylated in highly metastatic breast cancer cell lines. Moreover, silencing of TET1 in breast cancer cells increased the migration and spreading of breast cancer cells. In breast cancer clinical samples, TET1 expression was reduced in LN metastases compared with primary tissues. In addition, the methylation level of the TET1 promoter was increased significantly in LN metastases. Taken together, these findings indicate that promoter hypermethylation may contribute to the downregulation of TET1 and could be used as a promising marker for diagnosis in patients with breast cancer metastasis. Tobacco Chewing and Adult Mortality: a Case-control Analysis of 22,000 Cases and 429,000 Controls, Never Smoking Tobacco and Never Drinking Alcohol, in South India Gajalakshmi, Vendhan;Kanimozhi, Vendhan 1201 Background: Tobacco is consumed in both smoking and smokeless forms in India. About 35-40% of tobacco consumption in India is in the latter. The study objective was to describe the association between chewing tobacco and adult mortality. Materials and Methods: A case-control study was conducted in urban (Chennai city) and rural (Villupuram district) areas in Tamil Nadu state in South India. About 80,000 families (48,000 urban and 32,000 rural) with members who had died during 1995-1998 were interviewed in 1998-2000. These were the cases, and their probable underlying cause of death was arrived at by verbal autopsy. Controls were 600,000 (500,000 urban, 100,000 rural) individuals from a survey conducted during 1998-2001 in the same two study areas from which the cases were drawn. Results: Mortality analyses were restricted to non-smoking non-drinkers aged 35-69. The age-, sex-, education- and study-area-adjusted mortality odds ratio was 30% higher (RR:1.3, 95%CI:1.2-1.4) in ever tobacco chewers compared to never chewers and was significant for deaths from respiratory diseases combined (RR:1.5, 95%CI:1.4-1.7), respiratory tuberculosis (RR:1.7, 95%CI:1.5-1.9), cancers all sites combined (RR:1.5, 95%CI:1.4-1.7) and stroke (RR:1.4, 95%CI:1.2-1.6). Of the cancers, the adjusted mortality odds ratio was significant for upper aero-digestive, stomach and cervical cancers. Chewing tobacco caused 7.1% of deaths from all medical causes. Conclusions: The present study is the first large study in India analysing non-smoking non-drinkers. Statistically significant excess risks were found among ever tobacco chewers for respiratory diseases combined, respiratory tuberculosis, stroke and cancer (all sites combined) compared to never tobacco chewers.
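The adjusted mortality odds ratios above come from case-control comparisons controlled for age, sex, education and study area. As a rough, self-contained illustration of how such an adjusted odds ratio can be obtained, and not the authors' code or data, the sketch below fits a logistic regression to an entirely invented case-control dataset and exponentiates the coefficient of the chewing indicator; every variable name and number here is hypothetical.

# Hedged sketch: adjusted odds ratio for tobacco chewing from a hypothetical
# case-control dataset, adjusting for age, sex and education (all data invented).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "case": rng.integers(0, 2, n),          # 1 = died (case), 0 = control
    "chewer": rng.integers(0, 2, n),        # 1 = ever chewed tobacco
    "age": rng.integers(35, 70, n),
    "sex": rng.choice(["m", "f"], n),
    "education": rng.choice(["none", "primary", "secondary"], n),
})

model = smf.logit("case ~ chewer + age + C(sex) + C(education)", data=df).fit(disp=0)
or_chewer = np.exp(model.params["chewer"])
ci_low, ci_high = np.exp(model.conf_int().loc["chewer"])
print(f"adjusted OR = {or_chewer:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")

With random data the odds ratio lands near 1; the point is only the mechanics of exponentiating a logistic coefficient and its confidence limits to obtain an adjusted OR of the kind reported in the abstract.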
Contralateral Breast Cancer: a Clinico-pathological Study of Second Primaries in Opposite Breasts after Treatment of Breast Malignancy Shankar, Abhishek;Roy, Shubham;Malik, Abhidha;Kamal, Vineet Kumar;Bhandari, Ruchir;Kishor, Kunal;Mahajan, M.K.;Sachdev, Jaineet;Jeyaraj, Pamela;Rath, G.K. 1207 Background: Breast cancer is by far the most frequent cancer of women (23% of all cancers), ranking second overall when both sexes are considered together. Contralateral breast cancer (CBC) is becoming an important public health issue because of the increased incidence of primary breast cancer and improved survival. The present communication concerns a study to evaluate the role of various clinico-pathological factors in the occurrence of contralateral breast cancer. Materials and Methods: A detailed analysis was carried out with respect to age, menopausal status, family history, disease stage, surgery performed, histopathology, hormone receptor status, and use of chemotherapy or hormonal therapy. The diagnosis of CBC was confirmed by histopathology report. Relative risk with 95% CI was calculated for different risk factors for contralateral breast cancer development. Results: CBC was found in 24 (4.5%) out of 532 patients. Mean age at presentation was 43.2 years. A family history of breast cancer was found in 37.5% of the patients. There was a statistically significantly higher rate (83.3%) of CBC in patients in the 20-40 year age group, with RR=11.3 (95% CI: 1.4-89.4, p=0.006) at 20-30 years and RR=10.8 (95% CI: 1.5-79.6, p=0.002) at 30-40 years, as compared to the older age group of 60-70 years. Risk of development was higher in premenopausal women (RR=8.6, 95% CI: 3.5-21.3, p≤0.001). Women with a family history of breast cancer had the highest rate (20.9%) of CBC (RR=5.4, 95% CI: 2.5-11.6, p≤0.001). Use of hormonal therapy in hormone receptor-positive patients was a protective factor against occurrence of CBC but not significantly so (RR=0.7, 95% CI: 0.3-1.5, p=0.333). Conclusions: Younger age, premenopausal status, and presence of family history were found to be significant risk factors for the development of CBC. Use of hormonal therapy in hormone receptor-positive patients might be protective against occurrence of CBC but did not reach significance. Survival Effect of Supportive Care Services for Turkish Patients with Metastatic Gastric Cancer Namal, Esat;Ercetin, Candas;Tokocin, Merve;Akcali, Zafer;Yigitbas, Hakan;Yavuz, Erkan;Celebi, Fatih;Totoz, Tolga;Pamukcu, Ozgul;Saglam, Emel 1213 Background: Gastric cancer is the second most common cause of cancer-related deaths worldwide and ranks 11th or 14th among all causes of death. Patients with advanced disease require supportive care along with the medical and/or surgical treatment. Aim: To assess the need for palliative care for patients with advanced tumours along with standard clinical therapy. Materials and Methods: Eighty-four patients with metastatic (stage 4) gastric cancer, including patients who had and had not received surgical treatment, were followed up in the Bagcilar Training and Research Hospital, Division of Medical Oncology, between 2011 and 2014. They were categorised as supportive care (-) (Group 1, n=37) and (+) groups (Group 2, n=47) and evaluated retrospectively.
Results: Demographic characteristics of the patients were as follows: mean age, Group 1, 65.2 ± 10.5 years, Group 2, 63.7 ± 11.3 years; male/female ratio, Group 1, 21/16, Group 2, 28/19; distribution of Eastern Cooperative Oncology Group (ECOG) performance scores of 0 and 1, Group 1, ECOG 0 (n=9) and 1 (n=14), Group 2, ECOG 0 (n=34) and 1 (n=13) (p<0.0001); patients receiving second-line chemotherapy, Group 1 (n=7) and Group 2 (n=22) (p<0.008), or third-line chemotherapy, Group 2 (n=6) (p<0.02); mortality rates, Group 1 (n=28; 75.6%) and Group 2 (n=30; 63.8%); progression-free survival (PFS), Group 1, 17.4 ± 6 weeks, Group 2, 28.3 ± 16.2 weeks; and statistically significantly different overall survival, Group 1, 20.8 ± 8.2 weeks and Group 2, 28.3 ± 162 weeks (p<0.01). Conclusions: The supportive care team (medical oncologist, general surgeon, internal medicine specialist, algologist, psychiatrist and radiologist) can play a role in the treatment of metastatic gastric tumours, with improvements shown in terms of the performance status of cases, eligibility of patients to remain on chemotherapy programmes for a longer duration and overall survival rates in Turkey. BRCA1 and BRCA2 Common Mutations in Iranian Breast Cancer Patients: a Meta Analysis Forat-Yazdi, Mohammad;Neamatzadeh, Hossein;Sheikhha, Mohammad Hasan;Zare-Shehneh, Masoud;Fattahi, Mortaza 1219 Background: To date several common mutations in BRCA1 and BRCA2 associated with breast cancer have been reported in different populations. However, the common BRCA1 and BRCA2 mutations among breast cancer patients in Iran have not been described in detail. Materials and Methods: To comprehensively assess the frequency and distribution of the most common BRCA1 and BRCA2 mutations in Iranian breast cancer patients, we conducted this meta-analysis on 13 relevant published studies identified in a literature search of PubMed and SID. Results: A total of 11 distinct common BRCA1 and BRCA2 mutations were identified, each reported twice or more in the articles, of which 10 (c.2311T>C, c.3113A>G, c.4308T>C, c.4837A>G, c.2612C>T, c.3119G>A, c.3548A>G, c.5213G>A, c.IVS16-92A/G, and c.IVS16-68A/G) were in BRCA1, and 1 (c.4770A>G) was in BRCA2. The mutations were in exon 11, exon 13, intron 16, and exon 20 of BRCA1 and exon 11 of BRCA2. All have been previously reported in different populations. Conclusions: These meta-analysis results should be helpful in understanding the possibility of any first true founder mutation of BRCA1/BRCA2 in the Iranian population. In addition, they will be of significance for diagnostic testing, genetic counseling and epidemiological studies. Psychosocial Reaction Patterns to Alopecia in Female Patients with Gynecological Cancer undergoing Chemotherapy Ishida, Kazuko;Ishida, Junko;Kiyoko, Kanda 1225 This study aims to clarify the psychosocial reactions of female patients with gynecological cancer undergoing chemotherapy and in the process of suffering from alopecia and to examine their nursing support. The target group comprised female patients who had received two or more cycles of chemotherapy, were suffering from alopecia, and were aged 30-65. Data were collected from semi-structured interviews, conducted from the time the patients were informed by their doctors that they might experience alopecia due to chemotherapy to the time they actually experienced alopecia and until they were able to accept the change.
Inductive qualitative analysis was employed to examine the subjective experiences of the cancer patients. The results showed the existence of six phases in the psychosocial reactions in the process of alopecia: phase one was the reaction after the doctor's explanation; phase two was the reaction when the hair starts to fall out; phase three was the reaction when the hair starts to intensely fall out; phase four was the reaction when the hair has completely fallen out; phase five was the reaction to behavior for coping with alopecia; and phase six was the reaction to change in interpersonal human relationships. The results also made it clear that there are five types of reaction patterns as follows: 1) treatment priority interpersonal relationship maintenance type; 2) alopecia agitated interpersonal relationship maintenance type; 3) alopecia agitated interpersonal relationship reduction type; 4) alopecia denial interpersonal relationship reduction type; and 5) alopecia denial treatment interruption type. It is important to find out which of the five types the patients belong to early during treatment and provide support so that nursing intervention that suits each individual can be practiced. The purpose of this study is to make clear the process in which patients receiving chemotherapy come to accept alopecia and to examine evidence-based nursing care for patients with strong mental distress from alopecia. Analysis of FHIT Gene Methylation in Egyptian Breast Cancer Women: Association with Clinicopathological Features Zaki, Seham Mahrous;Abdel-Azeez, Hala A.;El Nagar, Mona Roshdy;Metwally, Khaled Abdel-Aziz;Ahmed, Marwa M. Samir S. 1235 Background: The fragile histidine triad (FHIT) gene is a tumor suppressor gene that is involved in breast cancer pathogenesis. Epigenetic alterations in FHIT contribute to tumorigenesis of breast cancer. Objective: Our objective was to study FHIT promoter region hypermethylation in Egyptian breast cancer patients and its association with clinicopathological features. Materials and Methods: Methylation-specific polymerase chain reaction was performed to study the hypermethylation of the FHIT promoter region in 20 benign breast tissues and 30 breast cancer tissues. Results: The frequency of hypermethylation of the FHIT promoter region was significantly increased in breast cancer patients compared to benign breast disease patients. The odds ratio (95% CI) for development of breast cancer in individuals with FHIT promoter hypermethylation (MM) was 11.0 (1.22-250.8). There were also significant associations between FHIT promoter hypermethylation and estrogen and progesterone receptor negativity, tumor stage and nodal involvement in breast cancer patients. Conclusions: Our results support an association between FHIT promoter hypermethylation and development of breast cancer in Egyptian breast cancer patients. FHIT promoter hypermethylation is associated with some poor prognostic features of breast cancer. Treatment of Human Thyroid Carcinoma Cells with the G47delta Oncolytic Herpes Simplex Virus Wang, Jia-Ni;Xu, Li-Hua;Zeng, Wei-Gen;Hu, Pan;Rabkin, Samuel D.;Liu, Ren-Rin 1241 Background: Thyroid carcinoma is the most common malignancy of the endocrine organs. Although the majority of thyroid cancer patients experience positive outcomes, anaplastic thyroid carcinoma is considered one of the most aggressive malignancies. Current therapeutic regimens do not confer a significant survival benefit, and new therapies are urgently needed.
Oncolytic herpes simplex virus (oHSV) may represent a promising therapy for cancer. In the present study, we investigated the therapeutic effects of a third-generation HSV vector, G47Δ, on various human thyroid carcinoma cell lines in vitro. Two subcutaneous (s.c.) models of anaplastic thyroid carcinoma were also established to evaluate the in vivo anti-tumor efficacy of G47Δ. Materials and Methods: The human thyroid carcinoma cell lines ARO, FRO, WRO, and KAT-5 were infected with G47Δ at different multiplicities of infection (MOIs) in vitro. The survival rates of infected cells were calculated each day. Two s.c. tumor models were established using ARO and FRO cells in Balb/c nude mice, which were treated intratumorally (i.t.) with either G47Δ or mock. Tumor volumes and mouse survival times were documented. Results: G47Δ was highly cytotoxic to different types of thyroid carcinomas. For ARO, FRO, and KAT-5, greater than 30% and 80% of cells were killed at MOI=0.01 and MOI=0.1, respectively, on day 5. WRO cells displayed modest sensitivity to G47Δ, with only 21% and 38% of cells killed. In the s.c. tumor model, both of the anaplastic thyroid carcinoma cell lines (ARO and FRO) were highly sensitive to G47Δ; G47Δ significantly inhibited tumor growth and prolonged the survival of mice bearing s.c. ARO and FRO tumors. Conclusions: The oHSV G47Δ can effectively kill different types of human thyroid carcinomas in vitro. G47Δ significantly inhibited growth of anaplastic thyroid carcinoma in vivo and prolonged animal survival. Therefore, G47Δ may hold great promise for thyroid cancer patients. Novel Mutations in IL-10 Promoter Region -377 (C>T), -150 (C>A) and their Association with Psoriasis in the Saudi Population Al-Balbeesi, Amal O.;Halwani, Mona;Alanazi, Mohammad;Elrobh, Mohammad;Shaik, Jilani P.;Khan, Akbar Ali;Parine, Narasimha Reddy 1247 Background: Psoriasis, a common cutaneous disorder characterized by inflammation and abnormal epidermal proliferation with a prevalence of 2-3% in the general population, may be linked to certain types of cancer. Several studies have reported an association between interleukin 10 (IL-10) variant polymorphisms and inflammatory diseases such as psoriasis vulgaris, although the results vary according to the population studied. No studies have been performed in the Saudi population. The present study concerned novel variants and other genetic polymorphisms of the promoter and exonic regions of the IL-10 gene in patients with moderate to severe psoriasis and potential differences in genotype compared to a group of healthy volunteers. Materials and Methods: Patients with moderate to severe psoriasis and healthy controls with no personal or family history of psoriasis were selected from the central region of Saudi Arabia. Polymorphisms of the IL-10 gene in both groups were genotyped. Results: We observed two novel variants in the 5'UTR region of the promoter precursor, with a higher prevalence of the genotype with both wild-type alleles in patients compared to the healthy control group. The differences at positions -377 and -150 were significantly associated with disease, and both variants conferred strong protection against psoriasis in Saudi patients. Conclusions: This observation provides further support for the importance of the part that IL-10 plays in the pathophysiology of this disease.
Confirmation of our findings in larger populations of different ethnicities would provide evidence for the role of IL-10 in psoriasis. Prevalence of Anxiety May Not be Elevated in Thai Ovarian Cancer Patients Following Treatment Chittrakul, Saranya;Charoenkwan, Kittipat;Wongpakaran, Nahathai 1251 Background: To compare the prevalence of anxiety in ovarian cancer patients following primary treatment with that of normal women and to examine predictive factors. Materials and Methods: In this cross-sectional study, 56 ovarian cancer patients who had primary surgical treatment within the past five years (cancer group) and 56 age-matched women who attended an outpatient clinic for check-ups (non-cancer group) were recruited from June 2013 to January 2014. The hospital anxiety and depression scale (HADS) was used to determine the anxiety level of the participants, with a score of ≥ 11 suggestive of anxiety. The prevalence of anxiety symptoms and mean HADS scores for anxiety were compared between the study groups. For those with ovarian cancer, associations of demographic and clinical factors with anxiety were examined. A p-value of <0.05 was considered significant. Results: Participants in the non-cancer group had a higher rate of medical comorbidity, higher salary, and more frequent university education. The prevalence of anxiety was not different between the groups, at 7.1% each. The mean HADS scores for the anxiety subscale were not significantly different between the groups: 5.0 in the cancer group vs 6.1 in the non-cancer group (p=0.09). On multivariable analysis, no demographic or clinical factors significantly associated with anxiety were identified. For the cancer group, no association between any particular factors and anxiety was demonstrated. Conclusions: The prevalence of anxiety in women with ovarian cancer following primary treatment was comparable to that of normal women seeking routine check-ups. End Stage Palliative Care of Head and Neck Cancer: a Case Study Shishodia, Nitin Pratap;Divakar, Darshan Devang;Al Kheraif, Abdulaziz Abdullah;Ramakrishnaiah, Ravikumar;Pathan, Akbar Ali Khan;Parine, Narasimha Reddy;Chandroth, Santhosh Vediyera;Purushothaman, Binu 1255 Background: Locally advanced head and neck cancer is generally incurable and is associated with short survival. This study aimed to evaluate symptom relief, disease response, and acute toxicity after palliative hypo-fractionated radiotherapy and long-term survival in affected patients. Materials and Methods: Between January 2011 and December 2011, 80 patients who were histopathologically diagnosed as having stage III or stage IV head and neck squamous cell carcinoma, with Eastern Cooperative Oncology Group (ECOG) performance status 1-3, were offered palliative radiotherapy (20 Gy/5 Fr/5 days). These patients were later evaluated on the 30th day after completion of treatment for disease response based on World Health Organisation (WHO) criteria, palliation of symptoms using symptomatic response grading, and acute toxicities by Radiation Therapy Oncology Group (RTOG) criteria. Many patients were given post-radiation therapy (RT) palliative chemotherapy for appropriate palliative care, and a few patients were selected for further curative RT. Overall survival was also evaluated in this group of patients, with a last follow-up date of 1st May, 2014. Results: The most common presenting complaint was pain, followed by dysphagia. Most patients (60-70%) had appreciable relief in their presenting symptoms.
A good response was observed in the majority following palliative RT; a few patients had progressive disease and some had stable or regressed disease. None of the patients experienced radiation toxicity that required hospital admission. Almost all showed grade one or two acute skin and mucosal toxicity one month after completion of treatment. Mean survival for patients given only hypofractionated palliative RT was 307 days, for those given palliative chemotherapy after palliative RT it was 390 days, and patients who went on to receive further RT to a curative dose had a significantly longer overall survival of 582 days. Conclusions: Advanced head and neck cancer should be identified for suitable palliative hypofractionated radiotherapy to achieve acceptable symptom relief in a large proportion of patients, and this should be followed by palliative chemotherapy or curative RT in suitable cases for long-term symptom-free survival. Geographic Disparities in Prostate Cancer Outcomes - Review of International Patterns Baade, Peter D.;Yu, Xue Qin;Smith, David P.;Dunn, Jeff;Chambers, Suzanne K. 1259 Background: This study reviewed the published evidence as to how prostate cancer outcomes vary across geographical remoteness and area-level disadvantage. Materials and Methods: A review of the literature published from January 1998 to January 2014 was undertaken: Medline and CINAHL databases were searched in February to May 2014. The search terms included 'prostate cancer' and 'prostatic neoplasms' coupled with 'rural health', 'urban health', 'geographic inequalities', 'spatial', 'socioeconomic', 'disadvantage', 'health literacy' or 'health service accessibility'. Outcome-specific terms were 'incidence', 'mortality', 'prevalence', 'survival', 'disease progression', 'PSA testing' or 'PSA screening', 'treatment', 'treatment complications' and 'recurrence'. A further search through internet search engines was conducted to identify any additional relevant published reports. Results: 91 papers were included in the review. While patterns were sometimes contrasting, the predominant patterns were for PSA testing to be more common in urban (5 studies out of 6) and affluent areas (2 of 2), higher prostate cancer incidence in urban (12 of 22) and affluent (18 of 20), greater risk of advanced-stage prostate cancer in rural (7 of 11) and disadvantaged (8 of 9), higher survival in urban (8 of 13) and affluent (16 of 18), greater access or use of definitive treatment services in urban (6 of 9) and affluent (7 of 7), and higher prostate mortality in rural (10 of 20) and disadvantaged (8 of 16) areas. Conclusions: Future studies may need to utilise a mixed methods approach, in which the quantifiable attributes of the individuals living within areas are measured along with the characteristics of the areas themselves, but importantly include a qualitative examination of the lived experience of people within those areas. These studies should be conducted across a range of international countries using consistent measures and incorporate dialogue between clinicians, epidemiologists, policy advocates and disease control specialists. Inflammatory Breast Cancer in Tunisia from 2005 to 2010: Epidemiologic and Anatomoclinical Transitions from Published Data Mejri, N.;Boussen, H.;Labidi, S.;Bouzaiene, H.;Afrit, M.;Benna, F.;Rahal, K. 1277 Aim: To report epidemiologic and anatomoclinical transitions of inflammatory breast cancer (IBC) in Tunisia.
Materials and Methods: Clinico-pathological data for 208 cases of T4d or PEV 3 non-metastatic breast cancer diagnosed between 2005 and 2010 were collected from patient records. Chi-square and Z tests were used to compare variables with two Tunisian historical series and a series of Arab-American patients. Results: Thirty-three percent of our patients had their first child before 23 years of age and 56% had their menarche before 12 years, with 75% never receiving oral contraception. Obesity was observed in 42% of women and IBC occurred during pregnancy in 13% of cases. Tumor grade was II-III in 90% of cases, HR was negative in 52%, HER2 was overexpressed in 31% and invasion of more than 3 axillary nodes occurred in 18% of patients. We observed a pCR rate of 19% after neoadjuvant treatment (anthracycline-taxane used in 79%, trastuzumab in 27%). Compared to historical Tunisian series (since 1996), IBC epidemiology remained stable in terms of median age, menopausal status and obesity. However, we observed a significant decrease in median clinical tumor size and number of positive axillary lymph nodes. Comparison to IBC in Arab-Americans showed a significant difference in terms of median age, menopausal status, positivity of hormonal receptors and educational level. Conclusions: Our assessment of epidemiologic transition showed a reduction in clinico-pathological stage of IBC, with the same characteristics retained as compared to Tunisian historical series over a period of 14 years. Features seem to be different in Arab-American patients, probably related to migration, "occidentalization" of lifestyle and improvement in socio-economic level. Serum Biomarkers for Early Detection of Hepatocellular Carcinoma Associated with HCV Infection in Egyptian Patients Zekri, Abdel-Rahman;Youssef, Amira Salah El-Din;Bakr, Yasser Mabrouk;Gabr, Reham Mohamed;El-Rouby, Mahmoud Nour El-Din;Hammad, Ibtisam;Ahmed, Entsar Abd El-Monaem;Marzouk, Hanan Abd El-Haleem;Nabil, Mohammed Mahmoud;Hamed, Hanan Abd El-Hafez;Aly, Yasser Hamada Ahmed;Zachariah, Khaled S.;Esmat, Gamal 1281 Background: Early detection of hepatocellular carcinoma using serological markers with better sensitivity and specificity than alpha fetoprotein (AFP) is needed. Aims: The aim of this study was to evaluate the diagnostic value of serum sICAM-1, β-catenin, IL-8, proteasome and sTNFR-II in the early detection of HCC. Materials and Methods: Serum levels of IL-8, sICAM-1, sTNFR-II, proteasome and β-catenin were measured by ELISA in 479 serum samples from 192 patients with HCC, 96 patients with liver cirrhosis (LC), 96 patients with chronic hepatitis C (CHC) and 95 healthy controls. Results: Serum levels of proteasome, sICAM-1, β-catenin and AFP were significantly elevated in the HCC group compared to the other groups (P<0.001), whereas the serum level of IL-8 was significantly elevated in the LC and HCC groups compared to the CHC and control groups (P<0.001), while no significant difference was noticed between patients with HCC and LC (P=0.09). The serum level of sTNFR-II was significantly elevated in patients with LC compared to the HCC, CHC and control groups (P<0.001); it was also significantly higher in HCC compared to the CHC and control groups (P<0.001).
ROC curve analysis of the studied markers between HCC and the other groups revealed that the serum level of proteasome had a sensitivity of 75.9% and a specificity of 73.4% at a cut-off value of 0.32 μg/ml, with an AUC of 0.803. For sICAM-1, at a cut-off value of 778 ng/ml, the sensitivity was 75.8% and the specificity was 71.8%, with an AUC of 0.776. β-catenin had a sensitivity and specificity of 70% and 68.6%, respectively, at a cut-off value of 8.75 ng/ml, with an AUC of 0.729. sTNFR-II showed a sensitivity of 86.3% and a specificity of 51.8% at a cut-off value of 6239.5 pg/ml, with an AUC of 0.722. IL-8 had a sensitivity of 70.4% and a specificity of 52.3% at a cut-off value of 51.5 pg/ml, with an AUC of 0.631. Conclusions: Our data supported the role of proteasome, sICAM-1, sTNFR-II and β-catenin in the early detection of HCC. Also, using this panel of serological markers in combination with AFP may offer improved diagnostic performance over AFP alone in the early detection of HCC. Anal Papanicolaou Smear in Women with Abnormal Cytology: a Thai Hospital Experience Sananpanichkul, Panya;Pittyanont, Sirida;Yuthavisuthi, Prapap;Thawonwong, Nutchanok;Techapornroong, Malee;Bhamarapravatana, Kornkarn;Suwannarurk, Komsun 1289 Background: Anal intraepithelial lesions (AIL) are likely to represent a precursor for anal cancer. Women infected with human immunodeficiency virus (HIV) may be at higher risk of anal cancer, but a screening program for AIL is still not routinely recommended. Here we studied the relationship between dysplastic cells in cervical and anal cytology in HIV-infected women. Materials and Methods: This prospective study was conducted in Prapokklao Hospital, Thailand during 2013-2014. Five hundred and ninety-nine HIV-infected women were recruited. Participants who had cytological reports equal to or worse than "abnormal squamous/glandular cells of undetermined significance" (ASC-US) were classified as having abnormal cervical or anal cytology. Descriptive statistics and logistic regression analysis were used to evaluate correlations between groups. Results: HIV-infected women with abnormal cervical cytology had a 3.8 times higher risk (adjusted odds ratio 3.846, 95% confidence interval 1.247-11.862, p-value 0.019) of abnormal anal cytology. The major problem of the anal Pap test in this study was the inadequacy of the collected specimens for evaluation (34.4%, 206/599). Sensitivity, specificity, positive predictive value, negative predictive value and accuracy of cervical and anal Pap tests were 93.9/12.0, 87.3/96.9, 39.7/21.4, 99.4/94.1 and 88.1/91.4 percent, respectively. Conclusions: Abnormal cervical cytology in HIV-infected women indicates elevated risk for abnormal anal cytology. The sensitivity of the anal Pap test for detection of AIL 2/3 in HIV-infected women was quite low while specificity was excellent. Inadequacy of specimen collection for evaluation was a major limitation. Improvement of sample collection is recommended for future investigations. The Economic Burden of Cancer in Korea in 2009 Kim, So Young;Park, Jong-Hyock;Kang, Kyoung Hee;Hwang, Inuk;Yang, Hyung Kook;Won, Young-Joo;Seo, Hong-Gwan;Lee, Dukhyoung;Yoon, Seok-Jun 1295 Background: Cancer imposes a significant economic burden on individuals, families and society. The purpose of this study was to estimate the economic burden of cancer using the healthcare claims and cancer registry data in Korea in 2009.
Materials and Methods: The economic burden of cancer was estimated using prevalence data, with patients identified in the Korean Central Cancer Registry. We estimated medical and non-medical costs, as well as morbidity and mortality costs due to lost productivity. Medical costs were calculated using the healthcare claims data obtained from the Korean National Health Insurance (KNHI) Corporation. Non-medical costs included the cost of transportation to visit health providers, costs associated with caregiving for cancer patients, and costs for complementary and alternative medicine (CAM). Data acquired from the Korean National Statistics Office and Ministry of Labor were used to calculate life expectancy at the time of death and average age- and gender-specific wages, adjusted for unemployment and the labor force participation rate. Sensitivity analysis was performed to derive the present value of foregone future earnings due to premature death, discounted at 3% and 5%. Results: In 2009, the estimated total economic cost of cancer amounted to $17.3 billion at a 3% discount rate. Medical care accounted for 28.3% of total costs, followed by non-medical (17.2%), morbidity (24.2%) and mortality (30.3%) costs. Conclusions: Given that direct medical costs increased sharply over the last decade, we must strive to construct a sustainable health care system that provides better care while lowering costs. In addition, a comprehensive cancer survivorship policy aimed at lowering caregiving costs and raising the rate of return to work has become more important than previously considered. Importance of Meta-Analysis and Practical Obstacles in Oncological and Epidemiological Studies: Statistics Very Close but Also Far! Tanriverdi, Ozgur;Yeniceri, Nese 1303 Studies of epidemiological and prognostic factors are very important for oncology practice. There is a rapidly increasing amount of research and resultant knowledge in the scientific literature. This means that health professionals have major challenges in accessing relevant information and they increasingly require the best available evidence to make their clinical decisions. Meta-analyses of prognostic and other epidemiological factors are very practical statistical approaches to define clinically important parameters. However, they also feature many obstacles in terms of data collection, standardization of results from multiple centers, bias, and commentary for interpretation. In this paper, the obstacles to meta-analysis are briefly reviewed, and potential problems with this statistical method are discussed.
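The mortality-cost estimate in the economic-burden abstract above rests on discounting foregone future earnings back to the present at 3% or 5%. As a rough, self-contained illustration of that calculation, and not the authors' code, the sketch below computes the present value of lost earnings for a single premature death; all wages, working years and rates are invented.

# Hedged sketch: present value of foregone future earnings for one premature death,
# discounted at 3% and 5% (human-capital style calculation; all numbers hypothetical).
def present_value_lost_earnings(annual_wages, discount_rate):
    """annual_wages[t] is the expected wage t years after death (already adjusted
    for employment and labour-force participation); returns the discounted sum."""
    return sum(w / (1 + discount_rate) ** t for t, w in enumerate(annual_wages, start=1))

# e.g. 20 remaining working years at a constant adjusted wage of 25,000 currency units
wages = [25_000] * 20
for rate in (0.03, 0.05):
    print(f"discount {rate:.0%}: {present_value_lost_earnings(wages, rate):,.0f}")

The higher discount rate shrinks the present value, which is why the study reports its totals at a stated discount rate and runs the 3% and 5% cases as a sensitivity analysis.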
CommonCrawl
Biomass productivity of selected poplar (Populus spp.) cultivars in short rotations in northern Poland Marzena Niemczyk (ORCID: orcid.org/0000-0002-1508-2497), Tomasz Wojda & Adam Kaliszewski New Zealand Journal of Forestry Science volume 46, Article number: 22 (2016) Renewable energy sources such as biomass are an important aspect of the energy policy of the European Union. As the use of 'full-value wood' for energy purposes has been restricted, short-rotation forestry may be an alternative source of woody biomass. In Poland, the most promising genus is poplar (Populus spp.). Ten poplar cultivars from the Aigeiros or Tacamahaca sections of the genus Populus were compared in 5- and 6-year rotations for biomass components and yields. Additional aims were to preliminarily (a) identify a suitable rotation length and (b) evaluate the sprouting capacity of various cultivars in the climate of northern Poland. The following variables were measured: diameter at breast height (DBH), height, survival rate, single-tree dry mass, crop biomass production, and sprouting ability. The cultivars 'NE-42' and 'Fritzi Pauley' showed the best growth characteristics (DBH and height) and highest biomass production (7.6 and 7.7 t ha−1 year−1 for 'NE-42', and 5.2 and 6.9 t ha−1 year−1 for 'Fritzi Pauley', in the 5- and 6-year cycles, respectively). These cultivars were also distinguished by a large number of coppice shoots and a high shoot length. Eight cultivars did well enough to produce worthwhile data, and five of these gave higher biomass production (t DM ha−1 year−1) during the 6-year, as opposed to the 5-year cycle. Of the eight cultivars analysed, 'AF-8' had the poorest growth parameters and produced two thirds less dry biomass than either the 'NE-42' or 'Fritzi Pauley' cultivars. Data for two Italian cultivars ('AF-6' and 'MON') were not analysed because of their cold tenderness and their high mortality. Rotation length is important for biomass production in energy plantations. Most of the tested poplar cultivars gave higher biomass productivity over an initial 6-year cycle than over a 5-year one. Our preliminary results suggest that the 'NE-42' and 'Fritzi Pauley' cultivars performed best among those tested. Both of these have been tested previously in Poland in medium and long rotations. The data indicate the importance of testing cultivars under local climatic conditions before planting on a commercial scale. Expanding the use of renewable energy sources, such as woody biomass, is important in the European Union (EU). Such energy resources can, among other benefits, act as a substitute for fossil fuels, thereby reducing carbon dioxide emissions to the atmosphere, and also decrease EU dependence on foreign energy imports (Berndes and Hansson 2007). The development of renewable energy sources has been substantially supported at the EU level by Directive 2009/28/EC of the European Parliament and the Council of 23 April 2009. According to this Directive, Poland is obliged to obtain at least 15% of its energy from renewable sources in its gross final consumption of energy by 2020. In 2013, the share of renewable energy sources in the overall primary energy production in Poland amounted to 11.9% and was generated mainly from solid biofuels (76.6%), including wood, waste wood, and wood residues (GUS 2015). Directive 2009/28/EC has been incorporated into Polish law by the Act on Renewable Energy Sources, which was adopted by the Parliament on 20 February 2015 (Ustawa 2015).
The Act imposes considerable restrictions on the use of the so-called full-value wood, which is more useful for purposes other than energy production. As a result, a special support system has been set up that is dedicated to renewable energy producers. The latter may be granted tradable 'green certificates' if the generated energy entirely or partially comes from renewable energy sources. The green certificates may be traded with other energy producers who cannot fulfil their obligation to generate energy in a sustainable way. The scheme encompasses a range of renewable fuels including wood that is not classified as 'full-value wood' (which is strictly defined in the Act); suitable wood for use as a renewable energy source includes that derived from trees grown in short- and medium-rotation plantations on agricultural land. Under Polish conditions, such plantations are managed on short cycles, usually 1–10 years (the time interval between felling), or medium rotations (the interval between planting and replanting) of 15–25 years (Zajączkowski 2013), using genera such as willow (Salix L.) and poplar (Populus L.) (Herve and Ceulemans 1996; Verwijst 2001; Stolarski 2009; Benetka et al. 2014) and, recently, black locust (Robinia pseudoacacia L.) (Lambert et al. 2010; Wojda et al. 2015). Many researchers also recommend growing such species (especially poplars) on cycles no shorter than 5–6 years (Fang et al. 1999; Alig et al. 2000; Boelcke and Kahle 2000). According to Zajączkowski (unpublished data), poplars produce the highest yield among other fast-growing species in Polish conditions under a short-rotation coppice (SRC) regime. One of the basic conditions for such plantations to be profitable is to use cultivars with known productivity. In Europe, including Poland, several poplar species and their hybrids are used to grow energy wood. They generally belong to one of two sections of the Populus genus, either the Aigeiros section (black poplars) or Tacamahaca section (balsam poplars) (Zajączkowski and Wojda 2012). S. Aigeiros contains the species P. deltoides Bart. Ex Marsh, P. fremontii S.Watson, and P. nigra L. while s. Tacamahaca includes P. angustifolia E.James, P. balsamifera L., P. maximowiczii Henry, and P. trichocarpa Torr. & A.Gray ex Hook. Considerable breeding efforts have produced a number of fast-growing poplar cultivars (Stanton et al. 2010; Karp et al. 2011); these are derived almost entirely from inter-specific crosses (Benetka et al. 2014). In Europe, new poplar hybrids are mainly derived from crosses between the s. Aigeiros species P. deltoides and P. nigra, and called P. × canadensis Moench (Bisoffi and Gullberg 1996), or between the s. Aigeiros species P. nigra with the s. Tacamahaca species P. maximowiczii (Stanton et al. 2010). In Poland, there is a long tradition of growing hybrid NE-42 (known in Poland as 'Hybrida 275') that was produced from the s. Tacamahaca species cross P. maximowiczii × P. trichocarpa. This inter-species hybrid has been tested on many occasions in long- and medium-rotation forestry plantations (Zajączkowski and Wojda 2012), as has an intra-specific hybrid of P. trichocarpa called 'Fritzi Pauley'. In SRC systems, resprouting capacity is critical for biomass production in consecutive rotations until the end of productive life (rotation age) of a given plantation; productive abilities of the cultivars are also important, as is the length of the productive cycle (time from planting to first felling or between felling events). 
The aim of this study was to evaluate the use of 10 selected poplar cultivars from the Aigeiros and Tacamahaca sections in short felling cycles for energy purposes and to compare the biomass yields of these cultivars. Additional aims were to preliminarily identify a suitable cycle length and preliminarily evaluate the sprouting capacity of various cultivars under the climatic conditions of northern Poland. Location and climatic conditions The experiment was carried out in northern Poland (N 54° 4′ 26″, E 20° 30′ 4″). According to the physical and geographic divisions of Poland (Kondracki 2011), the experimental area was located within Eastern Europe, in a subarea of the East European Lowland, within the East Baltic-Belarusian Province. The mean annual temperature of the study area was 7.6 °C. Annual rainfall was approximately 626 mm, and the growing season was approximately 200 days. According to the IUSS Working Group WRB (2015), the main type of soil on the research area is Cambisol. The study location was selected because it experiences the most severe climatic conditions for forest and agricultural production in Poland. By testing the productivity of the analysed cultivars in unfavourable conditions, a benchmark for the potentially lowest plantation yields was obtained. Planting material was produced from woody cuttings of 10 cultivars as shown in Table 1. Table 1 Source and parentage of the poplar cultivars used Layout, establishment, and tending The experimental area for the study was established in April 2010 on post-agricultural land covering 5.91 ha. Poplar saplings were planted in holes, created by an earth auger powered by a tractor, with spacing of 2.5 × 3 m (1333 plants ha−1). The experimental area was fenced to prevent browsing by wild animals. The study layout was a randomised complete block design with three block replicates. Cultivars were randomly assigned to plots within each block. One hundred saplings (10 × 10) of a given poplar cultivar were planted within each plot. Bordering rows were planted around the experimental area. During the first 2 years, the plantation was weeded once per year. After 5 years of growth, 20 trees of each block of 100 were harvested (in early spring 2015). Two rows of trees were cut in each plot from the north side, while retaining the bordering rows. One year later, resprouting capacity of the stumps was assessed. At the same time (early spring 2016), 20 more trees per plot were cut. Analysis of growth parameters of individual trees at 5 and 6 years old was performed for the same 80 trees per plot. We cut 20 trees from each plot (20% of all trees in the plantation) after 5 years of growth to determine the sprouting capacity and length of shoots from the stump at 1 year after felling (to evaluate sprouting capacity in the second 5-year cycle). Measurements and determinations were performed on 5-year-old trees (5-year cycle) and repeated on the same trees 1 year later (6-year cycle), in early spring in the years 2015 and 2016, respectively. The survival rate was based on the number of living trees. Diameter at breast height (DBH; measured at a height of 1.3 m) of all trees was measured in millimetres. Heights (cm) were recorded for 20 trees in each plot. 
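Returning to the plantation layout described above, the stated stand density follows directly from the spacing, and a quick check (a sketch derived from the figures in the text, not from the paper itself) confirms the numbers:

# Quick check of the stand density implied by the 2.5 m x 3 m spacing given in the Methods.
spacing_area_m2 = 2.5 * 3.0            # nominal ground area per tree
trees_per_ha = 10_000 / spacing_area_m2
plot_area_m2 = 100 * spacing_area_m2   # 100 saplings (10 x 10) per plot
print(trees_per_ha, plot_area_m2)      # ~1333 trees per ha, ~750 m2 nominal plot area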
The height curve was constructed separately for each cultivar in a given block, according to the following function (Näslund 1936): $$ H=\left(\frac{\mathrm{DBH}}{\alpha +\beta \,\mathrm{DBH}}\right)^{2}+1.3 $$ where H: tree height (m), DBH: diameter at breast height (cm), and α, β: fitted coefficients. The estimated coefficients (α, β) of the regression function for each cultivar in each block were used to estimate the height of trees from the entire range of DBH and, as a result, to define an average height for cultivars. Twenty trees were cut in early spring 2015, and their length and thickness were measured in order to evaluate tree weight for the 5-year cycle. Fresh-weight biomass of these trees was recorded (to the nearest 1 g). In 2016, after measurement of growth parameters (see above), 20 more trees per plot were cut in order to evaluate 6-year-old trees. The basal area of all trees was used to identify an 'average tree' for each cultivar in a given block (replicate). The 'average tree' samples were used to evaluate the per cent share of above-ground dry matter (DM) for each cultivar. The cross-sectional areas of the average trees were measured at the middle of every 2-m section and their fresh biomass determined by weighing 20-cm-long bolts from the middle of every 2-m section. Trunk and branch samples were taken from each replicate and weighed separately. The size of these samples ranged from 6284 to 7004 g, depending on the thickness of the trees. The 'average tree' samples were dried at 105 °C until their weights stabilised. The per cent share of DM was estimated as the ratio of the dry mass of samples to their fresh biomass (for each tree separately). Total DM yields per unit area were determined from the weight of harvested fresh biomass obtained from a given replicate multiplied by the appropriate value for the per cent DM and calculated for a given unit area (ha) per year, taking plant survival rate into account. During the second 5-year cycle, heights of 1-year-old shoots regrowing from the 20 stumps in each plot were measured, using the tallest shoot of each stump. The number of shoots more than 50 cm in length was counted for each stump. Statistical analyses were performed to test the significance of differences between averages of the following dependent variables: DBH, height, and estimated dry mass of trees grown on 5-year and 6-year cycles, and shoot height and number of shoots per stump after 1 year of growth during the second 5-year cycle. Analysis of variance (ANOVA) was performed using the following model: $$ y_{ijk}=\mu +\alpha_i+\beta_j+e_{ijk} $$ where y_ijk: dependent variable, μ: overall mean, α_i: clone (cultivar) fixed effect, β_j: block fixed effect, and e_ijk: random test error. The basic assumptions within the model were tested before carrying out the ANOVA. The tested assumptions were the normal distribution of the variables (using the Kolmogorov–Smirnov test [α = 0.05]) and the homogeneity of variances (using Levene's test [α = 0.05]). When the ANOVA indicated significant inter-cultivar differences, Tukey's HSD test was used (α = 0.05). Biomass production of cultivars growing on 5-year and 6-year cycles (t ha−1 year−1) was compared using a contrast analysis (α = 0.05). The statistical analyses were performed using the statistical package Statistica 10.0 (StatSoft 2011). Results Survival rate Six of the 10 cultivars tested had high survival rates (>90%) at both ages. Two cultivars ('AF-2', 'AF-8') had a survival rate of just under 90% (Table 2).
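The height-diameter model and the yield calculation described in the Methods above lend themselves to a short numerical illustration. The authors worked in Statistica 10.0; the sketch below is not their code and uses invented sample data. It simply fits the Näslund curve for one plot with scipy and then applies a simplified version of the per-hectare dry-matter logic with hypothetical stand values.

# Minimal sketch (not the authors' workflow): fit the Naslund (1936) height curve
# H = (DBH / (alpha + beta*DBH))^2 + 1.3 for one cultivar in one block, then
# approximate the per-hectare dry-matter yield. All sample numbers are invented.
import numpy as np
from scipy.optimize import curve_fit

def naslund_height(dbh_cm, alpha, beta):
    """Predicted tree height (m) from DBH (cm)."""
    return (dbh_cm / (alpha + beta * dbh_cm)) ** 2 + 1.3

# Hypothetical height-sample trees of one plot (DBH in cm, height in m)
dbh_sample = np.array([6.5, 7.2, 8.1, 9.0, 9.6, 10.3, 11.1, 12.0])
h_sample = np.array([7.9, 8.4, 8.9, 9.5, 9.8, 10.2, 10.6, 11.0])
(alpha, beta), _ = curve_fit(naslund_height, dbh_sample, h_sample, p0=(1.0, 0.25))

# Mean height of the cultivar, predicted from the DBH of every surviving tree
dbh_all_trees = np.array([6.0, 7.4, 8.8, 9.9, 10.8, 11.9])
mean_height_m = naslund_height(dbh_all_trees, alpha, beta).mean()

# Approximate yield: dry mass of the average tree x density x survival / cycle length
dm_average_tree_kg = 28.9      # e.g. the 'NE-42' single-tree value reported in the Results
density_per_ha = 1333          # 2.5 m x 3 m spacing
survival = 0.95                # hypothetical survival rate
cycle_years = 5
yield_t_ha_yr = dm_average_tree_kg * density_per_ha * survival / 1000 / cycle_years
print(round(mean_height_m, 2), round(yield_t_ha_yr, 2))

With the reported 'NE-42' average-tree mass and near-complete survival, this simplified calculation lands close to the roughly 7.6 t DM ha−1 year−1 given later for that cultivar; it is only a consistency check, not a re-analysis of the trial data.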
However, cultivars 'AF-6' and 'MON' suffered extensive frost damage and showed high individual plant mortality (survival rate <50%). Accordingly, these two cultivars were excluded from further analysis. Table 2 Average survival, DBH, and height (H) comparison of the eight cultivars analysed in 5- and 6-year cycles The average (quadratic mean) DBH at 5 years of age for the eight cultivars analysed was 95.3 mm. The average (arithmetic mean) diameters of the various cultivars showed statistically significant differences. Cultivars 'NE-42' and 'Fritzi Pauley' had the largest mean DBH values (110.7 and 105.6 mm, respectively), which were significantly greater than the DBH values of other cultivars (Table 2). In contrast, the cultivar 'AF-8' had the lowest DBH at 62.6 mm. As expected, the average (quadratic mean) DBH at 6 years of age in the eight cultivars was higher than that in the 5-year-old trees, i.e. 110.5 vs. 95.3 mm. As with the 5-year cycle, the 'NE-42' and 'Fritzi Pauley' cultivars had the highest mean DBH in the 6-year cycle, and the lowest mean DBH was for 'AF-8' trees (74.6 mm) (Table 2). The average (quadratic mean) height for all eight cultivars analysed on a 5-year cycle was 9.48 m. In a similar manner to the analysis of DBH, the 'NE-42' and 'Fritzi Pauley' cultivars were the tallest at 10.4 and 10.2 m, respectively (arithmetic mean) (Table 2). The average (quadratic mean) height for all eight cultivars analysed on a 6-year cycle was 10.8 m. The 'Fritzi Pauley' cultivar produced significantly taller trees (arithmetic mean 13.2 m) than all other analysed cultivars. Trees of the 'AF-8' cultivar had the lowest mean heights in both the 5- and 6-year cycles, of 7.6 and 8.4 m, respectively (Table 2). Diameter and height increment Knowledge of the growth dynamics of various cultivars could be helpful in selecting appropriate production-cycle lengths. The 'NE-42' and 'Fritzi Pauley' cultivars produced the largest trees on the 5-year cycle (Table 2). However, after one more year of growth (i.e. a 6-year cycle), the 'Fritzi Pauley' clone showed its genetic potential and achieved the greatest DBH increment (18.5 mm) among all cultivars as well as the largest height increment (3.0 m) and significantly exceeded all other cultivars for both metrics (Fig. 1). In comparison to 'Fritzi Pauley', the other cultivars were significantly inferior with respect to both of these growth characteristics, achieving annual DBH increments of 12.0 mm ('AF-8') to 16.8 mm ('Degrosso') and annual height increments of 0.5 m ('Degrosso') to 1.6 m ('NE-42') (Fig. 1). Fig. 1 Annual calculated DBH and height increments (±SE) of eight poplar cultivars at age 5 years in 2015. Different lowercase letters between columns indicate statistical differences (p < 0.05) (Tukey's HSD test) Single-tree dry mass The largest DM in the 5-year cycle was recorded for an 'NE-42' tree (28.9 kg), which was significantly greater than those of the other seven cultivars. The DM values for the 'Fritzi Pauley' (20.4 kg), 'Koster' (18.7 kg), and 'Polargo' (18.2 kg) cultivars were statistically homogeneous. In contrast, the dry mass of the single tree of cultivar 'AF-8' in the 5-year cycle was significantly lower at 7.2 kg (Table 3). Table 3 Dry matter (DM) values of analysed cultivars, based on surviving individuals After 6 years of biomass production, the highest calculated DMs for a single tree were 35.2 and 32.4 kg for cultivars 'NE-42' and 'Fritzi Pauley', respectively.
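The Results above distinguish between the arithmetic mean DBH reported for each cultivar and the quadratic mean used for the pooled figures (95.3 and 110.5 mm). As a reminder of standard forest-mensuration usage (not spelled out in the paper), the quadratic mean diameter is the diameter of the tree of mean basal area: $$ \mathrm{DBH}_q=\sqrt{\frac{1}{n}\sum_{i=1}^{n}\mathrm{DBH}_i^{2}} $$ For example, three trees of 80, 100 and 120 mm have an arithmetic mean of 100 mm but a quadratic mean of $\sqrt{(80^2+100^2+120^2)/3}\approx 101.3$ mm; the quadratic mean is never smaller than the arithmetic mean and gives more weight to thicker stems, which is appropriate when the quantity of interest scales with basal area or biomass.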
The lowest mass calculated for a single tree was that of 'AF-8' (13.2 kg) (Table 3). Biomass production data are provided in Table 3 and Fig. 2 and show large differences among cultivars. Notably, there was an improvement in the productivity of 'Fritzi Pauley' between the 5- and 6-year cycles of 5.2 and 6.9 t ha−1 year−1, respectively. The 'AF-8' cultivar had the lowest biomass productivity (2.6 t ha−1 year−1) among all eight cultivars analysed (Table 3 and Fig. 2). Dry matter (DM) yield comparison per unit area (t ha−1 year−1) of 5- and 6-year-old cultivars. The data were analysed using a contrast analysis. NS non-significant difference (p > 0.05); *significant difference (p < 0.05); **p < 0.01; ***p < 0.001 Height and number of shoots during the second cycle An ANOVA of the number of 1-year-old shoots growing from stumps of trees harvested after 5 years of growth showed significant variation among cultivars. The 'NE-42', 'Fritzi Pauley', and 'Koster' cultivars formed a group that produced the largest number of shoots. The lowest numbers of shoots were produced by cultivars 'AF-8' and 'AF-2' (Table 4). Table 4 Mean lengths and number of shoots per stump after the first year of the second 5-year cycle ANOVA also identified statistically significant differences for shoot lengths; however, the ranking was different to that of shoot number. The 'Degrosso', 'Albelo', 'AF-8', 'NE-42', and 'Fritzi Pauley' cultivars formed a group that produced the longest shoots while the 'Koster' and 'Polargo' cultivars had the shortest shoots (Table 4). There is a growing body of research on the use of poplar wood from short-rotation plantations as an energy resource in southern and western Europe (Spinelli et al. 2009; Aravanopoulos 2010; Gonzalez-Garcia et al. 2010). However, neither accumulation of timber volume nor sprouting ability in various poplar cultivars has been well studied with respect to climate conditions in Poland (Szczukowski and Stolarski 2013; Niemczyk et al. 2016). The results of the present study suggest that cultivars 'NE-42' and 'Fritzi Pauley' had not only the best DBH and height but also the highest biomass production. Conversely, cultivar 'AF-8' had the lowest growth figures of those analysed. The 'AF-6' and 'MON' cultivars suffered from frost damage and high mortality, so they should not be grown for energy purposes in Poland because of their poor growth under the prevailing climatic conditions. A previous study by Zajączkowski et al. (unpublished data) showed that cultivar 'Fritzi Pauley' produces good biomass yields after 4 years when grown on a 4-year cycle in south-western Poland. Climatic conditions are milder in this location compared to the current study, and a biomass yield of approximately 8 – 9 t DM ha−1 year−1 was obtained. Studies conducted in similar climatic conditions in other countries of central and eastern Europe showed similar or lower biomass productivity in poplar clones compared to the current work. For example, Lazdina et al. (2014) investigated growth rates in the Italian cultivars 'AF-2', 'AF-6', and 'AF-8' in Latvia (north-east of Poland) on a 3-year-cycle coppice system. Annual fresh biomass production (FM) was 6.62 t FM ha−1 for 'AF-6', 3.34 t ha−1 for 'AF-2', and 2.66 t ha−1 for 'AF-8'. These rates equate to biomass production of 2.71 t DM ha−1 for 'AF-6', 1.36 t ha−1 for 'AF-2', and 1.09 t ha−1 for 'AF-8' using a rate/share of DM (%) of 41% (Niemczyk et al. 2016). 
Interestingly, the 'AF-6' cultivar had relatively good productivity in that study but did very poorly in the current study. Benetka et al. (2014) compared the productivity of four local cultivars in a study conducted in the Czech Republic, south-west of Poland, using the 'NE-42' cultivar as a control. The 'NE-42' cultivar produced 8.3 t DM ha−1 year−1 during the first cycle under the most favourable site conditions. Each succeeding cycle (from the four employed) produced even higher yields, i.e. 15.4, 18.9, and 15.9 t ha−1 year−1, respectively. Importantly, 'NE-42' was highly productive regardless of soil conditions. Such high productivity could be due to the exceptional adaptive capacities of this cultivar, which was selected from several thousand seedlings derived from planned crosses over 80 years ago (Stout and Schreiner 1933) and was one of 27 Schreiner and Stout crosses that were introduced to Poland in 1938. Interest in this cultivar increased during the 1950s because of its high resistance to disease and pests (Bugała 1973), and it is currently among the primary poplar cultivars recommended for use in Polish plantations (Zajączkowski and Wojda 2012). Higher yields than those obtained in the current study could potentially be achieved from plantations in more favourable climatic conditions, such as in southern Europe. Yields could also be obtained more quickly since cycle length is one of the most important economic aspects of the production process (Armstrong et al. 1999; Nassi o Di Nasso et al. 2010). For example, biomass production in Italy averaged 9.9 t ha−1 year−1 for 1-year cycles and 16.4 t ha−1 year−1 for 3-year cycles (Nassi o Di Nasso et al. 2010). Similar average dry-mass yields (13.9 t ha−1 year−1) were obtained from two poplar plantations with 3- and 6-year cycles in northern Italy (Manzone and Calvo 2016). However, Sabatti et al. (2014) found that biomass production differed significantly among consecutive biennial coppice cycles for six various poplar genotypes in Italy. In the first cycle, biomass production amounted to 16 t ha−1 year−1, peaked at 20 t ha−1 year−1 in the second, and decreased to 17 t ha−1 year−1 in the third cycle. The highest biomass production was found, inter alia, for the 'Monviso' and 'AF-8' cultivars with mean annual dry-mass production of 19.5 and 19.3 t ha−1 year−1, respectively. Conversely, cultivar 'AF-8' had the lowest biomass productivity (2.6 t ha−1 year−1) in the current study and the 'Monviso' cultivar was excluded from our experiment because of poor tolerance to the severe climatic conditions of northern Poland. A study in Poland found that biomass production from one 4-year cycle was higher than that from two 2-year cycles combined (Zajączkowski et al., unpublished data). These results confirm the conclusions of Armstrong et al. (1999) and Nassi o Di Nasso et al. (2010), which stated that average production increases when poplar plantations are grown on longer cycles. Many researchers recommend cycles no shorter than 5–6 years (Fang et al. 1999; Alig et al. 2000; Boelcke and Kahle 2000). The average yield (in t ha−1 year−1) of five of the eight cultivars analysed increased significantly with a 6-year cycle compared with a 5-year cycle. However, defining the optimal rotation length of energy poplar plantations under Polish climatic conditions will require further research and exploration of even longer cutting cycles. 
In addition, the sprouting capacity of the currently tested cultivars is important as it contributes to the economic efficiency of subsequent cycles until the end of biomass production in a given plantation. Significant differences were observed in shoot regrowth from stumps of different cultivars after the first year of the second 5-year cycle. The 'NE-42', 'Fritzi Pauley', and 'Koster' cultivars produced the largest number of shoots. The first two are crosses of P. trichocarpa, and their growth abilities can be linked to the developmental characteristics inherited from this species. However, the resprouting ability of the 'Koster' cultivar, which is a cross between P. deltoides and P. nigra, is harder to explain. Benetka et al. (2014) indicated that poplar clone selection should focus on the production of thick, strong shoots, rather than a large number of weaker shoots. Shoot thickness was not measured in the current study, but the 'Degrosso', 'Albelo', 'AF-8', 'NE-42', and 'Fritzi Pauley' cultivars produced the longest shoots. The 'NE-42' and 'Fritzi Pauley' cultivars produced large numbers of relatively long shoots, so these two cultivars are likely to be the most useful for biomass production in either short or medium cycles. However, the final rating, which will evaluate total biomass production of a given cultivar, will not be made until the end of biomass production over three harvesting cycles and the evaluation of an optimal cycle length in the climatic conditions of northern Poland. The Italian cultivars 'AF-6' and 'Monviso' did not adapt to the local climatic conditions, and the productivity of other Italian clones was poor. In the 5-year cycle, the 'NE-42' cultivar produced the highest amount of biomass among the eight cultivars analysed. However, the 'Fritzi Pauley' cultivar also demonstrated a high biomass production potential in a 6-year cycle. In addition, both of these cultivars had the benefit of producing many long shoots after the first year of growth in the second 5-year cycle. The length of the production cycle is important for biomass accumulation in energy plantations. Our preliminary results indicate that five of eight cultivars analysed gave higher biomass productivity over a 6-year cycle than over a 5-year cycle. Overall, the results of the present study indicate the need to test cultivars in local climatic conditions before deploying them on a commercial scale. Diameter at breast height Dry matter FM: Fresh biomass production SRC: Short-rotation coppice Alig, R. J., Adams, D. M., McCarl, B. A., & Ince, P. J. (2000). Economic potential of short-rotation woody crops on agriculture land for pulp fiber production in the United States. Forest Products Journal, 50(5), 67–74. Aravanopoulos, F. A. (2010). Breeding of fast growing forest tree species for biomass production in Greece. Biomass and Bioenergy, 34(11), 1531–1537. Armstrong, A., Johns, C., & Tubby, I. (1999). Effects of spacing and cutting cycle on the yield of poplar grown as an energy crop. Biomass and Bioenergy, 17(4), 305–314. Benetka, V., Novotna, K., & Stochlova, P. (2014). Biomass production of Populus nigra L. clones grown in short rotation coppice systems in three different environments over four rotations. iForest, 7(2014), 233–239. doi:10.3832/ifor1162-007. Berndes, G., & Hansson, J. (2007). Bioenergy expansion in the EU: Cost-effective climate change mitigation, employment creation and reduced dependency on imported fuels. Energy Policy, 35(12), 5965–5979. Bisoffi, S., & Gullberg, U. (1996). 
Poplar breeding and selection strategies. In R. F. Stettler, H. D. Bradshaw Jr., P. E. Heilman, & T. M. Hinckley (Eds.), Biology of Populus and its implications for management and conservation (pp. 139–158). Ottawa, ON, Canada: NRC Research Press, National Research Council of Canada. Boelcke, B., & Kahle, P. (2000). Leistung schnellwachsender Baumarten im Kurzumtrieb auf landwirtschaftlichen Nutzflächen im Nordosten Deutschlands und erste Auswirkungen auf die Bodeneigenschaften. Die Holzzucht, 53, 5–10. Bugała, W. (1973). Systematyka i zmienność. In S. Białobok (Ed.), Topole Populus L. (pp. 515). Warszawa-Poznań, Państwowe Wydawnictwo Naukowe Directive 2009/28/EC of the European Parliament and of the Council on the promotion of the use of energy from renewable sources and amending and subsequently repealing Directives 2001/77/EC and 2003/30/EC. Official Journal of the European Union L 140. Accessed 5 June 2009, pp. 16-47 Fang, S., Xu, X., Lu, S., & Tang, L. (1999). Growth dynamics and biomass production in short-rotation poplar plantations: 6-year results for three clones at four spacings. Biomass & Bioenergy, 17(5), 415–425. Gonzalez-Garcia, S., Gasol, C. M., Gabarrel, X., Rieradevall, J., Teresa Moreira, M., & Feijoo, G. (2010). Environmental profile of ethanol from poplar biomass as transport fuel in Southern Europe. Renewable Energy, 35(5), 1014–1023. GUS. (2015). Energia ze źródeł odnawialnych w 2013 r. Warszawa: Główny Urząd Statystyczny. Herve, C., & Ceulemans, R. (1996). Short-rotation coppice vs non-coppiced poplar: a comparative study at two different field sites. Biomass and Bioenergy, 11(2-3), 139–150. IUSS Working Group WRB. (2015). World Reference Base for Soil Resources 2014, update 2015 International soil classification system for naming soils and creating legends for soil maps. World Soil Resources Reports No. 106. Rome: FAO. Karp, A., Hanley, S. J., Trybush, S. O., Macalpine, W., Pei, M., & Shield, I. (2011). Genetic improvement of willow for bioenergy and biofuels. Journal of Integrative Plant Biology, 53(2), 151–165. Kondracki, J. (2011). Geografia regionalna Polski. Warszawa: Wydawnictwo Naukowe PWN. Lambert, M. S., Timpledon, M. T., & Marseken, S. F. (2010). Short rotation forestry. Saarbrücken: VDM Publishing. Lazdina, D., Bardulis, A., Bardule, A., Lazdins, A., Zeps, M., & Jansons, A. (2014). The first three-year development of Alasia poplar clones AF2, AF6, AF7, AF8 in biomass short rotation coppice experimental cultures in Latvia. Agronomy Research, 12(2), 543–552. Manzone, M., & Calvo, A. (2016). Energy and CO2 analysis of poplar and maize crops for biomass production in north Italy. Renewable Energy, 86(February), 675–681. Näslund, M. (1936). Skogsförsöksanstaltens gallringsförsök i tallskog. Meddelanden från Statens Skogsförsöksanstalt, 29, 169. Nassi o Di Nasso, N., Guidi, W., Ragaglini, G., Tozzini, C., & Bonari, E. (2010). Biomass production and energy balance of a 12-year-old short-rotation coppice poplar stand under different cutting cycles. GCB Bioenergy, 2(2), 89–97. Niemczyk, M., Wojda, T., & Kantorowicz, W. (2016). Przydatność hodowlana wybranych odmian topoli w plantacjach energetycznych o krótkim cyklu produkcji. Sylwan, 160(4), 292–298. Sabatti, M., Fabbrini, F., Harfouche, A., Beritognolo, I., Mareschi, L., Carlini, M., Paris, P., & Scarascia-Mugnozza, G. (2014). Evaluation of biomass production potential and heating value of hybrid poplar genotypes in a short-rotation culture in Italy. Industrial Crops and Products, 61(November), 62–73. 
Spinelli, R., Nati, C., & Magagnotti, N. (2009). Using modified foragers to harvest short-rotation poplar plantations. Biomass and Bioenergy, 33(5), 817–821. Stanton, B.J., Neale, D.B., & Lim, S. (2010). Populus breeding: from the classical to the genomic approach. In S. Jansson, R.P. Bhalerao, & A.T. Groover (Eds.), Plant Genetics and Genomics: Crops and Models (8, pp. 309-342). Springer Science + Business Media StatSoft, Inc. STATISTICA (data analysis software system), version 10. (2011). http//:www.statsoft.com Stolarski, M. J. (2009). Agrotechniczne i ekonomiczne aspekty produkcji biomasy wierzby krzewiastej (Salix spp.) jako surowca energetycznego. Rozprawy i Monografie. UWM Olsztyn, 148, 1–145. Stout, A.B., & Schreiner, E.J. (1933). Results of a project in hybridizing poplars. Journal of Heredity, 24, 216–229. Szczukowski, S., & Stolarski, M. (2013). Plantacje drzew i krzewów szybko rosnących jako alternatywa biomasy z lasu—stan obecny, szanse i zagrożenia rozwoju. In P. Gołos & A. Kaliszewski (Eds.), Biomasa leśna na cele energetyczne (pp. 32–46). Sękocin Stary: IBL. Ustawa (2015). Ustawa z dnia 20 lutego 2015 r. o odnawialnych źródłach energii. Dz.U. 2015 poz. 478 Verwijst, T. (2001). Willows: an underestimated resource for environment and society. The Forestry Chronicle, 77(2), 281–285. Wojda, T., Klisz, M., Jastrzebowski, S., Mionskowski, M., Szyp-Borowska, I., & Szczygiel, K. (2015). The geographical distribution of the black locust (Robinia pseudoacacia L.) in Poland, and its role on non-forest land. Papers on Global Change, 22, 101–113. doi:10.1515/igbp−2015−0018. Zajączkowski K. (2013). Hodowla Lasu: Plantacje drzew szybko rosnących. Powszechne Wydawnictwo Rolnicze i Leśne. Warszawa pp. 168 Zajączkowski, K., & Wojda, T. (2012). Plantacje topolowe w przyrodniczych warunkach Polski. Studia i Materiały CEPL w Rogowie, 33(4), 136–142. Funding for this research was provided by the Directorate General of the State Forests. We would like to thank our colleagues from the Forest Research Institute: Szymon Krajewski, Władysław Kantorowicz, Grzegorz Jakubowski, and Piotr Wrzesiński for the help during the field data collection and laboratory sample analysis. MN was the primary author, undertook the technical design of the study, data collection, and data analysis, and contributed to writing the manuscript. TW contributed to the technical design of the study, data collection, and revisions of the manuscript. AK contributed to writing the manuscript. All authors have agreed to the authorship and the order of authorship for this manuscript; and all authors have the appropriate permissions and rights to the reported data. All authors read and approved the final version of the manuscript. Department of Silviculture and Genetics, Forest Research Institute, Braci Leśnej 3, Sękocin Stary, 05-090, Raszyn, Poland Marzena Niemczyk & Tomasz Wojda Department of Forest Resources Management, Forest Research Institute, Braci Leśnej 3, Sękocin Stary, 05-090, Raszyn, Poland Adam Kaliszewski Search for Marzena Niemczyk in: Search for Tomasz Wojda in: Search for Adam Kaliszewski in: Correspondence to Marzena Niemczyk. § Based on a paper presented at the Forest Genetics for Productivity conference that was held in Rotorua, New Zealand, March 14-18, 2016. 
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. Niemczyk, M., Wojda, T. & Kaliszewski, A. Biomass productivity of selected poplar (Populus spp.) cultivars in short rotations in northern Poland§ . N.Z. j. of For. Sci. 46, 22 (2016) doi:10.1186/s40490-016-0077-8 'NE-42' 'Fritzi Pauley' Biomass productivity Populus cultivar Forest Genetics for Productivity
CommonCrawl
State space model
Zhe Chen and Emery N. Brown (2013), Scholarpedia, 8(3):30868. doi:10.4249/scholarpedia.30868
Dr. Zhe Chen, MIT, Cambridge, MA, USA; Dr. Emery N. Brown, Harvard-MIT HST, Cambridge, MA
State space model (SSM) refers to a class of probabilistic graphical models (Koller and Friedman, 2009) that describes the probabilistic dependence between the latent state variable and the observed measurement. The state or the measurement can be either continuous or discrete. The term "state space" originated in the 1960s in the area of control engineering (Kalman, 1960). SSM provides a general framework for analyzing deterministic and stochastic dynamical systems that are measured or observed through a stochastic process. The SSM framework has been successfully applied in engineering, statistics, computer science and economics to solve a broad range of dynamical systems problems. Other terms used to describe SSMs are hidden Markov models (HMMs) (Rabiner, 1989) and latent process models. The most well studied SSM is the Kalman filter, which defines an optimal algorithm for inferring linear Gaussian systems. An important objective of computational neuroscience is to develop statistical techniques to characterize the dynamic features inherent in neural and behavioral responses of experimental subjects collected during neurophysiological experiments. In neuroscience experiments, measurements of neural or behavioral data are often dynamic, noisy and have rich temporal structures. Examples of such data include intracellular or extracellular recordings, neuronal spike trains, local field potentials, EEG, MEG, fMRI, calcium imaging, and behavioral measures (such as reaction time and decision choice). Questions of interest may include how to analyze spike trains from ensembles of hippocampal place cells to infer the rodent's position in the environment, or how to identify dipole sources using multi-channel MEG recordings. Regardless of their specific modality and applications, SSM provides a unified and powerful paradigm to model and analyze these signals in a dynamic fashion in both time and space.
Formalism and theory
The objective of state space modeling is to compute the optimal estimate of the hidden state given the observed data, which can be derived as a recursive form of Bayes's rule (Brown et al., 1998; Chen, Barbieri and Brown, 2010). In a general state space formulation, let x(t) denote the state and y(0:t) denote the cumulative observations up to time t; the filtering posterior probability distribution of the state conditional on the observations y(0:t) is $$p({\bf x}(t)|{\bf y}(0:t)) ={p({\bf x}(t), {\bf y}(0:t)) \over p({\bf y}(0:t)) } = {p({\bf x}(t)|{\bf y}(0:t-1)) p({\bf y}(0:t)|{\bf x}(t),{\bf y}(0:t-1))\over p({\bf y}(t)|{\bf y}(0:t-1)) } ={p({\bf x}(t)|{\bf y}(0:t-1)) p({\bf y}(t)|{\bf x}(t),{\bf y}(0:t-1))\over p({\bf y}(t)|{\bf y}(0:t-1))}\tag{1} $$ where the last equality of Equation (1) follows from the conditional independence assumption between the observations.
The one-step state prediction, known as the Chapman-Kolmogorov equation, is $$p({\bf x}(t)|{\bf y}(0:t-1)) = \int p({\bf x}(t)|{\bf x}(t-1)) p({\bf x}(t-1)|{\bf y}(0:t-1)) d{\bf x}(t-1)\tag{2}$$ where the probability distribution (or density) p(x(t)|x(t-1)) describes the state transition equation, and the probability distribution (or density) p(y(t)|x(t),y(0:t-1)) is the observation equation. Equations (1) and (2) provide the fundamental relations used to develop state space models and analyses. For illustration purposes, consider a discrete-time multivariate linear Gaussian system; the SSM is characterized by two linear equations. State equation. The n-dimensional hidden state process x(t+1) follows first-order Markovian dynamics, as it only depends on the previous state at time t and is corrupted by a (correlated or uncorrelated) state noise process n(t) $$\textbf{x}(t+1) = \textbf{Ax}(t) + \textbf{n}(t)\tag{3}$$ where A is an n × n state-transition matrix. The state equation describes the state space evolution of a stochastic dynamical system. Observation equation. The m-dimensional measurement y(t) is subject to a linear transformation of the hidden state x(t) and is further corrupted by a measurement noise process $\textbf{v}(t)$ $$\textbf{y}(t) = \textbf{Bx}(t) + \textbf{v}(t)\tag{4}$$ When the noise processes n(t) and v(t) are both Gaussian with zero mean and respective covariance matrices Q and R, y(t) will be a Gaussian process. Equation (3) defines a first-order autoregressive (AR) process. Although a first-order Markovian transition is assumed in equation (3), a higher-order AR structure can always be reformulated and transformed into a first-order AR formulation by concatenating several state vectors into a new state vector (for example, ${\bf x}_{new}$(t) = [x(t), x(t-1)]). Given a series of observations Y={y(1), …, y($T_0$)} during a time interval [0,$T_0$], in light of equations (3) and (4), the joint (complete data) likelihood function for the linear Gaussian SSM is $$p(X,Y|\theta) = {1\over (2\pi)^{n/2}|{\bf Q}|^{1/2}} \exp\Big\{ -0.5\sum\limits_{t=1}^{T_0-1} ({\bf x}(t+1)-{\bf Ax}(t))^T {\bf Q}^{-1}({\bf x}(t+1)-{\bf Ax}(t))\Big\} \times {1\over (2\pi)^{m/2}|{\bf R}|^{1/2}} \exp\Big\{ -0.5\sum\limits_{t=1}^{T_0} ({\bf y}(t)-{\bf Bx}(t))^T {\bf R}^{-1}({\bf y}(t)-{\bf Bx}(t))\Big\} \tag{5}$$ where the superscript T denotes the vector or matrix transpose operator, and the augmented variable θ={A, B, Q, R, x(0)} includes the initial state condition and the parameters that fully characterize the linear Gaussian SSM. The linear Gaussian SSM can be extended to a broad class of dynamic Bayesian networks (Ghahramani, 1998) by changing one or more of the following conditions on the state or measurement variables (Chen, Barbieri and Brown, 2010): (i) from a continuous state to a discrete or mixed-value state variable in equation (3); (ii) from a continuous observation to a discrete or mixed observation in equation (4); (iii) from Gaussian to non-Gaussian noise processes in equation (3) or (4); and (iv) inclusion of nonlinearity in equation (3) or (4). For instance, changing condition (i) may result in a discrete-time HMM or switching SSM; changing condition (ii) or (iii) may result in a generalized SSM with a generalized linear model (GLM) (McCullagh and Nelder, 1989) in place of equation (4); and changing condition (iv) may result in a nonlinear neural filter.
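The linear Gaussian case of equations (3)–(4) is the one setting in which the recursion of equations (1)–(2) can be carried out exactly, via the Kalman filter mentioned earlier. The sketch below is a generic textbook implementation written against those two equations; the matrices and the simulated data are arbitrary illustrative choices rather than anything specified in the article.

```python
import numpy as np

def kalman_filter(y, A, B, Q, R, x0, P0):
    """Kalman filter for x(t+1) = A x(t) + n(t), y(t) = B x(t) + v(t) (equations (3)-(4)).
    Returns the filtered means and covariances of p(x(t)|y(1:t))."""
    n, T = A.shape[0], y.shape[0]
    xs = np.zeros((T, n))
    Ps = np.zeros((T, n, n))
    x, P = x0, P0
    for t in range(T):
        # one-step prediction (Chapman-Kolmogorov, equation (2))
        x_pred = A @ x
        P_pred = A @ P @ A.T + Q
        # measurement update (Bayes's rule, equation (1))
        S = B @ P_pred @ B.T + R                 # innovation covariance
        K = P_pred @ B.T @ np.linalg.inv(S)      # Kalman gain
        x = x_pred + K @ (y[t] - B @ x_pred)
        P = P_pred - K @ B @ P_pred
        xs[t], Ps[t] = x, P
    return xs, Ps

# Illustrative one-dimensional example (all parameters are arbitrary assumptions):
rng = np.random.default_rng(0)
A = np.array([[0.95]]); B = np.array([[1.0]])
Q = np.array([[0.1]]);  R = np.array([[0.5]])
x_true, ys = [0.0], []
for _ in range(200):
    x_true.append(0.95 * x_true[-1] + rng.normal(0, np.sqrt(0.1)))
    ys.append(x_true[-1] + rng.normal(0, np.sqrt(0.5)))
xs, _ = kalman_filter(np.array(ys)[:, None], A, B, Q, R, x0=np.zeros(1), P0=np.eye(1))
print(xs[:5].ravel())
```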
In addition, a control variable can be incorporated into the state equation (3), which results in a standard linear quadratic Gaussian (LQG) control system for which the optimal solution can be derived analytically (Bertsekas, 2005). In modeling discrete neural signals, such as neuronal spike trains (point processes) or spike counts (Poisson processes), new variants of SSM may emerge. For simplicity, consider a single neural point process whose conditional intensity function λ(t) is modulated by a constant baseline rate parameter μ, a latent continuous state variable x(t) and an observed covariate u(t). When Δ is sufficiently small, the product λ(t)Δ is approximately equal to the probability of observing a spike within the interval ((t-1)Δ, tΔ] (in which there is at most one spike). For simplicity of illustration, assume that the state equation is characterized by a first-order AR process and the observation equation is characterized by a point process likelihood function (Smith and Brown, 2003) $$x(t+1)=\rho x(t) + n(t) \tag{6}$$ $$\log\lambda(t)=\mu+\alpha x(t) + \beta u(t) \tag{7}$$ $$p(Y|X,\theta)=\exp\Big\{\int_0^{T_0} \log\lambda(\tau) dy(\tau) -\int_0^{T_0}\lambda(\tau)d\tau\Big\} \tag{8}$$ where θ={σ, ρ, μ, α, β, $x_0$}, n(t) denotes a zero-mean Gaussian variable with variance $\sigma^2$, |ρ|<1 denotes the AR coefficient, and the indicator variable $dy(\tau)$ is equal to 1 when there is a spike within the interval ((t-1)Δ, tΔ] and 0 otherwise. From equations (6)–(8), the complete data likelihood function is derived as $$p(X,Y|\theta)=\Big[{(1-\rho^2) \over 2\pi \sigma^2}\Big]^{1/2} \exp\Big\{-{1\over 2\sigma^2}\Big[(1-\rho^2) x_0^2 +\sum\limits_{t=1}^{T_0-1}(x(t+1)-\rho x(t))^2\Big]\Big\} \times \exp\Big\{\int_0^{T_0} \log\lambda(\tau) dy(\tau) -\int_0^{T_0}\lambda(\tau)d\tau\Big\} \tag{9}$$ Provided that the parameter θ is known, direct optimization of p(X,Y|θ) or log p(X,Y|θ) will yield a globally optimal state estimate of {x(t)}. In the presence of multivariate point process observations $Y=\{y_c(1:t)\}_{c=1}^C$, provided that the observations are driven by a common state process $X=\{x(t)\}_{t=1}^{T_0}$ and the multivariate point process observations are conditionally independent at any time $t$ given the parameter θ, the generic complete data likelihood is derived as $$p(X,Y|\theta)= \prod_{t=1}^{T_0}\Big[p(x(t)|x(t-1),\theta) \prod_{c=1}^C p(y_c(t)|x(t),\theta)\Big] \tag{10}$$ Such a probabilistic model is often used to characterize population neuronal spike trains (Brown et al., 1998; Brown et al., 2003; Chen, Barbieri and Brown, 2010).
Statistical inference and learning
A common objective of statistical inference for SSM is to infer the state (including its uncertainty) based on the time series observations. In light of equation (1), the goal of state space analysis is to estimate the posterior probability distribution (or density) p(x(t)|Y). In the special case of the linear Gaussian SSM, the predictive posterior distribution is fully characterized by the conditional mean and conditional covariance of a Gaussian distribution. When the state and observation equations of the linear Gaussian SSM are known, the optimal inference algorithm is described by recursive Kalman filtering (where Y={y(1), …, y(t)} is used for an online operation) or fixed-interval Kalman smoothing (where Y={y(1), …, y($T_0$)} is used for an offline operation). When the state is discrete (as in the HMM) and the state and observation equations are known, the optimal solution is given by the Viterbi algorithm.
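Before turning to the point-process analogue of the Kalman recursion, it may help to see how data from the model of equations (6)–(8) can be generated. The sketch below simulates the latent AR(1) state and draws at most one spike per bin with probability λ(t)Δ, which is the small-Δ approximation used above; all parameter values are assumptions chosen only to produce an example spike train.

```python
import numpy as np

def simulate_point_process_model(T_bins, delta, rho, sigma, mu, alpha, beta,
                                 u=None, seed=0):
    """Simulate equations (6)-(8): AR(1) latent state x(t) and a 0/1 spike train dy(t)
    whose per-bin spike probability is approximately lambda(t)*delta."""
    rng = np.random.default_rng(seed)
    if u is None:
        u = np.zeros(T_bins)                               # observed covariate (none here)
    x = np.zeros(T_bins)
    for t in range(1, T_bins):
        x[t] = rho * x[t - 1] + rng.normal(0.0, sigma)     # state equation (6)
    lam = np.exp(mu + alpha * x + beta * u)                # intensity, equation (7), in spikes/s
    dy = (rng.random(T_bins) < lam * delta).astype(float)  # Bernoulli approximation of (8)
    return x, dy, u

# Illustrative parameters (assumptions, not values taken from the article):
x_true, dy, u = simulate_point_process_model(T_bins=2000, delta=0.001,
                                             rho=0.99, sigma=0.05,
                                             mu=3.0, alpha=1.0, beta=0.0)
print(int(dy.sum()), "spikes in", 2000 * 0.001, "seconds")
```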
In contrast, in the presence of point process observations, a discrete analog of Kalman filtering operation is described by a point process filtering operation (Brown et al., 1998; Smith and Brown, 2003). For the above simple example (equations (6)-(8)), the following point process filtering equations are used to infer the state $\{x(t)\}$ $$x(t+1|t) = \rho x(t|t) \;\;\; (\textrm{one-step mean prediction}) \tag{11}$$ $$\sigma^2_x(t+1|t) =\rho^2 \sigma^2_x(t|t) +\sigma^2 \;\;\; (\textrm{one-step variance prediction}) \tag{12}$$ $$ x(t+1|t+1) = x(t+1|t) + \sigma^2_x(t+1|t) \alpha \Big[dy(t+1)-\exp\Big(\mu+\alpha x(t+1|t+1) +\beta u(t+1)\Big)\Delta \Big] \;\;\; (\textrm{posterior mode})\tag{13}$$ $$ σ^2_x(t+1|t+1) = \Big[ \Big(\sigma^2_x(t+1|t) \Big)^{-1} +\alpha^2 \exp\Big(\mu+\alpha x(t+1|t+1)+\beta u(t+1)\Big)\Delta \Big]^{-1} \;\;\; (\textrm{posterior variance})\tag{14}$$ where x(t+1|t+1) and $σ^2_x$(t+1|t+1) denote the posterior state mode and posterior state variance, respectively. Equations (11)-(14) are reminiscent of Kalman filtering. Equations (11) and (12) for one-step mean and variance predictions are the same as Kalman filtering, but equations (13) and (14) are different from Kalman filtering due to the presence of non-Gaussian observations and nonlinear operation in (13). In equation (13), [d y(t+1) − exp(μ + αx(t+1|t+1) + βu(t+1))Δ] is viewed as the innovations term, and $σ^2_x$(t+1|t) may be interpreted as a "Kalman gain". The quantity of the Kalman gain determines the "step size" of error correction. In equation (14), the posterior state variance is derived by inverting the second derivative of the log-posterior probability density logp(X|Y,θ) based on a Gaussian approximation of the posterior distribution around the posterior mode (Brown et al., 1998; Smith and Brown, 2003; Eden et al., 2004). When the state and observation equations are completely or partially unknown, the parameter θ and the state {x(t)} need to be jointly estimated. In statistics, likelihood inference is a well-established and asymptotically efficient approach for parameter estimation. Specifically, the expectation-maximization (EM) algorithm (Dempster, Laird and Rubin, 1977) provides a general framework to maximize or increase the likelihood by iteratively updating the latent state and parameter variables. Reconsider the example used in equations (6)-(8), the corresponding EM algorithm consists of the following two steps. E-step: At the k-th step, compute the expected complete data log likelihood (Q-function) based on the estimate ${\it θ}^{(k)}$ $$Q(\theta|\theta^{(k)}) = E[\log p(X,Y|\theta) || \theta^{(k)}] \tag{15} $$ and estimate the expected statistics of the latent process (E[x(t)||${\it θ}^{(k)}$], E[$x^2$(t)||${\it θ}^{(k)}$] and E[x(t)x(t+1)||${\it θ}^{(k)}$]) using point process filtering (equations (11) through (14)) and fixed-interval Kalman smoothing. M-step: Update ${\it θ}^{(k)}$ to ${\it θ}^{(k+1)}$ such that ${\it θ}^{(k+1)}=\arg\max_{θ} Q({\it θ}|{\it θ}^{(k)}$). This can be achieved by setting the partial derivative of the Q-function to zero (i.e., $\frac{\partial Q}{\partial \theta}=0$), which may be solved via either numerical optimization (in the case of μ, α and β) or closed-form solutions (in the case of σ, ρ and $x_0$). The E and M steps are executed iteratively until the likelihood reaches a local maximum. 
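Equation (13) is implicit, because the posterior mode x(t+1|t+1) appears inside the exponential; in practice it is found with a few Newton iterations at each time step, and equation (14) then evaluates the posterior variance at that mode. The sketch below implements equations (11)–(14) in exactly that way; the toy spike train and all parameter values are assumptions used only for illustration (the spike train simulated in the previous sketch could be used instead).

```python
import numpy as np

def point_process_filter(dy, u, rho, sigma2, mu, alpha, beta, delta,
                         x0=0.0, v0=1.0, newton_iters=5):
    """Point process filter of equations (11)-(14).
    dy: 0/1 spike indicators per bin; u: observed covariate; delta: bin width (s)."""
    T = len(dy)
    x_post = np.zeros(T)
    v_post = np.zeros(T)
    x, v = x0, v0
    for t in range(T):
        # equations (11)-(12): one-step mean and variance prediction
        x_pred = rho * x
        v_pred = rho ** 2 * v + sigma2
        # equation (13): implicit posterior mode, solved by Newton's method
        x = x_pred
        for _ in range(newton_iters):
            lam = np.exp(mu + alpha * x + beta * u[t]) * delta
            grad = (x - x_pred) / v_pred - alpha * (dy[t] - lam)   # -d(log posterior)/dx
            hess = 1.0 / v_pred + alpha ** 2 * lam                 # -d2(log posterior)/dx2
            x -= grad / hess
        # equation (14): posterior variance from the curvature at the mode
        lam = np.exp(mu + alpha * x + beta * u[t]) * delta
        v = 1.0 / (1.0 / v_pred + alpha ** 2 * lam)
        x_post[t], v_post[t] = x, v
    return x_post, v_post

# Minimal usage on a toy spike train (all settings are illustrative assumptions):
rng = np.random.default_rng(1)
T_bins, delta = 2000, 0.001
dy = (rng.random(T_bins) < 0.02).astype(float)   # roughly 20 spikes/s of unstructured input
u = np.zeros(T_bins)
x_hat, v_hat = point_process_filter(dy, u, rho=0.99, sigma2=0.05 ** 2,
                                    mu=3.0, alpha=1.0, beta=0.0, delta=delta)
print(x_hat[-3:], v_hat[-3:])
```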
Upon convergence, the EM algorithm yields a point estimate of θ, the confidence intervals of θ can be assessed from the likelihood principle (Pawitan, 2001; Brown et al., 2003). Alternatively, Bayesian inference (Gelman et al., 2005) provides another approach that aims to estimate the full posterior of the state and parameters based on Bayesian statistics. When the state variable is non-Gaussian, particle filtering or smoothing may be used to numerically approximate the posterior distribution; when the parameter variable is non-Gaussian, Gaussian approximation, Gibbs sampling or Markov chain Monte Carlo (MCMC) approaches may be employed. The common goal of these inference methods is to estimate the joint posterior of the state and the parameters using Bayes's rule $$p(X,\theta|Y)\approx p(X|Y)p(\theta|Y)= { p(Y|X,\theta)p(X)p(\theta)\over p(Y)}={p(Y|X,\theta)p(X)p(\theta)\over \int p(Y|X,\theta)p(X)p(\theta)dX d\theta }\tag{16}$$ where the first equation assumes a factorial form of the posterior for the state and the parameters, and p(X) and p(θ) denote the prior distributions for the state and the parameters, respectively. The denominator of equation (16) is a normalizing constant known as the partition function. In general, Monte Carlo-based Bayesian inference or learning is powerful yet computationally expensive (Doucet, de Frietas, and Gordon, 2001; Gilks, Richardson and Spiegelhalter, 1996). A trade-off between tractable computational complexity and good performance is to exploit various approximate Bayesian inference methods, such as expectation propagation (Minka, 1999), mean-field approximation (Opper and Saad, 2001) and variational approximation (Jordan et al., 1999; Ghahramani, 1998; Beal and Ghahramani, 2006). These approximation techniques can also be integrated or combined to produce new methods, such as Monte Carlo EM or variational MCMC algorithms (McLachlan and Krishnan, 2008; Andrieu et al., 2003). Applications in neuroscience Numerous applications of SSM to dynamic analyses of neuroscience data can be found in the literature (Paninski et al., 2009; Chen, Barbieri and Brown, 2010). Several important and representative applications are highlighted here. • Population neuronal decoding: Examples include decoding the movement kinematics from primate motor cortex (M1) neurons in neural prosthetics (Brockwell, Rojas and Kass, 2004; Wu et al., 2006, 2009; Kulkarni and Paninski, 2008; Srinivasan et al., 2006, 2007) or goal-directed movement control (Srinivasan and Brown, 2007; Shanechi et al., 2012, 2013), or decoding rat's spatial location from hippocampus ensemble spike trains (Brown et al., 1998; Barbieri et al., 2004). Truccolo et al. (2008) applied the first point-process state-space analysis to decode M1 neuronal spike trains recorded in patients with tetraplegia. • Analysis of single neuronal plasticity or dynamics: Examples include tracking the receptive field of rat hippocampal neurons in navigation (Brown et al., 2001) and analyzing between-trial monkey hippocampal neuronal dynamics during associative learning experiments (Czanner et al., 2008). • Identification of the state of neuronal ensembles: Examples include detecting stimulus-driven cortical state during behavior (Jones et al., 2007; Kemere et al., 2008) or detecting intrinsic cortical up/down states during slow wave sleep (Chen et al., 2009). 
• Assessment of learning behavior of experimental subjects: Examples include characterizing dynamic behavioral responses in neuroscience experiments (Smith et al., 2004, 2005, 2007; Prerau et al., 2009). • Inverse problems: Examples include solving EEG or MEG inverse problems (Galka et al., 2004; Lamus et al., 2012), deconvolving fMRI time series (Penny et al., 2005) and deconvolving spike trains from calcium imaging (Vogelstein et al., 2009, 2010). • Other neuroscience applications such as spike sorting (Herbst et al., 2008; Calabrese and Paninski, 2011), assessment of higher-order neuronal synchrony (Shimazaki et al, 2012), prediction of spike timing (Kobayashi and Shinomoto, 2007), unfolding of population neuronal representation (Chen et al., 2012), and causality analysis (Havlicek et al., 2010). An important topic in state-space modeling is model selection, or specifically to select the (discrete or continuous-valued) state dimensionality. Classical likelihood-based approaches rely on Akaike's information criterion (AIC) or Bayesian information criterion (BIC), but these measures are often practically inefficient especially in the presence of sparse data samples. Following the Bayesian principle of "letting data speak for themselves", the model selection problem has recently been tackled using nonparametric Bayesian inference, for instance in the cases of infinite HMM (Beal et al., 2002; Teh et al., 2006) and the switching SSM (Fox et al., 2010, 2011). Moreover, inference of a large-scale SSM for neuroscience data remains another important research topic. Exploiting the structure of the system, such as the sparsity, smoothness and convexity, may allow for employing efficient state-of-the-art optimization routines and imposing domain-dependent priors for regularization (Paninski et al., 2009; Paninski, 2010). Finally, developing consistent goodness-of-fit assessment for neuroscience data would help to validate and compare different statistical models (Brown et al., 2003). Andrieu C, de Freitas N, Doucet A, Jordan MI. (2003) An introduction to MCMC for machine learning. Machine Learning, 50(1): 5-43. Barbieri R, Frank LM, Nguyen DP, Quirk MC, Solo V, Wilson MA, Brown EN. (2004) Dynamic analyses of information encoding by neural ensembles. Neural Computation, 16: 277-307. Beal M, Ghahramani Z, Rasmussen CE. (2002) The infinite hidden Markov model. In Advances in Neural Information Processing Systems, 14: 577-584. Cambridge, MA: MIT Press. Beal M, Ghahramani Z. (2006) Variational Bayesian learning of directed graphical models. Bayesian Analysis, 1(4): 793-832. Bertsekas D. (2005) Dynamic Programming and Optimal Control. Boston, MA: Athena Scientific. Brown EN, Frank LM, Tang D, Quirk MC, Wilson MA. (1998) A statistical paradigm for neural spike train decoding applied to position prediction from ensemble firing patterns of rat hippocampal place cells. Journal of Neuroscience, 18:7411-7425. Brown EN, Nguyen DP, Frank LM, Wilson MA, Solo V. (2001) An analysis of neural receptive field plasticity by point process adaptive filtering. Proceedings of the National Academy of Sciences, 98: 12261-12266. Brown EN, Barbieri R, Eden UT, Frank LM. (2003) Likelihood methods for neural data analysis. In: Feng J. (Ed.) Computational Neuroscience: A Comprehensive Approach (pp. 253-286), London: CRC Press. Brockwell A, Rojas A, Kass R. (2004) Recursive Bayesian decoding of motor cortical signals by particle filtering. Journal of Neurophysiology, 91:1899-1907. Calabrese A, Paninski L. 
(2011) Kalman filter mixture model for spike sorting of non-stationary data. Journal of Neuroscience Methods, 196(1): 159-169. Chen Z, Vijayan S, Barbieri R, Wilson MA, Brown EN. (2009) Discrete- and continuous-time probabilistic models and algorithms for inferring neuronal UP and DOWN states. Neural Computation, 21(7): 1797-1862. Chen Z, Barbieri R, Brown EN. (2010) State-space modeling of neural spike train and behavioral data. In Oweiss K (Ed.) Statistical Signal Processing for Neuroscience and Neurotechnology, Chap. 6 (pp. 161-200). Academic Press. Chen Z, Kloosterman F, Brown EN, Wilson MA. (2012) Uncovering hidden spatial topology represented by hippocampal population neuronal codes. Journal of Computational Neuroscience, 33: 227-255. Czanner G, Eden UT, Wirth, S, Yanike M, Suzuki WA, Brown EN. (2008) Analysis of between-trial and within-trial neural spiking dynamics. Journal of Neurophysiology, 99, 2672-2693. Dempster A, Laird N, Rubin DB. (1977) Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, B39: 1-38. Doucet A, de Freitas N, Gordon N. (2001) Sequential Monte Carlo Methods in Practice. Springer. Eden UT, Frank LM, Barbieri R, Solo V, Brown EN. (2004) Dynamic analyses of neural encoding by point process adaptive filtering. Neural Computation, 16: 971-998. Fox EL, Sudderth EB, Jordan MI, Willsky AS. (2010) Bayesian nonparametric learning of Markov switching processes. IEEE Signal Processing Magazine, 28(11): 43-54. Fox EL, Sudderth EB, Jordan MI, Willsky AS. (2011) Bayesian nonparametric inference of switching dynamic linear models. IEEE Transactions on Signal Processing, 59(4): 1569-1585. Galka A, Yamashita O, Ozaki T, Biscay R, Valdés-Sosa P. (2004) A solution to the dynamical inverse problem of EEG generation using spatiotemporal Kalman filtering. NeuroImage, 23(2): 435-453. Gelman A, Carlin JB, Stern HS, Rubin DB. (2004) Bayesian Data Analysis (2nd ed), Chapman & Hall/CRC. Ghahramani Z. (1998) Learning dynamic Bayesian networks. In Giles CL and Gori M (Eds.), Adaptive Processing of Sequences and Data Structures (pp. 168-197). Berlin: Springer-Verlag. Gilks WR, Richardson S, Spiegelhalter DJ. (1996). Markov Chain Monte Carlo in Practice. Chappman & Hall/CRC Press. Havlicek M, Jan J, Brazdil M, Calhoun VD. (2010) Dynamic Granger causality based on Kalman filter for evaluation of functional network connectivity in fMRI data. NeuroImage, 53(1): 65-77. Herbst JA, Gammeter S, Ferrero D, Hahnloser RH. (2008) Spike sorting with hidden Markov models. Journal of Neuroscience Methods, 174(1):126-134. Jones LM, Fontanini A, Sadacca BF, Katz DB. (2007) Natural stimuli evoke analysis dynamic sequences of states in sensory cortical ensembles. Proceedings of National Academy of Sciences USA 104: 18772-18777. Jordan MI, Ghahramani Z, Jaakkola TS, Saul LK. (1999) An introduction to variational methods for graphical models. Machine Learning, 37:183-233. Kalman RE. (1960) A new approach to linear filtering and prediction problems. Transactions of the ASME--Journal of Basic Engineering, 82:35-45. Kemere C, Santhanam G, Yu BM, Afshar A, Ryu SI, Meng TH, Shenoy KV. (2008) Detecting neural-state transitions using hidden Markov models for motor cortical prostheses. Journal of Neurophysiology, 100: 2441-2452. Kobayashi R, Shinomoto S. (2007) State space method for predicting the spike times of a neuron. Physical Review E75, 011925. Koller D, Friedman N. (2009) Probabilistic Graphical Models. Cambridge, MA: MIT Press. Kulkarni J, Paninski L. 
(2008) State-space decoding of goal-directed movements. IEEE Signal Processing Magazine, 25:78-86. Lamus C, Hamalainen MS, Temereanca S, Long CJ, Brown EN, Purdon PL. (2012) A spatiotemporal dynamic distributed solution to the MEG inverse problem. NeuroImage, 63(2): 894-909. McCullagh P, Nelder JA. (1989) Generalized Linear Models (2nd ed.). Chapman & Hall/CRC Press. McLachlan GJ, Krishnan T. (2008) The EM Algorithm and Extensions (2nd ed.). New York: Wiley. Minka TP. (2001) A family of algorithms for approximate Bayesian inference. PhD thesis, Dept. EECS, Massachusetts Institute of Technology, Cambridge, MA. Opper M, Saad D. (2001) Advanced Mean Field Methods: Theory and Practice. Cambridge, CA: MIT Press. Paninski L, Ahmadian Y, Ferreira DG, Koyama S, Rad KR, Vidne M, Vogelstein JT, Wu W. (2009) A new look at state-space models for neural data. Journal of Computational Neuroscience, 29(1-2): 107-126. Paninski L. (2010) Fast Kalman filtering on quasilinear dendritic trees. Journal of Computational Neuroscience, 28: 211-228. Pawitan Y. (2001) In All Likelihood: Statistical Modelling and Inference Using Likelihood. Oxford: Clarendon Press. Penny W, Ghahramani Z, Friston K. (2005). Bilinear dynamical systems. Philosophical Transactions of Royal Society of London B, 360: 983-993. Prerau MJ, Smith AC, Eden UT, Kubota Y, Yanike M, Suzuki W, Graybiel AM, Brown EN. (2009) Characterizing learning by simultaneous analysis of continuous and binary measures of performance. Journal of Neurophysiology, 102:3060-3072. Rabiner LR. (1989) A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE, 77(2): 257-286. Shanechi MM, Wornell GW, Williams ZM, Brown EN. (2013) Feedback-controlled parallel point process filter for estimation of goal-directed movements from neural signals. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 21: 129-140. Shanechi MM, Brown EN, Williams ZM. (2012) Neural population partitioning and a concurrent brain-machine interface for sequential control motor function. Nature Neuroscience, 12: 1715-1722, Shimazaki H, Amari S, Brown EN, Gruen S. (2012) State-space analysis of time-varying higher-order spike correlation for multiple neural spike train data. PLoS Computational Biology, 8(3): e1002385. Smith AC, Brown EN. (2003) Estimating a state-space model from point process observations. Neural Computation, 15: 965-991. Smith AC, Frank LM, Wirth S, Yanike M, Hu D, Kubota Y, Graybiel AM, Suzuki WA, Brown EN. (2004) Dynamical analysis of learning in behavior experiments. Journal of Neuroscience, 24: 447-461. Smith AC, Stefani MR, Moghaddam B, Brown EN. (2005) Analysis and design of behavioral experiments to characterize population learning. Journal of Neurophysiology, 93: 776-792. Smith AC, Wirth S, Suzuki W, Brown EN. (2007) Bayesian analysis of interleaved learning and response bias in behavioral experiments. Journal of Neurophysiology, 97: 2516-2524. Srinivasan L, Eden UT, Willsky AS, Brown EN. (2006) A state-space analysis for reconstruction of goal-directed movements using neural signals, Neural Computation, 18: 2465-2494. Srinivasan L, Eden UT, Mitter SK, Brown EN. (2007) General-purpose filter design for neural prosthetic devices. Journal of Neurophysiology, 98: 2456-2475. Srinivasan L, Brown EN. (2007) A state-space framework for movement control to dynamic goals through brain-driven interfaces. IEEE Transactions on Biomedical Engineering, 54(3): 526-535. Teh YW, Jordan MI, Beal MJ, Blei DM. 
(2006) Hierarchical Dirichlet processes. Journal of American Statistical Association, 101: 1566-1581. Truccolo W, Friehs GM, Donoghue JP, Hochberg LR. (2008) Primary motor cortex tuning to intended movement kinematics in humans with tetraplegia. Journal of Neuroscience, 28(5): 1163-1178. Vogelstein J, Watson B, Packer A, Yuste R, Jedynak B, Paninski L. (2009) Spike inference from calcium imaging using sequential Monte Carlo methods. Biophysical Journal, 97(2): 636-655. Vogelstein J, Packer A, Machado TA, Sippy T, Babadi B, Yuste R, Paninski L. (2010) Fast nonnegative deconvolution for spike train inference from population calcium imaging. Journal of Neurophysiology, 104: 3691-3704. Wu W, Gao Y, Bienenstock E, Donoghue JP, Black MJ. (2006) Bayesian population decoding of motor cortical activity using a Kalman filter. Neural Computation, 18: 80-118. Wu W, Kulkarni JE, Hatsopoulos NG, Paninski L. (2009) Neural decoding of hand motion using a linear state-space model with hidden states. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 17: 370-378. Andre Longtin (2010) Stochastic dynamical systems. Scholarpedia, 5(4):1619. Jose Pedro Segundo (2010) Spike train and point processes. Scholarpedia, 5(7):5729. David Spiegelhalter and Kenneth Rice (2009) Bayesian statistics. Scholarpedia, 4(8):5230. David H. Terman and Eugene M. Izhikevich (2008) State space. Scholarpedia, 3(3):1924. Andrew J. Viterbi (2008) Viterbi algorithm. Scholarpedia, 4(1):6246. Barber D, Cemgil AT, Chiappa S. (2011) Bayesian Time Series Models. Cambridge University Press. Cappé O, Moulines E and Rydén T. (2005) Inference in Hidden Markov Models. Berlin: Springer. Durbin J and Koopman SJ. (2001) Time Series Analysis by State Space Methods. Oxford University Press. Kim C-J and Nelson CR. (1999) State-Space Models with Regime Switching: Classical and Gibbs-Sampling Approaches with Applications. Cambridge, MA: MIT Press. Kitagawa G and Gersh W. (1996) Smoothness Priors Analysis of Time Series. New York: Springer. Ozaki T. (2012) Time Series Modeling of Neuroscience Data. Chapman & Hall/CRC Press. Sponsored by: Leo Trottier, Department of Cognitive Science, University of California, San Diego, CA, USA Reviewed by: Dr. Peter McCullagh, Department of Statistics, University of Chicago, IL Reviewed by: Anonymous (via Eugene M. Izhikevich, Editor-in-Chief of Scholarpedia, the peer-reviewed open-access encyclopedia) Retrieved from "http://www.scholarpedia.org/w/index.php?title=State_space_model&oldid=189565" Brain Corporation Prize 2012 "State space model" by Zhe Chen and Emery N. Brown is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License. Permissions beyond the scope of this license are described in the Terms of Use
CommonCrawl
The Means Consider two similar problems: A fellow travels from city $A\;$ to city $B.\;$ For the first hour, he drove at the constant speed of $20\;$ miles per hour. Then he (instantaneously) increased his speed and, for the next hour, kept it at $30\;$ miles per hour. Find the average speed of the motion. A fellow travels from city $A\;$ to city $B.\;$ The first half of the way, he drove at the constant speed of $20\;$ miles per hour. Then he (instantaneously) increased his speed and traveled the remaining distance at $30\;$ miles per hour. Find the average speed of the motion. The two problems are often confused and the difference between them may not be immediately obvious. The second one, as a mathematical conundrum, has been included in many math puzzles books. By definition, the average speed S of the motion that lasted time T over the distance D is $\displaystyle\text{Average speed} = \frac{\text{Total distance}}{\text{Total time}}\;$ or $\displaystyle S = \frac{D}{T}.$ The definition applies directly to the first problem. The fellow was on the road for the total of $T=2\;$ hours. Going at $20\;$ mi/h for the first hour the fellow covered the distance of $20\;$ miles. Similarly, in the second hour he covered the distance of $30\;$ miles. Therefore, in $2\;$ hours he traveled the total of $D=20+30=50\;$ miles. The average speed then is found to be $\displaystyle S=\frac{50}{2}=25\;$ m/h. The second problem is only a little more complex. There are three ways to write the formula above: $\displaystyle S=\frac{D}{T},\;$ $D=ST,\;$ $\displaystyle T=\frac{D}{S}.\;$ I apply the latter one. Let $d\;$ be half the distance between the two cities. The first leg of the journey took $\displaystyle\frac{d}{20}\;$ hours, the second $\displaystyle\frac{d}{30}.\;$ Therefore, on the whole, the fellow was on the road $\displaystyle T=\frac{d}{20}+\frac{d}{30}\;$ hours. During that time he covered $D=2d\;$ miles. It follows that his average speed is given by $\displaystyle S=\frac{2d}{\displaystyle\frac{d}{20}+\frac{d}{30}}\;$ or, after cancelling out the common factor $d,\;$ and a few arithmetic operations $S=24\;$ m/h. If the distance between the cities were, as in the first problem, $50\;$ miles, then the journey would take $T=\displaystyle\frac{50}{24}\;$ hours or $2\;$ hours and $5\;$ minutes, of which $1\;$ hour and $15\;$ minutes $(=\displaystyle\frac{25}{20}\;$ hours) were spent on the first $25\;$ miles and $50\;$ minutes $(=\displaystyle\frac{25}{30}\;$ hours) on the second $25\;$ mile stretch. What we found is that, depending on the circumstances, to determine the average speed of motion, computations based on the same basic formula $\left(S = \displaystyle\frac{D}{T}\right)\;$ may have to follow different routes. Now there are three means in music: first the arithmetic, secondly the geometric, and thirdly the subcontrary, the so-called harmonic. Archytas, cited by Porphyry in his Commentary on Ptolemy's Harmonics I. Thomas, Greek Mathematical Works, v1, Harvard University Press, 2006, p 113 For the given two quantities, $20\;$ and $30,\;$ the number $\displaystyle\frac{20+30}{2}\;$ is known as their arithmetic mean while $\displaystyle\frac{2}{\displaystyle\frac{1}{20}+\frac{1}{30}}\;$ is the harmonic mean of the two numbers. Of course the are more general definitions. 
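Both answers can be checked in a couple of lines; the two functions below simply restate the definitions used above.

```python
def arithmetic_mean(values):
    return sum(values) / len(values)

def harmonic_mean(values):
    return len(values) / sum(1.0 / v for v in values)

speeds = [20.0, 30.0]
print(arithmetic_mean(speeds))  # 25.0 mph: equal times spent at each speed (first problem)
print(harmonic_mean(speeds))    # 24.0 mph: equal distances covered at each speed (second problem)
```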
For convenience, it is customary to group a finite sequence of numbers $a_{1},a_{2}, \ldots, a_{n}\;$ in a vector form $\mathbf{a}=(a_{1},a_{2},\ldots, a_{n}).\;$ Then I am going to use the following shorthands: $\displaystyle A(\mathbf{a}) = \frac{1}{n}\sum\mathbf{a} = \frac{1}{n}\sum_{i=1}^{n}a_{i}\;$ and $\displaystyle H(\mathbf{a}) = \frac{n}{\sum\displaystyle\frac{1}{\mathbf{a}}} = \frac{n}{\displaystyle\sum_{i=1}^{n}\frac{1}{a_{i}}}.$ Observe that $\displaystyle H(\mathbf{a})=\frac{1}{A\left(\frac{1}{\mathbf{a}}\right)}.\;$ (For those who are not very comfortable or familiar with the summation formulas, it might be useful to verify this statement for small values of $n,\;$ say $2,\;$ $3,\;$ and $4.)\;$ This leads to a more general definition $\displaystyle M_{r}(\mathbf{a}) = \Big(\frac{1}{n}\sum\mathbf{a}^{r}\Big)^{1/r} = \Big(\frac{1}{n}\sum_{i=1}^{n}a_{i}^{r}\Big)^{1/r},\;$ where I assume that all $a_{i}$'s are non-negative and $r\;$ is a real number different from $0.\;$ For example, $A(\mathbf{a}) = M_{1}(\mathbf{a})\;$ whereas $H(\mathbf{a}) = M_{-1}(\mathbf{a}).\;$ In general, $M_{r}(\mathbf{a})\;$ is the mean value of the numbers $a_{1}, a_{2}, \ldots, a_{n}\;$ with the exponent $r.\;$ $M_{2}(\mathbf{a})\;$ is known as the quadratic average. It differs by a factor of $n^{1/2}\;$ from the Euclidean distance from the end of the vector $\mathbf{a}\;$ to the origin. Do other means have special names? Not that I am aware of. Are they of any use? I must say that I dislike this question. I am appalled by the current tendency to place an emphasis (while teaching and learning) on the so-called useful things. (Should, for example, a math teacher be concerned with how and whether factoring is used in the real world? I do not know whether superellipses were of any use until Piet Hein discovered their esthetic properties.) In the case of the means, considering $M_{r}(\mathbf{a})\;$ for nonzero real $r\;$ almost immediately proves to be a nice idea. It appears that once the means were defined for nonzero exponents, it also becomes possible to define $M_{0}(\mathbf{a}).\;$ A 2-step proof is very simple but needs an application of what's known as the L'Hôpital Rule, a subject covered in any Calculus I course. By the L'Hôpital Rule, it follows that the limit $\displaystyle\lim_{r\rightarrow 0}M_{r}(\mathbf{a})\;$ exists and is equal to the geometric mean of the numbers $a_{1}, a_{2}, \ldots, a_{n}.\;$ Thus the following complements the definition of $M_{r}(\mathbf{a}):$ $M_{0}(\mathbf{a}) = (a_{1}a_{2} \ldots a_{n})^{1/n}=\sqrt[n]{a_{1}a_{2} \ldots a_{n}}.$ Lemma. For $r > 0,\;$ $M_{r}(\mathbf{a}) < M_{2r}(\mathbf{a}),\;$ provided not all $a_{i}$'s are equal. If they are, then $M_{r}(\mathbf{a}) = M_{2r}(\mathbf{a}).\;$ The inequality $M_{r}(\mathbf{a}) < M_{2r}(\mathbf{a})\;$ can be rewritten as $M_{1}(\mathbf{a}^{r}) < M_{2}(\mathbf{a}^{r}),\;$ where, by convention, $\mathbf{a}^{r}= (a_{1}^{r},a_{2}^{r}, \ldots, a_{n}^{r}).\;$ The latter follows from a beautiful identity $\displaystyle\sum_{i=1}^{n}a_{i}^{2}\sum_{i=1}^{n}b_{i}^{2} - \Big(\sum_{i=1}^{n}a_{i}b_{i}\Big)^{2} = \frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n}(a_{i}b_{j} - a_{j}b_{i})^{2}$ whose right-hand side is non-negative and is equal to zero only when the vectors $\mathbf{a}\;$ and $\mathbf{b}\;$ are proportional.
The lemma follows from the latter with $b_{i} = 1\;$ for all $i = 1, 2, \ldots, n.\;$ Applying the Lemma repeatedly we get $\displaystyle M_{r}(\mathbf{a}) > M_{\frac{r}{2}}(\mathbf{a}) > M_{\frac{r}{2^{2}}}(\mathbf{a}) > M_{\frac{r}{2^{3}}}(\mathbf{a}) > \ldots\;\rightarrow\;M_{0}(\mathbf{a}),$ as above (remember the L'Hôpital Rule!). From here it follows that for any $r \gt 0,\;$ $M_{r}(\mathbf{a}) \gt M_{0}(\mathbf{a})\;$ (except, of course, for the case where all $a_{i}$'s are equal). This is a generalization of the Arithmetic mean - Geometric mean inequality $M_{1}(\mathbf{a}) \gt M_{0}(\mathbf{a}).\;$ On the other hand, if $r \lt 0\;$ and $s = -r,\;$ we can write $\displaystyle M_{r}(\mathbf{a}) = M_{-s}(\mathbf{a}) = \frac{1}{M_{s}\left(\frac{1}{\mathbf{a}}\right)} \lt \frac{1}{M_{0}\left(\frac{1}{\mathbf{a}}\right)} = M_{0}(\mathbf{a}).$ Therefore, for any two real $s\;$ and $r,\;$ $s \lt r,\;$ we have a further generalization: $M_{s}(\mathbf{a}) \lt M_{r}(\mathbf{a}).\;$ (An interactive applet on the original page illustrates this property of the means: the number bars can be dragged up and down, while the exponent $r$ is changed by dragging a horizontal slider.) It's also easily seen that, unless all $a_{i}$'s are the same, $\min\{a_{i}\} \lt M_{r}(\mathbf{a}) \lt \max\{a_{i}\}.\;$ Furthermore, it can be shown that $M_{r}(\mathbf{a})\;$ approaches $\max\{a_{i}\}\;$ as $r\;$ grows without limit. Also, $M_{r}(\mathbf{a})\;$ approaches $\min\{a_{i}\}\;$ as $r\;$ tends to $-\infty.\;$ For this reason, we also define $M_{-\infty}(\mathbf{a}) = \min\{a_{i}\}\;$ and $M_{\infty}(\mathbf{a}) = \max\{a_{i}\}.\;$ Finally we can claim that, for any two real $s\;$ and $r,\;$ $s < r,\;$ and not all $a_{i}$'s equal, $M_{-\infty}(\mathbf{a}) \lt M_{s}(\mathbf{a}) \lt M_{r}(\mathbf{a}) \lt M_{\infty}(\mathbf{a}).$ Here's a proof for $M_{s}(\mathbf{a}) \lt M_{r}(\mathbf{a})\;$ (for $0 \lt s \lt r$): Hölder's inequality tells us that $\sum \mathbf{u}^{\alpha}\mathbf{v}^{1-\alpha}\le \left(\sum \mathbf{u}\right)^{\alpha}\cdot\left(\sum \mathbf{v}\right)^{1-\alpha},\,$ for $\alpha\in (0,1).$ Let $s=r\cdot\alpha,\,$ $\mathbf{u}=\displaystyle \frac{1}{n}\mathbf{a}^r,\,$ $\mathbf{v}=\displaystyle \frac{\mathbf{1}}{n}:$ then $\mathbf{u}^{\alpha}=n^{-\alpha}\mathbf{a}^s\,$ and $\sum \mathbf{u}^{\alpha}\mathbf{v}^{1-\alpha}=\sum n^{-\alpha}\mathbf{a}^s\cdot n^{\alpha-1}=\displaystyle \frac{\sum \mathbf{a}^s}{n}.$ On the other side, $\displaystyle\begin{align}\left(\sum \mathbf{u}\right)^{\alpha}\cdot\left(\sum \mathbf{v}\right)^{1-\alpha}&=\left(\frac{\sum \mathbf{a}^r}{n}\right)^{\frac{s}{r}}\cdot\left(\sum\frac{1}{n}\right)^{1-\frac{s}{r}}\\ &=\left(\frac{\sum \mathbf{a}^r}{n}\right)^{\frac{s}{r}}\cdot 1^{1-\frac{s}{r}}\\ &=\left(\frac{\sum \mathbf{a}^r}{n}\right)^{\frac{s}{r}}. \end{align}$ The combination yields $\displaystyle \frac{\sum \mathbf{a}^s}{n}\le\left(\frac{\sum \mathbf{a}^r}{n}\right)^{\frac{s}{r}},\,$ and the conclusion follows on raising both sides to the power $1/s.$ A question is often raised whether or not it is useful to study continuous functions that are not given by a single formula like $\displaystyle f(x) = \frac{x^{2} - 1}{x^{5} + 1}.\;$ For a fixed $\mathbf{a},$ $f(x) = M_{x}(\mathbf{a})\;$ serves as an example of a continuous function that can't be represented in such a manner.
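The chain of inequalities just established is easy to check numerically, and writing the check forces exactly the case analysis discussed here: the function below needs separate branches for $r=0$ and $r=\pm\infty$.

```python
import numpy as np

def power_mean(a, r):
    """M_r(a) for any real r, including r = 0 (geometric mean) and r = +/- infinity."""
    a = np.asarray(a, dtype=float)
    if r == 0:
        return float(np.exp(np.mean(np.log(a))))      # geometric mean M_0
    if np.isinf(r):
        return float(a.max() if r > 0 else a.min())   # M_{+inf} and M_{-inf}
    return float(np.mean(a ** r) ** (1.0 / r))

a = [20.0, 30.0, 45.0]                                # arbitrary positive numbers
for r in [-np.inf, -2, -1, 0, 1, 2, np.inf]:
    print(r, round(power_mean(a, r), 4))              # the printed values increase with r
```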
As we saw above, $f(0)\;$ requires an essentially different formula than the other values of $x.\;$ A Nice Exercise: given a trapezoid with bases of lengths $a\;$ and $b,\;$ consider 4 line segments parallel to the bases. Prove that the length of the segment that divides the trapezoid into two parts of equal area is the quadratic mean of $a\;$ and $b,\;$ the length of the segment half way between the bases is the arithmetic mean of $a\;$ and $b,\;$ the length of the segment through the point of intersection of the diagonals is the harmonic mean of $a\;$ and $b,\;$ and the length of the segment that divides the trapezoid into two similar ones is the geometric mean of $a\;$ and $b.\;$ Another Variant: let $AB\;$ be a diameter of the semicircle $ATB\;$ with center $O,\;$ where $CT\perp AB,\;$ $CF\perp OT,\;$ $OR\perp AB.\;$ Let $AC=a,\;$ $BC=b,\;$ $OR=\displaystyle\frac{a-b}{2}.\;$ Then $CT\;$ is the geometric mean of $a\;$ and $b,\;$ $OT\;$ is the arithmetic mean of $a\;$ and $b,\;$ $TF\;$ is the harmonic mean of $a\;$ and $b,\;$ and $AR\;$ is the quadratic mean of $a\;$ and $b.\;$ Obviously, $AR \ge AO=OT\ge CT\ge TF.$ It is worth remembering that all of the above inequalities between the means are strict unless all $a_{i}$'s are equal; when all $a_{i}$'s are equal, so are the means.
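The trapezoid exercise lends itself to a quick numerical check before attempting a proof. The following Python sketch (an added illustration, not part of the original page; the base lengths, height, and top-base offset are arbitrary test values) places a trapezoid in coordinates and confirms that the equal-area parallel segment has the length of the quadratic mean, the mid-height segment that of the arithmetic mean, and the segment through the intersection of the diagonals that of the harmonic mean; the similar-trapezoids case is left to the reader, as in the exercise.

```python
# Numerical check of the trapezoid exercise.  The values a, b, h, x0 are
# arbitrary: bottom base, top base, height, and horizontal offset of the top base.
import math

a, b, h, x0 = 4.0, 2.0, 1.0, 0.7

def length_at(y):
    """Length of the cross-section parallel to the bases at height y."""
    return a + (b - a) * y / h          # varies linearly between the two bases

def area_below(y):
    """Area of the part of the trapezoid below height y."""
    return a * y + (b - a) * y * y / (2 * h)

total_area = (a + b) * h / 2

# 1) Segment that halves the area -> quadratic mean of a and b
lo, hi = 0.0, h
for _ in range(100):                     # bisection on area_below(y) = total_area / 2
    mid = (lo + hi) / 2
    if area_below(mid) < total_area / 2:
        lo = mid
    else:
        hi = mid
print(length_at((lo + hi) / 2), math.sqrt((a * a + b * b) / 2))

# 2) Segment halfway between the bases -> arithmetic mean
print(length_at(h / 2), (a + b) / 2)

# 3) Segment through the intersection of the diagonals -> harmonic mean
# The diagonals (0,0)-(x0 + b, h) and (a,0)-(x0, h) meet at height a*h/(a + b).
y_cross = a * h / (a + b)
print(length_at(y_cross), 2 * a * b / (a + b))
```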
Protein hydrolysates in animal nutrition: Industrial production, bioactive peptides, and functional significance Yongqing Hou1, Zhenlong Wu2, Zhaolai Dai2, Genhu Wang3 & Guoyao Wu1,2,4 Recent years have witnessed growing interest in the role of peptides in animal nutrition. Chemical, enzymatic, or microbial hydrolysis of proteins in animal by-products or plant-source feedstuffs before feeding is an attractive means of generating high-quality small or large peptides that have both nutritional and physiological or regulatory functions in livestock, poultry and fish. These peptides may also be formed from ingested proteins in the gastrointestinal tract, but the types of resultant peptides can vary greatly with the physiological conditions of the animals and the composition of the diets. In the small intestine, large peptides are hydrolyzed to small peptides, which are absorbed into enterocytes faster than free amino acids (AAs) to provide a more balanced pattern of AAs in the blood circulation. Some peptides of plant or animal sources also have antimicrobial, antioxidant, antihypertensive, and immunomodulatory activities. Those peptides which confer biological functions beyond their nutritional value are called bioactive peptides. They are usually 2–20 AA residues in length but may consist of >20 AA residues. Inclusion of some (e.g. 2–8%) animal-protein hydrolysates (e.g., porcine intestine, porcine mucosa, salmon viscera, or poultry tissue hydrolysates) or soybean protein hydrolysates in practical corn- and soybean meal-based diets can ensure desirable rates of growth performance and feed efficiency in weanling pigs, young calves, post-hatching poultry, and fish. Thus, protein hydrolysates hold promise in optimizing the nutrition of domestic and companion animals, as well as their health (particularly gut health) and well-being. A protein is a macromolecule usually consisting of twenty different amino acids (AAs) linked via peptide bonds. Selenoproteins contain selenocysteine as a rare AA, but no free selenocysteine is present in animal cells. Protein is a major component of animal tissues (e.g., skeletal muscle, mammary glands, liver, and the small intestine) and products (e.g., meat, milk, egg, and wool). For example, the protein content in the skeletal muscle of growing beef cattle or pigs is approximately 70% on a dry-matter basis [1]. Thus, adequate intake of dietary protein is essential for maximum growth, production performance, and feed efficiency in livestock, poultry and fish. After being consumed in a meal by animals, the proteins in feed ingredients (e.g., blood meal, meat & bone meal, intestine-mucosa powder, fish meal, soybean meal, peanut meal, and cottonseed meal) are hydrolyzed into small peptides (di- and tri-peptides) and free AAs by proteases and oligopeptidases in the small intestine [2]; however, the types of resultant peptides can vary greatly with the physiological conditions of the animals and the composition of their diets. To consistently manufacture peptides from the proteins of animal and plant sources, robust chemical, enzymatic or microbial methods have been used before feeding to improve their nutritional quality and reduce any associated anti-nutritional factors [3, 4]. The last two methods can also improve the solubility, viscosity, emulsification, and gelation of peptides. In animal production, high-quality protein is not hydrolyzed as feed additives. 
Only animal byproducts, brewer's byproducts, and plant ingredients containing anti-nutritional factors are hydrolyzed to produce peptides for animal feeds. Proteases isolated from various sources (including bacteria, plants, and yeast) are used for the enzymatic method, whereas intact microorganisms are employed for culture in the microbial approach. To date, protein hydrolysates have been applied to such diverse fields as medicine, nutrition (including animal nutrition), and biotechnology [5]. The major objectives of this article are to highlight enzyme- and fermentation-based techniques for the industrial preparation of protein hydrolysates and to discuss the nutritional and functional significance of their bioactive peptides in animal feeding. Definitions of amino acids, peptides, and protein Amino acids are organic substances that contain both amino and acid groups. All proteinogenic AAs have an α-amino group and, except for glycine, occur as L-isomers in animals and feedstuffs. A peptide is defined as an organic molecule consisting of two or more AA residues linked by peptide bonds [2]. The formation of one peptide bond results in the removal of one water molecule. In most peptides, the typical peptide bonds are formed from the α-amino and α-carboxyl groups of adjacent AAs. Peptides can be classified according to the number of AA residues. An oligopeptide is comprised of 2 to 20 AA residues. Those oligopeptides containing ≤ 10 AA residues are called small oligopeptides (or small peptides), whereas those oligopeptides containing 10 to 20 AA residues are called large oligopeptides (or large peptides). A peptide, which contains ≥ 21 AA residues and does not have a 3-dimensional structure, is termed a polypeptide [6]. A protein consists of one or more high-molecular-weight polypeptides. The dividing line between proteins and polypeptides is usually their molecular weight. Generally speaking, polypeptides with a molecular weight of ≥ 8,000 Daltons (i.e., ≥ 72 AA residues) are referred to as proteins [6]. For example, ubiquitin (a single chain of 72 AA residues) and casein α-S1 (200 AA residues) are proteins, but glucagon (29 AA residues) and oxytocin (9 AA residues) are peptides. However, the division between proteins and peptides simply on the basis of their molecular weights is not absolute. For example, insulin [51 AA residues (20 in chain A and 31 in chain B)] is well recognized as a protein because it has the defined 3-dimensional structure exhibited by proteins. In contrast, PEC-60 (a single chain of 60 AA residues) [7] and dopuin (a single chain of 62 AA residues) [8], which are isolated from the pig small-intestinal mucosae, are called polypeptides. Figure 1 illustrates the four orders of protein structures (1): primary structure (the sequence of AAs along the polypeptide chain; (2) secondary structure (the conformation of the polypeptide backbone); (3) tertiary structure (the three-dimentional arrangement of protein); and (4) quaternary structure (the spatial arrangement of polypeptide subunits). The primary sequence of AAs in a protein determines its secondary, tertiary, and quaternary structures, as well as its biological functions. The forces stabilizing polypeptide aggregates are hydrogen and electrostatic bonds between AA residues. The four orders of protein structures. 
A protein has (1): a primary structure (the sequence of AAs along the polypeptide chain; (2) a secondary structure (the conformation of the polypeptide backbone); (3) a tertiary structure (the three-dimensional arrangement of protein); and (4) a quaternary structure (the spatial arrangement of polypeptide subunits). The primary sequence of AAs in a protein determines its secondary, tertiary, and quaternary structures, as well as its biological functions Trichloroacetic acid (TCA; the final concentration of 5%) or perchloric acid (PCA; the final concentration of 0.2 mol/L) can fully precipitate proteins, but not peptides, from animal tissues, cells, plasma, and other physiological fluids (e.g., rumen, allantoic, amniotic, intestinal-lumen fluids, and digesta) [9, 10]. Ethanol (the final concentration of 80%) can effectively precipitate both proteins and nucleic acids from aqueous solutions [11]. This method may be useful to remove water-soluble inorganic compounds (e.g., aluminum) from protein hydrolysates. Of note, 1% tungstic acid can precipitate both proteins and peptides with ≥ 4 AA residues [10]. Thus, PCA or TCA can be used along with tungstic acid to distinguish small and large peptides. Industrial production of protein hydrolysates General considerations of protein hydrolysis The method of choice for the hydrolysis of proteins depends on their sources. For example, proteins from feathers, bristles, horns, beaks or wool contain the keratin structure and, therefore, are usually hydrolyzed by acidic or alkaline treatment, or by bacterial keratinases [3]. In contrast, animal products (e.g., casein, whey, intestine, and meat) and plant ingredients (e.g., soy, wheat, rice, pea, and cottonseed proteins) are often subject to general enzymatic or microbial hydrolysis [4, 5]. The hydrolysis of proteins by cell-free proteases, microorganisms, acids, or bases results in the production of protein hydrolysates. The general procedures are outlined in Fig. 2. Depending on the method used, the hydrolysis times may range from 4 to 48 h. In cases where bacteriostatic or bactericidal preservatives (e.g., benzoic acid) are used in the prolonged hydrolysis of proteins by enzymes or microorganisms, the hydrolysis is usually terminated by heating to deactivate the enzyme or enzyme systems. After hydrolysis, the insoluble fractions are separated from the protein hydrolysates with the use of a centrifuge, a filter (e.g., with a 10,000 Dalton molecular weight cut-off), or a micro filtration system [3]. The filtration process is often repeated several times to obtain a desirable color and clarity of the solution. Charcoal powder is commonly used to decolorize and remove haze-forming components. If very low concentrations of salts are desired, the filtrate may be subjected to exchange chromatography to remove the salts. After filtration, the protein hydrolysate product is heat-treated (pasteurized) to kill or reduce the microorganisms. Finally, the product is dried and packaged. General procedures for the production of peptides from animal and plant proteins. Peptides (including bioactive peptides) can be produced from proteins present in animal products (including by-products) or plant-source feedstuffs material (e.g., soybeans and wheat) through chemical, enzymatic, or microbial hydrolysis. 
These general procedures may need to be modified for peptide production, depending on protein sources and product specifications. Degree of hydrolysis The protein hydrolysates include free AAs, small peptides, and large peptides. The proportions of these products vary with the sources of proteins, the quality of water, the type of proteases, and the species of microbes. The degree of hydrolysis, i.e., the extent to which the protein is hydrolyzed, is measured by the number of peptide bonds cleaved, divided by the total number of peptide bonds in a protein and multiplied by 100 [3]. The number of peptide bonds cleaved is measured by the moles of free AAs plus the moles of TCA- or PCA-soluble peptides. Due to the lack of standards for all the peptides generated from protein hydrolysis, it is technically challenging to quantify peptides released from animal-, plant-, or microbial-source proteins. The percentage of AAs in the free form or the peptide form is calculated as follows: $$ \text{Percentage of AAs in the free form } (\%)=\frac{\text{Total free AAs}}{\text{Total AAs in protein}}\times 100\%; $$ $$ \text{Percentage of AAs in peptides } (\%)=\frac{\text{Total AAs in peptides}}{\text{Total AAs in protein}}\times 100\%. $$ When the catabolism of AAs is limited (as in enzymatic hydrolysis), the percentage of AAs in peptides is calculated as (total AAs in protein – free AAs)/total AAs in protein × 100%. High-performance liquid chromatography (HPLC) is widely used to determine free AAs [12]. HPLC and other analytical techniques (e.g., nuclear magnetic resonance spectroscopy, matrix assisted laser desorption ionization-time of flight mass spectrometry, peptide mapping, and ion-exchange chromatography) are often employed to characterize peptides in protein hydrolysates [13, 14]. When standards are available, HPLC can be used to analyze peptides. Methods for protein hydrolysis Acid hydrolysis of proteins Acid hydrolysis of a protein (gelatin) at a high temperature was first reported by the French chemist H. Braconnot in 1820. It is now established that the complete hydrolysis of protein in 6 mol/L HCl occurs at 110 °C for 24 h [12]. A much shorter period of time (e.g., 2 to 6 h) is used to produce peptides [3]. After the hydrolysis, the product is evaporated, pasteurized, and spray dried. The majority of acid protein hydrolysates are used as flavor enhancers (e.g., flavoring products such as hydrolyzed vegetable protein) [5]. The method of acid hydrolysis of a protein offers the advantage of low cost. However, this process results in the complete destruction of tryptophan, a partial loss of methionine, and the conversion of glutamine into glutamate and of asparagine into aspartate [5]. Alkaline hydrolysis of proteins Alkaline agents, such as calcium, sodium, or potassium hydroxide (e.g., 4 mol/L), can be used at a high temperature (e.g., 105 °C) for 20 h to completely hydrolyze protein [12, 15].
Lower temperatures (e.g., 27 to 55 °C) and a shorter period of the hydrolysis time (e.g., 4 to 8 h) are often desirable for the generation of peptides in the food industry [5]. After the hydrolysis, the product is evaporated, pasteurized, and spray dried. Like acid hydrolysis of proteins, alkaline hydrolysis of proteins offers the advantage of low cost and can have a 100% recovery rate of tryptophan [12]. However, this process results in the complete destruction of most AAs (e.g., 100% loss). Thus, although alkaline hydrolysis is often used for the production of foaming agents (e.g., substitutes for egg proteins) and fire extinguisher foams, it is not widely used in the food industry. Cell-free proteases The peptide bonds of proteins can be broken down by many different kinds of proteases, which can be classified as exopeptidases and endopeptidases based on the type of reaction, namely hydrolysis of a peptide bond in the terminal region (an exopeptidase) or within an internal region (an endopeptidase) of a protein [2]. Some proteases hydrolyze dipeptides (dipeptidases), whereas others remove terminal AA residues that are substituted, cyclized, or linked by isopeptide bonds (namely peptide linkages other than those of α-carboxyl to α-amino groups; e.g., ω-peptidases). When a protease exhibits a marked preference for a peptide bond formed from a particular AA residue, the name of this AA is used to form a qualifier (e.g., "leucine" aminopeptidase and "proline" endopeptidase). In contrast, for enzymes with very complex or broad specificity, alphabetical or numerical serial names (e.g., peptidyl-dipeptidase A, peptidyl-dipeptidase B, dipeptidyl-peptidase I, and dipeptidyl-peptidase II) are employed for protein hydrolysis. Some proteases may have both exopeptidase and endopeptidase properties (e.g., cathepsins B and H). Enzymatic hydrolysis takes place under mild conditions (e.g., pH 6–8 and 30 - 60 °C) and minimizes side reactions. Most of the cell-free enzymes for producing protein hydrolysates are obtained from animal, plant and microbial sources (Table 1). Enzymes of animal sources (particularly pigs) for protein hydrolysis are pancreatin, trypsin, pepsin, carboxylpeptidases and aminopeptidases; enzymes of plant sources are papain and bromelain; and enzymes of bacterial and fungal sources are many kinds of proteases with a broad spectrum of optimal temperatures, pH, and ion concentrations [16, 17]. The enzymes from commercial sources may be purified, semi-purified, or crude from the biological sources. The hydrolysis of proteins can be achieved by a single enzyme (e.g., trypsin) or multiple enzymes (e.g., a mixture of proteases known as Pronase, pepsin and prolidase). The choice of enzymes depends on the protein source and the degree of hydrolysis. For example, if the protein has a high content of hydrophobic AAs, the enzyme of choice would be the one that preferentially breaks downs the peptide bonds formed from these AAs. Fractionation of protein hydrolysates is often performed to isolate specific peptides or remove undesired peptides. It is noteworthy that the hydrolysis of some proteins (e.g., soy proteins and casein with papain for 18 h) can generate hydrophobic peptides and AAs with bitterness [18]. The addition of porcine kidney cortex homogenate or activated carbon to protein hydrolysates can reduce the bitterness of the peptide product [3]. 
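To make cleavage specificity and the degree of hydrolysis concrete, here is a minimal in-silico digestion sketch (an added illustration, not part of the original review). It assumes deliberately simplified cleavage rules, namely trypsin-like cleavage after Lys (K) and Arg (R) and chymotrypsin-like cleavage after Phe (F), Trp (W), Tyr (Y) and Leu (L); it ignores incomplete cleavage and exopeptidase trimming and uses an arbitrary toy sequence, and is only meant to show how the choice of protease changes the peptide profile and the calculated degree of hydrolysis.

```python
# Simplified in-silico digestion: cleavage is reduced to "cut after these residues",
# ignoring real-world factors such as proline effects, partial cleavage, and
# exopeptidase activity.  The sequence is an arbitrary toy example.

def digest(sequence, cut_after):
    """Cut the sequence immediately after any residue in `cut_after`."""
    peptides, current = [], ""
    for residue in sequence:
        current += residue
        if residue in cut_after:
            peptides.append(current)
            current = ""
    if current:
        peptides.append(current)
    return peptides

def degree_of_hydrolysis(sequence, peptides):
    """Peptide bonds cleaved / total peptide bonds x 100, as defined above."""
    total_bonds = len(sequence) - 1
    cleaved_bonds = len(peptides) - 1
    return 100.0 * cleaved_bonds / total_bonds

toy_protein = "MKWVTFLLLLFISGSAFSRGVFRRDTHKSEIAHRFKDLGE"

for name, rule in [("trypsin-like", set("KR")),
                   ("chymotrypsin-like", set("FWYL"))]:
    peptides = digest(toy_protein, rule)
    dh = degree_of_hydrolysis(toy_protein, peptides)
    print(f"{name}: {len(peptides)} peptides, degree of hydrolysis = {dh:.1f}%")
    print("  ", peptides)
```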
Compared to acid and alkaline hydrolysis of proteins, the main advantages of enzyme hydrolysis of proteins are that: (a) the hydrolysis conditions (e.g., like temperature and pH) are mild and do not result in any loss of AAs; (b) proteases are more specific and precise to control the degree of peptide-bond hydrolysis; and (c) the small amounts of enzymes can be easily deactivated after the hydrolysis (e.g., 85 °C for 3 min) to facilitate the isolation of the protein hydrolysates. The disadvantages of enzymatic hydrolysis of protein include the relatively high cost and the potential presence of enzyme inhibitors in the raw protein materials. Table 1 Proteases commonly used for protein hydrolysis The efficiency and specificity of protein hydrolysis differ between microbial- and animal-source proteases [19], as reported for their lipase activity [20, 21]. For example, the hydrolysis of 18 mg casein by 40 μg pancreatin (pancreatic enzymes from the porcine pancreas) for 1–2 h in a buffer solution (43 mmol/L NaCl, 7.3 mmol/L disodium tetraborate, 171 mmol/L boric acid and 1 mmol/L CaCl2, pH 7.4) yields the numbers and sequences of peptides differently than NS 4 proteases (from Nocardiopsis prasina) and NS 5 proteases (from Bacillus subtilis) [19]. The pancreatin exhibited the activities of trypsin (cleavage of peptide bonds from Arg and Lys sites), chymotrypsin (cleavage of peptide bonds from Phe, Trp, Tyr, and Leu), and elastase (cleavage of peptide bonds from (Ala and other aliphatic AAs). In contrast, the microbial proteases were characterized by relatively low trypsin, carboxypeptidase and elastase activities, but high chymotrypsin activity. The time course of hydrolysis is similar among the microbial and pancreatic enzymes when caseins are the substrates. However, the rates of peptide generation are higher for pancreatin than the microbial enzymes when soya protein is the substrate. The efficiency of production of peptides with a molecular weight less than 3 kDa is higher for pancreatin than the microbial enzymes when caseins are substrates, but is similar among the microbial and pancreatic enzymes when soya protein is the substrate. In contrast, the efficiency of production of peptides with a molecular weight between 3 and 10 kDa is similar among the microbial and pancreatic enzymes when caseins are substrates or when soya protein is the substrate within 1 h incubation, but is higher for the microbial enzymes than the animal-source pancreatin. Microbial hydrolysis of protein Microorganisms release proteases to hydrolyze extracellular proteins into large peptides, small peptides and free AAs. Small peptides can be taken up by the microbes to undergo intracellular hydrolysis, yielding free AAs. Microorganisms also produce enzymes other than proteases to degrade complex carbohydrates and lipids [22]. Protein fermentation is classified into a liquid- or solid-state type. Liquid-state fermentation is performed with protein substrates under high-moisture fermentation conditions, whereas the solid-state fermentation is carried out under low-moisture fermentation conditions. The low moisture level of the solid-state fermentation can help to reduce the drying time for protein hydrolysates. Soy sauce (also called soya sauce), which originated in China in the 2nd century AD, was perhaps the earliest product of protein fermentation by microorganisms [3]. The raw materials were boiled soybeans, roasted grain, brine, and Aspergillus oryzae or Aspergillus sojae (a genus of fungus). 
In Koji culturing, an equal amount of boiled soybeans and roasted wheat is cultured with Aspergillus oryzae, A. sojae, and A. tamari; Saccharomyces cerevisiae (yeasts), and bacteria, such as Bacillus and Lactobacillus species. Over the past two decades, various microorganisms have been used to hydrolyze plant-source proteins, such as Lactobacillus rhamnosus BGT10 and Lactobacillus zeae LMG17315 for pea proteins, Bacillus natto or B. subtilis for soybean, and fungi A. oryzae or R. oryzae for soybean [23–25]. Lactic acid bacteria, such as Lactobacillus and Lactococcus species, are commonly used to ferment milk products. The major advantages of fermentation are that the appropriately used microorganisms can not only break down proteins into peptides and free AAs, but can also remove hyper-allergic or anti-nutritional factors present in the matrix of the ingredients (e.g., trypsin inhibitors, glycinin, β-conglycinin, phytate, oligosaccharides raffinose and stachyose, saponins in soybeans). The disadvantages of the microbial hydrolysis of protein are relatively high costs, as well as changes in microbial activity under various conditions and, therefore, inconsistency in the production of peptides and free AAs. Bioactive peptides in protein hydrolysates Bioactive peptides are defined as the fragments of AA sequences in a protein that confer biological functions beyond their nutritional value [25]. They have antimicrobial, antioxidant, antihypertensive, and immunomodulatory activities. These bioactive peptides are usually 2–20 AA residues in length, but some may consist of >20 AA residues [23]. Many of them exhibit common structural properties, such as a relatively small number of AAs, a high abundance of hydrophobic AA residues, and the presence of Arg, Lys, and Pro residues [24]. In animals, endogenous peptides fulfil crucial physiological or regulatory functions. For example, PEC-60 activates Na/K ATPase in the small intestine and other tissues [26]. Additionally, many intestinal peptides (secreted by Paneth cells) have an anti-microbial function [27]. Furthermore, the brain releases numerous peptides to regulate endocrine status, food intake, and behavior in animals [28]. Transport of small peptides in the small intestine In the small intestine, peptide transporter 1 (PepT1) is responsible for the proton-driven transport of extracellular di- and tri-peptides through the apical membrane of the enterocyte into the cell [29]. However, due to the high activity of intracellular peptidases in the small intestine [2], it is unlikely that a nutritionally significant quantity of peptides in the lumen of the gut can enter the portal vein or the lymphatic circulation. It is possible that a limited, but physiologically significant, amount of peptides (particularly those containing an imino acid) may be absorbed intact from the luminal content to the bloodstream through M cells, exosomes, and enterocytes via transepithelial cell transport [30, 31]. Diet-derived peptides can exert their bioactive (e.g., physiological and regulatory) actions at the level of the small intestine, and the intestinally-generated signals can be transmitted to the brain, the endocrine system, and the immune system of the body to beneficially impact the whole body. ACE-inhibitory peptides The first food-derived bioactive peptide, which enhanced vitamin D-independent bone calcification in rachitic infants, was produced from casein [32]. 
To date, many angiotensin-I converting enzyme (ACE)-inhibitory peptides have been generated from milk or meat (Table 2). ACE removes the C-terminal dipeptide His-Leu in angiotensin I (Ang I) to form Ang II (a potent vasoconstrictory peptide), thereby conferring their anti-hypertensive effects [33]. The best examples for ACE-inhibitory peptides are Ile-Pro-Pro (IPP) and Val-Pro-Pro (VPP), both of which are derived from milk protein through the hydrolysis of neutral protease, alkaline protease or papain [34]. There is evidence that these two proline-rich peptides may partially escape gastrointestinal hydrolysis and be transported across the intestinal epithelium into the blood circulation [35]. Similarly, the hydrolysis of proteins from meat [36] and egg yolk [37] also generates potent ACE inhibitors. Table 2 Antihypertensive peptides generated from the hydrolysis of animal products Antioxidative and antimicrobial peptides Many small peptides from animal products (e.g., fish and meat) (Table 3) and plant-source feedstuffs [25] have anti-oxidative functions by scavenging free radicals and/or inhibiting the production of oxidants and pro-inflammatory cytokines [38–41]. These small peptides can reduce the production of oxidants by the small intestine, while enhancing the removal of the oxidants, resulting in a decrease in their intracellular concentrations and alleviating oxidative stress (Fig. 3). Many of the bioactive peptides have both ACE-inhibitory and anti-oxidative effects [36, 37]. Additionally, some peptides from animal (Table 4) and plant protein-hydrolysates [25] also have antimicrobial effects, as reported for certain endogenous peptides in the small intestine [27]. These antimicrobial peptides exert their actions by damaging the cell membrane of bacteria, interfering with the functions of their intracellular proteins, inducing the aggregation of cytoplasmic proteins, and affecting the metabolism of bacteria [42–44], but the underlying mechanisms remain largely unknown [27]. Table 3 Antioxidative peptides generated from the hydrolysis of animal proteins Inhibition of cellular oxidative stress by dietary small peptides in the small intestine. The small peptides, which are supplemented to the diets of animals (particularly young animals), can reduce the production of oxidants by the small intestine and enhance the removal of the oxidants, leading to a decrease in their intracellular concentrations and alleviating oxidative stress. (−), inhibition; (+), activation; ↓, decrease Table 4 Antimicrobial peptides generated from the hydrolysis of animal proteins or synthesized by intestinal mucosal cells Opioid peptides The hydrolysis of certain proteins [e.g., casein, gluten (present in wheat, rye and barley), and soybeans] in the gastrointestinal tract can generate opioid peptides [45]. This can be performed in vitro by using digestive enzymes from the small intestine of mammals (e.g., pigs). Opioid peptides are oligopeptides (typically 4–8 AA residues in length) that bind to opioid receptors in the brain to affect the gut function [46, 47], as well as the behavior and food intake of animals (Table 5). Furthermore, the protein hydrolysates containing opioid-like peptides may be used as feed additives to alleviate stress, control pain and sleep, and modulate satiety in animals. 
Table 5 Opioid peptides generated from the enzymatic hydrolysis of animal and plant proteins in the gastrointestinal tract Applications of plant- and animal-protein hydrolysates in animal nutrition General consideration A major goal for animal agriculture is to enhance the efficiency of feed utilization for milk, meat and egg production [48]. This approach requires optimal nutrition to support the function of the small intestine as the terminal site for the digestion and absorption of dietary nutrients [49]. To date, peptides generated from the hydrolysis of plant and animal proteins are included in the diets for feeding pigs, poultry, fish, and companion animals. The outcomes are positive and cost-effective for the improvement of intestinal health, growth and production performance [50]. The underlying mechanisms may be that: (a) the rate of absorption of small peptides is greater than that of an equivalent amount of free AAs; (b) the rate of catabolism of small peptides by the bacteria of the small intestine is lower than that of an equivalent amount of free AAs; (c) the composition of AAs entering the portal vein is more balanced with the intestinal transport of small peptides than that of individual AAs; (e) provision of functional AAs (e.g., glycine, arginine, glutamine, glutamate, proline, and taurine) to enhance anti-oxidative reactions and muscle protein synthesis [51, 52]; and (e) specific peptides can improve the morphology, motility and function of the gastrointestinal tract (e.g., secretion, motility, and anti-inflammatory reactions), endocrine status in favor of anabolism, and feed intake, compared with an equivalent amount of free AAs. In swine nutrition research, most of the studies involving the addition of peptides to diets have been conducted with post-weaning pigs to improve palatability, growth, health, and feed efficiency [53–58]. This is primarily because young animals have immature digestive and immune systems and weanling pigs suffer from reduced feed intake, gut atrophy, diarrhea, and impaired growth. Moreover, peptide products have been supplemented to the diets of calves [59], poultry [60, 61], fish [62, 63], and companion animals [64] to improve their nutrition status, gut function, and abilities to resist infectious diseases. Plant peptides As noted previously, plant-source protein ingredients often contain allergenic proteins and other anti-nutritional factors which can limit their practical use, particularly in the diets of young animals [50] and companion animals [64]. For example, soybeans can be processed to manufacture soybean meal and soybean protein concentrates for the elimination of some anti-nutritional substances. However, the soy products still contain considerable amounts of protein-type allergens (e.g., glycinin and β-conglycinin) and significant quantities of trypsin inhibitors, lectins (hemagglutinins), phytic acid, soy oligosaccharides (raffinose and stachyose), and steroid glycosides (soy saponins) [18, 24, 25]. Fermentation of soybeans by the commonly used microorganism (e.g., Aspergillus species, Bacillus species, and Lactobacillus species) has been reported to improve growth performance and feed efficiency in weanling pigs [50]. Thus, 3- to-7-week-old pigs fed a corn- and soybean meal-based diet containing 3% or 6% fermented soybean meal grew at a rate comparable to that of the same percentage of dried skim milk [54]. 
Likewise, 4.9% fermented soybean meal could replace 3.7% spray-dried plasma protein in the diets of 3- to-7-week-old pigs fed a corn- and soybean meal-based diet without affecting growth performance or feed efficiency [54]. Similar results were obtained for the Atlantic salmon fed a diet containing 40% protein from fermented soy white flakes [60]. Of interest, 50% of fish meal in the diet of juvenile red sea bream can be replaced by the same percentage of soybean protein hydrolysate [63]. The inclusion of plant-protein hydrolysate in diets is important in aquaculture because fish meal is becoming scarce worldwide. Furthermore, as a replacement of the expensive skim milk powder, the hydrolysate of soy protein isolate (19.7% in diet) can be used to sustain high growth-performance in calves [59]. Finally, acidic hydrolysates of plant proteins (e.g., wheat gluten which contains a high amount of glutamine plus glutamate), often called hydrolyzed vegetable proteins, can be included at a 1 to 2% level in the diets of companion animals to provide savory flavors due to the high abundance of glutamate in the products [64]. Animal peptides Postweaning piglets fed a diet containing 6% spray-dried porcine intestine hydrolysate (SDPI; the co-product of heparin production) for 2 wk had better growth performance than those fed the control diet, the basal diet containing spray-dried plasma, or the basal diet containing dried whey [55, 56]. There was a carry-over effect on enhancing growth performance during weeks 3–5 postweaning in piglets that were previously fed the SDPI [56], which was likely due to an increased area of the intestinal villus as well as improved digestion and absorption of dietary nutrients [57]. Similarly, Stein (2002) reported that piglets (weaned at 20 days of age) fed a weanling diet containing 1.5, 3 or 4.5% SDPI had better growth performance and greater feed efficiency in comparison to piglets consuming the same amount of a fish meal-supplemented diet. Of note, these effects of the SDPI supplementation were dose-dependent. In addition, postweaning piglets fed a corn-, soybean meal-, and dried whey-based diet containing 6% enzymatically hydrolyzed proteins (from blend of swine blood and selected poultry tissues) exhibited a growth rate and a feed efficiency that were comparable to those for piglets fed a diet containing the same percentage of spray-dried blood cells [53]. Likewise, the inclusion of 2.5 5 or 7.5% hydrolyzed porcine mucosa in a corn- and soybean meal-based diet enhanced daily weight gain and nutrient retention in growing chicks [61]. Furthermore, broilers fed a diet containing 5% Atlantic salmon protein hydrolysates (from the viscera) had better growth performance than those fed a diet with or without 4% fish meal [60]. Finally, addition of the protein hydrolysate of fish by-products to the diet (at a 10% inclusion level) improved intestinal development, growth, immunological status, and survival in European sea bass larvae challenged with Vibrio anguillarum (a Gram-negative bacterium) [65]. Thus, SDPI or other hydrolysates of animal proteins hold promise for animal production. 
Potential scale and economic value for the global use of animal and plant protein hydrolysates in animal feeding Industrial processing of domestic farm animals generates large amounts of tissues (30–40% of body weight) not consumed by humans, including viscera, carcass-trimmings, bone (20–30% of body weight), fat, skin, feet, small-intestinal tissue (2% of body weight), feather (up to 10% of body weight), and collectible blood (5% body weight), with the global human-inedible livestock and poultry byproducts being ~54 billion kg/yr [66–68]. Likewise, fish processing industries produce large amounts of wastes (up to 55% of body weight), such as muscle-trimmings (15–20%), skin and fins (1–3%), bones (9–15%), heads (9–12%), viscera (12–18%), and scales, with the global human-inedible fish byproducts being ~6 billion kg/yr [66–69]. Thus, the global annual volume of total animal by-products generated by the processing industries is approximately 60 billion kg annually. Assuming that only 5% of the animal by-products and plant products for feed are used for protein hydrolysis, and based on the current average prices of animal, soybean, and wheat protein hydrolysates [70], their yields are 3, 6.75 and 12.75 billion kg/yr, respectively, and their economic values are 4.5, 3.88 and 20.02 billion US $/yr (Table 6). Thus, protein hydrolysates from the by-products of pigs or poultry and from plant ingredients hold great promise in sustaining the animal agriculture and managing companion animals worldwide. Table 6 Potential scale and economic values for the global use of animal and plant protein hydrolysates (PH) in animal feeding Future research directions The nutritional value of protein hydrolysates as flavor enhancers, functional ingredients, and precursors for protein synthesis depends on the composition of free AAs, small peptides and large peptides in the products, as well as their batch-to-batch consistence. At present, such data are not available for the commercially available products of animal or plant hydrolysates and should be obtained with the use of HPLC and mass spectrometry. Only when the composition of protein hydrolysates is known, can we fully understand their functionally active components and the mechanisms of their actions. In addition, the net rates of the transport of small peptides across the small intestine are not known for all the protein hydrolysates currently used in animal feeding. This issue can be readily addressed with the use of Ussing chambers [71]. There is also concern that some animal protein hydrolysates, which contain a high proportion of oligopeptides with a high abundance of basic AAs, have a low palatability for animals (particularly weanling piglets), and, therefore, the inclusion of the protein hydrolysates in animal feeds may be limited. Such a potential problem may be substantially alleviated through: (a) the addition of exopeptidases and a longer period of hydrolysis to remove basic and aliphatic AAs from the C- and N-terminals of the polypeptides; and (b) appropriate supplementation with glycine, monosodium glutamate and inosine. Furthermore, the role of animal and plant protein hydrolysates in the signaling of intestinal epithelial cells and bacteria and metabolic regulation in these cells should be investigated to better understand how these beneficial products improve gut integrity, immunity, and health. 
Finally, the potential of protein hydrolysates as alternatives to dietary antibiotics should be explored along with studies to elucidate the underlying mechanisms. All these new lines of research will be particularly important for animals with compromised intestinal structure and function (e.g., neonates with intrauterine growth restriction and early-weaned mammals) and raised under adverse environmental conditions (e.g., high or low ambient temperatures). Plant- and animal-protein hydrolysates provide highly digestible peptides and bioactive peptides, as well as specific AAs (e.g., glutamate) to confer nutritional and physiological or regulatory functions in animals. The industrial production of these protein hydrolysates involves: (a) strong acidic or alkaline conditions, (b) mild enzymatic methods, or (c) fermentation by microorganisms. The degree of hydrolysis is assessed by the number of peptide bonds cleaved divided by the total number of peptide bonds in a protein. Chemical hydrolysis is often employed to generate savory flavors, whereas microbial fermentation not only produces peptides but also removes anti-nutritional factors in protein ingredients. In addition to their nutritional value to supply AAs, bioactive peptides (usually 2–20 AA residues in length) have antimicrobial, antioxidant, antihypertensive, and immunomodulatory roles. These peptides exert beneficial effects on improving intestinal morphology, function, and resistance to infectious diseases in animals (including pigs, calves, chickens, companion animals, and fish), thereby enhancing their health and well-being, as well as growth performance and feed efficiency. This provides a cost-effective approach to converting animal by-products, brewer's byproducts, or plant feedstuffs into high-quality protein-hydrolysate ingredients to feed livestock, poultry, fish, and companion animals. ACE: Angiotensin-I converting enzyme HPLC: High-performance liquid chromatography Perchloric acid Protein hydrolysates SDPI: Spray-dried porcine intestine hydrolysate TCA: Wu G, Cross HR, Gehring KB, Savell JW, Arnold AN, McNeill SH. Composition of free and peptide-bound amino acids in beef chuck, loin, and round cuts. J Anim Sci. 2016;94:2603–13. Wu G. Amino acids: biochemistry and nutrition. Boca Raton: CRC Press; 2013. Pasupuleki VK, Braun S. State of the art manufacturing of protein hydrolysates. In: Pasupuleki VK, Demain AL, editors. Protein hydrolysates in biotechnology. New York: Springer Science; 2010. p. 11–32. Dieterich F, Rogerio W, Bertoldo MT, da Silva VSN, Gonçalves GS, Vidotti RM. Development and characterization of protein hydrolysates originated from animal agro industrial byproducts. J Dairy Vet Anim Res. 2014;1:00012. Pasupuleki VK, Holmes C, Demain AL. Applications of protein hydrolysates in biotechnology. In: Pasupuleki VK, Demain AL, editors. Protein hydrolysates in biotechnology. New York: Springer Science; 2010. p. 1–9. Kyte J. Structure in protein chemistry. 2nd ed. New York: Garland Science; 2006. p. 832. Agerberth B, Söderling-Barros J, Jörnvall H, Chen ZW, Ostenson CG, Efendić S, et al. Isolation and characterization of a 60-residue intestinal peptide structurally related to the pancreatic secretory type of trypsin inhibitor: influence on insulin secretion. Proc Natl Acad Sci U S A. 1989;86:8590–4. Chen ZW, Bergman T, Ostenson CG, Efendic S, Mutt V, Jörnvall H. Characterization of dopuin, a polypeptide with special residue distributions. Eur J Biochem. 1997;249:518–22. Rajalingam D, Loftis C, Xu JJ, Kumar TKS. 
Trichloroacetic acid-induced protein precipitation involves the reversible association of a stable partially structured intermediate. Protein Sci. 2009;18:980–93. Moughan PJ, Darragh AJ, Smith WC, Butts CA. Perchloric and trichloroacetic acids as precipitants of protein in endogenous ileal digesta from the rat. J Sci Food Agric. 1990;52:13–21. Wilcockson J. The differential precipitation of nucleic acids and proteins from aqueous solutions by ethanol. Anal Biochem. 1975;66:64–8. Dai ZL, Wu ZL, Jia SC, Wu G. Analysis of amino acid composition in proteins of animal tissues and foods as pre-column o-phthaldialdehyde derivatives by HPLC with fluorescence detection. J Chromatogr B. 2014;964:116–27. Sapan CV, Lundblad RL. Review of methods for determination of total protein and peptide concentration in biological samples. Proteomics Clin Appl. 2015;9:268–76. Larive CK, Lunte SM, Zhong M, Perkins MD, Wilson GS, Gokulrangan G, et al. Separation and analysis of peptides and proteins. Anal Chem. 1999;71:389R–423R. McGrath R. Protein measurement by ninhydrin determination of amino acids released by alkaline hydrolysis. Anal Biochem. 1972;49:95–102. Kunst T. Protein modification in optimize functionality: protein hydrolysates. In: Whitaker J, Voragen A, Wong D, editor. Handbook of food enzymology. New York: Marcel Dekker; 2003. p. 222–36. Dixon MM, Webb EC. Enzymes. 3rd ed. New York: Academic; 1979. Kim MR, Kawamura Y, Lee CH. Isolation and identification of bitter peptides of tryptic hydrolysate of soybean 11S glycinin by reverse-phase high-performance liquid chromatography. J Food Sci. 2003;68:2416–22. Andriamihaja M, Guillot A, Svendsen A, Hagedorn J, Rakotondratohanina S, Tome' D, et al. Comparative efficiency of microbial enzyme preparations versus pancreatin for in vitro alimentary protein digestion. Amino Acids. 2013;44:563–72. Layer P, Keller J. Lipase supplementation therapy: standards, alternatives, and perspectives. Pancreas. 2003;26:1–7. Sikkens EC, Cahen DL, Kuipers EJ, Bruno MJ. Pancreatic enzyme replacement therapy in chronic pancreatitis. Best Pract Res Clin Gastroenterol. 2010;24:337–47. Smid EJ, Lacroix C. Microbe-microbe interactions in mixed culture food fermentations. Curr Opin Biotechnol. 2013;24:148–54. Bah CS, Carne A, McConnell MA, Mros S, Bekhit A-D. Production of bioactive peptide hydrolysates from deer, sheep, pig and cattle red blood cell fractions using plant and fungal protease preparations. Food Chem. 2016;202:458–66. Li-Chan ECY. Bioactive peptides and protein hydrolysates: research trends and challenges for application as nutraceuticals and functional food ingredients. Curr Opin Food Sci. 2015;1:28–37. López-Barrios L, Gutiérrez-Uribe JA, Serna-Saldívar SO. Bioactive peptides and hydrolysates from pulses and their potential use as functional ingredients. J Food Sci. 2014;79:R273–83. Kairane C, Zilmer M, Mutt V, Sillard R. Activation of Na, K-ATPase by an endogenous peptide, PEC-60. FEBS Lett. 1994;345:1–4. Bevins CL, Salzman NH. Paneth cells, antimicrobial peptides and maintenance of intestinal homeostasis. Nature Rev Microbiol. 2011;9:356–68. Engel JA, Jerlhag E. Role of appetite-regulating peptides in the pathophysiology of addiction: implications for pharmacotherapy. CNS Drugs. 2014;28:875–86. Zhanghi BM, Matthews JC. Physiological importance and mechanisms of protein hydrolysate absorption. In: Pasupuleki VK, Demain AL, editors. Protein hydrolysates in biotechnology. New York: Springer Science; 2010. p. 135–77. Gardner ML. 
Absorption of intact peptides: studies on transport of protein digests and dipeptides across rat small intestine in vitro. Q J Exp Physiol. 1982;67:629–37. Gardner ML, Wood D. Transport of peptides across the gastrointestinal tract. Biochem Soc Trans. 1989;17:934–7. Mellander O. The physiological importance of the casein phosphopeptide calcium salts. II. Peroral calcium dosage of infants. Acta Soc Med Ups. 1950;55:247–55. Ryan JT, Ross RP, Bolton D, Fitzgerald GF, Stanton C. Bioactive peptides from muscle sources: meat and fish. Nutrients. 2011;3:765–91. Power O, Jakeman P, FitzGerald RJ. Antioxidative peptides: enzymatic production, in4vitro and in vivo antioxidant activity and potential applications of milk-derived antioxidative peptides. Amino Acids. 2013;44:797–820. Martínez-Augustin O, Rivero-Gutiérrez B, Mascaraque C, de Medina FS. Food-derived bioactive peptides and intestinal barrier function. Int J Mol Sci. 2014;15:22857–73. Ryder K, Ael-D B, McConnell M, Carne A. Towards generation of bioactive peptides from meat industry waste proteins: Generation of peptides using commercial microbial proteases. Food Chem. 2016;208:42–50. Zambrowicz A, Pokora M, Setner B, Dąbrowska A, Szołtysik M, Babij K, et al. Multifunctional peptides derived from an egg yolk protein hydrolysate: isolation and characterization. Amino Acids. 2015;47:369–80. Shimizu M, Son DO. Food-derived peptides and intestinal functions. Curr Pharm Des. 2007;13:885–95. Bah CS, Bekhit A-D, McConnell MA, Carne A. Generation of bioactive peptide hydrolysates from cattle plasma using plant and fungal proteases. Food Chem. 2016;213:98–107. Memarpoor-Yazdia M, Asoodehb A, Chamania JK. A novel antioxidant and antimicrobial peptide from hen egg white lysozyme hydrolysates. J Funct Foods. 2012;4:278–86. Power O, Jakeman P, FitzGerald RJ. Antioxidative peptides: enzymatic production, in vitro and in vivo antioxidant activity and potential applications of milk-derived antioxidative peptides. Amino Acids. 2013;44:797–820. Lima CA, Campos JF, Filho JLM, Converti A, da Cunha MGC, Porto ALF. Antimicrobial and radical scavenging properties of bovine collagen hydrolysates produced by Penicillium aurantiogriseum URM 4622 collagenase. J Food Sci Technol. 2015;52:4459–66. Osman A, Goda HA, Abdel-Hamid M, Badran SM, Otte J. Antibacterial peptides generated by Alcalse hydrolysis of goat whey. LWT-Food Sci Technol. 2016;65:480–86. Wald M, Schwarz K, Rehbein H, Bußmann B, Beermann C. Detection of antibacterial activity of an enzymatic hydrolysate generated by processing rainbow trout by-products with trout pepsin. Food Chem. 2016;205:221–28. Froetschel MA. Bioactive peptides in digesta that regulate gastrointestinal function and intake. J Anim Sci. 1996;74:2500–8. San Gabriel A, Uneyama H. Amino acid sensing in the gastrointestinal tract. Amino Acids. 2013;45:451–61. Fernstrom JD. Large neutral amino acids: dietary effects on brain neurochemistry and function. Amino Acids. 2013;45:419–30. Wu G, Fanzo J, Miller DD, Pingali P, Post M, Steiner JJ, et al. Production and supply of high-quality food protein for human consumption: sustainability, challenges and innovations. Ann NY Acad Sci. 2014;1321:1–19. Wu G, Bazer FW, Cross HR. Land-based production of animal protein: impacts, efficiency, and sustainability. Ann NY Acad Sci. 2014;1328:18–28. McCalla J, Waugh T, Lohry E. Protein hydrolysates/peptides in animal nutrition. In: Pasupuleki VK, Demain AL, editors. Protein hydrolysates in biotechnology. New York: Springer Science; 2010. p. 179–90. 
Hou YQ, Yin YL, Wu G. Dietary essentiality of "nutritionally nonessential amino acids" for animals and humans. Exp Biol Med. 2015;240:997–1007. Hou YQ, Yao K, Yin YL, Wu G. Endogenous synthesis of amino acids limits growth, lactation and reproduction of animals. Adv Nutr. 2016;7:331–42. Lindemann MD, Cromwell GL, Monegue HJ, Cook H, Soltwedel KT, Thomas S, et al. Feeding value of an enzymatically digested protein for early-weaned pigs. J Anim Sci. 2000;78:318–27. Kim SW, van Heugten E, Ji F, Lee CH, Mateo RD. Fermented soybean meal as a vegetable protein source for nursery pigs: I. Effects on growth performance of nursery pigs. J Anim Sci. 2010;88:214–24. Zimmerman D. Interaction of intestinal hydrolysate and spray-dried plasma fed to weanling pigs, Iowa State University, Ames, IA, Experiment 9615.1996. Zimmerman D. The duration of carry-over growth response to intestinal hydrolysate fed to weanling pigs, Iowa State University, Ames, IA, Experiment 9612, 1996. Kim JH, Chae BJ, Kim YG. Effects of replacing spray dried plasma protein with spray dried porcine intestine hydrolysate on ileal digestibility of amino acids and growth performance in early-weaned pigs. Asian-Aust J Anim Sci. 2000;13:1738–42. Stein H. The effect of including DPS 50RD and DPS EX in the phase 2 diets for weanling pigs. Brookings: South Dakota State University; 2002. Lalles JP, Toullec R, Pardal PB, Sissons JW. Hydrolyzed soy protein isolate sustains high nutritional performance in veal calves. J Dairy Sci. 1995;78:194–204. Opheim M, Sterten H, Øverland M, Kjos NP. Atlantic salmon (Salmo salar) protein hydrolysate – Effect on growth performance and intestinal morphometry in broiler chickens. Livest Sci. 2016;187:138–45. Frikha M, Mohiti-Asli M, Chetrit C, Mateos GG. Hydrolyzed porcine mucosa in broiler diets: effects on growth performance, nutrient retention, and histomorphology of the small intestine. Poult Sci. 2014;93:400–11. Refstie S, Sahlström S, Bråthen E, Baeverfjord G, Krogedal P. Lactic acid fermentation eliminates indigestible carbohydrates and antinutritional factors in soybean meal for Atlantic salmon (Salmo salar). Aquaculture. 2005;246:331–45. Khosravi S, Rahimnejad S, Herault M, Fournier V, Lee CR, Dio Bui HT, et al. Effects of protein hydrolysates supplementation in low fish meal diets on growth performance, innate immunity and disease resistance of red sea bream Pagrus major. Fish Shellfish Immunol. 2015;45:858–68. Nagodawithana TW, Nelles L, Trivedi NB. Protein hydrolysates as hypoallergenic, flavors and palatants for companion animals. In: Pasupuleki VK, Demain AL, editors. Protein hydrolysates in biotechnology. New York: Springer Science; 2010. p. 191–207. Kotzamanis YP, Gisbert E, Gatesoupe FJ, Zambonino Infante J, Cahu C. Effects of different dietary levels of fish protein hydrolysates on growth, digestive enzymes, gut microbiota, and resistance to Vibrio anguillarum in European sea bass (Dicentrarchus labrax) larvae. Comp Biochem Physiol A. 2007;147:205–14. Martínez-Alvarez O, Chamorro S, Brenes A. Protein hydrolysates from animal processing by-products as a source of bioactive molecules with interest in animal feeding: A review. Food Res Int. 2015. doi: 10.1016/j.foodres.2015.04.005. Ghosh PR, Fawcett D, Sharma SB, Poinern DEJ. Progress towards sustainable utilisation and management of food wastes in the global economy. Int J Food Sci. Volume 2016, Article ID 3563478. Irshad A, Sureshkumar S, Shalima Shukoor A, Sutha M. 
Slaughter house by-product utilization for sustainable meat industry-a review. Int J Res Dev. 2015;5:4725–734. Food and Agriculture Organization [67]. http://www.fao.org/worldfoodsituation/csdb/en/. Accessed on 8 Dec 2016. Animal feed prices. https://www.alibaba.com/product. Accessed on 8 Dec 2016. Wang WW, Dai ZL, Wu ZL, Lin G, Jia SC, Hu SD, etal. Glycine is a nutritionally essential amino acid for maximal growth of milk-fed young pigs. Amino Acids. 2014;46:2037–45. We thank our colleagues for collaboration on animal nutrition research. Work in our laboratories was supported by the National Natural Science Foundation of China (31572416, 31372319, 31330075 and 31110103909), Hubei Provincial Key Project for Scientific and Technical Innovation (2014ABA022), Hubei Hundred Talent program, Natural Science Foundation of Hubei Province (2013CFA097), Agriculture and Food Research Initiative Competitive Grants (2014-67015-21770 and 2015-67015-23276) from the USDA National Institute of Food and Agriculture, and Texas A&M AgriLife Research (H-8200). GW conceived this project. YQH and GW wrote the manuscript. ZLW, ZLD, and GHW contributed to the discussion and revision of the article. GW had the primary responsibility for the content of the paper. All authors read and approved this manuscript. None of the authors have any competing interests in the manuscript. All authors read and approved the final manuscript. This article reviews published studies and does not require the approval of animal use or consent to participate. Hubei Key Laboratory of Animal Nutrition and Feed Science, Hubei Collaborative Innovation Center for Animal Nutrition and Feed Safety, Wuhan Polytechnic University, Wuhan, 430023, China Yongqing Hou & Guoyao Wu College of Animal Science and Technology, China Agricultural University, Beijing, China Zhenlong Wu, Zhaolai Dai & Guoyao Wu Research and Development Division, Shanghai Gentech Industries Group, Shanghai, China, 201015 Genhu Wang Department of Animal Science, Texas A&M University, College Station, TX, USA, 77843 Guoyao Wu Yongqing Hou Zhenlong Wu Zhaolai Dai Correspondence to Guoyao Wu. Hou, Y., Wu, Z., Dai, Z. et al. Protein hydrolysates in animal nutrition: Industrial production, bioactive peptides, and functional significance. J Animal Sci Biotechnol 8, 24 (2017). https://doi.org/10.1186/s40104-017-0153-9
Why are hypergeometric series important and do they have a geometric or heuristic motivation? Apart from telling that the hypergeometric functions (or series) are the solutions to the (essentially unique?) Fuchsian equation on the Riemann sphere with 3 "regular singular points", the wikipedia article doesn't illuminate much about why this kind of special function should form such a natural topic in mathematics (and in fact has been throughout the 19th century). Simply: what are hypergeometric series really, and why should they be (or have they been in the past centuries) important/interesting? special-functions hypergeometric-functions Qfwfq Among grad students in Amsterdam there is the urban legend (in the sense defined in the MO question of the same name) that prof. Koornwinder (now emeritus but still active) would be on every thesis committee and would always ask a (serious, relevant, interesting, inspiring) question relating the subject of the thesis (independent of the field) to hypergeometric functions. I scanned his site [staff.fnwi.uva.nl/t.h.koornwinder/] for articles answering your question but could not find any; however, the collection of articles that are there might give some overview of the relevance of the subject. – Vincent Sep 4 '14 at 19:40 Related: "The Kummer confluent hypergeometric function and some of its applications ..." by Georgiev et al. – Tom Copeland Jun 25 '17 at 20:47 In the 19th century, a lot of effort was made to solve the general quintic equation $x^5+a_4x^4 +a_3x^3 +a_2x^2 +a_1x +a_0=0$ using special functions. It turns out that the roots of this equation are expressible in terms of hypergeometric series. To wit, one possibility is by first reducing the number of parameters, to the form $x^5-x+t=0$. Then a Lagrange inversion argument essentially gives a root $$ z=t \,{}_4 F_3\left(\frac15,\frac25,\frac35,\frac45;\frac12,\frac34,\frac54;\frac{5^5}{4^4}t^4\right)=t+t^5+10\frac{ t^9}{2!}+15\cdot 14 \frac{t^{13}}{3!}+\ldots $$ – J.C. Ottem For a combinatorial interpretation of this series solution see qchu.wordpress.com/2010/10/08/… . – Qiaochu Yuan Mar 10 '11 at 16:58 I should point out that the connection between hypergeometric functions and algebraic equations is more than a pure coincidence of series inversion. Felix Klein's 'Lectures on the Icosahedron' offers a very nice derivation of the roots from a geometric viewpoint, using the symmetry group of the icosahedron and a Galois resolvent of degree 60. – J.C. Ottem Mar 10 '11 at 17:11 The reduced coefficients of the inverted series are aerated oeis.org/A002294, the quintic Pfaff-Fuss-Catalan integer sequence. – Tom Copeland Jun 25 '17 at 20:56 Hypergeometric series are solutions of a large class of differential equations. A series $\sum_{k} a_k t^k$ is hypergeometric if $Q_{k}=\frac{a_{k+1}}{a_k}$ is a rational function of $k$. Many familiar functions (trigonometric functions, the exponential, the logarithm, Hermite polynomials, Laguerre polynomials, etc.) are hypergeometric. Hypergeometric functions show up as solutions of many important ordinary differential equations, in particular in physics, for example in the study of the hydrogen atom (Laguerre polynomials) and of the quantum harmonic oscillator (Hermite polynomials).
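The ${}_4F_3$ root formula quoted in the first answer above is easy to test numerically. The sketch below (an added illustration, not part of the thread) compares the hypergeometric expression with a root of $x^5-x+t=0$ computed directly; since the series only converges for $\frac{5^5}{4^4}t^4<1$, i.e. roughly $|t|<0.53$, a small value of $t$ is used.

```python
# Compare z = t * 4F3(1/5,2/5,3/5,4/5; 1/2,3/4,5/4; 5^5 t^4 / 4^4) with a
# directly computed root of the reduced quintic x^5 - x + t = 0 (small t).
from mpmath import mp, mpf, hyper, findroot

mp.dps = 30                        # working precision in decimal digits
t = mpf("0.1")                     # arbitrary small t inside the convergence region

z_hyp = t * hyper([mpf(1)/5, mpf(2)/5, mpf(3)/5, mpf(4)/5],
                  [mpf(1)/2, mpf(3)/4, mpf(5)/4],
                  mpf(5)**5 / mpf(4)**4 * t**4)

z_root = findroot(lambda x: x**5 - x + t, t)   # the root near x = t

print(z_hyp)                                   # ~0.100010005...
print(z_root)
print("difference:", z_hyp - z_root)           # vanishes to working precision

# The series coefficients are binomial(5k, k)/(4k + 1): 1, 1, 5, 35, ...
# (the quintic Fuss-Catalan numbers, OEIS A002294, as noted in the comments).
```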
Hypergeometric functions are also important in the study of elliptic curves, where they can be used to compute the inverse of the $j$-invariant. I guess you can read more about them in this wikipedia page or in these notes. Several examples of applications to number theory, physics and combinatorics can be read here.

JME

In the study of the hydrogen atom it is the Laguerre polynomials that appear. Hermite polynomials are important for the quantum harmonic oscillator. – Marcel Apr 10 '18 at 20:17

@Marcel, you are right, I am correcting this. – JME Mar 12 '19 at 23:29

One possible answer is that hypergeometric series were (and are) used to compute periods of elliptic integrals. In modern terminology, take a smooth cubic $X \subset \mathbb{P}^2$ whose Weierstrass form is $y^2w=x(x-w)(x-\lambda w), \quad \lambda \in \mathbb{P}^1-\{0, 1, \infty\}$. Then $X$ is an elliptic curve, so it can be written as $\mathbb{C}/\Lambda$, where $\Lambda$ is a lattice. It turns out that the generators $\omega_1(\lambda)$, $\omega_2(\lambda)$ of $\Lambda$, i.e. the periods of the associated Weierstrass $\wp$-function $\wp(z; \Lambda):=\frac{1}{z^2} + \sum_{l \in \Lambda\setminus\{0\}} \big(\frac{1}{(z-l)^2}-\frac{1}{l^2} \big)$, can be written in terms of the standard hypergeometric series $F$, namely $\omega_1(\lambda)=i\pi F(\frac{1}{2},\frac{1}{2}; 1; 1-\lambda)$, $\omega_2(\lambda)=i \pi F(\frac{1}{2}, \frac{1}{2}; 1; \lambda)$. For further details see Chapter 1 of Koblitz's book "Introduction to elliptic curves and modular forms".

Francesco Polizzi

Hypergeometric functions arise as matrix coefficients of representations of Lie groups. This is my formal answer, but I will also describe very informally why I believe this is an answer to your question. (By this I roughly mean: why this might have triggered the interest of 19th-century mathematicians who didn't have the language to be aware that this is what they were looking at. However: I do not know a thing about the history of the subject, so this is just a mathematical remark from a modern perspective.) Interesting geometric spaces tend to have a rich group of symmetries (either because that is what makes them interesting (e.g. the circle), or because the symmetry is the only thing that gives us any grip on the object (e.g. spacetime)). Also, studying a space through the functions on it has proved to be a powerful way to do mathematics. Hence representations of Lie groups on vector spaces of functions arise naturally. Now it turns out that the matrix coefficient functions (of smaller representations) really give you a grip on these rather huge representations. The canonical example is when the group is compact and the interesting space on which the functions live is the group itself. In this case the Peter-Weyl theorem states that matrix coefficient functions form a basis. (Which at least makes it kind of credible that they are also useful for understanding functions on other compact homogeneous spaces.) Now not everything is compact. (This is relevant here as Gauss' hypergeometric function appears as a matrix coefficient of an $SL(2, \mathbb{R})$-representation. (Incidentally this was the answer to prof. Koornwinder's question at my own thesis defense, see my comment to the original post above.)) However, also in this case matrix coefficient functions help you understand general representations on function spaces.
For instance, in the infinite dimensional case (for non-compact groups irreducible representations need not be finite dimensional) it is not immediately clear (to say the least) that a given Lie algebra representation integrates to a group representation. One way to make this work (given some extra conditions) is to first construct the matrix coefficients of the hypothesized group representation and then construct the actual representation from that (see for instance [1]). To me this always felt a bit like proving the existence of the Yeti from its footprint, but the difference is that it actually works. Final remark: this is just the first part of JME's answer in disguise, since in the setting of a Lie group acting on a space of functions on a homogeneous space (for that Lie group), the differential equations JME is talking about come from the action of the Lie algebra. [1] v.d. Ban: Induced representations and the Langlands classification [http://www.staff.science.uu.nl/~ban00101/manus/edinb.pdf]

Vincent

The (general) hypergeometric equation has one more property which has not yet been mentioned: it has as (formal) solution at 0 exactly a series whose sequence of coefficients satisfies a first-order linear recurrence equation with polynomial coefficients. [One has to include all ${}_pF_q$ here, including divergent ones.] The result above is a formal version of the action of the inverse Mellin transform on linear recurrence equations with polynomial coefficients. In other words, structurally speaking, the hypergeometric equation is 'first order' (because its coefficient sequence is, not because its differential equation is), but it is the most general such. The fact that this corresponds to a lot of known functions, as well as showing up in quite a few other places, is (often) a reflection of these structural properties.

Jacques Carette

You might like the wonderful book A=B: http://www.math.upenn.edu/~wilf/AeqB.html
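(Editorial aside, not from the thread above.) The quintic series quoted in the first answer is easy to sanity-check numerically; the sketch below assumes only the mpmath library. Note the sign convention: with the all-positive coefficients as printed, the series $z(t)=t+t^5+\cdots$ satisfies $z^5-z+t=0$, so $-z(t)$ is the root of $x^5-x-t=0$ near the origin (the series converges for $\frac{5^5}{4^4}t^4<1$).

```python
from mpmath import mp, mpf, hyper

mp.dps = 30  # working precision in decimal digits

def series_root(t):
    """Evaluate z(t) = t * 4F3(1/5,2/5,3/5,4/5; 1/2,3/4,5/4; 5^5 t^4 / 4^4)."""
    t = mpf(t)
    w = mpf(5)**5 / mpf(4)**4 * t**4
    return t * hyper([mpf(1)/5, mpf(2)/5, mpf(3)/5, mpf(4)/5],
                     [mpf(1)/2, mpf(3)/4, mpf(5)/4], w)

t = mpf('0.1')
z = series_root(t)
print(z)                   # 0.10001000500...
print(z**5 - z + t)        # ~ 0 to working precision: z solves x^5 - x + t = 0
print((-z)**5 - (-z) - t)  # ~ 0 as well: -z is a root of x^5 - x - t = 0
```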
CommonCrawl
IAM-PIMS Joint Distinguished Colloquium Dynamics of Abyssal Ocean Currents Gordon E. Swaters Location: UBC The ocean is the regulator of Earth's climate. The world's oceans store an enormous quantity of heat, which is redistributed throughout the world via the currents. Because the density of water is about a thousand times larger than the density of air, the ocean has a substantial inertia associated with it compared to the atmosphere. This implies that it takes an enormous quantity of energy to change an existing ocean circulatory pattern compared to the atmospheric winds. One can think of the ocean as the "memory" and "integrator" of past and evolving climate states. One can characterize ocean currents into two broad groups. The first are the wind-driven currents. These currents are most intense near the surface of the ocean. Their principal role is to transport warm equatorial waters toward the polar regions. The second group of currents are those that are driven by density contrasts with the surrounding waters. Among these are the deep, or abyssal, currents flowing along or near the bottom of the oceans in narrow bands. Their principal role is to transport cold, dense waters produced in the polar regions toward the equator. My research group is working toward understanding the dynamics of these abyssal currents. In particular, we have focused on developing innovative mathematical and computational models to describe the evolution, including the transition to instability and interaction with the surrounding ocean and bottom topography, of these currents. The goal of this research is to better understand the temporal variability in the planetary scale dynamics of the ocean climate system. Our work can be seen as "theoretical" in the sense that we attempt to develop new models to elucidate the most important dynamical balances at play and "process-oriented" in the sense that we attempt to use these models to make concrete predictions about the evolution of these flows. As such, our work is a blend of classical applied mathematics, high-performance computational science and physical oceanography. In this talk, we will attempt to give an overview of our work in this area. Transition pathways in complex systems: throwing ropes over rough mountain passes, in the dark This lecture describes the statistical mechanics of trajectory space and examples of what can be learned from it. These examples include numerical algorithms for studying rare but important events in complex systems -- systems where transition states are not initially known, where transition states need not coincide with saddles in a potential energy landscape, and where the number of saddles and other features are too numerous and complicated to enumerate explicitly. This methodology for studying trajectories is called "transition path sampling." Extensive material on this topic can be found at the web site: gold.cchem.berkeley.edu . Spatial complexity in ecology and evolution Ulf Dieckmann The International Institute for Applied Systems Analysis The field of spatial ecology has expanded dramatically in the last few years. This talk gives an overview of the many intriguing phenomena arising from spatial structure in ecological and evolutionary models. While traditional ecological theory sadly fails to account for such phenomena, complex simulation studies offer but limited insight into the inner workings of spatially structured ecological interactions. 
The talk concludes with a survey of some novel methods for simplifying spatial complexity that offer a promising middle ground between spatially ignorant and spatially explicit approaches. Turbulence and its Computation Parviz Moin Center for Turbulence Research, Stanford University and NASA Ames Research Center Turbulence is a common state of fluid motion in engineering applications and geophysical and astrophysical scales. Prediction of its statistical properties and the ability to control turbulence is of great practical significance. Progress toward a rigorous analytic theory has been prevented by the fact that turbulence is a mixture of high dimensional chaos and order, and turbulent flows possess a wide range of temporal and spatial scales with strong non-linear interactions. With the advent of supercomputers it has become possible to compute some turbulent flows from basic principles. The data generated from these calculations have helped to understand the nature and mechanics of turbulent flows in some detail. Recent examples from large scale computations of turbulent flows and novel numerical experiments used to study turbulence will be presented. These display a wide range in complexity from decaying turbulence in a box to turbulent combustion in a combustor of a real jet engine. The hierarchy of methods for computing turbulent flows and the problem of turbulence closure will be discussed. Recent applications of optimal control theory to turbulence control for drag and noise reduction will be presented. Fast accurate solution of stiff PDE Lloyd N. Trefethen Oxford University Computing Laboratory Many partial differential equations combine higher-order linear terms with lower-order nonlinear terms. Examples include the KdV, Allen-Cahn, Burgers, and Kuramoto-Sivashinsky equations. High accuracy is typically needed because the solutions may be highly sensitive to small perturbations. For simulations in simple geometries, spectral discretization in space is excellent, but what about the time discretization? Too often, second-order methods are used because higher order seems impractical. In fact, fourth-order methods are entirely practical for such problems, and we present a comparison of the competing methods of linearly implicit schemes, split step schemes, integrating factors, "sliders", and ETD or exponential time differencing. In joint work with A-K Kassam we have found that a fourth-order Runge-Kutta scheme known as ETDRK4, developed by Cox and Matthews, performs impressively if its coefficients are stably computed by means of contour integrals in the complex plane. Online examples show that accurate solutions of challenging nonlinear PDE can be computed by a 30-line Matlab code in less than a second of laptop time. Detached-Eddy Simulation Philippe R. Spalart Boeing Corp., Seattle DES is a recent technique, devised to predict separated flows at high Reynolds numbers with a manageable cost, for instance an airplane landing gear or a vehicle. The rationale is that on one hand, Large-Eddy Simulation (LES) is unaffordable in the thin regions of the boundary layer, and on the other hand, Reynolds-Averaged Navier-Stokes (RANS) models seem permanently unable to attain sufficient accuracy in regions of massive separation. DES contains a single model, typically with one transport equation, which functions as a RANS model in the boundary layer and as a Sub-Grid-Scale model in separated regions, where the simulation becomes an LES. 
The approach has spread to a number of groups worldwide, and appears quite stable. A range of examples is presented, from flows as simple as a circular cylinder to flows as complex as a fighter airplane beyond stall. The promise and the limitations of the technique are discussed.

Numerical Simulation of Turbulence Joel H. Ferziger Turbulence is a phenomenon (or rather a set of phenomena) that is difficult to deal with both mathematically and physically because it contains both deterministic and random elements. However, the equations governing its behavior are well known. After a short discussion of the physics of turbulence, we will discuss the approaches used to deal with it and give an example of the use of simulation techniques to learn about the physics of turbulence and the development of simple models for engineering use.

Approximation Algorithms and Games on Networks Eva Tardos In this talk we discuss work at the intersection of algorithm design and game theory. Traditional algorithm design assumes that the problem is described by a single objective function. One of the main current trends of work focuses on approximation algorithms, where computing the exact optimum is too hard. However, there is an additional difficulty in a number of settings. It is natural to consider algorithmic questions where multiple agents each pursue their own selfish interests. We will discuss problems and results that arise from this perspective.

Algorithms and Software for Dynamic Optimization with Application to Chemical Vapor Deposition Processes Linda Petzold In recent years, as computers and algorithms for simulation have become more efficient and reliable, an increasing amount of attention has focused on the more computationally intensive tasks of sensitivity analysis and optimal control. In this lecture we describe algorithms and software for sensitivity analysis and optimal control of large-scale differential-algebraic systems, focusing on the computational challenges. We introduce a new software package DASPK 3.0 for sensitivity analysis, and discuss our progress to date on the COOPT software and algorithms for optimal control. An application from the chemical vapor deposition growth of a thin film YBCO high-temperature superconductor will be described.

The Mathematics of Reflection Seismology Gunther Uhlmann Reflection seismology is the principal exploration tool of the oil industry and has many other technical and scientific uses. Reflection seismograms contain enormous amounts of information about the Earth's structure, obscured by complex reflection and refraction effects. Modern mathematical understanding of wave propagation in heterogeneous materials has aided in the unraveling of this complexity. The speaker will outline some advances in the theory of oscillatory integrals which have had immediate practical application in seismology.

Radial Basis Functions - A future way to solve PDEs to spectral accuracy on irregular multidimensional domains? Bengt Fornberg University of Colorado It was discovered about 30 years ago that expansions in Radial Basis Functions (RBFs) provide very accurate interpolation of arbitrarily scattered data in any number of spatial dimensions. With both computational cost and coding effort for RBF approximations independent of the number of spatial dimensions, it is not surprising that RBFs have since found use in many applications. Their use as basis functions for the numerical solution of PDEs is however surprisingly novel.
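(Illustrative aside, not material from the colloquium.) The scattered-data RBF interpolation just described amounts to solving one dense symmetric linear system; a minimal sketch, assuming a Gaussian radial function, a hypothetical test function, and an arbitrarily chosen shape parameter eps:

```python
import numpy as np

rng = np.random.default_rng(0)

# Scattered 2-D nodes and samples of a smooth test function (both hypothetical)
nodes = rng.random((40, 2))
f = lambda p: np.sin(3*p[:, 0]) * np.cos(2*p[:, 1])
values = f(nodes)

eps = 3.0                            # illustrative shape parameter
phi = lambda r: np.exp(-(eps*r)**2)  # Gaussian radial basis function

# Interpolant s(x) = sum_j c_j phi(||x - x_j||); collocation at the nodes gives A c = values
A = phi(np.linalg.norm(nodes[:, None, :] - nodes[None, :, :], axis=-1))
coeffs = np.linalg.solve(A, values)

# Evaluate the interpolant at a few off-node points and compare with the test function
test = rng.random((5, 2))
B = phi(np.linalg.norm(test[:, None, :] - nodes[None, :, :], axis=-1))
print(np.abs(B @ coeffs - f(test)))  # approximation errors at the test points
```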
In this Colloquium, we will discuss RBF approximations from the perspective of someone interested in pseudospectral (spectral collocation) methods primarily for wave-type equations.

PIMS PDE/Geometry Seminar Unusual comparison properties of capillary surfaces Robert Finn This talk will address a question that was raised about 30 years ago by Mario Miranda, as to whether a given cylindrical capillary tube always raises liquid higher over its section than does a cylinder whose section strictly contains the given one. Depending on the specific shapes, the answer can take unanticipated forms exhibiting nonuniformity and discontinuous reversal in behavior, even in geometrically simple configurations. The presentation will be for the most part complete and self-contained, and is intended to be accessible for a broad mathematical audience.

String Theory Seminar D-particles with multipole moments of higher dimensional branes Mark van Raamsdonk

PIMS-MITACS Seminar on Computational Statistics and Data Mining A Simple Model for a Complex System: Predicting Travel Times on Freeways John A. Rice A group of researchers from the Departments of EECS, Statistics, and the Institute for Transportation Research at UC Berkeley has been collecting and studying data on traffic flow on freeways in California. I will describe the sources of data and give an overview of the problems being addressed. I will go into some detail on a particular problem: forecasting travel times over a network of freeways. Although the underlying system is very complex and tempting to model, a simple model is surprisingly effective at forecasting. Some of the work the group is doing appears on these websites: http://www.dailynews.com/news/articles/0201/20/new01.asp http://oz.berkeley.edu/~fspe/ http://http.cs.berkeley.edu/~zephyr/freeway/ http://www.its.berkeley.edu/projects/freewaydata/ http://www.path.berkeley.edu/ http://http.cs.berkeley.edu/~pm/RoadWatch/index.html http://www.path.berkeley.edu/~pettyk/rssearch.html

Robust Factor Model Fitting and Visualization of Stock Market Returns R. Douglas Martin Stock market returns are often non-Gaussian by virtue of containing outliers. Modeling stock returns and calculating portfolio risk is almost invariably accomplished by fitting a linear model, called a "factor" model in the finance community, using the sanctified method of ordinary least squares (OLS). However, it is well-known that stock returns are often non-Gaussian by virtue of containing outliers, and that OLS estimates are not robust toward outliers. Modern robust regression methods are now available that are not unduly influenced by outliers. We illustrate the difference with a factor model fit for stock returns using firm size and book-to-market as the factors, where we show that OLS gives a misleading result. Then we show how Trellis graphics displays can be used to obtain quick, penetrating visualization of stock returns factor model data, and to obtain convenient comparisons of OLS and robust factor model fits.
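(Illustrative aside, not the S-PLUS analysis referred to in this abstract.) The OLS-versus-robust contrast described above can be mimicked in a few lines; the simulated single-factor returns, the outlier mechanism and the Huber tuning constant below are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical single-factor returns: true slope 0.8, plus a few crash-like outliers
n = 250
factor = rng.normal(0.0, 0.01, n)
returns = 0.8 * factor + rng.normal(0.0, 0.002, n)
worst = np.argsort(factor)[-5:]          # days with the largest factor moves
returns[worst] -= 0.08                   # large negative outliers on those days

X = np.column_stack([np.ones(n), factor])
beta_ols = np.linalg.lstsq(X, returns, rcond=None)[0]

def huber_irls(X, y, k=1.345, iters=50):
    """Huber-type robust regression via iteratively reweighted least squares."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(iters):
        r = y - X @ beta
        scale = np.median(np.abs(r)) / 0.6745 + 1e-12            # robust scale (MAD)
        w = np.minimum(1.0, k * scale / np.maximum(np.abs(r), 1e-12))
        sw = np.sqrt(w)[:, None]
        beta = np.linalg.lstsq(sw * X, sw[:, 0] * y, rcond=None)[0]
    return beta

beta_rob = huber_irls(X, returns)
print("OLS slope:   ", beta_ols[1])      # typically dragged well below the true 0.8
print("Robust slope:", beta_rob[1])      # typically stays close to 0.8
```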
Last but not least, we point out that robust factor model fits and Trellis graphics displays are in effect powerful "data mining tools" for better understanding of financial data. Our examples are constructed using a new S-PLUS Robust Methods library and S-PLUS Trellis graphics displays. PIMS-MITACS Financial Seminar Series Levy Processes in Financial Modeling Dilip Madan We investigate the relative importance of diffusion and jumps in a new jump diffusion model for asset returns. In contrast to the standard modelling of jumps for asset returns, the jump component of our process can display finite or infinite activity, and finite or infinite variation. Empirical investigations of time series indicate that index dynamics are essentially devoid of a diffusion component, while this component may be present in the dynamics of individual stocks. This result leads to the conjecture that the risk-neutral process should be free of a diffusion component for both indices and individual stocks. Empirical investigation of options data tends to confirm this conjecture. We conclude that the statistical and risk-neutral processes for indices and stocks tend to be pure jump processes of infinite activity and finite variation. PIMS Distinguished Lecture Series Systems of Nonlinear PDEs arising in economic theory Testing the foundations of microeconomic theory leads us into a mathematical analysis of systems of nonlinear PDEs. Some of these can be solved in a C^\infty framework by using the classical Darboux theorem and its recent extensions, others require analysticity and more refined tools, such as the Cartan-Kahler theorem. Care will be taken to explain the economic framework and the tools of differential geometry. Odd embeddings of lens spaces David Gillman Colliding Black Holes and Gravity Waves: A New Computational Challenge Douglas N. Arnold Institute for Mathematics and its Applications An ineluctable, though subtle, consequence of Einstein's theory of general relativity is that relatively accelerating masses generate tiny ripples on the curved surface of spacetime which propagate through the universe at the speed of light. Although such gravity waves have not yet been detected, it is believed that technology is reaching the point where detection is possible, and a massive effort to construct worldwide network of interferometer gravity wave observatories is well underway. They promise to be our first window to the universe outside the electromagnetic spectrum and so, to astrophysicists and others trying to fathom a universe composed primarily of electromagnetically dark matter, the potential payoff is enormous. If gravitational wave detectors are to succeed as observatories, we must learn to interpret the wave forms which are detected. This requires the numerical simulation of the violent cosmic events, such as black hole collisions, which are the most likely sources of detectable radiation, via the numerical solution of the Einstein field equations. The Einstein equations form a system of ten second order nonlinear partial differential equations in four-dimensional spacetime which, while having a very elegant and fundamental geometric character, are extremely complex. Their numerical solution presents an enormous computational challenge which will require the application of state-of-the-art numerical methods from other areas of computational physics together with new ideas. 
This talk aims to introduce some of the scientific, mathematical, and computational problems involved in the burgeoning field of numerical relativity, discuss some recent progress, and suggest directions of future research. Chow Forms and Resultants - old and new Mathematical Science Research Institute (Berkeley) The Mandelbrot Set, the Farey Tree, and the Fibonacci Sequence Robert L. Devaney In this lecture several folk theorems concerning the Mandelbrot set will be described. It will be shown how one can determine the dynamics of the corresponding quadratic maps by visualizing tiny regions in the Mandelbrot set as well as how the size and location of the bulbs in the Mandelbrot set is governed by Farey arithmetic. A Computational View of Randomness Avi Wigderson The current state of knowledge in Computational Complexity Theory suggests two strong empirical "facts" (whose truth are the two major open problems of this field). 1. Some natural computational tasks are infeasible (e.g. it seems so for computing the functions PERMANENT, FACTORING, CLIQUE, SATISFIABILITY ...) 2. Probabilistic algorithms can be much more efficient than deterministic ones. (e.g it seems so for PRIMALITY, VERIFYING IDENTITIES APPROXIMATING VOLUMES...). As it does with other notions (e.g. knowledge, proof..), Complexity Theory attempts to understand the notion of randomness from a computational standpoint. One major achievement of this study is the following (surprising?) relation between these two "facts" above: THEOREM: (1) contradicts (2) In words: If ANY "natural" problem is "infeasible", then EVERY probabilistic algorithm can be "efficiently" "derandomized". I plan to explain the sequence of important ideas, definitions, and techniques developed in the last 20 years that enable a formal statement and proof of such theorems. Many of them, such as the formal notions of "pseudo-random generator", and "computational indistinguishability" are of fundamental interest beyond derandomization; they have far reaching implications on our ability to build efficient cryptographic systems, as well as our inability to efficiently learn natural concepts and effectively prove natural mathematical conjectures (such as (1) above). Thematic Programme on Inverse Problems and Applications Reconstructing the Location and Magnitude of Refractive Index Discontinuities from Truncated Phase-Contrast Tomographic Projections Mark Anastasio Illinois Institute of Technology Joint work with Daxin Shi, Yin Huang, and Francesco De Carlo. I. INTRODUCTION: In recent years, much effort has been devoted to developing imaging techniques that rely on contrast mechanisms other than absorption. Phase-contrast computed tomography (CT) is one such technique that exploits differences in the real part of the refractive index distribution of an object to form an image using a spatially coherent light source. Of particular interest is the ability of phase-contrast CT to produce useful images of objects that have very similar or identical absorption properties. In applications such as microtomography, it is imperative to reconstruct an image with high resolution. Experimentally, the demand of increased resolution can be achieved by highly collimating the incident light beam and using a microscope optic to focus the transmitted image, formed on a scintillator screen, onto the detector. 
When the object is larger than the field-of-view (FOV) of the imaging system, the measured phase-contrast projections are necessarily truncated and one is faced with the so-called local CT reconstruction problem. To circumvent the non-local nature of conventional CT, local CT algorithms have been developed that aim to reconstruct a filtered image that contains detailed information regarding the location of discontinuities in the imaged object. Such information is sufficient for determining the structural composition of an object, which is the primary task in many biological and materials science imaging applications.

II. METHODS A. Theory of Local Phase-Contrast Tomography: We have recently demonstrated that the mathematical theory of local CT, which was originally developed for absorption CT, can be applied naturally for understanding the problem of reconstructing the location of image boundaries (i.e., discontinuities) from truncated phase-contrast projections. Our analysis suggested the use of a simple backprojection-only algorithm for reconstructing object discontinuities from truncated phase-contrast projection data that is simpler and more theoretically appropriate than use of the FBP algorithm or use of the exact reconstruction algorithm for phase-contrast CT that was recently proposed by Bronnikov [1]. We demonstrated that the reason why this simple backprojection-only procedure represents an effective local reconstruction algorithm for phase-contrast CT is that the filtering operation that needs to be explicitly applied to the truncated projection data in conventional absorption CT is implicitly applied to the phase-contrast projection data (before they are measured) by the act of paraxial wavefield propagation in the near-field. In this talk, we review the application of local CT reconstruction theory to the phase-contrast imaging problem. Using concepts from microlocal analysis, we describe the features of an object that can be reliably reconstructed from incomplete phase-contrast projection data. In many applications, the magnitude of the refractive index jump across an interface may provide useful information about the object of interest. For the first time, we demonstrate that detailed information regarding the magnitude of refractive index discontinuities can be extracted from the phase-contrast projections. Moreover, we show that these magnitudes can be reliably reconstructed using adaptations of algorithms that were originally developed for absorption local CT.

B. Numerical Results: We will present extensive numerical results to corroborate our theoretical assertions. Both simulation data and experimental coherent X-ray projection data acquired at the Advanced Photon Source (APS) at Argonne National Laboratory will be utilized. We will compare the ability of the available approximate and exact reconstruction algorithms to provide images that contain accurate information regarding the location and magnitude of refractive index discontinuities. The stability of the algorithms to data noise and inconsistencies will be reported. In Fig. 1, we show some examples of phase-contrast images reconstructed from noiseless simulation data.

III. SUMMARY In this talk, we address the important problem of reconstructing the location and magnitude of refractive index discontinuities in phase-contrast tomography.
We theoretically investigate existing and novel reconstruction algorithms for reconstructing such information from truncated phase-contrast tomographic projections and numerically corroborate our findings using simulation and experimental data. IV. REFERENCES [1] A. Bronnikov, "Theory of quantitative phase-contrast computed tomography," Journal of the Optical Society of America (A), vol. 19, pp. 472-480, 2002.

Anna Celler (Inverse Problems and Nuclear Medicine): Medical Imaging Research Group Division of Nuclear Medicine Vancouver Hospital and Health Sciences Centre Single Photon Emission Computed Tomography (SPECT) and Positron Emission Tomography (PET) are two nuclear medicine (NM) imaging techniques that visualize in 3D distributions of radiolabeled tracers inside the human body. Since concentration of the tracer in each location in the body reflects its physiology, these techniques constitute powerful diagnostic tools to investigate organ functions and changes in metabolism caused by disease processes. Currently however, clinical studies image only stationary activity distributions and the analysis of the results remains mainly qualitative. As it is believed that absolute quantitation of the data would greatly enhance diagnostic accuracy of the tests, a lot of research effort is directed towards this goal. Reconstructions that create 3D tomographic images from the data acquired around the patient represent an example of the inverse problem application. In the last years this area has undergone rapid development but still important questions persist. The data are incomplete, noisy and altered by physics phenomena and the acquisition process. This causes the problem to be ill-posed, so that even small changes in the data can produce large effects in the solution. The talk will present basic principles of NM data acquisition and image creation and will relate them to the underlying physics effects. A discussion of the most important factors that limit quantitation and a short overview of the correction methods will follow. Different approaches to dynamic imaging will be presented.

Reconstruction Methods in Optical Tomography and Applications to Brain Imaging Dr. Simon Arridge In the first part of this talk I will discuss methods for reconstruction of spatially-varying optical absorption and scattering images from measurements of transmitted light through highly scattering media. The problem is posed in terms of non-linear optimisation, based on a forward model of diffusive light propagation, and the principal method is linearisation using the adjoint field method. In the second part I will discuss the particular difficulties involved in imaging the brain. These include:
- Accounting for non or weakly scattering regions that do not satisfy the diffusion approximation (the void problem)
- Accounting for anisotropic scattering regions
- Constructing realistic 3D models of the head shape
- Dynamic imaging incorporating temporal regularisation

Fast Hierarchical Algorithms for Tomography Yoram Bresler The reconstruction problem in practical tomographic imaging systems is recovery from samples of either the x-ray transform (set of the line-integral projections) or the Radon transform (set of integrals on hyperplanes) of an unknown object density distribution. The method of choice for tomographic reconstruction is filtered backprojection (FBP), which uses a backprojection step.
This step is the computational bottleneck in the technique, with computational requirements of O(N^3) for an NxN pixel image in two dimensions, and at least O(N^4) for an NxNxN voxel image in three dimensions. We present a family of fast hierarchical tomographic backprojection algorithms, which reduce the complexities to O(N^2 log N) and O(N^3 log N), respectively. These algorithms employ a divide-and-conquer strategy in the image domain, and rely on properties of the harmonic decomposition of the Radon transform. For image sizes typical in medical applications or airport baggage security, this results in speedups by a factor of 50 or greater. Such speedups are critical for next-generation real-time imaging systems. How Medical Science will Benefit from Mathematics of the Inverse Problem Thomas F. Budinger Lawrence Berkeley National Laboratory and University of California Berkeley and San Francisco Selection of in-vivo imaging modalities (i.e x-ray, MRI, PET, SPECT, light absorption, fluorescence and luminescence, current source and electrical potential) can be logically approached by evaluating biological parameters relative to the biomedical objective (e.g. cardiac apoptosis vs cardiac stem cell trafficking and vs plaque composition vs plaque surface chemistry). For that evaluation, contrast resolution, of highest importance for modality selection in most cases, is defined as the signal to background for the desired biochemical or physiological parameter. But a particular modality which has exquisite biological potential (e.g. MRI and SPECT for atherosclerosis characterization) might not be deployed in medical science because appropriate algorithms are not available to deal with problems of blurring, variable point spread function, background scatter, detection sensitivity, attenuation and refraction. Trade-offs in technique selection frequently pit contrast resolution against intrinsic instrument resolution (temporal and spatial) and depth or size of the object. For example, imaging vulnerable carotid plaques using a molecular beacon with 5:1 signal to background and with 7 mm resolution in the human neck can be argued as superior to imaging tissue characteristics with 1:3:1 signal to background at 0.5 mm resolution with MRI. Another example is the use of the multidetector CT (helical) due to its relative speed instead of MRI to characterize coronary plaques even though MRI has much better intrinsic contrast mechanisms. The superior speed of modern CT argues for its preferred use. Some old examples of how mathematics of the inverse problem have enabled medical science advances include incorporation of attenuation compensation in SPECT imaging which brought SPECT to a quantitative technique, light transmission and fluorescence emission tomography, iterative reconstruction algorithm for all methods, and incorporation of phase encoding for MRI reconstruction. Current work on new mathematical approaches includes endeavors to improve resolution, improve sampling speed, decrease background and achieve reliable quantitation. Examples are rf exposure reduction in MRI by selective radio frequency pulses requiring low peak power, dose reduction by iterative reconstruction schemes in X-Ray CT, implementation of coded aperture models for emission tomography, 3D and time reversal ultrasound, a multitude of transmission and stimulated emission methods for light wavelength of 400nm to 3 cm, and electrical potential and electric source imaging. 
Many of these subjects will be discussed at this workshop and all rely on innovations in mathematics applied to the inverse problem.

New Multiscale Thoughts on Limited-Angle Tomography Emmanuel Candes This talk is concerned with the problem of reconstructing an object from noisy limited-angle tomographic data---a problem which arises in many important medical applications. Here, a central question is to describe which features can be reconstructed accurately from such data and how well, and which features cannot be recovered. We argue that curvelets, a recently developed multiscale system, may have a great potential in this setting. Conceptually, curvelets are multiscale elements with a useful microlocal structure which makes them especially adapted to limited-angle tomography. We develop a theory of optimal rates of convergence which quantifies that features which are microlocally in the "good" direction can be recovered accurately and which shows that adapted curvelet-biorthogonal decompositions with thresholding can achieve quantitatively optimal rates of convergence. We hope to report on early numerical results.

Computed Imaging for Near-Field Microscopy P. Scott Carney Near-field optics provides a means to observe the electromagnetic field intensity in close proximity to a scattering or radiating sample. Modalities such as near-field scanning optical microscopy (NSOM) and photon scanning tunneling microscopy (PSTM) accomplish these measurements by placing a small probe close to the object (in the "near-zone") and then precisely controlling its position. The data are usually plotted as a function of probe position and the resulting figure is called an image. These modalities provide a means to circumvent the classical Rayleigh-Abbe resolution limits, providing resolution on scales of a small fraction of a wavelength. There are a number of problems associated with the interpretation of near-field images. If the probe is slightly displaced from the surface of the object, the image quality degrades dramatically. If the sample is thick, the subsurface features are obscured. The quantitative connection between the measurements and the optical properties of the sample is unknown. To resolve all these problems it is desirable to solve the inverse scattering problem (ISP) for near-field optics. The solution of the ISP provides a means to tomographically image thick samples and assign quantitative meaning to the images. Furthermore, data taken at distances up to one wavelength from the sample may be processed to obtain a focused, or reconstructed, image of the sample at subwavelength scales.

Preferred Pitches in Multislice Spiral CT from Periodic Sampling Adel Faridani Joint work with Larry Gratton. Applications of sampling theory in tomography include the identification of efficient sampling schemes; a qualitative understanding of some artifacts; numerical analysis of reconstruction methods; and efficient interpolation schemes for non-equidistant sampling. In this talk we present an application of periodic sampling theorems in three-dimensional multislice helical tomography shedding light on the question of preferred pitches.

Spherical Means and Thermoacoustic Tomography Oregon State University In thermoacoustic tomography, impinging radiation causes local heating which generates sound waves. These are measured by transducers, and the problem is to recover the density of emitters.
This may be modelled as the recovery of the initial value of the time derivative of the solution of the wave equation from knowledge of the solution on (part of) the boundary of the domain. This talk, in conjunction with the talk by Sarah Patch, will report on recent work by the author, S. Patch and Rakesh on uniqueness and stability and an inversion formula, in odd dimensions, for the special case when measurements are taken on an entire sphere surrounding the object. The well-known relation between spherical means and solutions of the wave equation then implies results on recovery of a function from its spherical means.

Transient Elastography and Supersonic Shear Imaging Mathias Fink Laboratoire Ondes et Acoustique ESPCI, Paris Palpation is a standard medical practice which relies on qualitative estimation of the tissue Young's modulus E. In soft tissues the Young's modulus is directly proportional to the shear modulus $\mu$ ($E = 3\mu$). This explains the great interest in developing quantitative imaging of the shear modulus distribution. This can be achieved by observing with NMR or with ultrasound the propagation of low frequency shear waves (between 50 Hz and 500 Hz) in the body. The celerity of these waves is relatively low (between 1 and 10 m/s) and these waves can be produced either by vibrators coupled to the body or by ultrasonic radiation pressure. We have developed an ultra high-rate ultrasonic scanner that can give 10,000 ultrasonic images per second of the body. With such a high frame-rate we can follow in real time the propagation of transient shear waves, and from the spatio-temporal evolution of the displacement fields, we can use inversion algorithms to recover the shear modulus map. New inversion algorithms can be used that are no longer limited by diffraction. In order to obtain unbiased shear elasticity maps, different configurations of shear sources induced by radiation pressure of focused transducer arrays are used. A very interesting configuration that induces quasi-plane shear waves will be described. It uses a sonic shear source that moves at supersonic velocities, and that is created by using a very peculiar beam forming in the transmit mode. In vitro and in vivo results will be presented that demonstrate the interest of this new transient elastographic technique.

Effects of Target Non-localization on the Contrast of Optical Images: Lessons for Inverse Reconstruction Amir Gandjabkhche

A general inversion formula for cone beam CT Alexander Katsevich Given a rather general weight function n, we derive a new cone beam transform inversion formula. The derivation is explicitly based on Grangeat's formula and the classical 3D Radon transform inversion. The new formula is theoretically exact and is represented by a two-dimensional integral. We show that if the source trajectory C is complete (and satisfies two other very mild assumptions), then substituting the simplest uniform weight n gives a convolution-based filtered back-projection algorithm. However, this easy choice is not always optimal from the point of view of practical applications. Uniform weight works well for closed trajectories, but the resulting algorithm does not solve the long object problem if C is not closed. In the latter case one has to use the flexibility in choosing n and find the weight that gives an inversion formula with the desired properties. We show how this can be done for spiral CT.
It turns out that the two inversion algorithms for spiral CT proposed earlier by the author are particular cases of the new formula. For general trajectories the choice of weight should be done on a case-by-case basis.

The Green's Function for the Radiative Transport Equation

Reconstruction of conductivities in the plane Kim Knudsen Joint work with Jennifer Mueller, Samuli Siltanen and Alex Tamasan. In this talk I will consider the mathematical problem behind Electrical Impedance Tomography, the inverse conductivity problem. The problem is to reconstruct an isotropic conductivity distribution in a body from knowledge of the voltage-to-current map at the boundary of the body. I will discuss the two-dimensional problem and give a reconstruction algorithm, which is direct and mathematically exact. The method is based on the so-called dbar-method of inverse scattering. Both theoretical validation of the algorithm and numerical examples will be given.

Inverse scattering problem with a random potential Matti Lassas Rolf Nevanlinna Institute In this talk we consider scattering from random media and the inverse problem for it. As a stereotype of inverse scattering problems, we consider the Schrödinger equation $$ (\Delta+q+k^2)u(x,y,k)=\delta_y $$ with a random potential $q(x)$. We also briefly discuss the relation of this problem to medical imaging. The potential $q(x)$ is assumed to be a Gaussian random function whose covariance function $E(q(x)q(y))$ is smooth outside the diagonal. We show how the realizations of the amplitude of the scattered field $|u_s(x,y,k)|$, averaged over the frequency parameter $k>1$, can be used to determine stochastic properties of $q$, in particular the principal symbol of the covariance operator. This corresponds to finding the correlation length function of the random medium. In contrast to the applied literature, we approach the problem with methods that do not require approximations. From a technical point of view, we analyze the scattering from the random potential by combining methods of harmonic and microlocal analysis with stochastics, in particular with the theory of ergodic processes.

Interior Elastodynamics Inverse Problems: Recovery of Shear Wavespeed in Transient Elastography Dr. Joyce McLaughlin For this new inverse problem the data are the time- and space-dependent interior displacement measurements of a propagating elastic wave. The medium is initially at rest with a wave initiated at the boundary or at an interior point by a broadband source. A property of the wave is that it has a propagating front. For this new problem we present well-posedness results and an algorithm that recovers shear wavespeed from the time and space dependent position of the propagating wavefront. We target the application to transient elastography, where images are created of the shear wavespeed in biological tissue. The goal is to create a medical diagnostic tool where abnormal tissue is identified by its abnormal shear stiffness characteristics. Included in our presentation are images of stiffness changes recovered by our algorithms using data measured in the laboratory of Mathias Fink.

Reconstructions of Chest Phantoms by the D-Bar Method for Electrical Impedance Tomography Jennifer Mueller In this talk a direct (noniterative) reconstruction algorithm for EIT in the two-dimensional geometry is presented. The algorithm is based on the mathematical uniqueness proof by A. Nachman [Ann. of Math. 143 (1996)] for the 2-D inverse conductivity problem.
Reconstructions from experimental data collected on a saline-filled tank containing agar heart and lung phantoms are presented, and the results are compared to reconstructions by the NOSER algorithm on the same data. 3D Emission Tomography via Plane Integrals Frank Natterer University of Munster In emission tomography one reconstructs the activity distribution of a radioactive tracer in the human body by measuring the activity outside the body using collimated detectors. Usually the collimators single out lines along which the measurements are taken. In a novel design (Solstice camera) weighted plane integrals are measured instead. By a statistical error analysis it can be shown that the Solstice concept is superior to the classical line scan for high resolution, making Solstice attractive for small animal imaging. By a suitable approximation of the weight function we can reduce the reconstruction problem to Marr's two stage algorithm for the 3D Radon transform, leading to an efficient algorithm. In order to account for attenuation we approximate the 3D problem by the 2D attenuated Radon transform which can be inverted by Novikov's algorithm. We show reconstructions from simulated and measured data. Information Geometry, Alternating Minimizations, and Transmission Tomography Joseph A. O'Sullivan Many medical imaging problems can be formulated as statistical inverse problems to which estimation theoretic methods can be applied. Statistical likelihood functions can be viewed in information-theoretic terms as well. Maximizations of statistical likelihood functions for several image estimation problems, including emission and transmission tomography, can be reformulated as double minimizations of information divergences. Properties of minimizations of I-divergences are studied in information geometry. This more general viewpoint yields new characterizations of algorithms and new algorithms for transmission tomography. These new algorithms are described in detail as are medical imaging applications of transmission tomography in the presence of metal. Imaging in Clutter George Papanicolau Array imaging, like synthetic aperture radar, does not produce good reflectivity images when there is clutter, or random scattering inhomogeneities, between the reflectors and the array. Can the blurring effects of clutter be controlled? I will discuss this issue in some detail and show that if bistatic array data is available and if the data is suitably preprocessed to stabilize clutter effects then a good deal can be done to minimize blurring. Thermoacoustic Tomography - An Inherently 3D Generalized Radon Inversion Problem Sarah Patch GE Medical Systems Joint work with D. FINCH, RAKESH. A hybrid imaging technique using RF excitation measures ultrasound (US) data. Cancerous tissue is hypothesized to preferentially absorb RF energy, heating more and expanding faster than surrounding healthy tissue. Pressure waves therefore emanate from cancerous inclusions and are detected by US transducers located on the surface of a sphere surrounding the imaging object. A formula for the contrast function is derived in terms of data measured over the entire imaging surface. Existence and uniqueness for the inverse problem when transducers cover only a hemisphere also hold. However, explicit inversion for this clinically realizable case remains an open problem. 
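(Orientation note for the two thermoacoustic abstracts above; the notation below is one common convention and is not taken from the talks.) The measured pressure $p$ is modelled by the wave equation with the absorbed-energy map $f$ as initial data,
$$p_{tt}=c^{2}\Delta p\ \text{ in }\mathbb{R}^{3}\times(0,\infty),\qquad p(x,0)=0,\qquad p_{t}(x,0)=f(x),$$
and the data are $p(y,t)$ for detector positions $y$ on a sphere $S$ surrounding the object and $t>0$. In three space dimensions Poisson's formula gives
$$p(y,t)=t\,(Mf)(y,ct),\qquad (Mf)(y,r)=\frac{1}{4\pi r^{2}}\int_{|x-y|=r}f(x)\,\mathrm{d}S(x),$$
so the boundary data determine exactly the spherical means of $f$ centred at the detector locations; this is the sense in which "recovery of a function from its spherical means" and the "generalized Radon inversion problem" arise in the abstracts above.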
Limited Data Tomography in science and industry Eric Todd Quinto Tomography has revolutionized diagnostic medicine, scientific testing, and industrial nondestructive evaluation, and some of the most difficult problems involve limited data, in which some data are missing. This talk will describe two practical problems and give the mathematical background. The first problem, in industrial nondestructive evaluation (joint with Perceptics, Inc.), uses limited-angle exterior CT to reconstruct a rocket mockup. The second, in electron microscopy (joint with Sidec Technologies), uses limited angle local CT to reconstruct RNA and a virus. ECGI : A Noninvasive Imaging Modality for Cardiac Electrophysiology and Arrhythmias Yoram Rudy Case Western Reserve Nonlinear image reconstruction in optical tomography using an iterative Newton-Krylov method Martin Schweiger Image reconstruction in optical tomography can be formulated as a nonlinear least squares optimisation problem. This paper describes an inexact regularised Gauss-Newton method to solve the normal equation, based on a projection onto the Krylov subspaces. The Krylov linear solver step addresses the Hessian only in the form of matrix-vector multiplications. We can therefore utilise an implicit definition of the Hessian, which only requires the computation of the Jacobian and the regularisation term. This method avoids the explicit formation of the Hessian matrix which is often intractable in large-scale three-dimensional reconstruction problems. We apply the method to the reconstructions of 3-D test problems in optical tomography, whereby we recover the volume distribution of absorption and scattering coefficients in a heterogeneous highly scattering medium from boundary measurements of infrared light transmission. We show that the Krylov method compares favourably to the explicit calculation of the Hessian both in terms of memory space and computational cost. Inversion of the Bloch Equation Meir Shinnar Rutgers University of Medicine and Dentistry of New Jersey The Inverse Polynomial Reconstruction Method for Two Dimensional Image Reconstruction Bernie Shizgal Three-dimensional X-ray imaging with few radiographs Samuli Siltanen Gunma University In medical X-ray tomography, three dimensional structure of tissue is reconstructed from a collection of projection images. In many practical imaging situations only a small number of truncated projections is available from a limited range of view. Traditional reconstruction algorithms, such as filtered backprojection (FBP), do not give satisfactory results when applied to such data. Instead of FBP, Bayesian inversion is suggested for reconstruction. In this approach, a priori information is used to compensate for the incomplete information of the measurement data. Examples with in vitro measurements from dental radiology and surgical imaging are presented. Applications of Diffusion MRI to Electrical Impedance Tomography David Tuch Diffusion MRI measures the molecular self-diffusion of the endogeneous water in tissue. In this talk, I will discuss various applications of diffusion MRI to electrical impedance tomography (EIT). In particular, I will discuss (i) how the anisotropy information from diffusion tensor imaging (DTI) can inform the EIT forward model, and (ii) how particular transport conservation principles measured with DTI can provide priors or hard constraints for the EIT inverse problem. 
I will also discuss some recent work on mapping non-tensorial diffusion using spherical tomographic inversions of the diffusion signal.

Cascade Topology Seminar The best picture of Poincare's homology sphere Homotopy self-equivalences of 4-manifolds Ian Hambleton Skein theory in knot theory and beyond Vaughan Jones New perspectives on self-linking Topological robotics; topological complexity of projective spaces Sergey Yuzvinsky

Thematic Programme on Asymptotic Geometric Analysis Entropy increases at every step Shiri Artstein Convolution Inequalities in Convex Geometry Keith Ball The talk presents a new approach to entropy via a local reverse Brunn-Minkowski inequality. Applications will be presented by other speakers. Optimal Measure Transportation Franck Barthe Université de Marne-la-Vallée Almost sure weak convergence and concentration for the circular ensembles of Dyson Gordon Blower On risk aversion and optimal terminal wealth Christer Borell Chalmers University Density and current interpolation Yann Brenier CNRS, Nice We discuss different ways of interpolating densities, including the Moser lemma and the Monge-Kantorovich method. Natural extensions to the interpolation of currents will be addressed. Asymptotic behaviour of fast diffusion equations Jose A. Carrillo Fast Diffusion to self-similarity: complete spectrum, long-time asymptotics and numerology Jochen Denzler Measure Concentration, Transportation Cost, and Functional Inequalities Michel Ledoux University of Toulouse We present a triple description of the concentration of measure phenomenon, geometric (through Brunn-Minkowski inequalities), measure-theoretic (through transportation cost inequalities) and functional (through logarithmic Sobolev inequalities), and investigate the relationships between these various viewpoints by means of hypercontractive bounds. This expository introduction directed at students and newcomers to the field has already been delivered at the Edinburgh ICMS meeting last April. Nonlinear diffusion to self-similarity: spreading versus shape via gradient flow Robert McCann Geometric inequalities of hyperbolic type Vitali Milman Entropy jumps in the presence of a spectral gap Assaf Naor Free probability and free diffusion Roland Speicher Concentration of non-Lipschitz functions and combinatorial applications Van Vu University of California at San Diego We survey recent results concerning the concentration of functions with large Lipschitz coefficients and their applications in a combinatorial setting. Optimal paths related to transport problems Qinglan Xia $(n,d,\lambda)$-graphs in Extremal Combinatorics Noga Alon Sylvester's Question, Convex Bodies, Limit Shape Imre Barany Transportation versus Rearrangement Universite de Marne la Vallee How to Compute a Norm?
How to Compute a Norm? (Alexander Barvinok)
Phase Transition for the Biased Random Walk on Percolation Clusters (Noam Berger)
Phase Transition in the Random Partitioning Problem (Christian Borgs)
New Results on Green's Functions and Spectra for Discrete Schroedinger Operators (Jean Bourgain)
On Optimal Transportation Theory
Recent Results in Combinatorial Number Theory (Mei-Chu Chang)
Graphical Models of the Internet and the Web (Jennifer Chayes)
Random Sections and Random Rotations of High Dimensional Convex Bodies (Apostolos Giannopoulos)
On the Sections of Product Spaces and Related Topics (Efim Gluskin)
The Poisson Cloning Model for Random Graphs with Applications to k-core Problems, Random 2-SAT, and Random Digraphs (Jeong Han Kim)
We will introduce a new model for random graphs, called the Poisson cloning model, in which all degrees are i.i.d. Poisson random variables. After showing how close this model is to the usual random graph model G(n,p), we will prove threshold phenomena of the k-core problem. The k-core problem is the question of when the random graph G(n,p) contains a k-core, where a k-core of a graph is the largest subgraph with minimum degree at least k (see the short peeling sketch below). This, in particular, improves earlier bounds of Pittel, Spencer & Wormald. The method can be easily generalized to prove similar results for random hypergraphs. If time allows, we will also discuss the scaling window of random 2-SAT and/or the giant (strong) component of random digraphs.
Results and Problems around Borsuk's Conjecture (Gil Kalai)
Random Submatrices of a Given Matrix (Ravindran Kannan)
The Regularity Lemma for Sparse Graphs (Yoshiharu Kohayakawa, University of São Paulo)
One of the fundamental tools in asymptotic graph theory is the well-known regularity lemma of Szemerédi. In essence, the regularity lemma tells us that any large graph may be decomposed into a bounded number of quasi-random, induced bipartite graphs. Thus, this lemma is a powerful tool for detecting and making transparent the random-like behaviour of large deterministic graphs. Furthermore, in general, the quasi-random structure that the lemma provides is amenable to deep analysis, and this makes the lemma a very important tool. The quasi-random bipartite graphs that Szemerédi's lemma uses in its decomposition are certain graphs in which the edges are uniformly distributed. The measure of uniformity is such that this concept becomes trivial for graphs of vanishing density. To manage sparse graphs, one may adjust this notion of uniform edge distribution in a natural way, and it is a routine matter to check that the original proof extends to this notion, provided we restrict ourselves to graphs of vanishing density that do not contain `dense patches'. However, the quasi-random structure that the lemma reveals in this case is not too informative, and this puts into question the applicability of this variant of the lemma for `sparse graphs'. Nevertheless, there have been some successful applications of the lemma in this context. In this talk, we shall concentrate on the difficulties one faces and how one can overcome them in certain situations.
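For readers who want to see the k-core from Jeong Han Kim's abstract in action, here is a minimal Python sketch (an editorial illustration; it has nothing to do with the Poisson cloning model itself). It computes the k-core by the standard peeling procedure: repeatedly delete vertices of degree less than k, and what remains is the unique maximal subgraph with minimum degree at least k.

```python
from collections import defaultdict

def k_core(edges, k):
    """Vertex set of the k-core: repeatedly remove vertices of degree < k."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    changed = True
    while changed:
        changed = False
        for v in list(adj):
            if len(adj[v]) < k:
                for u in adj[v]:
                    adj[u].discard(v)   # detach v from its neighbours
                del adj[v]
                changed = True
    return set(adj)

# A triangle with a pendant vertex attached at vertex 3.
edges = [(1, 2), (2, 3), (1, 3), (3, 4)]
print(k_core(edges, 2))   # {1, 2, 3}: the triangle is the 2-core
print(k_core(edges, 3))   # set(): there is no 3-core
```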
Algorithmic Applications of Graph Eigenvalues and Related Parameters (Michael Krivelevich)
Tiling Problems and Spectral Sets (Izabella Laba)
Some Estimates of Norms of Random Matrices (non iid case) (Rafal Latala, Warsaw University)
Discrete Analytic Functions and Global Information from Local Observation (Laszlo Lovasz)
Concentration and Random Permutations (Colin McDiarmid)
Some phenomena of large dimension in Convex Geometric Analysis
Metric Ramsey-Type Phenomena
In this talk we will discuss the problem of finding lower bounds for the distortion required to embed certain metric spaces in Hilbert space. We will show that these problems are intimately connected to certain Poincare-type inequalities on graph metrics, and we will discuss recent developments which are based on the analysis of the behavior of Markov chains in metric spaces. These new methods allow us to strengthen known results by showing that large subsets of certain natural graphs must be significantly distorted if one wishes to embed them in Hilbert space.
On a Non-symmetric Version of the Khinchine-Kahane Inequality (Krzysztof Oleszkiewicz)
Some Large Dimension Problems of Mathematical Physics (Leonid Pastur, Université Pierre & Marie Curie)
Crayola and Dice: Graph Colouring via the Probabilistic Method (Bruce Reed)
We survey recent results on graph colouring via the probabilistic method. Tools used are the Local Lemma and Concentration inequalities.
Ramsey Properties of Random Structures (Andrzej Rucinski)
Distances between Sections of Convex Bodies (Mark Rudelson)
Probabilistically Checkable Proofs (PCP) and Hardness of Approximation (Shmuel Safra)
<2, well embed in l_1^{an}, for any a >1 (Gideon Schechtman, The Weizmann Institute)
I'll discuss a recent result of Johnson and myself, a particular case of which is the statement in the title.
Introduction to the Szemerédi Regularity Lemma (Miklos Simonovits, Hungarian Academy of Sciences)
The Percolation Phase Transition on the n-cube (Gordon Slade)
Zeroes of Random Analytic Functions (Mikhail Sodin)
On the Largest Eigenvalue of a Random Subgraph of the Hypercube (Alexander Soshnikov, University of California at Davis)
On the Ramsey- and Turan-type Problems (Benjamin Sudakov)
On Pseudorandom Matrices (Stanislaw Szarek, Université Paris VI)
Families of Random Sections of Convex Bodies (Nicole Tomczak-Jaegermann)
Expander Graphs - where Combinatorics and Algebra Compete and Cooperate (Institute for Advanced Studies)
Expansion of graphs can be given equivalent definitions in combinatorial and algebraic terms. This is the most basic connection between combinatorics and algebra illuminated by expanders and the quest to construct them. The talk will survey how fertile this connection has been to both fields, focusing on recent results.
There are infinitely many irrational values of the zeta function at the odd integers
Applications of zonoids to Asymptotic Geometric Analysis (Yehoram Gordon)
The Kakeya conjecture (Part 1) (University of British Columbia)
Stability of uniqueness results for convex bodies (Rolf Schneider)
Minkowski's existence theorem and some applications
Random Matrices: Gaussian Unitary Ensemble and Beyond (Part 1)
Noncommutative M-structure and the interplay of algebra and norm for operator algebras (David Blecher)
We report on a recent joint paper with Smith and Zarikian, following on from work of the author, Effros and Zarikian on noncommutative M-structure. Certain nonlinear but convex equations play a role. We discuss some extensions of these results, and some related ideas.
Operator spaces as `quantized' Banach spaces (Edward Effros)
In the beginning it appeared that linear spaces of operators would have a theory much like that for Banach spaces. This misperception grew out of a series of remarkable discoveries, such as Arveson's version of the Hahn-Banach Theorem, Ruan's axiomatization of the operator spaces, and the theory of projective and injective tensor products. The problems of using Banach space theory as one's sole guide became apparent when one considered such classical notions as local reflexivity. Owing to the availability of modern operator algebra theory, researchers have made great strides in understanding the beautiful and unexpected nature of these spaces.
Random Matrices and Magic Squares (Alexander Gamburd)
Characteristic polynomials of random unitary matrices have been intensively studied in recent years: by number theorists in connection with the Riemann zeta-function, and by theoretical physicists in connection with Quantum Chaos. In particular, Haake and collaborators have computed the variance of the coefficients of these polynomials and raised the question of computing the higher moments. The answer, obtained in recent joint work with Persi Diaconis, turns out to be intimately related to counting integer stochastic matrices (magic squares).
Free Entropy Dimension and Hyperfinite von Neumann algebras (Kenley Jung)
I will give a general introduction to Voiculescu's notions of free entropy and free entropy dimension and then discuss what they have in store for the most tractable of von Neumann algebras: those which are hyperfinite and have a tracial state.
The central limit procedure for noncommuting random variables and applications (Marius Junge)
We investigate the algebra of central limits of a fixed set of random variables in the (commutative and) noncommutative context and matrix-valued versions thereof. In the noncommutative framework, states instead of traces provide new examples of complex Gaussian variables such that the real part no longer commutes with the imaginary part. Using this procedure, we may embed the operator Hilbert space (a central object in the theory of operator spaces introduced by Pisier) in a noncommutative L1 space and calculate the operator space analogue of the projection constant of the n-dimensional Hilbert space.
A Good formula for noncommutative cumulants (Franz Lehner)
Cumulants linearize convolution of measures. We use a formula of Good to define noncommutative cumulants. It turns out that the essential property needed is exchangeability of random variables. This provides a simple unified method to derive the known examples of cumulants, like free cumulants and various q-cumulants, and will hopefully lead to interesting new examples.
Holomorphic functional calculus and square functions on non-commutative $L_p$-spaces (Christian Le Merdy)
2-point functions for multi-matrix models, and non-crossing partitions in an annulus (Alexandru Nica)
Hilbertian Operator spaces with few completely bounded maps (Eric Ricard)
Can non-commutative $L^p$ spaces be renormed to be stable? (Haskell Rosenthal)
On Real Operator Spaces (Zhong Jin Ruan)
The Role of Maximal $L_p$ Bounds in Quantum Information Theory (Mary Beth Ruskai)
Determinantal Random Point Fields
The purpose of the talk is to give an introduction to determinantal random point fields. Determinantal random point fields appear naturally in random matrix theory, probability theory, quantum mechanics, combinatorics, representation theory and some other areas of mathematics and physics.
The first part of the talk will be devoted to some general results (i.e. existence and uniqueness theorem) and examples. In the second part we will concentrate on the CLT type results for the linear statistics and ergodic properties of the translation-invariant determinantal point fields. Maximization of free entropy On the maximality of subdiagonal algebras Quanhua Xu Université de Franche-Comté We consider the open problem on the maximality of subdiagonal algebras posed by Arveson in 1967. We prove that a subdiagonal algebra is maximal if it is invariant under the modular automorphism group of a normal faithful state. The method of minimal vectors George Androulakis An introduction to the uniform classification of Banach spaces Yoav Benyamini Baire-1 functions and spreading models Vassiliki Farmaki Athens University Selecting unconditional basic sequences Tadek Figiel Polish Academy of Sciences The Banach envelope of Paley-Wiener type spaces Ep for 0<p<1 Weak topologies and properties that are fulfilled almost everywhere Tamara Kuchurenko On Frechet differentiability of Lipschitz functions, part I Joram Lindenstrauss The Hebrew University of Jerusalem On Frechet differentiability of Lipschitz functions, part II The structure of level sets of Lipschitz quotients Beata Randrianantoanina How many operators do there exist on a Banach space? Thomas Schlumprecht Lambda_p sets for some orthogonal systems Lior Tzafriri The Hebrew University Sigma shrinking Markushevich bases and Corson compacts Vaclav Zizler Frontiers of Mathematical Physics, Brane-World and Supersymmetry Physics with Large Extra Dimensions (lecture 1) Ignatios Antoniadis Fixing Runaway Moduli Cliff Burgess Radius-dependent Gauge Coupling Renormalization in AdS5 Kiwoon Choi Gauge Theories of the Symmetric Group in the large N Limit Alessandro D'Adda INFN, Torino Shape versus Volume: Rethinking the Properties of Large Extra Dimensions Keith Dienes Solving the Hierarchy Problem without SUSY or Extra Dimensions: An Alternative Approach Universal Extra Dimensions Bogdan Dobrescu Deconstructing Warped Gauge Theory and Unification Hyung Do Kim KIAS Adding Flavour to ADS/CFT Andreas Karch Little Higgses Emanuel Katz Twisted superspace and Dirac-Kaehler Fermions Noboru Kawamoto What can neutrino oscillation tell us about the possible existence of an extra dimension? C.S. Lam Limitation of Cardy-Verlinde Formula on the Holographic Description of Brane Cosmology Y.S. Myung Inje University Instanton effects in 5d Theories and Deconstruction Erich Poppitz A New Non-Commutative Field Theory Konstantin Savvidis Conformal Invariant String with Extrinsic Curvature Action George Savvidy National Research Center Demokritos Nonplanar Corrections to PP-wave Strings Gordon Semenoff Cosmological Constant Problem in Infinite Volume Extra Dimensions: a Possible Solution Mikhail Shifman Topological Effects in Our Brane World from Extra Dimensions Brane World Cosmology: From Superstring to Cosmic Strings Henry Tye Supersoft Supersymmetry Breaking Neal Weiner International Conference on Robust Statistics (ICORS 2002) Dimension Reduction and Nonparametric Regression: A Robust Combination Claudia Becker University of Dortmund In modern statistical analysis, we often aim at determining a functional relationship between some response and a high-dimensional predictor variable. It is well-known that estimating this relationship from the data in a nonparametric setting can fail due to the curse of dimensionality. 
But a lower dimensional regressor space may be sufficient to describe the relationship of interest. In the following, we consider the two main steps of a combined procedure in this setting: the dimension reduction step and the step of estimating the functional relation in the reduced space. The occurrence of outliers can disturb this process in several ways. When finding the reduced regressor space, the dimension may be wrongly determined. If the dimension is correctly estimated, the space itself may not be found correctly. As a consequence, it may happen that the functional relationship cannot be found, or an incorrect relation is determined. If both dimension and space are correctly identified, outliers may still influence the function estimation. Hence, we aim at constructing robust methods which are able to detect irregularities such as outliers in the data and at the same time can adjust the dimension and estimate the function without being affected by such phenomena.
Robust Inference for the Cox Model (Tadeusz Bednarski, University of Zielona Gora)
Robust Estimators in Partly Linear Models (Graciela Boente)
John Tukey and "Troubled" Time Series Data (David Brillinger)
On various occasions when discussing time series analysis John Tukey made reference to the use of robust methods. In this talk we will mention those remarks of his that we have found and discuss some other methods as well.
On the Bianco-Yohai Estimator for High Breakdown Logistic Regression (Christophe Croux, University of Leuven)
Bianco and Yohai (1996) proposed a highly robust procedure for estimation of the logistic regression model. The results they obtained were very promising. We complement their study by providing a fast and stable algorithm to compute this estimator. Moreover, we discuss the problem of the existence of the estimator. We make a comparison with other robust estimators by means of a simulation study and examples. A discussion of the breakdown point of robust estimators for the logistic regression model will also be given.
Breakdown and groups (Laurie Davies, University of Essen)
The concept of breakdown point was introduced by Hodges (1967) and Hampel (1968, 1971) and still plays an important though at times a controversial role in robust statistics. In practice its use is confined to location, scale and linear regression problems and to functionals which have the appropriate equivariance structure. Attempts to extend the concept to other situations have not been successful. In this talk we clarify the role of the group structure in determining the maximal breakdown point of functionals which have the equivariance structure induced by the group. The analysis suggests that if a problem does not have a sufficiently rich group of transformations under which it remains invariant then there is no canonical definition of breakdown and the highest possible breakdown point will be 1.
Robust Factor Analysis (Peter Filzmoser, Vienna University of Technology)
Two robust approaches to factor analysis are presented and compared. The first one uses a robust covariance matrix for estimating the factor loadings and the specific variances. The second one estimates factor loadings, scores and specific variances directly, using the alternating regression technique.
Straight Talks about Robust Methods (Xuming He)
Instead of presenting another research result, I wish to use this opportunity to initiate some discussions on the views and uses of modern robust statistical methods.
They will reflect some of the questions and concerns that have been nagging me for years, such as:
1. Do we tend to be too demanding when we evaluate a robust procedure?
2. Is computational complexity a major hurdle or is there something more serious?
3. Do asymptotic properties matter?
4. Is the breakdown point a really pessimistic measure of robustness?
5. Should we promote the use of robust methods in exploratory or confirmatory data analysis?
6. Are robust methods needed to handle huge data sets with many variables?
I may augment the discussions with my own consulting experience, where awareness of robustness often plays a very positive role. Please join me in examining those issues with an open mind and maybe we will agree to disagree.
Statistical Analysis of Microarray Data from Affymetrix Gene Chips (Karen Kafadar)
Data obtained from experiments using Affymetrix gene chips are processed and analyzed using the statistical algorithms provided with the product. The details of the algorithms, including the calculations and parameters, are described in mechanical terms in the Appendices to their user's manual (Affymetrix Microarray Suite User Guide, Version 4.0, 2000). I will describe these details using a statistical framework, compare the algorithm with others that have been proposed (Li and Wong, 2002; Efron et al. 2001), and offer modifications that may provide more robust analyses and thus more insightful interpretations of the data.
Approaches to Robust Multivariate Estimation Based on Projections (Ricardo Maronna)
Projections are a useful tool to construct robust estimates of multivariate location and scatter with interesting theoretical and practical properties. In particular: the estimate proposed by Stahel (1981) and Donoho (1982), which was the first equivariant estimate with a high breakdown point for all dimensions; estimates with a maximum bias independent of the dimension, proposed by Maronna, Stahel and Yohai (1992) for scatter and by Tyler (1994) for location, also studied by Adrover and Yohai (2002); and two recent fast proposals for high-dimensional data: one by Peña and Prieto (2001) based on the kurtosis of projections, and another by Maronna and Zamar (2002) based on pairwise robust covariances. Results and relationships among these estimates will be reviewed.
Robust Statistics in Portfolio Optimization (Doug Martin, University of Washington and Insightful)
In this talk we discuss several applications of robust statistics in portfolio optimization, some of which have been only partially developed or are merely ideas of areas for future work. The primary focal points will be (a) the use of influence functions in connection with optimal portfolio quantities of interest, e.g., global minimum variance and associated mean return, tangency portfolio mean and variance, and Sharpe ratio, (b) the use of robust covariance matrix and mean vector estimates in Markowitz optimal portfolios, and (c) robustification of the new conditional value-at-risk (CVaR) portfolio theory due to Rockafellar and Uryasev. A brief tutorial on the CVaR optimality theory will be provided, along with discussion of critical questions related to robustifying this approach.
The Multihalver (Stephan Morgenthaler)
Robust Space Transformations for Distance-based Outlier (Raymond Ng)
In the first part of this talk, we will present the notion of distance-based outliers. This is a nonparametric approach, and is particularly suitable for high dimensional data. We will show a case study based on video trajectory surveillance.
For distance-based outlier detection, there is an underlying multi-dimensional data space in which each tuple/object is represented as a point in the space. We observe that in the presence of variability, correlation, outliers and/or differing scales, we may get unintuitive results if an inappropriate space is used. The fundamental question addressed in the second half of this talk is: "What then is an appropriate space?". We propose using a robust space transformation called the Donoho-Stahel estimator. We will focus on the computation of the transformation. Multivariate Outlier Detection and Cluster Identification David Rocke We examine relationships between the problem of robust estimation of multivariate location and shape and the problem of maximum likelihood assignment of multivariate data to clusters. Recognition of the connections between estimators for clusters and outliers immediately yields one important result that we demonstrate in this paper; namely, outlier detection procedures can be improved by combining them with cluster identification techniques. Using this combined approach, one can achieve practical breakdown values that approach the theoretical limits. We report computational results that demonstrate the effectiveness of this approach. In addition, we provide a new robust clustering method. Resistant Parametric and Nonparametric Modelling in Finance Elvezio Ronchetti We discuss how resistant parametric and nonparametric techniques can be used in the statistical analysis of financial models. As an illustration we re-examine the empirical evidence concerning one factor models for the short rate process and we focus on the estimation of the drift and the volatility. Standard classical parametric procedures are highly unstable in this application. On the other hand, robust procedures deal with deviations from the assumptions on the model and are still reliable and reasonably efficient in a neighborhood of the model. Finally, we show that resistant techniques are necessary also in the nonparametric framework, in particular for reliable bandwidth selection. This is joint work with Rosario Dell'Aquila and Fabio Trojani, University of Southern Switzerland, Lugano. Robustness Against Separation and Outliers in Binary Regression Peter Rousseeuw The logistic regression model is commonly used to describe the effect of one or several explanatory variables on a binary response variable. Here we consider an alternative model under which the observed response is strongly related but not equal to the unobservable true response. We call this the hidden logistic regression (HLR) model because the unobservable true responses act as a hidden layer in a neural net. We propose the maximum estimated likelihood method in this model, which is robust against separation unlike all existing methods for logistic regression. We then construct an outlier-robust modification of this estimator, called the weighted maximum estimated likelihood (WEMEL) method, which is robust against both problems. Estimating the p-values of Robust Tests for the Linear Model Matias Salibian-Barrera There are several proposals of robust tests for the linear model in the literature (see, for example, Markatou, Stahel and Ronchetti, 1991). The finite-sample distributions of these test statistics are not known and their asymptotic distributions have been studied under the assumption that the scale of the errors is known, or that it can be estimated without affecting the asymptotic behaviour of the tests. 
This is in general true when the errors have a symmetric distribution. Bootstrap methods can, in principle, be used to estimate the distribution of these test statistics under less restrictive assumptions. However, robust tests are typically based on robust regression estimates which are computationally demanding, especially with moderate- to high-dimensional data sets. Another problem when bootstrapping potentially contaminated data is that we cannot control the proportion of outliers that might enter the bootstrap samples. This could seriously affect the bootstrap estimates of the distribution of the test statistics, especially in their tails. Hence, the resulting p-value estimates may be critically affected by a relatively small amount of outliers in the original data. In this paper we propose an extension of the Robust Bootstrap (Salibian-Barrera and Zamar, 2002) to obtain a fast and robust method to estimate p-values of robust tests for the linear model under less restrictive assumptions.
Computational Issues in Robust Statistics (Arnold J. Stromberg)
Hundreds, and perhaps thousands, of papers have been published in the area of robust statistics, yet robust methods are still not used routinely by most applied statisticians. An important reason for this is the many computational issues in robust statistics. Most applied statisticians agree conceptually that robust methods are a good idea, but they fail to use them for a number of reasons. Often, software is not available. Other times, as in linear regression, there are so many choices that it is not clear which estimator to use. In still other situations, the data sets are too big for robust techniques to handle. This paper discusses these issues and others.
High Breakdown Point Multivariate M-Estimation (David Tyler)
In this talk, a general study of the properties of the M-estimates of multivariate location and scatter with auxiliary scale proposed in Tatsuoka and Tyler (2000) is presented. This study provides a unifying treatment for some of the high breakdown point methods developed for multivariate statistics, as well as a unifying framework for comparing these methods. The multivariate M-estimates with auxiliary scale include as special cases the minimum volume ellipsoid estimates [Rousseeuw (1985)], the multivariate S-estimates [Davies (1987)], the multivariate constrained M-estimates [Kent and Tyler (1996)], and the recently introduced multivariate MM-estimates [Tatsuoka and Tyler (2000)]. The results obtained for the multivariate MM-estimates, such as their breakdown point, influence function and asymptotic distribution, are entirely new. The breakdown points of the M-estimates of multivariate location and scatter for fixed scale are also derived. This generalizes the results on the breakdown points of the univariate redescending M-estimates of location with fixed scale given by Huber (1984).
Semiparametric Random Effects Models for Longitudinal Data (Jane-Ling Wang)
A class of semiparametric regression models to describe the influence of covariates on a longitudinal (or functional) response is described. The model includes indices, which are linear functions of the covariates, unknown random functions of the indices, and unknown variance functions. They are thus semiparametric random effects models with many parsimonious submodels. The parametric components of the indices are estimated via quasi-score estimating equations, and the unknown smooth random and variance functions are estimated nonparametrically.
Consistency of the procedures is obtained, and the procedure is illustrated with fecundity data for 1000 female Mediterranean fruit flies. Robust, Sequential Design Strategies Doug Wiens High Breakdown Point Robust Regression with Censored Data Victor Yohai Robustness Issues for Confidence Intervals Julie Zhou In many inference problems, it is of interest to compute confidence intervals or regions for the parameters of interest in the model under consideration. As with point estimation, it is important to know about the robustness of the confidence intervals. This involves evaluating the performance of the interval in terms of coverage and length in the face of small perturbations of the data or the model. Ideally we would like a procedure which gives efficient intervals and accurate coverage in the neighborhood of the model. In this talk, we will address the issues of robustness for confidence intervals and assess the robustness of some particular intervals. We will propose several measures including empirical influence function, gross-error sensitivity, and finite-sample breakdown point to study the robustness of confidence intervals. Those measures are applied to examine the robustness of unconditional intervals in the regression model for both the regression parameters and the scale and conditional intervals. Pacific Northwest String Theory Seminar Non-commutative Space And Chan-Paton Algebra in Open String Field Algebra Kazuyuki Furuuchi PIMS, University of British Columbia Adding Flavor to AdS/CFT Localized Closed String Tachyons David Kutasov Extension of Boundary String Field Theory on Disc and RP2 Worldsheet Geometries Shin Nakamura Comments on Vacuum String Field Theory Kazumi Okuyama Wilson Loops in N=4 Super Yang-Mills Theory Jan Plefka AEI, Potsdam The Hierarchy Unification and the Entropy of de Sitter Space Nonperturbative Nonrenormalization in a Non-supersymmetric Nonlocal String Theory Eva Silverstein Index Puzzles in SUSY gauge mechanics Matthias Staudacher Quantum Gravity in dS-Space? 
Leonard Susskind
Thematic Programme on Nonlinear Partial Differential Equations
Recent Progress in Complex Geometry - Part 1 (unavailable), Part 2, Part 3, Part 4 (Gang Tian). Date: August 14-16, 2001
Geometric Variational Problems - Part 1, Part 2, Part 3, Part 4. Date: August 8-10, 2001
Variational problems in relativistic quantum mechanics: Dirac-Fock equations - Part 1, Part 2, Part 3, Part 4 (Eric Séré, Université Paris IX). Date: August 2, 4, 7, 2001
Energy minimizers of the copolymer problem - Part 1, Part 2, Part 3, Part 4 (CNRS Nice, on leave from Université Paris 6). Date: July 30, 31, 2001
Variational problems related to operators with gaps and applications in relativistic quantum mechanics - Part 1, Part 2, Part 3 (Maria Esteban). Date: July 30, 31 and August 1, 2001
On De Giorgi's conjecture in dimensions 4 and 5 (Nassif Ghoussoub)
Dynamics of Ginzburg-Landau and related equations - Part 1, Part 2, Part 3, Part 4 (Fang Hua Lin, Courant Institute)
Diffusions, cross-diffusions, and their steady states - Part 1, Part 2 (Changfeng Gui). Date: July 23 - 24, 2001
Diffusion & Cross Diffusion in Pattern Formation - Part 1, Part 2 (Wei-Ming Ni)
About the De Giorgi conjecture in dimensions 4 and 5
Propagation of fronts in excitable media - Part 1, Part 2, Part 3, Part 4 (Henri Berestycki, Université Paris VI)
Ergodicity, singular perturbations, and homogenization in the HJB equations of stochastic control (Martino Bardi)
Fully nonlinear stochastic partial differential equations - Theory and Applications - Part 1, Part 2, Part 3, Part 4 (Panagiotis Souganidis). Date: July 3 - 4, 2001
Frontiers of Mathematical Physics, Particles, Fields and Strings
Noncommutative Supersymmetric Tubes (Dongsu Bak, University of Seoul). Location: SFU
D-branes on Orbifolds: The Standard Model (Robert Leigh)
Orientifolds, Conifolds and Quantum Deformations (Soonkeon Nam)
PIMS-MITACS Workshop on Inverse Problems and Imaging
Sturm-Liouville problems with eigenvalue dependent and independent conditions (Paul Binding)
We consider Sturm-Liouville problems with boundary conditions affinely dependent on the eigenvalue parameter. These are classified into three types, one being the standard case where the eigenvalue does not appear explicitly. We exhibit transformations between problems with these different types of boundary condition, preserving all eigenvalues and norming constants, except possibly two. In consequence, we extend some standard inverse Sturm-Liouville results to cases with eigenvalue dependent boundary conditions.
Wavetracing: Ray tracing for the propagation of band-limited signals for traveltime tomography (Kenneth P. Bube)
Many seismic imaging techniques require computing traveltimes and travel paths. Methods to compute raypaths are usually based on high-frequency approximations. In some situations like head waves, these raypaths minimize traveltime, but are not paths along which most of the energy travels. We present an approach to computing raypaths, using a modification of ray bending which we call "wavetracing," that computes raypaths and traveltimes that are more consistent with the paths and times for the band-limited signals in real seismic data. Wavetracing shortens the raypath, while still keeping the raypath within the Fresnel zone for a characteristic frequency of the signal. This is joint work with John Washbourne of TomoSeis, Inc.
Synthetic Aperture Radar (Margaret Cheney, Department of Mathematical Sciences)
In Synthetic Aperture Radar (SAR) imaging, a plane or satellite carrying an antenna flies along a (usually straight) flight track.
The antenna emits pulses of electromagnetic radiation; this radiation scatters off the terrain and is received back at the same antenna. These signals are used to produce an image of the terrain. The problem of producing a high-resolution image from SAR data is very similar to problems that arise in geophysics and tomography; techniques from seismology and X-ray tomography are now making their way into the SAR community. This talk will outline a mathematical model for the SAR imaging problem and discuss some of the associated problems.
Optimal linear resolution and conservation of information (Keith S. Cover)
In linear inverse theory, when trying to estimate a model from data, it is widely advocated in the literature that finding a model which fits the data is the method of choice. However, several common algorithms yield estimates with optimal or near optimal linear resolution that do not fit the data. Prominent examples are the windowed discrete Fourier transform and algorithms following the Backus and Gilbert method. The Backus and Gilbert algorithms are often avoided because of uncertainties about how to interpret estimates that do not fit the data. It is shown that algorithms with linear resolution, provided they can be expressed as a matrix multiplication which is invertible, produce an estimate which, along with its resolution functions and noise statistics, is a complete summary of all the models that fit the data. Such estimates also completely conserve the information provided by the data. If the resulting linear resolution of the algorithm is optimal or near optimal, such estimates also effectively communicate the inherent nonuniqueness of the solution to an interpreter. This simple but novel theoretical finding will provide a valuable framework in which to interpret the results of linear inversion algorithms, including those of the Backus and Gilbert type.
Microlocal Analysis and Seismic Inverse Scattering in Anisotropic Elastic Media (Maarten V. de Hoop)
A level set method for shape reconstruction in electromagnetic cross-borehole tomography (Oliver Dorn)
In geophysical applications, it is often the case that the shapes of some obstacles in the earth (e.g. pollutant plumes) have to be monitored from electromagnetic data. These problems can be considered as (ill-posed) nonlinear inverse problems, where typically iterative solution techniques and some regularization are required. Starting from some simple initial guess for the shapes, these shapes evolve during the reconstruction process in order to minimize a suitably chosen cost functional. Since the geometries of the hidden objects can be quite complicated and are not known a priori, a solution algorithm has to be able to model changes in the geometries and in the topologies of these objects during the reconstruction process reliably. We have developed a shape reconstruction algorithm which uses a level set representation for modelling the evolving shapes during the reconstructions. The algorithm, as well as the results of various numerical experiments, are discussed in the talk.
Applications of Sampling Theory in Tomography
Computed tomography produces images of opaque objects by reconstructing a density function f from measurements of its line integrals. We describe how Shannon Sampling Theory can be utilized to find the minimum number of measurements needed to achieve a desired resolution in the reconstructed image.
An error analysis and numerical experiments are presented showing how to achieve high-quality images from a minimal amount of data.
Geometric singularities in tomography
Statistical Estimation of the Parameters of a PDE (Colin Fox, University of Auckland)
Non-invasive imaging remains a difficult problem in those cases where the forward map can only be adequately simulated by solving the appropriate partial-differential equation (PDE) subject to boundary conditions. However, in those problems, the inherent uncertainty in images recovered from actual measurements may be quantified using knowledge of the forward map and the measurement process. We demonstrate image recovery for the problem of electrical conductivity imaging by sampling the distribution of all possible images and calculating summary statistics. This route to solving inverse problems has a number of advantageous points, including the ability to quantify accuracy of the recovered image, and a straightforward way to include model complexity such as complete descriptions of real electrodes.
Geophysical Inversion in the new millennium (Larry Lines)
Geophysicists have been working on solutions to the inverse problem since the dawn of our profession. This presentation is an evaluation of inversion's present state and abbreviates an evaluation given by the authors in the January 2001 issue of Geophysics. Geophysical interpreters currently infer subsurface properties on the basis of observed data sets, such as seismograms or potential field recordings. A rough model of the process that produces the recorded data resides within the interpreter's brain; the interpreter then uses this rough mental model to reconstruct subsurface properties from the observed data. In modern parlance, the inference of subsurface properties from observed data is identified with the solution of a so-called "inverse problem". The currently used geophysical processing techniques can be viewed as attempts to solve the ubiquitous inverse problem: we have geophysical data, we have an abstract model of the process that produces the data, and we seek algorithms that allow us to invert for the model parameters. The theoretical and computational aspects of inverse theory will gain importance as geophysical processing technology continues to evolve. Iterative geophysical inversion is not yet in widespread use in the exploration industry today because the computing resources are barely adequate for the purpose. After all, it is only now that 3-D prestack depth migration has become economically feasible, and the day will surely not be far off when the inversion algorithms described above will come into their own, enabling the geophysicist to invert observations not only for a structure's subsurface geometry, but also for a growing number of detailed physical, chemical, and geological features. The day that such operations become routine will also be the day that geophysical inverse theory has come into its own in both mineral and petroleum exploration. coauthor: Sven Treitel
Approximate Fourier integral wavefield extrapolators for heterogeneous, anisotropic media (Gary Margrave)
Seismic imaging uses wavefield data recorded on the earth's surface to construct images of the internal structure. A key part of this process is the extrapolation of wavefield data into the earth's interior. Most commonly, wavefield extrapolation is based on ray theory and incorporates a high-frequency approximation that allows the development of analytic expressions.
This leads to computationally efficient imaging algorithms that incorporate both the advantages and the limitations of raytracing. An alternative approach is to perform a plane-wave decomposition of the recorded data and extrapolate each plane wave independently. For homogeneous media, the Fourier transform can be used for the plane-wave decomposition and phase shifts propagate the plane waves. We explore an approximate extension of this concept to heterogeneous media that uses pseudodifferential operator theory. In heterogeneous media, a plane wave does not remain planar as it propagates, so there is not a one-to-one correspondence between plane-wave spectra at two different depth levels. A Fourier integral operator that performs the appropriate plane-wave mixing can be developed from pseudo-differential operator theory applied to the variable-coefficient scalar wave equation. We discuss the derivation of the operator and its basic properties. In particular, we demonstrate that the transpose of the operator is also a viable Fourier integral wavefield extrapolator with a first-order error that opposes the original operator. Thus a simple symmetric operator, the average of our first extrapolator and its transpose, is more accurate. We show that our first operator performs a spatially nonstationary phase shift that is simultaneous with the inverse Fourier transformation. The transpose operator also performs a nonstationary phase shift but simultaneously with the forward Fourier transform. We present both numerical experiments and theoretical arguments to characterize our results and discuss their possible extensions. coauthor: Michael Lamoureux, Department of Mathematics and Statistics, University of Calgary
Simulation studies on Bioelectric and Biomagnetic Reconstruction of Currents on Curved surfaces and in Spherical Volume conductors (Ceon Ramon)
Reconstruction and resolution enhancement of the current distribution on curved surfaces and in volume conductors from the bioelectric or biomagnetic data is proposed. Applications will be in the reconstruction of current distribution in the heart wall or the localization of the sources in the brain. Our image reconstruction procedure is divided into two steps. First, the bioelectric or biomagnetic inverse problem is solved by use of the weighted pseudo-inverse techniques to reconstruct an initial image of the current distribution on a curved surface or in a volume conductor from a given electric potential or magnetic field profile. The current distribution thus obtained has poor resolution; it can barely resemble the original shape of the current distribution. The second step improves the resolution of the reconstructed image by using the method of alternating projections. The procedure assumes that images can be represented by line-like elements and involves finding the line-like elements based on the initial image and projecting back onto the original solution space. Simulation studies were performed on a set of parallel conductors on the curved surface modeled as set of multiple closely resemble the original shape of the conductors. Simulation studies were also performed for distributed dipolar sources in a spherical volume conductor. Resonation was performed with a 3-D alternating projection technique developed by us. Position of the reconstructed dipoles matched closely with the original dipoles. However, slight error was found in matching the dipolar intensity.
Coauthors: Akira Ishimaru - Dept. of Electrical Engineering, University of Washington; Robert Marks - Dept. of Electrical Engineering, University of Washington; Joreg Schrieber - Biomagnetics Center, F. S. University, Jena, Germany; Jens Haueisen - Biomagnetics Center, F. S. University, Jena, Germany; Paul Schimpf - Dept of Computer Science and Electrical Engineering, Washington State University
Wave equation least-squares Migration/Inversion (Mauricio D. Sacchi; coauthor: Henning Kuehl, Department of Physics, University of Alberta)
Least-squares (LS) migration based on Kirchhoff modeling/migration operators has been proposed in the literature to account for uneven subsurface illumination and to reduce imaging artifacts due to irregularly and/or coarsely sampled seismic wavefields (Nemeth et al., 1999; Duquet et al., 2000). In this presentation we show that least-squares migration can also be used to improve the performance of generalized phase-shift pre-stack Double-Square-Root (DSR) migration. In this case, rather than estimating an image of the subsurface by downward propagating wavefields measured at z=0, the image is estimated by solving a linear inverse problem. The solution of this problem requires the specification of two operators: a forward (modeling) operator and its adjoint (migration). The image can be retrieved using the method of conjugate gradients with different regularization schemes. In particular, we have developed a regularization strategy that operates on common angle images. Simulations with complete and incomplete data were used to test the feasibility of the proposed algorithm.
Duquet, B., Marfurt, J.K., and Dellinger, J.A., 2000, Kirchhoff modeling, inversion for reflectivity, and subsurface illumination, Geophysics, 65, 1195-1209.
Nemeth, T., Wu, C., and Schuster, G.T., 1999, Least-squares migration of incomplete reflection data, Geophysics, 64, 208-221.
PIMS Pacific Northwest Seminar on String Theory
Tachyon condensation in open string field theory (Washington Taylor)
Holographic renormalization (Kostas Skenderis, Princeton University)
String theoretic mechanisms for spacetime singularity resolution
D-branes as noncommutative solitons: an algebraic approach (Emil Martinec)
Strings in AdS_3 and the SL(2,R) WZW model (Hiroshi Ooguri)
Thematic Programme on Graph Theory and Combinatorial Optimization
Random Homomorphisms (Peter Winkler)
Let H be a fixed small graph, possibly with loops, and let G be a (possibly infinite) graph. Let f be chosen from the set Hom(G,H) of all homomorphisms from G to H. If H is Kn, f is a proper coloring of G; if H consists of two adjacent vertices, one of which is looped, f is (in effect) an independent set in G. These and other H give rise to "hard constraint" models of interest in statistical mechanics. One way to phrase the physicists' key question is: when G is infinite, is there a canonical way to pick f uniformly at random? When G is a Cayley tree, f can be generated by a branching random walk on H, and using this approach, we are able to characterize the H for which Hom(G,H) always has a unique "nice" probability distribution. We will sketch the proof but spend equal time illustrating the bizarre things that can happen when H is not so well behaved. Reference: Graham R. Brightwell and Peter Winkler, Graph homomorphisms and phase transitions, J. Comb. Theory Series B (1999) 221--262.
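To make the objects in Peter Winkler's abstract above concrete, here is a small brute-force Python sketch (an editorial illustration, not part of the talk) that enumerates Hom(G,H) for finite graphs. With H = K_3 it produces exactly the proper 3-colorings of G; with H a single edge whose second endpoint carries a loop, the homomorphisms correspond to the independent sets of G.

```python
from itertools import product

def homomorphisms(G_vertices, G_edges, H_vertices, H_edges):
    """All maps f: V(G) -> V(H) such that {f(u), f(v)} is an edge of H for every edge uv of G."""
    H_adj = {frozenset(e) for e in H_edges}   # a loop (v, v) becomes the singleton {v}
    homs = []
    for values in product(H_vertices, repeat=len(G_vertices)):
        f = dict(zip(G_vertices, values))
        if all(frozenset({f[u], f[v]}) in H_adj for u, v in G_edges):
            homs.append(f)
    return homs

# G = path on three vertices 1-2-3.
G_V, G_E = [1, 2, 3], [(1, 2), (2, 3)]
# H = K_3: homomorphisms are the proper 3-colorings of G (there are 12).
print(len(homomorphisms(G_V, G_E, [0, 1, 2], [(0, 1), (0, 2), (1, 2)])))
# H = edge {a, b} with a loop at b: the preimage of the unlooped vertex a is an
# independent set of G, so the count equals the number of independent sets (5 here).
print(len(homomorphisms(G_V, G_E, ["a", "b"], [("a", "b"), ("b", "b")])))
```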
Acyclic coloring, strong coloring, list coloring and graph embedding
I will discuss various coloring problems and the relations among them. A representative example is the conjecture, studied in a joint paper with Sudakov and Zaks, that the edges of any simple graph with maximum degree d can be colored by at most d+2 colors with no pair of adjacent edges of the same color and no 2-colored cycle.
A three-color theorem for some graphs evenly embedded on orientable surfaces (Joan Hutchinson)
The easiest planar graph coloring theorem states that a graph in the plane can be 2-colored if and only if every face is bounded by an even number of edges; call such a graph "evenly embedded." What is the chromatic number of evenly embedded graphs on other surfaces? Three, provided the surface is orientable and the graph is embedded with all noncontractible cycles sufficiently long. We give a new proof of this result, using a theorem from Robertson-Seymour graph minors work and a technique of Hutchinson, Richter, and Seymour in a proof of a related 4-color theorem for Eulerian triangulations.
Colourings and orientations of graphs (Adrian Bondy, Université Claude Bernard)
To each proper colouring c:V -> {1,2,...,k} of the vertices of a graph G, there corresponds a canonical orientation of the edges of G, edge uv being oriented from u to v if and only if c(u) > c(v). This simple link between colourings and orientations is but the tip of the iceberg. The ties between the two notions are far more profound and remarkable than are suggested by the above observation. The aim of this talk is to describe some of these connections. (A small illustration of the basic colouring-to-orientation construction appears at the end of this listing.)
Integral polyhedra related to even-cycle and even-cut matroids (Bertrand Guenin)
Amalgamations of Graphs - Lecture 1, Part 1, Part 2, Lecture 2, Part 1, Part 2 (Chris Rodger, Auburn University). Date: June 19 - June 30, 2000
TBA - Lecture 1 Part 1, Part 2, Lecture 2 (Ron Gould)
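The colouring-to-orientation correspondence in Adrian Bondy's abstract is easy to make concrete. The following short Python fragment is an illustrative sketch only (not material from the talk), and the graph and colouring in it are made-up examples: it orients each edge uv from u to v exactly when c(u) > c(v), which always yields an acyclic orientation, since colours strictly decrease along every directed edge.

```python
def orient_by_coloring(edges, c):
    """Orient edge uv from u to v iff c(u) > c(v) (c is assumed to be a proper coloring)."""
    return [(u, v) if c[u] > c[v] else (v, u) for u, v in edges]

# A proper 3-colouring of the 4-cycle 1-2-3-4.
c = {1: 1, 2: 2, 3: 1, 4: 3}
print(orient_by_coloring([(1, 2), (2, 3), (3, 4), (4, 1)], c))
# -> [(2, 1), (2, 3), (4, 3), (4, 1)]: every edge points from the larger colour to the
#    smaller one, so no directed cycle is possible.
```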
Mathematical Programming, July 2019, Volume 176, Issue 1–2, pp 403–427. https://doi.org/10.1007/s10107-018-1340-y
Behavior of accelerated gradient methods near critical points of nonconvex functions
Michael O'Neill, Stephen J. Wright
Full Length Paper, Series B
We examine the behavior of accelerated gradient methods in smooth nonconvex unconstrained optimization, focusing in particular on their behavior near strict saddle points. Accelerated methods are iterative methods that typically step along a direction that is a linear combination of the previous step and the gradient of the function evaluated at a point at or near the current iterate. (The previous step encodes gradient information from earlier stages in the iterative process.) We show by means of the stable manifold theorem that the heavy-ball method is unlikely to converge to strict saddle points, which are points at which the gradient of the objective is zero but the Hessian has at least one negative eigenvalue. We then examine the behavior of the heavy-ball method and other accelerated gradient methods in the vicinity of a strict saddle point of a nonconvex quadratic function, showing that both methods can diverge from this point more rapidly than the steepest-descent method.
Keywords: Accelerated gradient methods, Nonconvex optimization
This work was supported by NSF Awards IIS-1447449, 1628384, 1634597, and 1740707; AFOSR Award FA9550-13-1-0138; and Subcontract 3F-30222 from Argonne National Laboratory. Part of this work was done while the second author was visiting the Simons Institute for the Theory of Computing, and partially supported by the DIMACS/Simons Collaboration on Bridging Continuous and Discrete Optimization through NSF Award CCF-1740425.
Mathematics Subject Classification: 90C26, 49M30
We are grateful to Bin Hu for his advice and suggestions on the manuscript. We are also grateful to the referees and editor for helpful suggestions.
Appendix A: Properties of the sequence \(\{t_k\}\) defined by (51)
In this appendix we show that the following two properties hold for the sequence defined by (51):
$$\begin{aligned} \frac{t_{k-1}-1}{t_k} \;\; \text{ is an increasing nonnegative sequence} \end{aligned}$$
and
$$\begin{aligned} \lim _{k \rightarrow \infty } \frac{t_{k-1}-1}{t_k} = 1. \end{aligned}$$
We begin by noting two well-known properties of the sequence \(t_k\) (see for example [4, Section 3.7.2]):
$$\begin{aligned} t_k^2 - t_k = t_{k-1}^2 \end{aligned}$$
and
$$\begin{aligned} t_k \ge \frac{k+1}{2}. \end{aligned}$$
To prove that \(\frac{t_{k-1}-1}{t_k}\) is monotonically increasing, we need
$$\begin{aligned} \frac{t_{k-1}-1}{t_k} = \frac{t_{k-1}}{t_k} - \frac{1}{t_k} \le \frac{t_k}{t_{k+1}} - \frac{1}{t_{k+1}} = \frac{t_k - 1}{t_{k+1}}, \quad k=1,2,\ldots . \end{aligned}$$
Since \(t_{k+1} \ge t_k\) (which follows immediately from (51)), it is sufficient to prove that
$$\begin{aligned} \frac{t_{k-1}}{t_k} \le \frac{t_k}{t_{k+1}}. \end{aligned}$$
By manipulating this expression and using (54), we obtain the equivalent expression
$$\begin{aligned} t_{k-1} \le \frac{t_k^2}{t_{k+1}} = \frac{t_{k+1}^2 - t_{k+1}}{t_{k+1}} = t_{k+1} - 1. \end{aligned}$$
By definition of \(t_{k+1}\), we have
$$\begin{aligned} t_{k+1} = \frac{\sqrt{4t_k^2 + 1} + 1}{2} \ge t_k + \frac{1}{2} = \frac{\sqrt{4t_{k-1}^2 + 1} + 1}{2} + \frac{1}{2} \ge t_{k-1} + 1. \end{aligned}$$
Thus (56) holds, so the claim (52) is proved. The sequence \(\{ (t_{k-1}-1)/t_k \}\) is nonnegative, since \((t_0-1)/t_1 = 0\). Now we prove (53).
We can lower-bound \((t_{k-1}-1)/t_k\) as follows:
$$\begin{aligned} \frac{t_{k-1}-1}{t_k}&= \frac{2(t_{k-1} -1)}{\sqrt{4 t_{k-1}^2 +1} + 1}\ge \frac{2(t_{k-1} -1)}{\sqrt{4 t_{k-1}^2} + 2} \nonumber \\&= \frac{2(t_{k-1} -1)}{2 (t_{k-1} + 1)} = 1 - \frac{2}{t_{k-1}+1}. \end{aligned}$$
For an upper bound, we have from \(t_k \ge t_{k-1}\) that
$$\begin{aligned} \frac{t_{k-1} -1}{t_k} \le \frac{t_{k-1}}{t_k} \le 1. \end{aligned}$$
Since \(t_{k-1} \rightarrow \infty \) (because of (55)), it follows from (57) and (58) that (53) holds. (A short numerical check of (52) and (53) appears after the reference list below.)
References
1. Attouch, H., Cabot, A.: Convergence rates of inertial forward–backward algorithms. SIAM J. Optim. 28(1), 849–874 (2018)
2. Attouch, H., Goudou, X., Redont, P.: The heavy ball with friction method, I. The continuous dynamical system: global exploration of the local minima of a real-valued function by asymptotic analysis of a dissipative dynamical system. Commun. Contemp. Math. 2(01), 1–34 (2000)
3. Attouch, H., Peypouquet, J.: The rate of convergence of Nesterov's accelerated forward-backward method is actually faster than \(1/k^2\). SIAM J. Optim. 26(3), 1824–1834 (2016)
4. Bubeck, S.: Convex optimization: algorithms and complexity. Found. Trends Mach. Learn. 8(3–4), 231–357 (2015)
5. Chambolle, A., Dossal, Ch.: On the convergence of the iterates of the "fast iterative shrinkage/thresholding algorithm". J. Optim. Theory Appl. 166(3), 968–982 (2015)
6. Du, S.S., Jin, C., Lee, J.D., Jordan, M.I., Singh, A., Poczos, B.: Gradient descent can take exponential time to escape saddle points. In: Guyon, I., Luxburg, U.V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., Garnett, R. (eds.) Advances in Neural Information Processing Systems, vol. 30, pp. 1067–1077. Curran Associates Inc, Red Hook (2017)
7. Ghadimi, S., Lan, G.: Accelerated gradient methods for nonconvex nonlinear and stochastic programming. Math. Program. 156(1–2), 59–99 (2016)
8. Jin, C., Netrapalli, P., Jordan, M.I.: Accelerated gradient descent escapes saddle points faster than gradient descent. arXiv preprint arXiv:1711.10456 (2017)
9. Lee, J.D., Simchowitz, M., Jordan, M.I., Recht, B.: Gradient descent only converges to minimizers. JMLR Workshop Conf. Proc. 49(1), 1–12 (2016)
10. Li, H., Lin, Z.: Accelerated proximal gradient methods for nonconvex programming. In: Cortes, C., Lawrence, N.D., Lee, D.D., Sugiyama, M., Garnett, R. (eds.) Advances in Neural Information Processing Systems, vol. 28, pp. 379–387. Curran Associates Inc, Red Hook (2015)
11. Nesterov, Y.: A method for unconstrained convex problem with the rate of convergence \(O(1/k^2)\). Dokl. AN SSSR 269, 543–547 (1983)
12. Nesterov, Y.: Introductory Lectures on Convex Optimization: A Basic Course. Springer, New York (2004)
13. Polyak, B.T.: Introduction to Optimization. Optimization Software (1987)
14. Recht, B., Wright, S.J.: Nonlinear Optimization for Machine Learning (2017). Manuscript in preparation
15. Shub, M.: Global Stability of Dynamical Systems. Springer, Berlin (1987)
16. Tseng, P.: On accelerated proximal gradient methods for convex-concave optimization. Technical report, Department of Mathematics, University of Washington (2008)
17. Zavriev, S.K., Kostyuk, F.V.: Heavy-ball method in nonconvex optimization problems. Comput. Math. Model. 4(4), 336–341 (1993)
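The two appendix properties are easy to check numerically. The short Python script below is an editorial sketch, not part of the paper; it assumes \(t_0 = 1\) (consistent with the remark that \((t_0-1)/t_1 = 0\)) and iterates the recurrence \(t_{k+1} = (\sqrt{4t_k^2+1}+1)/2\) displayed in the appendix, confirming that the ratios \((t_{k-1}-1)/t_k\) are nondecreasing and approach 1.

```python
import math

# t_{k+1} = (sqrt(4 t_k^2 + 1) + 1) / 2, starting from t_0 = 1 (so (t_0 - 1)/t_1 = 0).
t_prev = 1.0                                        # plays the role of t_{k-1}
t_curr = (math.sqrt(4 * t_prev ** 2 + 1) + 1) / 2   # plays the role of t_k
ratios = []
for _ in range(200):
    ratios.append((t_prev - 1) / t_curr)            # the quantity in (52) and (53)
    t_prev, t_curr = t_curr, (math.sqrt(4 * t_curr ** 2 + 1) + 1) / 2

# Property (52): the ratios are nonnegative and nondecreasing.
assert all(r >= 0 for r in ratios)
assert all(b >= a for a, b in zip(ratios, ratios[1:]))
# Property (53): the ratios creep up toward 1, roughly like 1 - 2/(t_{k-1} + 1), cf. (57).
print(ratios[0], ratios[9], ratios[-1])   # 0.0, then values increasing toward 1
```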
polymake wiki The references are sorted by date of appearance, the most recent papers are at the top. Papers about polymake Benjamin Assarf, Ewgenij Gawrilow, Katrin Herr, Michael Joswig, Benjamin Lorenz, Andreas Paffenholz, Thomas Rehn. Computing convex hulls and counting integer points with polymake. Math. Program. Comput (1–38). Springer, Berlin/Heidelberg, 2017. Sven Herrmann, Michael Joswig, Marc E. Pfetsch. Computing the bounded subcomplex of an unbounded polyhedron. Comput. Geom (541–551). Elsevier (North-Holland), Amsterdam, 2013. Michael Joswig, Andreas Paffenholz. Defect polytopes and counter-examples with polymake. ACM Commun. Comput. Algebra (177–179). Association for Computing Machinery (ACM), New York, NY, 2011. Ewgenij Gawrilow, Michael Joswig, Thilo Rörig, Nikolaus Witte. Drawing polytopal graphs with polymake. Comput. Vis. Sci (99–110). Springer, Berlin/Heidelberg, 2010. Michael Joswig, Benjamin Müller, Andreas Paffenholz. polymake and lattice polytopes (491–502). Nancy: The Association. Discrete Mathematics & Theoretical Computer Science (DMTCS), 2009. Ewgenij Gawrilow, Michael Joswig. Geometric reasoning with polymake (37-52). Gesellschaft für wissenschaftliche Datenverarbeitung mbh Göttingen, 2005. Ewgenij Gawrilow, Michael Joswig. polymake: an approach to modular software design in computational geometry (222–231). New York, NY: Association for Computing Machinery (ACM), 2001. Ewgenij Gawrilow, Michael Joswig. polymake: a framework for analyzing convex polytopes (43–73). Basel: Birkhäuser, 2000. Publications with references to polymake Joseph Doolittle, Eran Nevo, Guillermo Pineda-Villavicencio, Julien Ugon, David Yost. On the reconstruction of polytopes. Discrete Comput. Geom (285–302). Springer US, New York, NY, 2019. Cesar Ceballos, Arnau Padrol, Camilo Sarmiento. Geometry of \(\nu \)-Tamari lattices in types \(A\) and \(B\). Trans. Am. Math. Soc (2575–2622). American Mathematical Society (AMS), Providence, RI, 2019. Jörg Grande. Red-green refinement of simplicial meshes in \(d\) dimensions. Math. Comput (751–782). American Mathematical Society (AMS), Providence, RI, 2019. Simon Hampe, Michael Joswig, Benjamin Schröter. Algorithms for tight spans and tropical linear spaces. J. Symb. Comput (116–128). Elsevier (Academic Press), London, 2019. Maria Angelica Cueto, Hannah Markwig. Tropical geometry of genus two curves. J. Algebra (457–512). Elsevier (Academic Press), San Diego, CA, 2019. Holger Eble, Michael Joswig, Lisa Lamberti, William B. Ludington. Cluster partitions and fitness landscapes of the Drosophila fly microbiome. J. Math. Biol (861–899). Springer, Berlin/Heidelberg, 2019. Nikolai Yu. Zolotykh, Sergei I. Bastrakov. Two variations of graph test in double description method. Comput. Appl. Math (9). Springer, Berlin/Heidelberg; Sociedade Brasileira de Matemática Aplicada e Computacional (SBMAC), São Carlos, 2019. Benjamin Schröter. Multi-splits and tropical linear spaces from nested matroids. Discrete Comput. Geom (661–685). Springer US, New York, NY, 2019. Charles Jordan, Michael Joswig, Lars Kastner. Parallel enumeration of triangulations. Electron. J. Comb (research paper p3.6, 27). Prof. André Kündgen c/o California State University San Marcos, Deptartment of Mathematics, San Marcos, CA, 2018. Michael Joswig, Lars Kastner. New counts for the number of triangulations of cyclic polytopes (264–271). Springer, Cham, 2018. Ali Alilooee, Ivan Soprunov, Javid Validashti. Generalized multiplicities of edge ideals. J. Algebr. Comb (441–472). 
Springer US, New York, NY, 2018. Benjamin Assarf, Michael Joswig, Julian Pfeifle. Webs of stars or how to triangulate free sums of point configurations. J. Comb. Theory, Ser. A (183–214). Elsevier (Academic Press), San Diego, CA, 2018. Bo Lin, Ruriko Yoshida. Tropical Fermat-Weber points. SIAM J. Discrete Math (1229–1245). Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 2018. Francesco Grande, Arnau Padrol, Raman Sanyal. Extension complexity and realization spaces of hypersimplices. Discrete Comput. Geom (621–642). Springer US, New York, NY, 2018. Claudio Fontanari, Riccardo Ghiloni, Paolo Lella. A computational approach to the ample cone of moduli spaces of curves. Int. J. Algebra Comput (37–51). World Scientific, Singapore, 2018. Michael Joswig, Benjamin Schröter. The degree of a tropical basis. Proc. Am. Math. Soc (961–970). American Mathematical Society (AMS), Providence, RI, 2018. Xavier Allamigeon, Pascal Benchimol, Stéphane Gaubert, Michael Joswig. Log-barrier interior point methods are not strongly polynomial. SIAM J. Appl. Algebra Geom (140–178). Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 2018. Sören Berg, Katharina Jochemko, Laura Silverstein. Ehrhart tensor polynomials. Linear Algebra Appl (72–93). Elsevier (North-Holland), New York, NY, 2018. Simon Telen, Bernard Mourrain, Marc Van Barel. Solving polynomial systems via truncated normal forms. SIAM J. Matrix Anal. Appl (1421–1447). Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 2018. Anna Seigal, Guido Montúfar. Mixtures and products in two graphical models. J. Algebr. Stat (1–20). 2018. doi: 10.18409/jas.v9i1.90. https://doi.org/10.18409/jas.v9i1.90 Federico Castillo, Fu Liu, Benjamin Nill, Andreas Paffenholz. Smooth polytopes with negative Ehrhart coefficients. J. Comb. Theory, Ser. A (316–331). Elsevier (Academic Press), San Diego, CA, 2018. David Avis, Charles Jordan. mplrs: a scalable parallel vertex/facet enumeration code. Math. Program. Comput (267–302). Springer, Berlin/Heidelberg, 2018. Bea Schumann, Jacinta Torres. A non-levi branching rule in terms of Littelmann paths. Proc. Lond. Math. Soc. (3) (1077–1100). 2018. doi: 10.1112/plms.12175. https://doi.org/10.1112/plms.12175 Johannes Hofscheier, Lukas Katthän, Benjamin Nill. Ehrhart theory of spanning lattice polytopes. Int. Math. Res. Not. IMRN (5947–5973). 2018. doi: 10.1093/imrn/rnx065. https://doi.org/10.1093/imrn/rnx065 Karim A. Adiprasito, Bruno Benedetti, Frank H. Lutz. Extremal examples of collapsible complexes and random discrete Morse theory. Discrete Comput. Geom (824–853). Springer US, New York, NY, 2017. P. E. Haxell, A. D. Scott. On lower bounds for the matching number of subcubic graphs. J. Graph Theory (336–348). Wiley, Hoboken, NJ, 2017. Michael Joswig, Benjamin Schröter. Matroids from hypersimplex splits. J. Comb. Theory, Ser. A (254–284). Elsevier (Academic Press), San Diego, CA, 2017. Ulf-Rainer Fiebig, Kirstin Strokorb, Martin Schlather. The realization problem for tail correlation functions. Extremes (121–168). Springer US, New York, NY, 2017. Asghar Moeini. Identification of unidentified equality constraints for integer programming problems. Eur. J. Oper. Res (460–467). Elsevier (North-Holland), Amsterdam, 2017. Sarah B. Brodsky, Cesar Ceballos, Jean-Philippe Labbé. Cluster algebras of type \(D_4\), tropical planes, and the positive tropical Grassmannian. Beitr. Algebra Geom (25–46). Springer, Berlin/Heidelberg, 2017. Simon Keicher, Thomas Kremer. 
A test for monomial containment. J. Symb. Comput (74–90). Elsevier (Academic Press), London, 2017. Xavier Caruso. Estimation des dimensions de certaines variétés de Kisin. J. Reine Angew. Math (1–77). De Gruyter, Berlin, 2017. Simon Hampe. The intersection ring of matroids. J. Comb. Theory, Ser. B (578–614). Elsevier (Academic Press), San Diego, CA, 2017. József Balogh, Ping Hu, Bernard Lidický, Florian Pfender, Jan Volec, Michael Young. Rainbow triangles in three-colored graphs. J. Comb. Theory, Ser. B (83–113). Elsevier (Academic Press), San Diego, CA, 2017. Gregory S. Warrington. Orthogonal bases for transportation polytopes applied to Latin squares, magic squares and sudoku boards. Linear Algebra Appl (285–304). Elsevier (North-Holland), New York, NY, 2017. Teodor Backhaus, Xin Fang, Ghislain Fourier. Degree cones and monomial bases of Lie algebras and quantum groups. Glasg. Math. J (595–621). Cambridge University Press, Cambridge, 2017. Evgeny Feigin, Ghislain Fourier, Peter Littelmann. Favourable modules: filtrations, polytopes, Newton-Okounkov bodies and flat degenerations. Transform. Groups (321–352). Springer (Birkhäuser), Boston, MA, 2017. Jules Depersin, Stéphane Gaubert, Michael Joswig. A tropical isoperimetric inequality. Sémin. Lothar. Comb (78b.27, 12). Universität Wien, Fakultät für Mathematik, Wien, 2017. Bo Lin, Bernd Sturmfels, Xiaoxian Tang, Ruriko Yoshida. Convexity in tree spaces. SIAM J. Discrete Math (2015–2038). Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 2017. Andreas Gathmann, Dennis Ochse. Moduli spaces of curves in tropical varieties (253–286). Cham: Springer, 2017. Raul Epure, Yue Ren, Hans Schönemann. The polymake interface in singular and its applications (109–117). Springer, Cham, 2017. Andreas Gathmann, Hannah Markwig, Dennis Ochse. Tropical moduli spaces of stable maps to a curve (287–309). Cham: Springer, 2017. Winfried Bruns, Richard Sieg, Christof Söger. Normaliz 2013–2016 (123–146). Cham: Springer, 2017. Johannes Rauh. The polytope of $k$-star densities. Electron. J. Combin (Paper 1.4, 21). 2017. Michał Lasoń, Mateusz Michałek. Non-normal very ample polytopes—constructions and examples. Exp. Math (130–137). 2017. doi: 10.1080/10586458.2015.1128370. https://doi.org/10.1080/10586458.2015.1128370 Sonia Balagopalan. On a vertex-minimal triangulation of $\mathbb R\rm P^4$. Electron. J. Combin (Paper 1.52, 23). 2017. Emmanuel Tsukerman, Ellen Veomett. A general method to determine limiting optimal shapes for edge-isoperimetric inequalities. Electron. J. Combin (Paper 1.26, 22). 2017. Andreas Paffenholz. polyDB: a database for polytopes and related objects (533–547). Cham: Springer, 2017. Hayato Waki, Florin Nae. Boundary modeling in model-based calibration for automotive engines via the vertex representation of the convex hulls. Pac. J. Math. Ind (1–7). Kyushu University, Faculty of Mathematics, Institute of Mathematics for Industry, Fukuoka; Springer (SpringerOpen), Berlin/Heidelberg, 2017. Thorsten Theobald. Some recent developments in spectrahedral computation (717–739). Cham: Springer, 2017. Lars Kastner. Toric Ext and Tor in polymake and Singular: the two-dimensional case and beyond (423–441). Cham: Springer, 2017. Stefan Forcey, Logan Keefe, William Sands. Facets of the balanced minimal evolution polytope. J. Math. Biol (447–468). Springer, Berlin/Heidelberg, 2016. Benjamin Assarf, Benjamin Nill. A bound for the splitting of smooth Fano polytopes with many vertices. J. Algebr. Comb (153–172). 
Springer US, New York, NY, 2016. Ghislain Fourier. PBW-degenerated Demazure modules and Schubert varieties for triangular elements. J. Comb. Theory, Ser. A (132–152). Elsevier (Academic Press), San Diego, CA, 2016. Jason Cantarella, Clayton Shonkwiler. The symplectic geometry of closed equilateral random walks in 3-space. Ann. Appl. Probab (549–596). Institute of Mathematical Statistics (IMS), Beachwood, OH/Bethesda, MD, 2016. Vissarion Fisikopoulos, Luis Peñaranda. Faster geometric algorithms via dynamic determinant computation. Comput. Geom (1–16). Elsevier (North-Holland), Amsterdam, 2016. Jeff Sommars, Jan Verschelde. Pruning algorithms for pretropisms of Newton polytopes (489–503). Cham: Springer, 2016. Mesut Şahin, Ivan Soprunov. Multigraded Hilbert functions and toric complete intersection codes. J. Algebra (446–467). Elsevier (Academic Press), San Diego, CA, 2016. Mateusz Michałek, Rosa M. Miró-Roig. Smooth monomial Togliatti systems of cubics. J. Comb. Theory, Ser. A (66–87). Elsevier (Academic Press), San Diego, CA, 2016. Xin Fang, Ghislain Fourier. Marked chain-order polytopes. Eur. J. Comb (267–282). Elsevier (Academic Press), London, 2016. Jean-Paul Doignon, Samuel Fiorini, Selim Rexhep. The linear extension polytope of a poset (81–84). Amsterdam: Elsevier, 2016. Liam Solus, Caroline Uhler, Ruriko Yoshida. Extremal positive semidefinite matrices whose sparsity pattern is given by graphs without \(K_{5}\) minors. Linear Algebra Appl (247–275). Elsevier (North-Holland), New York, NY, 2016. Hans Schönemann. Extending singular with new types and algorithms (110–113). Cham: Springer, 2016. Ewgenij Gawrilow, Simon Hampe, Michael Joswig. The polymake XML file format (403–410). Cham: Springer, 2016. Hans-Gert Gräbe. Semantic-aware fingerprints of symbolic research data (411–418). Cham: Springer, 2016. Michael Joswig. Book review of: D. Maclagan and B. Sturmfels, Introduction to tropical geometry. Jahresber. Dtsch. Math.-Ver (233–237). Springer, Berlin/Heidelberg, 2016. Janko Böhm, Wolfram Decker, Simon Keicher, Yue Ren. Current challenges in developing open source computer algebra systems (3–24). Springer, [Cham], 2016. doi: 10.1007/978-3-319-32859-1_1. https://doi.org/10.1007/978-3-319-32859-1_1 Malkhaz Bakuradze, Alexander Gamkrelidze, Joseph Gubeladze. Affine hom-complexes. Port. Math (183–205). 2016. doi: 10.4171/PM/1984. https://doi.org/10.4171/PM/1984 Michael Joswig, Georg Loho, Benjamin Lorenz, Benjamin Schröter. Linear programs and convex hulls over fields of Puiseux fractions (429–445). Springer, [Cham], 2016. doi: 10.1007/978-3-319-32859-1_37. https://doi.org/10.1007/978-3-319-32859-1_37 Ghislain Fourier. Marked poset polytopes: Minkowski sums, indecomposables, and unimodular equivalence. J. Pure Appl. Algebra (606–620). Elsevier (North-Holland), Amsterdam, 2016. Erik Friese, Frieder Ladisch. Affine symmetries of orbit polytopes. Adv. Math (386–425). Elsevier (Academic Press), San Diego, CA, 2016. Michel Deza, Mathieu Dutour Sikirić. Enumeration of the facets of cut polytopes over some highly symmetric graphs. Int. Trans. Oper. Res (853–860). Wiley, Oxford; International Federation of Operational Research Societies (IFORS), 2016. Christopher Hojny, Marc E. Pfetsch. A polyhedral investigation of star colorings. Discrete Appl. Math (59–78). Elsevier (North-Holland), Amsterdam, 2016. Samuel Fiorini, Vissarion Fisikopoulos, Marco Macchia. Two-level polytopes with a prescribed facet (285–296). Cham: Springer, 2016. Winfried Bruns, Bogdan Ichim, Christof Söger. 
The power of pyramid decomposition in Normaliz. J. Symb. Comput (513–536). Elsevier (Academic Press), London, 2016. Gennadiy Averkov, Barbara Langfeld. Homometry and direct-sum decompositions of lattice-convex sets. Discrete Comput. Geom (216–249). Springer US, New York, NY, 2016. Simon Hampe. Combinatorics of tropical Hurwitz cycles. J. Algebr. Comb (1027–1058). Springer US, New York, NY, 2015. Guido F. Montúfar, Jason Morton. When does a mixture of products contain a product of mixtures?. SIAM J. Discrete Math (321–347). Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 2015. Joseph Gubeladze, Jack Love. Vertex maps between \(\triangle \), \(\square\), and \(\diamond\). Geom. Dedicata (375–399). Springer Netherlands, Dordrecht, 2015. Alexandru Constantinescu, Matey Mateev. Determinantal schemes and pure O-sequences. J. Pure Appl. Algebra (3873–3888). Elsevier (North-Holland), Amsterdam, 2015. Andries E. Brouwer, Jan Draisma, Bart J. Frenk. Lossy gossip and composition of metrics. Discrete Comput. Geom (890–913). Springer US, New York, NY, 2015. M. Hellus, R. Waldi. On the number of numerical semigroups containing two coprime integers \(p\) and \(q\). Semigroup Forum (833–842). Springer US, New York, NY, 2015. Benjamin Lorenz, Benjamin Nill. On smooth Gorenstein polytopes. Tohoku Math. J. (2) (513–530). Tohoku University, Mathematical Institute, Sendai, 2015. Stefan Lörwald, Gerhard Reinelt. PANDA: a software for polyhedral transformations. EURO J. Comput. Optim (297–308). Springer, Berlin/Heidelberg; EURO - Association of European Operational Research Societies, 2015. Satya Swarup Samal, Dima Grigoriev, Holger Fröhlich, Andreas Weber, Ovidiu Radulescu. A geometric method for model reduction of biochemical networks with polynomial rate functions. Bull. Math. Biol (2180–2211). Springer US, New York, NY, 2015. Giovanni Gaiffi. Permutonestohedra. J. Algebr. Comb (125–155). Springer US, New York, NY, 2015. Sarah Brodsky, Michael Joswig, Ralph Morrison, Bernd Sturmfels. Moduli of tropical plane curves. Res. Math. Sci (31). Springer International Publishing (SpringerOpen), Cham, 2015. Hassan Errami, Markus Eiswirth, Dima Grigoriev, Werner M. Seiler, Thomas Sturm, Andreas Weber. Detection of Hopf bifurcations in chemical reaction networks using convex coordinates. J. Comput. Phys (279–302). Elsevier (Academic Press), Amsterdam, 2015. Brian Green, Nicholas A. Scoville, Mimi Tsuruga. Estimating the discrete geometric Lusternik-{S}chnirelmann category. Topol. Methods Nonlinear Anal (103–116). 2015. doi: 10.12775/TMNA.2015.006. https://doi.org/10.12775/TMNA.2015.006 Andreas Paffenholz. Faces of Birkhoff polytopes. Electron. J. Combin (Paper 1.67, 36). 2015. Oliver Braun, Renaud Coulangeon, Gabriele Nebe, Sebastian Schönnenbeck. Computing in arithmetic groups with Voronoı̈'s algorithm. J. Algebra (263–285). Elsevier (Academic Press), San Diego, CA, 2015. Tristram Bogart, Christian Haase, Milena Hering, Benjamin Lorenz, Benjamin Nill, Andreas Paffenholz, Günter Rote, Francisco Santos, Hal Schenck. Finitely many smooth \(d\)-polytopes with \(n\) lattice points. Isr. J. Math (301–329). Springer, Berlin/Heidelberg; Hebrew University Magnes Press, Jerusalem, 2015. Karim Adiprasito, Bruno Benedetti. Tight complexes in 3-space admit perfect discrete Morse functions. Eur. J. Comb (71–84). Elsevier (Academic Press), London, 2015. Qingchun Ren, Jürgen Richter-Gebert, Bernd Sturmfels. Cayley-{B}acharach formulas. Amer. Math. Monthly (845–854). 2015. 
doi: 10.4169/amer.math.monthly.122.9.845. https://doi.org/10.4169/amer.math.monthly.122.9.845 Adam Bohn, Yuri Faenza, Samuel Fiorini, Vissarion Fisikopoulos, Marco Macchia, Kanstantsin Pashkovich. Enumeration of 2-level polytopes (191–202). Berlin: Springer, 2015. Mateusz Michałek. Toric varieties in phylogenetics. Dissertationes Math (86). 2015. Barbara Baumeister, Christian Haase, Benjamin Nill, Andreas Paffenholz. Polytopes associated to dihedral groups. Ars Math. Contemp (30–38). 2014. doi: 10.26493/1855-3974.289.91d. https://doi.org/10.26493/1855-3974.289.91d Valery Alexeev, Angela Gibney, David Swinarski. Higher-level $\mathfrak{sl}_2$ conformal blocks divisors on $\overline M_{0,n}$. Proc. Edinb. Math. Soc. (2) (7–30). 2014. doi: 10.1017/S0013091513000941. https://doi.org/10.1017/S0013091513000941 Matthias Henze. On counterexamples to a conjecture of Wills and Ehrhart polynomials whose roots have equal real parts. Electron. J. Combin (Paper 1.28, 12). 2014. Michał Adamaszek. Small flag complexes with torsion. Canad. Math. Bull (225–230). 2014. doi: 10.4153/CMB-2013-032-9. https://doi.org/10.4153/CMB-2013-032-9 Derek Krepski. Prequantization of the moduli space of flat $\mathrm{PU}(p)$-bundles with prescribed boundary holonomies. SIGMA Symmetry Integrability Geom. Methods Appl (Paper 109, 13). 2014. doi: 10.3842/SIGMA.2014.109. https://doi.org/10.3842/SIGMA.2014.109 V. A. Bondarenko, A. V. Nikolaev, M. E. Symanovich, R. O. Shemyakin. On a recognition problem on cut polytope relaxations. Autom. Remote Control (1626–1636). Springer US, New York, NY; Pleiades Publishing, New York, NY; MAIK ``Nauka/Interperiodica'', Moscow, 2014. Sven Herrmann, Michael Joswig, David E Speyer. Dressians, tropical Grassmannians, and their rays. Forum Math (1853–1881). 2014. doi: 10.1515/forum-2012-0030. https://doi.org/10.1515/forum-2012-0030 Ulrich Derenthal, Andreas-Stephan Elsenhans, Jörg Jahnel. On the factor alpha in Peyre's constant. Math. Comput (965–977). American Mathematical Society (AMS), Providence, RI, 2014. Simon Hampe. a-tint: a polymake extension for algorithmic tropical intersection theory. Eur. J. Comb (579–607). Elsevier (Academic Press), London, 2014. L. Fehér, T. J. Kluck. New compact forms of the trigonometric Ruijsenaars-Schneider system. Nucl. Phys., B (97–127). Elsevier (North-Holland), Amsterdam, 2014. David Haws, Abraham Martı́n del Campo, Akimichi Takemura, Ruriko Yoshida. Markov degree of the three-state toric homogeneous Markov chain model. Beitr. Algebra Geom (161–188). Springer, Berlin/Heidelberg, 2014. Martin Čadek, Marek Krčál, Jiřı́ Matoušek, Francis Sergeraert, Lukáš Vokřı́nek, Uli Wagner. Computing all maps into a sphere. J. ACM (44). Association for Computing Machinery (ACM), New York, NY, 2014. Benjamin Assarf, Michael Joswig, Andreas Paffenholz. Smooth Fano polytopes with many vertices. Discrete Comput. Geom (153–194). Springer US, New York, NY, 2014. Qingchun Ren, Steven V. Sam, Bernd Sturmfels. Tropicalization of classical moduli spaces. Math. Comput. Sci (119–145). Springer (Birkhäuser), Basel, 2014. Katrin Herr, Thomas Rehn, Achill Schürmann. On lattice-free orbit polytopes. Discrete Comput. Geom (144–172). Springer US, New York, NY, 2014. Zoran Petrić. On stretching the interval simplex-permutohedron. J. Algebr. Comb (99–125). Springer US, New York, NY, 2014. Ragnar Freij, Matthias Henze, Moritz W. Schmitt, Günter M Ziegler. Face numbers of centrally symmetric polytopes produced from split graphs. Electron. J. Combin (Paper 32, 15). 2013. J. A. De Loera, B. 
Dutra, M. Köppe, S. Moreinis, G. Pinto, J Wu. Software for exact integration of polynomials over polyhedra. Comput. Geom (232–252). 2013. doi: 10.1016/j.comgeo.2012.09.001. https://doi.org/10.1016/j.comgeo.2012.09.001 Ruth Davidson, Seth Sullivant. Polyhedral combinatorics of UPGMA cones. Adv. in Appl. Math (327–338). 2013. doi: 10.1016/j.aam.2012.10.002. https://doi.org/10.1016/j.aam.2012.10.002 Nathan Owen Ilten, Lars Kastner. Calculating generators of multigraded algebras. J. Symbolic Comput (22–33). 2013. doi: 10.1016/j.jsc.2012.03.005. https://doi.org/10.1016/j.jsc.2012.03.005 Xavier Allamigeon, Ricardo D Katz. Minimal external representations of tropical polyhedra. J. Combin. Theory Ser. A (907–940). 2013. doi: 10.1016/j.jcta.2013.01.011. https://doi.org/10.1016/j.jcta.2013.01.011 Arnau Padrol. Many neighborly polytopes and oriented matroids. Discrete Comput. Geom (865–902). 2013. doi: 10.1007/s00454-013-9544-7. https://doi.org/10.1007/s00454-013-9544-7 Milan Studený, David C Haws. On polyhedral approximations of polytopes for learning Bayesian networks. J. Algebr. Stat (59–92). 2013. doi: 10.18409/jas.v4i1.19. https://doi.org/10.18409/jas.v4i1.19 Roland R Ramsahai. Probabilistic causality and detecting collections of interdependence patterns. J. R. Stat. Soc. Ser. B. Stat. Methodol (705–723). 2013. doi: 10.1111/rssb.12006. https://doi.org/10.1111/rssb.12006 Matthias Beck, Jessica De Silva, Gabriel Dorfsman-Hopkins, Joseph Pruitt, Amanda Ruiz. The combinatorics of interval-vector polytopes. Electron. J. Combin (Paper 22, 12). 2013. Alessandro Rinaldo, Sonja Petrović, Stephen E Fienberg. Maximum likelihood estimation in the $\beta$-model. Ann. Statist (1085–1110). 2013. doi: 10.1214/12-AOS1078. https://doi.org/10.1214/12-AOS1078 Alexander Engström, Patrik Norén. Ideals of graph homomorphisms. Ann. Comb (71–103). Springer (Birkhäuser), Basel, 2013. David Eppstein, Maarten Löffler. Bounds on the complexity of halfspace intersections when the bounded faces have small dimension. Discrete Comput. Geom (1–21). Springer US, New York, NY, 2013. Sandra Di Rocco, Christian Haase, Benjamin Nill, Andreas Paffenholz. Polyhedral adjunction theory. Algebra Number Theory (2417–2446). 2013. doi: 10.2140/ant.2013.7.2417. https://doi.org/10.2140/ant.2013.7.2417 Guido F. Montúfar. Mixture decompositions of exponential families using a decomposition of their sample spaces. Kybernetika (23–39). Academy of Sciences of the Czech Republic, Institute of Information Theory and Automation, Prague, 2013. Richard Bödi, Katrin Herr, Michael Joswig. Algorithms for highly symmetric linear and integer programs. Math. Program (65–90). Springer, Berlin/Heidelberg, 2013. Xavier Allamigeon, Stéphane Gaubert, Éric Goubault. Computing the vertices of tropical polyhedra using directed hypergraphs. Discrete Comput. Geom (247–279). Springer US, New York, NY, 2013. Tristram Bogart, Mark Contois, Joseph Gubeladze. Hom-polytopes. Math. Z (1267–1296). Springer, Berlin/Heidelberg, 2013. Sven Herrmann, Vincent Moulton, Andreas Spillner. Searching for realizations of finite metric spaces in tight spans. Discrete Optim (310–319). Elsevier, Amsterdam, 2013. Achill Schürmann. Exploiting polyhedral symmetries in social choice. Soc. Choice Welfare (1097–1110). Springer, Berlin/Heidelberg, 2013. Matjaž Konvalinka, Igor Pak. Triangulations of Cayley and Tutte polytopes. Adv. Math (1–33). Elsevier (Academic Press), San Diego, CA, 2013. Carsten E. M. C. Lange. Minkowski decomposition of associahedra and related combinatorics. Discrete Comput. 
Geom (903–939). Springer US, New York, NY, 2013. Michał Adamaszek. Special cycles in independence complexes and superfrustration in some lattices. Topology Appl (943–950). Elsevier (North-Holland), Amsterdam, 2013. Alessandro Rinaldo, Sonja Petrović, Stephen E. Fienberg. Maximum lilkelihood estimation in the \(\beta\)-model. Ann. Stat (1085–1110). Institute of Mathematical Statistics (IMS), Beachwood, OH/Bethesda, MD, 2013. Geir Agnarsson. The flag polynomial of the Minkowski sum of simplices. Ann. Comb (401–426). Springer (Birkhäuser), Basel, 2013. Olivia Beckwith, Matthew Grimm, Jenya Soprunova, Bradley Weaver. Minkowski length of 3D lattice polytopes. Discrete Comput. Geom (1137–1158). Springer US, New York, NY, 2012. Roland R Ramsahai. Causal bounds and observable constraints for non-deterministic models. J. Mach. Learn. Res (829–848). 2012. Kaie Kubjas. Hilbert polynomial of the Kimura 3-parameter model. J. Algebr. Stat (64–69). 2012. doi: 10.18409/jas.v3i1.16. https://doi.org/10.18409/jas.v3i1.16 Michael Joswig, Thilo Rörig. Polytope mit vielen Splits und ihre Sekundärfächer. Math. Semesterber (145–152). Springer, Berlin/Heidelberg, 2012. Francisco Santos. A counterexample to the Hirsch conjecture. Ann. Math. (2) (383–412). Princeton University, Mathematics Department, Princeton, NJ, 2012. Stephen E. Fienberg, Alessandro Rinaldo. Maximum likelihood estimation in log-linear models. Ann. Stat (996–1023). Institute of Mathematical Statistics (IMS), Beachwood, OH/Bethesda, MD, 2012. Michał Adamaszek. Splittings of independence complexes and the powers of cycles. J. Comb. Theory, Ser. A (1031–1047). Elsevier (Academic Press), San Diego, CA, 2012. Edwin O'Shea, András Sebö. Alternatives for testing total dual integrality. Math. Program (57–78). Springer, Berlin/Heidelberg, 2012. Felipe Rincón. Isotropical linear spaces and valuated Delta-matroids. J. Comb. Theory, Ser. A (14–32). Elsevier (Academic Press), San Diego, CA, 2012. Sven Herrmann. On the facets of the secondary polytope. J. Comb. Theory, Ser. A (425–447). Elsevier (Academic Press), San Diego, CA, 2011. Jonathan Spreer, Wolfgang Kühnel. Combinatorial properties of the $K3$ surface: simplicial blowups and slicings. Exp. Math (201–216). 2011. doi: 10.1080/10586458.2011.564546. https://doi.org/10.1080/10586458.2011.564546 Ian Morrison, David Swinarski. Gröbner techniques for low-degree Hilbert stability. Exp. Math (34–56). 2011. doi: 10.1080/10586458.2011.544577. https://doi.org/10.1080/10586458.2011.544577 Benjamin Nill, Günter M Ziegler. Projecting lattice polytopes without interior lattice points. Math. Oper. Res (462–467). 2011. doi: 10.1287/moor.1110.0503. https://doi.org/10.1287/moor.1110.0503 Piotr Zwiernik. An asymptotic behaviour of the marginal likelihood for general Markov models. J. Mach. Learn. Res (3283–3310). 2011. Benjamin Nill, Andreas Paffenholz. Examples of Kähler-Einstein toric Fano manifolds associated to non-symmetric reflexive polytopes. Beitr. Algebra Geom (297–304). Springer, Berlin/Heidelberg, 2011. Stéphane Gaubert, Ricardo D. Katz. Minimal half-spaces and external representation of tropical polyhedra. J. Algebr. Comb (325–348). Springer US, New York, NY, 2011. Anna-Sapfo Malaspinas, Nicholas Eriksson, Peter Huggins. Parametric analysis of alignment and phylogenetic uncertainty. Bull. Math. Biol (795–810). Springer US, New York, NY, 2011. Marı́a Angélica Cueto, Frederick A. Matsen. Polyhedral geometry of phylogenetic rogue taxa. Bull. Math. Biol (1202–1226). Springer US, New York, NY, 2011. 
Tetsushi Matsui, Akihiro Higashitani, Yuuki Nagazawa, Hidefumi Ohsugi, Takayuki Hibi. Roots of Ehrhart polynomials arising from graphs. J. Algebr. Comb (721–749). Springer US, New York, NY, 2011. Francesco Bianconi, Antonio Fernández. On the occurrence probability of local binary patterns: a theoretical study. J. Math. Imaging Vis (259–268). Springer US, New York, NY, 2011. Lars Schewe. Nonrealizable minimal vertex triangulations of surfaces: showing nonrealizability using oriented matroids and satisfiability solvers. Discrete Comput. Geom (289–302). Springer US, New York, NY, 2010. Marı́a Angélica Cueto, Enrique A. Tobis, Josephine Yu. An implicitization challenge for binary factor analysis. J. Symb. Comput (1296–1315). Elsevier (Academic Press), London, 2010. Michael Joswig, Katja Kulas. Tropical and ordinary convexity combined. Adv. Geom (333–352). 2010. doi: 10.1515/ADVGEOM.2010.012. https://doi.org/10.1515/ADVGEOM.2010.012 Edward D. Kim, Francisco Santos. An update on the Hirsch conjecture. Jahresber. Dtsch. Math.-Ver (73–98). 2010. doi: 10.1365/s13291-010-0001-8. https://doi.org/10.1365/s13291-010-0001-8 Rüdiger Stephan. Cardinality constrained combinatorial optimization: complexity and polyhedra. Discrete Optim (99–113). Elsevier, Amsterdam, 2010. Volker Kaibel, Rüdiger Stephan. On cardinality constrained cycle and path polytopes. Math. Program (371–394). Springer, Berlin/Heidelberg, 2010. Winfried Bruns, Bogdan Ichim. Normaliz: Algorithms for affine monoids and rational cones. J. Algebra (1098–1113). Elsevier (Academic Press), San Diego, CA, 2010. Mathias Drton, Caroline J. Klivans. A geometric interpretation of the characteristic polynomial of reflection arrangements. Proc. Am. Math. Soc (2873–2887). American Mathematical Society (AMS), Providence, RI, 2010. Felix Effenberger, Wolfgang Kühnel. Hamiltonian submanifolds of regular polytopes. Discrete Comput. Geom (242–262). Springer US, New York, NY, 2010. Anne Shiu, Bernd Sturmfels. Siphons in chemical reaction networks. Bull. Math. Biol (1448–1463). Springer US, New York, NY, 2010. Sven Herrmann, Anders Jensen, Michael Joswig, Bernd Sturmfels. How to draw tropical planes. Electron. J. Combin (Research Paper 6, 26). 2009. http://www.combinatorics.org/Volume_16/Abstracts/v16i2r6.html Dan Yasaki. Binary Hermitian forms over a cyclotomic field. J. Algebra (4132–4142). Elsevier (Academic Press), San Diego, CA, 2009. Geir Agnarsson, Walter D. Morris. On Minkowski sums of simplices. Ann. Comb (271–287). Springer (Birkhäuser), Basel, 2009. Anton Leykin, Frank Sottile. Galois groups of Schubert problems via homotopy computation. Math. Comput (1749–1765). American Mathematical Society (AMS), Providence, RI, 2009. T. Kahle, W. Wenzel, N. Ay. Hierarchical models, marginal polytopes, and linear codes. Kybernetika (189–207). Academy of Sciences of the Czech Republic, Institute of Information Theory and Automation, Prague, 2009. Alexander Postnikov, David Speyer, Lauren Williams. Matching polytopes, toric geometry, and the totally non-negative Grassmannian. J. Algebr. Comb (173–191). Springer US, New York, NY, 2009. Jesús A. De Loera, Edward D. Kim, Shmuel Onn, Francisco Santos. Graphs of transportation polytopes. J. Comb. Theory, Ser. A (1306–1325). Elsevier (Academic Press), San Diego, CA, 2009. Thilo Rörig, Nikolaus Witte, Günter M. Ziegler. Zonotopes with large 2D-cuts. Discrete Comput. Geom (527–541). Springer US, New York, NY, 2009. Matthias Beck, Christian Haase, Steven V Sam. Grid graphs, Gorenstein polytopes, and domino stackings. 
Graphs Comb (409–426). Springer Japan, Tokyo, 2009. Mathieu Dutour Sikirić, Graham Ellis. Wythoff polytopes and low-dimensional homology of Mathieu groups. J. Algebra (4143–4150). Elsevier (Academic Press), San Diego, CA, 2009. Rüdiger Stephan. Facets of the \((s,t)-p\)-path polytope. Discrete Appl. Math (3119–3132). Elsevier (North-Holland), Amsterdam, 2009. Christian Haase, Andreas Paffenholz. Quadratic Gröbner bases for smooth \times3$ transportation polytopes. J. Algebraic Combin (477–489). 2009. doi: 10.1007/s10801-009-0173-4. https://doi.org/10.1007/s10801-009-0173-4 Klaus Altmann, Benjamin Nill, Sabine Schwentner, Izolda Wiercinska. Flow polytopes and the graph of reflexive polytopes. Discrete Math (4992–4999). 2009. doi: 10.1016/j.disc.2009.03.001. https://doi.org/10.1016/j.disc.2009.03.001 Jason Morton, Lior Pachter, Anne Shiu, Bernd Sturmfels, Oliver Wienand. Convex rank tests and semigraphoids. SIAM J. Discrete Math (1117–1134). 2009. doi: 10.1137/080715822. https://doi.org/10.1137/080715822 J. Rusinko. Equivalence of mirror families constructed from toric degenerations of flag varieties. Transform. Groups (173–194). Springer (Birkhäuser), Boston, MA, 2008. Nico Düvelmeyer. General embedding problems and two-distance sets in Minkowski planes. Beiträge Algebra Geom (549–598). 2008. Sven Herrmann, Michael Joswig. Splitting polytopes. Münster J. Math (109–141). 2008. Graham Ellis. Homological algebra programming (63–74). Amer. Math. Soc., Providence, RI, 2008. doi: 10.1090/conm/470/09186. https://doi.org/10.1090/conm/470/09186 Stefan Forcey. Quotients of the multiplihedron as categorified associahedra. Homology Homotopy Appl (227–256). 2008. http://projecteuclid.org/euclid.hha/1251811075 Hadrien Mélot. Facet defining inequalities among graph invariants: The system graphedron. Discrete Appl. Math (1875–1891). Elsevier (North-Holland), Amsterdam, 2008. Michael Armbruster, Christoph Helmberg, Marzena Fügenschuh, Alexander Martin. On the graph bisection cut polytope. SIAM J. Discrete Math (1073–1098). 2008. doi: 10.1137/060675253. https://doi.org/10.1137/060675253 Fumei Lam, Alantha Newman. Traveling salesman path problems. Math. Program (39–59). Springer, Berlin/Heidelberg, 2008. Peter Huggins, Bernd Sturmfels, Josephine Yu, Debbie S. Yuster. The hyperdeterminant and triangulations of the 4-cube. Math. Comput (1653–1679). American Mathematical Society (AMS), Providence, RI, 2008. Satyan Devadoss, Stefan Forcey. Marked tubes and the graph multiplihedron. Algebr. Geom. Topol (2081–2108). Mathematical Sciences Publishers (MSP), Berkeley, CA; Geometry & Topology Publications c/o University of Warwick, Mathematics Institute, Coventry, 2008. Stefan Forcey. Convex hull realizations of the multiplihedra. Topology Appl (326–347). Elsevier (North-Holland), Amsterdam, 2008. Raymond Hemmecke, Jason Morton, Anne Shiu, Bernd Sturmfels, Oliver Wienand. Three counter-examples on semi-graphoids. Combin. Probab. Comput (239–257). 2008. doi: 10.1017/S0963548307008838. https://doi.org/10.1017/S0963548307008838 Sven Herrmann, Michael Joswig. Bounds on the $f$-vectors of tight spans. Contrib. Discrete Math (161–184). 2007. Stephen Fienberg. Expanding the statistical toolkit with algebraic statistics. Statist. Sinica (1261–1272). 2007. Peter Huggins, Lior Pachter, Bernd Sturmfels. Toward the human genotope. Bull. Math. Biol (2723–2735). 2007. doi: 10.1007/s11538-007-9244-7. https://doi.org/10.1007/s11538-007-9244-7 Julie Christophe, Jean-Paul Doignon. 
The polytope of \(m\)-subspaces of a finite affine space. RAIRO, Oper. Res (317–344). EDP Sciences, Les Ulis; Société de Mathématiques Appliquées et Industrielles (SMAI), Institut Henri Poincaré, Paris, 2007. J. Àlvarez Montaner, F. J. Castro-Jiménez, J. M. Ucha. Localization at hyperplane arrangements: combinatorics and \({\mathcal D}\)-modules. J. Algebra (662–679). Elsevier (Academic Press), San Diego, CA, 2007. Stephen E. Fienberg, Alessandro Rinaldo. Three centuries of categorical data analysis: Log-linear models and maximum likelihood estima\-tion. J. Stat. Plann. Inference (3430–3445). Elsevier (North-Holland), Amsterdam, 2007. Michael Joswig, Thilo Rörig. Neighborly cubical polytopes and spheres. Isr. J. Math (221–242). Springer, Berlin/Heidelberg; Hebrew University Magnes Press, Jerusalem, 2007. Michael Joswig, Nikolaus Witte. Products of foldable triangulations. Adv. Math (769–796). Elsevier (Academic Press), San Diego, CA, 2007. Bernd Sturmfels, Jenia Tevelev, Josephine Yu. The Newton polytope of the implicit equation. Mosc. Math. J (327–346, 351). 2007. doi: 10.17323/1609-4514-2007-7-2-327-346. https://doi.org/10.17323/1609-4514-2007-7-2-327-346 Ulrich Derenthal. On a constant arising in Manin's conjecture for del Pezzo surfaces. Math. Res. Lett (481–489). 2007. doi: 10.4310/MRL.2007.v14.n3.a12. https://doi.org/10.4310/MRL.2007.v14.n3.a12 James Cruickshank, Séamus Kelly. Rearrangement inequalities and the alternahedron. Discrete Comput. Geom (241–254). 2006. doi: 10.1007/s00454-005-1199-6. https://doi.org/10.1007/s00454-005-1199-6 Andreas Paffenholz. New polytopes from products. J. Comb. Theory, Ser. A (1396–1418). Elsevier (Academic Press), San Diego, CA, 2006. Sonoko Moriyama, Masahiro Hachimori. \(h\)-assignments of simplicial complexes and reverse search. Discrete Appl. Math (594–597). Elsevier (North-Holland), Amsterdam, 2006. Andreas Paffenholz, Axel Werner. Constructions for 4-polytopes and the cone of flag vectors (283–303). Amer. Math. Soc., Providence, RI, 2006. doi: 10.1090/conm/423/08083. https://doi.org/10.1090/conm/423/08083 Michael Joswig, Marc E Pfetsch. Computing optimal Morse matchings. SIAM J. Discrete Math (11–25). 2006. doi: 10.1137/S0895480104445885. https://doi.org/10.1137/S0895480104445885 Nicholas Eriksson, Stephen E. Fienberg, Alessandro Rinaldo, Seth Sullivant. Polyhedral conditions for the nonexistence of the MLE for hierarchical log-linear models. J. Symb. Comput (222–233). Elsevier (Academic Press), London, 2006. Eric H. Kuo. Viterbi sequences and polytopes. J. Symb. Comput (151–163). Elsevier (Academic Press), London, 2006. Seth Sullivant. Compressed polytopes and statistical disclosure limitation. Tohoku Math. J. (2) (433–445). Tohoku University, Mathematical Institute, Sendai, 2006. Robert M. Guralnick, David Perkinson. Permutation polytopes and indecomposable elements in permutation groups. J. Comb. Theory, Ser. A (1243–1256). Elsevier (Academic Press), San Diego, CA, 2006. Matthias Beck, Serkan Hoşten. Cyclotomic polytopes and growth series of cyclotomic lattices. Math. Res. Lett (607–622). 2006. doi: 10.4310/MRL.2006.v13.n4.a10. https://doi.org/10.4310/MRL.2006.v13.n4.a10 Steffen Schön, Hansjörg Kutterer. Using zonotopes for overestimation-free interval least-squares – some geodetic applications. Reliab. Comput (137–155). Springer, Dordrecht, 2005. M. Beck, J. A. De Loera, M. Develin, J. Pfeifle, R. P Stanley. Coefficients and roots of Ehrhart polynomials (15–36). Amer. Math. Soc., Providence, RI, 2005. doi: 10.1090/conm/374/06897. 
https://doi.org/10.1090/conm/374/06897 Julian Pfeifle, Günter M. Ziegler. On the monotone upper bound problem. Exp. Math (1–11). Taylor & Francis, Philadelphia, PA, 2004. Michael Joswig, Günter M. Ziegler. Convex hulls, oracles, and homology. J. Symb. Comput (1247–1259). Elsevier (Academic Press), London, 2004. Bernd Sturmfels, Josephine Yu. Classification of six-point metrics. Electron. J. Combin (Research Paper 44, 16). 2004. http://www.combinatorics.org/Volume_11/Abstracts/v11i1r44.html Alexander Schwartz, Günter M. Ziegler. Construction techniques for cubical complexes, odd cubical 4-polytopes, and prescribed dual manifolds. Exp. Math (385–413). Taylor & Francis, Philadelphia, PA, 2004. Martin Grötschel, Martin Henk. The representation of polyhedra by polynomial inequalities. Discrete Comput. Geom (485–504). 2003. doi: 10.1007/s00454-003-0782-y. https://doi.org/10.1007/s00454-003-0782-y Volker Kaibel, Alexander Schwartz. On the complexity of polytope isomorphism problems. Graphs Combin (215–230). 2003. M. M. Bayer, A. M. Bruening, J. D Stewart. A combinatorial study of multiplexes and ordinary polytopes. Discrete Comput. Geom (49–63). 2002. doi: 10.1007/s00454-001-0051-x. https://doi.org/10.1007/s00454-001-0051-x Michael Joswig. Projectivities in simplicial complexes and colorings of simple polytopes. Math. Z (243–259). 2002. doi: 10.1007/s002090100381. https://doi.org/10.1007/s002090100381 Christian Haase, Günter M Ziegler. Examples and counterexamples for the Perles conjecture. Discrete Comput. Geom (29–44). 2002. doi: 10.1007/s00454-001-0085-0. https://doi.org/10.1007/s00454-001-0085-0 Michael Joswig, Volker Kaibel, Marc E. Pfetsch, Günter M. Ziegler. Vertex-facet incidences of unbounded polyhedra. Adv. Geom (23–36). De Gruyter, Berlin, 2001. M. Joswig, G. M Ziegler. Neighborly cubical polytopes. Discrete Comput. Geom (325–344). 2000. doi: 10.1007/s004540010039. https://doi.org/10.1007/s004540010039 publications.txt by lkastner
CommonCrawl
Disk-mediated accretion burst in a high-mass young stellar object Jun 30, 2018 by A. Caratti o Garatti; B. Stecklum; R. Garcia Lopez; J. Eislöffel; T. P. Ray; A. Sanna; R. Cesaroni; C. M. Walmsley; R. D. Oudmaijer; W. J. de Wit; L. Moscadelli; J. Greiner; A. Krabbe; C. Fischer; R. Klein; J. M. Ibañez Solar-mass stars form via circumstellar disk accretion (disk-mediated accretion). Recent findings indicate that this process is likely episodic in the form of accretion bursts, possibly caused by disk fragmentation. Although it cannot be ruled out that high-mass young stellar objects (HMYSOs; $M>$8 M$_\odot$, $L_{bol}>$5$\times$10$^3$ L$_\odot$) arise from the coalescence of their low-mass brethren, latest results suggest that they more likely form via disks. Accordingly, disk-mediated... Topics: Astrophysics of Galaxies, Solar and Stellar Astrophysics, Astrophysics Source: http://arxiv.org/abs/1704.02628 Elusive Active Galactic Nuclei Sep 22, 2013 by R. Maiolino; A. Comastri; R. Gilli; N. M. Nagar; S. Bianchi; T. Boeker; E. Colbert; A. Krabbe; A. Marconi; G. Matt; M. Salvati A fraction of active galactic nuclei do not show the classical Seyfert-type signatures in their optical spectra, i.e. they are optically "elusive". X-ray observations are an optimal tool to identify this class of objects. We combine new Chandra observations with archival X-ray data in order to obtain a first estimate of the fraction of elusive AGN in local galaxies and to constrain their nature. Our results suggest that elusive AGN have a local density comparable to or even higher... Source: http://arxiv.org/abs/astro-ph/0307380v2 Integral Field Spectroscopy of a Candidate Disk Galaxy at z~1.5 using Laser Guide Star Adaptive Optics Sep 20, 2013 by S. A. Wright; J. E. Larkin; M. Barczys; D. K. Erb; C. Iserlohe; A. Krabbe; D. R. Law; M. W. McElwain; A. Quirrenbach; C. C. Steidel; J. Weiss We present 0.1" resolution near-infrared integral field spectroscopy of Halpha in a z=1.4781 star forming galaxy, Q2343-BM133. These observations were obtained with OSIRIS (OH Suppressing Infra-Red Imaging Spectrograph) using the W.M. Keck Observatory Laser Guide Star Adaptive Optics system. Halpha emission is resolved over a 0.8" (6.8 kpc) x 0.5" (4.3 kpc) region with a 0.1" spatial resolution. We find a global flux of 4.2+/-0.6x10^{-16} ergs s^{-1} cm^{-2}, and detect a... Early Science with SOFIA, the Stratospheric Observatory for Infrared Astronomy Sep 20, 2013 by E. T. Young; E. E. Becklin; P. M. Marcum; T. L. Roellig; J. M. De Buizer; T. L. Herter; R. Güsten; E. W. Dunham; P. Temi; B. -G. Andersson; D. Backman; M. Burgdorf; L. J. Caroff; S. C. Casey; J. A. Davidson; E. F. Erickson; R. D. Gehrz; D. A. Harper; P. M. Harvey; L. A. Helton; S. D. Horner; C. D. Howard; R. Klein; A. Krabbe; I. S. McLean; A. W. Meyer; J. W. Miles; M. R. Morris; W. T. Reach; J. Rho; M. J. Richter; H. -P. Roeser; G. Sandell; R. Sankrit; M. L. Savage; E. C. Smith; R. Y. Shuping; W. D. Vacca; J. E. Vaillancourt; J. Wolf; H. Zinnecker The Stratospheric Observatory for Infrared Astronomy (SOFIA) is an airborne observatory consisting of a specially modified Boeing 747SP with a 2.7-m telescope, flying at altitudes as high as 13.7 km (45,000 ft). Designed to observe at wavelengths from 0.3 micron to 1.6 mm, SOFIA operates above 99.8 % of the water vapor that obscures much of the infrared and submillimeter. SOFIA has seven science instruments under development, including an occultation photometer, near-, mid-, and far-infrared... 
Source: http://arxiv.org/abs/1205.0791v1 Seyfert Activity and Nuclear Star Formation in the Circinus Galaxy Sep 19, 2013 by R. Maiolino; A. Krabbe; N. Thatte; R. Genzel We present high angular resolution (0".15-0".5) near infrared images and spectroscopy of the Circinus galaxy, the closest Seyfert 2 galaxy known. The data reveal a non-stellar nuclear source at 2.2 microns. The coronal line region and the hot molecular gas emission extend for 20-50 pc in the ionization cone. The data do not show evidence for a point-like concentration of dark mass; we set an upper limit of 4*10^6 Mo to the mass of a putative black hole. We find evidence for a young... Stratospheric Observatory for Infrared Astronomy Sep 19, 2013 by M. Hamidouche; E. Young; P. Marcum; A. Krabbe We present one of the new generations of observatories, the Stratospheric Observatory For Infrared Astronomy (SOFIA). This is an airborne observatory consisting of a 2.7-m telescope mounted on a modified Boeing B747-SP airplane. Flying at an up to 45,000 ft (14 km) altitude, SOFIA will observe above more than 99 percent of the Earth's atmospheric water vapor allowing observations in the normally obscured far-infrared. We outline the observatory capabilities and goals. The first-generation... Mid-infrared [NeII] line emission from the nucleus of NGC 253 Sep 18, 2013 by T. Boeker; A. Krabbe; J. W. V. Storey We report on mid-infrared (MIR) continuum and line emission mapping of the nucleus of NGC 253. The data, with a resolution of 1\as.4, reveal a double-peaked arc-like [NeII] emission region. Comparison with published data shows that the [NeII] arc is centered on the nucleus of the galaxy. The brightest [NeII] source coincides with the infrared continuum peak. The interpretation of these results is complicated by the edge-on orientation of NGC 253, but a self-consistent explanation is... Diffraction Limited Imaging Spectroscopy of the SgrA* Region using OSIRIS, a new Keck Instrument Sep 18, 2013 by A. Krabbe; C. Iserlohe; J. E. Larkin; M. Barczys; M. McElwain; J. Weiss; S. A. Wright; A. Quirrenbach We present diffraction limited spectroscopic observations of an infrared flare associated with the radio source SgrA*. These are the first results obtained with OSIRIS, the new facility infrared imaging spectrograph for the Keck Observatory operated with the laser guide star adaptive optics system. After subtracting the spectrum of precursor emission at the location of Sgr A*, we find the flare has a spectral index of -2.6 +- 0.9. If we do not subtract the precursor light, then our spectral... VLT-detection of two edge-on Circumstellar Disks in the rho Oph dark cloud Jul 20, 2013 by W. Brandner; S. Sheppard; H. Zinnecker; L. Close; F. Iwamuro; A. Krabbe; T. Maihara; K. Motohara; D. L. Padgett; A. Tokunaga Observations of the rho Ophiuchi star forming region with VLT ANTU and ISAAC under 0.35" seeing conditions reveal two bipolar reflection nebulosities intersected by central dust lanes. The sources (OphE-MM3 and CRBR 2422.8-3423) can be identified as spatially resolved circumstellar disks viewed close to edge-on, similar to edge-on disk sources discovered previously in the Taurus and Orion star forming regions. Millimeter continuum fluxes yield disk masses of the order of 0.01 Mo, i.e....
CommonCrawl
Entropy: definition, properties, entropy change, and entropy in thermodynamics.
Entropy is the quantitative measure of how widely the energy of a system is dispersed, or equivalently of the disorder or randomness of the system; it is what drives spontaneous processes unless they are actively stopped. Entropy is a state function: its value depends only on the state of the system, not on the path by which that state was reached. Molar entropy is quoted in joules per kelvin per mole (J K⁻¹ mol⁻¹).
For a process carried out at constant temperature, the entropy change of the system is
$$\Delta S_{\text{system}}=\frac{q_{\text{rev}}}{T},$$
where $q_{\text{rev}}$ is the heat transferred along a reversible path and $T$ is the absolute (kelvin) temperature. The total entropy change is the sum of the changes in the system and in the surroundings,
$$\Delta S_{\text{total}}=\Delta S_{\text{system}}+\Delta S_{\text{surroundings}},$$
and a process is spontaneous when $\Delta S_{\text{total}}>0$.
The balance between the entropy change and the enthalpy change determines the feasibility of a reaction, and is summarised by the equation $\Delta G=\Delta H-T\Delta S_{\text{system}}$ (the form required, for example, by OCR Chemistry A Module 5: Physical chemistry and transition elements). Two exercises recur below: determining $\Delta S$ for the synthesis of ammonia at 25 °C, and the entropy change quoted as $\Delta S = 353.8-596 = -242.2\ \mathrm{J\,K^{-1}\,mol^{-1}}$, which corresponds to the complete combustion of methane treated later on this page.
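Those combustion figures follow from standard molar entropies. The values below are typical data-book numbers (CO₂ 214.0, H₂O(l) 69.9, CH₄ 186.0 and O₂ 205.0 J K⁻¹ mol⁻¹, rounded as usually quoted), so treat the arithmetic as an illustrative sketch rather than a definitive calculation:
\begin{align*}
\Delta S^{\circ} &= \sum S^{\circ}(\text{products})-\sum S^{\circ}(\text{reactants})\\
&= \big[S^{\circ}(\mathrm{CO_2})+2S^{\circ}(\mathrm{H_2O,l})\big]-\big[S^{\circ}(\mathrm{CH_4})+2S^{\circ}(\mathrm{O_2})\big]\\
&\approx \big[214.0+2(69.9)\big]-\big[186.0+2(205.0)\big]\\
&= 353.8-596.0 = -242.2\ \mathrm{J\,K^{-1}\,mol^{-1}}.
\end{align*}
The sign is negative because three moles of gas are replaced by one mole of gas and two moles of liquid.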
On the basis of such an equation it is possible to predict, at least qualitatively, the sign of the entropy change and hence say something about the spontaneity of a reaction; the molecular state of the reactants and products provides the clues. Everyday random processes give the same intuition: the spacing between trees in a wood is a random natural process, and likewise leaves fall to the ground in a random arrangement rather than a neat pile.
Entropy can also be related to the states of matter. The solid state has the lowest entropy, the gaseous state the highest, and the liquid state lies in between:
$$S_{\text{solid}}<S_{\text{liquid}}<S_{\text{gas}}.$$
Particles in a solid are held rigidly in a structured arrangement; molecules in a liquid can move around one another and so have greater entropy than in the solid; gas particles have the most freedom of movement of all. For the same reason the entropy of a substance increases with temperature, because heating increases the randomness of the particles' motion, while cooling leaves the substance more ordered and lowers its entropy. Entropy also increases when a system is heated and when solutions form. The entropy of a substance is further influenced by the structure of the particles (atoms or molecules) that make it up; among atomic substances, for example, heavier atoms have greater entropy at a given temperature than lighter ones.
Entropy is given the symbol $S$; the standard entropy, measured at 298 K and a pressure of 1 bar, is given the symbol $S^{\circ}$ and has units of J K⁻¹ mol⁻¹ (older sources may quote the standard pressure as 1 atmosphere rather than 1 bar). Unlike standard enthalpies of formation, the value of $S^{\circ}$ is absolute: by the third law of thermodynamics the entropy of a pure, perfect crystalline substance is zero only at 0 K, so every substance has a positive standard entropy above that temperature, whereas both $\Delta H_f^{\circ}$ and $\Delta G_f^{\circ}$ are defined to be zero for an element in its standard state. Phase changes are the simplest entropy changes to analyse: when a solid melts there is an equilibrium between solid and liquid at the melting point, and a worked example of the entropy change involved follows below.
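Phase changes give the simplest numbers to put into $\Delta S=\Delta H/T$, because at the melting or boiling point the two phases are in equilibrium and the latent heat is transferred reversibly. As a minimal worked sketch, using the familiar data-book value $\Delta H_{\text{fus}}\approx 6.01\ \mathrm{kJ\,mol^{-1}}$ for ice at 273 K (quoted here purely for illustration):
$$\Delta S_{\text{fus}}=\frac{\Delta H_{\text{fus}}}{T_{m}}\approx\frac{6010\ \mathrm{J\,mol^{-1}}}{273\ \mathrm{K}}\approx 22\ \mathrm{J\,K^{-1}\,mol^{-1}}.$$
Boiling produces a much larger increase (roughly $109\ \mathrm{J\,K^{-1}\,mol^{-1}}$ for water at 373 K), consistent with the far greater disorder of the gas.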
In chemical reactions, changes in entropy arise from the rearrangement of atoms and molecules, which alters the initial order of the system. Where the entropy of the products is greater than that of the reactants, $\Delta S$ is positive and the reaction is accompanied by an increase in randomness; where it is smaller, $\Delta S$ is negative. When we say "system" in this context we really mean the species appearing in the balanced chemical equation.
If the reaction is known, $\Delta S_{\text{rxn}}$ can be calculated from a table of standard entropy values. For reactants and products in their standard states,
$$\Delta S^{\circ}=\sum nS^{\circ}(\text{products})-\sum mS^{\circ}(\text{reactants}),$$
where $n$ and $m$ are the coefficients in the balanced chemical equation; equivalently $\Delta S=S_{2}-S_{1}$ for the final and initial states.
The methane combustion quoted above,
$$\mathrm{CH_4(g)+2O_2(g)\rightarrow CO_2(g)+2H_2O(l)},\qquad \Delta H^{\circ}_{c}=-890\ \mathrm{kJ\,mol^{-1}},$$
has $\Delta S^{\circ}\approx-242\ \mathrm{J\,K^{-1}\,mol^{-1}}$: the entropy has decreased, as predicted from the loss of gas molecules. The synthesis of ammonia is the other standard exercise: for
$$\mathrm{N_2(g)+3H_2(g)\rightarrow 2NH_3(g)},\qquad \Delta H=-92.6\ \mathrm{kJ\,mol^{-1}},$$
the entropy change of the system is $\Delta S=2S^{\circ}(\mathrm{NH_3})-\big[S^{\circ}(\mathrm{N_2})+3S^{\circ}(\mathrm{H_2})\big]$, and notice that it comes out negative, since four moles of gas are converted into two.
The surroundings matter as well. At constant pressure the surroundings receive heat $-\Delta H$, so $\Delta S_{\text{surroundings}}=-\Delta H/T$. An endothermic reaction absorbs heat from the surroundings and therefore reduces their entropy ($\Delta S_{\text{surroundings}}<0$), while an exothermic reaction increases it.
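Putting in typical data-book standard entropies for the ammonia example ($S^{\circ}(\mathrm{N_2})\approx 191.6$, $S^{\circ}(\mathrm{H_2})\approx 130.7$ and $S^{\circ}(\mathrm{NH_3})\approx 192.5$ J K⁻¹ mol⁻¹; tables differ slightly in the last figure, so the result is illustrative):
\begin{align*}
\Delta S^{\circ}&=2(192.5)-\big[191.6+3(130.7)\big]=385.0-583.7\approx-199\ \mathrm{J\,K^{-1}\,mol^{-1}},\\
\Delta G^{\circ}(298\ \mathrm{K})&=\Delta H^{\circ}-T\Delta S^{\circ}\approx-92.6-298\times(-0.199)\approx-33\ \mathrm{kJ\,mol^{-1}}.
\end{align*}
At 25 °C the favourable enthalpy term therefore outweighs the unfavourable entropy term and the synthesis is feasible; this answers the exercise of determining $\Delta S$ for the synthesis of ammonia at 25 °C posed above.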
Many earlier textbooks took the approach of defining a change in entropy via the equation $\Delta S=Q_{\text{reversible}}/T$, where $Q$ is the quantity of heat and $T$ the thermodynamic temperature; generations of students struggled with Carnot's cycle and various expansions of ideal and real gases without really understanding why they were doing so. Where $T$ is not constant, the classical definition becomes
$$dS=\frac{\delta q_{\text{rev}}}{T},$$
which must be integrated over the path in order to obtain $\Delta S_{\text{sys}}$ for the process. The concept emerged from the mid-nineteenth-century discussion of the efficiency of heat engines, and the relation between free energy, enthalpy and entropy used above was first stated in the 1870s by Josiah Willard Gibbs.
Note that a formula such as $\Delta S_{\text{total}}=\Delta S_{\text{system}}-\Delta H/T$ assumes the heat is exchanged at constant pressure and temperature; it is a special case, not a general result. Because entropy is a state function, however, any convenient reversible path between the same initial and final states gives the same $\Delta S$: to find the entropy change of a gas taken to a new temperature and pressure, one may first heat it reversibly at constant pressure to the final temperature and then compress it reversibly at constant temperature to the final pressure, adding the entropy changes of the two steps. The entropy change of the surroundings is obtained in the same spirit, by treating the surroundings as a separate system exchanging heat reversibly. Likewise, when the only work done is a change of volume at constant pressure, the enthalpy change is exactly the heat transferred to the system, and in differential form the relations combine as $dU=T\,dS-p\,dV$.
A change of entropy can also be written $dS=d_{e}S+d_{i}S$, where $d_{e}S=q/T$ is the part due to exchange with the surroundings and $d_{i}S$ is the entropy produced within the system by irreversible changes such as chemical reactions; $d_{i}S>0$ for irreversible changes and $d_{i}S=0$ at equilibrium. This is one statement of the second law of thermodynamics: an isolated system moves spontaneously towards equilibrium, the state of maximum entropy. Typical syllabus statements collect the practical skills: 4.2.4 calculate the standard entropy change, $\Delta S^{\ominus}$, in a chemical reaction using standard entropy data; 4.2.5 use the equation $\Delta G=\Delta H-T\Delta S$ to calculate standard free energy changes; 4.2.6 recall that processes are feasible when the free energy change is negative; 4.2.7 recall that when the enthalpy change and the entropy change have the same sign, the feasibility of the process depends on the temperature.
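The last point can be made concrete with the ammonia numbers worked out above. Treating $\Delta H$ and $\Delta S$ as roughly temperature-independent (an approximation, used here only to sketch the idea), the free energy changes sign at
$$T_{\text{crossover}}\approx\frac{\Delta H}{\Delta S}\approx\frac{-92\,600\ \mathrm{J\,mol^{-1}}}{-199\ \mathrm{J\,K^{-1}\,mol^{-1}}}\approx 465\ \mathrm{K},$$
so with $\Delta H$ and $\Delta S$ both negative the forward reaction is feasible only below roughly 460-470 K; above that, the $-T\Delta S$ term dominates and $\Delta G$ becomes positive.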
Information & Entropy. In information theory, an event that happens with probability p carries log_b(1/p) units of information, where the base b fixes the unit: base 2 gives bits (the base mostly used in information theory), base 3 gives trits, base 10 gives Hartleys, and base e gives nats.

In statistical mechanics the corresponding formula is S = k_B ln Ω, where k_B is the Boltzmann constant and Ω denotes the volume of the macrostate in phase space, also called the thermodynamic probability; according to this equation, the entropy of a system increases as the number of equivalent ways of describing its state increases. Boltzmann's paradigm was an ideal gas of N identical particles, of which N_i are in the i-th microscopic condition (range) of position and momentum; in that case the probability of each microstate is equal, so calculating the entropy was equivalent to counting the associated microstates. The solid state has the lowest entropy, the gaseous state the highest, and the liquid state lies in between, so the molecular states of the reactants and products already provide clues about the sign of ΔS simply by inspection. The standard molar entropy is usually given the symbol S° and has units of joules per mole kelvin (J mol^-1 K^-1). A common introductory formula for the entropy change is ΔS = q_reversible/T, so we can define a state function S, called entropy, which satisfies dS = δq_rev/T.

To get the entropy change of the surroundings, you next need to separate the surroundings entirely from the original system and subject them to a new reversible process while in contact with their own, second set of surroundings; for the ammonia-synthesis example, ΔH = -92.6 kJ/mol. Typical syllabus points summarize how these quantities are used in practice: (4.2.4) calculate the standard entropy change, ΔS°, in a chemical reaction using standard entropy data; (4.2.5) use the equation ΔG = ΔH - TΔS to calculate standard free energy changes; (4.2.6) recall that processes are feasible when the free energy change is negative; and (4.2.7) recall that when the enthalpy change and the entropy change have the same sign, the feasibility of the process depends on the temperature.
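Here is a minimal sketch of the information equation above in base 2 (bits); the function names are illustrative only.

```python
import math

# Minimal sketch of the information equation above: an event of probability p
# carries log_b(1/p) units of information (bits for b = 2, nats for b = e).

def information(p, base=2):
    return math.log(1.0 / p, base)

def shannon_entropy(probabilities, base=2):
    # Average information of a discrete distribution.
    return sum(p * information(p, base) for p in probabilities if p > 0)

print(information(0.5))              # 1.0 bit for one fair-coin outcome
print(shannon_entropy([0.5, 0.5]))   # 1.0 bit per fair-coin toss
print(shannon_entropy([0.9, 0.1]))   # ~0.47 bit for a heavily biased coin
```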
Entropy of a chemical system. Qualitatively, entropy is simply a measure of how much the energy of atoms and molecules becomes more spread out in a process, and it can be defined in terms of the statistical probabilities of a system or in terms of other thermodynamic quantities. The concept was first introduced in 1850 by Clausius as a precise mathematical way of testing whether the second law of thermodynamics is violated by a particular process, and it is expressed by the symbol 'S'. A large element of chance is inherent in natural processes: a fair coin toss, for example, has two equally likely outcomes, head (0.5) and tail (0.5), so either outcome carries one bit of information. Chemical reactions tend to proceed in such a way as to increase the total entropy, and scientists have concluded that for a process to be spontaneous the total entropy must increase; remember, the entropy of the universe is gradually increasing. An element in its standard state has a definite, nonzero value of S at room temperature, and since the change in entropy does not depend on path, it can be computed as a two-step process, or quite generally by evaluating the integral of δq_rev/T over any reversible path between the initial and final states.

As with enthalpy, the degree symbol (°) indicates standard-state conditions, and if the reaction is known, ΔS_rxn can be calculated using a table of standard entropies: entropy change = what you end up with - what you started with. Entropy increases whenever a substance becomes more dispersed, which includes going from solid to liquid, from liquid to gas, and from solid to aqueous solution; solid compounds have a regular arrangement of molecules compared with liquids and gases. For the surroundings, ΔS_surr = -ΔH/T; for example, for a process with ΔH = +44 kJ at 298 K, ΔS_surr = -(+44 kJ)/298 K = -0.15 kJ/K, or -150 J/K, so an endothermic process reduces the entropy of the surroundings, and the overall randomness may increase or decrease depending on what happens in the system. Gibbs free energy, expressed by G, combines these quantities and measures the capacity of a chemical reaction to do useful work; entropy itself is a state function that is often, somewhat erroneously, referred to as the 'state of disorder' of a system. So far we have been considering one system at a time; as a worked example, consider the Haber process for ammonia synthesis.
For example, the entropy of a solid, where the particles are not free to move, is less than the entropy of a gas, where the particles will fill the container. The key concepts of the second law can be illustrated with the Haber process, N2 + 3H2 -> 2NH3, in which an entropy decrease in the system is weighed against the heat released to the surroundings. Entropy of fusion is the change in entropy when 1 mole of a solid substance changes into liquid form at the melting temperature. Heat is equivalent to the enthalpy change only when the sole work done is pressure-volume work at constant pressure. The statistical entropy equation was originally formulated by Ludwig Boltzmann between 1872 and 1875, but was later put into its current form by Max Planck in about 1900. In the molar entropy equation ΔS = q_rev/T, q_rev is the heat transferred reversibly and T is the kelvin temperature. In another worked example based on tabulated standard entropies, the total entropy of the products comes to 214 + 2(69.9) = 353.8 J K^-1 mol^-1.
The reactants in that worked example total 596 J K^-1 mol^-1, so ΔS = 353.8 - 596 = -242.2 J K^-1 mol^-1, a negative value: the entropy has decreased, as predicted. The second law of thermodynamics states that the entropy of an isolated system always increases or remains constant, and the entropy of a substance increases with an increase in its temperature. The entropy of fusion is equal to the latent heat of fusion divided by the melting temperature. Because ΔS_surr = -ΔH/T, the entropy change of the surroundings is inversely proportional to the temperature, and a negative ΔS_surr indicates that an endothermic reaction has occurred. Unlike the standard enthalpies and free energies of formation, which are defined as 0 for elements in their standard state and bear units of kJ/mol_rxn, the value of S° is absolute. In older sources you may find the standard pressure quoted as 1 atmosphere rather than 1 bar. Randomness is everywhere in nature; the spacing between trees in a forest, for example, is a random process. The concept of free energy, which gauges the capacity of a chemical reaction to do useful work, was developed in the 1870s by Josiah Willard Gibbs. While many methods exist to calculate entropy, an accurate assessment of configurational entropy remains a challenge, and practical methods must strike a balance between accuracy and computational demands.
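Pulling together the relations collected above (ΔS_surr = -ΔH/T and ΔG = ΔH - TΔS, with feasibility judged by the sign of ΔG), here is a minimal Python sketch. The +44 kJ / 298 K figures reuse the example quoted earlier; the system entropy change supplied for ΔG is an illustrative assumption only.

```python
# Minimal sketch combining dS_surr = -dH/T with dG = dH - T*dS and the
# feasibility rule "dG < 0". The +44 kJ / 298 K figures reuse the example
# quoted in the text; dS_sys is an assumed value added purely for illustration.

def surroundings_entropy_change(dH_kJ, T_K):
    return -dH_kJ / T_K                    # kJ/K

def gibbs_change(dH_kJ, dS_kJ_per_K, T_K):
    return dH_kJ - T_K * dS_kJ_per_K       # kJ

dH, T = +44.0, 298.0
print(round(surroundings_entropy_change(dH, T), 3))   # -0.148 kJ/K (about -150 J/K)

dS_sys = 0.119                             # kJ/K, illustrative assumption
dG = gibbs_change(dH, dS_sys, T)
print(round(dG, 1), "feasible" if dG < 0 else "not feasible at this temperature")
```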
CommonCrawl
Isolation and characterization of Burkholderia fungorum Gan-35 with the outstanding ammonia nitrogen-degrading ability from the tailings of rare-earth-element mines in southern Jiangxi, China Ai-Juan Feng1,2, Xi Xiao2, Cong-Cong Ye2, Xiao-Ming Xu2, Qing Zhu2, Jian-Ping Yuan2, Yue-Hui Hong2 & Jiang-Hai Wang2 The exploitation of rare-earth-element (REE) mines has resulted in severe ammonia nitrogen pollution and induced hazards to environments and human health. Screening microorganisms with the ammonia nitrogen-degrading ability provides a basis for bioremediation of ammonia nitrogen-polluted environments. In this study, a bacterium with the outstanding ammonia nitrogen-degrading capability was isolated from the tailings of REE mines in southern Jiangxi Province, China. This strain was identified as Burkholderia fungorum Gan-35 according to phenotypic and phylogenetic analyses. The optimal conditions for ammonia–nitrogen degradation by strain Gan-35 were determined as follows: pH value, 7.5; inoculum dose, 10%; incubation time, 44 h; temperature, 30 °C; and C/N ratio, 15:1. Strain Gan-35 degraded 68.6% of ammonia nitrogen under the optimized conditions. Nepeta cataria grew obviously better in the ammonia nitrogen-polluted soil with strain Gan-35 than that without inoculation, and the decrease in ammonia–nitrogen contents of the former was also more obvious than the latter. Besides, strain Gan-35 exhibited the tolerance to high salinities. In summary, strain Gan-35 harbors the ability of both ammonia–nitrogen degradation at high concentrations and promoting plant growth. This work has reported a Burkholderia strain with the ammonia nitrogen-degrading capability for the first time and is also the first study on the isolation of a bacterium with the ammonia nitrogen-degrading ability from the tailings of REE mines. The results are useful for developing an effective method for microbial remediation of the ammonia nitrogen-polluted tailings of REE mines. Rare earth elements (REEs) have wide applications and are considered as the industrial gold due to their unique optical, magnetic, and catalytic properties (Cornell 1993). Currently, China supplies over 90% of the REEs-related products to the global market, and two-thirds of the products are produced in southern Jiangxi Province (Information Office of the State Council of China 2010). Among the REE mines, the ion-absorbing middle-heavy REE deposit occupies an important position in the world market (Information Office of the State Council of China 2010). REEs in this deposit primarily occur in the weathered layer of granites and are generally adsorbed in soils/sediments in the form of ions (Bao and Zhao 2008). So far, the advanced in situ leaching method has been extensively adopted to separate and extract ion-adsorbed REEs in southern Jiangxi Province, China (Wen et al. 2013). This is an effective method for the REE exploitation. However, it depends on the injection of chemicals, such as ammonium sulfate or ammonium bicarbonate, into the soils/sediments to extract REEs. The tailings and waste water resulting from the exploitation contains high concentrations of ammonia nitrogen (NH4 +-N), which have caused severe negative impacts on local ecosystems and human health (Åström 2001). For instance, the NH4 +-N pollution in the tailings of REE mines has resulted in soil degradation, forest destruction, and threat to life (Gao and Zhou 2011). 
The carcinogenic effect may be induced when NH4 +-N in the polluted drinking water was transformed into nitrite nitrogen (NO2 −-N). Therefore, it is urgent and necessary to remediate the NH4 +-N-polluted tailings in REE mines for realizing a sustainable development. Recently, diverse methods have been proposed for environmental remediation. Besides high cost, the physical and chemical methods can not thoroughly eliminate pollutants and may result in secondary pollution. Thus, they are usually used for emergent environmental restoration (Xue et al. 2015). Bioremediation has become one of the most reliable strategies for completely eliminating pollutants without secondary pollution. Recent researches on the tailings of REE mines has focused on the REE risk in soils and vegetables to human health (Hao et al. 2016). To our knowledge, however, studies on the bioremediation of the tailings of REE mines are rare, which is hindering the realization of pollution reduction and anticipated ecological balance in these areas. Microbial remediation is one of bioremediation methods and has been regarded as a cost-effective and eco-friendly strategy for eliminating pollutants (Al-Mailem et al. 2014; Dellagnezze et al. 2014; Hassanshahian et al. 2012). Screening microorganisms with the NH4 +-N-degrading ability in the tailings of REE mines is undoubtedly important for performing microbial remediation in these areas. However, up to date, no report on the microorganisms with the NH4 +-N-degrading ability is present in these areas. The aim of this study is to isolate and characterize a bacterium with the outstanding NH4 +-N-degrading capability from the tailings of REE mines in southern Jiangxi Province, China. The results may contribute to developing an effective method for the microbial remediation of NH4 +-N-polluted environments, in particular the tailings of REE mines. Sixteen tailing samples were obtained from three REE mines with the severe NH4 +-N pollution in southern Jiangxi Province, China. The sampling sites were randomly selected near the exploitation areas of the REE mines (Fig. 1). The samples were excavated from the depth of 10–15 cm in the tailings. Then, the samples were transferred into sterile bags, sealed and kept in a nitrogen canister. After being taken back to the laboratory, the samples were stored at −20 °C until being used for analysis. Remote sensing images of three rare-earth-element mines in southern Jiangxi, China. The geomorphologic features and 16 sampling sites (blue circles) are shown in the figure The enrichment medium (pH 7.2–7.4) was composed of (g/L): glucose, 5; (NH4)2SO4, 5; NaCl, 2; FeSO4·7H2O, 0.4; K2HPO4, 1; and MgSO4·7H2O, 0.5. The Luria–Bertani (LB) liquid medium consisted of (g/L): yeast extract, 5; tryptone, 10; and NaCl, 10. The LB agar medium contained (g/L): yeast extract, 5; agar, 20; NaCl, 10; and tryptone, 10. The screening medium (pH 7.2–7.4) was composed of (g/L): glucose, 5; (NH4)2SO4, 5; NaCl, 1; K2HPO4, 0.5; and MgSO4·7H2O, 0.25. All the culture media were prepared using deionized water and were autoclaved for 30 min before use. Analysis of the contents of NH4 +-N, NO3 −-N and NO2 −-N in the tailings The concentrations of NH4 +-N in the tailings were measured by spectrophotometry using the Auto Analyser 3 System (Bran + Luebbe, Germany). Prior to analysis, 25 g of the samples were mixed with 100 mL of deionized water, respectively. The concentrations of NH4 +-N were measured using hydrazine sulphate (Kearns 1968) as a color marker. 
The obtained results were corrected for the amount of the samples and expressed as milligram per kilogram of the tailings. The contents of NO3 −-N were measured according to the international method (Liang et al. 2012), which was based on the absorbance of NO3 − at 220 nm. The contents of NO2 −-N were determined by measuring the absorbance of NO2 −-N solution at 540 nm according to the instructions of an international standard method (Shi and Chao 2014). This method is based on the following principle: (i) NO2 − reacts with 4-aminobenzenesul fonamide under the condition of pH 1.8, resulting in the production of diazonium salt; (ii) the diazonium salt couples with C12H14N2·2HCl to produce a red dye that can be detected at 540 nm. Enrichment culture and screening of microorganisms with the NH4 +-N-degrading ability The tailing samples obtained from the REE mines were mixed together (10 g per sample) for the screening experiment. Then, 50 g of the mixed sample was transferred into the enrichment medium. The mixtures were incubated at 28 °C and 120 rpm overnight. After that, 10 mL of the culture was injected into a fresh enrichment medium, followed by incubation at 28 °C and 120 rpm overnight. Then, the culture was subjected to separation using the LB agar plate to obtain single clones. The clones were separately inoculated into the LB medium and were incubated at 28 °C and 180 rpm for 24 h. After that, the cells were collected by centrifugation (8000 rpm) and were suspended by sterilized normal saline to prepare a bacterial suspension with a density of approximately 109 cells per milliliter (OD600 ≈ 1). Then, 10 mL of the bacterial suspension was mixed with 190 mL of screening medium in a 1 L flask, followed by incubation at 28 °C and 200 rpm for 48 h. The residual NH4 +-N (from (NH4)2SO4 in the screening medium) was detected according to the method described previously (Yang et al. 2006). The screening medium without cell inoculation served as the control. The degradation rates of NH4 +-N were calculated according to Eq. (1) to evaluate the degradation capabilities of microorganisms. $$R = (C_{0} - C_{1} )/C_{0} \times 100\%$$ where R, C 0 and C 1 represented degradation rates, the concentration of NH4 +-N in the control and the concentration of NH4 +-N in the medium with cell inoculation, respectively. Morphological and biochemical characterization The bacterium with the excellent NH4 +-N-degrading capability was subjected to morphological observations and biochemical characterization. Optical microscopy, transmission electronic microscopy and scanning electron microscopy were adopted to analyze its morphological features according to the conventional methods (Chao et al. 2010; Deng et al. 2014, 2016; Prior and Perkins 1974). Its biochemical and physiological characteristics were analyzed according to the methods described previously (Faller and Schleifer 1981; Holt et al. 1994; Kloos et al. 1974; Lányi 1988), including motility, aerobism, Gram staining, spore formation, catalase activity, glucose fermentation, oxidase activity, nitrate reduction, starch hydrolysis, gelatin hydrolysis, indole production, Voges–Proskauer (V–P) reaction, citrate utilization, methyl red test, and production of hydrogen sulfide. PCR amplification of 16S rDNA and phylogenetic analysis The bacterium with the excellent NH4 +-N-degrading ability was further identified by phylogenetic analysis. Its genomic DNA was extracted according to the method described previously (Winnepenninckx et al. 1993). 
The 16S rDNA was amplified using universal primers 27F (5′-AGAGATTGATCCTGGCTCTG-3′) and 1492R (5′-GGTTTCCTTGTTACGACAT-3′) (Deng et al. 2014). The primers were synthesized by Sangon Biotech (Shanghai, China). The PCR reaction mixture was composed of genomic DNA (20 ng), 27F (50 μM), 1492R (50 μM), 10 × PCR buffer, 0.5 μL of DNA polymerase (5 U/L, TaKaRa, Japan), dNTPs (10 mM), MgCl2 (25 mM), and sterile ddH2O up to a volume of 50 μL. The PCR reactions were carried out on the LongGene MGL96G (Hangzhou, China). The PCR procedure was set as follows: (i) 95 °C for 5 min; (ii) 35 cycles of 95 °C for 30 s, 55 °C for 30 s, and 72 °C for 90 s; and (iii) 72 °C for 10 min. Then, the PCR product was sequenced by Sangon Biotech (Shanghai, China). The obtained 16S rDNA sequence was submitted to the GenBank database for the BLAST alignment. The MEGA 5 software (Tamura et al. 2011) was adopted to construct a phylogenetic tree using the neighbor-joining method (Li 2015). Optimization of NH4 +-N degradation by the isolated bacterium The effects of incubation time, carbon source, temperature, pH, C/N ratio, inoculum dose, and rotary speed on NH4 +-N degradation were evaluated to determine the optimal conditions for NH4 +-N degradation. (i) To evaluate the effect of incubation time on NH4 +-N degradation, the isolated bacterium was inoculated (10%, v/v) into the screening medium (pH 7.0) containing NH4 +-N (1 g/L), followed by incubation at 30 °C and 120 rpm. (ii) To determine the most suitable carbon source for NH4 +-N degradation, the following compounds were added into the screening medium without glucose, respectively: saccharose, lactose, sodium propionate, potassium sodium tartrate, glucose, ethanol, sodium acetate, and sodium citrate. The bacteria (10%, v/v) were incubated for 48 h at 30 °C and 120 rpm. (iii) Regarding the most suitable temperature for NH4 +-N degradation, the incubation temperature was set at 16, 20, 24, 28, 30, 32, 36, and 40 °C, respectively. (iv) To determine the most suitable pH for NH4 +-N degradation, HCl or NaOH was adopted to adjust the initial pH of the screening medium to 5.0, 5.5, 6.0, 6.5, 7.0, 7.5, and 8.0, respectively. The bacteria were incubated at 30 °C and 120 rpm for 48 h. (v) The carbon nitrogen ratios (C/N; w/w) were set at 2:1, 4:1, 6:1, 8:1, 10:1, 12:1, and 14:1, respectively. The bacteria were incubated at 30 °C and 120 rpm for 48 h. (vi) For the optimization of inoculum dose, the bacteria were inoculated into the screening medium (pH 7.0) containing 1 g/L of NH4 +-N, followed by incubation at 30 °C and 120 rpm for 48 h. The inoculum doses (v/v) of bacteria were set at 2, 5, 8, 10, 12, 15, and 18%, respectively. (vii) To determine the optimal rotary speed during incubation in an orbital shaker, the bacteria were inoculated (10%, v/v) into the medium (pH 7.0) containing 1 g/L of NH4 +-N and were incubated at 30 °C for 48 h. The rotary speeds were set at 100, 120, 150, 180, and 210 rpm, respectively. The screening medium (unless otherwise specified) was used in all the optimization experiments mentioned above. The medium without cell inoculation served as the negative control. The degradation rates of NH4 +-N were calculated according to the method described above to determine the most suitable conditions for NH4 +-N degradation. Besides, an orthogonal design containing five factors and four levels was adopted to further optimize the conditions for NH4 +-N degradation. 
The inoculum amount, temperature, pH, C/N ratio, and incubation time were respectively set at 6, 8, 10, 12%; 26, 28, 30, 32 °C; 6.0, 6.5, 7.0, 7.5; 5:1, 10:1, 15:1, 20:1; and 44, 48, 52, 56 h. Effect of the isolated bacterium on plant growth Red soils for the growth of Nepeta cataria were baked at 120 °C for 6 h to remove the original bacteria in the soils. Ten seeds of Nepeta cataria were sown in the red soils (1 kg) with different concentrations of NH4 +-N (500, 1000, 1500, and 2000 mg/kg, respectively). Then, a bacterial suspension of the isolated bacterium (1 mL, OD600 = 1) was inoculated into the red soils. The groups without bacterial inoculum served as the controls. The growth of Nepeta cataria in a humid environment was observed, and the plant lengths were measured at the time point of 12 days. Additionally, the concentrations of residual NH4 +-N in the soils were detected every two days using the method described above. Growth of strain Gan-35 in the high salt medium Strain Gan-35 was inoculated into the screening medium containing 1.0, 2.0, and 3.5% (w/v) of NaCl, respectively, followed by incubation at 28 °C and 120 rpm for 48 h. The absorbance of the culture at 523 nm was measured every 4 h. Then, a growth curve was drawn to evaluate the growth of strain Gan-35. The 16S rDNA sequence of the isolated bacterium was submitted to the GenBank database under accession number KY928114. Contents of NH4 +-N, NO3 −-N and NO2 −-N in the tailings of REE mines The contents of NH4 +-N in the tailings range from 483.2 to 899.4 mg/kg (Table 1), indicating that there is severe NH4 +-N pollution in the tailings of REE mines. However, the concentrations of NO3 −-N and NO2 −-N are relatively low. Table 1 Contents of NH4 +-N, NO3 −-N and NO2 −-N in the tailings of rare-earth-element mines Screening of NH4 +-N-degrading strains The screening experiment showed that 45 strains with the NH4 +-N-degrading ability were obtained from the tailings of REE mines. The degradation rates against NH4 +-N (1 g/L) ranged from 21.6 to 65.6% (Fig. 2) after incubation for two days. Strain Gan-35 exhibited the highest degradation rate and was selected for further investigation. Additionally, the concentrations of NO3 −-N and NO2 −-N in the screening medium during NH4 +-N degradation were measured, and the results showed that their contents were very low (Additional file 1: Tables S1, S2). Only 0.95% and 0.06% of NH4 +-N were transformed into NO3 −-N and NO2 −-N, respectively, suggesting that strain Gan-35 may not cause secondary pollution during NH4 +-N degradation. Strains with the NH4 +-N-degrading ability. The degradation rates were detected after incubation for 2 days. The detections were performed in triplicate, and the results were presented as mean ± standard deviation Identification of strain Gan-35 The colony of strain Gan-35 was shown to be smooth-faced, white-colored, and circular with a tidy margin (Fig. 3a). This strain is a Gram-negative bacterium (Fig. 3b). The results of scanning electron microscopy (×15,000) and transmission electronic microscopy (×5000) showed that strain Gan-35 was a rod-shaped cell with a size of (0.5–0.8) μm × (1.0–2.1) μm and contained flagella on the cell surface (Fig. 3c, d). The biochemical and physiological characteristics of strain Gan-35 were shown in Table 2. Briefly, strain Gan-35 is an aerobic and motile bacterium with the activities of catalase and oxidase. 
This strain is positive for nitrate reduction, starch hydrolysis, gelatin hydrolysis, and Voges–Proskauer reaction. It is negative for indole production, citrate utilization, spore formation, and production of hydrogen sulfide. In summary, the characteristics of strain Gan-35 are consistent with the descriptions of Bergey's Manual of Determinative for Bacteriology (Holt et al. 1994) regarding Burkholderia. Morphological characteristics of strain Gan-35. a Colonies of strain Gan-35 on the LB agar plate; b a photograph of Gram staining (20 × 100); c a photograph of scanning electron microscopy (×15,000); and d a photograph of transmission electronic microscopy (×5000) Table 2 Morphological, physiological and biochemical characteristics of strain Gan-35 DNA sequencing showed that the obtained 16S rDNA sequence (1425 bp) of strain Gan-35 was highly homologous (99% identity) to that of Burkholderia strains. A phylogenetic tree (Fig. 4) was constructed according to the similar 16S rDNA sequences using the neighbor-joining method (Li 2015). The result showed that strain Gan-35 was most closely related to Burkholderia fungorum strain NBRC 102489 (accession number: NR_114118.1). Phylogenetic analysis using 16S rDNA sequences. The accession numbers of the 16S rDNA sequences are presented in the parentheses. Strain Gan-35 is indicated by a pentagram Based on the morphological, biochemical and physiological properties and phylogenetic analysis of 16S rDNA sequences, strain Gan-35 was identified as Burkholderia fungorum. Strain Gan-35 was deposited in the China Center for Type Culture Collection with the preservation number of "CCTCC AB 2017087". Optimization of the conditions for NH4 +-N degradation The effects of incubation time, carbon source, temperature, pH, C/N ratio, inoculum dose, and rotary speed on the degradation of NH4 +-N by strain Gan-35 were investigated in this study. It can be seen from Fig. 5a that the degradation rates of NH4 +-N increase rapidly after the time point of 12 h, and after 32 h the increase is less obvious. Under the conditions of 16–30 °C, the degradation rates increase with the enhancement of temperatures. Strain Gan-35 can degrade 59.8% of NH4 +-N at 30 °C, and the degradation rates decrease when the temperatures are higher (Fig. 5b). Strain Gan-35 exhibits the highest degradation rates (60.1%) against NH4 +-N at an initial pH value of 7.0 (Fig. 5c). The degradation rates decrease when the pH values are lower or higher, suggesting that a neutral environment is more suitable for the NH4 +-N degradation by strain Gan-35. As shown in Fig. 5d, the highest degradation rate (59.3%) was obtained when the C/N ratio was set at 10:1. It also suggests that a greater C/N ratio does not necessarily increase the degradation rates. The degradation rates increase significantly when the inoculum doses are at the interval of 2–10% (Fig. 5e). The changes of degradation rates are not obvious if the inoculum doses are greater than 10%, which may be due to that the nutrients are relatively deficient when the inoculum doses are greater. A higher degradation rate was observed when the rotary speed of an orbital shaker was set at 150 rpm (Fig. 5f), suggesting that moderate dissolved oxygen is needed for the efficient degradation of NH4 +-N. Besides, strain Gan-35 exhibits the highest degradation rate of NH4 +-N (59.4%) using glucose (followed by sodium citrate and sodium acetate) as the sole carbon source (Fig. 5g). Optimization of the conditions for NH4 +-N degradation. 
The optimized conditions included incubation time (a), temperature (b), pH (c), C/N ratio (d), inoculum dose (e), rotary speed (f), and carbon source (g). The degradation tests were performed in triplicate, and the results were shown as mean ± standard deviation An orthogonal design involving five factors and four levels was performed in this study. Range analysis and variance analysis were adopted to determine the optimal conditions for NH4 +-N degradation, and the results were shown in Tables 3 and 4. The variance analysis indicates that there are differences among inoculum dose, temperature, pH, C/N ratio, and incubation time. Their effects on NH4 +-N degradation are significant (p < 0.05) and are in the following order: temperature >pH >C/N ratio >incubation time >inoculum dose. According to the range analysis, the optimal conditions for NH4 +-N degradation in an indoor laboratory was determined as follows: pH value, 7.5; inoculum dose, 10%; incubation time, 44 h; temperature, 30 °C; and C/N ratio, 15:1. A mean degradation rate of 68.6% was obtained under the above optimal conditions. The optimization of NH4 +-N degradation by strain Gan-35 contributes to designing effective methods for bioremediation of NH4 +-N-polluted environments, such as producing effective microbial inocula for bioaugmentation. Table 3 Orthogonal design for NH4 +-N degradation Table 4 Variance analysis for the conditions of NH4 +-N degradation Effects of strain Gan-35 on plant growth and its tolerance to the high salinity Burkholderia is a genus rich in nitrogen-fixing and phosphate-solubilizing strains that have been isolated from various plant systems. The functions of phosphate-solubilizing bacteria in agriculture have been well documented, including enhancements in growth, yield and disease-resistance of crops (Ghosh et al. 2016). The effects of Burkholderia fungorum Gan-35 on plant growth were investigated in this study. As shown in Fig. 6, Nepeta cataria with Gan-35 inoculum grew obviously better than that without inoculation. The average plant lengths of the former are significantly greater than the latter (Fig. 7). Besides, the NH4 +-N contents in the red soils decreased obviously along with the time extension in the experimental groups with Gan-35 inoculum, and the degradation rates of NH4 +-N were between 43.37 and 51.42% (Table 5). Nevertheless, the decrease of NH4 +-N contents in the controls was not obvious, and the degradation rates were between 6.12 and 10.30%. Thus, it is suggested that the degradation of NH4 +-N performed by strain Gan-35 can relieve the growth inhibition effect caused by the high concentrations of NH4 +-N. In other words, Gan-35 inoculation in the soils with NH4 +-N pollution contributes to the growth of Nepeta cataria. The promotion effect of strain Gan-35 on the growth of Nepeta cataria. Nepeta cataria was grown in the red soils containing NH4 +-N at the concentrations of 500, 1000, 1500, and 2000 mg/kg, respectively. (−), without Gan-35; (+), with Gan-35 Lengths of Nepeta cataria during growth in the red soils containing NH4 +-N. (−), without Gan-35; (+), with Gan-35. The numbers on the horizontal axis are the concentrations of NH4 +-N (mg/kg) Table 5 NH4 +-N contents in the red soils with time extension The growth of strain Gan-35 in the high salt medium showed that this bacterium entered a rapid growth phase after the time points of 12 or 20 h when the concentrations of NaCl were set at 1.0, 2.0, and 3.5%, respectively (Fig. 8). 
The bacteria were still in the rapid growth period at the time point of 48 h in the media with 2.0 or 3.5% of NaCl, suggesting that strain Gan-35 exhibits the tolerance to the high salinity, which contributes to the bioremediation in hyperhaline NH4 +-N-polluted environments and the removal of NH4 +-N in aquatic environments in marine farms. Growth curve of strain Gan-35 during growth in the high salt medium. The concentrations of NaCl in the medium were set at 1.0, 2.0, and 3.5% (w/v), respectively. The absorbance of the culture at 523 nm was measured every 4 h The detection of NH4 +-N contents in the tailings of REE mines suggests that the NH4 +-N pollution in these areas is severe. The studied samples were collected at the depth of 10–15 cm in the tailings. We infer that the contents of NH4 +-N may be higher at deeper positions due to the cumulative effect, and that a part of NH4 +-N in the surface tailings has been transferred into aquatic environments by rain wash. The severe NH4 +-N pollution has induced many negative effects on the surrounding ecosystems and human health (Åström 2001). Thus, reducing the NH4 +-N pollution and remediating its contaminated environment are imperative. So far, some functions of Burkholderia strains have been demonstrated, such as biological control, promoting plant growth, bioremediation, and producing various active metabolites, including phenazine, pyroace, and monoterpenoid alkaloids (Coenye et al. 2001). Besides, Burkholderia strains had been applied as biological insecticides, biological bactericides and decomposition of toxic substances. To our knowledge, the NH4 +-N-degrading ability of Burkholderia sp. has not been discovered before. Thus, this study has reported a Burkholderia strain with the NH4 +-N-degrading capability for the first time. The obtained results provide a new insight into the promising applications of Burkholderia strains in terms of bioremediation of NH4 +-N-polluted environments. This is also the first report on the isolation and characterization of a bacterium with the NH4 +-N-degrading ability from the tailings of REE mines, laying the foundation for the bioremediation of these areas. In situ bioremediation using indigenous microorganisms is an effective method to eliminate pollutants (Lin et al. 2012). In some cases, indigenous microorganisms with the pollutant-degrading ability may be better adapted for bioremediation. Since strain Gan-35 is an indigenous bacterium isolated from the NH4 +-N-polluted tailings of REE mines and exhibits the ability for (i) the degradation of NH4 +-N at high concentrations, (ii) promoting plant growth, and (iii) resistance to high salinity, it is plausible that strain Gan-35 can be applied in the bioremediation of these areas. Unraveling the mechanisms for NH4 +-N degradation in strain Gan-35 and extensive field studies in the future, such as revealing the relative abundance of strain Gan-35 in the tailings of REE mines, contribute to realizing its practical applications for bioremediation. In summary, a bacterium with the NH4 +-N-degrading capability has successfully been isolated from the tailings of REE mines. This strain is identified as Burkholderia fungorum Gan-35 on the basis of phylogenetic analysis and its phenotypic characteristics. This is the first study to report a Burkholderia strain with the NH4 +-N-degrading ability. This is also the first research on the screening of a bacterium with the NH4 +-N-degrading ability from the tailings of REE mines. 
The optimal conditions for NH4 +-N degradation in strain Gan-35 have been determined, which provides valuable information for designing effective methods for its applications in bioremediation. Besides, strain Gan-35 exhibits the abilities of promoting plant growth and resistance to high salinities. This work contributes to developing a cost-effective and eco-friendly method for bioremediation of the tailings of REE mines contaminated by NH4 +-N. REE: rare earth element NH4 +-N: NO2 −-N: nitrite nitrogen revolutions per minute optical density Al-Mailem DM, Kansour MK, Radwan SS (2014) Bioremediation of hydrocarbons contaminating sewage effluent using man-made biofilms: effects of some variables. Appl Biochem Biotechnol 174:1736–1751 Åström M (2001) Abundance and fractionation patterns of rare earth elements in streams affected by acid sulphate soils. Chem Geol 175:249–258 Bao Z, Zhao Z (2008) Geochemistry of mineralization with exchangeable REY in the weathering crusts of granitic rocks in South China. Ore Geol Rev 33:519–535 Chao Y, Liu N, Zhang T, Chen S (2010) Isolation and characterization of bacteria from engine sludge generated from biodiesel-diesel blends. Fuel 89:3358–3364 Coenye T, Vandamme P, Govan JRW, Lipuma JJ (2001) Taxonomy and identification of the Burkholderia cepacia complex. J Clin Microbiol 39:3427–3436 Cornell DH (1993) Rare earths from supernova to superconductor. Pure Appl Chem 65:2453–2464 Dellagnezze BM, de Sousa GV, Martins LL, Domingos DF, Limache EEG, de Vasconcellos SP, da Cruz GF, de Oliveira VM (2014) Bioremediation potential of microorganisms derived from petroleum reservoirs. Mar Pollut Bull 89:191–200 Deng MC, Li J, Liang FR, Yi M, Xu XM, Yuan JP, Peng J, Wu CF, Wang JH (2014) Isolation and characterization of a novel hydrocarbon-degrading bacterium Achromobacter sp. HZ01 from the crude oil-contaminated seawater at the Daya Bay, southern China. Mar Pollut Bull 83:79–86 Deng MC, Hong YH, Li J, Zhou QZ, Chen WX, Yuan JP, Peng J, Wang JH (2016) Isolation and characterization of hydrocarbon-degrading Brevundimonas diminuta DB-19 from crude oil-polluted seawater. Transylv Rev 24:2655–2668 Faller A, Schleifer KH (1981) Modified oxidase and benzidine tests for separation of staphylococci from micrococci. J Clin Microbiol 13:1031–1035 Gao ZQ, Zhou QX (2011) Contamination from rare earth ore strip mining and its impacts on resources and eco-environment. Chin J Ecol 30:2915–2922 Ghosh R, Barman S, Mukherjee R, Mandal NC (2016) Role of phosphate solubilizing Burkholderia spp. for successful colonization and growth promotion of Lycopodium cernuum L. (Lycopodiaceae) in lateritic belt of Birbhum district of West Bengal. India. Microbiol Res 183:80–91 Hao X, Wang D, Wang P, Wang Y, Zhou D (2016) Evaluation of water quality in surface water and shallow groundwater: a case study of a rare earth mining area in southern Jiangxi Province. China. Environ Monit Assess 188:24 Hassanshahian M, Emtiazi G, Cappello S (2012) Isolation and characterization of crude-oil-degrading bacteria from the Persian Gulf and the Caspian Sea. Mar Pollut Bull 64:7–12 Holt JG, Krieg NR, Sneath PHA, Staley JT, Williams ST (1994) Bergey's manual of determinative bacteriology, 9th edn. Williams & Wilkins, Baltimore Kearns TP (1968) Neuro-ophthalmology. Arch Ophthalmol 79:87–117 Kloos WE, Tornabene TG, Schleifer KH (1974) Isolation and characterization of micrococci from human skin, including two new species: micrococcus lylae and Micrococcus kristinae. 
Int J Syst Evol Microbiol 24:79–101 Lányi B (1988) Classical and rapid identification methods for medically important bacteria. In: Colwell RR, Grigorova R (eds) Methods in microbiology, vol 19. Academic Press, London, pp 1–67 Li JF (2015) A fast neighbor joining method. Gen Mol Res 14:8733–8743 Liang Q, Li Q, Luo X, Zhang Q, Lu J (2012) Study on nitrogen and phosphorus in farmland water bodies of the Pearl River estuary. China Environ Sci 32:695–702 Lin CH, Kuo MCT, Sip CY, Liang KF, Han YL (2012) A nutrient injection scheme for in situ bio-remediation. J Environ Sci Health A Tox Hazard Subst Environ Eng 47:280–288 Prior RB, Perkins RL (1974) Artifacts induced by preparation for scanning electron microscopy in Proteus mirabilis exposed to carbenicillin. Can J Microbiol 20:794–795 Shi N, Chao J (2014) Study on measurement of nitrate and nitrite by ion chromatography in international comparison. Chinese J Anal Lab 33:967–971 Tamura K, Peterson D, Peterson N, Stecher G, Nei M, Kumar S (2011) MEGA5: molecular evolutionary genetics analysis using maximum likelihood, evolutionary distance, and maximum parsimony methods. Mol Biol Evol 28:2731–2739 Wen XJ, Duan CQ, Zhang DC (2013) Effect of simulated acid rain on soil acidification and rare earth elements leaching loss in soils of rare earth mining area in southern Jiangxi Province of China. Environ Earth Sci 69:843–853 Winnepenninckx B, Backeljau T, De Wachter R (1993) Extraction of high molecular weight DNA from molluscs. Trends Genet 9:407 Xue J, Yu Y, Bai Y, Wang L, Wu Y (2015) Marine oil-degrading microorganisms and biodegradation process of petroleum hydrocarbon in marine environments: a review. Curr Microbiol 71:220–228 Yang SZ, Wei DZ, Mu BZ (2006) Determination of the amino acid sequence in a cyclic lipopeptide using MS with DHT mechanism. J Biochem Biophys Methods 68:69–74 JHW, JPY and QZ performed the field observation and collected samples; AJF, JHW and YHH designed the experiments; AJF, XX, CCY and XMX conducted the experiments; AJF and YHH evaluated the results and drafted the manuscript. JHW revised the manuscript. All authors read and approved the final manuscript. We would like to thank Professor Bo Deng at the East China Jiaotong University for kindly arranging the field observation in the REE mines in southern Jiangxi Province. The datasets supporting the conclusions of this paper are included in the paper, its additional file and the NCBI database (https://www.ncbi.nlm.nih.gov/). Ethical approval and consent to participate This paper does not contain any studies with human participants or animals performed by any of the authors. This work was jointly supported by the fund of Guangdong Research and Construction of Public Service Abilities (No. 2014B020204004) and the Fundamental Research Funds for the Central Universities (No. 16lgjc22). 
Author affiliations: (1) School of Life Sciences, Sun Yat-Sen University, Guangzhou 510275, People's Republic of China (Ai-Juan Feng); (2) Guangdong Provincial Key Laboratory of Marine Resources and Coastal Engineering/South China Sea Bioresource Exploitation and Utilization Collaborative Innovation Center, School of Marine Sciences, Sun Yat-Sen University, Guangzhou 510006, People's Republic of China (Ai-Juan Feng, Xi Xiao, Cong-Cong Ye, Xiao-Ming Xu, Qing Zhu, Jian-Ping Yuan, Yue-Hui Hong and Jiang-Hai Wang). Correspondence to Yue-Hui Hong or Jiang-Hai Wang. Additional file 1: Tables S1, S2. Citation: Feng, A., Xiao, X., Ye, C. et al. Isolation and characterization of Burkholderia fungorum Gan-35 with the outstanding ammonia nitrogen-degrading ability from the tailings of rare-earth-element mines in southern Jiangxi, China. AMB Expr 7, 140 (2017). doi:10.1186/s13568-017-0434-x. Keywords: Burkholderia; Rare-earth-element mine.
CommonCrawl
Cryptography Stack Exchange is a question and answer site for software developers, mathematicians and others interested in cryptography.

Password Entropy in bits for passwords of a certain structure

I have read this post and this post about how best to calculate a pseudo-accurate entropy in bits of a password. I can do this fine for passwords of a uniform nature, such as 8 letters of an a-z range: 26(a-z)^8(length) = 208,827,064,576. Then to bits: log2(208,827,064,576) = 37.6035177451 ≈ 38, so its guessable entropy S is 38 - 1 = 37 bits.

My problem: Now, I have a password made up of the following structure: A-z, 0-9, a-z, 0-9 .... to ten characters long (5 of each type). A9B8C7D6E5. I figure the entropy value of that would be: (b): 26 * 10 * 26 * 10 * 26 * 10 * 26 * 10 * 26 * 10 = 1,188,137,600,000. I had tried other variations for shorter syntax such as (26 * 10)^5 but these figures didn't seem to correlate. Anyhow the above gives entropy bits of: log2(1.1881376e+12) = 40.1118390651, so S is 40. This strikes me as being only a tiny bit larger than the first example; I think I'm working this correctly, and I am aware the numeric characters do severely limit the potential entropy of the whole password. Is my calculation and conclusion (S ~ 40) correct? If so, is there a more efficient way of working out equation (b)?

$\begingroup$ $26^5 \cdot 10^5$ $\endgroup$ – CodesInChaos
$\begingroup$ Cheers @CodesInChaos that's the sort of shorthand I was looking for, please put it in an answer. $\endgroup$ – Martin
$\begingroup$ @CodesInChaos your comment seems to imply that the order of certain character sets doesn't seem to matter when calculating entropy. I was wary of making this assumption myself when facing potential solutions to this problem $\endgroup$ – Martin
$\begingroup$ You realise that this method only works for passwords generated by computer (as your example)? Most passwords are based on linguistics, and in that case I don't think that there is any agreed or mathematical method for their entropy calculation. But it's probably a lot lot less. $\endgroup$ – Paul Uszak
$\begingroup$ @PaulUszak I was exploring the entropy of a specific style of [auto-generated] password. $\endgroup$ – Martin

Yes, but it's important to state the premises behind these calculations and not relegate them to a footnote. Namely, that each character in the password is selected uniformly at random, independently of every other one. We routinely see people talk about password "entropy" in contexts where passwords are evidently not being selected randomly. And in such contexts your calculation doesn't tell you anything about the password's security. Keep in mind also that overwhelmingly most people don't select passwords randomly. These numbers apply in circumstances that are, in real life, exceptional.

Yes, by using the laws of logarithms, which allow you to work it out by adding small numbers instead of multiplying huge ones: $$ \log(n \times m) = \log(n) + \log(m) $$ This is why entropy is normally expressed in bits—the base 2 logarithm of the number of equiprobable alternatives—because it just makes the math much easier. In this case: The uniform random choice of one character out of a set of 26 is $\log_2(26) \approx 4.7$ bits. The entropy of a sequence of independent random choices is the sum of their individual entropies.
Therefore, a randomly generated eight letter password has entropy of about $8 \times 4.7 = 37.6$ bits, and one randomly generated according to your latter schema has approximately $5 \times 4.7 + 5 \times 3.3 = 5 \times 8 = 40$ bits of entropy. Addressing this comment: @CodesInChaos your comment seems to imply that the order of certain character sets doesn't seem to matter when calculating entropy. I was wary of making this assumption myself when facing potential solutions to this problem. If the order is predetermined then it doesn't matter, because then there's no choice about the order, therefore no uncertainty. Another way of looking at this is that there's a bijective function between the set of passwords that look like ABCDE12345 and A1B2C3D4E5, so therefore the uniform random choice of an element out of one set must have the same entropy as from the other. (The entropy is the base 2 logarithm of the number of alternatives in the uniform case, and in both cases the total number of alternatives is the same.) Luis CasillasLuis Casillas $\begingroup$ Thank you for a very clear and very articulate response. (plus I'm glad my working was correct after all) $\endgroup$ $\begingroup$ given what you said about "Namely, that each character in the password is selected uniformly at random, independently of every other one", the first character in a word would have 26 then each character in the word would go down more and more until the prefix is unique to one word and all characters would then have a value of 1? $\endgroup$ – Trevor Boyd Smith $\begingroup$ @TrevorBoydSmith: No, I say "independently of every other one" in the same sense as we say that of series of coin flips or dice rolls. If you roll a die ten times in a row, each outcome is selected from the same set of six, and each roll's outcome is independent of every other one. $\endgroup$ – Luis Casillas In the initial exposition (8 letters), there is no reason to round entropy to the next integer; fractional entropy is fine, including in bits. We can compute it as $$\begin{align} S&=\log_2(26^8)\\ &=8\log_2(26)\\ &\approx37.6\text{ bit} \end{align}$$ I had never met "guessable entropy"; but yes, subtracting one from the entropy gives the $\log_2$ of the expected number of guesses before finding the right one. For the problem (5 groups of one letter and one digit), we can write $$\begin{align} S&=\log_2(26\times10\times26\times10\times26\times10\times26\times10\times26\times10)\\ &=\log_2((26\times10)^5)\\ &=5\log_2(260)\\ &\approx40.1\text{ bit}\end{align}$$ thus yes, the question's calculation is essentially OK (if not maximally simple, and briefly erring in but these figures didn't seem to correlate). As illustrated, computations are easier using the properties of the logarithm that $$\begin{align} \forall(b,x,y)\in\mathbb R^3&\text{ with }b>0\text{ and }x>0\text{ and }y>0,\;&\log_b(x\,y)&=\log_b(x)+\log_b(y)\\ \forall(b,x,y)\in\mathbb R^3&\text{ with }b>0\text{ and }x>0,\;&\log_b(x^y)&=y\log_b(x) \end{align}$$ Using these properties, we can validly say that each letter accounts for $\log_2(26)\approx4.70\text{ bit}$, and each digit accounts for $\log_2(10)\approx3.32\text{ bit}$. Edit: that excellent other answer was first, and additionally points out that it is essential for the calculations made that each letter or digit is chosen uniformly and independently of others. 
fgrieu♦
$\begingroup$ The guessable entropy is the number of total possible values divided by 2, on the basis that if an attacker guesses they'll probably have the correct answer after trying ~50% of the choices. It was in one of the references at the top of my question. $\endgroup$ $\begingroup$ I do appreciate your longer explanation, sorry I can't also mark yours as correct too. Cheers. $\endgroup$
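A minimal Python sketch of the arithmetic discussed in this thread, assuming (as the answers stress) that every character is chosen uniformly at random and independently of the others; the function name is illustrative and not from any particular library:

```python
import math

def entropy_bits(choices_per_position):
    """Entropy in bits of a string whose i-th character is drawn uniformly
    and independently from a set of the given size."""
    return sum(math.log2(n) for n in choices_per_position)

# 8 letters from a 26-character alphabet: 8 * log2(26) ~ 37.6 bits
print(entropy_bits([26] * 8))

# alternating letter/digit, 10 characters (5 of each):
# 5 * log2(26) + 5 * log2(10) = log2(26^5 * 10^5) ~ 40.1 bits
print(entropy_bits([26, 10] * 5))
```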
CommonCrawl
Poincare Symmetry in QFT
Given that spacetime is not affine Minkowski space, it does of course not possess Poincare symmetry. It is still sensible to speak of rotations and translations (parallel transport), but instead of $$[P_\mu, P_\nu] = 0$$ translations along a small parallelogram will differ by the curvature. I have not thought carefully about rotations and translations, but basically you could look at the induced connection on the frame bundle, to figure out what happens. This is all to say that spacetime obviously does not have exact Poincare symmetry, although the corrections are ordinarily very small. Most QFT textbooks seem to ignore this. Of course it is possible to formulate lagrangians of the standard theories in curved space and develop perturbation theory, too. But since there is no translation invariance, one cannot invoke the Fourier transform. Why is it safe to ignore that there is no exact Poincare symmetry? Especially the rampant use of Fourier transforms bothers me, since they do require exact translation invariance. How does one treat energy momentum conservation? Presumably one has to (at least) demonstrate that the covariant derivative of the energy momentum tensor is zero. Any references that discuss those issues in more detail are of course appreciated.
poincare-group asked Apr 21, 2012 in Theoretical Physics by orbifold (195 points) [ no revision ]
It is not clear to me whether what you are asking is just general relativity in QFT or an extension of GR in QFT, but obviously your question has something to do with this topic. In both cases, merging GR with QFT is a non-trivial task. Some people even say that they are incompatible. Modern textbooks usually have a few paragraphs about the problems. commented Apr 22, 2012 by Nestoklon (340 points) [ no revision ]
My question is about neither, I want to understand QFT in a curved background. I will try to edit my question to clarify that. commented Apr 22, 2012 by orbifold (195 points) [ no revision ]
There's no coordinate-system-independent, nonzero definition of the stress-energy tensor in general relativity whose divergence using partial derivatives would vanish; this is related to the breaking of the translational symmetries by general backgrounds because these translational symmetries are needed, by Noether's theorem, for the conservation laws to exist. In GR in asymptotically flat or otherwise translationally symmetric backgrounds, one may still define a conserved total ADM mass and other things. I would also like to correct some other statements: the existence of a Fourier transform doesn't depend on the Lorentz symmetry. In $\exp(ip\cdot x)$, the inner product is really an action of a linear form on a vector (they live in mutually dual spaces) so one doesn't need an inner product on either of the two spaces. general relativity, especially in the presence of fermions etc., is often formulated using tetrads/vierbeins/vielbeins (they're really needed for the fermions). Then the total gauge symmetry group (under which the physical states must be invariant) includes not only the diffeomorphisms but also local Lorentz transformations. quantum gravity expanded around the Minkowski space does exactly preserve the global Lorentz symmetry as well, despite the fact that coherent states of gravitons are able to add curvature into the spacetime and turn it into a Lorentz-symmetry-violating geometry. That's because these gravitons may still be treated as spin-2 fields that exist in the pre-existing background. the gauge symmetries have to annihilate physical states; there may be exceptions for symmetry generators that move the asymptotic region of the spacetime (at infinity); the physical states may carry charges under these generators but these generators must still be isometries of the background (and the corresponding superselection sector of the Hilbert space). What's true about Nestoklon's comment is that one faces problems when he tries to quantize Einstein's equations including all loop corrections; the theory is non-renormalizable. But these problems don't get imprinted to none of the hypothetical problems you have sketched as long as one is satisfied with the one-loop approximation, for example. answered Apr 22, 2012 by Luboš Motl (10,278 points) [ no revision ] Thank you for your answer. I do not understand your answer to my first question however. Do you mean by topological trivial, that it can be covered by a single coordinate chart? In that case you can certainly pull back the action of the poincare group, but this does not seem to yield a very geometric action. I thought that the right generalization was to consider the levi-civita connection, which at least gives you infitesimal translations. In any case we cannot apriori assume that spacetime is topological trivial. It is safe to ignore curvature at the length scales of particle physics, as in the relevant region of space-time one can approximate the manifold very well by its tangent space, which is a flat Minkowski space with Poincare symmetry. For the same reason, engineers do not use general relativity but work with special relativirty (or even Newton's laws). Carrying the extra conceptual burden would only complicate things without any benefit. But if curvature is important, one can use a curved version of the Fourier transform ((see, e.g., work by Fonarev, gr-qc/9309005). 
answered Apr 29, 2012 by Arnold Neumaier (14,437 points) [ no revision ]
What I am really trying to understand is how the usual constructions can be sensible, although they do not exist in curved space (for example the S-Matrix). What is more, as I understood Poincare symmetry so far, it is a symmetry of an affine space. I would expect translations to correspond to parallel transport along geodesics. To identify that with translations in the tangent space seems artificial. Some problems with QFT in curved space are outlined in: Christian Bär, Klaus Fredenhagen (Editors), Quantum Field Theory on Curved Spacetimes: Concepts and Mathematical Foundations. To clarify: I am perfectly happy to accept that since space is almost flat, we can approximately neglect curvature, similarly to how one can demonstrate that GR gives back Newtonian gravity in a suitable limit. But in all other cases I know how to argue that these effects can be neglected, whereas in QFT some of the standard constructions seem to depend on the global structure of spacetime. This belief might be completely misguided, but I do want to understand why.
For a well-defined $S$-matrix, you need an unbounded space and an unbounded time (both in the past and in the future), not a Poincare group. So yes, it depends on the global structure. But as a scattering process happens essentially locally, one can approximate it by an S-matrix on the tangent space as one is not interested in the behavior at huge times (where the global S-matrix which might not exist applies) but at the behavior at times large enough to cross the experimental setting, where the local $S$-matrix is a much better choice. commented Apr 30, 2012 by Arnold Neumaier (14,437 points) [ no revision ]
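For reference, the non-commutativity invoked in the question (parallel transport around a small parallelogram versus the flat-space relation $[P_\mu,P_\nu]=0$) is the standard commutator identity for covariant derivatives; a sketch in index notation, with the sign convention in which, for the torsion-free Levi-Civita connection,
$$[\nabla_\mu,\nabla_\nu]V^{\rho} = R^{\rho}{}_{\sigma\mu\nu}V^{\sigma},$$
so the Riemann tensor $R^{\rho}{}_{\sigma\mu\nu}$ measures exactly the failure of infinitesimal translations to commute, and it vanishes precisely in the flat Minkowski case.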
CommonCrawl
Ultra-broadband conversion of OAM mode near the dispersion turning point in helical fiber gratings
Kaili Ren,1 Minhui Cheng,1 Liyong Ren,2 Yunhui Jiang,1 Dongdong Han,1 Yongkai Wang,1 Jun Dong,1 Jihong Liu,1,4 Li Yang,3,5 and Zhanqiang Xi1
1School of Electronic Engineering, Xi'an University of Posts and Telecommunications, Xi'an 710121, China
2School of Physics and Information Technology, Shaanxi Normal University, Xi'an 710119, China
3Department of Electronic Engineering and Information Science, University of Science and Technology of China, Hefei, Anhui, 230027, China
[email protected] [email protected]
Kaili Ren https://orcid.org/0000-0002-3100-9994 Liyong Ren https://orcid.org/0000-0002-7547-7511
https://doi.org/10.1364/OSAC.381877
Kaili Ren, Minhui Cheng, Liyong Ren, Yunhui Jiang, Dongdong Han, Yongkai Wang, Jun Dong, Jihong Liu, Li Yang, and Zhanqiang Xi, "Ultra-broadband conversion of OAM mode near the dispersion turning point in helical fiber gratings," OSA Continuum 3, 77-87 (2020)
Topics: Light beams; Mode division multiplexing; Photonic crystal fibers; Resonant modes; Spatial light modulators
Based on the dual-resonance principle around the dispersion turning point (DTP), for the first time, an ultra-broadband orbital angular momentum (OAM) mode converter formed by the helical long-period fiber grating (HLPG) is proposed. The converter delivers OAM operation with a 3-dB bandwidth of 287 nm, which is about 7 times that of general OAM converters, and supports only one mode. Furthermore, by chirping the HLPG working around the DTP, a flat-top broadband OAM mode converter with a bandwidth of ∼182 nm@3 dB is conveniently achieved. The flatness of the spectrum can be increased by apodizing and optimizing the length of the chirped HLPG. Subsequently, we developed a flat-top broadband rejection filter with a >30 dB bandwidth at a high level of ∼100 nm@1 dB by double-cascading the HLPG. It is shown that the performances of the OAM mode converter and the flat-top broadband rejection filter can be remarkably improved by accomplishing the DTP in the mode phase-matching for the HLPG.
Fig. 1. (a) HLPG with right-handed helical structure formed by twisting an SMF, and calculated variation of resonance wavelengths versus the grating period of HLPG for different cladding modes. (b) Mode orders m = 1 to 7. (c) Mode orders m = 8 to 14. (The DTPs for m = 8 to 14 are marked by black dots that is sandwiched by black arrows.)
Fig. 2. (a) Transmission spectra of the HLPG working at the DTP (m = 10) and not working at DTP (m = 1). (b) Simulation of the intensity profile of the HLPG working at the DTP of LP1,10 with a period of 197.3 µm corresponding to a central wavelength of 1.612 µm, which exhibits a phase singularity at the center of the intensity distribution. (c) Spiral interference pattern of the converted OAM mode formed by interfering with a Gaussian reference beam.
Fig. 3. (a) Phase-matching curves for LP11 (blue line) and LP12 modes (red line). A period change of 225 µm corresponds to 245 nm resonance wavelength range, but also with 212 nm overlapping range associated with the LP11 and LP12 modes. (b) Phase-matching curves for LP19, LP1,10 and LP1,11 modes. The resonance wavelength with a bandwidth of 245 nm requires only 1 µm period change near the DTP of LP1,10 mode.
Fig. 4. (a) Schematic structure for the proposed chirped HLPG. (b) Profile of a chirped HLPG. (c) Transmission spectra of the HLPG working at DTP of LP1,10 mode with a constant period of 197.3 µm (red line) and chirped HLPG with a period change from 197.3 to 196.3 µm (blue line, $c = \Delta \Lambda /L = 1.06 \times 10^{-4}$).
Fig. 5. (a) Variations of the section length versus the grating position in chirped HLPG operating near the DTP of LP1,10 mode. (b) Transmission spectra under different cases of the section lengths apodization of the chirped HLPG operating near the DTP of LP1,10 mode.
Fig. 6. (a) Transmission spectra of the length-apodized chirped HLPG operating near the DTP of LP1,10 mode with the grating lengths of 9.45 mm (blue solid line) and 9.40 mm (red dashed line). Insert is the enlarged figure near the loss peak. (b) Transmission spectrum of the cascaded length-apodized chirped HLPGs.
$$\lambda_{\mathrm{res}} = \left(n_{\mathrm{eff},01}-n_{\mathrm{eff},1n}\right)\Lambda, \tag{1}$$
$$L = \pi/(2C), \tag{2}$$
$$\Lambda = \Lambda_{0}+cz, \tag{3}$$
$$c = \Delta\Lambda/L, \tag{4}$$
$$f(n) = \frac{L\,(N-n+1)}{N_{\mathrm{sum}}}, \tag{5}$$
$$f(n) = \frac{F(n)}{\sum_{n=1}^{N}F(n)}\,L, \tag{6}$$
$$F(n) = \frac{1}{2}\left(1+\cos\left(\frac{\pi z_{n}}{L}\right)\right), \tag{7}$$
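The chirp and apodization profiles in Eqs. (3)–(7) above are easy to evaluate numerically; the following short Python sketch does so with illustrative parameter values suggested by the figure captions (an initial period of 197.3 µm decreasing by 1 µm over roughly 9.4 mm; the number of sections is an assumption, and this is not code from the paper):

```python
import numpy as np

# illustrative parameters (orders of magnitude taken from the captions above)
Lambda0 = 197.3e-6   # initial grating period Lambda_0 [m]
dLambda = -1.0e-6    # total period change Delta Lambda [m] (197.3 -> 196.3 um)
L = 9.4e-3           # grating length [m]
N = 48               # assumed number of grating sections

c = dLambda / L                      # Eq. (4): chirp rate
z = np.linspace(0.0, L, N)
Lambda_z = Lambda0 + c * z           # Eq. (3): linearly chirped period

F = 0.5 * (1.0 + np.cos(np.pi * z / L))   # Eq. (7): raised-cosine weight
f = F / F.sum() * L                       # Eq. (6): apodized section lengths

print(f"chirp rate |c| = {abs(c):.2e}")                  # ~1.06e-4, as in Fig. 4(c)
print(f"sum of section lengths = {f.sum()*1e3:.2f} mm")  # equals L by construction
```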
CommonCrawl
For the gaseous reaction of xenon and fluorine to form xenon hexafluoride: (a) Calculate $\Delta S^{\circ}$ at 298 $\mathrm{K}$ ($\Delta H^{\circ}=-402\ \mathrm{kJ/mol}$ and $\Delta G^{\circ}=-280.\ \mathrm{kJ/mol}$). (b) Assuming that $\Delta S^{\circ}$ and $\Delta H^{\circ}$ change little with temperature, calculate $\Delta G^{\circ}$ at $500.\ \mathrm{K}$.
For the gaseous reaction of carbon monoxide and chlorine to form phosgene $(\mathrm{COCl}_{2})$: (a) Calculate $\Delta S^{\circ}$ at 298 $\mathrm{K}$ ($\Delta H^{\circ}=-220.\ \mathrm{kJ/mol}$ and $\Delta G^{\circ}=-206\ \mathrm{kJ/mol}$).
One reaction used to produce small quantities of pure $\mathrm{H}_{2}$ is $$\mathrm{CH}_{3}\mathrm{OH}(g) \rightleftharpoons \mathrm{CO}(g)+2\mathrm{H}_{2}(g)$$ (a) Determine $\Delta H^{\circ}$ and $\Delta S^{\circ}$ for the reaction at 298 $\mathrm{K}$. (b) Assuming that these values are relatively independent of temperature, calculate $\Delta G^{\circ}$ at $28^{\circ}\mathrm{C}$, $128^{\circ}\mathrm{C}$, and $228^{\circ}\mathrm{C}$. (c) What is the significance of the different values of $\Delta G^{\circ}$? (d) At what temperature (in $\mathrm{K}$) does the reaction become spontaneous?
A reaction that occurs in the internal combustion engine is $$\mathrm{N}_{2}(g)+\mathrm{O}_{2}(g) \rightleftharpoons 2\mathrm{NO}(g)$$ (b) Assuming that these values are relatively independent of temperature, calculate $\Delta G^{\circ}$ at $100.^{\circ}\mathrm{C}$, $2560.^{\circ}\mathrm{C}$, and $3540.^{\circ}\mathrm{C}$.
As a fuel, $\mathrm{H}_{2}(g)$ produces only nonpolluting $\mathrm{H}_{2}\mathrm{O}(g)$ when it burns. Moreover, it combines with $\mathrm{O}_{2}(g)$ in a fuel cell (Chapter 21) to provide electrical energy. (a) Calculate $\Delta H^{\circ}$, $\Delta S^{\circ}$, and $\Delta G^{\circ}$ per mole of $\mathrm{H}_{2}$ at 298 $\mathrm{K}$. (b) Is the spontaneity of this reaction dependent on $T$? Explain. (c) At what temperature does the reaction become spontaneous?
Consider the combustion of butane gas: $$\mathrm{C}_{4}\mathrm{H}_{10}(g)+\frac{13}{2}\mathrm{O}_{2}(g) \rightarrow 4\mathrm{CO}_{2}(g)+5\mathrm{H}_{2}\mathrm{O}(g)$$ (a) Predict the signs of $\Delta S^{\circ}$ and $\Delta H^{\circ}$. Explain. (b) Calculate $\Delta G^{\circ}$ by two different methods.
Predict the sign of $\Delta S^{\circ}$ and then calculate $\Delta S^{\circ}$ for each of the following reactions. a. $\mathrm{H}_{2}(g)+\frac{1}{2}\mathrm{O}_{2}(g) \longrightarrow \mathrm{H}_{2}\mathrm{O}(l)$ b. $2\mathrm{CH}_{3}\mathrm{OH}(g)+3\mathrm{O}_{2}(g) \longrightarrow 2\mathrm{CO}_{2}(g)+4\mathrm{H}_{2}\mathrm{O}(g)$ c. $\mathrm{HCl}(g) \longrightarrow \mathrm{H}^{+}(aq)+\mathrm{Cl}^{-}(aq)$
Problem 110 We can use Hess's law to calculate enthalpy changes that cannot be measured.
One such reaction is the conversion of methane to ethylene: $$2\mathrm{CH}_{4}(g) \longrightarrow \mathrm{C}_{2}\mathrm{H}_{4}(g)+2\mathrm{H}_{2}(g)$$ Calculate the $\Delta H^{\circ}$ for this reaction using the following thermochemical data: $$\begin{array}{ll}{\mathrm{CH}_{4}(g)+2\mathrm{O}_{2}(g) \longrightarrow \mathrm{CO}_{2}(g)+2\mathrm{H}_{2}\mathrm{O}(l)} & {\Delta H^{\circ}=-890.3\ \mathrm{kJ}} \\ {\mathrm{C}_{2}\mathrm{H}_{4}(g)+\mathrm{H}_{2}(g) \longrightarrow \mathrm{C}_{2}\mathrm{H}_{6}(g)} & {\Delta H^{\circ}=-136.3\ \mathrm{kJ}} \\ {2\mathrm{H}_{2}(g)+\mathrm{O}_{2}(g) \longrightarrow 2\mathrm{H}_{2}\mathrm{O}(l)} & {\Delta H^{\circ}=-571.6\ \mathrm{kJ}} \\ {2\mathrm{C}_{2}\mathrm{H}_{6}(g)+7\mathrm{O}_{2}(g) \longrightarrow 4\mathrm{CO}_{2}(g)+6\mathrm{H}_{2}\mathrm{O}(l)} & {\Delta H^{\circ}=-3120.8\ \mathrm{kJ}}\end{array}$$
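A possible way to combine the four thermochemical equations above by Hess's law (a sketch, assuming the balanced target reaction $2\,\mathrm{CH}_{4}(g) \longrightarrow \mathrm{C}_{2}\mathrm{H}_{4}(g)+2\,\mathrm{H}_{2}(g)$): take twice the first equation, then subtract the second and one half of each of the third and fourth, so that $\mathrm{O}_{2}$, $\mathrm{CO}_{2}$, $\mathrm{H}_{2}\mathrm{O}$, and $\mathrm{C}_{2}\mathrm{H}_{6}$ all cancel. This gives
$$\Delta H^{\circ}=2(-890.3)-(-136.3)-\tfrac{1}{2}(-571.6)-\tfrac{1}{2}(-3120.8)\ \mathrm{kJ}\approx +201.9\ \mathrm{kJ}.$$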
CommonCrawl
Convergence of second-order in time numerical discretizations for the evolution Navier-Stokes equations Luigi C. Berselli ORCID: orcid.org/0000-0001-6208-99341 & Stefano Spirito2 Advances in Continuous and Discrete Models volume 2022, Article number: 65 (2022) Cite this article We prove the convergence of certain second-order numerical methods to weak solutions of the Navier–Stokes equations satisfying, in addition, the local energy inequality, and therefore suitable in the sense of Scheffer and Caffarelli–Kohn–Nirenberg. More precisely, we treat the space-periodic case in three space dimensions and consider a full discretization in which the classical Crank–Nicolson method (θ-method with \(\theta =1/2\)) is used to discretize the time variable. In contrast, in the space variables, we consider finite elements. The convective term is discretized in several implicit, semi-implicit, and explicit ways. In particular, we focus on proving (possibly conditional) convergence of the discrete solutions toward weak solutions (satisfying a precise local energy balance) without extra regularity assumptions on the limit problem. We do not prove orders of convergence, but our analysis identifies some numerical schemes, providing alternate proofs of the existence of "physically relevant" solutions in three space dimensions. We consider the homogeneous incompressible 3D Navier–Stokes equations (NSE) $$ \begin{aligned} &\partial _{t} u-\nu \Delta u+(u\cdot \nabla ) u+\nabla p=0\quad \text{in } (0,T)\times \mathbb{T}^{3}, \\ &\operatorname{div}u=0\quad \text{in } (0,T)\times \mathbb{T}^{3}, \end{aligned} $$ in the space periodic setting, with divergence-free initial datum $$ u|_{t=0}=u_{0}\quad \text{in } \mathbb{T}^{3}, $$ where \(T>0\) is arbitrary, \(\nu >0\) is given, and \(\mathbb{T}^{3}:=(\mathbb{R}/ 2\pi \mathbb{Z})^{3}\) is the three-dimensional flat torus. Here, the unknowns are the velocity vector field u and the scalar pressure p with zero mean values. The aim of this paper is to consider families of space-time discretization of the initial value problem (1.1)–(1.2), which are of the second order in time, and (as the parameters of the discretization vanish) to prove the convergence toward the Leray–Hopf weak solutions, satisfying, in addition, certain estimates on the pressure and the local energy inequality $$ \partial _{t} \biggl(\frac{ \vert u \vert ^{2}}{2} \biggr)+\operatorname{div} \biggl( \biggl(\frac{ \vert u \vert ^{2}}{2} +p \biggr)u \biggr)- \nu \Delta \biggl( \frac{ \vert u \vert ^{2}}{2} \biggr)+\nu \vert \nabla u \vert ^{2}\leq 0\quad \text{in }\mathcal{D}'\bigl(]0,T[\times \mathbb{T}^{3} \bigr). $$ Leray–Hopf weak solutions satisfying certain extra properties on pressure and the local energy inequality are known in the literature as suitable weak solutions, and they are of fundamental importance from the theoretical point of view since they are those for which partial regularity results hold true, see Scheffer [30] and Caffarelli-Kohn-Nirenberg [11]. Due to possible non-uniqueness of solutions in the 3D case, see, in particular, the recent result in [1] for the case with external forces, it is not ensured that all schemes produce weak solutions with the correct global and local balance. Moreover, from the applied point of view, the local energy inequality is a sort of entropy condition. Even if it is not enough to prove uniqueness, it seems a natural request to select physically relevant solutions, especially for turbulent or convection-dominated problems. 
For this reason, it is natural to ask, in view of obtaining accurate simulations of turbulent flows, that the above local energy inequality has to be satisfied by solutions constructed by numerical methods. The interplay between suitable weak solutions and numerical computations of turbulent flows has been emphasized starting from the work of Guermond et al. [18, 19] and a recent overview can be found in the monograph [4]. In this paper, we continue and extend some previous works in [3, 6–8], and especially [5] to analyze the difficulties arising when dealing with full space-time discretization with second-order schemes in the time variable. The aim of this paper is to extend the case \(\theta =1/2\), which corresponds to the Crank–Nicolson method and could not be treated directly with the same proofs as in [5]. In particular, the case \(\theta =1/2\) requires a coupling between the space and time mesh-size, which is nevertheless common to other second-order models. In fact, besides the Crank–Nicolson scheme (CN) studied in Sect. 5, we will also consider in the final Sect. 6 other schemes involving the Adams-Bashforth or the Linear-Extrapolation for the convective term. To set up the problem, as in [16], we consider two sequences of discrete approximation spaces \(\{X_{h}\}_{h}\subset H_{\#}^{1} \) and \(\{M_{h}\}_{h}\subset H_{\#}^{1}\), which satisfy – among other properties described in Sect. 3 – an appropriate commutator property, see Definition 3.1. Then, given a net \(t_{m}:=m \Delta t\), we consider the following implicit space-time discretization of the problem (1.1)–(1.2): Set \(u_{h}^{0}=\pi _{h}(u_{0})\), where \(\pi _{h}\) is the projection over \(X_{h}\). For any \(m=1,\ldots,N\) and given \(u_{h}^{m-1}\in X_{h}\) and \(p_{h}^{m-1}\in M_{h}\), find \(u_{h}^{m}\in X_{h}\) and \(p^{m}\in M_{h}\) such that $$ \begin{gathered} \bigl(d_{t} u^{m}_{h},v_{h} \bigr)+\nu \bigl(\nabla u_{h}^{m,1/2}, \nabla v_{h}\bigr)+b_{h} \bigl(u_{h}^{m,1/2}, u_{h}^{m,1/2},v_{h} \bigr)-\bigl( p_{h,}^{m}, \operatorname{div}v_{h} \bigr)=0, \\ \bigl( \operatorname{div}u_{h}^{m},q_{h} \bigr)=0, \end{gathered} $$ (CN) where \(d_{t}u^{m}:=\frac{u_{h}^{m}-u_{h}^{m-1}}{\Delta t}\) is the backward finite-difference approximation for the time-derivative in the interval \((t_{m-1},t_{m})\) of constant length Δt; \(u_{h}^{m,1/2}:=\frac{1}{2} ( u^{m}_{h}+u^{m-1}_{h} )\) is the average of values at consecutive time-steps; \(b_{h}(u_{h}^{m,1/2}, u_{h}^{m,1/2},v_{h})\) is a suitable discrete approximation of the non-linear term. Other notations, definitions, and properties regarding (CN) will be given in Sects. 2-3. We refer to Quarteroni and Valli [29] and Thomée [32] for general properties of θ-schemes (not only for \(\theta =1/2\)) for parabolic equations. Recall that for the fully implicit Crank–Nicolson scheme, Heywood and Rannacher [22] proved that it is almost unconditionally stable and convergent. For a two-step scheme with a semi-implicit treatment for the nonlinear term, He and Li [20] gave the convergence condition: \(\Delta t h^{-1/2}\leq C_{0}\). For the Crank–Nicolson/Adams–Bashforth scheme in which the nonlinear term is treated explicitly, Marion and Temam [28] provided the stability condition \(\Delta t h^{-2}\leq C_{0}\), and Tone [33] proved the convergence under the condition \(\Delta t h^{-2-3/2}\leq C_{0}\), and in all cases, \(C_{0}=C_{0}(\nu ,\Omega ,T,u_{0},f)\). The situation is different in two space dimensions, cf. 
He and Sun [21], where more regularity of the solution can be used, but these results are not applicable to genuine (turbulent) weak solutions in the three-dimensional case. We observe that the value \(\theta =1/2\) makes the scheme more accurate in the time variable but, on the other hand, introduces some "natural" or at least expected limitation on the mesh-sizes. Other schemes will also be considered, to adapt the results to different second-order schemes since the proof is rather flexible to handle several different discretizations of the NSE. As usual in time-discrete problem (see, for instance, [31]), to study the convergence to the solutions of the continuous problem, it is useful to consider \(v^{\Delta t}_{h}\), which is the linear interpolation of \(\{u^{m}_{h}\}_{m=1}^{N}\) (over the net \(t_{m}=m\Delta t\)), and \(u^{\Delta t}_{h}\) and \(p^{\Delta t}_{h}\), which are the time-step functions such that on the interval \([t_{m-1}, t_{m})\) are equal to \(u^{m,1/2}_{h}\) and \(p^{m}_{h}\), respectively, see (3.12). The main result of the paper is the following; we refer to Sect. 2 for further details on the notations. Let the finite element spaces \((X_{h},M_{h})\) satisfy the discrete commutator property and the technical conditions described in Sect. 3.1. Let \(u_{0}\in H^{1}_{\operatorname{div}}\) and fix \(\Delta t>0\) and \(h>0\) such that $$ \begin{aligned} &\frac{\Delta t \Vert u_{0} \Vert _{2}^{3}}{\nu h^{1/2}}=o(1), \end{aligned} $$ Let \(\{(v^{\Delta t}_{h}, u^{\Delta t}_{h}, p^{\Delta t}_{h})\}_{\Delta t,h}\) as in (3.12). Then, there exists $$ (u,p)\in L^{\infty}\bigl(0,T;L_{\operatorname{div}}^{2}\bigr)\cap L^{2}\bigl(0,T;H^{1}_{ \operatorname{div}}\bigr)\times L^{4/3}\bigl(0,T;L^{2}_{\#}\bigr), $$ such that, up to a sub-sequence, as \((\Delta t,h)\to (0,0)\), $$ \begin{aligned} &v^{\Delta t}_{h}\rightarrow u\quad \textit{strongly in }L^{2}\bigl((0,T)\times \mathbb{T}^{3}\bigr), \\ &u^{\Delta t}_{h}\rightarrow u\quad \textit{strongly in }L^{2} \bigl((0,T)\times \mathbb{T}^{3}\bigr), \\ &\nabla u^{\Delta t}_{h}\rightharpoonup \nabla u\quad \textit{weakly in }L^{2}\bigl((0,T) \times \mathbb{T}^{3}\bigr), \\ &p^{\Delta t}_{h}\rightharpoonup p\quad \textit{weakly in }L^{{4}/{3}}\bigl((0,T) \times \mathbb{T}^{3}\bigr). \end{aligned} $$ Moreover, the couple \((u,p)\) is a suitable weak solution of (1.1)–(1.2) in the sense of Definition 2.2. Theorem 1.1 also holds in the presence of an external force f satisfying suitable bounds. For example, \(f\in L^{2}(0,T;L^{2}(\mathbb{T}^{3}))\) is enough. The proof of Theorem 1.1 is given in Sect. 5, and it is based on a compactness argument as we previously developed in [5] and a precise analysis of the quantity \(u^{m+1}_{h}-u^{m}_{h}\) using the assumptions linking the time and the spatial mesh size. Plan of the paper. In Sect. 2, we fix the notation that will be used in the paper and recall the main definitions and tools applied. In Sect. 3, we introduce and give some details about the space-time discretization methods. Finally, in Sect. 4, we prove the main a priori estimates needed to study the convergence, and in Sect. 5, we prove Theorem 1.1. In the final Sect. 6, we adapt the proofs to a couple of different second-order schemes. Notations and preliminaries In this section, we fix the notation that will be used in the paper; we also recall the main definitions concerning weak solutions of incompressible NSE and a compactness result. We introduce the notations typical of space-periodic problems. 
We will use the customary Lebesgue spaces \(L^{p}(\mathbb{T}^{3})\) and Sobolev spaces \(W^{k,p}(\mathbb{T}^{3})\), and we will denote their norms by \(\|\cdot \|_{p}\) and \(\|\cdot \|_{W^{k,p}}\). We will not distinguish between scalar and vector valued functions since it will be clear from the context, which one has to be considered. In the case \(p=2\), the \(L^{2}(\mathbb{T}^{3})\) scalar product is denoted by \((\cdot ,\cdot )\), we use the notation \(H^{s}(\mathbb{T}^{3}):=W^{s,2}(\mathbb{T}^{3})\) and define, for \(s>0\), the dual spaces \(H^{-s}(\mathbb{T}^{3}):=(H^{s}(\mathbb{T}^{3}))'\). Moreover, we will consider always sub-spaces of functions with zero mean value, and these will be denoted by $$ L_{\#}^{p}:= \biggl\{ w\in L^{p}\bigl( \mathbb{T}^{3}\bigr): \int _{ \mathbb{T}^{3}}w \,dx = {0} \biggr\} ,\quad 1\leq p\leq +\infty , $$ and also by $$ H_{\#}^{s}:= H^{s}\bigl(\mathbb{T}^{3} \bigr)\cap L^{2}_{\#}. $$ As usual, we consider spaces of divergence-free vector fields defined as follows: $$ L_{\operatorname{div}}^{2}:= \bigl\{ w \in \bigl(L^{2}_{\#} \bigr)^{3}: \operatorname{div}w = 0 \bigr\} \quad \text{and, for $s>0$, } H_{ \operatorname{div}}^{s} := H^{s}_{\#}\cap L_{\operatorname{div}}^{2}. $$ Finally, given X a Banach space, \(L^{p}(0,T;X)\) denotes the classical Bochner spaces of X valued functions, endowed with its natural norm, denoted by \(\|\cdot \|_{L^{p}(X)}\). We denote by \(l^{p}(X)\) the discrete counterpart for X-valued sequences \(\{x^{m}\}\), defined on the net \(\{m\Delta t\}\), and with weighted norm defined by \(\|x\|_{l^{p}(X)}^{p}:=\Delta t\sum_{m=0}^{M}\|x^{m}\|_{X}^{p}\). Weak solutions and suitable weak solutions We start by recalling the notion of weak solution (as introduced by Leray and Hopf) and adapted to the space periodic setting. The vector field u is a Leray–Hopf weak solution of (1.1)–(1.2) if $$ u\in L^{\infty}\bigl(0,T;L_{\operatorname{div}}^{2}\bigr)\cap L^{2}\bigl(0,T;H^{1}_{ \operatorname{div}}\bigr), $$ and if u satisfies the NSE (1.1)–(1.2) in the weak sense, namely the integral equality $$ \int _{0}^{T} \bigl[ (u,\partial _{t}\phi )-\nu ( \nabla u,\nabla \phi )- \bigl((u\cdot \nabla ) u,\phi \bigr) \bigr] \,dt + \bigl(u_{0},\phi (0) \bigr)=0 $$ holds true for all smooth, periodic, and divergence-free functions \(\phi \in C_{c}^{\infty}([0,T);C^{\infty}(\mathbb{T}^{3}))\). Moreover, the initial datum is attained in the strong \(L^{2}\)-sense, that is $$ \lim_{t\to 0^{+}} \bigl\Vert u(t)-u_{0} \bigr\Vert _{2}=0, $$ and the following global energy inequality holds $$ \frac{1}{2} \bigl\Vert u(t) \bigr\Vert _{2}^{2}+\nu \int _{0}^{t} \bigl\Vert \nabla u(s) \bigr\Vert _{2}^{2} \,ds\leq \frac{1}{2} \Vert u_{0} \Vert _{2}^{2},\quad \text{for all } t\in [0,T]. $$ Suitable weak solutions are a particular subclass of Leray–Hopf weak solutions and the definition is the following. A pair \((u,p)\) is a suitable weak solution to the Navier–Stokes equation (1.1) if u is a Leray–Hopf weak solution, \(p\in L^{{4}/{3}}(0,T;L^{2}_{\#})\), and the local energy inequality $$ \nu \int _{0}^{T} \int _{\mathbb{T}^{3}} \vert \nabla u \vert ^{2}\phi \,dx \,dt\leq \int _{0}^{T} \int _{\mathbb{T}^{3}} \biggl[\frac{ \vert u \vert ^{2}}{2} ( \partial _{t} \phi +\nu \Delta \phi ) + \biggl(\frac{ \vert u \vert ^{2}}{2}+p \biggr)u\cdot \nabla \phi \biggr] \,dx \,dt $$ holds for all \(\phi \in C^{\infty}_{0}(0,T;C^{\infty}(\mathbb{T}^{3}))\) such that \(\phi \geq 0\). 
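A minimal numpy sketch of the weighted discrete norm on $l^{p}(X)$ defined above, assuming the slice norms $\|x^{m}\|_{X}$ have already been computed for each time level (illustrative code, not part of the paper):

```python
import numpy as np

def discrete_lp_norm(slice_norms, dt, p):
    """Weighted norm (dt * sum_m ||x^m||_X^p)^(1/p); the max over m for p = inf."""
    slice_norms = np.asarray(slice_norms, dtype=float)
    if p == np.inf:
        return slice_norms.max()
    return (dt * np.sum(slice_norms ** p)) ** (1.0 / p)

# example: N = 100 time steps on [0, T] with T = 1 and constant slice norms
dt = 1.0 / 100
norms = np.ones(101)
print(discrete_lp_norm(norms, dt, 2))       # ~ sqrt(T * 1^2) for this data
print(discrete_lp_norm(norms, dt, np.inf))  # 1.0
```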
The definition of suitable weak solution is usually stated with \(p\in L^{{5}/{3}}((0,T)\times \mathbb{T}^{3})\) while in Definition 2.2\(p\in L^{{4}/{3}}(0,T;L^{2}(\mathbb{T}^{3}))\). This is not an issue since, of course, we have a bit less integrability in time, but we gain a full \(L^{2}\)-integrability in space. The main property of suitable weak solutions is the fact that they satisfy the local energy inequality (2.3), and weakening the requests on the pressure does not influence the validity of local regularity results; see, for instance, discussion in Vasseur [34]. A compactness lemma In this subsection, we recall the main compactness lemma, which allows us to prove the strong convergence of the approximations. We remark that it is a particular case of a more general lemma, whose statement and proof can be found in [27, Lemma 5.1]. Even if it is a tool used for compressible equations most often, it is useful here to recall the special version taken from [5]. Let \(\{f_{n}\}_{n\in \mathbb{N}}\) and \(\{g_{n}\}_{n\in \mathbb{N}}\) be uniformly bounded in \(L^{\infty}(0,T;L^{2}(\mathbb{T}^{3}))\), and let be given \(f,g\in L^{\infty}(0,T;L^{2}(\mathbb{T}^{3}))\) such that $$ \begin{aligned} &f_{n}\rightharpoonup f \quad \textit{weakly in }L^{2}\bigl((0,T)\times \mathbb{T}^{3}\bigr), \\ &g_{n}\rightharpoonup g\quad \textit{weakly in } L^{2}\bigl((0,T) \times \mathbb{T}^{3}\bigr). \end{aligned} $$ Let \(p\geq 1\) and assume that $$ \begin{aligned} &\{\partial _{t}f_{n} \}_{n}\subset L^{p}\bigl(0,T;H^{-1}\bigl( \mathbb{T}^{3}\bigr)\bigr), \qquad \{g_{n}\}_{n} \subset L^{2}\bigl(0,T;H^{1}\bigl(\mathbb{T}^{3} \bigr)\bigr), \end{aligned} $$ with uniform (with respect to \(n\in \mathbb{N}\)) bounds on the norms. Then, $$ f_{n} g_{n}\rightharpoonup f g\quad \textit{weakly in }L^{1}\bigl((0,T)\times \mathbb{T}^{3}\bigr). $$ Setting of the numerical approximation In this section, we introduce the space-time discretization of the initial value problem (1.1)–(1.2). We start by introducing the space discretization by finite elements. Space discretization For the space discretization, we strictly follow the setting considered in [16]. Let \(\mathcal{T}_{h}\) be a non-degenerate (shape regular) simplicial subdivision of \(\mathbb{T}^{3}\). Let \(\{X_{h}\}_{h>0}\subset H_{\#}^{1}\) be the discrete space for approximate velocity and \(\{M_{h}\}_{h>0}\subset L^{2}_{\#}\) be that of approximate pressure. To avoid further technicalities, we assume, as in [16], that \(M_{h}\subset H^{1}_{\#}\). We make the following (technical) assumptions on the spaces \(X_{h}\) and \(M_{h}\): For any \(v\in H^{1}_{\#}\) and for any \(q\in L^{2}_{\#}\), there exists \(\{v_{h}\}_{h}\) and \(\{q_{h}\}_{h}\) with \(v_{h}\in X_{h}\) and \(q_{h}\in M_{h}\) such that $$ \begin{aligned} &v_{h}\to v \quad \text{strongly in }H^{1}_{\#}\text{ as }h \to 0, \\ &q_{h}\to q \quad \text{strongly in }L^{2}_{\#} \text{ as }h \to 0; \end{aligned} $$ Let \(\pi _{h} : L^{2}(\mathbb{T}^{3})\to X_{h}\) be the \(L^{2}\)-projection onto \(X_{h}\). 
Then, there exists \(c>0\) independent of h such that $$ \forall q_{h} \in M_{h} \quad \bigl\Vert \pi _{h} (\nabla q_{h} ) \bigr\Vert _{2} \geq c \Vert q_{h} \Vert _{2}; $$ There is c independent of h such that for all \(v\in H_{\#}^{1}\) $$ \begin{aligned} & \bigl\Vert v-\pi _{h}(v) \bigr\Vert _{2}=\inf_{w_{h}\in X_{h}} \Vert v-w_{h} \Vert _{2}\leq c h \Vert v \Vert _{H^{1}}, \\ & \bigl\Vert \pi _{h}(v) \bigr\Vert _{H^{1}}\leq c \Vert v \Vert _{H^{1}}; \end{aligned} $$ There exists c independent of h (inverse inequality) such that $$ \forall v_{h}\in X_{h}\quad \Vert v_{h} \Vert _{H^{1}}\leq c h^{-1} \Vert v_{h} \Vert _{2}. $$ Moreover, we assume that \(X_{h}\) and \(M_{h}\) satisfy the following discrete commutator property. We say that \(X_{h}\) (resp. \(M_{h}\)) has the discrete commutator property if there exists an operator \(P_{h}\in \mathcal{L}(H^{1};X_{h})\) (resp. \(Q_{h}\in \mathcal{L}(L^{2};M_{h})\)) such that for all \(\phi \in W^{2,\infty}\) (resp. \(\phi \in W^{1,\infty}\)) and all \(v_{h}\in X_{h}\) (resp. \(q_{h}\in M_{h}\)) $$\begin{aligned} & \bigl\Vert v_{h}\phi - P_{h}(v_{h}\phi ) \bigr\Vert _{H^{l}}\leq c h^{1+m-l} \Vert v_{h} \Vert _{H^{m}} \Vert \phi \Vert _{W^{m+1,\infty}}, \end{aligned}$$ $$\begin{aligned} & \bigl\Vert q_{h}\phi - Q_{h}(q_{h}\phi ) \bigr\Vert _{2}\leq c h \Vert q_{h} \Vert _{2} \Vert \phi \Vert _{W^{1,\infty}} , \end{aligned}$$ for all \(0\leq l \leq m\leq 1\). Explicit and relevant examples of couples \((X_{h},M_{h})\) of finite element spaces satisfying the commutator property are those employed in the MINI and Hood–Taylor elements with quasi-uniform mesh, see [16]. In addition, similar approximation properties of finite element functions multiplied by a smooth function are known in the literature as superapproximation; see, e.g., Demlow, Guzmán, and Schatz [13]. We recall from [16] that the coercivity hypothesis (3.2) allows us to define the map \(\psi _{h} : H_{\#}^{2} \to M_{h}\) such that, for all \(q\in H_{\#}^{2}\), the function \(\psi _{h}(q)\) is the unique solution to the problem: $$ \bigl(\pi _{h}\bigl(\nabla \psi _{h}(q) \bigr),\nabla r_{h} \bigr)= ( \nabla q,\nabla r_{h} ). $$ This map has the following properties: there exists c, independent of h, such that for all \(q\in H_{\#}^{2}\), $$\begin{aligned} & \bigl\Vert \nabla \bigl(\psi _{h}(q)-q\bigr) \bigr\Vert _{2}\leq c h \Vert q \Vert _{H^{2}}, \\ & \bigl\Vert \pi _{h}\nabla \psi _{h}(q) \bigr\Vert _{H^{1}}\leq c \Vert q \Vert _{H^{2}}. \end{aligned}$$ Let us introduce the space of discretely divergence-free functions $$ V_{h}= \bigl\{ v_{h}\in X_{h} : ( \operatorname{div}v_{h},q_{h} )=0 \ \forall q_{h}\in M_{h} \bigr\} . $$ The most common variational formulation (for the continuous problems) of the convective term \(nl(u,v):=(u\cdot \nabla ) v\) is $$ b(u,v,w)= \int _{\mathbb{T}^{3}}(u\cdot \nabla ) v\cdot w \,dx, $$ and the fact that \(b(u,v,v)=0\) for \(u\in L^{2}_{\operatorname{div}}\), \(v\in H^{1}_{\#}\) allows us to deduce, at least formally, the energy inequality (2.2). This cancellation is based on the constraint \(\operatorname{div}u=0\), and this identity is not valid anymore in the case of discretely divergence-free functions in \(V_{h}\). To have the basic energy estimate, we need to modify the non-linear term since \(V_{h}\nsubseteq H^{1}_{\operatorname{div}}\) and the choice of the weak formulation becomes particularly relevant in the discrete case since it leads to schemes with very different numerical properties. 
To formulate the various schemes, we will consider which one corresponds to the Cases 1-2-3. We define the discrete tri-linear operator \(b_{h}(\cdot ,\cdot ,\cdot )\) in different (but standard) ways. This permits a sort of unified treatment: for instance, in all the three cases considered below, it holds at least that \(nl_{h}(u,v)\) – which is the discrete counterpart of \(nl(u,v)\) – satisfies the following estimate $$ \bigl\Vert nl_{h}(u,v) \bigr\Vert _{H^{-1}}\leq \Vert u \Vert _{3} \Vert v \Vert _{H^{1}}\quad \forall u,v \in H^{1}_{\#}. $$ We will now detail the various different discrete formulations we will use. Case 1: We use the most common option, which is a "symmetrized" operator $$ nl_{h}(u,v):=(u\cdot \nabla ) v+ \frac{1}{2}v \operatorname{div}u, $$ for the convective term, which leads to the tri-linear form $$ b_{h}(u,v,w):=\bigl\langle nl_{h}(u,v), w \bigr\rangle _{H^{-1}\times H_{\#}^{1}}, $$ $$ b_{h}(u,v,v)=0\quad \forall u, v\in H^{1}_{\operatorname{div}}+V_{h}. $$ Moreover, this tri-linear operator can also be estimated as follows $$ \begin{aligned} \bigl\vert b_{h}(u,v,w) \bigr\vert &\leq \Vert u \Vert _{6} \Vert \nabla v \Vert _{2} \Vert w \Vert _{3}+\frac{1}{2} \Vert v \Vert _{6} \Vert \operatorname{div}u \Vert _{2} \Vert w \Vert _{3} \\ &\leq C \Vert \nabla u \Vert _{2} \Vert \nabla v \Vert _{2} \Vert w \Vert _{2}^{1/2} \Vert \nabla w \Vert _{2}^{1/2} , \end{aligned} $$ by means of the Sobolev embedding \(H^{1}(\mathbb{T}^{3})\subset L^{6}(\mathbb{T}^{3})\) and of the convex interpolation inequality. Case 2: Alternatively, we can consider the "rotational form without pressure," as in Layton et al. [26], which corresponds to the formulation $$ nl_{h}(u,v):=(\nabla \times u)\times v,$$ and leads to the tri-linear form $$ b_{h}(u,v,w):=\bigl\langle nl_{h}(u,v), w\bigr\rangle _{H^{-1}\times H_{\#}^{1}}, $$ Moreover, this term can be estimated as follows $$ \begin{aligned} \bigl\vert b_{h}(u,v,w) \bigr\vert &\leq \Vert \nabla \times u \Vert _{2} \Vert v \Vert _{6} \Vert w \Vert _{3} \\ &\leq C \Vert \nabla u \Vert _{2} \Vert \nabla v \Vert _{2} \Vert w \Vert _{2}^{1/2} \Vert \nabla w \Vert _{2}^{1/2} \end{aligned} $$ again by means of the Sobolev embedding and of the convex interpolation inequality. In this case, one is hiding the Bernoulli pressure \(\frac{1}{2}|v|^{2}\) into the kinematic pressure. It is well documented that the scheme is easier to be handled but the under resolution of the pressure has some effects on the accuracy; see Horiuti [23], Zang [35], and the discussion in [26]. To overcome the numerical problems arising when using the operator from Case 2, other computationally more expensive methods are considered, as the one below. Case 3: We consider the rotational form with the approximation of the Bernoulli pressure, as was studied in Guermond [16]. $$ nl_{h}(u,v):=(\nabla \times u)\times v+ \frac{1}{2}\nabla \bigl( \mathcal{K}_{h}(v\cdot u) \bigr), $$ where \(\mathcal{K}_{h}\) is the \(L^{2}\to M_{h}\) projection operator, which is stable, linear and is defined as \((\mathcal{K}_{h}u,v_{h})=(u,v_{h})\), for all \(u\in L^{2}\) and \(v_{h}\in M_{h}\). In this way, the tri-linear term is such that The first estimate, which is proved also in [16], is the following one: $$ \bigl\vert b_{h}(v,v,w) \bigr\vert \leq c \Vert v \Vert _{H^{1}} \Vert v \Vert _{{3}} \Vert w \Vert _{H^{1}}. 
$$ Here, to gain a better estimate from the presence of the projection of the Bernoulli pressure, we use some improved properties of the \(L^{2}\)-projection operator \(\mathcal{K}_{h}\), which are valid in the case of quasi-uniform meshes. Indeed, for special meshes, one can also show the \(W^{1,p}\) stability. The improved stability for the \(L^{2}\)-projection has a long history; see Douglas, Jr., Dupont, and Wahlbin [15], Bramble and Xu [9], and Carstensen [12]. Moreover, in the recent work by Diening, Storn, and Tscherpel [14], it is also analyzed the stability for highly adaptive meshes and the newest vertex bisection (NVB). In three-space dimensions, their theory covers the range of exponents we consider, at least for Taylor–Hood elements. Here, we limit ourselves to consider quasi-uniform meshes, but combining the cited results with our methods more general meshes could be probably handled. A different approach in Hilbert fractional spaces is used in [17], but we do not know whether this applies to the estimates we use. Anyway, we could not find the detailed proof of the required stability, which can be obtained using the \(L^{p}\)-stability of the operator \(\mathcal{K}_{h}\), the inverse inequality, valid for the meshes we consider, and the \(W^{1,p}\)-stability and approximation of the Scott–Zhang projection operator \({\Pi}_{h}\) (see [10]) valid for \(f\in W^{1,p}(\Omega )\), for \(p\in [1,\infty [\). Just to sketch the argument, it is enough to use the following inequalities $$ \begin{aligned} \Vert \mathcal{K}_{h}f \Vert _{W^{1,p}}&\le c \bigl\Vert \mathcal{K}_{h}(f-\Pi _{h}f) \bigr\Vert _{W^{1,p}} + \Vert \Pi _{h} f \Vert _{W^{1,p}} \\ &\le c h^{-1} \bigl\Vert \mathcal{K}_{h}(f-\Pi _{h}f) \bigr\Vert _{p} + \Vert \Pi _{h} f \Vert _{W^{1,p}} \\ &\le c h^{-1} \Vert f-\Pi _{h} f \Vert _{p} + \Vert \Pi _{h}f \Vert _{W^{1,p}} \\ & \le c \Vert f \Vert _{W^{1,p}}. \end{aligned} $$ With this stability result, in Case 3, the nonlinear term can be estimated as follows $$ \begin{aligned} \bigl\vert b_{h}(u,v,w) \bigr\vert &\leq \Vert \nabla u \Vert _{2} \Vert v \Vert _{6} \Vert w \Vert _{3}+ \bigl\Vert \nabla \mathcal{K}_{h}(u\cdot v) \bigr\Vert _{3/2} \Vert w \Vert _{3} \\ &\leq \bigl(C \Vert \nabla u \Vert _{2} \Vert \nabla v \Vert _{2}+ \bigl\Vert \mathcal{K}_{h}(u \cdot v) \bigr\Vert _{W^{1,3/2}} \bigr) \Vert w \Vert _{3} \\ &\leq \bigl(C \Vert \nabla u \Vert _{2} \Vert \nabla v \Vert _{2}+ \Vert u \Vert _{3} \Vert v \Vert _{3}+ \Vert \nabla u \Vert _{2} \Vert v \Vert _{6}+ \Vert u \Vert _{6} \Vert \nabla v \Vert _{2} \bigr) \Vert w \Vert _{3} \\ & \leq C \Vert \nabla u \Vert _{2} \Vert \nabla v \Vert _{2} \Vert w \Vert _{2}^{1/2} \Vert \nabla w \Vert _{2}^{1/2}, \end{aligned} $$ by means of the Sobolev embedding \(H^{1}(\mathbb{T}^{3})\subset L^{6}(\mathbb{T}^{3})\) and of the convex interpolation inequality, exactly as in the first two cases. Time discretization We now pass to the description of the time discretization. For the time variable t, we define the mesh as follows: Given \(N\in \mathbb{N}\) the time-step \(0<\Delta t\leq T\) is defined as \(\Delta t:=T/N\). Accordingly, we define the corresponding net \(\{t_{m}\}_{m=1}^{N}\) by $$ t_{0}:=0,\qquad t_{m}:=m \Delta t,\quad m=1,\dots ,N. $$ We consider the Crank–Nicolson method (CN) (cf. [29, §5.6.2]). With a slight abuse of notation, we consider \(\Delta t=T/N\) and h, instead of \((N,h)\), as the indexes of the sequences for which we prove the convergence. 
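To make the discrete convective forms concrete, here is a small spectral-collocation sketch in plain numpy on a periodic grid, which mimics the torus setting; it evaluates $nl_h(u,u)$ for Cases 1 and 2 and checks numerically the cancellation property $b_h(u,u,u)=0$ used in the a priori estimates below. The grid size and test field are illustrative, the projected Bernoulli pressure of Case 3 is omitted, and this is not the finite element operator of the paper:

```python
import numpy as np

def grad(f, k):
    """Spectral partial derivatives of a scalar field on the periodic box."""
    fh = np.fft.fftn(f)
    return [np.real(np.fft.ifftn(1j * k[a] * fh)) for a in range(3)]

def case1(u, k):
    """Case 1: (u . grad) u + 0.5 * u * div(u) (symmetrized form)."""
    du = [grad(u[i], k) for i in range(3)]            # du[i][a] = d_a u_i
    divu = du[0][0] + du[1][1] + du[2][2]
    return [sum(u[a] * du[i][a] for a in range(3)) + 0.5 * u[i] * divu
            for i in range(3)]

def case2(u, k):
    """Case 2: (curl u) x u (rotational form, Bernoulli pressure omitted)."""
    du = [grad(u[i], k) for i in range(3)]
    w = [du[2][1] - du[1][2], du[0][2] - du[2][0], du[1][0] - du[0][1]]
    return [w[1] * u[2] - w[2] * u[1],
            w[2] * u[0] - w[0] * u[2],
            w[0] * u[1] - w[1] * u[0]]

# periodic grid on [0, 2*pi)^3 and a smooth divergence-free test velocity
n = 16
x = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
k1 = np.fft.fftfreq(n, d=1.0 / n)                     # integer wavenumbers
k = np.meshgrid(k1, k1, k1, indexing="ij")
u = [np.sin(Y), np.sin(Z), np.sin(X)]

for nl in (case1(u, k), case2(u, k)):
    # grid average of nl(u,u) . u, a proxy for b_h(u,u,u); both are ~ 0
    print(sum((nl[i] * u[i]).mean() for i in range(3)))
```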
Then, the convergence will be proved in the limit as \((\Delta t,h)\to (0,0)\). We stress that this does not affect the proofs since all the convergences are proved up to sub-sequences. Once (CN) is solved, we consider a continuous version useful to study the convergence. To this end, we associate to the triple \((u_{h}^{m,1/2},u_{h}^{m},p^{m}_{h})\) the functions $$ \bigl(v^{\Delta t}_{h},u^{\Delta t}_{h}, p^{\Delta t}_{h}\bigr):[0,T]\times \mathbb{T}^{3} \rightarrow \mathbb{R}^{3}\times \mathbb{R}^{3}\times \mathbb{R}, $$ defined as follows: $$ \begin{aligned} &v^{\Delta t}_{h}(t):=\textstyle\begin{cases} u_{h}^{m-1}+\frac{t-t_{m-1}}{\Delta t}(u_{h}^{m}-u_{h}^{m-1}) &\text{for } t\in [t_{m-1},t_{m}), \\ u_{h}^{N}& \text{for } t=t_{N}, \end{cases}\displaystyle \\ &u^{\Delta t}_{h}(t):=\textstyle\begin{cases} u_{h}^{m,1/2} & \text{for } t\in [t_{m-1},t_{m}), \\ u_{h}^{N,1/2}& \text{for } t=t_{N}, \end{cases}\displaystyle \\ &p^{\Delta t}_{h}(t):=\textstyle\begin{cases} p_{h}^{m} & \text{for } t\in [t_{m-1},t_{m}), \\ p_{h}^{N}& \text{for } t=t_{N}. \end{cases}\displaystyle \end{aligned} $$ Then, the discrete equations (CN) can be rephrased as the following time-continuous system: $$ \begin{aligned} &\bigl(\partial _{t} v^{\Delta t}_{h},w_{h} \bigr)+b_{h} \bigl(u^{ \Delta t}_{h},u^{\Delta t}_{h},w_{h} \bigr)+\nu \bigl(\nabla u^{ \Delta t}_{h},\nabla w_{h} \bigr)- \bigl(p^{\Delta t}_{h}, \operatorname{div}w_{h} \bigr)=0, \\ &\bigl(\operatorname{div}u^{\Delta t}_{h}, q_{h} \bigr)=0, \end{aligned} $$ for all \(w_{h}\in L^{s}(0,T;X_{h})\) (with \(s\geq 4\)) and for all \(q_{h}\in L^{2}(0,T;M_{h})\). We notice that the divergence-free condition comes from the fact that \(u^{m}_{h}\) is such that $$ \bigl(\operatorname{div}u^{m}_{h}, q_{h} \bigr)=0\quad \text{for }m=1,\ldots,N, \forall q_{h}\in M_{h}. $$ A priori estimates In this section, we prove the a priori estimates that we need to study the convergence of solutions of (3.13) to suitable weak solutions of (1.1)–(1.2). We start with the following discrete energy equality. Let \(N\in \mathbb{N}\) and \(m=1,..,N\). Then, for solutions to (CN), the following (global) discrete energy-type equality holds true: $$\begin{aligned} & \frac{1}{2}\bigl( \bigl\Vert u_{h}^{m} \bigr\Vert _{2}^{2}- \bigl\Vert u_{h}^{m-1} \bigr\Vert _{2}^{2}\bigr)+\nu \Delta t \bigl\Vert \nabla u_{h}^{m,\frac{1}{2}} \bigr\Vert _{2}^{2}=0. \end{aligned}$$ Moreover, if \(u_{0}\in H^{1}_{\operatorname{div}}\), there exists \(C>0\) depending on \(\|u_{0}\|_{H^{1}}\) such that $$ \sum_{m=1}^{N} \bigl\Vert u_{h}^{m}-u_{h}^{m-1} \bigr\Vert _{2}^{2}\leq C \biggl(\Delta t+ \frac{1}{h^{1/2}} \biggr). $$ We start by proving the (global) discrete energy equality. For any \(m=1,\ldots,N\) take \(w_{h}=\chi _{[t_{m-1},t_{m})}u_{h}^{m,1/2}\in L^{\infty}(0,T;X_{h})\) as test function in (3.13). Then, it follows $$ \biggl(\frac{u_{h}^{m}-u_{h}^{m-1}}{\Delta t},u_{h}^{m,1/2} \biggr)+ \nu \bigl\Vert \nabla u_{h}^{m,1/2} \bigr\Vert _{2}^{2}=0, $$ which holds true since \(u_{h}^{m,1/2}\in X_{h}\) and \(p^{m}_{h}\in M_{h}\), we have that $$ b_{h}\bigl(u_{h}^{m,1/2}, u_{h}^{m,1/2}, u_{h}^{m,1/2}\bigr)=0\quad \text{and} \quad \bigl(p_{h}^{m},\operatorname{div}u_{h}^{m,1/2} \bigr)=0. 
$$ The term involving the discretization of the time-derivative reads as follows: $$\begin{aligned} \bigl(u_{h}^{m}-u_{h}^{m-1},u_{h}^{m,1/2} \bigr)= \frac{1}{2}\bigl(u_{h}^{m}-u_{h}^{m-1},u_{h}^{m}+u_{h}^{m-1} \bigr) =\frac{1}{2}\bigl( \bigl\Vert u_{h}^{m} \bigr\Vert _{2}^{2}- \bigl\Vert u_{h}^{m-1} \bigr\Vert _{2}^{2}\bigr). \end{aligned}$$ Then, multiplying by \(\Delta t>0\), Eq. (4.1) holds true. In addition, summing over m, we also get $$ \frac{1}{2} \bigl\Vert u_{h}^{N} \bigr\Vert _{2}^{2} + \nu \Delta t \sum_{m=0}^{N} \bigl\Vert \nabla u_{h}^{m,1/2} \bigr\Vert _{2}^{2}= \frac{1}{2} \bigl\Vert u_{h}^{0} \bigr\Vert _{2}^{2}, $$ which proves the \(l^{\infty}(L^{2}_{\#})\cap l^{2}(H^{1}_{\#})\) uniform bound for the sequence \(\{u^{m}_{h}\}\). To prove (4.2) take \(w_{h}=\chi _{[t_{m-1},t_{m})}(u_{h}^{m}-u_{h}^{m-1})\in L^{\infty}(0,T;X_{h})\) in (3.13). Then, after multiplication by Δt, we get $$ \begin{aligned} &\bigl\Vert u_{h}^{m}-u_{h}^{m-1} \bigr\Vert _{2}^{2}+\frac{\nu \Delta t}{2} \bigl( \bigl\Vert \nabla u_{h}^{m} \bigr\Vert ^{2}- \bigl\Vert \nabla u_{h}^{m-1} \bigr\Vert ^{2}\bigr) \\ &\quad\leq \Delta t \bigl\vert b_{h}\bigl( u_{h}^{m,\frac{1}{2}},u_{h}^{m,\frac{1}{2}},u_{h}^{m}-u_{h}^{m-1} \bigr) \bigr\vert \\ &\quad\leq C \Delta t \bigl\Vert \nabla u_{h}^{m,\frac{1}{2}} \bigr\Vert _{2}^{2} \bigl\Vert u_{h}^{m}-u_{h}^{m-1} \bigr\Vert _{2}^{1/2} \bigl\Vert \nabla \bigl(u_{h}^{m}-u_{h}^{m-1}\bigr) \bigr\Vert _{2}^{1/2}, \end{aligned} $$ where in the last line, (3.8) has been used. Using the inverse inequality (3.3) and summing over \(m=1,\dots ,N\), we get $$\begin{aligned} &\frac{\nu \Delta t}{2} \bigl\Vert \nabla u_{h}^{N} \bigr\Vert ^{2}_{2}+\sum _{m=1}^{N} \bigl\Vert u_{h}^{m}-u_{h}^{m-1} \bigr\Vert _{2}^{2}\\ &\quad\leq \frac{\nu \Delta t}{2} \bigl\Vert \nabla u_{h}^{0} \bigr\Vert ^{2}_{2} +C \frac{\Delta t}{h^{1/2}}\sum_{m=1}^{N} \bigl\Vert \nabla u_{h}^{m,\frac{1}{2}} \bigr\Vert _{2}^{2} \bigl\Vert u_{h}^{m}-u_{h}^{m-1} \bigr\Vert _{2} \\ &\quad\leq \frac{\nu \Delta t}{2} \bigl\Vert \nabla u_{h}^{0} \bigr\Vert ^{2}_{2}+C \frac{\Delta t}{h^{1/2}}\sum _{m=1}^{N} \bigl\Vert \nabla u_{h}^{m,\frac{1}{2}} \bigr\Vert _{2}^{2}\bigl( \bigl\Vert u_{h}^{m} \bigr\Vert _{2}+ \bigl\Vert u_{h}^{m-1} \bigr\Vert _{2}\bigr) \\ &\quad\leq \frac{\nu \Delta t}{2} \bigl\Vert \nabla u_{h}^{0} \bigr\Vert ^{2}_{2}+2C \bigl\Vert u_{h}^{0} \bigr\Vert _{2} \frac{\Delta t}{h^{1/2}}\sum_{m=1}^{N} \bigl\Vert \nabla u_{h}^{m, \frac{1}{2}} \bigr\Vert _{2}^{2} \\ &\quad \leq \frac{\nu \Delta t}{2} \bigl\Vert \nabla u_{h}^{0} \bigr\Vert ^{2}_{2}+ \frac{2C \Vert u_{h}^{0} \Vert _{2}^{3}}{\nu h^{1/2}}, \end{aligned}$$ where we used the \(l^{\infty}(L^{2}_{\#})\cap l^{2}(H^{1}_{\#})\) bounds coming from the energy equality and then proving the thesis. □ At first glance the inequality (4.2) seems useless, being badly depending on h. Recall that the convergence to zero of \(\sum_{m=1}^{N} \|u_{h}^{m}-u_{h}^{m-1}\|_{2}^{2}\) is a required step to identify the limits of \(v^{\Delta t}_{h}\) and of \(u^{\Delta t}_{h}\). Nevertheless, the key step in the next section will be that of combining this inequality with a standard restriction on the ratio between time and space mesh-size, to enforce the equality of the two limiting functions. The following lemma concerns the regularity of the pressure. Following the argument in [16], we notice that we are essentially solving the standard discrete Poisson problem associated with the pressure. It is for this result that the space-periodic setting is needed. 
There exists a constant \(c>0\), independent of Δt and of h, such that $$ \begin{aligned} \bigl\Vert p_{h}^{m} \bigr\Vert _{2} &\leq c \bigl( \bigl\Vert u_{h}^{m,1/2} \bigr\Vert _{H^{1}}+ \bigl\Vert u_{h}^{m,1/2} \bigr\Vert _{3} \bigl\Vert u_{h}^{m,1/2} \bigr\Vert _{H^{1}} \bigr)\quad \textit{for }m=1,\dots ,N. \end{aligned} $$ The proof is exactly the same as in [5, Lemma 4.3]. □ We are now in position to prove the main a priori estimates on the approximate solutions of (3.13). Proposition 4.4 Let \(u_{0}\in L^{2}_{\operatorname{div}}\) and assume that (1.3) holds. Then, there exists a constant \(c>0\), independent of Δt and of h, such that \(\|v^{\Delta t}_{h}\|_{L^{\infty}(L^{2})}\leq c\), \(\|u^{\Delta t}_{h}\|_{L^{\infty}(L^{2})\cap L^{2}(H^{1})}\leq c\), \(\|p^{\Delta t}_{h}\|_{L^{4/3}(L^{2})}\leq c\), \(\|\partial _{t} v^{\Delta t}_{h}\|_{L^{4/3}(H^{-1})}\leq c\), Moreover, we also have the following estimate $$ \int _{0}^{T} \bigl\Vert u^{\Delta t}_{h}-v^{\Delta t}_{h} \bigr\Vert _{2}^{2} \,dt\leq \frac{\Delta t}{12}\sum _{m=1}^{N} \bigl\Vert u_{h}^{m}-u_{h}^{m-1} \bigr\Vert _{2}^{2}. $$ The bound in \(L^{\infty}(0,T;L^{2}_{\#})\cap L^{2}(0,T;H^{1}_{\#})\) for \(v^{\Delta t}_{h}\) follows from (3.12) and Lemma 4.1, as well as the bounds on \(u^{\Delta t}_{h}\) in b). The bound on the pressure \(p^{\Delta t}_{h}\) follows again from (3.12) and Lemma 4.3. Finally, the bound on the time derivative of \(v^{\Delta t}_{h}\) follows by (3.13) and a standard comparison argument. Concerning (4.3), using the definitions in (3.12), we get for \(t\in [t_{m-1}, t_{m})\) $$\begin{aligned} u^{\Delta t}_{h}-v^{\Delta t}_{h} &= \frac{1}{2} u_{h}^{m}+\biggl(1- \frac{1}{2} \biggr) u_{h}^{m-1}-u_{h}^{m-1} - \frac{t-t_{m-1}}{\Delta t}\bigl(u_{h}^{m}-u_{h}^{m-1} \bigr) \\ & = \biggl(\frac{1}{2}-\frac{t-t_{m-1}}{\Delta t} \biggr) \bigl(u_{h}^{m}-u_{h}^{m-1} \bigr). \end{aligned}$$ Then, evaluating the integrals, we have $$\begin{aligned} \int _{0}^{T} \bigl\Vert u^{\Delta t}_{h}-v^{\Delta t}_{h} \bigr\Vert _{2}^{2} \,dt &= \sum _{m=1}^{N} \bigl\Vert u_{h}^{m}-u_{h}^{m-1} \bigr\Vert _{2}^{2} \int _{t_{m-1}}^{t_{m}} \biggl(\frac{1}{2}- \frac{t-t_{m-1}}{\Delta t} \biggr)^{2} \,dt \\ & \leq \frac{\Delta t}{12}\sum_{m=1}^{N} \bigl\Vert u_{h}^{m}-u_{h}^{m-1} \bigr\Vert _{2}^{2}. \end{aligned}$$ Proof of the main theorem In this section, we prove Theorem 1.1. We split the proof into two main steps: The first concerns showing that discrete solutions converge to a Leray–Hopf weak solution, while the second proves the constructed solutions are, in fact, suitable. Proof of Theorem 1.1 We first prove the convergence of the numerical sequence to a Leray–Hopf weak solution, mainly proving the correct balance of the global energy (2.2); then, we prove that the weak solution constructed is suitable, namely that it satisfies the local energy inequality (2.3). Step 1: convergence toward a Leray–Hopf weak solution. We start by observing that from a simple density argument, the test functions considered in (2.1) can be chosen in the space \(L^{s}(0,T;H^{1}_{\operatorname{div}})\cap C^{1}(0,T;L^{2}_{ \operatorname{div}})\), with \(s\geq 4\). 
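(Before continuing with this density argument, note that the elementary integral behind the \(\Delta t/12\) factor in (4.3) can be checked directly; the short symbolic computation below, which uses sympy purely for convenience, is all that is involved.)

```python
import sympy as sp

t, dt = sp.symbols('t dt', positive=True)
# integral over one time slab [t_{m-1}, t_m], with t measured from t_{m-1}:
print(sp.integrate((sp.Rational(1, 2) - t / dt) ** 2, (t, 0, dt)))   # prints dt/12
```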
In particular, using (3.1) for any \(w\in L^{s}(0,T;H^{1}_{\operatorname{div}})\cap C^{1}(0,T;L^{2}_{ \operatorname{div}})\) such that \(w(T,x)=0\), we can find a sequence \(\{w_{h}\}_{h}\subset L^{s}(0,T;H^{1}_{\#})\cap C(0,T;L^{2}_{\#})\) such that $$ \begin{aligned} &w_{h}\to w\quad \text{strongly in }L^{s}\bigl(0,T;H^{1}_{\#}\bigr) \text{ as }h\to 0, \\ &w_{h}(0)\to w(0)\quad \text{strongly in }L^{2}_{\#} \text{ as }h \to 0, \\ &\partial _{t}w_{h}\rightharpoonup \partial _{t}w\quad \text{weakly in }L^{2}\bigl(0,T;L^{2}_{ \#} \bigr) \text{ as }h\to 0. \end{aligned} $$ Let \(\{(v^{\Delta t}_{h}, v^{\Delta t}_{h}, p^{\Delta t}_{h})\}_{(\Delta t,h)}\), defined as in (3.12), be a family of solutions of (3.13). By Proposition 4.4a), we have that $$\begin{aligned} & \bigl\{ v^{\Delta t}_{h} \bigr\} _{(\Delta t,h)}\subset L^{\infty}\bigl(0,T;L^{2}_{ \#}\bigr),\quad \text{with uniform bounds on the norms}. \end{aligned}$$ Then, by standard compactness arguments, there exists \(v\in L^{\infty}(0,T;L^{2}_{\#})\) such that (up to a sub-sequence) $$ \begin{aligned} &v^{\Delta t}_{h} \rightharpoonup v\quad \text{weakly in }L^{2}\bigl(0,T;L^{2}_{ \#} \bigr)\text{ as }(\Delta t,h)\to (0,0). \end{aligned} $$ Again using Proposition 4.4 b), there exists \(u\in L^{\infty}(0,T;L^{2}_{\#})\) such that (up to a sub-sequence) $$ \begin{aligned} &u^{\Delta t}_{h} \overset{*}{\rightharpoonup } u\quad \text{weakly* in }L^{ \infty} \bigl(0,T;L^{2}_{\#}\bigr) \text{ as }(\Delta t,h)\to (0,0), \\ &u^{\Delta t}_{h} \rightharpoonup u\quad \text{weakly in } L^{2}\bigl(0,T;H^{1}_{ \#}\bigr) \text{ as }( \Delta t,h)\to (0,0). \end{aligned} $$ Moreover, using (3.1), for any \(q\in L^{2}(0,T;L^{2}_{\#})\), we can find a sequence \(\{q_{h}\}_{h}\subset L^{2}(0,T;L^{2}_{\#})\) such that \(q_{h}\in L^{2}(0,T;M_{h})\) and $$ q_{h}\to q\quad \text{strongly in }L^{2}\bigl(0,T;L^{2}_{\#} \bigr) \text{ as }h\to 0. $$ Then, using (5.3) and (3.13), we have that $$ 0= \int _{0}^{T} \bigl(\operatorname{div}u^{\Delta t}_{h},q_{h} \bigr) \,dt\to \int _{0}^{T} (\operatorname{div}u,q ) \,dt\quad \text{as }(\Delta t,h)\to (0,0), $$ hence u is divergence-free, since it belongs to \(H^{1}_{\operatorname{div}}\). Let us consider (4.3), then $$ \int _{0}^{T} \bigl\Vert v^{\Delta t}_{h}-u^{\Delta t}_{h} \bigr\Vert _{2}^{2} \,dt\leq \frac{\Delta t}{12} \sum _{m=1}^{N} \bigl\Vert u_{h}^{m}-u_{h}^{m-1} \bigr\Vert _{2}^{2} \leq C \biggl((\Delta t)^{2}+ \frac{\Delta t}{h^{1/2}} \biggr), $$ where in the last inequality we used again Proposition 4.4 and the estimate (4.2). Then, the integral \(\int _{0}^{T}\|v^{\Delta t}_{h}-u^{\Delta t}_{h}\|_{2}^{2} \,dt\) vanishes as \(\Delta t\to 0\) if \(\Delta t=o(h^{1/2})\), that is if (1.3) is satisfied. Then, from (5.2) and (5.3), it easily follows that \(v=u\). The rest of the proof follows the same as in [5], so we just sketch out the proof, referring to [5] for complete details. By Lemma 2.4 and the fact that \(u=v\), we get that \(u^{\Delta t}_{h} v^{\Delta t}_{h}\rightharpoonup |u|^{2}\) weakly in \(L^{1}((0,T)\times \mathbb{T}^{3})\) as \((\Delta t,h)\to (0,0)\). In particular, using (5.4), we have that $$ \begin{aligned} &v^{\Delta t}_{h}, u^{\Delta t}_{h}\to u\quad \text{strongly in }L^{2} \bigl(0,T;L^{2}_{ \#}\bigr)\text{ as }(\Delta t,h)\to (0,0). 
\end{aligned} $$ Concerning the pressure term the uniform bound in Proposition 4.4 d) ensures the existence of \(p\in L^{{4}/{3}}(0,T;L^{2}_{\#})\) such that (up to a sub-sequence) $$ p^{\Delta t}_{h}\rightharpoonup p\quad \text{weakly in }L^{{4}/{3}}\bigl(0,T;L^{2}_{ \#}\bigr) \text{ as }( \Delta t,h)\to (0,0). $$ Then, using (5.1) and (5.2), we have that $$ \begin{aligned} \lim_{(\Delta t,h)\to (0,0)} \int _{0}^{T}\bigl(\partial _{t}v^{\Delta t}_{h},w_{h} \bigr) \,dt& =- \int _{0}^{T}(u,\partial _{t}w) \,dt- \bigl(u_{0}, w(0)\bigr), \end{aligned} $$ Next, using (5.3), (5.1), and (5.6), we also get $$ \begin{aligned} &\lim_{(\Delta t,h)\to (0,0)} \int _{0}^{T}\bigl(\nabla u^{\Delta t}_{h}, \nabla w_{h}\bigr) \,dt= \int _{0}^{T}(\nabla u, \nabla w) \,dt, \\ &\int _{0}^{T}\bigl(p^{\Delta t}_{h}, \operatorname{div}w_{h}\bigr) \,dt\to 0 \quad \text{as }(\Delta t,h)\to (0,0). \end{aligned} $$ Concerning the non-linear term, let \(s\geq 4\), with a standard compactness argument $$ nl_{h} \bigl(u^{\Delta t}_{h},u^{\Delta t}_{h} \bigr) \rightharpoonup u\cdot \nabla u, \quad \text{in } L^{s'} \bigl(0,T;H^{-1}\bigr)\text{ as }(\Delta t,h)\to (0,0). $$ Then, also from (5.1), it follows that $$ \int _{0}^{T} b_{h}\bigl(u^{\Delta t}_{h},u^{\Delta t}_{h},w_{h} \bigr) \,dt \to \int _{0}^{T} \bigl((u\cdot \nabla ) u,w \bigr) \,dt \quad \text{as }(\Delta t,h)\to (0,0). $$ Finally, the energy inequality follows by Lemma 4.1, using the lower semi-continuity of the \(L^{2}\)-norm with respect to the weak convergence, since the estimate (4.1) can be rewritten as $$ \frac{1}{2} \bigl\Vert v^{\Delta t}_{h}(T) \bigr\Vert ^{2}_{2}+\nu \int _{0}^{T} \bigl\Vert \nabla u^{ \Delta t}_{h}(t) \bigr\Vert ^{2}_{2} \,dt \leq \frac{1}{2} \Vert u_{0} \Vert ^{2}_{2}. $$ The treatment of the Case 2 and Case 3 can be done with minor changes, concerning the tri-linear term, just using the estimate already proved in the previous section. The other terms are unchanged, and the energy estimate remains the same. The results for Case 2 and Case 3 can be easily adapted also to cover the θ-scheme for \(\theta >1/2\), hence completing the results in [5], which were focusing only on the treatment of Case 1. Note that the tri-linear term based on the rotational formulation from Case 2 and Case 3 \((\nabla \times u^{\Delta t}_{h})\times u^{\Delta t}_{h}\) converges exactly as in the previous step, since $$ \bigl(\nabla \times u^{\Delta t}_{h}\bigr)\times u^{\Delta t}_{h} \rightharpoonup (\nabla \times u)\times u, \text{ in } L^{s'}\bigl(0,T;H^{-1}\bigr) \quad \text{as }(\Delta t,h)\to (0,0), $$ which implies (5.7), ending the proof in Case 2. In Case 3, the term, which needs some care, is the projected Bernoulli pressure. In this case, note that for \(\frac{1}{s^{*}}+\frac{1}{2}=\frac{1}{s'}\), $$ \begin{aligned} \int _{0}^{T} \bigl\Vert \mathcal{K}_{h} \bigl( \bigl\vert u^{\Delta t}_{h} \bigr\vert ^{2} \bigr)-\mathcal{K}_{h}\bigl( \vert u \vert ^{2}\bigr) \bigr\Vert ^{s'}_{2} \,dt &\leq \int _{0}^{T} \bigl\Vert \bigl\vert u^{\Delta t}_{h} \bigr\vert ^{2}- \vert u \vert ^{2} \bigr\Vert ^{s'}_{2} \,dt \\ & \leq \int _{0}^{T} \bigl\Vert \bigl\vert u^{\Delta t}_{h}-u \bigr\vert \bigl\vert u^{\Delta t}_{h}+u \bigr\vert \bigr\Vert ^{s'}_{2} \,dt \\ &\leq \bigl\Vert u^{\Delta t}_{h}-u \bigr\Vert _{L^{s^{*}}(L^{3})}\bigl( \bigl\Vert u^{\Delta t}_{h} \bigr\Vert _{L^{2}(L^{6})}+ \Vert u \Vert _{L^{2}(L^{6})}\bigr). 
\end{aligned} $$ This shows that $$ \mathcal{K}_{h}\bigl( \bigl\vert u^{\Delta t}_{h} \bigr\vert ^{2}\bigr)\to \mathcal{K}_{h}\bigl( \vert u \vert ^{2}\bigr) \quad \text{in }L^{s'}\bigl(0,T;L^{2} \bigl(\mathbb{T}^{3}\bigr)\bigr). $$ Moreover, since \(\mathcal{K}_{h}(|w|^{2})\to |w|^{2}\) in \(L^{2}(\mathbb{T}^{3})\) for a.e. \(t\in (0,T)\) and \(\|\mathcal{K}_{h}(|w|^{2})\|_{2}\leq \|w\|^{2}_{2}\), by Lebesgue dominated convergence, we have \(\mathcal{K}_{h}(|w|^{2})\to |w|^{2}\) in \(L^{s'}(0,T;L^{2}(\mathbb{T}^{3}))\), finally showing that $$ \int _{0}^{T}\bigl(\mathcal{K}_{h}\bigl( \bigl\vert u^{\Delta t}_{h} \bigr\vert ^{2}\bigr), \operatorname{div}v\bigr) \,dt\to \int _{0}^{T}\bigl( \vert u \vert ^{2},\operatorname{div}v\bigr) \,dt. $$ This proves, after integration by parts, that $$ \int _{0}^{T}b_{h}\bigl(u^{\Delta t}_{h}, u^{\Delta t}_{h},w\bigr) \,dt \to \int _{0}^{T}\bigl((u\cdot \nabla ) u,w\bigr) \,dt. $$ Note also that, since in all cases it holds that $$ \frac{1}{2} \bigl\Vert v^{\Delta t}_{h}(T) \bigr\Vert ^{2}_{2}+\nu \int _{0}^{T} \bigl\Vert \nabla u^{ \Delta t}_{h}(t) \bigr\Vert ^{2}_{2} \,dt \leq \frac{1}{2} \Vert u_{0} \Vert ^{2}_{2}, $$ a standard lower semi-continuity argument is enough to infer that the weak solution $$ u=v=\lim_{(h,\Delta t)\to (0,0)} u^{\Delta t}_{h}=\lim _{(h,\Delta t) \to (0,0)} v^{\Delta t}_{h} $$ also satisfies the global energy inequality (2.2). Step 2: proof of the local energy inequality. In order to conclude the proof of Theorem 1.1, we need to prove that the Leray–Hopf weak solution constructed in Step 1 is suitable. According to Definition 2.2, this requires only proving the local energy inequality. To this end, let us consider a smooth function \(\phi \geq 0\), periodic in the space variable and vanishing for \(t=0,T\); we use \(P_{h}(u^{\Delta t}_{h}\phi )\) as test function in the momentum equation in (3.13). The term involving the time-derivative is treated as in [5]: $$\begin{aligned} & \int _{0}^{T} \bigl(\partial _{t} v^{\Delta t}_{h},P_{h}\bigl(u^{\Delta t}_{h} \phi \bigr) \bigr) \,dt \\ &\quad = \int _{0}^{T} \bigl(\partial _{t} v^{\Delta t}_{h},u^{ \Delta t}_{h}\phi \bigr) \,dt + \int _{0}^{T} \bigl(\partial _{t} v^{ \Delta t}_{h},P_{h}\bigl(u^{\Delta t}_{h} \phi \bigr)-u^{\Delta t}_{h}\phi \bigr) \,dt=:I_{1}+I_{2}. \end{aligned}$$ Concerning the term \(I_{1}\), we have that $$ \begin{aligned} I_{1}&= \int _{0}^{T}\bigl(\partial _{t} v^{\Delta t}_{h},v^{\Delta t}_{h} \phi \bigr)\,dt+ \int _{0}^{T}\bigl(\partial _{t} v^{\Delta t}_{h},\bigl(u^{\Delta t}_{h}-v^{ \Delta t}_{h} \bigr) \phi \bigr) \,dt =:I_{11}+I_{12}. \end{aligned} $$ Let us first consider \(I_{11}\).
By splitting the integral over \([0,T]\) as the sum of integrals over \([t_{m-1},t_{m}]\) and by integration by parts, we get $$ \begin{aligned} & \int _{0}^{T}\bigl(\partial _{t} v^{\Delta t}_{h},v^{\Delta t}_{h}\phi \bigr) \,dt\\ &\quad= \sum_{m=1}^{N} \int _{t_{m-1}}^{t_{m}}\bigl(\partial _{t} v^{\Delta t}_{h},v^{ \Delta t}_{h}\phi \bigr) \,dt= \sum_{m=1}^{N} \int _{t_{m-1}}^{t_{m}}\biggl( \frac{1}{2}\partial _{t} \bigl\vert v^{\Delta t}_{h} \bigr\vert ^{2},\phi \biggr) \,dt \\ &\quad=\frac{1}{2}\sum_{m=1}^{N}\bigl( \bigl\vert u_{h}^{m} \bigr\vert ^{2},\phi (t_{m},x)\bigr)- \bigl( \bigl\vert u_{h}^{m-1} \bigr\vert ^{2}, \phi (t_{m-1},x)\bigr)-\sum _{m=1}^{N} \int _{t_{m-1}}^{t_{m}}\biggl(\frac{1}{2} \bigl\vert v^{ \Delta t}_{h} \bigr\vert ^{2},\partial _{t}\phi \biggr) \,dt, \end{aligned} $$ where we used that \(\partial _{t} v^{\Delta t}_{h}(t)= \frac{u_{h}^{m}-u_{h}^{m-1}}{\Delta t}\), for \(t\in [t_{m-1},t_{m}[\). Next, since the sum telescopes and ϕ is with compact support in \((0,T)\), we get $$ \int _{0}^{T}\bigl(\partial _{t} v^{\Delta t}_{h},v^{\Delta t}_{h}\phi \bigr) \,dt =- \int _{0}^{T} \biggl(\frac{1}{2} \bigl\vert v^{\Delta t}_{h} \bigr\vert ^{2},\partial _{t} \phi \biggr) \,dt. $$ By the strong convergence of \(v^{\Delta t}_{h}\rightarrow u\) in \(L^{2}(0,T;L^{2}_{\#})\), we can conclude that $$ \lim_{(\Delta t,h)\to (0,0)} \int _{0}^{T}\bigl(\partial _{t} v^{\Delta t}_{h},v^{ \Delta t}_{h}\phi \bigr) \,dt= - \int _{0}^{T} \biggl(\frac{1}{2} \vert u \vert ^{2}, \partial _{t}\phi \biggr) \,dt. $$ Then, we consider the term \(I_{12}\). Since \(u^{\Delta t}_{h}\) is constant on the interval \([t_{m-1}, t_{m}[\), we can write $$ \begin{aligned} \int _{0}^{T}\bigl(\partial _{t} v^{\Delta t}_{h},\bigl(u^{\Delta t}_{h}-v^{ \Delta t}_{h} \bigr) \phi \bigr) \,dt&= -\sum_{m=1}^{N} \int _{t_{m-1}}^{t_{m}}\bigl( \partial _{t} \bigl(v^{\Delta t}_{h}-u^{\Delta t}_{h}\bigr), \bigl(v^{\Delta t}_{h}-u^{ \Delta t}_{h}\bigr) \phi \bigr) \,dt \\ &=\sum_{m=1}^{N} \int _{t_{m-1}}^{t_{m}} \biggl( \frac{ \vert v^{\Delta t}_{h}-u^{\Delta t}_{h} \vert ^{2}}{2}, \partial _{t}\phi \biggr) \,dt, \end{aligned} $$ since the sum telescopes. Hence, we have that \(u^{\Delta t}_{h}-v^{\Delta t}_{h}\) vanishes (strongly) in \(L^{2}(0,T;L^{2}_{\#})\), provided that \(\Delta t=o(h^{1/2})\). Then, \(I_{12}\rightarrow 0\) as \((\Delta t,h)\rightarrow (0,0)\). It is at this point that the coupling between h and Δt plays a role. For the convergence of the other terms, the discrete commutator property is needed. We are skipping some details from the other proofs because they are very close to that in the cited references. We have that the \(I_{2}\to 0\) as \((\Delta t,h)\rightarrow (0,0)\). Indeed, by the discrete commutator property (3.4), Proposition 4.4, and the inverse inequality (3.3), we can infer $$\begin{aligned} \vert I_{2} \vert &\leq \int _{0}^{T} \bigl\Vert \partial _{t} v^{\Delta t}_{h} \bigr\Vert _{H^{-1}} \bigl\Vert P_{h}\bigl(u^{ \Delta t}_{h}\phi \bigr)-u^{\Delta t}_{h} \phi \bigr\Vert _{H^{1}}\,dt \\ & \leq ch^{{1}/{2}} \bigl\Vert \partial _{t} v^{\Delta t}_{h} \bigr\Vert _{L^{ {4}/{3}}(H^{-1})} \bigl\Vert u^{\Delta t}_{h} \bigr\Vert ^{{1}/{2}}_{L^{\infty}(L^{2})} \bigl\Vert u^{\Delta t}_{h} \bigr\Vert ^{{1}/{2}}_{L^{2}(H^{1})} \leq c h^{ {1}/{2}}. \end{aligned}$$ Hence, this term also vanishes as \(h\to 0\), ending the analysis of the term involving the time-derivative. 
Concerning the viscous term, we write $$ \begin{aligned} \bigl(\nabla u^{\Delta t}_{h},\nabla P_{h}\bigl(u^{\Delta t}_{h} \phi \bigr)\bigr) & =\bigl( \bigl\vert \nabla u^{\Delta t}_{h} \bigr\vert ^{2}, \phi \bigr)-\biggl(\frac{1}{2} \bigl\vert u^{\Delta t}_{h} \bigr\vert ^{2}, \Delta \phi \biggr)+R_{visc}, \end{aligned} $$ with the "viscous remainder" \(R_{visc}:= (\nabla u^{\Delta t}_{h},\nabla [P_{h}(u^{\Delta t}_{h} \phi )-u^{\Delta t}_{h}\phi ] )\). Since \(u^{\Delta t}_{h}\) converges to u weakly in \(L^{2}(0,T;H_{\#}^{1})\) and strongly in \(L^{2}(0,T;L^{2}_{\#})\), $$ \begin{aligned} &\liminf_{(\Delta t,h)\to (0,0)} \int _{0}^{T} \bigl( \bigl\vert \nabla u^{\Delta t}_{h} \bigr\vert ^{2}, \phi \bigr) \,dt\geq \int _{0}^{T} \bigl( \vert \nabla u \vert ^{2},\phi \bigr) \,dt, \\ & \frac{1}{2} \int _{0}^{T} \bigl( \bigl\vert u^{\Delta t}_{h} \bigr\vert ^{2},\Delta \phi \bigr) \,dt \to \frac{1}{2} \int _{0}^{T} \bigl( \vert u \vert ^{2},\Delta \phi \bigr) \,dt. \end{aligned} $$ For the remainder \(R_{visc}\), using again the discrete commutator property from Definition 3.1, we have that $$ \biggl\vert \int _{0}^{T} R_{visc} \,dt \biggr\vert \leq c h \int _{0}^{T} \bigl\Vert \nabla u^{\Delta t}_{h} \bigr\Vert ^{2}_{2} \,dt\to 0 \quad \text{as }( \Delta t,h)\to (0,0). $$ We consider now the nonlinear term \(b_{h}\). We have $$ \begin{aligned} b_{h}\bigl(u^{\Delta t}_{h},u^{\Delta t}_{h},P_{h} \bigl(u^{\Delta t}_{h}\phi \bigr)\bigr) & =b_{h} \bigl(u^{\Delta t}_{h},u^{\Delta t}_{h},u^{\Delta t}_{h} \phi \bigr)+R_{nl}. \end{aligned} $$ The "nonlinear remainder" \(R_{nl}:=b_{h}(u^{\Delta t}_{h},u^{\Delta t}_{h},P_{h}(u^{\Delta t}_{h} \phi )-u^{\Delta t}_{h}\phi )\) can be estimated using the discrete commutator property, (3.3), and (3.8), (3.10), (3.11) for the choices of the nonlinear term approximation in Case 1, Case 2, and Case 3, respectively. Indeed, we have $$ \begin{aligned} \vert R_{nl} \vert & \leq \bigl\Vert nl_{h}\bigl(u^{\Delta t}_{h},u^{\Delta t}_{h} \bigr) \bigr\Vert _{H^{-1}} \bigl\Vert P_{h} \bigl(u^{\Delta t}_{h}\phi \bigr)-u^{\Delta t}_{h} \phi \bigr\Vert _{H^{1}} \leq c h^{1/2} \bigl\Vert u^{\Delta t}_{h} \bigr\Vert _{2} \bigl\Vert u^{\Delta t}_{h} \bigr\Vert _{H^{1}}^{2}, \end{aligned} $$ hence, by integrating over time $$ \int _{0}^{T}R_{nl} \,dt\to 0 \quad \text{as }(\Delta t,h)\to (0,0). $$ The last term we consider is that involving the pressure. By integrating by parts, we have $$ \begin{aligned} \bigl(p^{\Delta t}_{h}, \operatorname{div}P_{h}\bigl(u^{\Delta t}_{h}\phi \bigr) \bigr) =\bigl(p_{h}^{ \Delta t} u^{\Delta t}_{h}, \nabla \phi \bigr)+R_{p1}+R_{p2}. \end{aligned} $$ where the two "pressure remainders" are defined as follows $$ R_{p1}:= \bigl(p^{\Delta t}_{h},\operatorname{div} \bigl(P_{h}\bigl(u^{\Delta t}_{h} \phi \bigr)-u^{\Delta t}_{h}\phi \bigr) \bigr)\quad \text{and}\quad R_{p2}:= \bigl(\phi p^{\Delta t}_{h}, \operatorname{div}u^{\Delta t}_{h} \bigr). $$ Using again the discrete commutator property (3.5) and (3.3), we easily get $$ \begin{aligned} \vert R_{p1} \vert &\leq c h \bigl\Vert p^{\Delta t}_{h} \bigr\Vert _{2} \bigl\Vert u^{\Delta t}_{h} \bigr\Vert _{H^{1}} \end{aligned} $$ which implies $$ \int _{0}^{T} R_{p1} \,dt\to 0\quad \text{as }(\Delta t,h)\to (0,0). 
$$ The term \(R_{p2}\) can be treated in the same way, but now using the discrete commutator property for the projector over \(Q_{h}\): $$ \begin{aligned} \vert R_{p2} \vert &\leq c \bigl\Vert Q_{h}\bigl(p^{\Delta t}_{h}\phi \bigr)-p^{\Delta t}_{h}\phi \bigr\Vert _{2} \bigl\Vert u^{\Delta t}_{h}\phi \bigr\Vert _{H^{1}} \leq c h^{{1}/{2}} \bigl\Vert p^{ \Delta t}_{h} \bigr\Vert _{L^{{4}/{3}}(L^{2})} \bigl\Vert u^{\Delta t}_{h} \bigr\Vert _{L^{2}(H^{1})}^{ {1}/{2}} \bigl\Vert u^{\Delta t}_{h} \bigr\Vert _{L^{\infty}(L^{2})}^{{1}/{2}}, \end{aligned} $$ and finally this implies that \(\int _{0}^{T} R_{p2} \,dt\to 0\) as \((\Delta t,h)\to (0,0)\). The convergence $$ \begin{aligned} \int _{0}^{T}\bigl(p_{h}^{\Delta t} u^{\Delta t}_{h},\nabla \phi \bigr)\to \int _{0}^{T}(p u,\nabla \phi ) \end{aligned} $$ is an easy consequence of (5.5), (5.6) and Proposition 4.4 b). These steps are common to the three cases. We now treat the inertial term. In Case 1, the definition of \(nl_{h}\) in (3.6) allows us to handle the first term on the right-hand side in (5.9) with some integration by parts as follows: $$\begin{aligned} b_{h}\bigl(u^{\Delta t}_{h},u^{\Delta t}_{h},u^{\Delta t}_{h} \phi \bigr) & = - \biggl(u^{\Delta t}_{h} \frac{1}{2} \bigl\vert u^{\Delta t}_{h} \bigr\vert ^{2},\nabla \phi \biggr). \end{aligned}$$ By arguing as in [5], it can be proved that $$ u^{\Delta t}_{h} \frac{1}{2} \bigl\vert u^{\Delta t}_{h} \bigr\vert ^{2}\to u \frac{1}{2} \vert u \vert ^{2} \quad \text{strongly in } L^{1}\bigl(0,T;L^{1}\bigr), \text{ as }( \Delta t,h)\to (0,0), $$ and one shows that $$ \int _{0}^{T} b_{h}\bigl(u^{\Delta t}_{h},u^{\Delta t}_{h},u^{\Delta t}_{h} \phi \bigr) \,dt \to - \int _{0}^{T} \biggl(u \frac{1}{2} \vert u \vert ^{2},\nabla \phi \biggr) \,dt\quad \text{as } (\Delta t,h)\to (0,0). $$ In Case 2, the result is much simpler since, by direct computations, one shows that for smooth enough \(w\) we have (by a point-wise equality, where \(\epsilon _{ijk}\) is the totally anti-symmetric tensor) $$ \begin{aligned} \bigl[ ( \nabla \times w)\times w \bigr]\cdot (\phi w)&=\sum_{i,j,k,l,m}\epsilon _{jki}\epsilon _{jlm} \partial _{l}w_{m}w_{k}w_{i} \phi \\ &=\phi \sum_{i,k}\bigl(w_{k}\partial _{k}w_{i}w_{i}-w_{i}\partial _{i}w_{k}w_{k}\bigr)=0. \end{aligned} $$ Hence, we get $$ b_{h}\bigl(u^{\Delta t}_{h},u^{\Delta t}_{h},u^{\Delta t}_{h} \phi \bigr)=0, $$ and there are no terms to be estimated. In Case 3, we get instead (cf. [16, Lemma 4.1]) $$ \begin{aligned} b_{h}\bigl(u^{\Delta t}_{h},u^{\Delta t}_{h},u^{\Delta t}_{h} \phi \bigr)&=- \frac{1}{2} \bigl(\mathcal{K}_{h}\bigl( \bigl\vert u^{\Delta t}_{h} \bigr\vert ^{2}\bigr), \operatorname{div}\bigl(\phi u^{\Delta t}_{h}\bigr) \bigr) \\ &=-\frac{1}{2} \bigl(u^{\Delta t}_{h} \bigl\vert u^{\Delta t}_{h} \bigr\vert ^{2},\nabla \phi \bigr)+R_{1}+R_{2}, \end{aligned} $$ where $$ R_{1}:=-\frac{1}{2} \bigl(u^{\Delta t}_{h} \mathcal{K}_{h}\bigl( \bigl\vert u^{\Delta t}_{h} \bigr\vert ^{2}\bigr)-u^{ \Delta t}_{h} \bigl\vert u^{\Delta t}_{h} \bigr\vert ^{2},\nabla \phi \bigr) \quad \text{and}\quad R_{2}:=-\frac{1}{2} \bigl(\phi \mathcal{K}_{h}\bigl( \bigl\vert u^{ \Delta t}_{h} \bigr\vert ^{2}\bigr),\operatorname{div}u^{\Delta t}_{h} \bigr). $$ The strong \(L^{s'}(0,T;L^{2})\)-convergence of \(\mathcal{K}_{h}(|u^{\Delta t}_{h}|^{2})\) implies that \(\int _{0}^{T}|R_{1}|\,dt\to 0\).
While using the discrete commutator property for \(R_{2}\), we estimate $$ \begin{aligned} \vert R_{2} \vert &= \frac{1}{2} \bigl\vert \bigl(\phi \mathcal{K}_{h}\bigl( \bigl\vert u^{\Delta t}_{h} \bigr\vert ^{2}\bigr)-Q_{h}\bigl( \phi \mathcal{K}_{h}\bigl( \bigl\vert u^{\Delta t}_{h} \bigr\vert ^{2}\bigr)\bigr) ,\operatorname{div}u^{ \Delta t}_{h} \bigr) \bigr\vert \\ &\leq c h \bigl\Vert \mathcal{K}_{h}\bigl( \bigl\vert u^{\Delta t}_{h} \bigr\vert ^{2}\bigr) \bigr\Vert _{2} \bigl\Vert u^{ \Delta t}_{h} \bigr\Vert _{H^{1}}\leq c h \bigl\Vert u^{\Delta t}_{h} \bigr\Vert _{4}^{2} \bigl\Vert u^{ \Delta t}_{h} \bigr\Vert _{H^{1}} \\ &\leq c h^{1/2} \bigl\Vert u^{\Delta t}_{h} \bigr\Vert _{2}^{2} \bigl\Vert u^{\Delta t}_{h} \bigr\Vert _{H^{1}}^{2}, \end{aligned} $$ which shows that \(\int _{0}^{T}|R_{2}|\,dt\to 0\). □ Extension to other second-order schemes The techniques developed in the previous sections are general enough to handle, with minor changes, some more general second-order schemes as well, for instance the Crank–Nicolson scheme with Linear Extrapolation (CNLE) and the Crank–Nicolson/Adams–Bashforth scheme (CNAB), as reported below. We have the following result. Let the same assumptions as in Theorem 1.1 be satisfied, and let assumption (1.3) be replaced with (6.2) for the (CNLE) algorithm and with (6.4) for the (CNAB) algorithm. Then, solutions of both schemes converge to a suitable weak solution of the NSE. The proofs are in the same spirit as in the previous section, after the corresponding estimates (independent of m) have been proved. For this reason, we simply include the changes related to the proofs of the other cases previously treated. Crank–Nicolson with Linear Extrapolation (CNLE). Another scheme that is similar to Crank–Nicolson (CN) in terms of theory, but performs better in terms of numerical properties, is the Crank–Nicolson scheme with Linear Extrapolation, as introduced by Baker [2] and studied by Ingram [24], especially in the context of non-homogeneous Dirichlet problems. In this case, the scheme is defined, for \(m\geq 2\), by $$ \begin{aligned} &\bigl(d_{t} u^{m}_{h},v_{h} \bigr)+\nu \bigl(\nabla u_{h}^{m,1/2}, \nabla v_{h}\bigr)\\ &\quad {}+ \frac{1}{2} b_{h}\bigl(3u_{h}^{m-1}-u_{h}^{m-2},u_{h}^{m,1/2},v_{h} \bigr)-\bigl( p_{h}^{m},\operatorname{div}v_{h} \bigr)=0, \\ &\bigl( \operatorname{div}u_{h}^{m},q_{h} \bigr)=0, \end{aligned} $$ (CNLE) where the operator \(b_{h}(\cdot ,\cdot ,\cdot )\) is the same as in Case 1 (see the previous section), and for \(m=1\), the scheme is replaced by (CN) to be consistent with second-order time-discretization. Scheme (CNLE) is linearly implicit, unconditionally nonlinearly stable, and second-order accurate [2, 24, 25]. In [25], it is shown that no time-step restriction is required for the convergence (but with mild assumptions on the pressure), and, additionally, optimal convergence for smoother solutions is proved. Here, we prove the following result, which does not assume any extra hypothesis on either the Leray–Hopf weak solution u or the pressure p, and which can be used to prove the local energy inequality, reasoning as in the previous sections. Let \(N\in \mathbb{N}\) and \(m=1,\dots ,N\). Then, for (CNLE) the following discrete energy-type equality holds true: $$ \frac{1}{2}\bigl( \bigl\Vert u_{h}^{m} \bigr\Vert _{2}^{2}- \bigl\Vert u_{h}^{m-1} \bigr\Vert _{2}^{2}\bigr)+\nu \Delta t \bigl\Vert \nabla u_{h}^{m,\frac{1}{2}} \bigr\Vert _{2}^{2}=0. $$ Moreover, if \(u_{0}\in H^{1}_{\#}\), there exists \(C>0\) such that, if $$ \Delta t\leq \frac{\nu}{16}\min \biggl\{ h^{2}, \frac{ h^{3} \Vert u_{0} \Vert ^{2}}{4 C^{2}} \biggr\} , $$ then $$ \sum_{m=2}^{N} \bigl\Vert u_{h}^{m}-u_{h}^{m-1} \bigr\Vert _{2}^{2}\leq C.
$$ The first part of Lemma 6.2 can be proved in a direct way simply using \(u_{h}^{m,1/2}\) as test function, obtaining (6.1); the proof of the second estimate requires some additional work in the spirit of [28, Sect. 19]. To this end, let us define $$ \delta ^{m}_{h}:=\frac{u^{m}_{h}-u^{m-1}_{h}}{2}. $$ Using \(2 \Delta t\, u^{m}_{h}\) as test function in (CNLE), we get $$ \begin{aligned} &\bigl\Vert u^{m}_{h} \bigr\Vert ^{2}_{2}- \bigl\Vert u^{m-1}_{h} \bigr\Vert ^{2}_{2}+ \bigl\Vert \delta ^{m}_{h} \bigr\Vert ^{2}_{2} +2 \nu \Delta t \bigl\Vert \nabla u^{m}_{h} \bigr\Vert ^{2}_{2} \\ &\quad =-2\nu \Delta t\bigl(\nabla \delta ^{m}_{h},\nabla u^{m}\bigr)-\Delta t\, b_{h}\bigl(3u_{h}^{m-1}-u_{h}^{m-2},u_{h}^{m,1/2},u^{m}_{h} \bigr) \\ &\quad =-2\nu \Delta t \bigl(\nabla \delta ^{m}_{h},\nabla u^{m}\bigr)- \Delta t\, b_{h}\biggl(3 u_{h}^{m-1}-u_{h}^{m-2}, \frac{-u_{h}^{m}+u^{m-1}_{h}}{2},u^{m}_{h}\biggr). \end{aligned} $$ Hence, the right-hand side can be estimated as follows: $$ \begin{aligned} & \bigl\vert 2\nu \Delta t\bigl(\nabla \delta ^{m}_{h},\nabla u^{m}\bigr)+ \Delta t b_{h}\bigl(3 u_{h}^{m-1}- u_{h}^{m-2},\delta ^{m}_{h},u^{m}_{h} \bigr) \bigr\vert \\ &\quad\leq 2\nu \Delta t \bigl\Vert \nabla \delta ^{m}_{h} \bigr\Vert _{2} \bigl\Vert \nabla u^{m} \bigr\Vert _{2}\\ &\qquad {}+C \Delta t \bigl( \bigl\Vert 3\nabla u_{h}^{m-1} \bigr\Vert _{2}+ \bigl\Vert \nabla u_{h}^{m-2} \bigr\Vert _{2}\bigr) \bigl\Vert \nabla u^{m}_{h} \bigr\Vert _{2} \bigl\Vert \delta ^{m}_{h} \bigr\Vert ^{1/2} _{2} \bigl\Vert \nabla \delta ^{m}_{h} \bigr\Vert ^{1/2}_{2} \\ &\quad\leq \frac{2\Delta t}{h} \bigl\Vert \delta ^{m}_{h} \bigr\Vert _{2} \bigl\Vert \nabla u^{m} \bigr\Vert _{2}+ \frac{C \Delta t}{h^{3/2}} \bigl( \bigl\Vert 3 u_{h}^{m-1} \bigr\Vert _{2}+ \bigl\Vert u_{h}^{m-2} \bigr\Vert _{2}\bigr) \bigl\Vert \nabla u^{m}_{h} \bigr\Vert _{2} \bigl\Vert \delta ^{m}_{h} \bigr\Vert _{2} \\ &\quad \leq \frac{1}{4} \bigl\Vert \delta ^{m}_{h} \bigr\Vert ^{2}_{2}+ \frac{8 (\Delta t)^{2}}{h^{2}} \bigl\Vert \nabla u^{m} \bigr\Vert ^{2}_{2}+ \frac{1}{4} \bigl\Vert \delta ^{m}_{h} \bigr\Vert ^{2}_{2}\\ &\qquad {}+ \frac{8C^{2} (\Delta t)^{2}}{h^{3}} \bigl( \bigl\Vert 3 u_{h}^{m-1} \bigr\Vert _{2}+ \bigl\Vert u_{h}^{m-2} \bigr\Vert _{2}\bigr)^{2} \bigl\Vert \nabla u^{m}_{h} \bigr\Vert ^{2}_{2} . \end{aligned} $$ Next, using the uniform estimate on \(\|u_{h}^{m}\|_{2}\) coming from the previous step, we get $$ \bigl\Vert u^{m}_{h} \bigr\Vert ^{2}_{2}- \bigl\Vert u^{m-1}_{h} \bigr\Vert ^{2}_{2}+ \frac{1}{2} \bigl\Vert \delta ^{m}_{h} \bigr\Vert ^{2}_{2} +\nu \Delta t \bigl\Vert \nabla u_{h}^{m} \bigr\Vert ^{2}_{2} \biggl(2- \frac{8\Delta t}{\nu h^{2}}-\frac{32 C^{2} \Delta t}{\nu h^{3}} \Vert u_{0} \Vert ^{2}_{2} \biggr)\leq 0, $$ and under the restriction on Δt and h from (6.2), we obtain $$ \bigl\Vert u^{m}_{h} \bigr\Vert ^{2}_{2}- \bigl\Vert u^{m-1}_{h} \bigr\Vert ^{2}_{2}+ \frac{1}{2} \bigl\Vert \delta ^{m}_{h} \bigr\Vert ^{2}_{2}+\nu \Delta t \bigl\Vert \nabla u_{h}^{m} \bigr\Vert ^{2}_{2}\leq 0, $$ which ends the proof by summation over m. □ The convergence to a weak solution satisfying the global and the local energy inequality follows in the same manner as in [5] and from the results of the previous section. Once the estimates are proven, one has just to rewrite word-by-word the proof in Case 1.
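To make the structure of the method concrete, the sketch below assembles one (CNLE) step as a single linear saddle-point solve; since the advecting field is the extrapolated velocity \(3u_{h}^{m-1}-u_{h}^{m-2}\), no nonlinear iteration is needed. All matrix names are assumptions introduced only for illustration (pre-assembled finite element matrices for an inf-sup stable pair, with the constant pressure mode removed); this is a sketch consistent with (CNLE), not the authors' implementation.

```python
import numpy as np

def cnle_step(u_prev, u_prev2, M, A, B, conv_matrix, dt, nu):
    """One (CNLE) step: a single linear saddle-point solve (illustrative sketch).

    Assumed, pre-assembled finite element matrices:
      M            velocity mass matrix
      A            velocity stiffness matrix (vector Laplacian)
      B            discrete divergence matrix (rows: pressure test functions)
      conv_matrix  callable returning the matrix of v -> b_h(w, v, .) for a
                   frozen advecting field w (here the extrapolated velocity)
    """
    w = 3.0 * u_prev - u_prev2                 # linear extrapolation
    C = conv_matrix(w)                         # convection matrix acting on u^{m,1/2}
    nV, nP = M.shape[0], B.shape[0]
    K = 0.5 * (nu * A + 0.5 * C)               # outer 0.5: u^{m,1/2} = (u^m + u^{m-1}) / 2
    S = np.block([[M / dt + K, -B.T],
                  [B, np.zeros((nP, nP))]])    # assumes the constant pressure mode is excluded
    rhs = np.concatenate([(M / dt - K) @ u_prev, np.zeros(nP)])
    sol = np.linalg.solve(S, rhs)
    return sol[:nV], sol[nV:]                  # (u^m, p^m)
```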
Crank–Nicolson/Adams–Bashforth In the same spirit of Case 1, we can also consider the Crank–Nicolson scheme for the linear part and the Adams–Bashforth for the inertial one, as it is studied, for instance, in [28, Sect. 19]. The algorithm reads as follows: solve for \(m\geq 2\) $$ \begin{aligned} &\bigl(d_{t} u^{m}_{h},v_{h} \bigr)+\nu \bigl(\nabla u_{h}^{m,1/2}, \nabla v_{h}\bigr)+ \frac{3}{2}b_{h}\bigl(u_{h}^{m-1}, u_{h}^{m-1},v_{h}\bigr) \\ &\quad {}-\frac{1}{2}b_{h}\bigl(u_{h}^{m-2}, u_{h}^{m-2},v_{h}\bigr)-\bigl( p_{h,}^{m}, \operatorname{div}v_{h}\bigr)=0, \\ & \bigl( \operatorname{div}u_{h}^{m},q_{h} \bigr)=0, \end{aligned} $$ (CNAB) where \(b_{h}(\cdot ,\cdot ,\cdot )\) is defined by means of (3.6)–(3.7), while again \(u^{1}_{h}\) is obtained by an iteration of (CN). This method is explicit in the nonlinear term and only conditionally stable [21, 28]. The (CNAB) method is popular for approximating Navier–Stokes flows because it is fast and easy to implement. For example, it is used to model turbulent flows induced by wind turbine motion, turbulent flows transporting particles, and reacting flows in complex geometries; see Ingram [25]. First, observe that it is possible to prove, with a direct argument, a sort of energy balance for the scheme, namely an inequality of this kind $$ \frac{1}{2} \bigl\Vert u^{1}_{h} \bigr\Vert ^{2}_{2}+\nu \Delta t \bigl\Vert \nabla u^{1/2}_{h} \bigr\Vert ^{2}_{2} \leq \frac{1}{2} \Vert u_{0} \Vert ^{2}_{2}, $$ but nevertheless, from the above estimate, one can also obtain by means of the inverse inequality $$ \frac{1}{2} \bigl\Vert u^{1}_{h} \bigr\Vert ^{2}_{2}+\frac{\nu \Delta t}{4} \bigl\Vert \nabla u^{1}_{h} \bigr\Vert ^{2}_{2}\leq \frac{1}{2} \Vert u_{0} \Vert ^{2}_{2}+ \frac{\nu \Delta t}{4} \bigl\Vert \nabla u^{0}_{h} \bigr\Vert ^{2}_{2} \leq \biggl(\frac{1}{2}+ \frac{\nu \Delta t}{4 h^{2}} \biggr) \Vert u_{0} \Vert ^{2}_{2}:=K_{3}, $$ with \(K_{3}\) independent of Δt and h. The estimate for \(m>1\) is obtained by a induction argument in [28, Lemma 19.1]. The proved result is the following. Assume that \(u_{0}\in L^{2}_{\operatorname{div}}\), and (6.3) holds. Then, there exists \(K_{4}\) independent of Δt and h such that if $$ \Delta t\leq \frac{4 c_{1}^{2}}{\nu}\quad \textit{and}\quad \frac{\Delta t}{h^{3}}\leq \max \biggl\{ \frac{1}{32\nu},\frac{c\nu}{K_{4}} \biggr\} , $$ $$ \begin{aligned} & \bigl\Vert u^{n}_{h} \bigr\Vert ^{2}_{2}\leq K_{4}, \\ & \sum_{m=1}^{N} \bigl\Vert u^{m}_{h}-u^{m-1}_{h} \bigr\Vert ^{2}_{2}\leq 32 K_{4}, \\ & \Delta t\sum_{m=1}^{N} \bigl\Vert \nabla u^{m}_{h} \bigr\Vert ^{2}_{2} \leq 4 K_{4}. \end{aligned} $$ We just comment that the proof is obtained by showing (with the same estimates employed in the previous case) that $$ \biggl(1+\frac{\Delta t}{2 c_{1}^{2}} \biggr)\xi ^{n}\leq \xi ^{n-1} \quad \text{where } \xi ^{m}:= \bigl\Vert u^{m}_{h} \bigr\Vert ^{2}_{2}+\frac{1}{4} \bigl\Vert u^{m}_{h}-u^{m-1}_{h} \bigr\Vert ^{2}_{2}, $$ and then applying an inductive argument. This is enough to prove the standard result \(u^{m}_{h}\in l^{\infty}(L^{2})\cap l^{2}(H^{1})\) from which one deduces the estimates also on the pressure. Next passage to the limit is again standard showing that the linear interpolated sequence converges to a distributional solution of the NSE. A nontrivial point is to justify the global energy inequality because, in this case, the estimate (5.8) does not hold. 
The functions \(v^{\Delta t}_{h}\) and \(u^{\Delta t}_{h}\) have the required regularity but do not satisfy the correct energy balance, since $$ \frac{3}{2}b_{h}\bigl(u_{h}^{m-1}, u_{h}^{m-1}, u^{m}_{h}\bigr)- \frac{1}{2}b_{h}\bigl(u_{h}^{m-2}, u_{h}^{m-2},u^{m}_{h}\bigr)\neq0. $$ The correct energy balance is satisfied only in the limit \((h,\Delta t)\to (0,0)\), but this cannot be deduced at this stage. As usual, the global energy inequality cannot be proved by means of testing with the solution itself, but only after a limiting process, cf. [4]. The way of obtaining it passes through the verification that \((u,p)\) is a suitable weak solution. The local energy inequality can be verified as in [5] and in the results for Case 1, once the (conditional) estimate in (6.5)\(_{2}\) is obtained. Note that in this case, the restriction on the relative size of Δt and h is needed already for the first a priori estimate. Next, by adapting a well-known argument in [11, Section 2C], we can deduce it from (2.3). In fact, it is enough to replace ϕ by the product of ϕ and \(\chi _{\epsilon}\) (which is a mollification of \(\chi _{[t_{1},t_{2}]}(t)\), the characteristic function of \([t_{1},t_{2}]\)) and pass to the limit as \(\epsilon \to 0\) to get $$ \begin{aligned} & \int _{\mathbb{T}^{3}} \bigl\vert u(t_{2}) \bigr\vert ^{2}\phi (t_{2}) \,dx+ \nu \int _{0}^{T} \int _{\mathbb{T}^{3}} \vert \nabla u \vert ^{2}\phi \,dx \,dt \\ &\quad \leq \int _{\mathbb{T}^{3}} \bigl\vert u(t_{1}) \bigr\vert ^{2}\phi (t_{1}) \,dx + \int _{0}^{T} \int _{\mathbb{T}^{3}} \biggl[\frac{ \vert u \vert ^{2}}{2} (\partial _{t} \phi +\nu \Delta \phi )+ \biggl(\frac{ \vert u \vert ^{2}}{2}+p \biggr)u\cdot \nabla \phi \biggr] \,dx \,dt, \end{aligned} $$ and the above formula is particularly significant if \(\phi (\tau ,x)\neq0\) in \((t_{1},t_{2})\). Next, in the above formula, one can take a sequence \(\phi _{n}\) of smooth functions converging to the function \(\phi \equiv 1\); at least in the whole space or in the space-periodic setting, one then gets the global energy inequality (2.2), as explained at the beginning of [11, Sect. 8]. Moreover, the same argument applied to arbitrary time intervals also shows that $$ \frac{1}{2} \bigl\Vert u(t_{2}) \bigr\Vert _{2}^{2}+\nu \int _{t_{1}}^{t_{2}} \bigl\Vert \nabla u(s) \bigr\Vert _{2}^{2} \,ds\leq \frac{1}{2} \bigl\Vert u(t_{1}) \bigr\Vert _{2}^{2}\quad \text{for all } 0\leq t_{1}\leq t_{2}\leq T. $$ Hence, the strong global energy inequality holds true. In this paper, we analyzed several second-order (in time) numerical methods for the unsteady Navier–Stokes equations. We proved that numerical solutions converge to physically relevant solutions under mild assumptions on the discretization parameters (in space and time). The analysis is performed for several different discretizations of the convective term, and the methods applied, for their generality, can also be adapted to different situations provided that simple stability estimates are at hand. NSE: Navier–Stokes equations CN: Crank–Nicolson CNLE: Crank–Nicolson with Linear Extrapolation CNAB: Crank–Nicolson/Adams–Bashforth Extrapolation Albritton, D., Brué, E., Colombo, M.: Non-uniqueness of Leray solutions of the forced Navier-Stokes equations. Ann. Math. (2) 196(1), 415–455 (2022) Baker, G.A.: Projection Methods for Boundary-Value Problems for Equations of Elliptic and Parabolic Type with Discontinuous Coefficients. ProQuest LLC, Ann Arbor (1973).
Thesis (Ph.D.)–Cornell University Berselli, L.C.: Weak solutions constructed by semi-discretization are suitable: the case of slip boundary conditions. Int. J. Numer. Anal. Model. 15, 479–491 (2018) Berselli, L.C.: Three-Dimensional Navier-Stokes Equations for Turbulence. Mathematics in Science and Engineering. Academic Press, London (2021) Berselli, L.C., Fagioli, S., Spirito, S.: Suitable weak solutions of the Navier-Stokes equations constructed by a space-time numerical discretization. J. Math. Pures Appl. 9(125), 189–208 (2019) Berselli, L.C., Spirito, S.: On the vanishing viscosity limit of 3D Navier-Stokes equations under slip boundary conditions in general domains. Commun. Math. Phys. 316, 171–198 (2012) Berselli, L.C., Spirito, S.: An elementary approach to the inviscid limits for the 3D Navier-Stokes equations with slip boundary conditions and applications to the 3D Boussinesq equations. NoDEA Nonlinear Differ. Equ. Appl. 21, 149–166 (2014) Berselli, L.C., Spirito, S.: Weak solutions to the Navier-Stokes equations constructed by semi-discretization are suitable. In: Recent Advances in Partial Differential Equations and Applications. Contemp. Math., vol. 666, pp. 85–97. Am. Math. Soc., Providence (2016) Chapter MATH Google Scholar Bramble, J.H., Xu, J.: Some estimates for a weighted \(L^{2}\) projection. Math. Comput. 56, 463–476 (1991) Brenner, S.C., Scott, L.R.: The Mathematical Theory of Finite Element Methods, 3rd edn. Texts in Applied Mathematics, vol. 15. Springer, New York (2008) Caffarelli, L., Kohn, R., Nirenberg, L.: Partial regularity of suitable weak solutions of the Navier-Stokes equations. Commun. Pure Appl. Math. 35, 771–831 (1982) Carstensen, C.: Merging the Bramble-Pasciak-Steinbach and the Crouzeix-Thomée criterion for \(H^{1}\)-stability of the \(L^{2}\)-projection onto finite element spaces. Math. Comput. 71, 157–163 (2002) Demlow, A., Guzmán, J., Schatz, A.H.: Local energy estimates for the finite element method on sharply varying grids. Math. Comput. 80, 1–9 (2011) Diening, L., Storn, J., Tscherpel, T.: On the Sobolev and \(L^{p}\)-stability of the \(L^{2}\)-projection. SIAM J. Numer. Anal. 59, 2571–2607 (2021) Douglas, J. Jr., Dupont, T., Wahlbin, L.: The stability in \(L^{q}\) of the \(L^{2}\)-projection into finite element function spaces. Numer. Math. 23, 193–197 (1974/75) Guermond, J.-L.: Finite-element-based Faedo-Galerkin weak solutions to the Navier-Stokes equations in the three-dimensional torus are suitable. J. Math. Pures Appl. 9(85), 451–464 (2006) Guermond, J.-L.: Faedo-Galerkin weak solutions of the Navier-Stokes equations with Dirichlet boundary conditions are suitable. J. Math. Pures Appl. (9) 88, 87–106 (2007) Guermond, J.-L.: On the use of the notion of suitable weak solutions in CFD. Int. J. Numer. Methods Fluids 57, 1153–1170 (2008) Guermond, J.-L., Oden, J.T., Prudhomme, S.: Mathematical perspectives on large Eddy simulation models for turbulent flows. J. Math. Fluid Mech. 6, 194–248 (2004) He, Y., Li, K.: Nonlinear Galerkin method and two-step method for the Navier-Stokes equations. Numer. Methods Partial Differ. Equ. 12, 283–305 (1996) He, Y., Sun, W.: Stability and convergence of the Crank-Nicolson/Adams-Bashforth scheme for the time-dependent Navier-Stokes equations. SIAM J. Numer. Anal. 45, 837–869 (2007) Heywood, J.G., Rannacher, R.: Finite-element approximation of the nonstationary Navier-Stokes problem. IV. Error analysis for second-order time discretization. SIAM J. Numer. Anal. 
27, 353–384 (1990) Horiuti, K.: Comparison of conservative and rotational forms in large Eddy simulation of turbulent channel flow. J. Comput. Phys. 71, 343–370 (1987) Ingram, R.: A new linearly extrapolated Crank-Nicolson time-stepping scheme for the Navier-Stokes equations. Math. Comput. 82, 1953–1973 (2013) Ingram, R.: Unconditional convergence of high-order extrapolations of the Crank-Nicolson, finite element method for the Navier-Stokes equations. Int. J. Numer. Anal. Model. 10, 257–297 (2013) Layton, W., Manica, C.C., Neda, M., Olshanskii, M., Rebholz, L.G.: On the accuracy of the rotation form in simulations of the Navier-Stokes equations. J. Comput. Phys. 228, 3433–3447 (2009) Lions, P.-L.: Mathematical Topics in Fluid Mechanics. Vol. 2. Oxford Lecture Series in Mathematics and Its Applications, vol. 10. Clarendon Press, New York (1998). Compressible models, Oxford Science Publications Marion, M., Temam, R.: Navier-Stokes equations: theory and approximation. In: Handbook of Numerical Analysis, Vol. VI, Handb. Numer. Anal., vol. VI, pp. 503–688. North-Holland, Amsterdam (1998) Quarteroni, A., Valli, A.: Numerical Approximation of Partial Differential Equations. Springer Series in Computational Mathematics, vol. 23. Springer, Berlin (1994) Scheffer, V.: Hausdorff measure and the Navier-Stokes equations. Commun. Math. Phys. 55, 97–112 (1977) Temam, R.: Navier-Stokes Equations. Theory and Numerical Analysis. Studies in Mathematics and Its Applications, vol. 2. North-Holland, Amsterdam (1977) Thomée, V.: Galerkin Finite Element Methods for Parabolic Problems. Springer Series in Computational Mathematics, vol. 25. Springer, Berlin (1997) Tone, F.: Error analysis for a second order scheme for the Navier-Stokes equations. Appl. Numer. Math. 50, 93–119 (2004) Vasseur, A.F.: A new proof of partial regularity of solutions to Navier-Stokes equations. NoDEA Nonlinear Differ. Equ. Appl. 14, 753–785 (2007) Zang, T.: On the rotation and skew-symmetric forms for incompressible flow simulations. Appl. Numer. Math. 7, 27–40 (1991) The authors thank V. DeCaria for useful suggestions and comments on an early draft of the paper. The authors acknowledge support by INdAM-GNAMPA and by MIUR, within PRIN20204NT8W4_004: Nonlinear evolution PDEs, fluid dynamics and transport equations: theoretical foundations and applications. Dipartimento di Matematica, Università degli Studi di Pisa, Via F. Buonarroti 1/c, I-56127, Pisa, Italy Luigi C. Berselli DISIM - Dipartimento di Ingegneria e Scienze dell'Informazione e Matematica, Università degli Studi dell'Aquila, Via Vetoio, I-67100, L'Aquila, Italy Stefano Spirito All authors read and approved the final manuscript. Correspondence to Luigi C. Berselli. Berselli, L.C., Spirito, S. Convergence of second-order in time numerical discretizations for the evolution Navier-Stokes equations. Adv Cont Discr Mod 2022, 65 (2022). https://doi.org/10.1186/s13662-022-03736-2 65M12 Navier-Stokes equations Local energy inequality Numerical schemes Second-order methods Finite element and finite difference methods New Mathematical Trends in the Study of Materials Behavior
Population measures of spike train synchrony
Conor Houghton (2013), Scholarpedia, 8(10):30635. doi:10.4249/scholarpedia.30635. Conor Houghton, Department of Computer Science, University of Bristol, England.
A population measure of spike train synchrony estimates the similarity of a pair of population responses. It is just like a measure of spike train synchrony, which estimates the similarity of a pair of spike trains, except that it measures similarity between collections of spike trains. Measures of this type are becoming important because of the growing availability of multi-neuronal data sets. Moreover, multi-neuronal data sets are increasingly recorded from neurons that are near each other and correspondingly more likely to participate in the same coding functions. Population measures are proposed as a tool that can be applied to these data and used to assess how neurons cooperate in neuronal coding. A single-neuron measure of spike train synchrony is a measure in which all spikes from a neuron are treated as identical, so that the spike train is treated as nothing more than a sequence of spike times. Of course, this is not true of real spike trains, in which spike profiles vary from neuron to neuron and from spike to spike; however, typically this variability does not appear to substantially affect the downstream consequences of spike arrival. In a population synchrony measure, spikes do not just have a time, they are also labeled by the neuron that fired them.
Measures of spike train synchrony There are a number of different approaches to spike train synchrony, which give rise to a corresponding variety of approaches to population measures of synchrony. To briefly summarize, if \(\textbf{u}\) and \(\textbf{v}\) are two spike trains with spike times \(\{u_1,u_2,\ldots,u_n\}\) and \(\{v_1,v_2,\ldots,v_m\}\), a measure of spike train synchrony maps the pair to a positive real number. In some measures this number expresses the similarity or synchrony of the pair and, using the notation \(s(\textbf{u},\textbf{v})\), a high value corresponds to two closely related spike trains. It is more common, however, to define a measure of distance, or dissimilarity, and using the notation \(d(\textbf{u},\textbf{v})\), very similar spike trains correspond to a low value. Either way, it is common for measures of similarity or dissimilarity to be symmetric, such that \(s(\textbf{u},\textbf{v})=s(\textbf{v},\textbf{u})\) or \(d(\textbf{u},\textbf{v})=d(\textbf{v},\textbf{u})\). In the case of dissimilarity measures, it is also common to choose measures that satisfy all the conditions required for the measure to be a metric: a measure of dissimilarity is a metric if it is positive, symmetric and non-degenerate, so that \(d(\textbf{u},\textbf{v})=0\) implies \(\textbf{u}=\textbf{v}\), and if it satisfies the triangle inequality, so that \(d(\textbf{u},\textbf{v})\le d(\textbf{u},\textbf{w})+d(\textbf{w},\textbf{v})\).
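Concretely, a population response of this kind can be stored as a list of per-neuron spike-time arrays, with the neuron label given by the position in the list; the short sketch below (in Python, with made-up spike times) only fixes this representation, which is used informally throughout the article.

```python
import numpy as np

# One convenient representation of a labeled population response: one array of
# spike times per neuron, so the neuron label is simply the position in the list.
population_response = [
    np.array([0.012, 0.075, 0.130]),   # spike times of neuron 0 (in seconds)
    np.array([0.040, 0.081]),          # spike times of neuron 1
    np.array([]),                      # neuron 2 fired no spikes
]

# The equivalent "labeled spike train" view: (time, neuron) pairs, sorted in time.
labeled_spikes = sorted((t, i) for i, u in enumerate(population_response) for t in u)
```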
Broadly speaking, population measures come in two varieties, those that measure the over-all synchrony of a set of responses and are calculated by averaging single-neuron measures of synchrony, and those which measure the similarity or dissimilarity between two sets of responses. Since they form the base on which the population measures in this article are built, three different single-neuron distance measures are described here. The Victor-Purpura metric The Victor-Purpura metric is an "edit distance", which means that it is a measure of the amount of editing required to transform one spike train into the other (Victor and Purpura, 1996, 1997). In this metric there are three different edit types for changing a spike train, and each has a cost. Spikes can be added or deleted at a cost of one for each operation, and spikes can be moved at a cost of \(q|\delta t|\) for a temporal distance \(\delta t\). The distance is then the least expensive sequence of edits that changes one spike train into the other. The parameter \(q\) determines how costly it is to move a spike. It is never worthwhile to move a spike more than \(2/q\) since doing so would have a higher cost than deleting the spike at one location and adding it at the other. For a small value of \(q\) spikes can be moved with little cost and the distance is substantially determined by the difference in the number of spikes. Conversely, as \(q\) becomes larger, the metric becomes increasingly sensitive to spike times. The van Rossum metric The van Rossum metric is an embedding-based metric (van Rossum, 2001). The spike trains are first mapped to continuous functions of time by convolving with a kernel: \[ \mathbf{u}\rightarrow f(t;\mathbf{u})=\sum_{i=1}^{n}h(t-u_{i}) \] where \(h(t)\) is the kernel, usually a causal exponential \( h(t)=\sqrt{\frac{2}{\tau}}\Theta(t)e^{-t/\tau} \) where \(\Theta(t)\) is the Heaviside step function with \(\Theta(t)=0\) for \(t<0\) and \(\Theta(t)=1\) otherwise. The distance between two spike trains \(\textbf{u}\) and \(\textbf{v}\) is then defined as the \(L^2\) distance between the corresponding functions: \[ d(\textbf{u},\textbf{v})=\sqrt{\int_{0}^{T}dt(f(t;\mathbf{u})-f(t;\mathbf{v}))^{2}} \] The timescale \(\tau\), like \(2/q\) in the Victor-Purpura metric, determines the sensitivity of the metric to spike times, with small values giving a metric that is sensitive to spike times and larger values a metric which compares firing rates. The ISI- and the SPIKE-distance The ISI- and the SPIKE-distance differ from the two distance measures mentioned above in that they emphasize temporal locality (Quian Quiroga et al., 2002, Kreuz et al., 2007, 2011, 2012). They each define a local measure of the dissimilarity $d(t;\textbf{u},\textbf{v})$ which can be integrated to give an overall distance between the spike trains \[ d(\textbf{u},\textbf{v})=\int_0^T dt |d(t;\textbf{u},\textbf{v})| \] The idea is that the time local profile \(d(t;\textbf{u},\textbf{v})\) can be used to track changes of synchrony during the time course of an experiment. The ISI- and the SPIKE-distance differ in how \(d(t;\textbf{u},\textbf{v})\) is defined, the ISI-distance is calculated using local estimates of firing rates, the SPIKE-distance depends on local differences in spike timing between the two spike trains. Measures of overall synchrony Averaging single-neuron similarities or dissimilarities is the most straightforward way to measure the overall synchrony of a population of responses (Kreuz et al., 2009). 
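Before spelling out that averaging construction, it may help to have a small computational companion to the two single-neuron distances used most often below. The Python sketch that follows is illustrative rather than a reference implementation: the Victor-Purpura distance is computed with the standard dynamic-programming recursion over the two spike-time lists, and the van Rossum distance uses the causal exponential kernel together with its closed-form pairwise sums, which implicitly assumes that the integration window extends well beyond the last spike (truncating to \([0,T]\) would introduce small corrections).

```python
import numpy as np

def victor_purpura(u, v, q):
    # Victor-Purpura distance between spike-time lists u and v, with cost parameter q.
    n, m = len(u), len(v)
    G = np.zeros((n + 1, m + 1))
    G[:, 0] = np.arange(n + 1)        # delete all spikes of u
    G[0, :] = np.arange(m + 1)        # insert all spikes of v
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            G[i, j] = min(G[i - 1, j] + 1,                                 # delete u[i-1]
                          G[i, j - 1] + 1,                                 # insert v[j-1]
                          G[i - 1, j - 1] + q * abs(u[i - 1] - v[j - 1]))  # move a spike
    return G[n, m]

def van_rossum(u, v, tau):
    # van Rossum distance with the causal exponential kernel, window [0, infinity).
    def corr(a, b):
        a, b = np.asarray(a, float), np.asarray(b, float)
        if a.size == 0 or b.size == 0:
            return 0.0
        return np.exp(-np.abs(a[:, None] - b[None, :]) / tau).sum()
    return np.sqrt(max(corr(u, u) + corr(v, v) - 2.0 * corr(u, v), 0.0))
```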
So, if the spike trains in a population are \(\{\textbf{u}_1,\textbf{u}_2,\ldots,\textbf{u}_N\}\) and \(d(\textbf{u}_i,\textbf{u}_j)\) is a measure of the distance between \(\textbf{u}_i\) and \(\textbf{u}_j\), then \[D=\frac{2}{N(N-1)}\sum_{i=1}^{N-1}\sum_{j=i+1}^Nd(\textbf{u}_i,\textbf{u}_j) \] is the average distance. Here, \(d(\textbf{u}_i,\textbf{u}_j)\) could be any of the measures of dissimilarity described in Measures of spike train synchrony. If the measure is given by integrating a local dissimilarity profile, as is the case for the ISI- and the SPIKE-distance, then this also gives rise to a local measure of overall population dissimilarity: \[ D=\int_0^T dt D(t) \] where \[ D(t)=\frac{2}{N(N-1)}\sum_{i=1}^{N-1}\sum_{j=i+1}^N|d(t;\textbf{u}_i,\textbf{u}_j)|. \] A measure of overall synchrony gives a single quantity describing how spread out a set of responses is. These might be multiple responses from a single neuron, for example multiple trials with the same stimulus, in which case overall synchrony quantifies the reliability of the response, or they might be responses to a corpus of stimuli, in which case the overall synchrony measures how strongly modulated the neuron is by the stimulus. Alternatively, the responses might be spike trains from multiple neurons with a single stimulus. In this case, if the role of the population is to reduce noise, the individual neurons will respond in the same way to a single aspect of the stimulus and will be highly synchronized, with a small distance $D$; conversely, if different neurons respond preferentially to different aspects of the stimulus, the neurons will not be synchronized and $D$ will be large. In contrast, the measures of the synchrony of two population responses, which will be examined next, measure a distance between two equally-sized sets of spike trains, most typically, two different population responses or responses from two different sets of neurons where each neuron from one set has an equivalent neuron in the other set, or, perhaps, responses from real neurons and a model of the same network.
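The averaging in the definition of \(D\) is straightforward to implement; the sketch below accepts any pairwise distance as a callable, and the usage line assumes the illustrative van_rossum function defined earlier.

```python
def average_distance(spike_trains, d):
    # Average pairwise distance D over a list of spike trains, for a distance function d.
    N = len(spike_trains)
    total = sum(d(spike_trains[i], spike_trains[j])
                for i in range(N - 1) for j in range(i + 1, N))
    return 2.0 * total / (N * (N - 1))

# Example (illustrative): D = average_distance(trains, lambda a, b: van_rossum(a, b, tau=0.010))
```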
Measures of the synchrony of two population responses

Both the Victor-Purpura and the van Rossum metric have extensions to the population case, allowing two sets of population responses to be compared. Each maps two equally sized sets of spike trains \(\mathcal{U}=\{\mathbf{u}_1,\mathbf{u}_2,\ldots,\mathbf{u}_N\}\) and \(\mathcal{V}=\{\mathbf{v}_1,\mathbf{v}_2,\ldots,\mathbf{v}_N\}\) to a real positive number \(d(\mathcal{U},\mathcal{V})\). In both cases, the extension to a population metric follows the same framework as the single-neuron metric it is based on: the extension of the Victor-Purpura metric involves an extra edit corresponding to changing the neuron label of a spike, and the extension of the van Rossum metric involves expanding the space the spike trains are embedded in, to give directions corresponding to the neuron labels. In each case extra parameters are introduced which specify the significance of the identity of the neuron that fires a spike. This allows the metrics to interpolate between two extremes. One extreme is the summed population code, in which the identity of the neuron is irrelevant to coding, so all that matters is that a spike is fired as part of the population response, not which neuron fired it. The other extreme is the labeled-line code, in which there is no population effect and the most appropriate distance between the two population responses is the sum of the distances between the pairs of responses for each neuron.

The population Victor-Purpura metric

The simplest way to describe the population Victor-Purpura metric is to think of the population response not as a set of spike trains, but as a single spike train in which each spike carries a label marking which neuron fired it. Just as the time of a spike can be changed at a cost \(q|\delta t|\), the neuron label can also be changed, at a cost \(k\), where \(k\) is a parameter specifying the significance of the neuron label (Aronov et al., 2003). If \(k=0\), there is no cost for relabeling a spike and the distance is unaffected by which neuron fired each spike; this represents a summed population code. Conversely, if \(k\geq 2\) it is at least as expensive to relabel a spike as it is to delete a spike from one spike train and add it to another. In this case, the distance between \(\mathcal{U}\) and \(\mathcal{V}\) is the sum of the distances between the individual spike trains, and \(k=2\) represents a labeled-line code. Here a single cost \(k\) has been used for the change of spike label; in principle there could be a different cost for every possible change of label, giving \(N(N-1)/2\) \(k\)-like parameters in all, but in practice the metric is simplified, as here, by assuming all label changes have the same cost. As with the single-neuron Victor-Purpura metric there is a straightforward algorithm for calculating values of the population metric; this algorithm has complexity \(O(Nn^{N+1})\) for \(N\) spike trains with average length \(n\). The two limiting cases, \(k=0\) and \(k\geq 2\), are sketched in code below.
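The general algorithm for intermediate label-change costs \(0<k<2\) is more involved (see Aronov et al., 2003) and is not reproduced here; the two limiting cases described above, however, reduce to the single-neuron metric and can be sketched directly, again reusing the illustrative `victor_purpura_distance` from the first sketch.

```python
def vp_summed_population(U, V, q):
    """k = 0 limit: relabeling is free, so which neuron fired a spike is
    irrelevant and the two populations are compared as superimposed trains."""
    merged_u = sorted(t for train in U for t in train)
    merged_v = sorted(t for train in V for t in train)
    return victor_purpura_distance(merged_u, merged_v, q)

def vp_labeled_line(U, V, q):
    """k >= 2 limit: relabeling is never worthwhile, so the distance is the
    sum of single-neuron distances between corresponding spike trains."""
    return sum(victor_purpura_distance(u, v, q) for u, v in zip(U, V))
```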
The population van Rossum metric

The single-neuron van Rossum metric is defined by a map embedding the spike trains into the space of continuous functions, with the \(L^{2}\) metric on the space of functions giving the distance. To extend this idea to populations, the embedding space is enlarged to a space of \(N\)-dimensional functions (Houghton and Sen, 2008). Each of the \(N\) neurons in the set is associated with a specific direction in this \(N\)-dimensional space, and spike trains for that neuron are mapped to functions embedded along that direction. Specifically, for the \(i\)th spike train
\[ {\bf u}_{i}\mapsto f(t;\mathbf{u}_{i})\mathbf{e}_{i} \]
where \(\mathbf{e}_{i}\) is an \(N\)-dimensional unit vector associated with the \(i\)th neuron; that is, the unit vectors \(\mathbf{e}_{i}\) determine the direction in which the \(i\)th spike train is mapped. Adding these vectors, one for each single-unit spike train, gives an \(N\)-dimensional vector of functions of time:
\[ \mathcal{U}\mapsto\mathbf{f}(t;\mathcal{U})=\sum_{i=1}^{N}f(t;\mathbf{u}_{i})\mathbf{e}_{i}.\tag{1} \]
The population metric is then defined by the norm in this extended space: if
\[ \mathbf{g}(t)=\left(\begin{array}{c} g_{1}(t)\\ g_{2}(t)\\ \vdots\\ g_{N}(t)\end{array}\right)\tag{2}\]
is a vector of functions of \(t\in[0,T]\), then the norm is
\[ \|\mathbf{g}(t)\|=\sqrt{\int_{0}^{T}dt\,(g_{1}^{2}+g_{2}^{2}+\ldots+g_{N}^{2})} \]
and the corresponding metric induced on the space of population responses is
\[ d(\mathcal{U},\mathcal{V})=\|\mathbf{f}(t;\mathcal{U})-\mathbf{f}(t;\mathcal{V})\|. \]
The directions of the individual unit vectors \(\mathbf{e}_{i}\) serve to parameterize this metric. If the \(\mathbf{e}_{i}\) are all parallel, the metric corresponds to summing the individual response vectors \(f(\mathbf{u}_{i})\mathbf{e}_{i}\). This is equivalent to superimposing the spike trains before mapping them into function space, and thus corresponds precisely to a summed population code. Conversely, if all the vectors are orthogonal, for example if \(\mathbf{e}_{i}\) has one for its \(i\)th component and is otherwise zero, then the multineuronal metric is a Pythagorean sum of the individual van Rossum distances between the individual spike trains: this is a labeled-line code.

While this description, using an extension of the embedding space to extend the metric, is useful in explaining the population metric in terms similar to the description of the original single-neuron metric, the definition can be reduced to a simpler form. For example, if there are just two neurons, \(N=2\), then
\[ d(\mathcal{U},\mathcal{V})= \sqrt{\int_0^T dt \left(\delta_1^2 + \delta_2^2 + 2\cos{\theta}\,\delta_1\delta_2\right)} \]
where \(\delta_i=f(t;\mathbf{u}_i)-f(t;\mathbf{v}_i)\) and \(\theta\) is the angle between \(\mathbf{e}_1\) and \(\mathbf{e}_2\); the direction vectors themselves are no longer mentioned directly. More generally,
\[ d(\mathcal{U},\mathcal{V})= \sqrt{\int_0^T dt \left(\sum_{i=1}^N\delta_i^2 + 2\sum_{i=1}^{N-1}\sum_{j=i+1}^N\cos{\theta_{ij}}\,\delta_{i}\delta_{j}\right)} \]
where
\[ \mathbf{e}_i\cdot\mathbf{e}_j=\cos{\theta_{ij}}. \]
It is possible to reduce the number of parameters by taking all the \(\theta_{ij}\) to have the same value; this is analogous to the population Victor-Purpura metric, where the cost of changing a spike label is always \(k\) irrespective of which labels are involved. In practice the van Rossum metric is calculated using formulas in which the functions are integrated analytically (Schrauwen and Campenhout, 2007; Paiva et al., 2009). The calculation is \(O(n^2N^2)\), which can be reduced to \(O(nN^2)\) using the computational approach discussed in Houghton and Kreuz (2012). A discretized sketch of the reduced formula, using a single common angle, is given below.
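As an editorial illustration of the reduced formula (not the analytic or optimized implementations cited above), the population distance with one common angle \(\theta\) between all pairs of neuron directions might be approximated on a time grid as follows; the grid step, the helper function, and the function name are assumptions made here.

```python
import numpy as np

def population_van_rossum_distance(U, V, tau, T, theta, dt=1e-3):
    """Discretized population van Rossum distance with a single angle theta:
    theta = 0 gives the summed population code, theta = pi/2 the labeled-line
    code (a Pythagorean sum of the single-neuron distances)."""
    t = np.arange(0.0, T, dt)

    def filtered(train):
        f = np.zeros_like(t)
        for s in train:
            f += np.sqrt(2.0 / tau) * (t >= s) * np.exp(-(t - s) / tau)
        return f

    # delta_i(t) = f(t; u_i) - f(t; v_i) for each neuron i
    deltas = [filtered(u) - filtered(v) for u, v in zip(U, V)]
    integrand = sum(d ** 2 for d in deltas)
    for i in range(len(deltas)):
        for j in range(i + 1, len(deltas)):
            integrand = integrand + 2.0 * np.cos(theta) * deltas[i] * deltas[j]
    return np.sqrt(np.sum(integrand) * dt)
```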
In the paper in which it was first introduced (Aronov et al., 2003), the population Victor-Purpura metric was used to analyze the neural coding of spatial phase in spike trains recorded simultaneously from multiple neurons in area V1 of macaque monkeys. A non-zero value of \(k\) was found to be optimal for discriminating responses with different phases, supporting a role for neuron identity in coding. Similarly, in Clemens et al. (2011) the population van Rossum metric was used to provide evidence for a progression from summed population coding to labeled-line coding as signals move along the auditory pathway in crickets.

References

Aronov D, Reich DS, Mechler F, Victor JD (2003). Neural coding of spatial phase in V1 of the macaque monkey. J Neurophysiol 89:3304–3327. doi:10.1152/jn.00826.2002.
Clemens J, Kutzki O, Ronacher B, Schreiber S, Wohlgemuth S (2011). Efficient transformation of an auditory population code in a small sensory system. Proceedings of the National Academy of Sciences 108:13812–13817. doi:10.1073/pnas.1104506108.
Houghton C, Sen K (2008). A new multineuron spike train metric. Neural Comput 20:1495–1511. doi:10.1162/neco.2007.10-06-350.
Houghton C, Kreuz T (2012). On the efficient calculation of van Rossum distances. Network 23:48–58.
Houghton C, Victor JD (2012). Measuring representational distances – the spike-train metrics approach. In: Kriegeskorte N, Kreiman G (eds), Visual population codes: Toward a common multivariate framework for cell recording and functional imaging. MIT Press.
Kreuz T, Haas JS, Morelli A, Abarbanel HDI, Politi A (2007). Measuring spike train synchrony. J Neurosci Methods 165:151–161. doi:10.1016/j.jneumeth.2007.05.031.
Kreuz T, Chicharro D, Andrzejak RG, Haas JS, Abarbanel HDI (2009). Measuring multiple spike train synchrony. J Neurosci Methods 183:287–299. doi:10.1016/j.jneumeth.2009.06.039.
Kreuz T, Chicharro D, Greschner M, Andrzejak RG (2011). Time-resolved and time-scale adaptive measures of spike train synchrony. J Neurosci Methods 195:92–106. doi:10.1016/j.jneumeth.2010.11.020.
Kreuz T, Chicharro D, Houghton C, Andrzejak RG, Mormann F (2013). Monitoring spike train synchrony. J Neurophysiol 109:1457–72. doi:10.1152/jn.00873.2012.
Paiva ARC, Park I, Principe JC (2009). A reproducing kernel Hilbert space framework for spike train signal processing. Neural Computation 21:424–449. doi:10.1162/neco.2008.09-07-614.
Quian Quiroga R, Kreuz T, Grassberger P (2002). Event synchronization: a simple and fast method to measure synchronicity and time delay patterns. Phys Rev E 66:041904. doi:10.1103/PhysRevE.66.041904.
Schrauwen B, Campenhout JV (2007). Linking non-binned spike train kernels to several existing spike train metrics. Neurocomputing 70:1247–1253. doi:10.1016/j.neucom.2006.11.017.
van Rossum MCW (2001). A novel spike distance. Neural Comput 13:751–763. doi:10.1162/089976601300014321.
Victor JD, Purpura KP (1996). Nature and precision of temporal coding in visual cortex: a metric-space analysis. J Neurophysiol 76:1310–1326.
Victor JD, Purpura KP (1997). Metric-space analysis of spike trains: theory, algorithms and application. Network 8:127–164. doi:10.1088/0954-898X/8/2/003.
Victor JD (2005). Spike train metrics. Current Opinion in Neurobiology 15:585–592. doi:10.1016/j.conb.2005.08.002.

See also

James Meiss (2007) Dynamical systems. Scholarpedia, 2(2):1629. doi:10.4249/scholarpedia.1629.
Arkady Pikovsky and Michael Rosenblum (2007) Synchronization. Scholarpedia, 2(12):1459. doi:10.4249/scholarpedia.1459.
David Golomb (2007) Neuronal synchrony measures. Scholarpedia, 2(1):1347. doi:10.4249/scholarpedia.1347.
Thomas Kreuz (2011) Measures of spike train synchrony. Scholarpedia, 6(10):11934. doi:10.4249/scholarpedia.11934.
Thomas Kreuz (2011) Measures of neuronal signal synchrony. Scholarpedia, 6(12):11922. doi:10.4249/scholarpedia.11922.
Thomas Kreuz (2012) SPIKE-distance. Scholarpedia, 7(12):30652. doi:10.4249/scholarpedia.30652.
Nebojsa Bozanic, Mario Mulansky, Thomas Kreuz (2014) SPIKY. Scholarpedia, 9(12):32344.

"Population measures of spike train synchrony" by Conor Houghton, Scholarpedia; licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.
9.E: Systems of Equations and Inequalities (Exercises)
Contributed by OpenStax (Mathematics at OpenStax CNX)
Sections covered: 9.1: Systems of Linear Equations: Two Variables; 9.2: Systems of Linear Equations: Three Variables; 9.3: Systems of Nonlinear Equations and Inequalities: Two Variables; 9.4: Partial Fractions; 9.5: Matrices and Matrix Operations; 9.6: Solving Systems with Gaussian Elimination; 9.7: Solving Systems with Inverses; 9.8: Solving Systems with Cramer's Rule.
9.1: Systems of Linear Equations: Two Variables
1) Can a system of linear equations have exactly two solutions? Explain why or why not. No, you can either have zero, one, or infinitely many. Examine graphs. 2) If you are performing a break-even analysis for a business and their cost and revenue equations are dependent, explain what this means for the company's profit margins. 3) If you are solving a break-even analysis and get a negative break-even point, explain what this signifies for the company? This means there is no realistic break-even point. By the time the company produces one unit they are already making profit. 4) If you are solving a break-even analysis and there is no break-even point, explain what this means for the company. How should they ensure there is a break-even point? 5) Given a system of equations, explain at least two different methods of solving that system. You can solve by substitution (isolating \(x\) or \(y\)), graphically, or by addition. For the exercises 6-10, determine whether the given ordered pair is a solution to the system of equations. 6) \(\begin{align*} 5x-y &= 4\\ x+6y &= 2 \end{align*}\; \text{ and } (4,0)\) 7) \(\begin{align*} -3x-5y &= 13\\ -x+4y &= 10 \end{align*}\; \text{ and } (-6,1)\) 8) \(\begin{align*} 3x+7y &= 1\\ 2x+4y &= 0 \end{align*}\; \text{ and } (2,3)\) 9) \(\begin{align*} -2x+5y &= 7\\ 2x+9y &= 7 \end{align*}\; \text{ and } (-1,1)\) 10) \(\begin{align*} x+8y &= 43\\ 3x-2y &= -1 \end{align*}\; \text{ and } (3,5)\) For the exercises 11-20, solve each system by substitution. 11) \(\begin{align*} x+5y &= 5\\ 2x+3y &= 4 \end{align*}\) \((-1,2)\) 12) \(\begin{align*} 3x-2y &= 18\\ 5x+10y &= -10 \end{align*}\) 13) \(\begin{align*} 4x+2y &= -10\\ 3x+9y &= 0 \end{align*}\) 14) \(\begin{align*} 2x+4y &= -3.8\\ 9x-5y &= 1.3 \end{align*}\) 15) \(\begin{align*} -2x+3y &= 1.2\\ -3x-6y &= 1.8 \end{align*}\) \(\left ( -\dfrac{3}{5},0 \right )\) 16) \(\begin{align*} x-0.2y &= 1\\ -10x+2y &= 5 \end{align*}\) 17) \(\begin{align*} 3x+5y &= 9\\ 30x+50y &= -90 \end{align*}\) No solutions exist 18) \(\begin{align*} -3x+y &= 2\\ 12x-4y &= -8 \end{align*}\) 19) \(\begin{align*} \dfrac{1}{2}x+\dfrac{1}{3}y &= 16\\ \dfrac{1}{6}x+\dfrac{1}{4}y &= 9 \end{align*}\) \(\left ( \dfrac{72}{5},\dfrac{132}{5} \right )\) 20) \(\begin{align*} -\dfrac{1}{4}x+\dfrac{3}{2}y &= 11\\ -\dfrac{1}{8}x+\dfrac{1}{3}y &= 3 \end{align*}\) For the exercises 21-30, solve each system by addition.
21) \(\begin{align*} -2x+5y &= -42\\ 7x+2y &= 30 \end{align*}\) \((6,-6)\) 22) \(\begin{align*} 6x-5y &= -34\\ 2x+6y &= 4 \end{align*}\) 23) \(\begin{align*} 5x-y &= -2.6\\ -4x-6y &= 1.4 \end{align*}\) \(\left ( -\dfrac{1}{2},\dfrac{1}{10} \right )\) 24) \(\begin{align*} 7x-2y &= 3\\ 4x+5y &= 3.25 \end{align*}\) 25) \(\begin{align*} -x+2y &= -1\\ 5x-10y &= 6 \end{align*}\) 26) \(\begin{align*} 7x+6y &= 2\\ -28x-24y &= -8 \end{align*}\) 27) \(\begin{align*} \dfrac{5}{6}x+\dfrac{1}{4}y &= 0\\ \dfrac{1}{8}x-\dfrac{1}{2}y &= -\dfrac{43}{120} \end{align*}\) \(\left ( -\dfrac{1}{5},\dfrac{2}{3} \right )\) 28) \(\begin{align*} \dfrac{1}{3}x+\dfrac{1}{9}y &= \dfrac{2}{9}\\ -\dfrac{1}{2}x+\dfrac{4}{5}y &= -\dfrac{1}{3} \end{align*}\) 29) \(\begin{align*} -0.2x+0.4y &= 0.6\\ x-2y &= -3 \end{align*}\) \(\left ( x,\dfrac{x+3}{2} \right )\) 30) \(\begin{align*} -0.1x+0.2y &= 0.6\\ 5x-10y &= 1 \end{align*}\) For the exercises 31-40, solve each system by any method. 31) \(\begin{align*} 5x+9y &= 16\\ x+2y &= 4 \end{align*}\) 32) \(\begin{align*} 6x-8y &= -0.6\\ 3x+2y &= 0.9 \end{align*}\) 33) \(\begin{align*} 5x-2y &= 2.25\\ 7x-4y &= 3 \end{align*}\) \(\left ( \dfrac{1}{2},\dfrac{1}{8} \right )\) 34) \(\begin{align*} x-\dfrac{5}{12}y &= -\dfrac{55}{12}\\ -6x+\dfrac{5}{2}y &= \dfrac{55}{2} \end{align*}\) 35) \(\begin{align*} 7x-4y &= \dfrac{7}{6}\\ 2x+4y &= \dfrac{1}{3} \end{align*}\) \(\left ( \dfrac{1}{6},0 \right )\) 36) \(\begin{align*} 3x+6y &= 11\\ 2x+4y &= 9 \end{align*}\) 37) \(\begin{align*} \dfrac{7}{3}x-\dfrac{1}{6}y &= 2\\ -\dfrac{21}{6}x+\dfrac{3}{12}y &= -3 \end{align*}\) \((x,2(7x-6))\) 38) \(\begin{align*} \dfrac{1}{2}x+\dfrac{1}{3}y &= \dfrac{1}{3}\\ \dfrac{3}{2}x+\dfrac{1}{4}y &= -\dfrac{1}{8} \end{align*}\) 39) \(\begin{align*} 2.2x+1.3y &= -0.1\\ 4.2x+4.2y &= 2.1 \end{align*}\) 40) \(\begin{align*} 0.1x+0.2y &= 2\\ 0.35x-0.3y &= 0 \end{align*}\) For the exercises 41-45, graph the system of equations and state whether the system is consistent, inconsistent, or dependent and whether the system has one solution, no solution, or infinite solutions. 41) \(\begin{align*} 3x-y &= 0.6\\ x-2y &= 1.3 \end{align*}\) Consistent with one solution 42) \(\begin{align*} -x+2y &= 4\\ 2x-4y &= 1 \end{align*}\) 43) \(\begin{align*} x+2y &= 7\\ 2x+6y &= 12 \end{align*}\) 44) \(\begin{align*} 3x-5y &= 7\\ x-2y &= 3 \end{align*}\) 45) \(\begin{align*} 3x-2y &= 5\\ -9x+6y &= -15 \end{align*}\) Dependent with infinitely many solutions For the exercises 46-50, use the intersect function on a graphing device to solve each system. Round all answers to the nearest hundredth. 46) \(\begin{align*} 0.1x+0.2y &= 0.3\\ -0.3x+0.5y &= 1 \end{align*}\) 47) \(\begin{align*} -0.01x+0.12y &= 0.62\\ 0.15x+0.20y &= 0.52 \end{align*}\) \((-3.08,4.91)\) 48) \(\begin{align*} 0.5x+0.3y &= 4\\ 0.25x-0.9y &= 0.46 \end{align*}\) 49) \(\begin{align*} 0.15x+0.27y &= 0.39\\ -0.34x+0.56y &= 1.8 \end{align*}\) 50) \(\begin{align*} -0.71x+0.92y &= 0.13\\ 0.83x+0.05y &= 2.1 \end{align*}\) For the exercises 51-55, solve each system in terms of \(A, B, C, D,\) and \(F\) where \(A-F\) are nonzero numbers. Note that \(A\neq B\) and \(AE\neq BD\). 
51) \(\begin{align*} x+y &= A\\ x-y &= B \end{align*}\) \(\left ( \dfrac{A+B}{2},\dfrac{A-B}{2} \right )\) 52) \(\begin{align*} x+Ay &= 1\\ x+By &= 1 \end{align*}\) 53) \(\begin{align*} Ax+y &= 0\\ Bx+y &= 1 \end{align*}\) \(\left ( \dfrac{-1}{A-B},\dfrac{A}{A-B} \right )\) 54) \(\begin{align*} Ax+By &= C\\ x+y &= 1 \end{align*}\) 55) \(\begin{align*} Ax+By &= C\\ Dx+Ey &= F \end{align*}\) \(\left ( \dfrac{CE-BF}{BD-AE},\dfrac{AF-CD}{BD-AE} \right )\) For the exercises 56-60, solve for the desired quantity. 56) A stuffed animal business has a total cost of production \(C=12x+30\) and a revenue function \(R=20x\). Find the break-even point. 57) A fast-food restaurant has a cost of production \(C(x)=11x+120\) and a revenue function \(R(x)=5x\). When does the company start to turn a profit? They never turn a profit. 58) A cell phone factory has a cost of productiona \(C(x)=150x+10,000\) and a revenue function \(R(x)=200x\). What is the break-even point? 59) A musician charges \(C(x)=64x+20,000\), where \(x\) is the total number of attendees at the concert. The venue charges \(\$80\) per ticket. After how many people buy tickets does the venue break even, and what is the value of the total tickets sold at that point? \((1,250, 100,000)\) 60) A guitar factory has a cost of production \(C(x)=75x+50,000\). If the company needs to break even after \(150\) units sold, at what price should they sell each guitar? Round up to the nearest dollar, and write the revenue function. For the exercises 61-77, use a system of linear equations with two variables and two equations to solve. 61) Find two numbers whose sum is \(28\) and difference is \(13\). The numbers are \(7.5\) and \(20.5\) 62) A number is \(9\) more than another number. Twice the sum of the two numbers is \(10\). Find the two numbers. 63) The startup cost for a restaurant is \(\$120,000\), and each meal costs \(\$10\) for the restaurant to make. If each meal is then sold for \(\$15\), after how many meals does the restaurant break even? \(24,000\) 64) A moving company charges a flat rate of \(\$150\), and an additional \(\$5\) for each box. If a taxi service would charge \(\$20\) for each box, how many boxes would you need for it to be cheaper to use the moving company, and what would be the total cost? 65) A total of \(1,595\) first- and second-year college students gathered at a pep rally. The number of freshmen exceeded the number of sophomores by \(15\). How many freshmen and sophomores were in attendance? \(790\) sophomores, \(805\) freshman 66) \(276\) students enrolled in a freshman-level chemistry class. By the end of the semester, \(5\) times the number of students passed as failed. Find the number of students who passed, and the number of students who failed. 67) There were \(130\) faculty at a conference. If there were \(18\) more women than men attending, how many of each gender attended the conference? \(56\) men, \(74\) women 68) A jeep and BMW enter a highway running east-west at the same exit heading in opposite directions. The jeep entered the highway \(30\) minutes before the BMW did, and traveled \(7\) mph slower than the BMW. After \(2\) hours from the time the BMW entered the highway, the cars were \(306.5\) miles apart. Find the speed of each car, assuming they were driven on cruise control. 69) If a scientist mixed \(10\%\) saline solution with \(60\%\) saline solution to get \(25\) gallons of \(40\%\) saline solution, how many gallons of \(10\%\) and \(60\%\) solutions were mixed? 
\(10\) gallons of \(10\%\) solution, \(15\) gallons of \(60\%\) solution 70) An investor earned triple the profits of what she earned last year. If she made \(\$500,000.48\) total for both years, how much did she earn in profits each year? 71) An investor who dabbles in real estate invested \(1.1\) million dollars into two land investments. On the first investment, Swan Peak, her return was a \(110\%\) increase on the money she invested. On the second investment, Riverside Community, she earned \(50\%\) over what she invested. If she earned \(\$1\) million in profits, how much did she invest in each of the land deals? Swan Peak: \(\$750,000\), Riverside: \(\$350,000\) 72) If an investor invests a total of \(\$25,000\) into two bonds, one that pays \(3\%\) simple interest, and the other that pays \(2\dfrac{7}{8}\%\) interest, and the investor earns \(\$737.50\) annual interest, how much was invested in each account? 73) If an investor invests \(\$23,000\) into two bonds, one that pays \(4\%\) in simple interest, and the other paying \(2\%\) simple interest, and the investor earns \(\$710.00\) annual interest, how much was invested in each account? \(\$12,500\) in the first account, \(\$10,500\) in the second account. 74) CDs cost \(\$5.96\) more than DVDs at All Bets Are Off Electronics. How much would \(6\) CDs and \(2\) DVDs cost if \(5\) CDs and \(2\) DVDs cost \(\$127.73\)? 75) A store clerk sold \(60\) pairs of sneakers. The high-tops sold for \(\$98.99\) and the low-tops sold for \(\$129.99\). If the receipts for the two types of sales totaled \(\$6,404.40\), how many of each type of sneaker were sold? High-tops: \(45\), Low-tops: \(15\) 76) A concert manager counted \(350\) ticket receipts the day after a concert. The price for a student ticket was \(\$12.50\), and the price for an adult ticket was \(\$16.00\). The register confirms that \(\$5,075\) was taken in. How many student tickets and adult tickets were sold? 77) Admission into an amusement park for \(4\) children and \(2\) adults is \(\$116.90\). For \(6\) children and \(3\) adults, the admission is \(\$175.35\). Assuming a different price for children and adults, what is the price of the child's ticket and the price of the adult ticket? Infinitely many solutions. We need more information. 1) Can a linear system of three equations have exactly two solutions? Explain why or why not No, there can be only one, zero, or infinitely many solutions. 2) If a given ordered triple solves the system of equations, is that solution unique? If so, explain why. If not, give an example where it is not unique. 3) If a given ordered triple does not solve the system of equations, is there no solution? If so, explain why. If not, give an example. Not necessarily. There could be zero, one, or infinitely many solutions. For example, \((0,0,0)\) is not a solution to the system below, but that does not mean that it has no solution. \(\begin{align*} 2x+3y-6z &= 1\\ -4x-6y+12z &= -2\\ x+2y+5z &= 10 \end{align*}\) 4) Using the method of addition, is there only one way to solve the system? 5) Can you explain whether there can be only one method to solve a linear system of equations? If yes, give an example of such a system of equations. If not, explain why not. Every system of equations can be solved graphically, by substitution, and by addition. However, systems of three equations become very complex to solve graphically so other methods are usually preferable. 
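The answer above lists the standard hand methods (graphing, substitution, addition). Purely as a supplementary aid that is not part of the original exercise set, the snippet below shows how a three-variable system can be solved, and a candidate ordered triple checked, numerically; it uses the system from exercise 11 further below (whose stated answer is \((-1,4,2)\)) and assumes numpy is available.

```python
import numpy as np

# Coefficient matrix and right-hand side of the system in exercise 11 below:
#   3x - 4y + 2z = -15
#   2x + 4y +  z =  16
#   2x + 3y + 5z =  20
A = np.array([[3.0, -4.0, 2.0],
              [2.0,  4.0, 1.0],
              [2.0,  3.0, 5.0]])
b = np.array([-15.0, 16.0, 20.0])

print(np.linalg.solve(A, b))          # approximately [-1.  4.  2.], matching the stated answer

# Checking whether an ordered triple solves the system amounts to testing A @ triple == b.
triple = np.array([-1.0, 4.0, 2.0])
print(np.allclose(A @ triple, b))     # True
```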
For the exercises 6-10, determine whether the ordered triple given is the solution to the system of equations. 6) \(\begin{align*} 2x-6y+6z &= -12\\ x+4y+5z &= -1\\ -x+2y+3z &= -1 \end{align*}\; \; \text{ and }\; (0,1,-1)\) 7) \(\begin{align*} 6x-y+3z &= 6\\ 3x+5y+2z &= 0\\ x+y &= 0 \end{align*}\; \; \text{ and }\; (3,-3,-5)\) 8) \(\begin{align*} 6x-7y+z &= 2\\ -x-y+3z &= 4\\ 2x+y-z &= 1 \end{align*}\; \; \text{ and }\; (4,2,-6)\) 9) \(\begin{align*} x-y &= 0\\ x-z &= 5\\ x-y+z &= -1 \end{align*}\; \; \text{ and }\; (4,4,-1)\) 10) \(\begin{align*} -x-y+2z &= 3\\ 5x+8y-3z &= 4\\ -x+3y-5z &= -5 \end{align*}\; \; \text{ and }\; (4,1,-7)\) 11) \(\begin{align*} 3x-4y+2z &= -15\\ 2x+4y+z &= 16\\ 2x+3y+5z &= 20 \end{align*}\) \((-1,4,2)\) 12) \(\begin{align*} 5x-2y+3z &= 20\\ 2x-4y-3z &= -9\\ x+6y-8z &= 21 \end{align*}\) 13) \(\begin{align*} 5x+2y+4z &= 9\\ -3x+2y+z &= 10\\ 4x-3y+5z &= -3 \end{align*}\) \(\left ( -\dfrac{85}{107},\dfrac{312}{107},\dfrac{191}{107} \right )\) 14) \(\begin{align*} 4x-3y+5z &= 31\\ -x+2y+4z &= 20\\ x+5y-2z &= -29 \end{align*}\) 15) \(\begin{align*} 5x-2y+3z &= 4\\ -4x+6y-7z &= -1\\ 3x+2y-z &= 4 \end{align*}\) \(\left ( 1,\dfrac{1}{2},0 \right )\) 16) \(\begin{align*} 4x+6y+9z &= 4\\ -5x+2y-6z &= 3\\ 7x-4y+3z &= -3 \end{align*}\) For the exercises 17-45, solve each system by Gaussian elimination. 17) \(\begin{align*} 2x-y+3z &= 17\\ -5x+4y-2z &= -46\\ 2y+5z &= -7 \end{align*}\) \((4,-6,1)\) 18) \(\begin{align*} 5x-6y+3z &= 50\\ -x+4y &= 10\\ 2x-z &= 10 \end{align*}\) 19) \(\begin{align*} 2x+3y-6z &= 1\\ -4x-6y+12z &= -2\\ x+2y+5z &= 10 \end{align*}\) \(\left ( x,\dfrac{1}{27}(65-16x),\dfrac{x+28}{27} \right )\) 20) \(\begin{align*} 4x+6y-2z &= 8\\ 6x+9y-3z &= 12\\ -2x-3y+z &= -4 \end{align*}\) 21) \(\begin{align*} 2x+3y-4z &= 5\\ -3x+2y+z &= 11\\ -x+5y+3z &= 4 \end{align*}\) \(\left ( -\dfrac{45}{13},\dfrac{17}{13},-2 \right )\) 22) \(\begin{align*} 10x+2y-14z &= 8\\ -x-2y-4z &= -1\\ -12x-6y+6z &= -12 \end{align*}\) 23) \(\begin{align*} x+y+z &= 14\\ 2y+3z &= -14\\ -16y-24z &= -112 \end{align*}\) 24) \(\begin{align*} 5x-3y+4z &= -1\\ -4x+2y-3z &= 0\\ -x+5y+7z &= -11 \end{align*}\) 25) \(\begin{align*} x+y+z &= 0\\ 2x-y+3z &= 0\\ x-z &= 0 \end{align*}\) \((0,0,0)\) 26) \(\begin{align*} 3x+2y-5z &= 6\\ 5x-4y+3z &= -12\\ 4x+5y-2z &= 15 \end{align*}\) \(\left ( \dfrac{4}{7},-\dfrac{1}{7},-\dfrac{3}{7} \right )\) 28) \(\begin{align*} 3x-\dfrac{1}{2}y-z &= -\dfrac{1}{2}\\ 4x+z &= 3\\ -x+\dfrac{3}{2}y &= \dfrac{5}{2} \end{align*}\) 29) \(\begin{align*} 6x-5y+6z &= 38\\ \dfrac{1}{5}x-\dfrac{1}{2}y+\dfrac{3}{5}z &= 1\\ -4x-\dfrac{3}{2}y-z &= -74 \end{align*}\) \((7,20,16)\) 30) \(\begin{align*} \dfrac{1}{2}x-\dfrac{1}{5}y+\dfrac{2}{5}z &= -\dfrac{13}{10}\\ \dfrac{1}{4}x-\dfrac{2}{5}y-\dfrac{1}{5}z &= -\dfrac{7}{20}\\ -\dfrac{1}{2}x-\dfrac{3}{4}y-\dfrac{1}{2}z &= -\dfrac{5}{4} \end{align*}\) 31) \(\begin{align*} -\dfrac{1}{3}x-\dfrac{1}{2}y-\dfrac{1}{4}z &= \dfrac{3}{4}\\ -\dfrac{1}{2}x-\dfrac{1}{4}y-\dfrac{1}{2}z &= 2\\ -\dfrac{1}{4}x-\dfrac{3}{4}y-\dfrac{1}{2}z &= -\dfrac{1}{2} \end{align*}\) 32) \(\begin{align*} \dfrac{1}{2}x-\dfrac{1}{4}y+\dfrac{3}{4}z &= 0\\ \dfrac{1}{4}x-\dfrac{1}{10}y+\dfrac{2}{5}z &= -2\\ \dfrac{1}{8}x+\dfrac{1}{5}y-\dfrac{1}{8}z &= 2 \end{align*}\) 33) \(\begin{align*} \dfrac{4}{5}x-\dfrac{7}{8}y+\dfrac{1}{2}z &= 1\\ -\dfrac{4}{5}x-\dfrac{3}{4}y+\dfrac{1}{3}z &= -8\\ -\dfrac{2}{5}x-\dfrac{7}{8}y+\dfrac{1}{2}z &= -5 \end{align*}\) 34) \(\begin{align*} -\dfrac{1}{3}x-\dfrac{1}{8}y+\dfrac{1}{6}z &= -\dfrac{4}{3}\\ 
-\dfrac{2}{3}x-\dfrac{7}{8}y+\dfrac{1}{3}z &= -\dfrac{23}{3}\\ -\dfrac{1}{3}x-\dfrac{5}{8}y+\dfrac{5}{6}z &= 0 \end{align*}\) 35) \(\begin{align*} -\dfrac{1}{4}x-\dfrac{5}{4}y+\dfrac{5}{2}z &= -5\\ -\dfrac{1}{2}x-\dfrac{5}{3}y+\dfrac{5}{4}z &= \dfrac{55}{12}\\ -\dfrac{1}{3}x-\dfrac{1}{3}y+\dfrac{1}{3}z &= \dfrac{5}{3} \end{align*}\) \((-5,-5,-5)\) 36) \(\begin{align*} \dfrac{1}{40}x+\dfrac{1}{60}y+\dfrac{1}{80}z &= \dfrac{1}{100}\\ -\dfrac{1}{2}x-\dfrac{1}{3}y-\dfrac{1}{4}z &= -\dfrac{1}{5}\\ \dfrac{3}{8}x+\dfrac{3}{12}y+\dfrac{3}{16}z &= \dfrac{3}{20} \end{align*}\) 37) \(\begin{align*} 0.1x-0.2y+0.3z &= 2\\ 0.5x-0.1y+0.4z &= 8\\ 0.7x-0.2y+0.3z &= 8 \end{align*}\) \((10,10,10)\) 38) \(\begin{align*} 0.2x+0.1y-0.3z &= 0.2\\ 0.8x+0.4y-1.2z &= 0.1\\ 1.6x+0.8y-2.4z &= 0.2 \end{align*}\) 39) \(\begin{align*} 1.1x+0.7y-3.1z &= -1.79\\ 2.1x+0.5y-1.6z &= -0.13\\ 0.5x+0.4y-0.5z &= -0.07 \end{align*}\) \(\left ( \dfrac{1}{2},\dfrac{1}{5},\dfrac{4}{5} \right )\) 40) \(\begin{align*} 0.5x-0.5y+0.5z &= 10\\ 0.2x-0.2y+0.2z &= 4\\ 0.1x-0.1y+0.1z &= 2 \end{align*}\) 41) \(\begin{align*} 0.1x+0.2y+0.3z &= 0.37\\ 0.1x-0.2y-0.3z &= -0.27\\ 0.5x-0.1y-0.3z &= -0.03 \end{align*}\) 42) \(\begin{align*} 0.5x-0.5y-0.3z &= 0.13\\ 0.4x-0.1y-0.3z &= 0.11\\ 0.2x-0.8y-0.9z &= -0.32 \end{align*}\) 43) \(\begin{align*} 0.5x+0.2y-0.3z &= 1\\ 0.4x-0.6y+0.7z &= 0.8\\ 0.3x-0.1y-0.9z &= 0.6 \end{align*}\) 44) \(\begin{align*} 0.3x+0.3y+0.5z &= 0.6\\ 0.4x+0.4y+0.4z &= 1.8\\ 0.4x+0.2y+0.1z &= 1.6 \end{align*}\) 45) \(\begin{align*} 0.8x+0.8y+0.8z &= 2.4\\ 0.3x-0.5y+0.2z &= 0\\ 0.1x+0.2y+0.3z &= 0.6 \end{align*}\) For the exercises 46-50, solve the system for \(x,y,\) and \(z\). 46) \(\begin{align*} x+y+z &= 3\\ \dfrac{x-1}{2}+\dfrac{y-3}{2}+\dfrac{z+1}{2} &= 0\\ \dfrac{x-2}{3}+\dfrac{y+4}{3}+\dfrac{z-3}{3} &= \dfrac{2}{3} \end{align*}\) 47) \(\begin{align*} 5x-3y-\dfrac{z+1}{2} &= \dfrac{1}{2}\\ 6x+\dfrac{y-9}{2}+2z &= -3\\ \dfrac{x+8}{2}-4y+z &= 4\end{align*}\) \(\left ( \dfrac{128}{557},\dfrac{23}{557},\dfrac{428}{557} \right )\) 48) \(\begin{align*} \dfrac{x+4}{7}-\dfrac{y-1}{6}+\dfrac{z+2}{3} &= 1\\ \dfrac{x-2}{4}+\dfrac{y+1}{8}-\dfrac{z+8}{2} &= 0\\ \dfrac{x+6}{3}-\dfrac{y+2}{3}+\dfrac{z+4}{2} &= 3 \end{align*}\) 49) \(\begin{align*} \dfrac{x-3}{6}+\dfrac{y+2}{2}-\dfrac{z-3}{3} &= 2\\ \dfrac{x+2}{4}+\dfrac{y-5}{2}+\dfrac{z+4}{2} &= 1\\ \dfrac{x+6}{2}-\dfrac{y-3}{3}+z+1 &= 9 \end{align*}\) 50) \(\begin{align*} \dfrac{x-1}{3}+\dfrac{y+3}{4}+\dfrac{z+2}{6} &= 1\\ 4x+3y-2z &= 11\\ 0.02x+0.015y-0.01z &= 0.065 \end{align*}\) 51) Three even numbers sum up to \(108\). The smaller is half the larger and the middle number is \(\dfrac{3}{4}\) the larger. What are the three numbers? \(24, 36, 48\) 52) Three numbers sum up to \(147\). The smallest number is half the middle number, which is half the largest number. What are the three numbers? 53) At a family reunion, there were only blood relatives, consisting of children, parents, and grandparents, in attendance. There were \(400\) people total. There were twice as many parents as grandparents, and 50 more children than parents. How many children, parents, and grandparents were in attendance? \(70\) grandparents, \(140\) parents, \(190\) children 54) An animal shelter has a total of \(350\) animals comprised of cats, dogs, and rabbits. If the number of rabbits is \(5\) less than one-half the number of cats, and there are \(20\) more cats than dogs, how many of each animal are at the shelter? 55) Your roommate, Sarah, offered to buy groceries for you and your other roommate. 
The total bill was \(\$82\). She forgot to save the individual receipts but remembered that your groceries were \(\$0.05\) cheaper than half of her groceries, and that your other roommate's groceries were \(\$2.10\) more than your groceries. How much was each of your share of the groceries? Your share was \(\$19.95\), Sarah's share was \(\$40\), and your other roommate's share was \(\$22.05\). 56) Your roommate, John, offered to buy household supplies for you and your other roommate. You live near the border of three states, each of which has a different sales tax. The total amount of money spent was \(\$100.75\). Your supplies were bought with \(5\%\) tax, John's with \(8\%\) tax, and your third roommate's with \(9\%\) sales tax. The total amount of money spent without taxes is \(\$93.50\). If your supplies before tax were \(\$1\) more than half of what your third roommate's supplies were before tax, how much did each of you spend? Give your answer both with and without taxes. 57) Three coworkers work for the same employer. Their jobs are warehouse manager, office manager, and truck driver. The sum of the annual salaries of the warehouse manager and office manager is \(\$82,000\). The office manager makes \(\$4,000\) more than the truck driver annually. The annual salaries of the warehouse manager and the truck driver total \(\$78,000\). What is the annual salary of each of the co-workers? There are infinitely many solutions; we need more information 58) At a carnival, \(\$2,914.25\) in receipts were taken at the end of the day. The cost of a child's ticket was \(\$20.50\), an adult ticket was \(\$29.75\), and a senior citizen ticket was \(\$15.25\). There were twice as many senior citizens as adults in attendance, and \(20\) more children than senior citizens. How many children, adult, and senior citizen tickets were sold? 59) A local band sells out for their concert. They sell all \(1,175\) tickets for a total purse of \(\$28,112.50\). The tickets were priced at \(\$20\) for student tickets, \(\$22.50\) for children, and \(\$29\) for adult tickets. If the band sold twice as many adult as children tickets, how many of each type was sold? \(500\) students, \(225\) children, and \(450\) adults 60) In a bag, a child has \(325\) coins worth \(\$19.50\). There were three types of coins: pennies, nickels, and dimes. If the bag contained the same number of nickels as dimes, how many of each type of coin was in the bag? 61) Last year, at Haven's Pond Car Dealership, for a particular model of BMW, Jeep, and Toyota, one could purchase all three cars for a total of \(\$140,000\). This year, due to inflation, the same cars would cost \(\$151,830\). The cost of the BMW increased by \(8\%\), the Jeep by \(5\%\), and the Toyota by \(12\%\). If the price of last year's Jeep was \(\$7,000\) less than the price of last year's BMW, what was the price of each of the three cars last year? The BMW was \(\$49,636\), the Jeep was \(\$42,636\), and the Toyota was \(\$47,727\). 62) A recent college graduate took advantage of his business education and invested in three investments immediately after graduating. He invested \(\$80,500\) into three accounts, one that paid \(4\%\) simple interest, one that paid \(3\dfrac{1}{8}\%\) simple interest, and one that paid \(2\dfrac{1}{2}\%\) simple interest. He earned \(\$2,670\) interest at the end of one year. If the amount of the money invested in the second account was four times the amount invested in the third account, how much was invested in each account? 
63) You inherit one million dollars. You invest it all in three accounts for one year. The first account pays \(3\%\) compounded annually, the second account pays \(4\%\) compounded annually, and the third account pays \(2\%\) compounded annually. After one year, you earn \(\$34,000\) in interest. If you invest four times the money into the account that pays \(3\%\) compared to \(2\%\), how much did you invest in each account? \(\$400,000\) in the account that pays \(3\%\) interest, \(\$500,000\) in the account that pays \(4\%\) interest, and \(\$100,000\) in the account that pays \(2\%\) interest. 64) You inherit one hundred thousand dollars. You invest it all in three accounts for one year. The first account pays \(4\%\) compounded annually, the second account pays \(3\%\) compounded annually, and the third account pays \(2\%\) compounded annually. After one year, you earn \(\$3,650\) in interest. If you invest five times the money in the account that pays \(4\%\) compared to \(3\%\), how much did you invest in each account? 65) The top three countries in oil consumption in a certain year are as follows: the United States, Japan, and China. In millions of barrels per day, the three top countries consumed \(39.8\%\) of the world's consumed oil. The United States consumed \(0.7\%\) more than four times China's consumption. The United States consumed \(5\%\) more than triple Japan's consumption. What percent of the world oil consumption did the United States, Japan, and China consume? The United States consumed \(26.3\%\), Japan \(7.1\%\), and China \(6.4\%\) of the world's oil. 66) The top three countries in oil production in the same year are Saudi Arabia, the United States, and Russia. In millions of barrels per day, the top three countries produced \(31.4\%\) of the world's produced oil. Saudi Arabia and the United States combined for \(22.1\%\) of the world's production, and Saudi Arabia produced \(2\%\) more oil than Russia. What percent of the world oil production did Saudi Arabia, the United States, and Russia produce? 67) The top three sources of oil imports for the United States in the same year were Saudi Arabia, Mexico, and Canada. The three top countries accounted for \(47\%\) of oil imports. The United States imported \(1.8\%\) more from Saudi Arabia than they did from Mexico, and \(1.7\%\) more from Saudi Arabia than they did from Canada. What percent of the United States oil imports were from these three countries? Saudi Arabia imported \(16.8\%\), Canada imported \(15.1\%\), and Mexico \(15.0\%\) 68) The top three oil producers in the United States in a certain year are the Gulf of Mexico, Texas, and Alaska. The three regions were responsible for \(64\%\) of the United States oil production. The Gulf of Mexico and Texas combined for \(47\%\) of oil production. Texas produced \(3\%\) more than Alaska. What percent of United States oil production came from these regions? 69) At one time, in the United States, \(398\) species of animals were on the endangered species list. The top groups were mammals, birds, and fish, which comprised \(55\%\) of the endangered species. Birds accounted for \(0.7\%\) more than fish, and fish accounted for \(1.5\%\) more than mammals. What percent of the endangered species came from mammals, birds, and fish? Birds were \(19.3\%\), fish were \(18.6\%\), and mammals were \(17.1\%\) of endangered species 70) Meat consumption in the United States can be broken into three categories: red meat, poultry, and fish. 
If fish makes up \(4\%\) less than one-quarter of poultry consumption, and red meat consumption is \(18.2\%\) higher than poultry consumption, what are the percentages of meat consumption? 1) Explain whether a system of two nonlinear equations can have exactly two solutions. What about exactly three? If not, explain why not. If so, give an example of such a system, in graph form, and explain why your choice gives two or three answers. A nonlinear system could be representative of two circles that overlap and intersect in two locations, hence two solutions. A nonlinear system could be representative of a parabola and a circle, where the vertex of the parabola meets the circle and the branches also intersect the circle, hence three solutions. 2) When graphing an inequality, explain why we only need to test one point to determine whether an entire region is the solution? 3) When you graph a system of inequalities, will there always be a feasible region? If so, explain why. If not, give an example of a graph of inequalities that does not have a feasible region. Why does it not have a feasible region? No. There does not need to be a feasible region. Consider a system that is bounded by two parallel lines. One inequality represents the region above the upper line; the other represents the region below the lower line. In this case, no points in the plane are located in both regions; hence there is no feasible region. 4) If you graph a revenue and cost function, explain how to determine in what regions there is profit. 5) If you perform your break-even analysis and there is more than one solution, explain how you would determine which x-values are profit and which are not. Choose any number between each solution and plug into \(C(x)\) and \(R(x)\). If \(C(x)<R(x)\), then there is profit. For the exercises 6-10, solve the system of nonlinear equations using substitution. 6) \(\begin{align*} x+y &= 4\\ x^2 + y^2 &= 9 \end{align*}\) 7) \(\begin{align*} y &= x-3\\ x^2 + y^2 &= 9 \end{align*}\) \((0,-3)\), \((3,0)\) 8) \(\begin{align*} y &= x\\ x^2 + y^2 &= 9 \end{align*}\) 9) \(\begin{align*} y &= -x\\ x^2 + y^2 &= 9 \end{align*}\) \(\left ( -\dfrac{3\sqrt{2}}{2},\dfrac{3\sqrt{2}}{2} \right )\), \(\left ( \dfrac{3\sqrt{2}}{2},-\dfrac{3\sqrt{2}}{2} \right )\) 10) \(\begin{align*} x &= 2\\ x^2 - y^2 &= 9 \end{align*}\) For the exercises 11-15, solve the system of nonlinear equations using elimination. 11) \(\begin{align*} 4x^2 - 9y^2 &= 36\\ 4x^2 + 9y^2 &= 36 \end{align*}\) \((-3,0)\), \((3,0)\) 12) \(\begin{align*} x^2 + y^2 &= 25\\ x^2 - y^2 &= 1 \end{align*}\) 13) \(\begin{align*} 2x^2 + 4y^2 &= 4\\ 2x^2 - 4y^2 &= 25x-10 \end{align*}\) \(\left ( \dfrac{1}{4},-\dfrac{\sqrt{62}}{8} \right )\), \(\left ( \dfrac{1}{4},\dfrac{\sqrt{62}}{8} \right )\) 14) \(\begin{align*} y^2 - x^2 &= 9\\ 3x^2 + 2y^2 &= 8 \end{align*}\) 15) \(\begin{align*} x^2 + y^2+\dfrac{1}{16} &= 2500\\ y &= 2x^2 \end{align*}\) \(\left ( -\dfrac{\sqrt{398}}{4},\dfrac{199}{4} \right )\), \(\left ( \dfrac{\sqrt{398}}{4},\dfrac{199}{4} \right )\) For the exercises 16-23, use any method to solve the system of nonlinear equations. 
16) \(\begin{align*} -2x^2+y &= -5\\ 6x-y &= 9 \end{align*}\) 17) \(\begin{align*} -x^2+y &= 2\\ -x+y &= 2 \end{align*}\) \((0,2)\), \((1,3)\) 18) \(\begin{align*} x^2+y^2 &= 1\\ y &= 20x^2-1 \end{align*}\) 19) \(\begin{align*} x^2+y^2 &= 1\\ y &= -x^2 \end{align*}\) \(\left ( -\sqrt{\dfrac{1}{2}(\sqrt{5}-1)},\dfrac{1}{2}\left (1-\sqrt{5} \right ) \right )\), \(\left ( \sqrt{\dfrac{1}{2}(\sqrt{5}-1)},\dfrac{1}{2}\left (1-\sqrt{5} \right ) \right )\) 20) \(\begin{align*} 2x^3-x^2 &= y\\ y &= \dfrac{1}{2} -x \end{align*}\) 21) \(\begin{align*} 9x^2+25y^2 &= 225\\ (x-6)^2+y^2 &= 1 \end{align*}\) \((5,0)\) 22) \(\begin{align*} x^4-x^2 &= y\\ x^2+y &= 0 \end{align*}\) 23) \(\begin{align*} 2x^3-x^2 &= y\\ x^2+y &= 0 \end{align*}\) For the exercises 24-38, use any method to solve the nonlinear system. 24) \(\begin{align*} x^2+y^2 &= 9\\ y &= 3-x^2 \end{align*}\) 25) \(\begin{align*} x^2-y^2 &= 9\\ x &= 3 \end{align*}\) 26) \(\begin{align*} x^2-y^2 &= 9\\ y &= 3 \end{align*}\) 27) \(\begin{align*} x^2-y^2 &= 9\\ x-y &= 0 \end{align*}\) 28) \(\begin{align*} -x^2+y &= 2\\ -4x+y &= -1 \end{align*}\) 29) \(\begin{align*} -x^2+y &= 2\\ 2y &= -x \end{align*}\) 30) \(\begin{align*} x^2+y^2 &= 25\\ x^2-y^2 &= 36 \end{align*}\) 31) \(\begin{align*} x^2+y^2 &= 1\\ y^2 &= x^2 \end{align*}\) \(\left ( -\dfrac{\sqrt{2}}{2},-\dfrac{\sqrt{2}}{2} \right )\), \(\left ( -\dfrac{\sqrt{2}}{2},\dfrac{\sqrt{2}}{2} \right )\), \(\left ( \dfrac{\sqrt{2}}{2},-\dfrac{\sqrt{2}}{2} \right )\), \(\left ( \dfrac{\sqrt{2}}{2},\dfrac{\sqrt{2}}{2} \right )\) 32) \(\begin{align*} 16x^2-9y^2+144 &= 0\\ y^2 + x^2 &= 16 \end{align*}\) 33) \(\begin{align*} 3x^2-y^2 &= 12\\ (x-1)^2 + y^2 &= 1 \end{align*}\) 35) \(\begin{align*} 3x^2-y^2 &= 12\\ x^2 + y^2 &= 16 \end{align*}\) \((-\sqrt{7},-3)\), \((-\sqrt{7},3)\), \((\sqrt{7},-3)\), \((\sqrt{7},3)\) 36) \(\begin{align*} x^2-y^2-6x-4y-11 &= 0\\ -x^2 + y^2 &= 5 \end{align*}\) 37) \(\begin{align*} x^2+y^2-6y &= 7\\ x^2 + y &= 1 \end{align*}\) \(\left ( -\sqrt{\dfrac{1}{2}(\sqrt{73}-5)},\dfrac{1}{2}\left (7-\sqrt{73} \right ) \right )\), \(\left ( \sqrt{\dfrac{1}{2}(\sqrt{73}-5)},\dfrac{1}{2}\left (7-\sqrt{73} \right ) \right )\) 38) \(\begin{align*} x^2+y^2 &= 6\\ xy &= 1 \end{align*}\) For the exercises 39-40, graph the inequality. 39) \(x^2+y<9\) 40) \(x^2+y^2<4\) For the exercises 41-45, graph the system of inequalities. Label all points of intersection. 41) \(\begin{align*} x^2 + y &<1 \\ y &>2x \end{align*}\) 42) \(\begin{align*} x^2 + y &<-5 \\ y &>5x+10 \end{align*}\) 43) \(\begin{align*} x^2 + y^2 &<25 \\ 3x^2 - y^2 &>12 \end{align*}\) 44) \(\begin{align*} x^2 - y^2 &>-4 \\ x^2 + y^2 &<12 \end{align*}\) 45) \(\begin{align*} x^2 + 3y^2 &>16 \\ 3x^2 - y^2 &<1 \end{align*}\) 46) \(\begin{align*} y &\geq e^x \\ y &\leq \ln (x)+5 \end{align*}\) 47) \(\begin{align*} y &\leq -\log (x)\\ y &\leq e^x \end{align*}\) For the exercises 48-52, find the solutions to the nonlinear equations with two variables. 
48) \(\begin{align*} \dfrac{4}{x^2} + \dfrac{1}{y^2} &= 24\\ \dfrac{5}{x^2} - \dfrac{2}{y^2} + 4 &= 0 \end{align*}\) 49) \(\begin{align*} \dfrac{6}{x^2} - \dfrac{1}{y^2} &= 8\\ \dfrac{1}{x^2} - \dfrac{6}{y^2} &= \dfrac{1}{8} \end{align*}\) \(\left ( -2\sqrt{\dfrac{70}{383}},-2\sqrt{\dfrac{35}{29}} \right )\), \(\left ( -2\sqrt{\dfrac{70}{383}},2\sqrt{\dfrac{35}{29}} \right )\), \(\left ( 2\sqrt{\dfrac{70}{383}},-2\sqrt{\dfrac{35}{29}} \right )\), \(\left ( 2\sqrt{\dfrac{70}{383}},2\sqrt{\dfrac{35}{29}} \right )\) 50) \(\begin{align*} x^2 - xy + y^2 - 2 &= 0\\ x+3y &= 4 \end{align*}\) 51) \(\begin{align*} x^2 - xy - 2y^2 - 6 &= 0\\ x^2 + y^2 &= 1 \end{align*}\) No Solution Exists 52) \(\begin{align*} x^2 + 4xy - 2y^2 - 6 &= 0\\ x &= y+2 \end{align*}\) For the exercises 53-54, solve the system of inequalities. Use a calculator to graph the system to confirm the answer. 53) \(\begin{align*} xy &< 1\\ y &> \sqrt{x} \end{align*}\) \(x=0\), \(y>0\) and \(0<x<1\), \(\sqrt{x} < y < \dfrac{1}{x}\) 54) \(\begin{align*} x^2 + y &< 3\\ y &> 2x \end{align*}\) For the exercises 55-, construct a system of nonlinear equations to describe the given behavior, then solve for the requested solutions. 55) Two numbers add up to \(300\). One number is twice the square of the other number. What are the numbers? 56) The squares of two numbers add to \(360\). The second number is half the value of the first number squared. What are the numbers? 57) A laptop company has discovered their cost and revenue functions for each day: \(C(x)=3x^2-10x+200\) and \(R(x)=-2x^2+100x+50\). If they want to make a profit, what is the range of laptops per day that they should produce? Round to the nearest number which would generate profit. \(2\) - \(20\) computers 58) A cell phone company has the following cost and revenue functions: \(C(x)=8x^2-600x+21,500\) and \(R(x)=-3x^2+480x\). What is the range of cell phones they should produce each day so there is profit? Round to the nearest number that generates profit. 1) Can any quotient of polynomials be decomposed into at least two partial fractions? If so, explain why, and if not, give an example of such a fraction. No, a quotient of polynomials can only be decomposed if the denominator can be factored. For example, \(\dfrac{1}{x^2+1}\) cannot be decomposed because the denominator cannot be factored. 2) Can you explain why a partial fraction decomposition is unique? (Hint: Think about it as a system of equations.) 3) Can you explain how to verify a partial fraction decomposition graphically? Graph both sides and ensure they are equal. 4) You are unsure if you correctly decomposed the partial fraction correctly. Explain how you could double-check your answer. 5) Once you have a system of equations generated by the partial fraction decomposition, can you explain another method to solve it? For example if you had \(\dfrac{7x+13}{3x^2+8x+15}=\dfrac{A}{x+1}+\dfrac{B}{3x+5}\) we eventually simplify to \(7x+13=A(3x+5)+B(x+1)\). Explain how you could intelligently choose an \(x\)-value that will eliminate either \(A\) or \(B\) and solve for \(A\) and \(B\). If we choose \(x=-1\), then the \(B\)-term disappears, letting us immediately know that \(A=3\). We could alternatively plug in \(x=-\dfrac{5}{3}\), giving us a \(B\)-value of \(-2\). For the exercises 6-19, find the decomposition of the partial fraction for the nonrepeating linear factors. 
6) \(\dfrac{5x+16}{x^2+10x+24}\) 7) \(\dfrac{3x-79}{x^2-5x-24}\) \(\dfrac{8}{x+3}-\dfrac{5}{x-8}\) 8) \(\dfrac{-x-24}{x^2-2x-24}\) 9) \(\dfrac{10x+47}{x^2+7x+10}\) \(\dfrac{1}{x+5}+\dfrac{9}{x+2}\) 10) \(\dfrac{x}{6x^2+25x+25}\) 11) \(\dfrac{32x-11}{20x^2-13x+2}\) \(\dfrac{3}{5x-2}+\dfrac{4}{4x-1}\) 12) \(\dfrac{x+1}{x^2+7x+10}\) 13) \(\dfrac{5x}{x^2-9}\) \(\dfrac{5}{2(x+3)}+\dfrac{5}{2(x-3)}\) 14) \(\dfrac{10x}{x^2-25}\) \(\dfrac{3}{x+2}+\dfrac{3}{x-2}\) 16) \(\dfrac{2x-3}{x^2-6x+5}\) 17) \(\dfrac{4x-1}{x^2-x-6}\) \(\dfrac{9}{5(x+2)}+\dfrac{11}{5(x-3)}\) 18) \(\dfrac{4x+3}{x^2+8x+15}\) \(\dfrac{8}{x-3}-\dfrac{5}{x-2}\) For the exercises 20-30, find the decomposition of the partial fraction for the repeating linear factors. 20) \(\dfrac{-5x-19}{(x+4)^2}\) 21) \(\dfrac{x}{(x-2)^2}\) \(\dfrac{1}{x-2}-\dfrac{2}{(x-2)^2}\) 22) \(\dfrac{7x+14}{(x+3)^2}\) 23) \(\dfrac{-24x-27}{(4x+5)^2}\) \(-\dfrac{6}{4x+5}+\dfrac{3}{(4x+5)^2}\) 24) \(\dfrac{-24x-27}{(6x-7)^2}\) 25) \(\dfrac{5-x}{(x-7)^2}\) \(-\dfrac{1}{x-7}-\dfrac{2}{(x-7)^2}\) 26) \(\dfrac{5x+14}{2x^2+12x+18}\) 27) \(\dfrac{5x^2+20x+8}{2x(x+1)^2}\) \(\dfrac{4}{x}-\dfrac{3}{2(x+1)}+\dfrac{7}{2(x+1)^2}\) 28) \(\dfrac{4x^2+55x+25}{5x(3x+5)^2}\) 29) \(\dfrac{54x^3+127x^2+80x+16}{2x^2(3x+2)^2}\) \(\dfrac{4}{x}+\dfrac{2}{x^2}-\dfrac{3}{3x+2}+\dfrac{7}{2(3x+2)^2}\) 30) \(\dfrac{x^3-5x^2+12x+144}{x^2(x^2+12x+36)}\) For the exercises 31-43, find the decomposition of the partial fraction for the irreducible nonrepeating quadratic factor. 31) \(\dfrac{4x^2+6x+11}{(x+2)(x^2+x+3)}\) \(\dfrac{x+1}{x^2+x+3}+\dfrac{3}{(x+2)}\) 32) \(\dfrac{4x^2+9x+23}{(x-1)(x^2+6x+11)}\) 33) \(\dfrac{-2x^2+10x+4}{(x-1)(x^2+3x+8)}\) \(\dfrac{4-3x}{x^2+3x+8}+\dfrac{1}{(x-1)}\) 34) \(\dfrac{x^2+3x+1}{(x+1)(x^2+5x-2)}\) 35) \(\dfrac{4x^2+17x-1}{(x+3)(x^2+6x+1)}\) \(\dfrac{2x-1}{x^2+6x+1}+\dfrac{2}{(x+3)}\) 36) \(\dfrac{4x^2}{(x+5)(x^2+7x-5)}\) 37) \(\dfrac{4x^2+x+3}{x^3 - 1}\) \(\dfrac{1}{x^2+x+1}+\dfrac{4}{(x-1)}\) 38) \(\dfrac{-5x^2+18x-4}{x^3 + 8}\) 39) \(\dfrac{3x^2-7x+33}{x^3 + 27}\) \(\dfrac{2}{x^2-3x+9}+\dfrac{3}{(x+3)}\) 40) \(\dfrac{x^2+2x+40}{x^3 - 125}\) 41) \(\dfrac{4x^2+4x+12}{8x^3 - 27}\) \(-\dfrac{1}{4x^2+6x+9}+\dfrac{1}{(2x-3)}\) 42) \(\dfrac{-50x^2+5x-3}{125x^3 - 1}\) 43) \(\dfrac{-2x^3-30x^2+36x+216}{x^4 + 216x}\) \(\dfrac{1}{x}+\dfrac{1}{x+6}-\dfrac{4x}{x^2-6x+36}\) For the exercises 44-54, find the decomposition of the partial fraction for the irreducible repeating quadratic factor. 44) \(\dfrac{3x^3+2x^2+14x+15}{(x^2 + 4)^2}\) 45) \(\dfrac{x^3+6x^2+5x+9}{(x^2 + 1)^2}\) \(\dfrac{x+6}{x^2+1}+\dfrac{4x+3}{(x^2+1)^2}\) 46) \(\dfrac{x^3-x^2+x-1}{(x^2 - 3)^2}\) 47) \(\dfrac{x^2+5x+5}{(x+2)^2}\) \(\dfrac{x+1}{x+2}+\dfrac{2x+3}{(x+2)^2}\) 48) \(\dfrac{x^3+2x^2+4x}{(x^2+2x+9)^2}\) 49) \(\dfrac{x^2+25}{(x^2+3x+25)^2}\) \(\dfrac{1}{x^2+3x+25}-\dfrac{3x}{(x^2+3x+25)^2}\) 50) \(\dfrac{2x^3+11x+7x+70}{(2x^2+x+14)^2}\) 51) \(\dfrac{5x+2}{x(x^2+4)^2}\) \(\dfrac{1}{8x}-\dfrac{x}{8(x^2+4)}+\dfrac{10-x}{8(x^2+4)^2}\) 52) \(\dfrac{x^4+x^3+8x^2+6x+36}{x(x^2+6)^2}\) 53) \(\dfrac{2x-9}{(x^2-x)^2}\) \(-\dfrac{16}{x}-\dfrac{9}{x^2}+\dfrac{16}{x-1}-\dfrac{7}{(x-1)^2}\) 54) \(\dfrac{5x^3-2x+1}{(x^2+2x)^2}\) For the exercises 55-56, find the partial fraction expansion. 55) \(\dfrac{x^2+4}{(x+1)^3}\) \(\dfrac{1}{x+1}-\dfrac{2}{(x+1)^2}+\dfrac{5}{(x+1)^3}\) 56) \(\dfrac{x^3-4x^2+5x+4}{(x-2)^3}\) For the exercises 57-59, perform the operation and then find the partial fraction decomposition. 
57) \(\dfrac{7}{x+8}+\dfrac{5}{x-2}-\dfrac{x-1}{x^2-6x-16}\) \(\dfrac{5}{x-2}-\dfrac{3}{10(x+2)}+\dfrac{7}{x+8}-\dfrac{7}{10(x-8)}\) 58) \(\dfrac{1}{x-4}-\dfrac{3}{x+6}-\dfrac{2x+7}{x^2+2x-24}\) 59) \(\dfrac{2x}{x^2-16}-\dfrac{1-2x}{x^2+6x+8}-\dfrac{x-5}{x^2-4x}\) \(-\dfrac{5}{4x}-\dfrac{5}{2(x+2)}+\dfrac{11}{2(x+4)}+\dfrac{5}{4(x+4)}\) 1) Can we add any two matrices together? If so, explain why; if not, explain why not and give an example of two matrices that cannot be added together. No, they must have the same dimensions. An example would include two matrices of different dimensions. One cannot add the following two matrices because the first is a \(2\times 2\) matrix and the second is a \(2\times 3\). \(\begin{bmatrix} 1 & 2\\ 3 & 4 \end{bmatrix} + \begin{bmatrix} 6 & 5 & 4\\ 3 & 2 & 1 \end{bmatrix}\) has no sum. 2) Can we multiply any column matrix by any row matrix? Explain why or why not. 3) Can both the products \(AB\) and \(BA\) be defined? If so, explain how; if not, explain why. Yes, if the dimensions of \(A\) are \(m\times n\) and the dimensions of \(B\) are \(n\times m\) both products will be defined. 4) Can any two matrices of the same size be multiplied? If so, explain why, and if not, explain why not and give an example of two matrices of the same size that cannot be multiplied together. 5) Does matrix multiplication commute? That is, does \(AB=BA\)? If so, prove why it does. If not, explain why it does not. Not necessarily. To find \(AB\), we multiply the first row of \(A\) by the first column of \(B\) to get the first entry of \(AB\). To find \(BA\), we multiply the first row of \(B\) by the first column of \(A\) to get the first entry of \(BA\). Thus, if those are unequal, then the matrix multiplication does not commute. For the exercises 6-11, use the matrices below and perform the matrix addition or subtraction. Indicate if the operation is undefined. \[A=\begin{bmatrix} 1 & 3\\ 0 & 7 \end{bmatrix}, B=\begin{bmatrix} 2 & 14\\ 22 & 6 \end{bmatrix}, C=\begin{bmatrix} 1 & 5\\ 8 & 92\\ 12 & 6 \end{bmatrix}, D=\begin{bmatrix} 10 & 14\\ 7 & 2\\ 5 & 61 \end{bmatrix}, E=\begin{bmatrix} 6 & 12\\ 14 & 5 \end{bmatrix}, F=\begin{bmatrix} 0 & 9\\ 78 & 17\\ 15 & 4 \end{bmatrix} \nonumber\] 6) \(A+B\) 7) \(C+D\) \(\begin{bmatrix} 11 & 19\\ 15 & 94\\ 17 & 67 \end{bmatrix}\) 8) \(A+C\) 9) \(B-E\) \(\begin{bmatrix} -4 & 2\\ 8 & 1 \end{bmatrix}\) 10) \(C+F\) 11) \(D-B\) Undefined; dimensions do not match For the exercises 12-17, use the matrices below to perform scalar multiplication. \[A=\begin{bmatrix} 4 & 6\\ 13 & 12 \end{bmatrix}, B=\begin{bmatrix} 3 & 9\\ 21 & 12\\ 0 & 64 \end{bmatrix}, C=\begin{bmatrix} 16 & 3 & 7 & 18\\ 90 & 5 & 3 & 29 \end{bmatrix}, D=\begin{bmatrix} 18 & 12 & 13\\ 8 & 14 & 6\\ 7 & 4 & 21 \end{bmatrix} \nonumber\] 12) \(5A\) 13) \(3B\) \(\begin{bmatrix} 9 & 27\\ 63 & 36\\ 0 & 192 \end{bmatrix}\) 14) \(-2B\) 15) \(-4C\) \(\begin{bmatrix} -64 & -12 & -28 & -72\\ -360 & -20 & -12 & -116 \end{bmatrix}\) 16) \(\dfrac{1}{2}C\) 17) \(100D\) \(\begin{bmatrix} 1,800 & 1,200 & 1,300\\ 800 & 1,400 & 600\\ 700 & 400 & 2,100 \end{bmatrix}\) For the exercises 18-23, use the matrices below to perform matrix multiplication. 
\[A=\begin{bmatrix} -1 & 5\\ 3 & 2 \end{bmatrix}, B=\begin{bmatrix} 3 & 6 & 4\\ -8 & 0 & 12 \end{bmatrix}, C=\begin{bmatrix} 4 & 10\\ -2 & 6\\ 5 & 9 \end{bmatrix}, D=\begin{bmatrix} 2 & -3 & 12\\ 9 & 3 & 1\\ 0 & 8 & -10 \end{bmatrix} \nonumber\] 18) \(AB\) 19) \(BC\) \(\begin{bmatrix} 20 & 102\\ 28 & 28 \end{bmatrix}\) 20) \(CA\) 21) \(BD\) \(\begin{bmatrix} 60 & 41 & 2\\ -16 & 120 & -216 \end{bmatrix}\) 22) \(DC\) 23) \(CB\) \(\begin{bmatrix} -68 & 24 & 136\\ -54 & -12 & 64\\ -57 & 30 & 128 \end{bmatrix}\) For the exercises 24-29, use the matrices below to perform the indicated operation if possible. If not possible, explain why the operation cannot be performed. \[A=\begin{bmatrix} 2 & -5\\ 6 & 7 \end{bmatrix}, B=\begin{bmatrix} -9 & 6\\ -4 & 2 \end{bmatrix}, C=\begin{bmatrix} 0 & 9\\ 7 & 1 \end{bmatrix}, D=\begin{bmatrix} -8 & 7 & -5\\ 4 & 3 & 2\\ 0 & 9 & 2 \end{bmatrix}, E=\begin{bmatrix} 4 & 5 & 3\\ 7 & -6 & -5\\ 1 & 0 & 9 \end{bmatrix} \nonumber\] 24) \(A+B-C\) 25) \(4A+5D\) Undefined; dimensions do not match. 26) \(2C+B\) 27) \(3D+4E\) \(\begin{bmatrix} -8 & 41 & -3\\ 40 & -15 & -14\\ 4 & 27 & 42 \end{bmatrix}\) 28) \(C-0.5D\) 29) \(100D-10E\) \(\begin{bmatrix} -840 & 650 & -530\\ 330 & 360 & 250\\ -10 & 900 & 110 \end{bmatrix}\) For the exercises 30-40, use the matrices below to perform the indicated operation if possible. If not possible, explain why the operation cannot be performed. (Hint: \(A^2=A\cdot A\)) \[A=\begin{bmatrix} -10 & 20\\ 5 & 25 \end{bmatrix}, B=\begin{bmatrix} 40 & 10\\ -20 & 30 \end{bmatrix}, C=\begin{bmatrix} -1 & 0\\ 0 & -1\\ 1 & 0 \end{bmatrix} \nonumber\] 31) \(BA\) \(\begin{bmatrix} -350 & 1,050\\ 350 & 350 \end{bmatrix}\) Undefined; inner dimensions do not match. 34) \(A^2\) 35) \(B^2\) \(\begin{bmatrix} 1,400 & 700\\ -1,400 & 700 \end{bmatrix}\) 36) \(C^2\) 37) \(B^2A^2\) \(\begin{bmatrix} 332,500 & 927,500\\ -227,500 & 87,500 \end{bmatrix}\) 38) \(A^2B^2\) 39) \((AB)^2\) \(\begin{bmatrix} 490,000 & 0\\ 0 & 490,000 \end{bmatrix}\) 40)\((BA)^2\) \[A=\begin{bmatrix} 1 & 0\\ 2 & 3 \end{bmatrix}, B=\begin{bmatrix} -2 & 3 & 4\\ -1 & 1 & -5 \end{bmatrix}, C=\begin{bmatrix} 0.5 & 0.1\\ 1 & 0.2\\ -0.5 & 0.3 \end{bmatrix}, D=\begin{bmatrix} 1 & 0 & -1\\ -6 & 7 & 5\\ 4 & 2 & 1 \end{bmatrix} \nonumber\] \(\begin{bmatrix} -2 & 3 & 4\\ -7 & 9 & -7 \end{bmatrix} \nonumber\) \(\begin{bmatrix} -4 & 29 & 21\\ -27 & -3 & 1 \end{bmatrix} \nonumber\) 45) \(D^2\) \(\begin{bmatrix} -3 & -2 & -2\\ -28 & 59 & 46\\ -4 & 16 & 7 \end{bmatrix} \nonumber\) \(\begin{bmatrix} 1 & -18 & -9\\ -198 & 505 & 369\\ -72 & 126 & 91 \end{bmatrix} \nonumber\) 48) \((AB)C\) 49) \(A(BC)\) \(\begin{bmatrix} 0 & 1.6\\ 9 & -1 \end{bmatrix} \nonumber\) For the exercises 50-54, use the matrices below to perform the indicated operation if possible. If not possible, explain why the operation cannot be performed. Use a calculator to verify your solution. \[A=\begin{bmatrix} -2 & 0 & 9\\ 1 & 8 & -3\\ 0.5 & 4 & 5 \end{bmatrix}, B=\begin{bmatrix} 0.5 & 3 & 0\\ -4 & 1 & 6\\ 8 & 7 & 2 \end{bmatrix}, C=\begin{bmatrix} 1 & 0 & 1\\ 0 & 1 & 0\\ 1 & 0 & 1 \end{bmatrix} \nonumber\] \(\begin{bmatrix} 2 & 24 & -4.5\\ 12 & 32 & -9\\ -8 & 64 & 61 \end{bmatrix} \nonumber\) \(\begin{bmatrix} 0.5 & 3 & 0.5\\ 2 & 1 & 2\\ 10 & 7 & 10 \end{bmatrix} \nonumber\) 54) \(ABC\) For the exercises 55-, use the matrix below to perform the indicated operation on the given matrix. 
\[B=\begin{bmatrix} 1 & 0 & 0\\ 0 & 0 & 1\\ 0 & 1 & 0 \end{bmatrix} \nonumber\] \(\begin{bmatrix} 1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end{bmatrix} \nonumber\) 59) Using the above questions, find a formula for \(B^n\). Test the formula for \(B^{201}\) and \(B^{202}\), using a calculator. \(B^n=\begin{cases} \begin{bmatrix} 1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end{bmatrix}, & n\text{ even }\\ \\ \begin{bmatrix} 1 & 0 & 0\\ 0 & 0 & 1\\ 0 & 1 & 0 \end{bmatrix}, & n\text{ odd } \end{cases}\) 1) Can any system of linear equations be written as an augmented matrix? Explain why or why not. Explain how to write that augmented matrix. Yes. For each row, the coefficients of the variables are written across the corresponding row, and a vertical bar is placed; then the constants are placed to the right of the vertical bar. 2) Can any matrix be written as a system of linear equations? Explain why or why not. Explain how to write that system of equations. 3) Is there only one correct method of using row operations on a matrix? Try to explain two different row operations possible to solve the augmented matrix \(\left [ \begin{array}{cc|c} 9 & 3 & 0\\ 1 & -2 & 6\\ \end{array} \right ]\). No, there are numerous correct methods of using row operations on a matrix. Two possible ways are the following: Interchange rows 1 and 2. Then \(R_2=R_2-9R_1\). \(R_2=R_1-9R_2\) . Then divide row 1 by \(9\). 4) Can a matrix whose entry is \(0\) on the diagonal be solved? Explain why or why not. What would you do to remedy the situation? 5) Can a matrix that has \(0\) entries for an entire row have one solution? Explain why or why not. No. A matrix with \(0\) entries for an entire row would have either zero or infinitely many solutions. For the exercises 6-10, write the augmented matrix for the linear system. 6) \(\begin{align*} 8x-37y &= 8\\ 2x+12y &= 3 \end{align*}\) 7) \(\begin{align*} 16y &= 4\\ 9x-y &= 2 \end{align*}\) \(\left [ \begin{array}{cc|c} 0 & 16 & 4\\ 9 & -1 & 2\\ \end{array} \right ]\) 8) \(\begin{align*} 3x+2y+10z &= 3\\ -6x+2y+5z &= 13\\ 4x+z &= 18 \end{align*}\) 9) \(\begin{align*} x+5y+8z &= 19\\ 12x+3y &= 4\\ 3x+4y+9z &= -7 \end{align*}\) \(\left [ \begin{array}{ccc|c} 1 & 5 & 8 & 16\\ 12 & 3 & 0 & 4\\ 3 & 4 & 9 & -7\end{array} \right ]\) 10) \(\begin{align*} 6x+12y+16z &= 4\\ 19x-5y+3z &= -9\\ x+2y &= -8 \end{align*}\) For the exercises 11-15, write the linear system from the augmented matrix. 11) \(\left [ \begin{array}{cc|c} -2 & 5 & 5\\ 6 & -18 & 26\\ \end{array} \right ]\) \(\begin{align*} -2x+5y &= 5\\ 6x-18y &= 26 \end{align*}\) 12) \(\left [ \begin{array}{cc|c} 3 & 4 & 10\\ 10 & 17 & 439\\ \end{array} \right ]\) 13) \(\left [ \begin{array}{ccc|c} 3 & 2 & 0 & 3\\ -1 & -9 & 4& -1\\ 8 & 5 & 7 & 8\\ \end{array} \right ]\) \(\begin{align*} 3x+2y &= 13\\ -x-9y+4z &= 53\\ 8x+5y+7z &= 80 \end{align*}\) 14) \(\left [ \begin{array}{ccc|c} 8 & 29 & 1 & 43\\ -1 & 7 & 5 & 38\\ 0 & 0 & 3 & 10\\ \end{array} \right ]\) 15) \(\left [ \begin{array}{ccc|c} 4 & 5 & -2 & 12\\ 0 & 1 & 58 & 2\\ 8 & 7 & -3 & -5\\ \end{array} \right ]\) \(\begin{align*} 4x+5y-2z &= 12\\ y+58z &= 2\\ 8x+7y-3z &= -5 \end{align*}\) For the exercises 16-46, solve the system by Gaussian elimination. 
16) \(\left [ \begin{array}{cc|c} 1 & 0 & 3\\ 0 & 0 & 0\\ \end{array} \right ]\) No solutions 19) \(\left [ \begin{array}{cc|c} -1 & 2 & -3\\ 4 & -5 & 6\\ \end{array} \right ]\) \((-1,-2)\) 20) \(\left [ \begin{array}{cc|c} -2 & 0 & 1\\ 0 & 2 & -1\\ \end{array} \right ]\) 21) \(\begin{align*} 2x-3y &= -9\\ 5x+4y &= 58 \end{align*}\) 22) \(\begin{align*} 6x+2y &= -4\\ 3x+4y &= -17 \end{align*}\) 23) \(\begin{align*} 2x+3y &= 12\\ 4x+y &= 14 \end{align*}\) 24) \(\begin{align*} -4x-3y &= -2\\ 3x-5y &= -13 \end{align*}\) 25) \(\begin{align*} -5x+8y &= 3\\ 10x+6y &= 5 \end{align*}\) \(\left (\dfrac{1}{5}, \dfrac{1}{2} \right )\) 26) \(\begin{align*} 3x+4y &= 12\\ -6x-8y &= -24 \end{align*}\) 27) \(\begin{align*} -60x+45y &= 12\\ 20x-15y &= -4 \end{align*}\) \(\left (x, \dfrac{4}{15}(5x+1) \right )\) 28) \(\begin{align*} 11x+10y &= 43\\ 15x+20y &= 65 \end{align*}\) 29) \(\begin{align*} 2x-y &= 2\\ 3x+2y &= 17 \end{align*}\) 30) \(\begin{align*} -1.06x-2.25y &= 5.51\\ -5.03x-1.08y &= 5.40 \end{align*}\) 31) \(\begin{align*} \dfrac{3}{4}x-\dfrac{3}{5}y &= 4\\ \dfrac{1}{4}x+\dfrac{2}{3}y &= 1 \end{align*}\) \(\left (\dfrac{196}{39}, -\dfrac{5}{13} \right )\) 32) \(\begin{align*} \dfrac{1}{4}x-\dfrac{2}{3}y &= -1\\ \dfrac{1}{2}x+\dfrac{1}{3}y &= 3 \end{align*}\) 33) \(\left [ \begin{array}{ccc|c} 1 & 0 & 0 & 31\\ 0 & 1 & 1 & 45\\ 0 & 0 & 1 & 87\\ \end{array} \right ]\) \((31,-42,87)\) 34) \(\left [ \begin{array}{ccc|c} 1 & 0 & 1 & 50\\ 1 & 1 & 0 & 20\\ 0 & 1 & 1 & -90\\ \end{array} \right ]\) 35) \(\left [ \begin{array}{ccc|c} 1 & 2 & 3 & 4\\ 0 & 5 & 6 & 7\\ 0 & 0 & 8 & 9\\ \end{array} \right ]\) \(\left (\dfrac{21}{40}, \dfrac{1}{20}, \dfrac{9}{8} \right )\) 36) \(\left [ \begin{array}{ccc|c} -0.1 & 0.3 & -0.1 & 0.2\\ -0.4 & 0.2 & 0.1 & 0.8\\ 0.6 & 0.1 & 0.7 & -0.8\\ \end{array} \right ]\) 37) \(\begin{align*} -2x+3y-2z &= 3\\ 4x+2y-z &= 9\\ 4x-8y+2z &= -6 \end{align*}\) \(\left (\dfrac{18}{13}, \dfrac{15}{13}, -\dfrac{15}{13} \right )\) 38) \(\begin{align*} x+y-4z &= -4\\ 5x-3y-2z &= 0\\ 2x+6y+7z &= 30 \end{align*}\) 39) \(\begin{align*} 2x+3y+2z &= 1\\ -4x-6y-4z &= -2\\ 10x+15y+10z &= 5 \end{align*}\) \(\left (x, y, \dfrac{1}{2}(1-2x-3y) \right )\) 40) \(\begin{align*} x+2y-z &= 1\\ -x-2y+2z &= -2\\ 3x+6y-3z &= 5 \end{align*}\) \(\left (x, -\dfrac{x}{2}, -1 \right )\) 42) \(\begin{align*} x+y &= 2\\ x+z &= 1\\ -y-z &= -3 \end{align*}\) 43) \(\begin{align*} x+y+z &= 100\\ x+2z &= 125\\ -y+2z &= 25 \end{align*}\) \((125,-25,0)\) 44) \(\begin{align*} \dfrac{1}{4}x-\dfrac{2}{3}z &= -\dfrac{1}{2}\\ \dfrac{1}{5}x+\dfrac{1}{3}y &= \dfrac{4}{7}\\ \dfrac{1}{5}y-\dfrac{1}{3}z &= \dfrac{2}{9} \end{align*}\) 45) \(\begin{align*} -\dfrac{1}{2}x+\dfrac{1}{2}y+\dfrac{1}{7}z &= -\dfrac{53}{14}\\ \dfrac{1}{2}x-\dfrac{1}{2}y+\dfrac{1}{4}z &= 3\\ \dfrac{1}{4}x+\dfrac{1}{5}y+\dfrac{1}{3}z &= \dfrac{23}{15} \end{align*}\) \((8,1,-2)\) 46) \(\begin{align*} -\dfrac{1}{2}x-\dfrac{1}{3}y+\dfrac{1}{4}z &= -\dfrac{29}{6}\\ \dfrac{1}{5}x+\dfrac{1}{6}y-\dfrac{1}{7}z &= \dfrac{431}{210}\\ -\dfrac{1}{8}x+\dfrac{1}{9}y+\dfrac{1}{10}z &= -\dfrac{49}{45} \end{align*}\) For the exercises 47-51, use Gaussian elimination to solve the system. 
47) \(\begin{align*} \dfrac{x-1}{7}+\dfrac{y-2}{8}+\dfrac{z-3}{4} &= 0\\ x+y+z &= 6\\ \dfrac{x+2}{3}+2y+\dfrac{z-3}{3} &= 5 \end{align*}\) 48) \(\begin{align*} \dfrac{x-1}{4}-\dfrac{y+1}{4}+3z &= -1\\ \dfrac{x+5}{2}+\dfrac{y+7}{4}-z &= 4\\ x+y-\dfrac{z-2}{2} &= 1 \end{align*}\) 49) \(\begin{align*} \dfrac{x-3}{4}-\dfrac{y-1}{3}+2z &= -1\\ \dfrac{x+5}{2}+\dfrac{y+5}{2}+\dfrac{z+5}{2} &= 8\\ x+y+z &= 1 \end{align*}\) \(\left (x, \dfrac{31}{28}-\dfrac{3x}{4}, \dfrac{1}{28}(-7x-3) \right )\) 50) \(\begin{align*} \dfrac{x-3}{10}+\dfrac{y+3}{2}-2z &= 3\\ \dfrac{x+5}{4}-\dfrac{y-1}{8}+z &= \dfrac{3}{2}\\ \dfrac{x-1}{4}+\dfrac{y+4}{2}+3z &= \dfrac{3}{2} \end{align*}\) No solutions exist. For the exercises 52-61, set up the augmented matrix that describes the situation, and solve for the desired solution. 52) Every day, a cupcake store sells \(5,000\) cupcakes in chocolate and vanilla flavors. If the chocolate flavor is \(3\) times as popular as the vanilla flavor, how many of each cupcake sell per day? 53) At a competing cupcake store, \(\$4,520\) worth of cupcakes are sold daily. The chocolate cupcakes cost \(\$2.25\) and the red velvet cupcakes cost \(\$1.75\). If the total number of cupcakes sold per day is \(2,200\), how many of each flavor are sold each day? \(860\) red velvet, \(1,340\) chocolate 54) You invested \(\$10,000\) into two accounts: one that has simple \(3\%\) interest, the other with \(2.5\%\) interest. If your total interest payment after one year was \(\$283.50\), how much was in each account after the year passed? 55) You invested \(\$2,300\) into account 1, and \(\$2,700\) into account 2. If the total amount of interest after one year is \(\$254\), and account 2 has \(1.5\) times the interest rate of account 1, what are the interest rates? Assume simple interest rates. \(4\%\) for account 1, \(6\%\) for account 2 56) Bikes'R'Us manufactures bikes, which sell for \(\$250\). It costs the manufacturer \(\$180\) per bike, plus a startup fee of \(\$3,500\). After how many bikes sold will the manufacturer break even? 57) A major appliance store is considering purchasing vacuums from a small manufacturer. The store would be able to purchase the vacuums for \(\$86\) each, with a delivery fee of \(\$9,200\), regardless of how many vacuums are sold. If the store needs to start seeing a profit after \(230\) units are sold, how much should they charge for the vacuums? \(\$126\) 58) The three most popular ice cream flavors are chocolate, strawberry, and vanilla, comprising \(83\%\) of the flavors sold at an ice cream shop. If vanilla sells \(1\%\) more than twice strawberry, and chocolate sells \(11\%\) more than vanilla, how much of the total ice cream consumption are the vanilla, chocolate, and strawberry flavors? 59) At an ice cream shop, three flavors are increasing in demand. Last year, banana, pumpkin, and rocky road ice cream made up \(12\%\) of total ice cream sales. This year, the same three ice creams made up \(16.9\%\) of ice cream sales. The rocky road sales doubled, the banana sales increased by \(50\%\), and the pumpkin sales increased by \(20\%\). If the rocky road ice cream had one less percent of sales than the banana ice cream, find out the percentage of ice cream sales each individual ice cream made last year. Banana was \(3\%\), pumpkin was \(7\%\), and rocky road was \(2\%\) 60) A bag of mixed nuts contains cashews, pistachios, and almonds. There are \(1,000\) total nuts in the bag, and there are \(100\) less almonds than pistachios. 
The cashews weigh \(3\) g, pistachios weigh \(4\) g, and almonds weigh \(5\) g. If the bag weighs \(3.7\) kg, find out how many of each type of nut is in the bag. 61) A bag of mixed nuts contains cashews, pistachios, and almonds. Originally there were \(900\) nuts in the bag. \(30\%\) of the almonds, \(20\%\) of the cashews, and \(10\%\) of the pistachios were eaten, and now there are \(770\) nuts left in the bag. Originally, there were \(100\) more cashews than almonds. Figure out how many of each type of nut was in the bag to begin with. \(100\) almonds, \(200\) cashews, \(600\) pistachios 1) In a previous section, we showed that matrix multiplication is not commutative, that is, \(AB\neq BA\) in most cases. Can you explain why matrix multiplication is commutative for matrix inverses, that is, \(A^{-1}A=AA^{-1}\)? If \(A^{-1}\) is the inverse of \(A\), then \(AA^{-1}=I\), the identity matrix. Since \(A\) is also the inverse of \(A^{-1}\), \(A^{-1}A=I\). You can also check by proving this for a \(2\times 2\) matrix. 2) Does every \(2\times 2\) matrix have an inverse? Explain why or why not. Explain what condition is necessary for an inverse to exist. 3) Can you explain whether a \(2\times 2\) matrix with an entire row of zeros can have an inverse? No, because \(ad\) and \(bc\) are both \(0\), so \(ad-bc=0\), which requires us to divide by \(0\) in the formula. 4) Can a matrix with an entire column of zeros have an inverse? Explain why or why not. 5) Can a matrix with zeros on the diagonal have an inverse? If so, find an example. If not, prove why not. For simplicity, assume a \(2\times 2\) matrix. Yes. Consider the matrix \(\begin{bmatrix} 0 & 1\\ 1 & 0 \end{bmatrix}\). The inverse is found with the following calculation: \(A^{-1} = \dfrac{1}{0(0)-1(1)} \begin{bmatrix} 0 & -1\\ -1 & 0 \end{bmatrix} = \begin{bmatrix} 0 & 1\\ 1 & 0 \end{bmatrix}\) In the exercises 6-12, show that matrix \(A\) is the inverse of matrix \(B\). 6) \(A = \begin{bmatrix} 1 & 0\\ -1 & 1 \end{bmatrix}, B = \begin{bmatrix} 1 & 0\\ 1 & 1 \end{bmatrix}\) 7) \(A = \begin{bmatrix} 1 & 2\\ 3 & 4 \end{bmatrix}, B = \begin{bmatrix} -2 & 1\\ \frac{3}{2} & -\frac{1}{2} \end{bmatrix}\) \(AB = BA = \begin{bmatrix} 1 & 0\\ 0 & 1 \end{bmatrix} = I\) 8) \(A = \begin{bmatrix} 4 & 5\\ 7 & 0 \end{bmatrix}, B = \begin{bmatrix} 0 & \frac{1}{7}\\ \frac{1}{5} & -\frac{4}{35} \end{bmatrix}\) 9) \(A = \begin{bmatrix} -2 & \frac{1}{2}\\ 3 & -1 \end{bmatrix}, B = \begin{bmatrix} -2 & -1\\ -6 & -4 \end{bmatrix}\) 10) \(A = \begin{bmatrix} 1 & 0 & 1\\ 0 & 1 & -1\\ 0 & 1 & 1 \end{bmatrix}, B = \dfrac{1}{2}\begin{bmatrix} 2 & 1 & -1\\ 0 & 1 & 1\\ 0 & -1 & 1 \end{bmatrix}\) \(AB = BA = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} = I\) 12) \(A = \begin{bmatrix} 3 & 8 & 2\\ 1 & 1 & 1\\ 5 & 6 & 12 \end{bmatrix}, B = \dfrac{1}{36}\begin{bmatrix} -6 & 84 & -6\\ 7 & -26 & 1\\ -1 & -22 & 5 \end{bmatrix}\) For the exercises 13-26, find the multiplicative inverse of each matrix, if it exists. 
13) \(\begin{bmatrix} 3 & -2\\ 1 & 9 \end{bmatrix}\) \(\dfrac{1}{29}\begin{bmatrix} 9 & 2\\ -1 & 3 \end{bmatrix}\) 14) \(\begin{bmatrix} -2 & 2\\ 3 & 1 \end{bmatrix}\) \(\dfrac{1}{69}\begin{bmatrix} -2 & 7\\ 9 & 3 \end{bmatrix}\) 16) \(\begin{bmatrix} -4 & -3\\ -5 & 8 \end{bmatrix}\) 17) \(\begin{bmatrix} 1 & 1\\ 2 & 2 \end{bmatrix}\) There is no inverse 19) \(\begin{bmatrix} 0.5 & 1.5\\ 1 & -0.5 \end{bmatrix}\) \(\dfrac{4}{7}\begin{bmatrix} 0.5 & 1.5\\ 1 & -0.5 \end{bmatrix}\) 20) \(\begin{bmatrix} 1 & 0 & 6\\ -2 & 1 & 7\\ 3 & 0 & 2 \end{bmatrix}\) 21) \(\begin{bmatrix} 0 & 1 & -3\\ 4 & 1 & 0\\ 1 & 0 & 5 \end{bmatrix}\) \(\dfrac{1}{17}\begin{bmatrix} -5 & 5 & -3\\ 20 & -3 & 12\\ 1 & -1 & 4 \end{bmatrix}\) 22) \(\begin{bmatrix} 1 & 2 & -1\\ -3 & 4 & 1\\ -2 & -4 & -5 \end{bmatrix}\) 23) \(\begin{bmatrix} 1 & 9 & -3\\ 2 & 5 & 6\\ 4 & -2 & -7 \end{bmatrix}\) \(\dfrac{1}{209}\begin{bmatrix} 47 & -57 & 69\\ 10 & 19 & -12\\ -24 & 38 & -13 \end{bmatrix}\) 24) \(\begin{bmatrix} 1 & -2 & 3\\ -4 & 8 & -12\\ 1 & 4 & 2 \end{bmatrix}\) 25) \(\begin{bmatrix} \frac{1}{2} & \frac{1}{2} & \frac{1}{2}\\ \frac{1}{3} & \frac{1}{4} & \frac{1}{5}\\ \frac{1}{6} & \frac{1}{7} & \frac{1}{8} \end{bmatrix}\) \(\begin{bmatrix} 18 & 60 & -168\\ -56 & -140 & 448\\ 40 & 80 & -280 \end{bmatrix}\) 26) \(\begin{bmatrix} 1 & 2 & 3\\ 4 & 5 & 6\\ 7 & 8 & 9 \end{bmatrix}\) For the exercises 27-34, solve the system using the inverse of a \(2\times 2\) matrix. 27) \(\begin{align*} 5x-6y &= -61\\ 4x+3y &= -2 \end{align*}\) 28) \(\begin{align*} 8x+4y &= -100\\ 3x-4y &= 1 \end{align*}\) 29) \(\begin{align*} 3x-2y &= 6\\ -x+5y &= -2 \end{align*}\) 30) \(\begin{align*} 5x-4y &= -5\\ 4x+y &= 2.3 \end{align*}\) 31) \(\begin{align*} -3x-4y &= 9\\ 12x+4y &= -6 \end{align*}\) \(\left (\dfrac{1}{3}, -\dfrac{5}{2} \right )\) 32) \(\begin{align*} -2x+3y &= \dfrac{3}{10}\\ -x+5y &= \dfrac{1}{2} \end{align*}\) 33) \(\begin{align*} \dfrac{8}{5}x-\dfrac{4}{5}y &= \dfrac{2}{5}\\ -\dfrac{8}{5}x+\dfrac{1}{5}y &= \dfrac{7}{10} \end{align*}\) \(\left (-\dfrac{2}{3}, -\dfrac{11}{6} \right )\) 34) \(\begin{align*} \dfrac{1}{2}x+\dfrac{1}{5}y &= -\dfrac{1}{4}\\ \dfrac{1}{2}x-\dfrac{3}{5}y &= -\dfrac{9}{4} \end{align*}\) For the exercises 35-42, solve a system using the inverse of a \(3\times 3\) matrix. 35) \(\begin{align*} 3x-2y+5z &= 21\\ 5x+4y &= 37\\ x-2y-5z &= 5 \end{align*}\) \(\left (7, \dfrac{1}{2}, \dfrac{1}{5} \right )\) 36) \(\begin{align*} 4x+4y+4z &= 40\\ 2x-3y+4z &= -12\\ -x+3y+4z &= 9 \end{align*}\) 37) \(\begin{align*} 6x-5y-z &= 31\\ -x+2y+z &= -6\\ 3x+3y+2z &= 13 \end{align*}\) 38) \(\begin{align*} 6x-5y+2z &= -4\\ 2x+5y-z &= 12\\ 2x+5y+z &= 12 \end{align*}\) 39) \(\begin{align*} 4x-2y+3z &= -12\\ 2x+2y-9z &= 33\\ 6y-4z &= 1 \end{align*}\) \(\dfrac{1}{34} \left(-35, -97, -154 \right)\) 40) \(\begin{align*} \dfrac{1}{10}x-\dfrac{1}{5}y+4z &= \dfrac{-41}{2}\\ \dfrac{1}{5}x-20y+\dfrac{2}{5}z &= -101\\ \dfrac{3}{10}x+4y-\dfrac{3}{10}z &= 23 \end{align*}\) 41) \(\begin{align*} \dfrac{1}{2}x-\dfrac{1}{5}y+\dfrac{1}{5}z &= \dfrac{31}{100}\\ -\dfrac{3}{4}x-\dfrac{1}{4}y+\dfrac{1}{2}z &= \dfrac{7}{40}\\ -\dfrac{4}{5}x-\dfrac{1}{2}y+\dfrac{3}{2}z &= \dfrac{1}{4} \end{align*}\) \(\dfrac{1}{690} \left(65, -1136, -229 \right)\) 42) \(\begin{align*} 0.1x+0.2y+0.3z &= -1.4\\ 0.1x-0.2y+0.3z &= 0.6\\ 0.4y+0.9z &= -2 \end{align*}\) For the exercises 43-46, use a calculator to solve the system of equations with matrix inverses. 
43) \(\begin{align*} 2x-y &= -3\\ -x+2y &= 2.3\\ \end{align*}\) \(\left (-\dfrac{37}{30}, \dfrac{8}{15} \right )\) 44) \(\begin{align*} -\dfrac{1}{2}x-\dfrac{3}{2}y &= -\dfrac{43}{20}\\ \dfrac{5}{2}x+\dfrac{11}{5}y &= \dfrac{31}{4}\\ \end{align*}\) 45) \(\begin{align*} 12.3x-2y-2.5z &= 2\\ 36.9x+7y-7.5z &= -7\\ 8y-5z &= -10 \end{align*}\) \(\left (\dfrac{10}{123}, -1, \dfrac{2}{5} \right )\) 46) \(\begin{align*} 0.5x-3y+6z &= -0.8\\ 0.7x-2y &= -0.06\\ 0.5x+4y+5z &= 0 \end{align*}\) For the exercises 47-51, find the inverse of the given matrix. 47) \(\begin{bmatrix} 1 & 0 & 1 & 0\\ 0 & 1 & 0 & 1\\ 0 & 1 & 1 & 0\\ 0 & 0 & 1 & 1 \end{bmatrix}\) \(\dfrac{1}{2}\begin{bmatrix} 2 & 1 & -1 & -1\\ 0 & 1 & 1 & -1\\ 0 & -1 & 1 & 1\\ 0 & 1 & -1 & 1 \end{bmatrix}\) 48) \(\begin{bmatrix} -1 & 0 & 2 & 5\\ 0 & 0 & 0 & 2\\ 0 & 2 & -1 & 0\\ 1 & -3 & 0 & 1 \end{bmatrix}\) 49) \(\begin{bmatrix} 1 & -2 & 3 & 0\\ 0 & 1 & 0 & 2\\ 1 & 4 & -2 & 3\\ -5 & 0 & 1 & 1 \end{bmatrix}\) \(\dfrac{1}{39}\begin{bmatrix} 3 & 2 & 1 & -7\\ 18 & -53 & 32 & 10\\ 24 & -36 & 21 & 9\\ -9 & 46 & -16 & -5 \end{bmatrix}\) 50) \(\begin{bmatrix} 1 & 2 & 0 & 2 & 3\\ 0 & 2 & 1 & 0 & 0\\ 0 & 0 & 3 & 0 & 1\\ 0 & 2 & 0 & 0 & 1\\ 0 & 0 & 1 & 2 & 0 \end{bmatrix}\) 51) \(\begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 & 0 & 0\\ 0 & 0 & 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 0 & 1 & 0\\ 1 & 1 & 1 & 1 & 1 & 1 \end{bmatrix}\) \(\begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 & 0 & 0\\ 0 & 0 & 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 0 & 1 & 0\\ -1 & -1 & -1 & -1 & -1 & 1 \end{bmatrix}\) For the exercises 52-61, write a system of equations that represents the situation. Then, solve the system using the inverse of a matrix. 52) \(2,400\) tickets were sold for a basketball game. If the prices for floor 1 and floor 2 were different, and the total amount of money brought in is \(\$64,000\), how much was the price of each ticket? 53) In the previous exercise, if you were told there were \(400\) more tickets sold for floor 2 than floor 1, how much was the price of each ticket? Infinite solutions. 54) A food drive collected two different types of canned goods, green beans and kidney beans. The total number of collected cans was \(350\) and the total weight of all donated food was \(348\) lb, \(12\) oz. If the green bean cans weigh \(2\) oz less than the kidney bean cans, how many of each can was donated? 55) Students were asked to bring their favorite fruit to class. \(95\%\) of the fruits consisted of banana, apple, and oranges. If oranges were twice as popular as bananas, and apples were \(5\%\) less popular than bananas, what are the percentages of each individual fruit? \(50\%\) oranges, \(25\%\) bananas, \(20\%\) apples 56) A sorority held a bake sale to raise money and sold brownies and chocolate chip cookies. They priced the brownies at \(\$1\) and the chocolate chip cookies at \(\$0.75\). They raised \(\$700\) and sold \(850\) items. How many brownies and how many cookies were sold? 57) A clothing store needs to order new inventory. It has three different types of hats for sale: straw hats, beanies, and cowboy hats. The straw hat is priced at \(\$13.99\), the beanie at \(\$7.99\), and the cowboy hat at \(\$14.49\). If \(100\) hats were sold this past quarter, \(\$1,119\) was taken in by sales, and the amount of beanies sold was \(10\) more than cowboy hats, how many of each should the clothing store order to replace those already sold? 
\(10\) straw hats, \(50\) beanies, \(40\) cowboy hats 58) Anna, Ashley, and Andrea weigh a combined \(370\) lb. If Andrea weighs \(20\) lb more than Ashley, and Anna weighs \(1.5\) times as much as Ashley, how much does each girl weigh? 59) Three roommates shared a package of \(12\) ice cream bars, but no one remembers who ate how many. If Tom ate twice as many ice cream bars as Joe, and Albert ate three less than Tom, how many ice cream bars did each roommate eat? Tom ate \(6\), Joe ate \(3\), and Albert ate \(3\). 60) A farmer constructed a chicken coop out of chicken wire, wood, and plywood. The chicken wire cost \(\$2\) per square foot, the wood \(\$10\) per square foot, and the plywood \(\$5\) per square foot. The farmer spent a total of \(\$51\), and the total amount of materials used was \(14\) ft2. He used \(3\) ft2 more chicken wire than plywood. How much of each material in did the farmer use? 61) Jay has lemon, orange, and pomegranate trees in his backyard. An orange weighs \(8\) oz, a lemon \(5\) oz, and a pomegranate \(11\) oz. Jay picked \(142\) pieces of fruit weighing a total of \(70\) lb, \(10\) oz. He picked \(15.5\) times more oranges than pomegranates. How many of each fruit did Jay pick? \(124\) oranges, \(10\) lemons, \(8\) pomegranates 1) Explain why we can always evaluate the determinant of a square matrix. A determinant is the sum and products of the entries in the matrix, so you can always evaluate that product—even if it does end up being \(0\). 2) Examining Cramer's Rule, explain why there is no unique solution to the system when the determinant of your matrix is \(0\). For simplicity, use a \(2\times 2\) matrix. 3) Explain what it means in terms of an inverse for a matrix to have a \(0\) determinant. The inverse does not exist. 4) The determinant of \(2\times 2\) matrix \(A\) is \(3\). If you switch the rows and multiply the first row by \(6\) and the second row by \(2\), explain how to find the determinant and provide the answer. For the exercises 5-24, find the determinant. 5) \(\begin{vmatrix} 1 & 2\\ 3 & 4 \end{vmatrix}\) \(-2\) 6) \(\begin{vmatrix} -1 & 2\\ 3 & -4 \end{vmatrix}\) 7) \(\begin{vmatrix} 2 & -5\\ -1 & 6 \end{vmatrix}\) 8) \(\begin{vmatrix} -8 & 4\\ -1 & 5 \end{vmatrix}\) 9) \(\begin{vmatrix} 1 & 0\\ 3 & -4 \end{vmatrix}\) 10) \(\begin{vmatrix} 10 & 20\\ 0 & -10 \end{vmatrix}\) 11) \(\begin{vmatrix} 10 & 0.2\\ 5 & 0.1 \end{vmatrix}\) 12) \(\begin{vmatrix} 6 & -3\\ 8 & 4 \end{vmatrix}\) 13) \(\begin{vmatrix} -2 & -3\\ 3.1 & 4,000 \end{vmatrix}\) \(-7,990.7\) 14) \(\begin{vmatrix} -1.1 & 0.6\\ 7.2 & -0.5 \end{vmatrix}\) 15) \(\begin{vmatrix} -1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & -3 \end{vmatrix}\) 17) \(\begin{vmatrix} 1 & 0 & 1\\ 0 & 1 & 0\\ 1 & 0 & 0 \end{vmatrix}\) 18) \(\begin{vmatrix} 2 & -3 & 1\\ 3 & -4 & 1\\ -5 & 6 & 1 \end{vmatrix}\) 19) \(\begin{vmatrix} -2 & 1 & 4\\ -4 & 2 & -8\\ 2 & -8 & -3 \end{vmatrix}\) \(224\) 20) \(\begin{vmatrix} 6 & -1 & 2\\ -4 & -3 & 5\\ 1 & 9 & -1 \end{vmatrix}\) 21) \(\begin{vmatrix} 5 & 1 & -1\\ 2 & 3 & 1\\ 3 & -6 & -3 \end{vmatrix}\) \(15\) 22) \(\begin{vmatrix} 1.1 & 2 & -1\\ -4 & 0 & 0\\ 4.1 & -0.4 & 2.5 \end{vmatrix}\) 23) \(\begin{vmatrix} 2 & -1.6 & 3.1\\ 1.1 & 3 & -8\\ -9.3 & 0 & 2 \end{vmatrix}\) \(-17.03\) 24) \(\begin{vmatrix} -\frac{1}{2} & \frac{1}{3} & \frac{1}{4}\\ \frac{1}{5} & -\frac{1}{6} & \frac{1}{7}\\ 0 & 0 & \frac{1}{8} \end{vmatrix}\) For the exercises 25-34, solve the system of linear equations using Cramer's Rule. 
25) \(\begin{align*} 2x-3y &= -1\\ 4x+5y &= 9 \end{align*}\) 26) \(\begin{align*} 5x-4y &= 2\\ -4x+7y &= 6 \end{align*}\) 27) \(\begin{align*} 6x-3y &= 2\\ -8x+9y &= -1 \end{align*}\) 28) \(\begin{align*} 2x+6y &= 12\\ 5x-2y &= 13 \end{align*}\) 29) \(\begin{align*} 4x+3y &= 23\\ 2x-y &= -1 \end{align*}\) 30) \(\begin{align*} 10x-6y &= 2\\ -5x+8y &= -1 \end{align*}\) 31) \(\begin{align*} 4x-3y &= -3\\ 2x+6y &= -4 \end{align*}\) \(\left (-1, -\dfrac{1}{3} \right )\) 33) \(\begin{align*} 4x+10y &= 180\\ -3x-5y &= -105 \end{align*}\) \((15,12)\) 34) \(\begin{align*} 8x-2y &= -3\\ -4x+6y &= 4 \end{align*}\) 35) \(\begin{align*} x+2y-4z &= -1\\ 7x+3y+5z &= 26\\ -2x-6y+7z &= -6 \end{align*}\) 36) \(\begin{align*} -5x+2y-4z &= -47\\ 4x-3y-z &= -94\\ 3x-3y+2z &= 94 \end{align*}\) 37) \(\begin{align*} 4x+5y-z &= -7\\ -2x-9y+2z &= 8\\ 5y+7z &= 21 \end{align*}\) 38) \(\begin{align*} 4x-3y+4z &= 10\\ 5x-2z &= -2\\ 3x+2y-5z &= -9 \end{align*}\) 39) \(\begin{align*} 4x-2y+3z &= 6\\ -6x+y &= -2\\ 2x+7y+8z &= 24 \end{align*}\) \(\left (\dfrac{1}{2}, 1, 2 \right )\) 40) \(\begin{align*} 5x+2y-z &= 1\\ -7x-8y+3z &= 1.5\\ 6x-12y+z &= 7 \end{align*}\) 41) \(\begin{align*} 13x-17y+16z &= 73\\ -11x+15y+17z &= 61\\ 46x+10y-30z &= -18 \end{align*}\) 42) \(\begin{align*} -4x-3y-8z &= -7\\ 2x-9y+5z &= 0.5\\ 5x-6y-5z &= -2 \end{align*}\) 43) \(\begin{align*} 4x-6y+8z &= 10\\ -2x+3y-4z &= -5\\ x+y+z &= 1 \end{align*}\) Infinite solutions 44) \(\begin{align*} 4x-6y+8z &= 10\\ -2x+3y-4z &= -5\\ 12x+18y-24z &= -30 \end{align*}\) For the exercises 45-48, use the determinant function on a graphing utility. 45) \(\begin{vmatrix} 1 & 0 & 8 & 9\\ 0 & 2 & 1 & 0\\ 1 & 0 & 3 & 0\\ 0 & 2 & 4 & 3 \end{vmatrix}\) 46) \(\begin{vmatrix} 1 & 0 & 2 & 1\\ 0 & -9 & 1 & 3\\ 3 & 0 & -2 & -1\\ 0 & 1 & 1 & -2 \end{vmatrix}\) 47) \(\begin{vmatrix} \frac{1}{2} & 1 & 7 & 4\\ 0 & \frac{1}{2} & 100 & 5\\ 0 & 0 & 2 & 2,000\\ 0 & 0 & 0 & 2 \end{vmatrix}\) For the exercises 49-52, create a system of linear equations to describe the behavior. Then, calculate the determinant. Will there be a unique solution? If so, find the unique solution. 49) Two numbers add up to \(56\). One number is \(20\) less than the other. Yes; \(18\), \(38\) 50) Two numbers add up to \(104\). If you add two times the first number plus two times the second number, your total is \(208\) 51) Three numbers add up to \(106\). The first number is \(3\) less than the second number. The third number is \(4\) more than the first number. Yes; \(33\), \(36\), \(37\) 52) Three numbers add to \(216\). The sum of the first two numbers is \(112\). The third number is 8 less than the first two numbers combined. For the exercises 53-65, create a system of linear equations to describe the behavior. Then, solve the system for all solutions using Cramer's Rule. 53) You invest \(\$10,000\) into two accounts, which receive \(8\%\) interest and \(5\%\) interest. At the end of a year, you had \(\$10,710\) in your combined accounts. How much was invested in each account? \(\$7,000\) in first account, \(\$3,000\) in second account. 54) You invest \(\$80,000\) into two accounts, \(\$22,000\) in one account, and \(\$58,000\) in the other account. At the end of one year, assuming simple interest, you have earned \(\$2,470\) in interest. The second account receives half a percent less than twice the interest on the first account. What are the interest rates for your accounts? 55) A movie theater needs to know how many adult tickets and children tickets were sold out of the \(1,200\) total tickets. 
If children's tickets are \(\$5.95\), adult tickets are \(\$11.15\), and the total amount of revenue was \(\$12,756\), how many children's tickets and adult tickets were sold? \(120\) children, \(1,080\) adult 56) A concert venue sells single tickets for \(\$40\) each and couple's tickets for \(\$65\). If the total revenue was \(\$18,090\) and the \(321\) tickets were sold, how many single tickets and how many couple's tickets were sold? 57) You decide to paint your kitchen green. You create the color of paint by mixing yellow and blue paints. You cannot remember how many gallons of each color went into your mix, but you know there were \(10\) gal total. Additionally, you kept your receipt, and know the total amount spent was \(\$29.50\). If each gallon of yellow costs \(\$2.59\), and each gallon of blue costs \(\$3.19\), how many gallons of each color go into your green mix? \(4\) gal yellow, \(6\) gal blue 58) You sold two types of scarves at a farmers' market and would like to know which one was more popular. The total number of scarves sold was \(56\), the yellow scarf cost \(\$10\), and the purple scarf cost \(\$11\). If you had total revenue of \(\$583\), how many yellow scarves and how many purple scarves were sold? 59) Your garden produced two types of tomatoes, one green and one red. The red weigh \(10\) oz, and the green weigh \(4\) oz. You have \(30\) tomatoes, and a total weight of \(13\) lb, \(14\) oz. How many of each type of tomato do you have? \(13\) green tomatoes, \(17\) red tomatoes 60) At a market, the three most popular vegetables make up \(53\%\) of vegetable sales. Corn has \(4\%\) higher sales than broccoli, which has \(5\%\) more sales than onions. What percentage does each vegetable have in the market share? 61) At the same market, the three most popular fruits make up \(37\%\) of the total fruit sold. Strawberries sell twice as much as oranges, and kiwis sell one more percentage point than oranges. For each fruit, find the percentage of total fruit sold. Strawberries \(18\%\), oranges \(9\%\), kiwi \(10\%\) 62) Three bands performed at a concert venue. The first band charged \(\$15\) per ticket, the second band charged \(\$45\) per ticket, and the final band charged \(\$22\) per ticket. There were \(510\) tickets sold, for a total of \(\$12,700\). If the first band had \(40\) more audience members than the second band, how many tickets were sold for each band? 63) A movie theatre sold tickets to three movies. The tickets to the first movie were \(\$5\), the tickets to the second movie were \(\$11\), and the third movie was \(\$12\). \(100\) tickets were sold to the first movie. The total number of tickets sold was \(642\), for a total revenue of \(\$6,774\). How many tickets for each movie were sold? \(100\) for movie 1, \(230\) for movie 2, \(312\) for movie 3 64) Men aged \(20–29\), \(30–39\), and \(40–49\) made up \(78\%\) of the population at a prison last year. This year, the same age groups made up \(82.08\%\) of the population. The \(20–29\) age group increased by \(20%\), the \(30–39\) age group increased by \(2\%\), and the \(40–49\) age group decreased to \(\dfrac{3}{4}\) of their previous population. Originally, the \(30–39\) age group had \(2\%\) more prisoners than the \(20–29\) age group. Determine the prison population percentage for each age group last year. 65) At a women's prison down the road, the total number of inmates aged \(20–49\) totaled \(5,525\). 
This year, the \(20–29\) age group increased by \(10\%\), the \(30–39\) age group decreased by \(20\%\), and the \(40–49\) age group doubled. There are now \(6,040\) prisoners. Originally, there were \(500\) more in the \(30–39\) age group than the \(20–29\) age group. Determine the prison population for each age group last year. \(20–29: 2,100\), \(30–39: 2,600\), \(40–49: 825\) For the exercises 66-68, use this scenario: A health-conscious company decides to make a trail mix out of almonds, dried cranberries, and chocolate-covered cashews. The nutritional information for these items (per 10 pieces) is shown in the table below.

Item | Fat (g) | Protein (g) | Carbohydrates (g)
Almonds (10) | 6 | 2 | 3
Cranberries (10) | 0.02 | 0 | 8
Cashews (10) | 7 | 3.5 | 5.5

66) For the special "low-carb" trail mix, there are \(1,000\) pieces of mix. The total number of carbohydrates is \(425\) g, and the total amount of fat is \(570.2\) g. If there are \(200\) more pieces of cashews than cranberries, how many of each item is in the trail mix? 67) For the "hiking" mix, there are \(1,000\) pieces in the mix, containing \(390.8\) g of fat, and \(165\) g of protein. If there is the same amount of almonds as cashews, how many of each item is in the trail mix? \(300\) almonds, \(400\) cranberries, \(300\) cashews 68) For the "energy-booster" mix, there are \(1,000\) pieces in the mix, containing \(145\) g of protein and \(625\) g of carbohydrates. If the number of almonds and cashews summed together is equivalent to the amount of cranberries, how many of each item is in the trail mix? Jay Abramson (Arizona State University) with contributing authors. Textbook content produced by OpenStax College is licensed under a Creative Commons Attribution License 4.0 license. Download for free at https://openstax.org/details/books/precalculus.
CommonCrawl
Research | Open | Published: 17 September 2016 Hybrid Pareto artificial bee colony algorithm for multi-objective single machine group scheduling problem with sequence-dependent setup times and learning effects Lei Yue1, Zailin Guan1, Ullah Saif1,2, Fei Zhang1 & Hao Wang1 SpringerPlusvolume 5, Article number: 1593 (2016) | Download Citation Group scheduling is significant for efficient and cost effective production system. However, there exist setup times between the groups, which require to decrease it by sequencing groups in an efficient way. Current research is focused on a sequence dependent group scheduling problem with an aim to minimize the makespan in addition to minimize the total weighted tardiness simultaneously. In most of the production scheduling problems, the processing time of jobs is assumed as fixed. However, the actual processing time of jobs may be reduced due to "learning effect". The integration of sequence dependent group scheduling problem with learning effects has been rarely considered in literature. Therefore, current research considers a single machine group scheduling problem with sequence dependent setup times and learning effects simultaneously. A novel hybrid Pareto artificial bee colony algorithm (HPABC) with some steps of genetic algorithm is proposed for current problem to get Pareto solutions. Furthermore, five different sizes of test problems (small, small medium, medium, large medium, large) are tested using proposed HPABC. Taguchi method is used to tune the effective parameters of the proposed HPABC for each problem category. The performance of HPABC is compared with three famous multi objective optimization algorithms, improved strength Pareto evolutionary algorithm (SPEA2), non-dominated sorting genetic algorithm II (NSGAII) and particle swarm optimization algorithm (PSO). Results indicate that HPABC outperforms SPEA2, NSGAII and PSO and gives better Pareto optimal solutions in terms of diversity and quality for almost all the instances of the different sizes of problems. Group technology (GT) is a well-known method used to improve the production efficiency in manufacturing and engineering management through exploiting similarities of different products and exploiting similar activities in their designs and production processes. GT was first proposed by Mitrofanov (1966) and Opitz (1970) and later many manufacturing companies have taken advantage of GT to enhance productivity (Webster and Baker 1995; Logendran et al. 2005; Keshavarz and Salmasi 2014). Variety of scheduling models used GT in which set of similar jobs are divided into subsets, called families or groups. Each job in a group contains similar technological requirements in terms of tooling and setups. This can eliminate the time of setups between the jobs in a single group and increase the production efficiency. Jobs grouping advantage has increased the research in group scheduling (GS) and has attracted numerous researchers due to their significant application in industries. Different GS research problems in manufacturing environment have been addressed in literature. For example, single-machine GS problem (SMGS) (Webster and Baker 1995; Kuo and Yang 2006; Kuo 2012; Wu et al. 2008), GS in flowshop environment (FSGS) (Logendran et al. 2005; Gelogullari and Logendran 2010; Solimanpur and Elmi 2011; Costa et al. 2014) etc. More recent works on GS problems in different manufacturing environment have also been presented in literature (Keshavarz et al. 2015; Neufeld et al. 2015; Ji et al. 
2016; Egilmez et al. 2016; Adressi et al. 2016). In many industries, there exists frequent changeover of jobs on machines which needs setup time. If the frequent change of jobs occurs on the bottleneck resources of the production system, it can cause a large amount of waste of time. According to the theory of constraints (TOC), the performance of complex manufacturing systems often depends mostly on the bottleneck machines of the production system. Therefore, scheduling with setup times on the bottleneck machines plays a critical role for the enterprise because it is primarily cause for delays in the delivery of customer orders. Production schedules of the system often rely on management of these setup times on bottleneck resources. The setup time includes sequence-independent setup times and sequence-dependent setup (SDS) times. Setup time is sequence-independent if its duration depends only on the current job to be processed. Setup time is sequence-dependent if setup time depends on both the current and the immediately preceding job. The presence of SDS has increased the complexity of industrial scheduling problem. Group technology has the advantage that, no machine setups are needed between two consecutively scheduled jobs in the same group due to similarities in operations. However, setup time is required between processing of jobs from different groups which is called as group setup. In most real-world problems, the group setup time is considered as sequence dependent. Therefore, SDS has been investigated in literature for GS problems to enhance the advantage of GT. The sequencing of groups in an order that the two consecutive groups in the sequence can require less changes in the machines setup. This can reduce the SDS time between different groups in the group scheduling. Group scheduling with SDS has been studied by limited researchers. In recent literature, Costa et al. (2014), Neufeld et al. (2015) and Salmasi et al. (2011) considered a group scheduling problem with sequence dependent setup times to minimize the makespan. Keshavarz et al. (2015) investigated a flexible flowshop sequence-dependent group scheduling problem with an objective to minimize total completion time. Moreover, a sequence dependent group scheduling problem on unrelated-parallel machines with a combined objective of makespan and total weighted tardiness has also been addressed (Bozorgirad and Logendran 2012). Khamseh et al. (2015) presented a model which integrates group scheduling problem with sequence-dependent setups and preventive maintenance activities in order to minimize the total completion time. Zandieh and Karimi (2011) presented a multi-objective group scheduling problem with SDS times by minimizing total weighted tardiness and the maximum completion time simultaneously. Due to significant application of group scheduling, current research investigates the problem of group scheduling with SDS on single machine environment. In classical scheduling models, the processing time of jobs is assumed as fixed and the schedule is made on the fixed processing time of jobs. However, in many realistic situations where manual workers perform operations, due to repetition of production operations, the actual processing time of jobs can be reduced as compared to its initial value due to "learning effect". When the new workers are assigned to process some jobs, the worker can take different time as compared to the time they take after several times repetition of the process of the same job on machines. 
Learning effect can cause change in the processing time of jobs with repetition and the schedule which is based on fixed value of the processing time of jobs might be optimal for the fixed processing time value. The change in processing time can cause different waiting time of jobs in the schedule and can give different value of the performance objectives as compared to predetermined schedule. Scheduling with learning effect is significant and therefore, it has received considerable attention in recent years (Kuo and Yang 2006; Zhu et al. 2011; Huang et al. 2011; Wang et al. 2008; Yin et al. 2009; Li et al. 2013). Due to the significance application of learning effect, it has also been studied by several researchers in group scheduling problems (Kuo and Yang 2006; Yang and Yang 2010; Kuo 2012; Bai et al. 2012; Zhu et al. 2011; Yang 2011). However, they have not considered the SDS in the group scheduling. In literature, some studies have considered group scheduling with SDS but have not involved learning effect in their models (Janiak et al. 2005; Schaller 2001; Salmasi et al. 2011; Keshavarz and Salmasi 2014; Keshavarz et al. 2015; Neufeld et al. 2015). A limited research work found in literature has considered both learning effects and SDS in group scheduling problem simultaneously. Low and Lin (2012) considered a single machine group scheduling problem with past sequence dependent setup (PSDS) and learning effect. They considered makespan and the total completion time as objectives in their studies. The PSDS they considered is significant and more suitable for the cases where the setup time of the newly insert job depends on all the previous jobs that are scheduled to process before it. The past sequence dependent setup time is more applicable in the PC Board industries (Koulamas and Kyparisis 2008; Wang 2008; Low and Lin 2012). However, in most other industries, setup time depends only on the newly entered job in the sequence and the last scheduled job before this job in most of the production environment (Dudek et al. 1974). For example, in the manufacturing industries of heavy machinery, SDS is more significant where the SDS exists on production machines and depends only on the two consecutive jobs of the sequence irrespective of the other jobs in the schedule. For example, SDS exists in the jobs on different machines in SANY Heavy Industry Company in Changsha, China and Yu Tong BUS Company in Zhengzhou, China which are producing heavy construction machinery and buses and other transportation machinery respectively. The SDS times occurs in these companies and is considered in the current research. In literature studies on group scheduling problems, most of the research optimized either single objective or linear combination of more than one objective with giving certain weight to each objective (Neufeld et al. 2015; Costa et al. 2014; Salmasi et al. 2011; Keshavarz et al. 2015; Karimi et al. 2011). However, in most of the companies, more than one objective is desired to optimize and in most of the cases, the desired objectives are conflicting and companies require to optimize these conflicting objectives simultaneously. Simultaneous consideration of more than one conflicting objective is significant in most of these companies. 
Therefore, the current research optimizes two conflicting objectives, makespan and total weighted tardiness, simultaneously for a single machine group scheduling problem with SDS times and learning effects in a heavy machinery manufacturing environment. The group scheduling problem with SDS is NP-hard (Webster and Baker 1995; Janiak et al. 2005). In the literature, different methods have been proposed to investigate group scheduling (Logendran et al. 2005; Solimanpur and Elmi 2011; Adressi et al. 2016; Zhu et al. 2011) and group scheduling with SDS (Costa et al. 2014; Keshavarz et al. 2015; Neufeld et al. 2015; Ji et al. 2016; Karimi et al. 2011; Salmasi and Logendran 2008; Sabouni and Logendran 2013; Anghinolfi and Paolucci 2009), for example heuristics (Neufeld et al. 2015; Salmasi and Logendran 2008; Li et al. 2013), branch-and-bound procedures (Schaller 2001; Sabouni and Logendran 2013; Keshavarz et al. 2015), tabu search (Bozorgirad and Logendran 2012), particle swarm optimization (Anghinolfi and Paolucci 2009), the imperialist competitive algorithm (Karimi et al. 2011), and genetic algorithms (Zandieh and Karimi 2011; Adressi et al. 2016). Karaboga (2005) proposed the artificial bee colony (ABC) algorithm, a popular algorithm based on the foraging behavior of a honey bee swarm. The ABC algorithm needs few control parameters, can be used for different kinds of continuous and discrete problems, and is easy to implement. These features make it feasible and applicable in different areas of optimization. It has therefore been applied to the permutation flowshop (Tasgetiren et al. 2011), the flexible job shop (Li et al. 2011), large scale engineering optimization problems (Akay and Karaboga 2012), and constrained optimization problems (Ajorlou and Shams 2013). In recent years, comparatively little ABC research has addressed multi objective optimization problems (Omkar et al. 2011; Akbari et al. 2012; Pan et al. 2011). Zhang et al. (2013) proposed a hybrid ABC for the flowshop problem and, more recently, Saif et al. (2014) proposed a Pareto based ABC for multi objective optimization of the simple assembly line balancing problem. However, these algorithms are tailored to the types of problems in their respective studies, which motivates us to introduce a hybrid Pareto ABC (HPABC) algorithm for the current problem. The current research is novel in considering a group scheduling problem on a single machine with SDS and learning effects. Moreover, the conflicting objectives of makespan and TWT are optimized simultaneously and Pareto optimal results are obtained. Furthermore, the considered problem has not, to date, been presented and solved using recent meta-heuristics such as the ABC algorithm, and the proposed HPABC algorithm, which employs some steps of the genetic algorithm and incorporates Pareto optimality into the original ABC algorithm, is novel for this problem. The rest of the paper is organized as follows: "Problem description and formulation" section illustrates the problem formulation. "Hybrid Pareto artificial bee colony algorithm" section deals with the proposed HPABC algorithm. "Taguchi experimental design" section presents the data generation and test case specifications of the current problem, and then describes the tuning of the parameters of the proposed algorithm using the Taguchi method.
"Experimental results" section illustrates computational experiments and results over five different categories of problems and makes results comparisons among three different algorithms by performance of some evaluation indexes. Finally, "Conclusions" section concludes the paper and presents some future aspects of the research. Problem description and formulation The group scheduling problem for a single machine with sequence dependent setup and learning effect can be formulated as follows. There are n jobs in m groups to be processed. Different numbers of jobs are grouped into families accordingly to the GT principles. Each group G i , for 1 ≤ i ≤ m, consists of a set of n i jobs \(\left\{ {J_{i1} ,J_{i2} , \ldots ,J_{{in_{i} }} } \right\}\). Assuming that all the jobs are available for processing at time zero on a continuously available machine. An abridged general view of group scheduling problem with SDS times which schedule the groups and the jobs in each groups simultaneously is indicated in Fig. 1 as follow. An abridged general view of group scheduling problem with SDS times The problem is developed using the following notations. Additional notations will be introduced when needed throughout the paper. Notations and abbreviations m The number of groups n i The number of jobs in group G i n The total number of jobs, \(\sum\nolimits_{i = 1}^{m} {n_{i} = n}\) i Index used to represent a group h Index used to represent a group j Index used to represent a job r The job position in a group k The group position in a sequence r i The setup time if a job in group i is first scheduled in the sequence α Learning effect factor for jobs within a group, α > 1 β Learning effect factor for jobs among groups, 0 < β ≤ 1 GS ih The group setup time from group i to group h J ij The job j in group i, j = (1, 2, …, n i ) P ij The normal processing time of a job J ij P i[r] The normal processing time of a job J i[r] which is scheduled in the rth position in a sequence in group G i \(P_{ij}^{k,r}\) The actual processing time of a job J ij is scheduled in the rth and in the kth group in a sequence d ij The due date of a job J ij w ij The weight of a job J ij regarding the objective function C ij The completion time of job J ij \(C_{ij}^{k,r}\) The completion time of job J ij which is scheduled in the kth group position and rth job position in a schedule C max The makespan of an instance \(T_{{J_{ij} }}\) The tardiness of job J ij , \(T_{{J_{ij} }} = max\left\{ {0,C_{ij} - d_{ij} } \right\}\) TWT The total weighted tardiness of all jobs of all groups \(X_{h}^{k} \left\{ {\begin{array}{*{20}c} 1 \\ 0 \\ \end{array} } \right.\) If group h is processed at the kth position in the schedule Otherwise \(Y_{hj}^{l} \left\{ {\begin{array}{*{20}c} 1 \\ 0 \\ \end{array} } \right.\) If job j in group h is processed at lth position Otherwise \(X_{ijq} \left\{ {\begin{array}{*{20}c} 1 \\ 0 \\ \end{array} } \right.\) If job q is processed after job j in group i Otherwise \(X_{ih} \left\{ {\begin{array}{*{20}c} 1 \\ 0 \\ \end{array} } \right.\) If group h is processed after group i Otherwise P ij is used to indicate the normal processing time of job j in group i. r and k denote the job position in the group and the group position in the group sequence respectively. In addition, P i[r] represent the normal processing time of a job if it is scheduled in the rth position in the group i in the sequence. 
Both time-dependent and position-based learning effects are used to determine the actual processing time of a job in a specific job group (Low and Lin 2012). The actual processing time of a job in each group is a function of the sum of the normal processing times of the jobs already scheduled and the position of the corresponding group in the schedule. The actual processing time of a job J ij that is scheduled in the rth position and in the kth group in a schedule, \(P_{ij}^{k,r}\), is computed from Eq. (1). $$P_{ij}^{k,r} = P_{ij} \left( {1 - \frac{{\sum\nolimits_{l = 1}^{r - 1} {P_{i\left[ l \right]} } }}{{\sum\nolimits_{l = 1}^{{n_{i} }} {P_{il} } }}} \right)^{a}\quad \beta^{k - 1} = P_{ij} \left( {\frac{{\sum\nolimits_{l = r}^{{n_{i} }} {P_{i\left[ l \right]} } }}{{\sum\nolimits_{l = 1}^{{n_{i} }} {P_{il} } }}} \right)^{a} \quad \beta^{k - 1} ,\quad \forall j,r = 1,2, \ldots ,n_{i} ,\quad \forall i,k = 1,2, \ldots ,m$$ Minimizing makespan is the first objective that we consider for this problem and is as: $$Z_{1} = \hbox{min} \left( {\hbox{max} \left\{ {C_{ij} } \right\}} \right)$$ The second objective is to minimize total weighted tardiness as below: $$Z_{2} = \hbox{min} \left( {TWT} \right)$$ $$TWT = \sum\limits_{i = 1}^{m} {\sum\limits_{j = 1}^{{n_{i} }} {w_{ij} T_{{J_{ij} }} } }$$ Every group is located in only one position in the group schedule and all the groups must be included in the group schedule. $$\sum\limits_{h = 1}^{m} {X_{h}^{k} = 1,\quad \forall h,k = 1,2, \ldots ,m}$$ $$\sum\limits_{k = 1}^{m} {\sum\limits_{h = 1}^{m} {X_{h}^{k} = m,\quad \forall h,k = 1,2, \ldots ,m} }$$ Each job in a group should be assigned only one position in jobs schedule in its group and all the jobs in the group should be sequenced in the schedule. $$\sum\limits_{j = 1}^{{n_{h} }} {Y_{hj}^{l} = 1,\quad \forall j,l = 1,2, \ldots ,n_{h} \quad \forall h = 1,2, \ldots m}$$ $$\sum\limits_{l = 1}^{{n_{h} }} {\sum\limits_{j = 1}^{{n_{h} }} {Y_{hj}^{l} = n_{h} ,\quad \forall j,l = 1,2, \ldots ,n_{h} \quad \forall h = 1,2, \ldots m} }$$ For m number of groups to schedule, there occurs total of (m − 1) number of sequence dependent setups. $$\sum\limits_{h = 1}^{m} {\sum\limits_{i = 1}^{m} {X_{ih} = m - 1,\quad \forall h,i = 1,2, \ldots ,m,\quad i \ne h} }$$ Completion time of a job J iq located in the first position among jobs in group i and the group i is located in first position in the group schedule. $$C_{iq}^{1,1} = \sum\limits_{i = 1}^{m} {X_{i}^{1} } \left( {r_{i} + \sum\limits_{q = 1}^{{n_{i} }} {Y_{iq}^{1} P_{iq} } } \right),\quad \forall i = 1,2, \ldots ,m\quad \forall q = 1,2, \ldots ,n_{i}$$ Completion time of a job J is located in any position among jobs in group i and the group i is located in any position in the group schedule. $$C_{is}^{1,r} = C_{it}^{1,r - 1} + \sum\limits_{i = 1}^{m} {\sum\limits_{s = 1}^{{n_{i} }} {Y_{is}^{r} X_{i}^{1} P_{is} \left( {\frac{{\sum\nolimits_{l = r}^{{n_{i} }} {P_{i\left[ l \right]} } }}{{\sum\nolimits_{l = 1}^{{n_{i} }} {P_{il} } }}} \right)^{a} } } ,\quad \forall i = 1,2, \ldots ,m\quad \forall s,t,r = 1,2, \ldots ,n_{i} ,\quad s \ne t$$ Completion time of a job J hq located in first position among jobs in group h and the group h is located in second position in the group schedule. 
$$C_{hq}^{2,1} = C_{is}^{{1,n_{i} }} + \sum\limits_{i = 1}^{m} {\sum\limits_{h = 1}^{m} {X_{i}^{1} X_{h}^{2} X_{ih} GS_{ih} + \sum\limits_{h = 1}^{m} {\sum\limits_{q = 1}^{{n_{h} }} {Y_{hq}^{1} X_{h}^{2} P_{hq} \beta } \quad \forall i,h = 1,2, \ldots ,m,i \ne h\quad \forall q = 1,2, \ldots ,n_{h} \quad \forall s = 1,2, \ldots ,m} } }$$ Completion time of a job \(J_{hs}\) located in any position among the jobs in group h, when group h is located in the second position in the group schedule: $$C_{hs}^{2,r} = C_{ht}^{2,r - 1} + \sum\limits_{h = 1}^{m} {\sum\limits_{s = 1}^{{n_{h} }} {Y_{hs}^{r} X_{h}^{2} P_{hs} \left( {\frac{{\sum\nolimits_{l = r}^{{n_{h} }} {P_{h\left[ l \right]} } }}{{\sum\nolimits_{l = 1}^{{n_{h} }} {P_{hl} } }}} \right)^{a} } } \beta ,\quad \forall h = 1,2, \ldots ,m\quad \forall s,t,r = 1,2, \ldots ,n_{h} ,\quad s \ne t,\;r \ne 1$$ Completion time of a job \(J_{gq}\) located in the first position among the jobs in group g, when group g is located in any position in the group schedule: $$C_{gq}^{k,1} = C_{of}^{{k - 1,n_{o} }} + \sum\limits_{g = 1}^{m} {\sum\limits_{o = 1}^{m} {X_{g}^{k} X_{o}^{k - 1} X_{og} GS_{og} + \sum\limits_{g = 1}^{m} {\sum\limits_{q = 1}^{{n_{g} }} {Y_{gq}^{1} X_{g}^{k} P_{gq} \beta^{k - 1} } } } } ,\quad \forall g,o,k = 1,2, \ldots ,{\text{m}},\;g \ne o\quad \forall q = 1,2, \ldots ,n_{g} \quad \forall f = 1,2, \ldots ,n_{o}$$ Completion time of a job \(J_{gs}\) located in any position among the jobs in group g, when group g is located in any position in the group schedule: $$C_{gs}^{k,r} = C_{gt}^{k,r - 1} + \sum\limits_{g = 1}^{m} {\sum\limits_{s = 1}^{{n_{g} }} {Y_{gs}^{r} X_{g}^{k} P_{gs} \left( {\frac{{\sum\nolimits_{l = r}^{{n_{g} }} {P_{g\left[ l \right]} } }}{{\sum\nolimits_{l = 1}^{{n_{g} }} {P_{gl} } }}} \right)^{a} } } \beta^{k - 1} ,\quad \forall g,k = 1,2, \ldots ,m\quad \forall s,t,r = 1,2, \ldots ,n_{g} ,\quad s \ne t,\;r \ne 1$$ The objectives of minimizing the makespan and minimizing the total weighted tardiness are given in Eqs. (2) and (3), respectively. The constraints of the proposed problem are shown in Eqs. (4)–(8). The completion time of any job \(J_{gs}\) of group g in a given schedule is obtained from Eqs. (9) to (14) described above.
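Before turning to the algorithm, the following is a minimal sketch (not from the paper) of how a candidate schedule, i.e. a group sequence plus a job sequence for each group, could be evaluated under Eqs. (1)–(3) and the completion-time recursion above. It reuses the hypothetical GroupSchedulingInstance container sketched earlier; all names and the indexing conventions are illustrative assumptions.

```python
# Illustrative sketch only (names are assumptions): decode a candidate
# schedule and compute the two objectives, makespan and total weighted
# tardiness, following Eqs. (1)-(3) and the completion-time recursion above.
from typing import List, Tuple


def evaluate_schedule(inst: "GroupSchedulingInstance",
                      group_seq: List[int],
                      job_seq: List[List[int]]) -> Tuple[float, float]:
    """group_seq[k] : index of the group placed in group position k (0-based).
    job_seq[i]    : ordered job indices of group i."""
    t = 0.0                      # current time on the single machine
    cmax, twt = 0.0, 0.0
    prev_group = None
    for k, g in enumerate(group_seq):          # k = group position (0-based)
        # setup: r_i for the first scheduled group, otherwise the
        # sequence-dependent setup GS from the previously scheduled group
        t += inst.r0[g] if prev_group is None else inst.gs[prev_group][g]
        total_p = sum(inst.p[g])               # sum of normal times in group g
        remaining = total_p                    # normal time not yet scheduled
        for j in job_seq[g]:                   # jobs in their scheduled order
            # Eq. (1): actual processing time = P_ij * (remaining ratio)^alpha
            # * beta^(k), with k the 0-based group position (beta^(k-1) 1-based)
            p_act = inst.p[g][j] * (remaining / total_p) ** inst.alpha \
                    * inst.beta ** k
            t += p_act                         # completion time of job j
            cmax = max(cmax, t)
            twt += inst.w[g][j] * max(0.0, t - inst.d[g][j])
            remaining -= inst.p[g][j]          # job j is now already scheduled
        prev_group = g
    return cmax, twt


# Example use with the placeholder instance defined earlier:
# cmax, twt = evaluate_schedule(example, [1, 0], [[0, 2, 1], [1, 0]])
```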
Hybrid Pareto artificial bee colony algorithm

The artificial bee colony (ABC) algorithm, proposed by Karaboga (2005), is a popular algorithm based on the foraging behavior of a honey bee swarm. The ABC algorithm is composed of three kinds of bees, called employee bees, onlooker bees and scout bees; the numbers of employee bees and onlooker bees are equal. A food source in the ABC algorithm represents a solution of the problem, and the nectar amount of the food source indicates the corresponding fitness of the solution. Employee bees travel in the field, taste different food sources and record their nectar amounts, which identify the objective values of the food sources. The employee bees report these nectar values to the onlooker bees waiting in the dance area of the hive. The onlooker bees investigate the employee bees, select the best food sources among them, and decide the future directions in which the employee bees travel for further search. An employee bee whose food source gives the same nectar amount for a known number of cycles (called the limit) is turned into a scout bee, and the scout bee chooses a new direction of travel to search for food sources randomly. This cycle is repeated for a known number of algorithm cycles, and the best food source ever found is taken as a near optimal solution of the considered optimization problem.

The problems investigated in the literature are quite different from the current research problem of simultaneous group scheduling and job sequencing. A solution of the current problem must contain a sequence of the groups of jobs and, within each group, a sequence of its jobs. This different solution requirement means a new food source representation is needed to handle group scheduling and job sequencing in each group simultaneously. The flowchart of the proposed HPABC is shown in Fig. 2, and the step wise procedure of the proposed HPABC algorithm is presented in this section.

Fig. 2 Flowchart of the proposed HPABC algorithm

Encoding of food source

The food source in the current problem is designed to consider both the sequence of the groups and the schedule of the jobs in each group. The food source is composed of two layers. The first layer represents the permutation encoding of the groups of jobs and is called layer 1 of the food source. The second layer represents the sequence of jobs in each group and is called layer 2 of the food source, as shown in Fig. 3. It can be seen from Fig. 3 that there are three groups of jobs in layer 1 of the food source. The food source presented in Fig. 3 indicates that the second group of jobs is processed at the first priority, then group 1, and finally group 3. Layer 2 of the corresponding food source indicates that the sequence of jobs in the first scheduled group (i.e. group 2) is 2, 4 and 7, the sequence of jobs in group 1 is 1, 5, and the sequence of jobs in group 3 is 3, 8 and 6, respectively. The proposed encoding is significant for the current group scheduling problem because several schedules of the groups can be formed and, within each group, different sequences of the jobs, and this kind of food source can be tasted by the employee bees to identify the best food source in the proposed HPABC algorithm.

Fig. 3 Food source representation of the group scheduling and job sequencing problem

Initializing food sources

The food source population is generated randomly to include different kinds of food sources for tasting. These food sources are tested, and an employee bee tastes a food source only if the solution represented by it satisfies all the constraints of the problem; otherwise, the food source is created again randomly. The number of food sources generated is equal to the number of employee bees.

Send employee bees

The employee bee phase of the proposed HPABC is composed of the following steps. Send each employee bee to its respective food source to taste it and get the nectar amount. In the proposed HPABC algorithm, each employee bee creates and tastes a known number of neighbor food sources of the original food source given to it. The neighbors of a food source have the same solution in layer 1 but different solutions in layer 2. This increases the possibility that, for each schedule of the groups, different job sequences can be formed. In order to create neighbor food sources, a random vector is generated in which the number of elements is equal to the number of groups and each element takes the value 0 or 1.
The 0 value corresponds to the condition that while making a neighbor food source, the sequence of jobs in the corresponding group is not changed. Whereas the value of 1 in an element of the random vector corresponds to the condition that the jobs of the corresponding group can change their sequence to make a neighbor. Swap mutation is used to change the position of jobs in the groups which are allowed to change their job positions according to the values (i.e. 0 or 1) appearing in the random vector. The random vector and the procedure of swap mutation to change the sequence of jobs for different groups for a food source to create its neighborhood food a source is indicated in Fig. 4. It can be seen from Fig. 4 that the random vector has elements equal to the number of groups in the food source i.e. 3 elements. The elements in the random vector which have value of 1 allowed their corresponding groups to change the corresponding jobs sequences in them. As can be seen from Fig. 4 that the second and third element of random vector has values of 1 and the corresponding groups in the food source are job group 1 and group 3 and they are appeared in grey color. The jobs appearing in these groups in the layer 2 of the food source can change the position of jobs in them by swap mutation, i.e. job 5 and job 1 are interchanged in the group 1 and job 6 and job 3 are interchanged in group 3. Each employee bee creates E neb number of its neighbors and for each neighbor, there is a new random vector. Current problem is multi objective optimization problem and therefore, each food source is required to be observed on all objectives. Therefore, nectar amount of food source ingredients is computed in this stage, each ingredient corresponds to an objective. The nectar amount of food source ingredients are illustrated in Eqs. (15) and (16). $$Nec_{1} = C_{\hbox{max} }$$ $$Nec_{2} = \sum\limits_{i = 1}^{m} {\sum\limits_{j = 1}^{{n_{i} }} {w_{ij} T_{{J_{ij} }} } }$$ where, \(T_{{J_{ij} }} = \hbox{max} \left\{ {0,C_{ij} - d_{ij} } \right\}\), d ij is the due date of job J ij , w ij is the weight related to the job J ij . In this step, non-dominated sorting of the food source neighbors of each employee bee along with the stored food source of each employee bee (if there is some food source in archive of each employee bee) is performed separately (Deb et al. 2002). In non-dominated sorting, a food source S dominates another food source F i.e. S ≺ F if food source S is better than the food source F in all of its ingredients. Further, S is strictly better than F in at least one of the food source ingredient value. Non-dominated solutions from the food source neighbors of each employee bee are separately identified from the population of neighborhood food sources of each employee bee. These non-dominated food sources of each employee bee are separately graded. The grade 1 non-dominated food sources are those to which no other solutions can dominate. The grade 1 food non-dominated food sources might be more than one for each employee bee. For each employee bee there is possibility that they can have more than one food source as non-dominated in grade 1 and in this situation, a food source which is in the middle range of the Pareto front is given priority in the proposed HPABC algorithm because the middle values on the Pareto front are more near to the optimal values. 
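To make the two-layer encoding and the neighborhood move concrete, the minimal Python sketch below builds the food source of Fig. 3 and generates neighbors using the random 0/1 vector and swap mutation described above. The data structure and function names are illustrative assumptions, not the authors' implementation.

```python
import random

# A food source: layer 1 is the group permutation, layer 2 maps each group
# to its job sequence (illustrative structure matching the Fig. 3 example).
food_source = {
    "layer1": [2, 1, 3],                                  # groups processed in this order
    "layer2": {2: [2, 4, 7], 1: [1, 5], 3: [3, 8, 6]},    # job sequence inside each group
}

def make_neighbor(source):
    """Create one neighbor: layer 1 is kept, jobs of groups flagged with 1 are swap-mutated."""
    flags = [random.randint(0, 1) for _ in source["layer1"]]   # random 0/1 vector, one entry per group
    neighbor = {
        "layer1": list(source["layer1"]),
        "layer2": {g: list(seq) for g, seq in source["layer2"].items()},
    }
    for group, flag in zip(neighbor["layer1"], flags):
        seq = neighbor["layer2"][group]
        if flag == 1 and len(seq) > 1:
            i, j = random.sample(range(len(seq)), 2)           # swap mutation of two job positions
            seq[i], seq[j] = seq[j], seq[i]
    return neighbor

# Each employee bee would call make_neighbor E_neb times on its own food source.
neighbors = [make_neighbor(food_source) for _ in range(5)]
```

Because layer 1 is copied unchanged, every neighbor keeps the same group schedule while exploring different job sequences inside the flagged groups, which is the intent of the random vector in Fig. 4.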
Furthermore, there is requirement of the diversity in the solutions and therefore, a new value of the nectar value is designed here which can combine the effect of the middle point food sources on the Pareto front and the diversified food sources from the front. Equation (17) indicates the nectar amount of the food source i. $$Nec_{i} = \left( {W_{cd} \times C_{d} } \right)_{i} + \left( {W_{mc} \times MS} \right)_{i}$$ where, W cd is the weightage given to the crowding distance of the food sources, C d is the crowding distance (Deb et al. 2002) of the non-dominated food sources, W mc is the weightage given to the Pareto points on the middle of the Pareto front and MS i is the middle score which can be computed from the relation shown in Eq. (18). $$MS_{i}^{e} = d_{i,1} \times d_{1,2} ,\quad \forall 1 < i < h$$ where, h is the number of Pareto points on the front for the Pareto solutions of the neighbors of an employee bee, \(MS_{i}^{e}\) is the selection function of a Pareto point i. The larger value of \(MS_{i}^{e}\) can give a solution which is more in the middle on the Pareto front. \(MS_{i}^{e}\) defines the product of the Euclidean distance of a Pareto point i on the front from the two extreme points on the Pareto front. Neighbor food sources of each employee bee are sorted on the basis of the value of the nectar amount computed from Eq. (17) and one best food source from the neighbors of each employee bee is selected. From the population of the selected food source of each employee bee (population has one best food source from each employee bee), tournament selection is performed to select two food sources from them and they are named as parent food sources. Precedence preservative crossover (PPX) operation is performed between them to share information between them. The PPX operation is performed on the layer 2 of the parent food sources of the employee bees. In order to perform PPX crossover, a random vector is formed similar in structure with the food sources. The elements in the layer 1 of this random vector have values of 0 or 1 and each element corresponds to a job group. For example, the first element of this random vector corresponds to the group appearing on first position in layer 1 of the food source. The value of 0 in the element of first layer shows that the corresponding group has no crossover between the parent food sources. If there appears 1 in the element of the layer 1 of this random vector, means there is crossover operation in the corresponding job group of the parent food sources and the crossover is performed for the group appearing in parent 1 food source. The layer 2 of the random vector indicates the values of either 1 or 2. The value 1 corresponds to parent 1 and 2 corresponds to the second parent. If there appears 1 in the element of the layer 2 of the random vector, it indicates that the corresponding value of the element in the offspring 1 is filled with the value appearing in the parent 1 and same value is deleted from the parent 2, as shown in Fig. 5. In order to generate the second offspring, the element values of the random vector in the layer 2 are reversed, i.e. replace 1 with 2 and replace 2 with 1 to make a new random vector to create random vector for the second offspring. The proposed PPX crossover operation is indicated in Fig. 5. It can be seen from Fig. 5 that there are three groups in parent 1 and parent 2 of the food sources. The random vector has three elements in layer 1 and the values appearing in it are 1, 0 and 0. 
The 1 value indicates that there is crossover operation in the group which is appearing at the first location in layer 1 of the parent food source 1 i.e. group 2. Therefore, the crossover is performed between the parent food source for the group which is appeared at position 1 in the parent 1 food source and the same group in parent 2 i.e. group 2. The random vector is reversed to replace 1 with 0 and replace 0 with 1 for the creation of second offspring. The layer 1 of random vector for second offspring indicates the groups of parent 2 which can have crossover operation. In this step, non-dominated sorting is performed between the neighbor food sources and offspring food sources of the two selected employee bee (the employee bee from which parent food sources are obtained) separately and the one best non-dominated food source on the basis of nectar value shown in Eq. (17) from each is stored in their archive. Each employee bee has a separate archive to store the selected neighbor of each employee bee separately. The selected food source from the non-dominated sorting of each employee bee neighbor food sources, they all are stored in an archive of each employee bee and their archive is updated after each cycle of the algorithm. One best food source neighbor from each employee bee which has maximum nectar value of Eq. (17) is selected and the selected neighbor food source from each employee bee is sent to the onlooker bees. Creation of a neighborhood food source of an employee bee PPX crossover in layer 2 to create offspring 1 Send onlooker bee Onlooker bee phase of the proposed HPABC algorithm is composed of the following steps: Onlooker bee stage of the proposed HPABC have a separate archive to store the best food sources found after finishing the onlooker bee stage. In this step, the food sources in the archive and the food sources given by the employee bee are combined to make a single population and non-dominated sorting is performed between them. The food sources which are appearing on the middle of the Pareto front are given priority and two of the best food sources from the Pareto front are obtained based on the nectar value appearing in Eq. (17). The selected two food sources are considered as parent food sources in onlooker bee phase. They are allowed to crossover for N times to create 2N number of offspring. The crossover is allowed to be performed only in layer 1 of the food sources. The procedure of crossover in layer 1 is indicated in Fig. 6. It can be seen from Fig. 6 that a random vector is created which can give the values of either 1 or 2. These values correspond to parent 1 and parent 2 respectively. When there appears value of 1 in the random vector, the corresponding element of the offspring is filled with the same elements in layer 1 and layer 2 of the parent food source 1 and similar group is deleted from the food source of parent 2. For example, the first element of the random vector is 2, it means the first element of the offspring will be filled with the first element of the parent 2 and the first element with layer 1 and layer 2 of parent 2 is copied to the first element of the offspring 1 and it is deleted from the parent 1 (as described by a small arrow in element containing the same group of jobs from parent 1 food source, i.e. group 3 is deleted from parent 1 once it is appeared in the offspring 1). The same procedure is followed to create the second offspring but the random vector is reversed, i.e. 
the value 1 appearing in the random vector is changed to 2 and the value appearing as 2 in random vector is changed to 1 for making a new random vector for offspring 2. Non-dominated sorting is performed between 2 N number of offspring, the food sources in the onlooker bee stage and the food sources in archive to get a Pareto front. The nectar value of the food sources is computed using relation given in Eq. (17) and the food sources are sorted on the basis of this nectar amount. The best food sources are stored in the archive for next cycle of the algorithm and the X % of the population of the food sources for the employee bee for next cycle of algorithm is taken from this archive and remaining is obtained from scout bee. PPX crossover in layer 1 to create offspring food sources Send scout bee The scout bees are used to introduce diversity in the food source population and they introduce new food sources to the employee bees. Scout bee can create random food sources and give this information to the employee bees. Taguchi experimental design Artificial bee colony algorithm, like most other searching algorithms, is mainly influenced by values of parameters. These parameters can be set manually or by applying different setting approaches such as full factorial experiment. This is a comprehensive approach but it would lose its efficiency by increasing the number of parameters (Montgomery 2000; Karimi et al. 2011), while in Taguchi method, a large number of decision variables would be tuned through a small number of experiments. Taguchi method is used to design set of experiments in the form of an orthogonal array (OA). In OA, different levels of each parameter are defined and for each experiment there exist different combination of parameter levels to make different set of experiments. Each experiment has different levels of parameters consisting of different values. The number of columns in this matrix represents different parameters and the rows represents the number of experiments, each containing different set of parameters. These set of experiments with each containing different of levels of parameters, signal to noise (S/N) ratio is determined. S/N is the ratio of the objective function value obtained for an experiment with the variance value of the objective function. Taguchi method is used to determine best set of levels of all parameters of algorithm which can give maximum value of the S/N, i.e. best objective function value with less variations in its values. This method can identify the robust values of parameters which can be used for different instances of the problems. Data generation and test case specifications The proposed HPABC algorithm is tested against several test problems. These test problems are much closer to the real-world problems. The main purpose of applying group scheduling techniques in production is to decompose the complex production problems. Thus, in industrial environment neither too many groups nor too many jobs in each group are expected to be assigned. According to relevant previous research, the maximum number of groups consider in current study is set equal to 20, and the maximum number of jobs in each group is set equal to 16. 
The number of groups is varied from 2 to 4, 5 to 8, 9 to 12, 13 to 16, and 17 to 20 for small, small medium, medium, large medium, and large problem instances, while the number of jobs in each group is a random integer taken from a discrete uniform (DU) distribution, DU [2, 4], DU [5, 7], DU [8, 10], DU [11, 13], and DU [14, 16] for small, small medium, medium, large medium, and large problems, respectively. The experiments are implemented on these five sizes of problem: small, small medium, medium, large medium, and large, which are shown in Table 1. The specifications of the required data for all the problems are as follows: Table 1 Characteristics of different size of test problem Processing times of jobs are drawn from DU [5, 25]. Setup times between groups are generated from DU [5, 50]. Defining proper due dates can positively affect the performance of the algorithms on the basis of previous work (Bozorgirad and Logendran 2012; Zandieh and Karimi 2011). Two different factors are introduced to define due dates: the tardiness factor (τ) and the due date range factor (R). The tardiness factor (τ) is used to create loose or tight due dates and is defined as \(1 - \bar{d}/C_{max}\), where \(\bar{d}\) is the average due date and \(C_{max}\) is the maximum completion time of all jobs. Tight or loose due dates can be obtained by a large or small value of τ, respectively. Moreover, the due date range factor (R) decides the variability of due dates. The range factor is equal to \((d_{max} - d_{min})/C_{max}\), where \(d_{min}\) is the minimum due date among all the jobs and \(d_{max}\) is the maximum one. Different combinations of τ and R can provide different characteristics for randomly generated due dates. In the current research, the values of τ and R are set to 0.4 and 0.6, respectively, which provide moderately tight and widely spread due dates. The due dates are then uniformly distributed over the interval \([ {\bar{d} - R\bar{d}, \bar{d}}]\) with probability τ and over the interval \([ {\bar{d},\bar{d} + \left( {C_{max} - \bar{d}} \right)R} ]\) with probability (1 − τ). Job weights are generated from the uniform integer distribution [1, 4]. The learning effect indexes are set as α = 1.5 and β = 0.9. Tuning of proposed algorithm parameters with Taguchi method To begin the tuning of parameters, the parameters which can affect the performance of the proposed HPABC are identified. These factors are the size of the food source population, the number of neighborhoods of the algorithm, and the maximum number of algorithm cycles; they are named here as population size, neighborhoods, and cycles, respectively. The parameter values are set against different levels, which are illustrated in Table 2. Each column of Table 2 indicates the different values of a parameter and its corresponding levels. For example, level 2 of the first parameter represents that the number of food sources in the experiment is 60, level 2 of the second parameter indicates that the number of neighborhoods of the algorithm is 30, and level 2 of the third parameter indicates the number of cycles of the algorithm, i.e. 100. The experiments for the three parameters, each with 5 levels, are run 10 times for each instance. In total, 4750 (19 × 25 × 10) runs of the proposed HPABC algorithm are carried out to obtain the best level of parameters for the different sizes of problems.
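The sketch below illustrates how a test instance and its due dates could be drawn from the specification above. The inversion \(\bar{d} = (1-\tau)C_{max}\) follows from the definition \(\tau = 1 - \bar{d}/C_{max}\), and the reference \(C_{max}\) would come from some baseline schedule; both of these, as well as the function names, are assumptions made only for illustration.

```python
import random

def generate_due_dates(c_max, n_jobs, tau=0.4, R=0.6):
    """Draw due dates with tardiness factor tau and range factor R.

    The average due date is fixed by tau = 1 - d_bar / C_max; each due date
    lies in [d_bar - R*d_bar, d_bar] with probability tau and in
    [d_bar, d_bar + (C_max - d_bar)*R] with probability 1 - tau.
    """
    d_bar = (1.0 - tau) * c_max
    due = []
    for _ in range(n_jobs):
        if random.random() < tau:
            due.append(random.uniform(d_bar - R * d_bar, d_bar))
        else:
            due.append(random.uniform(d_bar, d_bar + (c_max - d_bar) * R))
    return due

def generate_instance(n_groups, jobs_lo=2, jobs_hi=4):
    """Random instance in the spirit of the data specification (small size shown)."""
    jobs_per_group = [random.randint(jobs_lo, jobs_hi) for _ in range(n_groups)]
    processing = [[random.randint(5, 25) for _ in range(n)] for n in jobs_per_group]   # DU[5, 25]
    setups = [[random.randint(5, 50) for _ in range(n_groups)] for _ in range(n_groups)]  # DU[5, 50]
    weights = [[random.randint(1, 4) for _ in range(n)] for n in jobs_per_group]       # DU[1, 4]
    return jobs_per_group, processing, setups, weights
```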
Table 2 Effective parameters of the proposed algorithm Table 3 Orthogonal array (OA) for Taguchi design of experiments for the proposed algorithm In the current experiment design, each problem is tested according to the different levels of parameters given in the proposed OA, shown in Table 3, and the corresponding values of the two objective functions are computed. Once each problem has been tested according to the sets of parameters given in the OA, the mean value of the objectives for each level of each parameter is computed for each problem. For example, the mean value of the objectives for the parameter 'population size' at level 1 is obtained from the first five experiments of the OA matrix. Similarly, the mean value for the parameter 'population size' at level 2 is acquired by averaging the objective values obtained from the next five experiments. The same procedure is employed to get the mean value of the objectives for each parameter at each level. Then the mean of the mean objective values (called the mean of means) for each level of each category of problems is computed. Furthermore, the measured values obtained through the experiments are transformed into a signal-to-noise (S/N) ratio. This ratio reflects the amount of variation in the response variable. The signal-to-noise ratio can be categorized in different ways according to the characteristics of the response: continuous or discrete; nominal-is-best, smaller-the-better, or larger-the-better. Based on the features of the current scheduling problem, the current research applies nominal-is-best. The considered S/N value is indicated in Eq. (19). $$\left( {S/N} \right)_{\text{nominal}} = 10\log \left( {\frac{{\left( {mean} \right)^{2} }}{{\left( {Variance} \right)^{2} }}} \right)$$ where (mean)² indicates the mean value of the optimizing objective and (Variance)² is the variance term in the values of the optimizing objective. S/N values for each objective of the different problems are calculated according to the OA, and then the mean value of S/N of each objective for each level of each parameter is computed. Later, the mean of the S/N values (called the mean of S/N) for each level of each category of problems is computed. In the experiments, the mean S/N values of the small size problems are infinite due to a zero value of the variance. However, the mean value of means and the mean value of S/N for the considered problems are indicated in Figs. 7 and 8, respectively. A graphical method is employed here to identify the specific level of the different parameters for each category of problems. Fig. 7 Mean value of means for different level of parameters Fig. 8 Mean value of S/N for different level of parameters In the current case, the level of a parameter which gives a small value of the optimizing objectives is preferred, because the objective functions are minimizing objectives. Moreover, the level of a parameter at which the maximum value of S/N is obtained is preferred. The optimum level of parameters for each category of problem is obtained by observing both the mean value of means and the mean of S/N values of all objectives for each category of problem. The optimum level of parameters for each category of problem is illustrated in Table 4. Table 4 Optimum level of parameters of proposed HPABC for each category of problem Experimental results In this section, the performance of the proposed HPABC algorithm is tested using the optimum level of parameters obtained in the previous section.
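Before moving to the results, the nominal-is-best statistic of Eq. (19) used in the tuning step can be sketched as follows. Reading the printed (Variance)² as the squared standard deviation, i.e. the conventional mean²/variance form, is an assumption of this sketch, and the example values are purely illustrative.

```python
import statistics
from math import log10, inf

def sn_nominal(values):
    """Nominal-is-best S/N ratio for the responses obtained at one parameter level.

    Implements 10*log10(mean^2 / variance); when the variance is zero the ratio
    is infinite, as reported for the small-size problems in the experiments.
    """
    mean = statistics.mean(values)
    var = statistics.pvariance(values)
    if var == 0:
        return inf
    return 10 * log10(mean ** 2 / var)

# Example: makespan values of the five OA experiments run at one parameter level.
print(sn_nominal([212, 208, 215, 210, 209]))
```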
Several instances of different categories of problem which are presented before are analyzed using HPABC algorithm and other three famous multi-objective optimization algorithms in literature, i.e. non-dominated sorting genetic algorithm II (NSGAII) (Deb et al. 2002), the improved strength Pareto evolutionary algorithm (SPEA2) (Zitzler et al. 2001) and particle swarm optimization algorithm (PSO) (Kennedy and Eberhart 1995). The parameters of SPEA2, NSGAII and PSO used for different size of problems are also obtained from different runs of experiment and the parameters values which can give good results for SPEA2, NSGAII and PSO are selected for them. The selected values of parameters of the three considered algorithms for the tested instances are illustrated in Table 5. 'Pop' is used to represent population size, 'Nei' indicates Number of neighborhoods, and 'Cyc' is a shortened form of Maximum number of algorithm cycles. Each experiment of each instance is run 10 times by each algorithm. The results of the proposed HPABC algorithm are compared with that obtained from SPEA2, NSGAII and PSO. The proposed HPABC, SPEA2, NSGAII and PSO are all coded in Visual C# and run on an Intel Core i7, 3.4 GHz CPU, 4 GB RAM computer. The performance of proposed HPABC algorithm is compared with SPEA 2, NSGA II and PSO algorithm based on different metrics including diversity and quality of solutions, inverted generational difference and spacing of Pareto points on the Pareto fronts. The comparison of results based on each comparison metric is indicated in this section. Table 5 Parameters for HPABC, SPEA2, NSGA II and PSO algorithm for different categories of problems The diversity and the quality of non-dominated solutions In order to assess the performance of algorithms, the measures of diversity and quality which have been firstly applied by Hyun et al. (1998) are used in current study. The measures of diversity and quality are also used by Zandieh and Karimi (2011) called as, qualitative and quantitative measures. Both Hyun et al. (1998) and Zandieh and Karimi (2011) have presented the relation to determine the quality of Pareto. However, their studies have not considered common Pareto optimal solutions of the two different algorithms. There is possibility that true Pareto front can have some common optimal Pareto points both from HPABC and other comparison algorithms. Therefore, in current study a new measure of quality with a small improvement based on the previous work is presented. Since each algorithm finds out near Pareto optimal solutions, a solution found by algorithm A could dominate that found by another algorithm B, or vice versa. Putting together all the solutions found by A and B, non-dominated between them is performed. Some of Pareto points are the common solutions discovered by A and B simultaneously, some of them are only discovered by A or B respectively. Assuming that N A and N B are Pareto optimal solutions of algorithms A and B respectively, the combined Pareto front have N T Pareto optimal solutions which is less than N A + N B . N com is defined as the number of common Pareto solutions found by A and B, N T A and N T B indicate the number of Pareto solutions of algorithms A and B in the combined Pareto front respectively. Diversity measure of each algorithm is its number of Pareto optimal solutions (N A and N B respectively) and is shown in Table 6. The quality measure (Qua i ∀i = A, B) is a ratio calculated from the relation indicated in Eq. 
(20) $$Qua_{i} = \frac{{N_{T}^{i} - N_{com} }}{{N_{T} - N_{com} }}\quad \forall i = A,B$$ Table 6 Results based on comparison of diversity The ratio may be used to indicate which algorithm is better in terms of solution quality. In this way, every pair of algorithms is compared, and the outcomes are shown in Table 7. Table 7 Results based on comparison of quality Comparison of diversity The results based on comparison of diversity for the proposed HPABC, SPEA 2, NSGA II and PSO algorithms are indicated in Table 6. It can be seen from the Table 6 that in most of the studied problems, HPABC gives better results and gives more number of Pareto points as compared to NSGA II, SPEA 2 and PSO algorithm. For instance, for small medium size of problems, 31.50 average number of Pareto solutions are found for the case problem containing 7 groups by HPABC. However, SPEA 2 gives 14.10, NSGA II gives 16.40 and PSO gives 14.80 average number of Pareto solutions for the same problem instance. Moreover, for medium size of problems, 27.40 average number of Pareto solutions are obtained for the problem containing 10 groups instance by HPABC. However, SPEA 2 gives 10.80, NSGA II gives 16.80 and PSO gives 14.70 average number of Pareto solutions for the same problem respectively. Furthermore, for large medium size of problems, 16.90 average number of Pareto solutions are found for the case with 13 groups by HPABC while, SPEA 2 gives 7.70, NSGA II gives 9.80 and PSO gives 8.50 average number of Pareto solutions respectively for the same problem. In addition, for large size of problems, 9.20 average number of Pareto solutions are gained for the case with 19 groups by HPABC. However, SPEA 2 gives 4.20, NSGA II gives 5.10 and PSO gives 4.70 average number of Pareto solutions for large problem case with 19 groups respectively. Results shown in Table 6 indicates that, for small number of group problems, HPABC, NSGA II and PSO gives almost same average number of Pareto solutions. However, for rest of all problems belonging from each group size, HPABC outperforms NSGA II, SPEA 2 and PSO on the basis of diversity comparison results. The results based on comparison of diversity for different size of problems are indicated in Fig. 9. The average number of Pareto solutions for each size of problems by each algorithm is calculated from mean value of Pareto solutions of all instance of each size of problems and presented in Fig. 9 for each size category of problems. It can be seen from Fig. 9 that the number of Pareto points obtained from HPABC are larger than that from SPEA2, PSO and NSGAII against all size of problems and when the job groups become larger from small size to small medium size, the average Pareto solutions is increasing because of the extension of solution space. However, the average number of Pareto solutions is decreasing gradually from small medium size to large size due to the increasing complexity of the problem. These results indicate that, for small size problems, the Pareto solution points are less due to less search space of solutions for small problems. The number of Pareto solutions increases as the size of problem increases in the medium size problems and number of Pareto solutions decreases as the problem size increases and becomes larger than the average size problems. This is due to increase in complexity of problem as its size increases. This patterns of number of Pareto solutions is similar for HPABC, NSGA II, PSO and SPEA2. 
However, in all problem sizes, the number of Pareto solutions obtained from HPABC are more as compared to NSGA II, PSO and SPEA 2 algorithm. The average number of Pareto solutions for all instances of different size of problems obtained from the four different algorithms Comparison of quality The Pareto results obtained from HPABC, SPEA 2, PSO and NSGA II are also compared by computing the ratio of number of Pareto solutions obtained from one algorithm to the number of Pareto solutions obtained from other comparison algorithms. The results based on comparison of quality are indicated in Table 7. It can be seen from Table 7 that the ratios of quality for small size problems between HPABC and SPEA2 are (100:0 %) because HPABC obtained the true Pareto solutions for the small size of problems. Moreover, NSGAII and PSO also can find the true Pareto solutions for all the small size problems. Thus, the ratios of quality between NSGAII and SPEA2 and between PSO and SPEA2 are also (100:0 %). While the ratios between HPABC and NSGAII, HPABC and PSO and NSGAII and PSO are undefined on account of 0/0. However, for the rest of the problems HPABC can outperform NSGAII, PSO and SPEA2. For all instances in different size of problems, the average quality ratio between HPABC and SPEA2 is (85.08:14.92 %), and the average quality ratio is (59.49:40.51 %) between HPABC and NSGAII, while the average quality ratio is (64.42:35.58 %) between HPABC and PSO. Overall results indicate that, HPABC can give the best performance both in diversity and in quality. NSGAII is demonstrated to be the second best both in terms of the number of non-dominated solutions and the quality of solutions, While PSO is tested to be the third best of the four algorithms. SPEA2 shows the worst results for both of the measures. However, for 12 groups instance of the medium size problem, the result of SPEA2 is a little better than NSGAII. Inverted generational distance The current problem has two objectives, so the results of all instances of each category of problems from HPABC, SPEA2, PSO and NSGAII algorithms are sets of Pareto fronts. The inverted generational distance (GD) value is used to investigate the performance of the proposed HPABC, SPEA2, PSO and NSGAII by estimating the distance of elements in the Pareto optimal solutions from the true Pareto front. The value of GD is computed from the relation indicated in Eq. (21). $$GD = \frac{{\sqrt {\sum\nolimits_{i = 1}^{h} {d_{i}^{2} } } }}{h}$$ where, d i is the Euclidean distance between a Pareto optimal solution in the Pareto front and the nearest Pareto point in the true Pareto front, h is the number of Pareto optimal solutions in the Pareto front. The smaller value of GD indicates that the Pareto optimal solution is closer towards the true Pareto front and can give the near optimal solution. The comparison of HPABC, SPEA2, PSO and NSGAII on the basis of GD value for different size of problems at 10 runs of each problem is indicated in Fig. 10 using box plots. It can be seen from Fig. 10a that, the GD values of small size problems from 10 runs of HPABC, PSO and NSGAII respectively is always zero. These results indicate that the true Pareto fronts is found for the problems of small size by proposed HPABC, PSO and NSGAII. However, the GD values of SPEA2 for small size problems indicate the performance of SPEA2 is worse than HPABC, PSO and NSGAII. The GD results shown in Fig. 
10b demonstrate that the proposed HPABC performs better than NSGAII, PSO and SPEA2 for small medium size problems. The error point of 6 groups case of HPABC indicate that a weak solution is found from 10 runs of this problem with HPABC. However, variations of GD values of HPABC is obviously less as compared to NSGAII, PSO and SPEA2. The GD values for medium size problems against HPABC, NSGA II, PSO and SPEA2 algorithm are shown in Fig. 10c. In Fig. 10c the problem containing 12 groups has large variations in the GD values for all comparison algorithms and GD values are divided by 5 to show in Fig. 10c. It can be seen from Fig. 10c that HPABC outperforms SPEA2, PSO and NSGAII in GD value for the medium size of problems. In addition, the GD values of these four algorithms are increasing with the increase of job groups respectively. The results based on GD value of different algorithms for large medium size problems and large size problems are indicated in Fig. 10d, e respectively. These two figures also show that HPABC can give the optimal solutions due to the smaller GD values for large medium size problems and large size problems. GD values of proposed HPABC, SPEA2, NSGAII and PSO algorithm for the problems from different size of problems. a GD values of 10 runs of each problem in Small size by the four algorithms respectively. b GD values of 10 runs of each problem in Small Medium size by the four algorithms respectively. c GD values of 10 runs of each problem in Medium size by the four algorithms respectively. d GD values of 10 runs of each problem in Large Medium size by the four algorithms respectively. e GD values of 10 runs of each problem in Large size by the four algorithms respectively Spacing metric Spacing metric is used to measure the distribution of Pareto points on the Pareto front. It is assumed that there are number of Pareto solutions on a front. Then SP can be computed from Eq. (22). $$SP = \sqrt {\frac{1}{k - 1}\sum\limits_{u = 1}^{k} {\left( {d_{avg} - d_{u} } \right)^{2} } }$$ where, \(d_{u} = \mathop {\hbox{min} }\nolimits_{v} \left[ {\mathop \sum \nolimits_{a = 1}^{O} \left| {Z_{a}^{u} - Z_{a}^{v} } \right|} \right],\quad \forall u,v = 1,2, \ldots ,k\), k indicates the number of solutions in the Pareto front, d avg is the mean of all d u , Z a u represents the value of objective a, O is the total number of objectives. It can be seen from Eq. (22) that smaller value of the SP is desirable. Moreover, the zero value of SP indicates that all the Pareto points on the front are equidistant to each other and the Pareto points are evenly distributed on the front. The comparison of the performance of HPABC, SPEA 2, PSO and NSGAII algorithm based on the SP values from different 10 runs of experiment of different size of problems is indicated in Fig. 11 using box plots. It can be seen from Fig. 11a that, for small size problems, the performances of HPABC, PSO and NSGAII on the basis of SP values are same due to the same solution points found by these three algorithms. While the SP values of SPEA2 are smaller than HPABC, PSO and NSGAII for 3 groups instance and 4 groups instance. Nevertheless, it does not indicate that SPEA2 performs better than the other three algorithms because maybe only 1 or 2 solutions are found by SPEA2 at most runs of the problems. As shown in the rest of the figures in Fig. 11, for most of the problems in different size, the proposed HPABC gives better results of SP value as compared to SPEA2, PSO and NSGAII. 
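The three comparison metrics used in this section, the quality ratio of Eq. (20), the inverted generational distance of Eq. (21), and the spacing metric of Eq. (22), can be expressed in a short sketch for fronts stored as tuples of (makespan, total weighted tardiness) under minimization. This is an illustrative implementation under those assumptions, not the code used in the reported experiments.

```python
import math

def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def quality_ratio(front_a, front_b):
    """Quality measure of Eq. (20) for the fronts of two algorithms A and B."""
    merged = list(set(front_a) | set(front_b))
    combined = [p for p in merged if not any(dominates(q, p) for q in merged if q != p)]
    n_common = sum(1 for p in combined if p in front_a and p in front_b)
    n_t = len(combined)
    if n_t == n_common:                       # both fronts coincide: 0/0, undefined as noted in the text
        return float("nan"), float("nan")
    qa = (sum(1 for p in combined if p in front_a) - n_common) / (n_t - n_common)
    qb = (sum(1 for p in combined if p in front_b) - n_common) / (n_t - n_common)
    return qa, qb

def inverted_gd(front, true_front):
    """Eq. (21): root of the summed squared nearest distances to the true front, divided by h."""
    d = [min(math.dist(p, t) for t in true_front) for p in front]
    return math.sqrt(sum(x * x for x in d)) / len(front)

def spacing(front):
    """Eq. (22): spread of the Manhattan nearest-neighbour distances d_u (needs k > 1 distinct points)."""
    k = len(front)
    d = [min(sum(abs(a - b) for a, b in zip(u, v)) for v in front if v != u) for u in front]
    d_avg = sum(d) / k
    return math.sqrt(sum((d_avg - du) ** 2 for du in d) / (k - 1))
```

A smaller GD and a smaller SP are better, matching the interpretation given above, while the two values returned by quality_ratio correspond to the percentage pairs reported in Table 7.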
SP values of proposed HPABC, SPEA2, NSGAII and PSO algorithm for the problems from different size of problems. a SP values of 10 runs of each problem in Small size by the four algorithms respectively. b SP values of 10 runs of each problem in Small Medium size by the four algorithms respectively. c SP values of 10 runs of each problem in Medium size by the four algorithms respectively. d SP values of 10 runs of each problem in Large Medium size by the four algorithms respectively. e SP values of 10 runs of each problem in Large size by the four algorithms respectively Pareto fronts The performance of proposed HPABC, SPEA2, PSO and NSGA II algorithm on the basis of their Pareto fronts for an instance from different categories of problems are illustrated in Fig. 12. It can be seen from these figures that in different size of problems, the Pareto fronts generated by the proposed HPABC algorithm are always nearer to the true Pareto front which turns out HPABC is better than SPEA2, PSO and NSGAII for the proposed problem in current research. Pareto front of HPABC, SPEA2, PSO and NSGAII algorithm for different size of problems. a Pareto front of HPABC, SPEA2, PSO and NSGAII algorithm for the 2 groups case problem of small size problems. b Pareto front of HPABC, SPEA2, PSO and NSGAII algorithm for the 5 groups case problem of small size problems. c Pareto front of HPABC, SPEA2, PSO and NSGAII algorithm for the 10 groups case problem of small size problems. d Pareto front of HPABC, SPEA2, PSO and NSGAII algorithm for the 15 groups case problem of small size problems. e Pareto front of HPABC, SPEA2, PSO and NSGAII algorithm for the 19 groups case problem of small size problems Pareto fronts of HPABC, SPEA2, PSO and NSGAII algorithms for one of the small size problems, small medium size problems, medium size problems, large medium and large size problems are indicated in Fig. 12a–e respectively. It is shown in Fig. 12a that, Pareto fronts obtained from HPABC, PSO and NSGAII coincide because they can get all the true Pareto solutions for the instance with 2 groups of small size problems. While SPEA2 may only find some of the true Pareto solutions. It can be seen from Fig. 12b that for the 5 groups instance of small medium size problems, most Pareto points found by HPABC are nearer to the true Pareto front. From Fig. 12c it can be seen that the Pareto front obtained from HPABC for10 groups instance of medium size is much nearer to the true Pareto front as compared to the fronts obtained from NSGA II, PSO and SPEA 2 algorithms for the same problem. Figure 12d indicates that the solution points of HPABC are very near to the true Pareto front for the case with 15 groups of large medium size while SPEA2, PSO and NSGAII are a little bit far with respect to HPABC. Meanwhile, HPABC can obtain more number of Pareto solutions for the current instance of large medium size as compared to SPEA2, PSO and NSGAII. Furthermore, it can be seen from Fig. 12e for lager size problem with 19 groups, the Pareto points of HPABC can dominate much more Pareto points of SPEA2, PSO and NSGAII. However, SPEA2, PSO and NSGAII may randomly obtain a few point better than HPABC. These results indicate that their results might not be stable to find the optimal solutions for large size problems consistently. In conclusion, all Pareto results obtained from HPABC outperforms SPEA2, PSO and NSGAII and can generate optimal Pareto front for different category of problems in current study. 
Group scheduling problem has got lots of attentions in recent years because it is significant for efficient and cost effective production environment. In current study a single machine group scheduling problem involving SDS time and learning effect, is proposed here. Furthermore, multi objective optimization is considered to minimize the makespan and the total weighted tardiness time simultaneously due to the desire of multiple conflicting objectives at the same time in real environment. Moreover, a hybrid Pareto artificial bee colony (HPABC) algorithm, which integrates the original ABC algorithm with some steps of genetic algorithm and the Pareto optimality, is presented to get Pareto solution of the multiple objectives. The effective parameters of the proposed HPABC algorithm are tuned with robust experimental design procedure using Taguchi method. In this method five different sizes (small, small medium, medium, large medium and large) of test problems involving 19 instances are presented for the current problem. The proposed HPABC algorithm parameters are identified and tuned for each size of problems with Taguchi method. In order to assess the performance of HPABC algorithm, the computational experiments are carried out and the results based on diversity and quality measures, GD value, SP value and Pareto front reveal that the proposed HPABC outperforms SPEA2, PSO and NSGAII comprehensively. Future research can be extended by taking into account of simultaneous sequence dependent group scheduling and lot-sizing scheduling together. In addition, more practical applications need to be considered, e.g., multi-parallel machine scheduling, the uncertain arrival time of jobs and the machine reliability, etc. Furthermore, the proposed HPABC algorithm is desired to be further developed in terms of convergence and diversity. Adressi A, Hassanpour S, Azizi V (2016) Solving group scheduling problem in no-wait flexible flowshop with random machine breakdown. Decis Sci Lett 5(1):157–168 Ajorlou S, Shams I (2013) Artificial bee colony algorithm for CONWIP production control system in a multi-product multi- machine manufacturing environment. J Intell Manuf 24(6):1145–1156 Akay B, Karaboga D (2012) Artificial bee colony algorithm for large-scale problems and engineering design optimization. J Intell Manuf 23(4):1001–1014 Akbari R, Hedayatzadeh R, Ziarati K, Hassanizadeh B (2012) A multi-objective artificial bee colony algorithm. Swarm Evolut Comput 2(1):39–52 Anghinolfi D, Paolucci M (2009) A new discrete particle swarm optimization approach for the single-machine total weighted tardiness scheduling problem with sequence-dependent setup times. Eur J Oper Res 193(1):73–85 Bai J, Li ZR, Huang X (2012) Single-machine group scheduling with general deterioration and learning effects. Appl Math Model 36(3):1267–1274 Bozorgirad MA, Logendran R (2012) Sequence-dependent group scheduling problem on unrelated-parallel machines. Expert Syst Appl 39(10):9021–9030 Costa A, Cappadonna FA, Fichera S (2014) Joint optimization of a flow-shop group scheduling with sequence dependent set-up times and skilled workforce assignment. Int J Prod Res 52(9):2696–2728 Deb K, Pratap A, Agarwal S, Meyarivan T (2002) A fast and elitist multi-objective genetic algorithm: NSGA-II. IEEE Trans Evol Comput 6(2):182–197 Dudek RA, Smith ML, Panwalkar SS (1974) Use of a case study in sequencing/scheduling research. 
Omega 2(2):253–261 Egilmez G, Mese EM, Erenay B, Süer GA (2016) Group scheduling in a cellular manufacturing shop to minimise total tardiness and nT: a comparative genetic algorithm and mathematical modelling approach. Int J Serv Oper Manag 24(1):125–146 Gelogullari CA, Logendran R (2010) Group-scheduling problems in electronics manufacturing. J Sched 13(2):177–202 Huang X, Wang MZ, Wang JB (2011) Single-machine group scheduling with both learning effects and deteriorating jobs. Comput Ind Eng 60(4):750–754 Hyun CJ, Kim Y, Kim YK (1998) A genetic algorithm for multiple objective sequencing problems in mixed model assembly lines. Comput Oper Res 25(7):675–690 Janiak A, Kovalyov MY, Portmann MC (2005) Single machine group scheduling with resource dependent setup and processing times. Eur J Oper Res 162(1):112–121 Ji M, Zhang X, Tang X (2016) Group scheduling with group-dependent multiple due windows assignment. Int J Prod Res 54(4):1244–1256 Karaboga D (2005) An idea based on honey bee swarm for numerical optimization. Technical Report-TR06, (Oct 2005) Computer Engineering Department Erciyes University Turkey Karimi N, Zandieh M, Najafi AA (2011) Group scheduling in flexible flow shops: a hybridised approach of imperialist competitive algorithm and electromagnetic-like mechanism. Int J Prod Res 49(16):4965–4977 Kennedy J, Eberhart R (1995) Particle swarm optimization. In: Proceedings of the IEEE international conference on neural networks, 1995, vol 4. IEEE, pp 1942–1948 Keshavarz T, Salmasi N (2014) Efficient upper and lower bounding methods for flowshop sequence-dependent group scheduling problems. Eur J Ind Eng 8(3):366–387 Keshavarz T, Salmasi N, Varmazyar M (2015) Minimizing total completion time in the flexible flowshop sequence-dependent group scheduling problem. Ann Oper Res 226(1):351–377 Khamseh A, Jolai F, Babaei M (2015) Integrating sequence-dependent group scheduling problem and preventive maintenance in flexible flow shops. Int J Adv Manuf Technol 77(1–4):173–185 Koulamas C, Kyparisis GJ (2008) Single-machine scheduling problems with past-sequence-dependent setup times. Eur J Oper Res 187(3):1045–1049 Kuo WH (2012) Single-machine group scheduling with time-dependent learning effect and position-based setup time learning effect. Ann Oper Res 196(1):349–359 Kuo WH, Yang DL (2006) Single-machine group scheduling with a time-dependent learning effect. Comput Oper Res 33(8):2099–2112 Li JQ, Pan QK, Gao KZ (2011) Pareto-based discrete artificial bee colony algorithm for multi-objective flexible job shop scheduling problems. Int J Adv Manuf Technol 55(9):1159–1169 Li ZT, Chen QX, Mao N (2013) A heuristic algorithm for two-stage flexible flow shop scheduling with head group constraint. Int J Prod Res 51(3):751–771 Logendran R, Carson S, Hanson E (2005) Group scheduling in flexible flow shops. Int J Prod Econ 96(2):143–155 Low C, Lin WY (2012) Single machine group scheduling with learning effects and past-sequence-dependent setup times. Int J Syst Sci 43(1):1–8 Mitrofanov SP (1966) Science principles of group technology. National Lending Library of Science and Technology, Boston Spa Montgomery DC (2000) Design and analysis of experiments, 5th edn. Wiley, New York Neufeld JS, Gupta JND, Buscher U (2015) Minimising makespan in flowshop group scheduling with sequence-dependent family set-up times using inserted idle times. 
Int J Prod Res 53(6):1791–1806 Omkar SN, Senthilnath J, Khandelwal R, Naik GN, Gopalakrishnan S (2011) Artificial Bee Colony (ABC) for multi-objective design optimization of composite structures. Appl Soft Comput 11(1):489–499 Opitz H (1970) A classification system to describe workpieces: Parts I and II. Pergamon, Oxford Pan QK, Tasgetiren MF, Suganthan PN, Chua TJ (2011) A discrete artificial bee colony algorithm for the lot-streaming flow shop scheduling problem. Inf Sci 181(12):2455–2468 Sabouni MY, Logendran R (2013) A single machine carryover sequence-dependent group scheduling in PCB manufacturing. Comput Oper Res 40(1):236–247 Saif U, Guan Z, Liu W, Zhang C, Wang B (2014) Pareto based artificial bee colony algorithm for multi objective single model assembly line balancing with uncertain task times. Comput Ind Eng 76(C):1–15 Salmasi N, Logendran R (2008) A heuristic approach for multi-stage sequence-dependent group scheduling problems. J Ind Eng Int 4(4):48–58 Salmasi N, Logendran R, Skandari MR (2011) Makespan minimization of a flowshop sequence-dependent group scheduling problem. Int J Adv Manuf Technol 56(5–8):699–710 Schaller J (2001) A new lower bound for the flow shop group scheduling problem. Comput Ind Eng 41(2):151–161 Solimanpur M, Elmi A (2011) A tabu search approach for group scheduling in buffer-constrained flow shop cells. Int J Comput Integr Manuf 24(3):257–268 Tasgetiren MF, Pan QK, Suganthan PN (2011) A discrete artificial bee colony algorithm for the total flowtime minimization in permutation flow shops. Inf Sci 181(16):3459–3475 Wang JB (2008) Single-machine scheduling with past-sequence-dependent setup times and time-dependent learning effect. Comput Ind Eng 55(3):584–591 Wang JB, Ng CT, Cheng TCE, Liu LL (2008) Single-machine scheduling with a time-dependent learning effect. Int J Prod Econ 111(2):802–811 Webster S, Baker KR (1995) Scheduling groups of jobs on a single machine. Oper Res 43(4):692–703 Wu CC, Shiau YR, Lee WC (2008) Single-machine group scheduling problems with deterioration consideration. Comput Oper Res 35(5):1652–1659 Yang SJ (2011) Group scheduling problems with simultaneous considerations of learning and deterioration effects on a single-machine. Appl Math Model 35(8):4008–4016 Yang SJ, Yang DL (2010) Single-machine group scheduling problems under the effects of deterioration and learning. Comput Ind Eng 58(4):754–758 Yin Y, Xu D, Sun K, Li H (2009) Some scheduling problems with general position-dependent and time-dependent learning effects. Inf Sci 179(14):2416–2425 Zandieh M, Karimi N (2011) An adaptive multi-population genetic algorithm to solve the multi-objective group scheduling problem in hybrid flexible flowshop with sequence-dependent setup times. J Intell Manuf 22(6):979–989 Zhang R, Song S, Wu C (2013) A hybrid artificial bee colony algorithm for the job shop scheduling problem. Int J Prod Econ 141(1):167–178 Zhu Z, Sun L, Chu F, Liu M (2011) Single-machine group scheduling with resource allocation and learning effect. Comput Ind Eng 60(1):148–157 Zitzler E, Laumanns M, Thiele L (2001) SPEA2: improving the performance of the strength Pareto evolutionary algorithm. TIK-Report 103, May 2001 ZG leaded the research group. LY carried out the research work and drafted the manuscript. SU proposed a novel algorithm named HPABC and helped to implement all the experiments and analyze the experimental results in detail. HW helped to do some work of data processing. 
ZG and FZ helped to make some revisions of the manuscript and gave many valuable suggestions. All authors read and approved the final manuscript. This work has been supported by MOST (Ministry of Science and Technology of China), the Funds for International Cooperation and Exchange of the National Natural Science Foundation of China (No. 51561125002), the National Natural Science Foundation of China (No. 51275190, 51575211), and the Fundamental Research Funds for the Central Universities (HUST: 2014TS038). State Key Lab of Digital Manufacturing Equipment and Technology, HUST-SANY Joint Lab of Advanced Manufacturing, Huazhong University of Science and Technology, Wuhan, 430074, People's Republic of China: Lei Yue, Zailin Guan, Ullah Saif, Fei Zhang & Hao Wang. Department of Industrial Engineering, University of Engineering and Technology, Taxila, Pakistan: Ullah Saif. Correspondence to Ullah Saif. Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. Keywords: Group scheduling, Multi-objectives, Sequence dependent setup, Learning effect, Taguchi method
Radiative Transfer of Polarized X-rays: Magnetized Thomson Scattering in Neutron Stars BARCHAS-DOCUMENT-2017.pdf Barchas, Joseph Baring, Matthew G This thesis is a focused study of the polarization characteristics of radiative transfer in a strong magnetic field. The main process examined here is magnetized Compton scattering in a non-relativistic regime (i.e.\ magnetized Thomson scattering), and we focus on applying this study to predict polarization properties of the X-ray emission from magnetars. Magnetars are a highly magnetic sub-class of neutron stars, characterized by their extremely high surface magnetic fields, comparable to or exceeding the quantum critical field ($B_{\rm cr}\simeq4.41\times10^{13}$ Gauss) at which an electron's cyclotron energy and rest mass energy are equal. There are 29 known/candidate magnetars at this time, and they commonly exhibit persistent quasi-thermal surface emission in soft X-rays with flat tails extending into the hard X-rays up to around 150 keV, as well as transient bursting activity in hard X-rays attributed to magnetospheric flares. Magnetized Thomson scattering refers to electron-photon scattering in a background magnetic field. The field introduces anisotropy to the problem, giving it a more complicated angular dependence. It also produces a strong frequency dependence to the cross section: it is resonant at the cyclotron frequency $\omega_B=eB/mc$. Additionally, electron motion perpendicular to the field becomes increasingly suppressed at higher field strengths, leading to a reduction in the cross section for certain incoming photon angles when $\omega\ll\omega_B$. There are complicated polarization characteristics for the process as well, with the differential cross section depending on the initial and final polarization state of the photon. An important distinction occurs between photons that have a component of linear polarization parallel to the field and those that are fully orthogonal to it. We explore this process in detail using a Monte Carlo simulation model, treating the transfer primarily in slab geometries, a common simplification. This allows a direct comparison with previous work and is an important step towards achieving more complicated geometries and scattering regimes. We fully map the frequency and angular dependence of this process in the optically thick regime, capturing both resonant and non-resonant properties of the scattering. We present results for a model of magnetar persistent surface emission, as well as a simple magnetar flare model. In both cases the transfer is purely due to magnetic Thomson scattering, and we superimpose the emission from regions of different temperature, density, and magnetic field. For magnetar surface emission, we see a phase-dependent linear polarization, forming either a single- or double-peaked pattern with a maximum level of roughly $\sim25\%$, depending on the angle between the observer and the spin axis. This could have important implications for polarimetric determination of the effect of vacuum birefringence as polarized X-rays transfer through the magnetosphere to infinity. For the flare model we see much stronger polarization signals as the emission is coming from more localized regions, and it is highly dependent on viewing geometry and frequency. A secondary process is also examined due to its importance in magnetized plasmas: the so-called generalized Faraday effect. 
This is analogous to the ordinary Faraday effect, where the phase lag caused by birefringence of circular eigenmodes of electromagnetic waves produces a constant rotation of the plane of linear polarization for a propagating wave. The generalized effect occurs when the eigenmodes are no longer circular, and it produces a very similar rotation when viewed in terms of the Poincar\'e sphere. When this effect is assumed to be prolific, the transfer can be reformulated in terms of the normal modes in what is called the normal mode description. We explore the parameter space in which this description is valid, and develop a method to handle transfer in certain regimes where it is invalid. This prepares the way for incorporating such nuances in future developments of the magnetized radiative transfer problem. Radiative Transfer; Neutron Stars Barchas, Joseph. "Radiative Transfer of Polarized X-rays: Magnetized Thomson Scattering in Neutron Stars." (2017) Diss., Rice University. https://hdl.handle.net/1911/99677.
Explain the formation of replicators to a layman I am reading The Selfish Gene by Richard Dawkins. I am on chapter two. He speaks of the observation of the formation of amino-acids when you simulate environmental conditions of primordial earth. UV light + water + carbon dioxide + methane + ammonia + a couple of weeks time = amino-acids I understand that amino acids are an organic compound that serve a lot of important functions in our bodies. Presumably, they are important to all life, because from the context I can derive, they seem to be somewhat of a precursor to life itself. Dawkins then says: Processes analogous to these must have given rise to the 'primeval soup' which biologists and chemists believe constituted the seas some three to four thousand million years ago. He goes on and eventually states: At some point a particularly remarkable molecule was formed by accident. We will call it the Replicator. .... Actually a molecule that makes copies of itself is not as difficult to imagine as it seems at first, and it only had to arise once. Think of the replicator as a mould or template. Imagine it as a large molecule consisting of a complex chain of various sorts of building block molecules. The small building blocks were abundantly available in the soup surrounding the replicator. Now suppose that each building block has an affinity for its own kind. Then whenever a building block from out in the soup lands up next to a part of the replicator for which it has an affinity, it will tend to stick there. The building blocks that attach themselves in this way will automatically be arranged in a sequence that mimics that of the replicator itself. It is easy then to think of them joining up to form a stable chain just as in the formation of the original replicator. This process could continue as a progressive stacking up, layer upon layer. This is how crystals are formed. On the other hand, the two chains might split apart, in which case we have two replicators, each of which can go on to make further copies. A more complex possibility is that each building block has affinity not for its own kind, but reciprocally for one particular other kind. Then the replicator would act as a template not for an identical copy, but for a kind of 'negative', which would in its turn re-make an exact copy of the original positive. For our purposes it does not matter whether the original replication process was positive-negative or positive-positive, though it is worth remarking that the modem equivalents of the first replicator, the DNA molecules, use positive- negative replication. What does matter is that suddenly a new kind of 'stability' came into the world. Previously it is probable that no particular kind of complex molecule was very abundant in the soup, because each was dependent on building blocks happening to fall by luck into a particular stable configuration. As soon as the replicator was born it must have spread its copies rapidly throughout the seas, until the smaller building block molecules became a scarce resource, and other larger molecules were formed more and more rarely. As someone with a limited understanding of Biology and Chemistry, I have a couple of questions. We have observed that the chemical conditions outlined at the top of this question seem to yield more complex organic compounds with time. Do we know (or have theories) as to why this is, or is it merely something we have observed and re-created? 
The theory about replicators suggests that they are like a chain of building-block molecules each with an affinity for either its own kind, or some other kind (positive-positive vs positive-negative). -- Whichever was the case, you could end up with a big chain of molecules that had a fractal nature, because these building blocks were freely available in the soup surrounding the replicator. -- If the chain split apart, you would get two identical copies and each part could go off and replicate further. Now, I suppose this question mirrors the first, but why is it that it is so easy to imagine the affinity relationship? What makes molecules want to stick together with other, specific molecules? Furthermore, what makes chains of molecules split apart (replicate)? Can somebody elaborate further on these points, and on what Dawkins is saying here in general, so as to pander to the cravings of someone who is not particularly versed in the parlance? organic-chemistry molecules Luke $\begingroup$ Since you're reading about the concept of molecular replication, you might be interested in a very relevant and more general chemical term: autocatalysis. A substance that, once formed, favours the production of more of itself is not unique to life or biochemical compounds. $\endgroup$ – Nicolau Saker Neto Apr 7 '15 at 22:04 We have observed that the chemical conditions outlined at the top of this question seem to yield more complex organic compounds with time. Do we know (or have theories) as to why this is, or is it merely something we have observed and re-created? A recurring theme in chemistry, and in all of your questions, is "lower energy usually equates with more stable structures." Oversimplifying just a bit (see the note on time to reach equilibrium below), we can say that all chemical transformations are also equilibria. That means that our starting materials are converted to products and our products are converted back to starting materials - reactions are continually happening in both directions. Once at equilibrium, the rate of the forward reaction matches that of the back reaction and the concentrations of starting materials and products will no longer change. At equilibrium, the equilibrium constant ($K_\mathrm{eq}$) describing the relative concentrations of our starting materials and products has the following form $$K_\mathrm{eq}=\frac{(\text{concentration of products})}{(\text{concentration of starting materials})}$$ and the equilibrium constant is related to the free energy difference between our starting materials and products by $$\Delta G = -RT\ln K_\mathrm{eq}$$ These equations tell us that the more stable a structure, the more it will predominate at equilibrium. If we take carbon, oxygen, hydrogen and nitrogen, an equilibrium will be set up between these elements and the simple amino acid glycine according to the following equation $$\ce{4C + 5H2 + N2 + 2O2 -> 2C2H5NO2~(glycine) + heat}$$ Over 500 kJ of heat is produced per mole of glycine formed. This means that glycine is over 500 kJ/mol more stable than the elements used to create it! Bottom line: If a complex molecule is more stable than a simpler molecule, then over time the higher energy "simple" molecule will transform into the more stable (lower energy) "complex" molecule. Just a word about time. It may take a long or a short time for an equilibrium to be attained on its own. Suffice it to say that with catalysts and/or energy, equilibria can be attained quickly. ...why is it that it is so easy to imagine the affinity relationship?
What makes molecules want to stick together with other, specific molecules? Again the answer has to do with lowering the energy of the system and making things more stable. Some molecules have what is called an "active site". It may be an odd-shaped kink or pocket in the molecule. Let's think of this odd-shaped pocket as a lock, only molecules shaped like the key will fit. Further, there are often other functional groups (perhaps hydroxyl groups) around the active site that will bind (hydrogen bonds perhaps) with the key and stabilize it once it is inserted into the lock. Stabilization means lowering the energy of the system. Only the key fits into the lock and once inserted it is stabilized - energy is lowered. By now you've probably guessed that the splitting apart step occurs because it lowers the energy of the system. Perhaps once the molecule reaches a certain size it folds differently - a new lower energy shape is now possible. A protein that was bound to the molecule and stabilizing it can no longer bind to it due to this new folding pattern. The protein is displaced, it is replaced by a different protein that stabilizes the molecule's new folded structure but predisposes it to splitting apart to a smaller size. ronron $\begingroup$ At first, I was disappointed to see an answer like "lower energy usually equates with more stable structures." because I immediately want to know WHY this is, but the answer actually made something click in my head. That is, If I want to know that answer, I need to go much deeper, as I believe its more related to questions about the fundamental forces that cause the atoms themselves to form. And like Dawkins says, No matter what level you're on, It all comes down to "survival of the stable" $\endgroup$ – Luke Apr 7 '15 at 23:45 $\begingroup$ This answer is just plain wrong. Biological life happens far from thermodynamical equillibrium, where entropy is maximised. Glycine is more stable than the elements, but far less stable than CO2, water and nitrogen. $\endgroup$ – Karl Dec 22 '18 at 23:00 The situation leading to the formation of amino acids was simulated by Stanley Miller and Harold Urey in their famous experiment and published as A Production of Amino Acids Under Possible Primitive Earth Conditions in Science, 1953, 117, 528-529 (DOI). Imagine that this formation of amino acids might have happened not just in the sea, but also in shallow pits at the shore that once in a while fell dry and were heated by sunlight. Under these conditions, particularly in the presence of insoluble salts serving as catalysts, amino acids may have undergone condensation reactions, i.e., splitting off water and forming peptide bonds. The resulting molecules are oligopeptides, small brothers of proteines. Imagine that these pools were filled with water again and that the process repeats. Between oligopeptides of fitting amino acid sequences, hydrogen bonds can form, resulting in $\beta$-sheets or similar aggregates. Upon repetition of the condensation step, free amino acids bound to one chain by hydrogen bonds might preferably undergo condensation in the other. This may be the molecular background of the replicator concept described by Richard Dawkins. Klaus-Dieter WarzechaKlaus-Dieter Warzecha 'UV light + water + carbon dioxide + methane + ammonia + a couple of weeks time = amino-acids' The answers by ron and Klaus are very good, but I'd like to add one additional point which I think is very important. The key "reactant" in the above equation is, to my mind "UV light". 
UV light comes from the sun. The sun is very hot. The surface of the sun is at ~5800 kelvin, or about 5500 °C, and the light it emits is effectively the equilibrium mixture of photons for that temperature (this equilibrium for photons is called black body radiation). The high temperature of the sun means that the equilibrium mixture of light frequencies it emits has much more UV light than "equilibrium" light at Earth's temperatures. You can view the formation of complex molecules from simple molecules via UV light as a partial, frustrated attempt for the simple molecules to equilibrate with the blackbody radiation of the sun, i.e., for the simple molecules to equilibrate at temperatures of 5800 K. Those high temperatures favor high entropies or "disorders" among the molecules. You can view a "soup" that contains small amounts of many different amino acids and other molecules (in addition to large amounts of unreacted simple molecules) as having more disorder or entropy than a mix where the simple molecules are present in 100% purity. An analogy is the frying of an egg. An egg in the refrigerator is ordered and "simple". When you try to equilibrate the egg with blackbody radiation at a higher temperature (i.e. put it on the stove), the simple mix changes considerably and the result is a complex, solid-like gel of egg white and a pasty solid goo of yolk. Note that your question mainly seems to be about formation of the primordial soup, and not actually formation of the replicators themselves. Curt F.
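As a rough numerical footnote to the first answer's relation $\Delta G = -RT\ln K_\mathrm{eq}$, the sketch below plugs in the ~500 kJ/mol figure quoted there, treating it (purely as an assumption for illustration) as if it were the free-energy change of the reaction, to show how overwhelmingly product-favoured such an equilibrium would be.

```python
import math

# Illustration of Delta G = -R*T*ln(K_eq) from the answer above.
# Assumption for this sketch: the quoted ~500 kJ/mol of heat is used as a
# stand-in for the free-energy change of the reaction.
R = 8.314          # gas constant, J/(mol*K)
T = 298.15         # room temperature, K
delta_G = -500e3   # J/mol (approximate)

ln_K = -delta_G / (R * T)
print(f"ln(K_eq) ~ {ln_K:.1f}, K_eq ~ {math.exp(ln_K):.2e}")
# ln(K_eq) ~ 201.7, i.e. K_eq ~ 10^87: essentially all material ends up as product.
```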
CommonCrawl
Naive Bayes Classifier Could you explain the Naive Bayes classifier with examples? What is Bayes' Theorem and how is it relevant to the NB classifier? What are the types of the NB classifier? How would you use Naive Bayes classifier for categorical features? What is Laplace smoothing in the context of the NB classifier? Can we use the NB classifier when features are not independent? What are some applications of the NB classifier? How is the NB classifier related to logistic regression? What are some disadvantages and advantages of the NB classifier? algorithms machine learning statistics natural language processing classification probabilistic Statistical Classification Supervised vs Unsupervised Learning Probability for Data Scientists Artificial Neural Network 3601,3599 14,3601 Improve formatting of refs. Two questions answered. Formula formatting improved. Improvements to milestones. Adding three questions, no answers yet. Improved wording of questions. Work in progress. Added nlp tag. Updated tags and See Also. Work in progress. All Versions 2022-03-31 05:01:43 by arvindpdmn 2022-03-31 03:44:45 by arvindpdmn 2022-03-30 03:57:39 by arvindpdmn 2022-03-29 16:14:34 by arvindpdmn 2022-03-10 07:24:59 by arvindpdmn 2022-03-03 05:49:13 by arvindpdmn 2022-03-03 04:16:57 by Bhavani vangipurapu 2022-02-23 10:39:24 by Bhavani vangipurapu 2022-02-22 11:42:56 by Bhavani vangipurapu 2022-02-10 08:10:00 by Bhavani vangipurapu 2022-02-07 12:24:28 by Bhavani vangipurapu 2022-02-07 12:12:22 by Bhavani vangipurapu 2022-01-23 11:27:24 by Bhavani vangipurapu 2022-01-21 06:08:07 by Bhavani vangipurapu All Sections Summary Discussion Sample Code References Milestones Tags See Also Further Reading Improvements for future 1. New questions - How does NB classifier model discrete versus continuous features? - Could you compare the NB classifier against alternative techniques? 2. Improve some of the current answers. 3. Better refs: some of the current ones are sub-standard. 1. Identify good refs via Google Scholar the way you did it for Random Forest article. 2. More useful images are needed. Types image is not very useful: it doesn't add any value to the text. 3. Many mistakes or omissions in ref formatting. This is very surprising since this is not your first article. Please correct. 4. Milestones are not good enough. The cited source has no mention of 1960. Older milestones are more relevant to Bayes Theorem. More research is needed. Study other ML articles on Devopedia to get an idea how milestones have been written. 5. Ultimately, we want this article to be better than articles published on other sites. One important question would be how to improve the classifier, that is, share useful tips for practitioners. 6. Another thing to address is how to modify the model in case variables are dependent. I am sure there are techniques that address this. 7. There's no mention of neural networks or SVM. How does NB classifier compare against these in terms of results? By Bhavani vangipurapu In the real-time applications, naïve bayes produces better results that is why they choose this algorithm than other ml algorithms. This article is completed with all of the suggestions. By cnkprasad 1. Question "Where are Naive Bayes classifiers used for real-time applications?": Any ML classifier is applicable to all real-time applications. Don't understand why these use cases are only or more relevant to NB. Either mention how NB particularly helps for these use cases or remove this question. 2. 
Change the title of the algorithm from "Naive Bayesian" to "Naive Bayes". Overall, it looks better. A couple of points in Summary: 1. Statement "The accuracy of naive Bayes is not directly.. between the features" is coming all of a sudden in summary, without much background. You may entirely remove this. 2. It's a good idea to mention this algorithm is applicable for Classification tasks only, unlike many other ML algorithms which can typically perform Regression as well as Classification tasks. Naive Bayes is a probabilistic classifier that returns the probability of a test point belonging to a class rather than the label of the test point. It's among the most basic Bayesian network models, but when combined with kernel density estimation, it may attain greater levels of accuracy. . This algorithm is applicable for Classification tasks only, unlike many other ML algorithms which can typically perform Regression as well as Classification tasks. Naive Bayes algorithm is considered naive because the assumptions the algorithm makes are virtually impossible to find in real-life data. It uses conditional probability to calculate a product of individual probabilities of components. This means that the algorithm assumes the presence or absence of a specific feature of a class which is not related to the presence or absence of any other feature (absolute independence of features), given the class variable. Two features with histograms of antenna length. Source: Adapted from Keogh 2011, slide 3. Consider two groups of insects, grasshoppers and katydids. By studying the antenna lengths from many insect samples, we can discern some patterns and computed probabilities. For examples, given an antenna length of 3 cm, the insect is more likely to be a grasshopper than a katydid. Naive Bayes classifier is a technique to perform such a classification. Antenna length is a feature that's used to classify an insect into one of two classes. Suppose the antenna length is 5 cm. Probabilities computed from observed samples inform that both classes are equally likely. In this case, classification can be improved by considering more features such as abdomen length. NB classifier assumes that features are independent of one another. Consider the statement "Officer Drew arrested me." Is Drew male or female? We can answer this by gathering data on the officer: height, eye colour and long/short hair. Then we lookup a police database of all officers and apply NB classifier. This problem uses three independent features and two classes (male or female). Bayes theorem. Source: Bazett 2017. Bayes theorem (aka Bayes rule) works on conditional probability. In conditional probability, the occurrence of a particular outcome is conditioned on the outcome of another event occurring. Given two events A and B, Bayes theorem states that, $$P(A|B) = \frac{P(A⋂B)}{P(B)} = \frac{P(A) \cdot P(B|A)}{P(B)}$$ where \(P(A)\) and \(P(B)\), called marginal probability or prior probability, are the probabilities of events A and B event occurring; where \(P(A|B)\), called posterior probability, is the probability of event A occurring given that event B has occurred; where \(P(B|A)\), called likelihood probability, is the probability of event B occurring given that event A has occurred; \(P(A⋂B)\) is the joint probability of both events occurring. \(P(A|B)\) and \(P(B|A)\) are also called conditional probabilities. Suppose you have drawn a red card from a deck of playing cards. What's the probability that it's a four? 
We apply conditional probability. There are 26 possible red cards and two of them are fours. Thus, \(P(four|red)=2/26=1/13\). Bayes Theorem allows us to reformulate the problem as follows: $$P(four|red) = P(four) \cdot P(red|four) / P(red)\\= (4/52 \cdot 2/4) / (26/52)\\= 1/13$$ Types of naive bayes classifier. Source: Rastogi 2020. scikit-learn implements three naive Bayes variants based on the same number of different probabilistic distributions: Bernoulli, multinomial, and Gaussian. Bernoulli Naive Bayes The predictors in this case are boolean variables, so the only options are 'True' and 'False' (or equivalently 'Yes' and 'No'). We use it when the data has a multivariate Bernoulli distribution. Multinomial Naive Bayes Feature vectors represent the frequencies with which particular events were generated by a multinomial distribution. This is the event model most commonly used for document classification. For example, if you want to know whether a document is in the 'Legal' or 'Human Resources' category, you'd use this technique to figure it out. It uses the frequencies of the words present as features. Gaussian Naive Bayes It is used for numerical / continuous features. The distribution of continuous values is assumed to be Gaussian, and therefore the likelihood probabilities are computed from a Gaussian distribution. Categorical Naive Bayes For a discrete variable with more than two possible outcomes, such as the roll of a die, the categorical distribution is an extension of the Bernoulli distribution. In contrast to the multinomial distribution, the categorical distribution gives the probability of the different outcomes for a single draw rather than for multiple draws. The features should be encoded using label encoding, with each category assigned a unique number. The estimate is given by: \(P(x_i = t \mid y = c; \alpha) = \frac{N_{tic}+\alpha}{N_c+\alpha n_i}\), where \(N_{tic}\) = number of times category \(t\) appears in the samples \(x_i\) which belong to class \(c\), \(N_c\) = total number of samples with class \(c\), \(\alpha\) = Laplace smoothing parameter used to handle the zero frequency problem, and \(n_i\) = number of available categories of the feature. Laplace smoothing is a smoothing technique used in Naive Bayes to solve the problem of zero probability. Consider text categorization, where the aim is to determine if a review is positive or negative. Based on the training data, we create a likelihood table. We use the likelihood table values when querying a review, but what if a word in a review was not present in the training dataset? For example, a test query has the form: query review \(= x_1 x_2 x'\). Suppose a test sample has three words, where \(x_1\) and \(x_2\) are present in the training data but \(x'\) is not. This is where Laplace smoothing comes into the picture: \(P(x' \mid positive) = \frac{(\text{number of reviews with } x' \text{ and target outcome}=positive) + \alpha}{N + \alpha K}\), where \(K\) denotes the number of dimensions (features) in the data, \(N\) is the number of reviews with the target outcome = positive, and \(\alpha\) represents the smoothing parameter. The process of evaluating features depending on how successful they are in predicting the target variable is known as feature importance. The naive Bayes classifiers do not provide an intrinsic technique for determining the relevance of features.
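Before turning to feature importance, here is a minimal sketch (not taken from the article; the toy data below is invented purely for illustration) showing two of the scikit-learn variants just described and where the Laplace smoothing parameter alpha enters.

```python
import numpy as np
from sklearn.naive_bayes import MultinomialNB, CategoricalNB

# Toy word-count matrix: 4 "reviews" x 3 vocabulary words (made-up data).
X_counts = np.array([[3, 0, 1],
                     [2, 1, 0],
                     [0, 2, 3],
                     [0, 3, 1]])
y = np.array([1, 1, 0, 0])          # 1 = positive review, 0 = negative review

# Multinomial NB with Laplace smoothing: alpha=1 avoids zero probabilities
# for words that never co-occur with a given class in the training data.
mnb = MultinomialNB(alpha=1.0).fit(X_counts, y)
print(mnb.predict_proba([[1, 0, 2]]))

# Categorical NB for label-encoded categorical features; alpha plays the
# same smoothing role in the (N_tic + alpha) / (N_c + alpha * n_i) estimate above.
X_cat = np.array([[0, 2], [1, 2], [1, 0], [0, 1]])
cnb = CategoricalNB(alpha=1.0).fit(X_cat, y)
print(cnb.predict_proba([[1, 1]]))
```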
Naive Bayes algorithms forecast the class with the highest probability by computing the conditional and unconditional probabilities associated with the features.As a result, no coefficients have been generated or connected with the characteristics used to train the model.However, there are ways for analysing the model after it has been trained that can be used post-hoc. One of these strategies is the Permutation Importance, which has been neatly implemented in scikit-learn. When the data is tabular, permutation feature importance is a model inspection technique that can be utilised for any fitted estimator. For a given dataset, the permutation importance function computes the feature importance of estimators. The n_ repeats option specifies how many times a feature is randomly shuffled before returning a sample of feature importances. Text classification/ Spam Filtering/ Sentiment Analysis: Naive Bayes classifiers, which are commonly employed in text classification (owing to better results in multi-class problems and the independence criterion), have a greater success rate than other techniques. As a result, it is commonly utilised in spam filtering (determining spam e-mail) and sentiment analysis (in social media analysis, to identify positive and negative customer sentiments) Recommendation System: The Naive Bayes Classifier and Collaborative Filtering work together to create a Recommendation System that employs machine learning and data mining techniques to filter unseen data and forecast whether a user would enjoy a given resource or not. Multi-class Prediction: This algorithm is also well-known for its multi-class prediction capability. We can anticipate the likelihood of various target variable classes here. Real-time Prediction: Naive Bayes is a quick learning classifier that is eager to learn. As a result, it might be utilised to make real-time forecasts. Given input features \(X\), both NB classifier and logistic regression predict an output class, that is, output \(Y\) is categorical. Logistic regression directly estimates \(P(Y|X)\) whereas NB classifier applies the Bayes theorem and estimates \(P(Y)\) and \(P(X|Y)\). As such, we call logistic regression a discriminative classifier and NB a generative classifier. It's been observed that on small training datasets, NB classifier does better than logistic regression. If more training samples are available, logistic regression does better. While logistic regression has a lower asymptotic error, NB classifier may converge faster to its higher asymptotic error. It's known that the Gaussian Naive Bayes (GNB) classifier is closely related to logistic regression. Parameters of one model can be expressed in terms of the other. Moreover, asymptotically both converge to the same classifier when GNB assumptions hold. When the assumptions don't hold, such as dependence among features, logistic regression does better because it adjusts its parameters to give a better fit. Strength and weakness of naive bayes. Source: MachineLearningInterview 2021. Advantages: Naive bayes is Simple to put into action. The conditional probabilities are simple to compute. The probabilities can be determined immediately, there is no need for iterations. As a result, this strategy is useful in situations when training speed is critical. If the conditional Independence assumption is true, the consequences could be spectacular. This algorithm predicts classes faster than many other classification algorithms. 
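Before the disadvantages, a brief aside on the permutation-importance idea described earlier in this section: the sketch below applies scikit-learn's permutation_importance to a fitted Gaussian naive Bayes model. The synthetic dataset, the choice of GaussianNB, and the parameter values are illustrative assumptions, not taken from the article.

```python
from sklearn.datasets import make_classification
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

# Synthetic data with 5 features, only 3 of which carry class information.
X, y = make_classification(n_samples=500, n_features=5, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GaussianNB().fit(X_train, y_train)

# Each feature is shuffled n_repeats times; the drop in score estimates
# how much the fitted model relies on that feature (post-hoc, model-agnostic).
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i, (mean, std) in enumerate(zip(result.importances_mean,
                                    result.importances_std)):
    print(f"feature {i}: {mean:.3f} +/- {std:.3f}")
```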
Disadvantages:The premise of independent predictors is the main imitation of Naive Bayes. Naive Bayes implicitly assumes that all attributes are independent of one another. In practise, it is very hard to obtain a set of predictors that are totally independent. If a categorical variable in the test data set has a category that was not observed in the training data set, the model will assign a 0 (zero) probability and will be unable to predict. This is commonly referred to as Zero Frequency. you can utilise the smoothing approach to remedy this. Laplace estimation is one of the most basic smoothing techniques. The Royal Society publishes a paper on probability by Thomas Bayes after his death in 1761. It's titled Essay Towards Solving a Problem in the Doctrine of Chances and details what would later become famous as the Bayes inference. The basic idea is to revise predictions based on new evidence. Decades later (early 19th century), Pierre-Simon Laplace develops and popularizes Bayesian probability. Bayesian approach is applied during the Second World War. It sees a revival in the years after the war. Earlier, Bayesian approach had been criticized. The frequentist approach developed by R.A. Fisher had been favoured since the mid-1920s. Maron and Kuhns apply Bayes' Theorem to the task of Information Retrieval (IR). The probability of retrieving a relevant document given a query can be computed from the prior probability of document relevance and conditional probability of user making a particular query given the relevant document. Over the next forty years, Naive Bayes is the main technique in IR until machine learning techniques become popular. Probability-based rules (left) and finite data set accuracy (right). Source: Adapted from Hughes 1968. Hughes considers a two-class pattern recognition problem. The model considers \(n\) discrete values that can be measured and \(m\) sample patterns. He shows that for a given \(m\), there's an optimal \(n\) that minimizes the pattern recognition error. This is shown in the figure (right) for the case of equal class probabilities. The figure (left) also shows an example of \(n=5\) in which values 1-3 imply class \(c_1\) and values 4-5 imply class \(c_2\). Duda and Hart use the Naive Bayes classifier in pattern recognition. Langley et al. present an analysis of Bayesian classifiers considering noisy classes and noise-free attributes. They find that the Naive Bayes classifier gives comparable results to the C4 algorithm that induces decision trees. They conclude that despite its simplicity, the Naive Bayes classifier deserves more research attention. Domingos and Pazzani show that even when attributes are not independent, the Bayesian classifier does well. It can be optimal under zero-one loss (misclassification rate). It's optimal under squared error loss only when the independence assumption holds. Kasif et al. propose a probabilistic framework for memory-based reasoning (MBR). Such a framework can be used for classification tasks. They note that a probabilistic graphical model is really another way of looking at the Naive Bayes classifier. Bazett, Trefor. 2017. "Bayes' Theorem - The Simplest Case." Trefor Bazett, on YouTube, November 19. Accessed 2022-02-23. Berrar, Daniel. 2018. "Bayes' Theorem and Naive Bayes Classifier." Encyclopedia of Bioinformatics and Computational Biology, vol. 1, Elsevier, pp. 403-412. Accessed 2022-02-07. Chauhan, Nagesh Singh. 2020. "Introduction to the Naïve Bayes Algorithm." KDnuggets, June 8. Accessed 2022-01-22. 
Domingos, P., and M. Pazzani. 1997. "On the Optimality of the Simple Bayesian Classifier under Zero-One Loss." Machine Learning, vol. 29, pp. 103–130. doi: 10.1023/A:1007413511361. Accessed 2022-03-30. Encyclopaedia Britannica. 2022. "Thomas Bayes." Encyclopedia Britannica, January 1. Accessed 2022-03-29. Gandhi, Rohith. 2018. "Naive Bayes Classifier." Towards Data Science, on Medium, May 5. Accessed 2022-01-22. HolyPython. 2020. "Naive Bayes Classifier History." HolyPython, July 29. Accessed 2022-01-23. Hughes, G. 1968. "On the mean accuracy of statistical pattern recognizers." IEEE Transactions on Information Theory, vol. 14, no. 1, pp. 55-63, January. doi: 10.1109/TIT.1968.1054102. Accessed 2022-03-29. Jayaswal, Vaibhav. 2020. "Laplace smoothing in Naïve Bayes algorithm." Towards Data Science, on Medium, November 22. Accessed 2022-02-22. Kasif, Simon, Steven Salzberg, David Waltz, John Rachlin, and David W. Aha. 1998. "A probabilistic framework for memory-based reasoning." Artificial Intelligence, vol. 104, no. 1–2, pp. 287-311. Accessed 2022-03-30. Kaviani, Pouria and Sunita Dhotre. 2017. "Short Survey on Naive Bayes Algorithm." International Journal of Advance Engineering and Research Development, vol. 4, no. 11, pp. 607-611, November. Accessed 2022-02-23. Keogh, Eamonn. 2011. "Naïve Bayes Classifier." Computational Entomology, University of California, Riverside. Accessed 2022-02-07. Kumar, Naresh. 2019. "Advantages and Disadvantages of Naive Bayes in Machine Learning." The Professionals Point, March 2. Accessed 2022-02-23. Langley, Pat, Wayne Iba, and Kevin Thompson. 1992. "An analysis of Bayesian classifiers." Proceedings of the tenth national conference on Artificial intelligence (AAAI'92), AAAI Press, pp. 223–228. Accessed 2022-03-29. Lewis, David D. 1998. "Naive (Bayes) at Forty: The Independence Assumption in Information Retrieval." In: Nédellec, C., and Rouveirol, C. (eds), Machine Learning: ECML-98, ECML 1998, Lecture Notes in Computer Science, vol. 1398, Springer. doi: 10.1007/BFb0026666. Accessed 2022-03-29. MachineLearningInterview. 2021. "How does Naive Bayes Classifier Work? What are the pros and cons with Naive Bayes Classifier?" MachineLearningInterview, on YouTube, July 30. Accessed 2022-01-23. Maron, M. E., and J. L. Kuhns. 1960. "On relevance, probabilistic indexing, and information retrieval." Journal of the ACM, vol. 7, no. 3, pp. 216-244, July. doi: 10.1145/321033.321035. Accessed 2022-03-29. Mitchell, Tom M. 2000. "Generative and Discriminative Classifiers: Naive Bayes and Logistic Regression." Chapter 3 in: Machine Learning, Draft, October 1. Accessed 2022-03-31. Nelson, Daniel. 2020. "What is Bayes Theorem?" AI Masterclass, Unite.AI, August 23. Accessed 2022-01-21. Ng, Andrew and Michael Jordan. 2001. "On Discriminative vs. Generative Classifiers: A comparison of logistic regression and naive Bayes." In: T. Dietterich, S. Becker and Z. Ghahramani (eds.), Advances in Neural Information Processing Systems 14 (NIPS 2001). Accessed 2022-03-31. Rastogi, Rahul. 2020. "Naive Bayes & its Mathematical Implementation." On Medium, June 24. Accessed 2022-01-23. Reddy, Suman Kumar. 2020a. "Categorical Naive Bayes Classifier implementation in Python." Blog, iNeuron, October 31. Accessed 2022-01-23. Reddy, Suman Kumar. 2020b. "Feature Importance in Naive Bayes Classifiers." Blog, iNeuron, October 31. Accessed 2022-02-23. Rish, I. 2001. "An empirical study of the naive Bayes classifier." T.J. Watson Research Center, IBM. Accessed 2022-02-07. Santhosh, Gautham. 2020. 
"Understanding Naive Bayes in the real world." On Medium, February 07. Accessed 2022-01-23. Scikit-learn. 2001. "Permutation feature importance." Scikit-learn. Accessed 2022-02-23. Stecanella, Bruno. 2017. "A practical explanation of a Naive Bayes classifier." Blog, MonkeyLearn, May 07. Accessed 2022-01-23. Wikipedia. 2022. "Naive Bayes classifier." Wikipedia, March 4. Accessed 2022-01-21. Yang S. 2019. "An Introduction to Naïve Bayes Classifier." Towards Data Science, on Medium, September 9. Accessed 2022-02-22. lukeprog. 2011. "A History of Bayes' Theorem." LessWrong, August 29. Accessed 2022-03-29. scikit-learn. 2021. "1.9. Naive Bayes." scikit-learn v1.0.2, December. Accessed 2022-01-23. James H. Martin. 2021. "Naive Bayes and Sentiment Classification." web.stanford.edu, December 29. Accessed 2022-01-23. Srijith Rajeev. 2019. "Naive Bayes and Sentiment Classification." www.commonlounge.com, December 29. Accessed 2022-01-23. Vikramkumar. 2014. "Bayes and Naive Bayes Classifier." Arxiv.org, April 03. Accessed 2022-01-23. Bhavani vangipurapu Devopedia. 2022. "Naive Bayes Classifier." Version 14, March 31. Accessed 2022-10-09. https://devopedia.org/naive-bayes-classifier Readability score of this article is below 60 (47.3). Use shorter sentences. Use simpler words.
CommonCrawl
Artificial Intelligence Meta Artificial Intelligence Stack Exchange is a question and answer site for people interested in conceptual questions about life and challenges in a world where "cognitive" functions can be mimicked in purely digital environment. It only takes a minute to sign up. What is a time-step in a Markov Decision Process? The "discounted sum of future rewards" (or return) using discount factor $\gamma$ is $$\gamma^1 r_1 +\gamma^2 r_2 + \gamma^3 r_3 + \dots \tag{1}\label{1}$$ where $r_i$ is the reward received at the $i$th time-step. I am confused as to what constitutes a time-step. Say, I take an action now, so I will get a reward in 1 time-step. Then, I will take an action again in timestep 2 to get a second reward in time-step 3. But the formula \ref{1} suggests something else. How does one define a time-step? Can we take an action as well as receive a reward in a single step? Examples are most helpful. reinforcement-learning markov-decision-process time-step nbro Abhishek Bhatia Hello. This is an old question, but you should accept one of the answers below, if they answer your question. Take a look at ai.stackexchange.com/help/someone-answers. – nbro Jan 19, 2021 at 1:16 In a Markov Decision Process (MDP) model, we define a set of states ($S$), a set of actions ($A$), the rewards ($R$), and the transition probabilities $P(s' \mid s, a)$. The goal is to figure out the best action to take in each of the states, i.e. the policy $\pi$. To calculate the policy we make use of the Bellman equation: $$V_{i+1}(s)=R(s)+\gamma \max _{a \in A}\left(\sum_{s^{\prime} \in S} P\left(s^{\prime} \mid s, a\right) V_{i}\left(s^{\prime}\right)\right)$$ When starting to calculate the values we can simply start with: $$V_{1}(s)=R(s)$$ To improve this value, we should take into account the next action, which can be taken by the system and will result in a new reward: $$V_{2}(s)=R(s)+\gamma \max _{a \in A}\left(\sum_{s^{\prime} \in S} P\left(s^{\prime} \mid s, a\right) V_{1}\left(s^{\prime}\right)\right)$$ Here you take into account the reward of the current state $s$: $R(s)$, and the weighted sum of possible future rewards. We use $P(s' \mid s, a)$ to give the probability of reaching state $s'$ from $s$ with action $a$. $\gamma$ is a value between $0$ and $1$ and is called the discount factor because it reduces the importance of future rewards since these are uncertain. An often-used value is $\gamma = 0.95$. When using value iteration this process is continued until the value function has converged, which means that the value function does not change significantly when doing new iterations: $$\left\|V_{i+1}(s)-V_{i}(s)\right\|<\epsilon, \; \forall_{s \in S},$$ where $\epsilon$ is a really small value. Discounted sum of future rewards If you look at the Bellman equation and execute it iteratively you'll see: $$V(s)=R(s) + \gamma \max_{a \in A}\left(\sum_{s^{\prime} \in S} P\left(s^{\prime} \mid s, a\right)\left[R\left(s^{\prime}\right) + \gamma \max_{a \in A}\left(\sum_{s^{\prime \prime} \in S} P\left(s^{\prime \prime} \mid s^{\prime}, a\right)\left(R\left(s^{\prime \prime}\right) + \gamma \max_{a \in A}\left(\sum_{s^{\prime \prime \prime} \in S} P\left(s^{\prime \prime \prime} \mid s^{\prime \prime}, a\right) V\left(s^{\prime \prime \prime}\right)\right)\right)\right]\right)$$
This is like (without transition functions): $$R(s)+\gamma R\left(s^{\prime}\right)+\gamma^{2} R\left(s^{\prime \prime}\right)+\gamma^{3} R\left(s^{\prime \prime \prime}\right)+\ldots$$ So when we start in state $s$ we want to take the action that gives us the best total reward, taking into account not only the current or next state, but all possible next states until we reach the goal. These are the time steps you refer to, i.e. each action taken is done in a time step. And when we learn the policy we try to take into account as many time steps as possible to choose the best action. You can find quite a large number of examples if you search on the internet, for example, in the slides of the CMU, the UC Berkeley or the UW. answered Nov 8, 2016 at 14:38 agold While this is nicely detailed, I think you could have answered the question more directly and succinctly – hisairnessag3 In the reinforcement learning setting, an agent interacts with an environment in (discrete) time steps, which are incremented after the agent takes an action, receives a reward and the "system" (the environment and the agent) moves to a new state. More precisely, at time step $t=0$ (the first time step), the environment (including the agent) is in some state $s_t = s_0$, takes an action $a_t = a_0$ and receives a reward $r_t = r_0$, and the environment (including the agent) moves to a next state $s_{t+1} = s_{0 + 1} = s_1$, which will also be the state that the environment will be in at the next time step, $t+1$, hence the notation $s_{t+1}$. Here, the subscripts $_t$ refer to the time step associated with those "entities" (state, action and rewards). So, after one time step (or after $t=0$), the agent will be in state $s_{t+1}$ and the new time step will be $t + 1 = 0 + 1 = 1$. So, we are now at time step $t=1$ (because we have just incremented the time step) and the agent is in state $s_{t} = s_1$. The previously described interaction then repeats: the agent takes an action $a_{t} = a_1$, gets the reward $r_t = r_1$ and the environment moves to the state $s_{t+1} = s_{1+1} = s_{2}$, and so on. In your summation, we are just discounting the rewards using a value denoted by $\gamma$ (which is usually between $0$ and $1$), that is often called the "discount factor". That summation represents the sum of the rewards the agent will receive starting (in this case) from time step $t=1$. We could also just have $r_1 + r_2 + r_3 + \dots $, but, for technical or mathematical reasons, we often "discount" the rewards, that is, we multiply them by $\gamma$ (raised to a power associated with the time step at which that reward will be received). In the above description, I said that, at some time step $t$, the agent takes an action $a_t$ and receives a reward $r_t$. However, it is often the case that the reward received after taking an action at time step $t$ is denoted by $r_{t+1}$. I think this is a little confusing, but not conceptually "wrong", because one might think that the reward for having performed an action at time step $t$ is only received at the next time step. (You should get used to slightly different notations and terminology. At the beginning, it is not easy to understand, if the notation is not precise and consistent across sources, but you will get used to it, the more you learn about the topic, in the same way that you get used to a new language). nbro
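To make the value-iteration recursion from the first answer and the per-time-step bookkeeping from the second concrete, here is a minimal sketch. The 3-state, 2-action MDP below (states, rewards, transition probabilities) is entirely made up for illustration; only the value $\gamma = 0.95$ and the convergence test are taken from the answer above.

```python
import numpy as np

# Made-up 3-state, 2-action MDP for illustration only.
n_states, n_actions = 3, 2
R = np.array([0.0, 0.0, 1.0])            # R(s): reward associated with state s
# P[a, s, s'] = probability of moving from s to s' under action a
P = np.array([
    [[0.9, 0.1, 0.0], [0.1, 0.8, 0.1], [0.0, 0.1, 0.9]],   # action 0
    [[0.5, 0.5, 0.0], [0.0, 0.5, 0.5], [0.0, 0.0, 1.0]],   # action 1
])
gamma, eps = 0.95, 1e-6

V = R.copy()                              # V_1(s) = R(s)
for _ in range(10_000):
    # Bellman update: V_{i+1}(s) = R(s) + gamma * max_a sum_s' P(s'|s,a) V_i(s')
    V_new = R + gamma * np.max(P @ V, axis=0)
    if np.max(np.abs(V_new - V)) < eps:   # convergence test from the answer
        V = V_new
        break
    V = V_new

policy = np.argmax(P @ V, axis=0)         # greedy action chosen at each time step
print("V* ~", np.round(V, 3), "policy:", policy)
```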
What is a generalized MDP? If Least-Squares TD is computationally more expensive, then why is it more data efficient than semi-gradient TD(0)? How to deal with episode termination in Advantage Actor-Critic algorithm? Is the next state drawn from the joint distribution of the previous state and action? Apart from the state and state-action value functions, what are other examples of value functions used in RL? How is the DQN loss derived from (or theoretically motivated by) the Bellman equation, and how is it related to the Q-learning update? How can we find the value function by solving a system of linear equations without knowing the policy? How to encourage the reinforcement-learning agent to reach the goal as quickly as possible, and what's the effect of discount factor?
CommonCrawl
User talk:Dfeuer From ProofWiki 2 Smullyan and Fitting 3 SourceReview template 4 Epsilon 6 spambots 8 make sure of redirects 9 Discipline requested 10 Point Finite 11 Source works 12 Please help me 13 Take care when redirecting User talk:Dfeuer/Archive/Prehistory User talk:Dfeuer/Archive/Ancient History Smullyan and Fitting Note the pages that I have added containing the S&F work: the link you want to add at the bottom of your pages is now {{BookReference|Set Theory and the Continuum Problem|2010|Raymond M. Smullyan|author2=Melvin Fitting|ed=revised|edpage=Revised Edition}}. Have fun. How to treat the issue of class vs. set in all the pages which have practically identical contents for both set and class has been vexing us for a long time. 1) It would be good to have "purely set theory" pages wherein the issue of classes does not appear at all, so as not to cause confusion for those who do not know what a "class" is or why it is important (or indeed, for their purposes, whether). 2) On the other hand we do not want to duplicate the entirety of the set theory category with just "set" replaced by "class". Your compromise is as good as any other we've devised but it still does not feel optimal. The whole concept of "class theory" sucks cat farts out of a lead balloon anyway - the whole shebang is a messy kludge to get around Russell's paradox and I'm afraid I cannot be party to it and keep what little remains of my sanity. I have the Bernays work - one day I'll transfer the contents to ProofWiki but that won't be anywhere near immediate. --prime mover (talk) 21:48, 21 March 2013 (UTC) Avoiding self-contradictory axioms seems to be rather important. I'm not bothered by putting sets and classes side by side. What's more troubling, from my perspective, is that different theories have different notions of the relationship between sets and classes. Smullyan and Fitting decree that every set is also a class. Some others apparently do not, preferring instead to have a notion of "small class", a class that is extensionally equal to some set. This leads to a bit of a clash in terminology. Another terminological issue: the current definition of a foundational relation is a weak one, requiring only that every non-empty subset have a minimal (or initial) element. Smullyan and Fitting instead require (of a "well-founded relational system") that every subclass have an initial element. These are apparently equivalent if the axiom of foundation is accepted. The intuitionists apparently prefer a still stronger definition, requiring that it be possible to do induction relative to the relation. This is classically equivalent to Smullyan and Fitting's definition, but apparently not intuitionistically. Perhaps the stronger one should be "strongly foundational/strongly well-founded"? --Dfeuer (talk) 22:07, 21 March 2013 (UTC) Not my business but I will just say it anyway. Why not just implement a principle? Have a look at: Duality Principle (Order Theory) for example. The principle you make will probably be more intricate than that but it will save you a lot of, frankly, artificial work. Mathematics really is like a tree sometimes. Its roots branch out too. --Jshflynn (talk) 22:28, 21 March 2013 (UTC) TL;DR: $\mathsf{Pr} \infty \mathsf{fWiki}$ isn't ready for this. More contemplation with broad (indeed, site-wide) perspective is necessary. So far we have been unable to come up with a section where multiple genuinely different (i.e. not equivalent through some perhaps difficult theorem) approaches are taken. 
The PropLog section is the most advanced in this regard, and were it not for my reprioritisation I would be working hard on accomplishing this feat (described by some, upon success, as "something seriously worthwhile"). Because the logic section is more suited to this kind of exploration (logicians deal with different logical systems all the time; set theorists or mathematicians building on set theory usually bother with precisely one version of set theory) I suggest that further explorations in this direction are postponed until we have generated a proper framework (e.g. support for proving that "theories are equivalent" when they are formulated in different signatures) which will necessarily see its first fruits in said logic department. Just my 2 cents. — Lord_Farin (talk) 22:32, 21 March 2013 (UTC) @Jshflynn: Problem is that the (more precisely, some) different formulations of set theory are not equivalent. This prohibits as simple a scheme as you set out. — Lord_Farin (talk) 22:32, 21 March 2013 (UTC) The text I'm currently going through deals with NBG class-set theory (a conservative extension of ZF(C)) under classical logic and classically-embeddable modal logic, both with and without foundation, replacement, choice, CH, and GCH. So there's no major conflict between this and classical ZFC. However, there are (1) generally unimportant but annoying differences between theories such as whether a set is a class and (2) classically equivalent but non-foundationally, intuitionistically, or constructivistically distinct definitions of certain terms. We need to figure out how to handle such sort-of-equivalent-but-not-really definitions. --Dfeuer (talk) 22:44, 21 March 2013 (UTC) Exactly L_F's point. Seriously, while it's a passable stopgap to add "... (or class)" to wherever a set is mentioned, it is at best a compromise and a not-very-good one at that. I'm having similar trouble getting the whole area of polynomial theory structured in some way so that the theorems and definitions all mean what is intended in each of the various contexts in which they are raised - and nothing hits the spot. You understand better now the reasons behind my insistence on sourcing all (at least) definitions and axioms from hard-copy sources? --prime mover (talk) 23:02, 21 March 2013 (UTC) First, LF is correct. In the long run it would be better if the machinery for comparing theories by strength was developed first. In reality though $\mathsf{Pr} \infty \mathsf{fWiki}$ has to grab what it can get from volunteers and Dfeuer is interested in this it seems. @Dfeuer You seem to have a large perspective on this. Can you list all the distinct theories (and their major source works if possible) that 95% of contemporary pure set theorists actually work with please. No, I cannot. I poked around to try to get a sense of things, and the main sense I got was of a whole zoo of different theories and philosophies and so on. The source-rigidity policy actually does not help here at all—in fact it's more of a hindrance. When three different sources define the same term to mean three different things, there is value to being able to give them three different names. See, for a simple, example, the debate on Locally Compact—there's a traditional definition, and then there's a stronger sense that's equivalent for Hausdorff spaces (Munkres calls that "strongly locally compact" but acknowledges no standard term for it). 
On that page, PW chose the stronger sense and only mentions the weak sense in an "alternative definition" on the same page. Not ideal, really, but the conversation fizzled with no resolution. Aaaaanyway. The real point I think is that sources are a zoo and just being sure to stick with them doesn't give any guarantee of coherence. --Dfeuer (talk) 23:16, 21 March 2013 (UTC) Concur on the zoo part. However I do feel that using sources is more likely to provide coherence than obscuring the origin of a train of thought would (in that source links provide a means to let multiple pairs of eyes assess intention, validity, etc. of a particular source). In the utopian limit, sources could all be sifted through and a true amalgamate of sources and mathematical knowledge would replace $\mathsf{Pr} \infty \mathsf{fWiki}$. Sadly, as-is this will require an infinitude of hard work. Nothing wrong with hard work. In fact, I thought I acknowledged the difficulty of the present process; in any case, glad you agree on that one. But difficulty alone is not a conclusive reason to abandon the project. That's it for now — LF out, off to bed and thesis (in that order). — Lord_Farin (talk) 23:33, 21 March 2013 (UTC) Not all sources are created or looked upon equally. A wiki has absolutely no reputation at all. It is dependent on its sources. You already know this of course but I just feel I need to say it out loud for some reason. --Jshflynn (talk) 23:30, 21 March 2013 (UTC) In that spirit I'd like to add that anything dependent on its sources needs to be judicial in choosing them. There's a real tendency to look on books and articles as reliable sources, simply because of their format. I think it's important to remember just how easy it is to get any kind of work published somewhere. --Linus44 (talk) 17:37, 22 March 2013 (UTC) It's also important to consider the ages of sources. As mathematics develops, mathematicians develop new views of what constitutes the essence of an idea. A reliable, or even once-definitive, text may not be the best source for definitions if it is not very recent. Similarly, an introductory textbook (or even, perhaps, some advanced ones) may not be the best source for definitions because there may be subtle limitations that don't surface in that context. --Dfeuer (talk) 18:08, 22 March 2013 (UTC) No, I'm serious. I understand that some contributors look upon ProofWiki as a learning experience, as in: "if I post up loads of maths then maybe some of it will sink in" but that's not what this was originally about. If you are really having difficulty getting your head round an area of mathematics, then you are encouraged to leave it alone as you are by definition not necessarily going to do a good job on it. I think you need to read what I write more carefully. The problem we are discussing here has nothing to do with my difficulty, or lack thereof in any field of mathematics. It has to do with problems inherent to the task of putting together many different sources with wildly divergent perspectives based on different understandings of similar topics. As for the rest, I suspect that if you limit contributors on PW to experts that you will find there are very few, if any, people remaining. You were the one who complained that the exercise described above was too difficult. My point stands: you need to understand this area of mathematics before you can put together the appropriate structure. 
There is more to this than just knowing what the various definitions are and how you can manipulate the various entities: it involves having a holistic appreciation for the entire thing. --prime mover (talk) 19:24, 22 March 2013 (UTC) You can delete my posts all you want, but you'll have to admit: truth hurts. I will continue to delete your schoolyard insults. You may have more mathematical experience than I do, but neither of us has learned everything there is to learn about math. Oh, and chance will be a fine thing if you added any sources at all without being leaned on really heavily. My view is that the better sources are early sources. Modern textbooks are nothing more than tired regurgitation in ever more pointlessly expensive containers. It's the only way mathematicians can make money: throw together a grandiosely packaged and disgustingly expensive bag of rubbish and then make all the poor stupid undergrads buy it before they are allowed to graduate. The hope is that all one would need to do would be to read ProofWiki and not need to tap into that legalised mugging that is the "education system". --prime mover (talk) 18:45, 22 March 2013 (UTC) I am not saying, by any means, that old textbooks are bad and new textbooks are good. My personal experience suggests that age is not a good predictor of quality. I am saying that in order to avoid being permanently stuck in the mathematics of the 60s, we need to take care to make room for more modern ideas and terminology. The older terminology and treatments can be put into modern perspectives. This doesn't require going and changing everything always to the most recent pronouncement of so-and-so, but it requires some willingness to rename older definitions as appropriate to allow newer ones to take the spotlight. --Dfeuer (talk) 19:08, 22 March 2013 (UTC) Indeed, assessing the quality of a textbook is a hard thing to do. Invariably one needs to have multiple accounts of the same subject or bias will occur. Since $\mathsf{Pr} \infty \mathsf{fWiki}$ is a long stretch from the subjects that are considered to have only one good source work, this shouldn't hamper things too much. In general, a book's age and number of editions/printings are indications of how good it is; another one is reviews, though it is usually hard to come by some good ones. As a final refuge, one could read the book for oneself before considering posting it up here (I highly recommend this last step be carried out to a reasonable extent for any source). As for the case at hand, one definitely needs above-average erudition and subject-specific knowledge to make the mess that mathematicians have made of set theory a bit more sensible. Besides a completely rigorous account of ZF(C), which has the largest collection of authoritative resources developing it, it is IMHO best to read multiple books on the subject (not necessarily completely thoroughly; after a while one understands the most common trains of thought) but sufficiently deep as to be able to discern the idiosyncratic from the general consensus in any particular source. As-is, I fear it is necessary to conclude that not a single contributor matches these criteria (though the ubiquitous e-book versions may aid in that regard without hitting the wallet too much — consider it academic research if you worry about legal issues). I will be pleased when I hear someone feels to be up for the job. Until then, I plead it be left alone. — Lord_Farin (talk) 19:50, 22 March 2013 (UTC) Left alone doesn't look very good to me. 
The site currently adheres more or less to the set theory enshrined in a single text, which no one appears to have, plus errors from various contributors which are rather difficult to correct given the shaky foundations. As I've said, I'm going to put up Smullyan and Fitting's version of NBG in my user space as long as I feel like it, but I do hope that it will be possible to move out of user space before ProofWiki implodes entirely. --Dfeuer (talk) 19:58, 22 March 2013 (UTC) I hope so too. In fact, I hope to be back before it implodes (and to avert that event). Point is, we don't know how to implement various approaches properly right now, and it seems bad to add to the _beep_ that's already out there. I presume you refer to the Takeuti/Zaring thing with the single text; I've expressed my opinion on that one in the past. I like to think that there's not too many plainly false or severely flawed material out there. If you find a dissonance with this in your own mind it might be good to assemble a list of things that you think need to be addressed. A motivation is duly appreciated. Finally, please do continue drafting pages in your user namespace about NBG theory; that way we can easily put out a lot of material relatively quickly when we get the paradigm sorted out. Investigations on further sources that could be used (if not covered fully, then at least for comparing with the Takeuti/Zaring material to assess what's particular to them, and what is considered common practice) would be most appreciated. Usual standards for bringing in sources apply. — Lord_Farin (talk) 20:17, 22 March 2013 (UTC) SourceReview template I know you don't give a flying damn about sources, but some of us do: when you refactor a result with sources cited at the bottom, please add {{SourceReview}} at the bottom, above all the source citations that you duplicate between pages. Otherwise the process flow of the sources involved becomes compromised. --prime mover (talk) 18:26, 25 March 2013 (UTC) Having started a new paradigm with your edit of Symbols:Epsilon, your help would be appreciated in two further ventures: a) Work on making it look more attractive b) Put the same thing in all the other Greek letters. --prime mover (talk) 21:10, 1 April 2013 (UTC) "GUESSED at missing link." Yes, I put it that way to try to make it clear that I was not sure that it was correct, and should be verified by someone who has a clue about that kind of mathematics. Please, try to assume good faith. --Dfeuer (talk) 05:38, 3 April 2013 (UTC) As I have said repeatedly, if you don't know what you're doing, don't do it. Put a MissingLinks template in place instead. Happens you guessed correctly, but summary comments are not visible enough to be able to be picked up on long-term. --prime mover (talk) 05:56, 3 April 2013 (UTC) spambots Oh, and there's nothing more futile than trolling spambots. Don't worry, they get caught without your kind attentions. It may make you feel better but it makes us look childish. --prime mover (talk) 05:58, 3 April 2013 (UTC) I'm going to have to call you out over sources again. Please add your sources (particular case in point being Axiom:Axiom of Infinity/Naturally Ordered Semigroup.--prime mover (talk) 08:37, 14 April 2013 (UTC) make sure of redirects I note you're been changing the names of various definitions around well-founded, strongly well-founded, foundational, etc. etc. and also the redirects of e.g. Definition:Well-Founded Relation. There's a danger with doing this. 
Proofs which may rely on "well-founded relation" meaning one thing may no longer be valid if the target page of the redirect is not the same one as it was when those proofs were written. This can be especially problematical when you are making things up (e.g. Definition:Strongly Well-Founded Relation which you coined). As I suggested when this topic was first raised: the recommended technique is to take a particular work and use it as a main source work for a flow of thought. What you seem to be doing is picking a concept, googling for every usage you can find, and trying to synthesise a definition for a widely disparate range of sources for which you are not completely familiar with the contexts. Such an approach is suboptimal in $\mathsf{Pr} \infty \mathsf{fWiki}$. Please stop using this approach. Pick a book and work with it: work through it and ensure that the context is completely consistent throughout. If this sort of approach is not to your taste, then maybe you need to reconsider where lies the optimum outlets for your talents. --prime mover (talk) 21:33, 20 April 2013 (UTC) It's a work in progress. Very few things depended on that redirect, but I will check them. i'd like to do what I can to avoid letting the profusion of minor definitional variations confuse matters too much. The made-up terminology is not an ideal approach, but I'm not currently seeing a better way—there are both old and new texts defining these notions in each of several ways that textbooks have been treating as substantially different for at least three decades and probably longer. Even the "no infinite descending chains" definition, which appears to have fallen out of favor in set theory a long time ago, is still common in computer science, where that condition tends to be just what is needed in a terminatiom argument. If we call all these things "well-founded" we will confuse matters terribly. --Dfeuer (talk) 22:13, 20 April 2013 (UTC) Discipline requested You have replaced a perfectly good proof of Separable Metacompact Space is Lindelöf (obtained directly from a published work) with a creation of your own which has holes in it. Now you know how this website works. It doesn't matter how little you think of existing work, or however many flaws you see in the argument, you don't just blow it away and replace it with your own creations. You create a separate proof, and raise whatever concerns you have in the talk page of the original work. You know this: you have been told it repeatedly. Either you are incapable of learning or you disagree with it from a philosophical viewpoint. Both are unacceptable. --prime mover (talk) 05:57, 9 May 2013 (UTC) NO. This is a general attitude problem, not just about that page.--prime mover (talk) 06:45, 9 May 2013 (UTC) You do not have priority ownership of your user page. This section has been added here as a specific point for which improvement is required. As such you do not get to delete its contents just because you don't like what it says. --prime mover (talk) 07:45, 9 May 2013 (UTC) Point Finite Are you going to fix Definition:Point Finite to be compatible with Point Finite Set of Open Sets in Separable Space is Countable? If not I'm afraid I'm going to have to revert the latter to discussion of covers rather than of general sets of sets. --prime mover (talk) 05:17, 13 May 2013 (UTC) Yes, I will. --Dfeuer (talk) 05:24, 13 May 2013 (UTC) Source works Sorry, beg pardon, you did. 
--prime mover (talk) 16:59, 1 July 2013 (UTC)

As I accidentally moved User:kc_kennylau/sandbox to Mills' Theorem instead of just copying and pasting it, please help me to move the history of Mills' Theorem to User:kc_kennylau/sandbox or please delete User:kc_kennylau/sandbox so that I can move the history of Mills' Theorem back to User:kc_kennylau/sandbox. I know this is not a very appropriate request, but please help me. Many thanks. --kc_kennylau (talk) 12:24, 21 July 2013 (UTC)

Deleted. In the future, the correct way to make such requests is to use the delete template. --Dfeuer (talk) 15:20, 21 July 2013 (UTC)

Thank you for teaching me. --kc_kennylau (talk) 15:22, 21 July 2013 (UTC)

Take care when redirecting

I've recently come across some examples of redirects where the transclusions weren't taken care of when moving the parent- and subpages. This can potentially result in pages breaking in the future. I'm confident this notice makes sure that it doesn't happen again in the near future. — Lord_Farin (talk) 19:56, 20 September 2013 (UTC)
Villain model with long-range couplings @inproceedings{Giachetti2022VillainMW, title={Villain model with long-range couplings}, author={Guido Giachetti and Nicol{\`o} Defenu and Stefano Ruffo and Andrea Trombettoni}, G. Giachetti, N. Defenu, +1 author A. Trombettoni The nearest-neighbor Villain, or periodic Gaussian, model is a useful tool to understand the physics of the topological defects of the two-dimensional nearest-neighbor XY model, as the two models share the same symmetries and are in the same universality class. The long-range counterpart of the two-dimensional XY has been recently shown to exhibit a non-trivial critical behavior, with a complex phase diagram including a range of values of the power-law exponent of the couplings decay, σ , in… Renormalization, vortices, and symmetry-breaking perturbations in the two-dimensional planar model J. V. José, L. Kadanoff, S. Kirkpatrick, D. Nelson The classical planar Heisenberg model is studied at low temperatures by means of renormalization theory and a series of exact transformations. A numerical study of the Migdal recursion relation… A modified Villain formulation of fractons and other exotic theories Pranay Gorantla, Ho Tat Lam, N. Seiberg, Shu-Heng Shao Journal of Mathematical Physics We reformulate known exotic theories (including theories of fractons) on a Euclidean spacetime lattice. We write them using the Villain approach and then we modify them to a convenient range of… Analysis of the low-temperature phase in the two-dimensional long-range diluted XY model Fabiana Cescatti, Miguel Ib'anez-Berganza, A. Vezzani, R. Burioni The critical behaviour of statistical models with long-range interactions exhibits distinct regimes as a function of $\rho$, the power of the interaction strength decay. For $\rho$ large enough,… SELF‐DUALITY AND THE LOGARITHMIC GAS IN THREE DIMENSIONS * L. Jacobs, R. Savit a, U( 1) ) symmetry, are self-dual (in a way to be made precise below) and have a phase structure as a function of N quite similar to that of the two-dimensional Z ( N ) clock models and the… Universality of the Berezinskii–Kosterlitz–Thouless type of phase transition in the dipolar XY-model A. Vasiliev, A. Tarkhov, L. Menshikov, P. Fedichev, U. R. Fischer We investigate the nature of the phase transition occurring in a planar XY-model spin system with dipole–dipole interactions. It is demonstrated that a Berezinskii–Kosterlitz–Thouless (BKT) type of… Self-consistent harmonic approximation in presence of non-local couplings G. Giachetti, N. Defenu, S. Ruffo, A. Trombettoni Physics, Mathematics EPL (Europhysics Letters) We derive the self-consistent harmonic approximation for the 2D XY model with non-local interactions. The resulting equation for the variational couplings holds for any form of the spin-spin coupling… Unitary Chern-Simons matrix model and the Villain lattice action Mauricio Romo, M. Tierz We use the Villain approximation to show that the Gross-Witten model, in the weak- and strong-coupling limits, is related to the unitary matrix model that describes U(N) Chern-Simons theory on S^3.… Berezinskii-Kosterlitz-Thouless Paired Phase in Coupled XY Models. G. Bighin, N. Defenu, I. Nándori, L. Salasnich, A. Trombettoni Numerical simulations in the paradigmatic case of two coupled XY models at finite temperature find evidences that for any finite value of the interlayer coupling, the BKT-paired phase is present. Topological phase transitions in four dimensions N. Defenu, A. Trombettoni, D. 
Zappalà Logarithmic corrections in the two-dimensional XY model W. Janke Using two sets of high-precision Monte Carlo data for the two-dimensional XY model in the Villain formulation on square $L \times L$ lattices, the scaling behavior of the susceptibility $\chi$ and…
Fereydooni, A., Safapour, A. (2018). Banach Pair Frames. Wavelet and Linear Algebra, 5(1), 27-47. doi: 10.22072/wala.2017.60236.1107

Banach Pair Frames

Abolhassan Fereydooni (Department of Basic Sciences, Ilam University, Ilam, Iran); Ahmad Safapour (Vali-e-Asr University)

In this article, we consider pair frames in Banach spaces and introduce Banach pair frames. Various concepts in frame theory, such as frames, Schauder frames, Banach frames and atomic decompositions, are considered as special kinds of (Banach) pair frames. Some frame-like inequalities for (Banach) pair frames are presented. The elements that participate in the construction of (Banach) pair frames are characterized. It is shown that a Banach space $\mathrm{X}$ has a Banach pair frame with respect to a Banach scalar sequence space $\ell$ precisely when it is isomorphic to a complemented subspace of $\ell$. It is shown that if we are allowed to choose the scalar sequence space, pair frames and Banach pair frames with respect to the chosen scalar sequence space denote the same concept.

Keywords: Banach frame; Atomic decomposition; (Banach) pair frame; (Banach) pair Bessel
Journal of Philosophical Logic

Principles for Object-Linguistic Consequence: from Logical to Irreflexive

Carlo Nicolai, Lorenzo Rossi

First Online: 20 June 2017

We discuss the principles for a primitive, object-linguistic notion of consequence proposed by (Beall and Murzi, Journal of Philosophy, 3 pp. 143–65 (2013)) that yield a version of Curry's paradox. We propose and study several strategies to weaken these principles and overcome paradox: all these strategies are based on the intuition that the object-linguistic consequence predicate internalizes whichever meta-linguistic notion of consequence we accept in the first place. To these solutions will correspond different conceptions of consequence. In one possible reading of these principles, they give rise to a notion of logical consequence: we study the corresponding theory of validity (and some of its variants) by showing that it is conservative over a wide range of base theories: this result is achieved via a well-behaved form of local reduction. The theory of logical consequence is based on a restriction of the introduction rule for the consequence predicate. To unrestrictedly maintain this principle, we develop a conception of object-linguistic consequence, which we call grounded consequence, that displays a restriction of the structural rule of reflexivity. This construction is obtained by generalizing Saul Kripke's inductive theory of truth (strong Kleene version). Grounded validity will be shown to satisfy several desirable principles for a naïve, self-applicable notion of consequence.

Keywords: Object-linguistic consequence; V-Curry paradox; Logical consequence; Irreflexive consequence

The authors gratefully acknowledge the support of, respectively, the European Commission, Grant 658285 FOREMOTIONS, and the Fonds zur Förderung der wissenschaftlichen Forschung (FWF), Grant P29716-G24.

Object-linguistic treatments of consequence have been extensively investigated in the recent literature: on these approaches, consequence is formalized as a predicate in some first-order language, and principles governing its behaviour are given. These studies are motivated by diverse philosophical aims, ranging from criticisms of paraconsistent theories [39], to deflationism about consequence [34], to new versions of truth-theoretical paradoxes such as Curry's paradox [1, 18, 27]. Some of these authors, Beall and Murzi [1] and Murzi and Shapiro [23] in particular, also stress the analogy between object-linguistic treatments of consequence, truth, and comprehension, and call for a unified solution of the resulting paradoxes, arguing that substructural approaches are preferable to fully structural ones.1 In order to conform with the terminology adopted in the literature, we will treat 'consequence' and 'validity' as synonymous where, crucially, consequence or validity do not necessarily coincide with logical consequence or logical validity. All these approaches can in fact be seen as investigating different ways in which some conclusion 'follows from' some premises.

In their recent [1], Beall and Murzi proposed the following naïve principles for a primitive validity predicate Val(x, y): where φ and ψ range over sentences possibly containing Val itself, and \(\ulcorner \cdot \urcorner \) is informally understood as a name-forming device. It is not completely clear how to read ⤙ : Beall and Murzi [1] interpret it as an unspecified relation of 'following from'.
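The displayed statement of these two principles did not survive the extraction of this page. As they are standardly formulated in Beall and Murzi's paper (and as the discussion that follows presupposes), the introduction rule (VP) and the detachment principle (VD) read roughly as follows; the exact typography of the published display may differ:
$$\textup{(VP)}\quad \text{if } \varphi \;⤙\; \psi, \text{ then } ⤙\; \mathsf{Val}(\ulcorner\varphi\urcorner,\ulcorner\psi\urcorner); \qquad\qquad \textup{(VD)}\quad \varphi,\ \mathsf{Val}(\ulcorner\varphi\urcorner,\ulcorner\psi\urcorner) \;⤙\; \psi.$$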
They also introduce (what they see as) an analogue of the disquotation schema for truth: (VP) and (VD) are inconsistent with classical logic, over a sufficiently expressive base theory. Beall and Murzi show this via a variant of Curry's Paradox, which they call V-Curry Paradox. In order to introduce the paradox, let us fix the meaning of ⤙ as a sequent arrow of a system including (VD) and the axioms of a sufficiently strong syntax theory as initial sequents, and closed under (VP) and the standard logical rules – notably, left contraction and cut. Our syntactic axioms enable us to find a sentence ν that is inter-derivable with \(\mathsf {Val}(\ulcorner \nu \urcorner ,\ulcorner \bot \urcorner )\), where ⊥ is some falsity of the base theory. Reasoning in the naïve theory of validity, we have: In this paper, we explore different strategies to block the V-Curry and related paradoxes. These strategies fall under a common intuition: starting with some meta-theoretic consequence relations, we internalize them in the object language in ways that capture their fundamental traits. That is, each such strategy corresponds to the acceptance of different principles for Val and to different restrictions of the structural rules. Different solutions to the paradox, then, correspond to different ways of cashing out the idea that the acceptance of a sentence of the form \(\mathsf {Val}(\ulcorner \varphi \urcorner ,\ulcorner \psi \urcorner )\) is ultimately to be explained and justified by the acceptance of some meta-theoretical validity statements, where the acceptance of the latter does not involve object-linguistic validity principles. A natural option to develop this strategy, and block the V-Curry (and related paradoxes), is to apply (VP)only to logical derivations. Under this reading, Val becomes a primitive predicate for logical validity. This is the strategy followed by Ketland [15]: he axiomatizes Val over Peano Arithmetic (henceforth PA) and proves the consistency of the theory resulting from this restriction of (VP). This option is supported by the fact that (VP) does not preserve logical validity. The very possibility of formulating (VP) requires a well-behaved machinery to handle the name-forming device \(\ulcorner \cdot \urcorner \) and this machinery does not satisfy uniform substitutivity, violating a basic requirement for logically valid principles.2 However, Ketland's consistency proof only applies to a restricted category of theories. These theories will be called later reflexive theories. Moreover, Ketland's strategy is based on the possibility of reducing the primitive logical validity predicate to a provability predicate definable in PA. Besides establishing consistency, such proof-theoretic reductions (such as conservativity and variants of interpretability) also help us in characterizing the notion associated with the logical validity predicate. For instance, the reducibility of the truth predicate to the base theory may be used to assess general conceptions of truth such as truth-theoretical deflationism (see [14, Ch. 7]): in the same way, proof-theoretical reductions might be employed to assess deflationary and other general conceptions of logical validity (see [34]). 
Therefore, in Section 2, we provide more general reduction techniques that, besides yielding a consistency proof, will give us a finer-grained analysis of logical validity axiomatized over a wide range of base theories.3

A noteworthy feature of object-linguistic logical validity is that iterations of the validity predicate are not allowed. For if φ ⤙ ψ is logically valid we can conclude \(\mathsf {Val}(\ulcorner \varphi \urcorner , \ulcorner \psi \urcorner )\) in the theory of logical validity, but from the latter we cannot conclude \(\mathsf {Val}(\ulcorner \varnothing \urcorner , \ulcorner \mathsf {Val} (\ulcorner \varphi \urcorner , \ulcorner \psi \urcorner ) \urcorner )\). There are, however, different notions of consequence, and notions expressing 'following from' more generally, for which iterations are very natural, such as entailment or implication.4 A standard option to approximate iterability is resorting to hierarchies, namely stratifying the ⤙ and the validity predicate. For example, Field [9] suggests a hierarchy of validity predicates and sequent arrows, and the following version of Beall and Murzi's principles, where ⤙ β is read as 'derivability in the theory of validity of level β':

In Section 2.4 we will see that this stratified notion of validity may be understood in terms of a hierarchy of reflection principles over the starting theory: therefore, not only is stratified object-linguistic validity classically consistent, but it has a natural conceptual analysis in terms of a hierarchy of soundness extensions of the starting theory. However, like any hierarchical approach, this proposal too suffers from variants of the so-called 'Nixon-Dean problem' (see [16], pp. 694–697); consider for example the following case.

Speaker A says: 'the negation of everything I say follows from what Speaker B says'.

Speaker B says: 'everything I say follows from what Speaker A says'.

As for truth, these cases pose problems for hierarchical and non-self-applicable accounts of consequence. In Sections 3–4, we develop an approach to object-linguistic validity that overcomes this problem: it blocks paradoxical arguments and, at the same time, avoids restrictions on (VP), recovers a natural version of (VD), and delivers a single and genuinely self-applicable notion of validity. This will be accomplished by an inductive construction that generalizes the one in [16] (strong Kleene version). The fundamental feature of the construction is that the models (for languages with self-applicable validity) it generates do not satisfy the structural rule of reflexivity.5 The smallest fixed point of our construction yields a notion that we might call grounded validity, in that it extends Kripke's notion of grounded truth (see [16], pp. 694 and 706–707). This is because the meta-theoretical notion of validity that holds in the base language determines the extension of the object-linguistic validity predicate. As we argue in Section 5, grounded validity affords us a natural reading of the naïve validity-theoretical principles.

2 Object-Linguistic Validity and Classical Logic

One may be tempted to read the rules (VP) and (VD) as characterizing a notion of logical validity or logical consequence. However, it soon became clear that this temptation should be resisted: object-linguistic treatments of logical consequence simply do not give rise to paradox. This is the conclusion reached by Cook [7] and Ketland [15], and echoed by Field [9].
In particular, the former analyze the Curry-like derivation sketched in the introduction and come to the main conclusion that paradox arises when the principles governing this notion (whatever it may be) of primitive consequence are themselves considered to be logically valid.6 In the following two subsections we consider strategies to overcome paradox while keeping classical logic. But we do not only aim at (classical) consistency: by considering suitable reduction methods of the theories with primitive validity to the respective base theory or suitable extensions of it, we intend to study the nature of the concept of validity in relation with the inferential resources of the starting base theory. In particular: Improving on [7, 15], we give a uniform method for the conservativity of the theories of logical validity over an arbitrary theory extending Elementary Arithmetic (EA), a very weak arithmetical theory. This method will also yield the reducibility of the theory of logical validity to reflexive base theories (e.g. PA as in [7, 15]) and its local reducibility in finitely axiomatized based theories (e.g. EA itself), in which 'reduction' is intended as a well-behaved version of relative interpretability that preserves arithmetical vocabulary. Even if logical validity is extended to purely arithmetical consequence, classical logic can consistently be kept by interpreting Val as provability in the base theory. Starting form this observation, we show that the hierarchy of validity predicates hinted at by Field [9] can be naturally interpreted as a hierarchy of local reflection principles for the starting theory. These formal results suggest in turn that the notions of primitive logical and arithmetical (or syntactic) validity are not only unparadoxical, but that they can be conceptually reduced, either globally or locally, to notions definable in the base theory or extensions in the same language. 2.1 Arithmetical Theories and Reductions We now fix some formal details. We work in the language \(\mathcal {L}=\{0,\mathsf {S},+,\times ,\mathsf {exp},\) ≤} of arithmetic. Occurrences of the quantifiers in expressions of the form (∀x ≤ t) φ(x) and (∃x ≤ t) φ(x) where t does not contain x are called bounded. Formulas containing only bounded occurrences of quantifiers are called elementary formulas or Δ0-formulas. All theories considered below will extend Elementary Arithmetic EA (or, equivalently, IΔ0 plus the totality of exponentiation).7 The class of elementary functions \(\mathcal {E}\) is obtained by closing the initial functions zero(⋅), Suc(⋅), + , ×, 2 x , \(\mathsf {P}_{i}^{n}(x_{1},...,x_{n})=x_{i}\) with (1 ≤ i ≤ n), and truncated subtraction \(x\dot -y\) under the operations of composition and bounded minimalization: $$H(\mathbf{x})\,=\,F(G_{1}(\mathbf{x}),\ldots,G_{n}(\mathbf{x})); \quad\quad (\mu t\leq y)\;P(\textbf{x},t)\,=\,\left\{\begin{array}{l}\,\text{the least}\, t\leq y\, \text{s.t.}\, P(\mathbf{x},t)\\ \,0, \, \, \, \text{if there is no such}\, t \end{array}\right. $$ where F, G1,…,G n are elementary functions and P an elementary predicate. EA has sufficient resources to naturally introduce new relations corresponding to the elementary functions by proving their defining equations. We will therefore freely employ some functional expressions for the relevant elementary operations and relations. The formalization of the syntax of first-order theories as it is standardly done in, e.g., [33], is carried out without difficulties in EA. 
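As an aside (our illustration, not part of the original text): the coding machinery invoked here rests on functions obtained by exactly such bounded searches. For instance, integer division is elementary, since
$$\lfloor x/y\rfloor \;=\; (\mu t\leq x)\,\big((t+1)\cdot y > x\big),$$
with the convention that the bounded search returns 0 when no witness below the bound exists (e.g. for y = 0); extracting components of coded sequences is handled analogously, by searches bounded by the code itself.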
In particular, once we show that the standard arithmetization of the syntax can be captured by elementary functions, the fact that EA can Σ1-define precisely the elementary functions ensures us that syntactic predicates and notions can be intensionally captured in it. Unless otherwise specified, throughout this section we fix a Hilbert-style system for first-order logic in which modus ponens is the only rule of inference: \(X\vdash \varphi \) then indicates that there is a derivation in this system of φ from sentences in X, logical axioms, and using modus ponens only. Derivations will therefore be sequences of formulas. Also the syntactic notion of relative interpretation of a theory U presented via an elementary set of axioms into another elementary presented theory W will repeatedly occur:8 it can be considered as a triple (U, τ, W), with τ a translation function \(\tau \colon \mathcal {L}_{U}\to \mathcal {L}_{W}\) that maps n-ary relations of \(\mathcal {L}_{U}\) into \(\mathcal {L}_{W}\)-formulas with n free variables, n-ary functions of \(\mathcal {L}_{U}\) into \(\mathcal {L}_{W}\)-formulas with n + 1 free variables satisfying the obvious existence and uniqueness conditions, and that relativizes quantifiers to a suitable \(\mathcal {L}_{W}\)-formula δ(x), the domain of the translation. In addition, (U, τ, W) satisfies $$\text{if }\;U\vdash \varphi,\;\text{ then }\;W\vdash \bigwedge_{x_{i}\in \mathsf{FV}(\varphi)}\delta(x_{i})\rightarrow \varphi^{\tau} $$ for formulas \(\varphi \in \mathcal {L}_{U}\) and FV(φ) the set of free variables of φ. A relative interpretation preserves the structure of a proof. On many occasions we will employ a more regimented notion of relative interpretation. An interpretation is direct if quantifiers are unrelativized and identity is mapped into identity. Let U and W be such that \(\mathcal {L}\subseteq \mathcal {L}_{U}\cap \mathcal {L}_{W}\). We say that U is \(\mathcal {L}\) -embeddable in W if there is a direct interpretation of U into W that leaves the \(\mathcal {L}\)-vocabulary unchanged. \(\mathcal {L}\)-embedding is a properly stricter notion than relative interpretability.9 Finally, U is locally interpretable in W if any finite subtheory of U is relatively interpretable in W. The notion of local \(\mathcal {L}\) -embedding is defined analogously. 2.2 Object-Linguistic Logical Validity As anticipated, in this subsection we deal with object-linguistic treatments of logical validity: that is we focus on theories that will be obtained by restricting (VP) only to purely logical derivations. It is worth remarking here that, since we are employing classical logic, the deduction theorem holds: this makes the presentation of the theories of logical validity smoother. Let \(T\supseteq \mathsf {EA}\) be a consistent theory formulated in \(\mathcal {L}_{V}=\mathcal {L} \cup \{\mathsf {Val}\}\). The theory T V 0extends T with the following principles, for all \(\mathcal {L}_{V}\) -sentences φ, ψ: We refer to the theory \({ T}^{\mathsf {V_{0}}}\mathpunct \upharpoonright \) as the theory obtained from T V 0 by allowing only formulas of \(\mathcal {L}\) as instances of nonlogical axiom schemata of T. T V 0 results from restricting (VP) to purely logical derivations. However, since conditional introduction will be assumed throughout this section, it is convenient to work with a unary rather than a binary validity predicate. 
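The display stating the two principles of \(T^{\mathsf{V_0}}\) is missing from this copy of the page. Given the restriction to purely logical derivations described above, and the mutual embedding with the unary theory stated below, a plausible reconstruction (ours, including the labels) is:
$$\textup{(VP}_{0}\textup{)}\quad \text{if } \varphi\vdash_{\mathsf{PL}(\mathcal{L}_{V})}\psi, \text{ conclude } \mathsf{Val}(\ulcorner\varphi\urcorner,\ulcorner\psi\urcorner);\qquad\qquad \textup{(VD}_{0}\textup{)}\quad \mathsf{Val}(\ulcorner\varphi\urcorner,\ulcorner\psi\urcorner)\land\varphi\rightarrow\psi.$$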
Definition 2 (Primitive logical validity) Let \(T\supseteq \mathsf {EA}\) be a consistent theory formulated in \(\mathcal {L}^{+}=\mathcal {L}\cup \{\mathsf {V}\}\), where V is now intended as a unary predicate. The theory T V extends T with the principles, for all \(\mathcal {L}^+\)-sentences φ:
$$\textup{(VP1)}\quad \text{if } \vdash_{\mathsf{PL}(\mathcal{L}^{+})}\varphi, \text{ conclude } \mathsf{V}(\ulcorner\varphi\urcorner); \qquad\qquad \textup{(VD1)}\quad \mathsf{V}(\ulcorner\varphi\urcorner)\rightarrow\varphi.$$
Again we refer to the theory \({T}^{\mathsf {V}}\mathpunct \upharpoonright \) as the theory obtained from T V by allowing only formulas of \(\mathcal {L}\) as instances of nonlogical axiom schemata of T.

That T V is no essential modification of T V 0 is guaranteed by the following: T V and T V 0 are mutually \(\mathcal {L}\)-embeddable, and so are \(T^{\mathsf {V}}\!\!\upharpoonright \) and \(T^{\mathsf {V_{0}}}\mathpunct \upharpoonright \). The idea is entirely straightforward. By employing the recursion theorem to translate within Gödel quotes,10 we can uniformly replace Val(x, y) and V(x) with, respectively, \(\mathsf {V}(\tau _{0}(x\underset {\cdot }{\rightarrow } y))\) and \(\mathsf {Val}(\ulcorner 0=0\urcorner ,\tau _{1} (x))\), where τ0, τ1 are suitable (elementary) translations that leave the arithmetical vocabulary unchanged and do not relativize quantifiers. The verification that the two translations are in fact \(\mathcal {L}\)-embeddings is routine, as the following holds, for i ≤ 1 and φ either in \(\mathcal {L}^{+}\) or in \(\mathcal {L}_{V}\):
$$(5)\qquad \text{if } \varphi \text{ is provable in pure logic, then so is } \varphi^{\tau_{i}}.$$

It is intuitively fairly clear that the derivation of the V-Curry paradox is blocked in T V . When (VP) is applied in the informal presentation of the paradox on p. 2, (VD) has already been employed and therefore (VP), in the step between (3) and (4), cannot be applied, since the sequent in (3) is not obtained via a purely logical derivation. This indicates a strategy to prove the consistency of T V , which anticipates some traits of the construction carried out in Section 3.2. The following is a positive inductive definition of the set S of logical truths of \(\mathcal {L}^+\):
$$(6)\qquad y\in S\,\leftrightarrow\,\mathsf{Sent}_{\mathcal{L}^+}(y)\land \left(\mathsf{LAx}(y)\vee \exists x\,(x \in S \land (x\underset{\cdot}{\rightarrow} y)\in S)\right),$$
where \(\mathsf {Sent}_{\mathcal {L}^+}\) and LAx are elementary predicates representing the set of (codes of) sentences of \(\mathcal {L}^{+}\) and of logical axioms respectively. A fixed point of this definition is a set X such that \((\mathbb {N},X)\) models (6) in the sense that X is taken as the extension of S. It is easy to see that the least such fixed point I V is reached after ω iterations of the operator associated with (6), and that the structure \((\mathbb {N},\mathsf {I}_{V})\) is a model of T V (and \(T^{\mathsf {V}}\!\!\upharpoonright \)).

However, consistency may be seen as a necessary but not sufficient condition for a full characterization of the concept of validity captured by T V . As we mentioned above, a primitive validity predicate is usually motivated – for instance in [1, 23, 34] – along similar lines as the truth predicate: in both cases we aim at expressing meta-theoretic facts in the object-language. For instance, one might want to prove in T V that all tautologies are logically valid, or that so are all implications from finite subsets of the axioms of T V . A natural question concerns therefore the costs of the extra expressive power given by V with respect to the inferential resources of the base theory.
Moreover, it is mathematically interesting to weigh these costs across a wide range of possible syntactic base theories by abstracting away from specific conditions related to a particular choice of the base theory. From this point of view, a general study of the properties of the theory of object-linguistic validity such as its \(\mathcal {L}\)-embedding in the base theory T, conservativity over T, finite axiomatizability over T, become integral part of the study of this notion of validity. The analogy with truth cannot be pushed much further; in particular, it would be a mistake to see theories of logical validity as a subspecies of theories of truth. Theories of truth featuring the truth-theoretic version of (VD1) are usually prone to an asymmetry between the internal theory – i. e. what the theory proves true – and the set of its theorems: they prove the conjunction \(\lambda \land \neg \mathsf {T}\ulcorner \lambda \urcorner \) for some sentence λ, where T is the truth predicate. In other words, the theory displays the puzzling feature of asserting a sentence while declaring it untrue.11 The situation in T V is both similar and radically different. By diagonalization, we can obviously obtain a sentence χ such that $$(7)~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~{T^{\mathsf{V}}} \vdash \chi\leftrightarrow \neg \mathsf{V}(\ulcorner \chi\urcorner){\kern10pc} $$ By (VD1) and (7), we can derive \(\neg \mathsf {V} (\ulcorner \chi \urcorner )\) and therefore χ in T V . However, if V is interpreted as logical validity, it is not only harmless, but even desirable for χ to be derivable in T V but not logically valid because its derivation crucially involves (VD1). Next we show that the primitive notion of consequence given by T V cannot serve an expressive role of finite re-axiomatization.12 Lemma 1 T V is not finitely axiomatizable. Seeking a contradiction, let T0 be finite reaxiomatization of T V such that \(T_{0} \dashv \vdash {T^{\mathsf {V}}} \). This entails that we can find a finite subtheory A of T V such that \(\mathsf {A} \dashv \vdash {T^{\mathsf {V}}} \) and in which (VP1) and (VD1) can only be applied to sentences of \(\mathcal {L}^{+}\) containing at most n logical symbols. Let \(\mathcal {L}^{+}_{n}\) be this latter set of sentences. Now adapt (6) in the following way, where \(\mathsf { Sent}_{\mathcal {L}_{n}^{+}}\) is the set of (codes of) sentences of \(\mathcal {L}^{+}\) containing at most n occurrences of logical symbols: $$\begin{array}{@{}rcl@{}} k\in S\;\leftrightarrow \;k&=&\ulcorner \neg(\underbrace{\top\land\ldots\land \top}_{\land\, \text{applied}\, n\text{-times}})\urcorner \;\vee\\ &&{}\left[k\!\in\! \mathsf{Sent}_{\mathcal{L}_{n}^{+}}\!\land\! \left(\mathsf{LAx}(k)\!\vee\! \exists m\,(m\in \mathsf{Sent}_{\mathcal{L}_{n}^{+}}\!\land\! m \in S \!\land\! (m\underset{\cdot}{\!\rightarrow\!} k)\!\in\! S)\right)\right] \end{array} $$ Let I V n be the least fixed point of this inductive definition. \((\mathbb {N},\mathsf {I}_{V^{n}})\) is a model of A but it cannot be a model of T V . □ We now move on to the question of the conservativity and \(\mathcal {L}\)-embeddability of T V in T. In what follows, we distinguish between reflexive and finitely axiomatizable extensions of EA. 
We recall that a theory is reflexive if it proves the consistency of any of its finite subtheories: for all natural choices of T, reflexive theories T prove, for finite \(S\subset T\) and for all \(\varphi \in \mathcal {L}\), $$(\textsf{Rfn}(S))~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\mathsf{Pr}_{S}(\ulcorner \varphi\urcorner)\rightarrow \varphi{\kern20pc} $$ where Pr S (⋅) is a canonical provability predicate for S. For reflexive T, the question of the \(\mathcal {L}\)-embedding and conservativity of T V in T is readily obtained: let us define the elementary translation \(\mathfrak {a}\colon \mathcal {L}^{+}\to \mathcal {L}\): $$\begin{array}{@{}rcl@{}} &&(s=t)^{\mathfrak{a}}:= s=t (\mathsf{V} t)^{\mathfrak{a}}:=\mathsf{Pr}_{\varnothing} (\mathfrak{a}(t))\\ &&(\neg \varphi)^{\mathfrak{a}}:=\neg \varphi^{\mathfrak{a}} (\varphi\land \psi)^{\mathfrak{a}}:=\varphi^{\mathfrak{a}}\land \psi^{\mathfrak{a}}\\ &&(\forall x\varphi)^{\mathfrak{a}}:=\forall x \varphi^{\mathfrak{a}} \end{array} $$ The definition of \(\mathfrak {a}\) again relies on Kleene's recursion theorem: in particular \(\mathfrak {a}(\cdot )\) represents \((\cdot )^{\mathfrak {a}}\) in EA; moreover, \(\mathsf {Pr}_{\varnothing }(\cdot )\) stands for canonical logical provability, that is provability from the empty set of nonlogical assumptions. If T is reflexive, then \(\mathfrak {a}\) is an \(\mathcal {L}\) -embeddingof T V in T. If T is finitely axiomatizable, \(\mathfrak {a}\) cannot be an interpretation of T V in T. Both proofs are immediate. For (i), one simply notices that T, being reflexive, proves \(\mathsf {Pr}_{\varnothing }(\ulcorner \varphi \urcorner )\rightarrow \varphi \) for all \(\varphi \in \mathcal {L}\) by Rfn(S). For (ii), if \(\mathfrak {a}\) were an interpretation of T V in T, by letting A be again a finite axiomatization of T, we would have $$\mathsf{A}\vdash \mathsf{Pr}_{\mathsf{A}}(\bot )\rightarrow \bot, $$ (with\(\bot :=\ulcorner 0=1\urcorner )\),which contradicts Gödel's second incompleteness theorem. □ Part (i) of the previous lemma obviously entails the interpretability of T V in T for reflexive T, being \(\mathcal {L}\)-embeddings stricter than intepretability. Two further remarks: the interpretation \(\mathfrak {a}\) is a variant of the one contained in [15], which only takes care of external occurrences of V without applications of the recursion theorem. Moreover, in Lemma 2(i), \(\mathfrak {a}\) is indeed an \(\mathcal {L}\)-embedding of T V in T when the latter is reflexive. This clearly indicates that, in the case of reflexive theories, the notion of validity governed by (VP1) and (VD1) can be unequivocally understood as a definable notion of logical validity. Lemma 2(i) also immediately yields the conservativity of T V over T for reflexive T. However reflexive theories are in many senses very special and they have a peculiar behaviour with respect to interpretability and related notions. For instance, by Orey's compactness theorem,13 reflexive theories collapse the distinction between local and global interpretability and they have the very convenient feature of proving the reflection principle for pure logic that is, as we have seen, closely related to (VD1). It is therefore natural to generalize the picture given by Lemma 2 and ask ourselves whether the notion of consequence captured by (VP1) and (VD1) can be uniformly characterized also in the case of non-reflexive theories. 
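To illustrate the role of the recursion theorem in the clause for V (a worked example of ours, not in the original): since \(\mathfrak{a}(\cdot)\) represents \((\cdot)^{\mathfrak{a}}\) in EA, nested occurrences of V are unwound inside the Gödel quotes. For instance,
$$\big(\mathsf{V}(\ulcorner \mathsf{V}(\ulcorner 0=0\urcorner)\urcorner)\big)^{\mathfrak{a}} \;=\; \mathsf{Pr}_{\varnothing}\big(\mathfrak{a}(\ulcorner \mathsf{V}(\ulcorner 0=0\urcorner)\urcorner)\big),$$
and the term \(\mathfrak{a}(\ulcorner \mathsf{V}(\ulcorner 0=0\urcorner)\urcorner)\) is EA-provably equal to a code of \(\mathsf{Pr}_{\varnothing}(\ulcorner 0=0\urcorner)\), so the translated sentence amounts to \(\mathsf{Pr}_{\varnothing}(\ulcorner \mathsf{Pr}_{\varnothing}(\ulcorner 0=0\urcorner)\urcorner)\).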
As we anticipated, we will focus on finitely axiomatized theories, which are provably distinct from reflexive theories due to Gödel's second incompleteness theorem. We recall that U is locally interpretable in V if every finite subtheory of U is interpretable in V. Similarly, U is locally \(\mathcal {L}\) -embeddable in V if every finite \(U_{0}\subseteq U\) is \(\mathcal {L}\)-embeddable in V. We also recall that T V is formulated in a Hilbert-style calculus in which modus ponens is the only rule of inference. Proofs in T V of φ are therefore objects of the form \( \mathcal {D}=\langle \varphi _{0},\ldots ,\varphi _{n-1},\varphi \rangle \) where each element of the sequence is either an axiom of T V or it has been obtained from previous elements by modus ponens. Also, the code \(|\mathcal {D}|\) of \(\mathcal {D}\) is the code of \(\langle \ulcorner \varphi _{0}\urcorner ,\ldots ,\ulcorner \varphi _{n-1}\urcorner \rangle \). To prove the local \(\mathcal {L}\)-embeddability of T V in T, we need the following, well known fact: Lemma 3 (Σ1-completeness) For every Σ1-formula φof \(\mathcal {L}\) and every T extending EA, if \(\mathbb {N}\vDash \varphi \), then \(\, T\vdash \varphi \). The informal idea for the proof of the local \(\mathcal {L}\)-embeddability of T V in T is straightforward: we translate only the outermost occurrences of V because only one 'layer' of the logical validity predicate matters in logical proofs. This enables us to dispense with uses of more sophisticated devices, such as the recursion theorem, to translate within Gödel corners. T V is locally \(\mathcal {L}\) -embeddablein T. Let B be a finite subsystem of T V . In B, we can safely assume that there are at most m applications of (VP1) to logical proofs \(\mathcal {D}_{i}\), i ≤ m. Let \(\,|\mathcal {D}_{i}|\,\leq n\) for all logical proofs \(\mathcal {D}_{i}\), that is, the code of each such \(\mathcal {D}_{i}\) is smaller or equal than n. By our assumptions on sequence coding (see [33, Section 2.2]), bounds for (codes of) sequences and their concatenations are given by $$\begin{array}{@{}rcl@{}} \underbrace{\langle k,\ldots,k\rangle}_{m\text{-times}}\leq (k+1)^{2^{m}}; s_{0}^{\smallfrown} s_{1}\leq (s_{0}+s_{1})^{2^{2\mathsf{lh}(s_{0})}}. \end{array} $$ where lh(⋅) is the elementary function that outputs the number of the elements of a sequence. We define an elementary predicate V n (x) stating that x is proved in predicate logic with a proof whose code is less than n: $$\mathsf{V}_{\!n}(x)\;:\leftrightarrow\;(\exists y\leqslant n)\,(\mathsf{Prf}_{\varnothing}(y,x)) $$ Here \(\mathsf {Prf}_{\varnothing }(\cdot ,\cdot )\)is elementary and expresses Hilbert-style provability in\(\mathsf { PL}(\mathcal {L}^{+})\), predicate logicin the language \(\mathcal {L}^{+}\). This n is fixed and will be kept so throughout the proof. We specify the translation \(\mathfrak {b}\):it is important to notice that here we are not employing the recursion theorem. $$\begin{array}{@{}rcl@{}} &&(s=t)^{\mathfrak{b}}:=\;s=t (\mathsf{V} x)^{\mathfrak{b}}:=\mathsf{V}_{\!n}(x)\\ &&\cdot^{\mathfrak{b}}\, \text{commutes with prop. connectives} (\forall x\varphi)^{\mathfrak{b}}:=\forall x \varphi^{\mathfrak{b}} \end{array} $$ To verify that \(\mathfrak {b}\) is an \(\mathcal {L}\)-embedding,we check that (VP1)and (VD1)hold modulo the translation. 
More generally, we show by induction on the length of the derivation in B that, for all \(\mathcal {L}^{+}\)-sentences φ,
$$(8)\qquad \text{if } B\vdash \varphi, \text{ then } T\vdash \varphi^{\mathfrak{b}}.$$
It is clear that we obtain (8) when φ is a logical or arithmetical axiom of B. Let φ be of the form \(\mathsf {V}(\ulcorner \psi \urcorner )\rightarrow \psi \). Obviously we have
$$\text{either }\mathbb{N}\models \mathsf{V}\!_{n}(\ulcorner \psi\urcorner)\text{ or }\mathbb{N}\models \neg \mathsf{V}\!_{n}(\ulcorner \psi\urcorner). $$
If the latter, then we are done by Lemma 3, because V n is elementary. If the former disjunct obtains, there is a purely logical derivation \(\mathcal {D}_{j}\) of ψ such that \(|\mathcal {D}_{j}|\leq n\). Then there is a purely logical derivation \(\mathcal {D}^{\mathfrak {b}}_{j}\) of \(\psi ^{\mathfrak {b}}\) obtained by translating its elements.14 For the induction step, we only need to worry about (VP1). Now if φ is obtained by an application of (VP1), it has the form \(\mathsf {V} (\ulcorner \psi \urcorner )\) for ψ an \(\mathcal {L}^{+}\)-sentence and there is a purely logical proof \(\mathcal {D}_{i}\) of ψ. By assumption, \(|\mathcal {D}_{i}|\leq n\), therefore \(T\vdash \mathsf {V}_{\!n}(\ulcorner \psi \urcorner )\) by Lemma 3. □

Now Proposition 2 immediately yields, besides the consistency of T V that wasn't seriously doubted, the conservativity of T V over T for any T extending EA.

Corollary 1 T V is a conservative extension of T.

If \({T^{\mathsf {V}}} \vdash \varphi \) and \(\varphi \in \mathcal {L}\), then already a finite subsystem \(B\subset {T^{\mathsf {V}}}\) proves φ. By Proposition 2, \(T\vdash \varphi ^{\mathfrak {b}}\). But \(\varphi ^{\mathfrak {b}}\) is nothing more than φ itself by definition of \(\mathfrak {b}\). □

The conservativity of T V over T immediately yields the consistency of T V , relative to the consistency of T, which was assumed in Definition 2. It should be noted that the conservativity of \(T^{\mathsf {V}}\!\!\upharpoonright \) can be obtained in a straightforward way, since any model \(\mathcal {M}\) of \(T\supseteq \mathsf {EA}\) can be expanded to a model \((\mathcal {M},S)\) of \(T^{\mathsf {V}}\!\!\upharpoonright \), where S is the set specified in the inductive definition (6). It is not clear to us whether this strategy can be adapted to the full T V . The strategy employed in Proposition 2, however, has the additional advantage of being formalizable with only weak arithmetical assumptions. Moreover, we obtain another proof of the interpretability of T V in T, for T reflexive, by Orey's compactness theorem: For \(T\supseteq \mathsf {EA}\) and reflexive, T V is interpretable in T. As far as the authors know, the question of the global interpretability of T V for arbitrary \(T\supseteq \mathsf {EA}\) is still open.

2.3 Extending Logical Consequence

It seems natural to wonder whether the reduction methods considered in the previous section can be tweaked to satisfy more principles for V.
As noticed already by Ketland, the \(\mathcal {L}\)-embedding \(\mathfrak {a}\), without essential modifications, gives us a more substantial theory of logical consequence over reflexive theories, encompassing principles such as the following ones:
$$\textup{(\textsf{K})}\qquad \mathsf{V} (\ulcorner \varphi\rightarrow\psi\urcorner)\land \mathsf{V} (\ulcorner \varphi\urcorner)\rightarrow \mathsf{V} (\ulcorner \psi\urcorner)$$
$$(9)\qquad \neg \mathsf{V} (\ulcorner \mathsf{V}(\ulcorner \varphi\urcorner)\urcorner)$$
The consistency of the theory T V + K + 9 is guaranteed by the following corollary to Lemma 2: The theory T V + K + 9 is \(\mathcal {L}\)-embeddable in and conservative over T for reflexive \(T\supseteq \mathsf {EA}\).

Let's abbreviate T V + K by writing T V +. Can we obtain analogues of Proposition 2 and Corollary 1 for T V + over arbitrary \(T\supseteq \mathsf {EA}\)? It turns out that we can, by suitably tweaking the proofs given above.15 The fundamental idea is to modify the bound given in the definition of V n in Proposition 2 to allow for the concatenation of the logical proofs of formulas φ and \(\varphi \rightarrow \psi \) of \(\mathcal {L}^+\) when the translations of the antecedent of K are assumed.

T V + is locally \(\mathcal {L}\)-embeddable in T.

As before, let B be a finite subsystem of T V +. Again, we fix a standard n as bound for the codes of the finitely many logical proofs \(\mathcal {D}_{i}\) preceding an application of (VP1). Let C n (x) be equivalent to:
$$\begin{array}{@{}rcl@{}} &&(\exists y\!\leq\! H(n))\;(\mathsf{Prf}_{\varnothing}(y,x)\land (\forall i\leq \mathsf{lh}(y))(\mathsf{det}((y)_{i},y)\rightarrow \mathsf{tru}(y,i)\leq n\land(\exists w\leq n)\\ &&\quad\quad\quad\quad\quad\quad(\mathsf{Prf}_{\varnothing}(w,(y)_{i})))) \end{array} $$
where
$$H(n)=\underbrace{\langle n,\ldots,n\rangle}_{n^{2}\text{-times}}\,^{\smallfrown} \,\underbrace{\langle n,\ldots,n\rangle}_{n^{2}\text{-times}}=\left(2(n+1)^{2^{n^{2}}}\right)^{2^{2n^{2}}}. $$
Here det(x, y) is an elementary predicate expressing that x is an 'only detachable' member of y, that is, the proof only 'cuts' x via modus ponens and x is not a proper subformula of any other member of y; tru(x, y) is an elementary function that takes the initial subsequence of x with y components and outputs its code.16 Intuitively, C n (x) expresses that x has a proof in pure logic that (i) applies modus ponens to assumptions that are themselves logically provable with proofs smaller than n and (ii) in which all subproofs of these assumptions are also smaller than n. As before, we define the translation \(\mathfrak {c}\) that, like \(\mathfrak {b}\), only replaces outer occurrences of V in proofs, clearly this time with C n and not V n . The proof now proceeds along similar lines as the proof of Proposition 2 except, of course, for the case of K. In particular, we want to show, for an arbitrary \(\varphi \in \mathcal {L}^+\),
$$(10)\qquad T\vdash \mathsf{C}_{n}(\ulcorner \varphi\urcorner)\land \mathsf{C}_{n}(\ulcorner \varphi\rightarrow \psi\urcorner)\rightarrow \mathsf{C}_{n}(\ulcorner \psi\urcorner).$$
As before, if one of \(\mathsf {C}_{n}(\ulcorner \varphi \urcorner )\) and \(\mathsf {C}_{n}(\ulcorner \varphi \rightarrow \psi \urcorner )\) is not true-in-\(\mathbb {N}\), we obtain the claim by Lemma 3.
If they are both true, then there are proofs \(\mathcal {D}_{0}\) and \(\mathcal {D}_{1}\) in \(\mathsf {PL}(\mathcal {L}^{+})\) of φ and \(\varphi \rightarrow \psi \) respecting the conditions above. Since the codes of the detachable members of both proofs will be smaller than n, and so is the number of their subformulas, we can safely assume that
$$|\mathcal{D}_{i}|\leq \langle\underbrace{n,\ldots,n}_{n^{2}\text{-times}}\rangle $$
for i ∈ {0,1}, and that there is a proof of ψ in \(\mathsf {PL}(\mathcal {L}^+)\) with Gödel number ≤ H(n). Therefore \(\mathbb {N}\vDash \mathsf {C}_{n}(\ulcorner \psi \urcorner )\) and \(T\vdash \mathsf {C}_{n}(\ulcorner \psi \urcorner )\) by Lemma 3. □

By the same argument given in Corollary 1: T V + is a conservative extension of T. Again, this guarantees the consistency of T V + relative to the consistency of T. As above, Orey's compactness theorem gives us: For T reflexive, T V + is interpretable in T.

The results just presented improve the picture discussed in [7, 15] and tell us that in many respects – especially if one focuses on conservativity – primitive logical validity is uniformly reducible to the resources of the base theory for a much wider class of theories than the one considered before. However, we were not able to show the interpretability, let alone the \(\mathcal {L}\)-embedding, of T V and T V + in T. This is, however, not an unexpected difficulty: by Orey's and analogous results, there is no gap between local and global interpretability in the context of reflexive theories such as PA. In the case of finitely axiomatizable theories, by contrast, the relationships between these two notions vary considerably and are usually hard to characterize.17 From the point of view of the theory of logical validity the lesson to learn is apparent: the combination of (VP) and (VD) cannot be taken to characterize logical validity, which is unparadoxical and uniformly conservative over base theories that contain just a minimum amount of syntactic reasoning.

2.4 Arithmetical Consequence and Hierarchies

In the previous two subsections we analyzed primitive logical consequence based on an introduction rule (VP1) for the primitive validity predicate restricted to purely logical proofs. It turns out that no incisions on classical reasoning are needed even if one liberalizes (VP1) to arithmetical consequence. We now show that by iterating this idea to the transfinite we obtain a symmetry between the hierarchy of validity predicates suggested in [1, 9] and the hierarchy of local reflection principles for a starting theory T.

For the sake of determinateness, we assume our starting theory to be EA, although the arguments would proceed in an analogous way for any \(T\supseteq \mathsf {EA}\). To define a hierarchy of primitive notions of validity, we assume a notation \((\mathsf {OR},\prec )\) for ordinals up to Γ0, available in EA,18 and a countable stock of predicates V a (x) – where a ranges over codes of ordinals α < Γ0, that is, we take a Latin alphabet letter to code the corresponding ordinal in the Greek alphabet. We let \(\mathcal {L}_{0}\) be \(\mathcal {L}\) itself and \(\mathcal {L}_{\alpha +1}\) is \(\mathcal {L}_{\alpha } \cup \{\mathsf {V}_{\!a}\}\); \(\mathcal {L}_{\lambda }\), for λ limit, contains all V b for β < λ.

Definition 3 (Hierarchical validity) Let S0 := EA.
For successor ordinals, with α < Γ0, Sα+1in \(\mathcal {L}_{\alpha }\) isdefined as follows: $$\textup{(HL)}~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\bigcup_{\beta<\lambda} \mathsf{S}^{\beta}\;\;\text{ for}\, \lambda\, \text{limit.}{\kern17pc} $$ We briefly comment on the halting point Γ0: it is motivated by the availability of natural notation systems and corresponding well-ordering proofs in the theories S α . Variations are obviously possible: notations for more ordinals are possible in EA, although the details will bring us too far from our main concerns here. By contrast, if one wants to stick with ordinals that are provably well-ordered in EA, one would need to stop at ω3. We claim that this hierarchy of validity predicates is closely related to the following, well-known hierarchy of local reflection principles over EA, again for ordinals α, λ < Γ0: $$\begin{array}{@{}rcl@{}} \mathsf{R}^{0}:= \mathsf{EA}; \quad\qquad\quad \mathsf{R}^{\alpha+1}:=\mathsf{R}^{\alpha}+ \mathsf{Rfn}(\mathsf{R}^{\alpha}); \quad\qquad\quad \mathsf{ R}^{\lambda} := \bigcup\limits_{\beta<\lambda} \mathsf{R}^{\beta}. \end{array} $$ $$\mathsf{Rfn}(T):=\mathsf{Pr}_{T}(\ulcorner \varphi\urcorner)\rightarrow \varphi \;\;\text{ for all }\varphi\in \mathcal{L}_{T}. $$ The formalization of provability for the theories R α can be carried out in a standard way once a notation for the ordinals and suitable well-ordering proofs are available. For α < Γ0, S α is \(\mathcal {L}\) -embeddablein R α . For each α < Γ0, we define a translation \(\mathfrak d\colon \mathcal {L}_{<\alpha }\to \mathcal {L}\) as follows, for all β < α, and where \(\mathcal {L}_{<\alpha }:=\bigcup _{\beta <\alpha }\mathcal {L}_{\beta }\): $$\begin{array}{@{}rcl@{}} &&(s=t)^{\mathfrak d}:= (s=t) (\neg\varphi)^{\mathfrak d}:=\neg \varphi^{\mathfrak d}\\ &&(\varphi\land \psi)^{\mathfrak d}:= \varphi^{\mathfrak d}\land \psi^{\mathfrak d} (\forall x\varphi)^{\mathfrak d}:=\forall x\varphi^{\mathfrak d}\\ &&(\mathsf{V}_{\! b}(x))^{\mathfrak d}:= \mathsf{Pr}_{\mathsf{R}^{\beta}}({\mathfrak d}(x)) \end{array} $$ Now we argue inductively given that V0 = R0 and that limit stages are not problematic. For HS2 α , if, in Rα+1, we have\(\mathsf {Pr}_{\mathsf {R}^{\alpha }}(\mathfrak {d}(\ulcorner \varphi \urcorner ))\) for a standard φ, we can conclude \(\varphi ^{\mathfrak d}\) by applying the reflection principle of level α since\(\mathsf {Sent}_{\mathcal {L}}(\mathfrak {d}\ulcorner \varphi \urcorner )\), provably in EA. For HS1 α , we can safely assume that \(\mathsf {R}^{\alpha }\vdash \varphi ^{\mathfrak {d}}\). Therefore already in EA, \(\mathsf {Pr}_{\mathsf {R}^{\alpha }}({\mathfrak {d}}(\ulcorner \varphi \urcorner ))\). □ Proposition 4 suggests at least the following two remarks: for the reader interested in the mathematical strength of the theories S α , by a result of Beklemishev [2], these theories will prove no more Π1-arithmetical sentences than ω α iterated consistency progressions over EA. At the philosophical level, the theories S α embody a notion of arithmetical validity corresponding to a proper extension of arithmetical provability in the starting theory EA stratified along ordinal paths that are meaningful in the starting theory. No incision on classical logic is needed at this stage. However, as pointed out also in [9], the formulation of the theories S α relies on how many ordinals we can code in the starting theory. 
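The successor clauses of Definition 3 did not survive extraction. Judging from the proof of the embedding result below, which argues by cases on (HS1 α ) and (HS2 α ), a plausible reconstruction (ours) is that \(\mathsf{S}^{\alpha+1}\) extends \(\mathsf{S}^{\alpha}\) with, for all \(\mathcal{L}_{\alpha}\)-sentences φ,
$$\textup{(HS1}_{\alpha}\textup{)}\quad \text{if } \mathsf{S}^{\alpha}\vdash\varphi, \text{ conclude } \mathsf{V}_{\!a}(\ulcorner\varphi\urcorner);\qquad\qquad \textup{(HS2}_{\alpha}\textup{)}\quad \mathsf{V}_{\!a}(\ulcorner\varphi\urcorner)\rightarrow\varphi.$$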
In order to read off a notion of validity from this stratified picture there seem to be only two options: either validity is inherently stratified, or there is a specific countable ordinal α such that S α fulfills the requirements we are willing to ascribe to the notion of validity. Neither of these alternatives, however, is completely satisfactory; for one thing, there is no reason to think that validity should be stratified, unless one is happy to concede that also truth is a stratified notion. Moreover, the countable ordinals that are provably well-founded in arithmetical theories vary considerably, and it is highly implausible that the notion of validity should be tied to these implementation details. This is not to say, however, that stratified validity lacks importance. Even if it doesn't afford a viable notion of validity, it gives us a tool to generate validities starting from valid inferences in the base language. This picture will be improved in the in the next two sections, where we will turn the hierarchical increase of validities into a positive inductive definition. This technical shift will yield a truly self-applicable notion of validity whose extension and properties are independent of how we represent ordinals. As in the case of truth, this will require restricting classical reasoning. 3 A New Construction for Naïve Validity In this section we propose a way of transcending the stratified picture of validity that generalizes Kripke's method to provide models for languages with a self-applicable truth predicate (see [16]). We will be mainly concerned with providing a class of models that makes the naïve principles for validity consistent, and not so much with formulating effectively presented theories of validity (as in the previous section).19 The models are obtained via fixed points of an inductive construction. Similarly to what happens in Kripke's theory of truth, we have only one validity predicate that can be introduced without restrictions, via (VP). At a fixed point, (VP) and all the principles that are accepted in the construction can be iterated arbitrarily, thus internalizing all the inferences deriving from validities of the base language. 3.1 Initial Sequents and Rules of Inferences Since, in the perspective of an unstratified picture of validity, (VP) is not in question, what are we do to with the V-Curry Paradox? We have seen in the introduction that the paradox forces a restriction of contraction, cut, or (VD). The idea of transcending the hierarchical conception of validity via an inductive definition is at odds with (VD), even though it calls for an unrestricted acceptance of contraction and cut.20 For sure, (VD) looks perfectly fine if we read Val as an unspecified 'following from' and clearly also as the notion of logical validity studied in Sections 2.2 and 2.3, but things are different if we accept (VP) unrestrictedly: in so doing, we take Val-statements to represent a naïve notion of consequence, namely meta-inferences that hold in virtue of logical, base-theoretic, and validity-theoretic principles. However, in the presence of full (VD), this idea translates into the acceptance of sentences that we might not want to accept, such as ν. In the perspective of transcending the hierarchy of validity predicates, one might think that the problem with (VD) is that it allows us to conclude ψ on the assumption that φ and \(\mathsf {Val}(\ulcorner \varphi \urcorner , \ulcorner \psi \urcorner )\) hold. 
However, if the validity predicate represents meta-theoretical inferences (possibly nested, due to its iterability), one might want to employ an elimination rule that is based on Val-statements that are actually accepted, rather than arbitrarily assumed. The following elimination rule for Val embodies this intuition: $$\textup{(VDm)}\qquad \frac{{\Gamma}_{0} \Rightarrow \varphi, {\Delta}_{0} \qquad {\Gamma}_{1} \Rightarrow \mathsf{Val}(\ulcorner \varphi \urcorner, \ulcorner \psi \urcorner), {\Delta}_{1}}{{\Gamma}_{0}, {\Gamma}_{1} \Rightarrow \psi, {\Delta}_{0}, {\Delta}_{1}} $$ Adopting (VDm), however, is not sufficient to avoid the V-Curry Paradox in the presence of reflexivity: $$\textup{(Ref)}\qquad \varphi \Rightarrow \varphi $$ In fact, (Ref) and (VDm) together immediately yield (VD). So, a proof of ⊥ is now easy to obtain via a modification of the V-Curry derivation given in the introduction, using (Ref) and (VDm). How can we avoid this new path to triviality? Our proposed solution consists in the development of a Kripke-style positive inductive definition that, while restricting (VD) and (Ref), consistently satisfies (VP), (VDm), contraction, cut, and indeed every other classically valid rule of inference (with nonzero premises).21 More generally, our construction will impose uniform restrictions on initial sequents: this harmonizes with the motivation to restrict (VD) outlined above, since arbitrary initial sequents might contain Val-sentences codifying problematic inferences. Rules of inference, by contrast, are safe: if we can control the sequents that we accept, we can adopt all such rules. A Kripke-style construction along the lines of the one developed here has been hinted at by Field in [9]. Meadows [20] also develops an inductive construction that recovers all of Beall and Murzi's principles. His construction also rejects reflexivity but, unlike ours, it is not closed under contraction and cut.22 3.2 The KV-Construction The generalization of Kripke's construction we propose here consists in dealing with sequents rather than single sentences. By 'sequent', from now on, we will mean an object of the form \({\Gamma } \Rightarrow {\Delta }\), where Γ (the antecedent) and Δ (the consequent) are finite sets of \(\mathcal {L}_{V}\)-sentences. Let us describe our approach informally. The following definitions formalize the intuition outlined above, by enabling us to: (i) start with a set of sequents containing at least those of the form \({\Gamma } \Rightarrow {\Delta }, s_{0}=t_{0}\) and \(s_{1}=t_{1}, {\Gamma } \Rightarrow {\Delta }\), for s0 = t0 an atomic arithmetical truth and s1 = t1 an atomic arithmetical falsity; (ii) apply the operational and structural rules of inference to them; (iii) internalize the sequents so obtained within the validity predicate, interpreting Val via principles that are modelled after the rules of inference for the classical material conditional \(\supset \). This process can be iterated: we apply again the operational, structural, and Val rules, and so on ad infinitum. At some ordinal stage, this process reaches a fixed point, which provides the desired interpretation of Val. Definition 4 Let \(S \subseteq \omega \), and define the set S+ as follows.
n ∈ S+ if:
(i) n ∈ S; or
(ii) n is \({\Gamma } \Rightarrow s=t, {\Delta }\), and \(\mathbb {N} \models s=t\); or
(iii) n is \({\Gamma }, s=t \Rightarrow {\Delta }\), and \(\mathbb {N} \models s\neq t\); or
(iv) n is \({\Gamma } \Rightarrow \varphi \wedge \psi , {\Delta }\), and \({\Gamma } \Rightarrow \varphi , {\Delta } \in S\), and \({\Gamma } \Rightarrow \psi , {\Delta } \in S\); or
(v) n is \({\Gamma }, \varphi \wedge \psi \Rightarrow {\Delta }\), and \({\Gamma }, \varphi , \psi \Rightarrow {\Delta } \in S\); or
(vi) n is \({\Gamma } \Rightarrow \varphi \vee \psi , {\Delta }\), and \({\Gamma } \Rightarrow \varphi , \psi , {\Delta } \in S\); or
(vii) n is \({\Gamma }, \varphi \vee \psi \Rightarrow {\Delta }\), and \({\Gamma }, \varphi \Rightarrow {\Delta } \in S\), and \({\Gamma }, \psi \Rightarrow {\Delta } \in S\); or
(viii) n is \({\Gamma } \Rightarrow \forall x \varphi (x), {\Delta }\), and for all \(t \in \mathsf {Cter_{\mathcal {L}_{V}}}\), \({\Gamma } \Rightarrow \varphi (t), {\Delta } \in S\); or
(ix) n is \({\Gamma }, \forall x \varphi (x) \Rightarrow {\Delta }\), and for some \(t \in \mathsf {Cter_{\mathcal {L}_{V}}}\), \({\Gamma }, \varphi (t) \Rightarrow {\Delta } \in S\); or
(x) n is \({\Gamma } \Rightarrow \mathsf {Val}(\ulcorner \varphi \urcorner , \ulcorner \psi \urcorner ), {\Delta }\), and \({\Gamma }, \varphi \Rightarrow \psi , {\Delta } \in S\); or
(xi) n is \({\Gamma }, \mathsf {Val}(\ulcorner \varphi \urcorner , \ulcorner \psi \urcorner ) \Rightarrow {\Delta }\), and \({\Gamma } \Rightarrow \varphi , {\Delta } \in S\), and \({\Gamma }, \psi \Rightarrow {\Delta } \in S\).
\(\mathsf {Cter_{\mathcal {L}_{V}}}\) is an elementary predicate representing the set of (codes of) closed terms of \(\mathcal {L}_{V}\). Let ζ(n, S) abbreviate items (i)-(xi). We can express this definition with an operator \({\Psi } : \mathcal {P}(\omega ) \longmapsto \mathcal {P}(\omega )\) defined as Ψ(S) := {n ∈ ω | ζ(n, S)}. The operator Ψ is increasing and monotone, namely: for every \(S \subseteq \omega \), we have that \(S \subseteq {\Psi }(S)\); for every \(S_{1}, S_{2} \subseteq \omega \), if \(S_{1} \subseteq S_{2}\), then \({\Psi }(S_{1}) \subseteq {\Psi }(S_{2})\). For every \(S\subseteq \omega \), the set $$S_{\Psi} := \bigcup_{\alpha \in \textit{Ord}} {\Psi}^{\alpha}(S)$$ is a fixed point of Ψ, since Ψ(SΨ) = SΨ. SΨ is said to be the fixed point of Ψ generated by S. Let's denote with IΨ the fixed point of Ψ generated by the empty set: $$\mathsf{I}_{\Psi} := \bigcup_{\alpha \in \textit{Ord}}{\Psi}^{\alpha}(\varnothing). $$ IΨ is the least fixed point of Ψ: for every \(S \subseteq \omega \), \(\mathsf {I}_{\Psi } \subseteq S_{\Psi }\). In the next section, we prove that IΨ can be used to interpret \(\mathcal {L}_{V}\)-sequents in a non-trivial way. We will then investigate the behaviour of the structural rules and of the validity-theoretical principles in fixed points of Ψ. 4 Main Properties of the KV-Construction We start by showing that there are consistent fixed points of Ψ. A fixed point SΨ is consistent if it does not contain the empty sequent \(\varnothing \Rightarrow \varnothing \). Consistency typically avoids triviality: if \(\varnothing \Rightarrow \varnothing \) is in a fixed point closed under weakening, then every sequent is in that fixed point.23 IΨ is consistent. The proof is by induction on the stages \(\mathsf {I}^{\alpha }_{\Psi }\) of the construction of IΨ. The claim is trivial for \(\mathsf {I}^{0}_{\Psi }\) and \(\mathsf {I}^{1}_{\Psi }\).
Assuming the claim for the stage \(\mathsf{I}^{\alpha }_{\Psi }\), one simply notices that the stage \(\mathsf {I}^{\alpha +1}_{\Psi }\) is obtained from \(\mathsf {I}^{\alpha }_{\Psi }\) by adding to it all sequents resulting from an application of the clauses of Definition 4. It is then clear that, if \(\varnothing \Rightarrow \varnothing \) is not in \(\mathsf {I}^{\alpha }_{\Psi }\), no such clause can introduce it in \(\mathsf {I}^{\alpha +1}_{\Psi }\). The limit case follows straightforwardly from the successor cases. □ We first notice that every stage \(\mathsf {I}_{\Psi }^{\alpha }\) of IΨ is closed under left and right weakening. By construction, any sequent \({\Gamma } \Rightarrow {\Delta }\) in \(\mathsf {I}_{\Psi }^{\alpha }\) is obtained by applying a series of Ψ-clauses to sequents containing arithmetical truths or falsities, with arbitrary side sentences. Therefore, in order to have \({\Gamma }, {\Gamma }^{\prime } \Rightarrow {\Delta },{\Delta }^{\prime }\) in \(\mathsf{I}_{\Psi }^{\alpha }\), we simply consider the same succession of Ψ-clauses applied to starting sequents with \({\Gamma }^{\prime }\) and \({\Delta }^{\prime }\) as extra side sentences. Lemma 4 (Weakening) For every ordinal α, if \({\Gamma } \Rightarrow {\Delta }\) is in \(\mathsf {I}_{\Psi }^{\alpha }\), then for every \({\Gamma }^{\prime }, {\Delta }^{\prime } \subseteq \mathsf {Sent_{\mathcal {L}_{V}}}\), the sequent \({\Gamma }, {\Gamma }^{\prime } \Rightarrow {\Delta },{\Delta }^{\prime }\) is in \(\mathsf{I}_{\Psi }^{\alpha }\). Also, since we are dealing with finite sets, left and right contraction hold for every sequent in every stage of the construction of IΨ. Lemma 5 (Contraction) For every ordinal α, if \({\Gamma },\varphi ,\varphi \Rightarrow {\Delta }\) is in \(\mathsf {I}_{\Psi }^{\alpha }\), then \({\Gamma },\varphi \Rightarrow {\Delta }\) is in \(\mathsf {I}_{\Psi }^{\alpha }\). Similarly, if \({\Gamma }\Rightarrow \psi ,\psi ,{\Delta }\) is in \(\mathsf{I}_{\Psi }^{\alpha }\), then \({\Gamma }\Rightarrow \psi ,{\Delta }\) is in \(\mathsf {I}_{\Psi }^{\alpha }\). A crucial feature of our construction is that, at any stage, sequents are grounded in at least one sentence in their antecedent or consequent. Lemma 6 (Groundedness) For every ordinal α and every sequent \({\Gamma } \Rightarrow {\Delta }\), if \({\Gamma } \Rightarrow {\Delta }\) is in \(\mathsf {I}_{\Psi }^{\alpha }\), then there is at least one sentence φ in Γ such that \(\varphi \Rightarrow \varnothing \) is in \(\mathsf {I}_{\Psi }^{\alpha }\), or at least one sentence ψ in Δ such that \(\varnothing \Rightarrow \psi \) is in \(\mathsf {I}_{\Psi }^{\alpha }\). We reason by induction on the construction of IΨ. The claim is trivial for \(\mathsf {I}_{\Psi }^{0}\). For \(\mathsf {I}_{\Psi }^{1}\), the claim is also immediate since this set contains only sequents with atomic arithmetical truths in the consequent or atomic arithmetical falsities in the antecedent. We now assume the claim up to α, and prove it for α + 1. Let \({\Gamma } \Rightarrow {\Delta } \in \mathsf {I}_{\Psi }^{\alpha +1}\) be obtained by applying one Ψ-clause to sequents in \(\mathsf {I}_{\Psi }^{\alpha }\). We consider two cases. We first deal with the Ψ-clause for introducing ∀ on the right: in this case \({\Gamma } \Rightarrow {\Delta }\) has the form \({\Gamma } \Rightarrow {\Delta }^{\prime },\forall x \varphi (x)\).
The sequents in \(\mathsf {I}_{\Psi }^{\alpha }\) from which \({\Gamma } \Rightarrow {\Delta }^{\prime }, \forall x \varphi (x)\) is obtained, then, have the following form: $$(11)\qquad {\Gamma} \Rightarrow {\Delta}^{\prime}, \varphi(t_{0}), \ldots,\ {\Gamma} \Rightarrow {\Delta}^{\prime}, \varphi(t_{n}), \ldots $$ By induction hypothesis, for every \({\Gamma } \Rightarrow {\Delta }^{\prime }, \varphi (t_{i})\) in (11), there is a ψ_i in Γ such that \(\psi _{i} \Rightarrow \varnothing \) belongs to \(\mathsf {I}_{\Psi }^{\alpha }\), or a χ_i in \({\Delta }^{\prime }, \varphi (t_{i})\) such that \(\varnothing \Rightarrow \chi _{i}\) belongs to \(\mathsf {I}_{\Psi }^{\alpha }\). If, for some i, ψ_i or χ_i are in Γ or \({\Delta }^{\prime }\), we are done. If there is no i such that ψ_i or χ_i are in Γ or \({\Delta }^{\prime }\), the induction hypothesis gives us that \(\varnothing \Rightarrow \varphi (t_{i})\) is in \(\mathsf {I}_{\Psi }^{\alpha }\) for all i. Therefore, an application of the Ψ-clause (viii) gives us that \(\varnothing \Rightarrow \forall x \varphi (x)\) is in \(\mathsf {I}_{\Psi }^{\alpha +1}\), as desired. If \({\Gamma } \Rightarrow {\Delta }\) is obtained via the Ψ-clause (xi) of Definition 4, then it has the form \({\Gamma }^{\prime }, \mathsf {Val}(\ulcorner \varphi _{0}\urcorner ,\ulcorner \varphi _{1}\urcorner )\Rightarrow {\Delta }\) and \(\mathsf {I}_{\Psi }^{\alpha }\) contains \({\Gamma }^{\prime } \Rightarrow \varphi _{0},{\Delta }\) and \({\Gamma }^{\prime },\varphi _{1}\Rightarrow {\Delta }\). If the induction hypothesis gives us sequents \(\psi \Rightarrow \varnothing \) or \(\varnothing \Rightarrow \chi \) where ψ or χ are in \({\Gamma }^{\prime }\) or Δ respectively, we are done. In the only other case, the induction hypothesis gives us that \(\varphi _{1}\Rightarrow \varnothing \) and \(\varnothing \Rightarrow \varphi _{0}\) are in \(\mathsf {I}_{\Psi }^{\alpha }\). By the Ψ-clause (xi) of Definition 4, then, we get \(\mathsf {Val}(\ulcorner \varphi _{0}\urcorner ,\ulcorner \varphi _{1}\urcorner )\Rightarrow \varnothing \) in \(\mathsf {I}_{\Psi }^{\alpha +1}\). □ To prove the closure of the stages of the construction of IΨ under cut, we need the following inversion lemma. Lemma 7 (Inversion) For every ordinal α, the following holds:
(i) If \({\Gamma } \Rightarrow \varphi \wedge \psi , {\Delta }\) is in \(\mathsf {I}_{\Psi }^{\alpha }\), then \({\Gamma } \Rightarrow \varphi , {\Delta }\) is in \(\mathsf {I}_{\Psi }^{\alpha }\) and \({\Gamma } \Rightarrow \psi , {\Delta }\) is in \(\mathsf{I}_{\Psi }^{\alpha }\).
(ii) If \({\Gamma }, \varphi \wedge \psi \Rightarrow {\Delta }\) is in \(\mathsf {I}_{\Psi }^{\alpha }\), then \({\Gamma }, \varphi , \psi \Rightarrow {\Delta }\) is in \(\mathsf {I}_{\Psi }^{\alpha }\).
(iii) If \({\Gamma } \Rightarrow \varphi \vee \psi , {\Delta }\) is in \(\mathsf {I}_{\Psi }^{\alpha }\), then \({\Gamma } \Rightarrow \varphi , \psi , {\Delta }\) is in \(\mathsf {I}_{\Psi }^{\alpha }\).
(iv) If \({\Gamma }, \varphi \vee \psi \Rightarrow {\Delta }\) is in \(\mathsf {I}_{\Psi }^{\alpha }\), then \({\Gamma }, \varphi \Rightarrow {\Delta }\) is in \(\mathsf {I}_{\Psi }^{\alpha }\) and \({\Gamma }, \psi \Rightarrow {\Delta }\) is in \(\mathsf{I}_{\Psi }^{\alpha }\).
(v) If \({\Gamma } \Rightarrow \forall x \varphi (x), {\Delta }\) is in \(\mathsf {I}_{\Psi }^{\alpha }\), then for all \(t \in \mathsf {Cter_{\mathcal {L}_{V}}}\): \({\Gamma } \Rightarrow \varphi (t), {\Delta }\) is in \(\mathsf{I}_{\Psi }^{\alpha }\).
(vi) If \({\Gamma }, \forall x \varphi (x) \Rightarrow {\Delta } \) is in \(\mathsf {I}_{\Psi }^{\alpha }\), then for some \(t \in \mathsf {Cter_{\mathcal {L}_{V}}}\): \({\Gamma }, \varphi (t) \Rightarrow {\Delta }\) is in \(\mathsf{I}_{\Psi }^{\alpha }\).
(vii) If \({\Gamma } \Rightarrow \mathsf {Val}(\ulcorner \varphi \urcorner , \ulcorner \psi \urcorner ), {\Delta }\) is in \(\mathsf {I}_{\Psi }^{\alpha }\), then \({\Gamma }, \varphi \Rightarrow \psi , {\Delta }\) is in \(\mathsf {I}_{\Psi }^{\alpha }\).
(viii) If \({\Gamma }, \mathsf {Val}(\ulcorner \varphi \urcorner , \ulcorner \psi \urcorner ) \Rightarrow {\Delta }\) is in \(\mathsf {I}_{\Psi }^{\alpha }\), then \({\Gamma } \Rightarrow \varphi , {\Delta }\) is in \(\mathsf {I}_{\Psi }^{\alpha }\) and \({\Gamma }, \psi \Rightarrow {\Delta }\) is in \(\mathsf {I}_{\Psi }^{\alpha }\).
We will only consider case (viii). Let α + 1 be the least ordinal such that \({\Gamma }, \mathsf {Val}(\ulcorner \varphi _{0}\urcorner ,\ulcorner \varphi _{1}\urcorner ) \Rightarrow {\Delta }\) is in \(\mathsf {I}_{\Psi }^{\alpha +1}\). Then either (a) \({\Gamma }, \mathsf {Val}(\ulcorner \varphi _{0}\urcorner ,\ulcorner \varphi _{1}\urcorner ) \Rightarrow {\Delta }\) is obtained by the Ψ-clause (xi) of Definition 4 or (b) it is obtained via a different Ψ-clause. In case (a), \({\Gamma }, \varphi _{1} \Rightarrow {\Delta }\) and \({\Gamma }\Rightarrow \varphi _{0},{\Delta }\) are in \(\mathsf {I}_{\Psi }^{\alpha }\) and we are done. If (b), there are several sub-cases to consider: we just deal with an application of the Ψ-clause (viii), which yields a sequent of the form \({\Gamma }, \mathsf {Val}(\ulcorner \varphi _{0}\urcorner ,\ulcorner \varphi _{1}\urcorner ) \Rightarrow {\Delta }^{\prime },\forall x\varphi (x)\), with \({\Delta }={\Delta }^{\prime },\forall x\varphi (x)\). In this case, \({\Gamma }, \mathsf {Val}(\ulcorner \varphi _{0}\urcorner ,\ulcorner \varphi _{1}\urcorner ) \Rightarrow {\Delta }^{\prime },\varphi (t_{i})\) is in \(\mathsf {I}_{\Psi }^{\alpha }\) for some formula φ(x) and for every i. By induction hypothesis we obtain, in \(\mathsf {I}_{\Psi }^{\alpha }\), sequents of the form $$\begin{array}{@{}rcl@{}} &(\star)\hspace{10pt}{\Gamma}, \varphi_{1} \Rightarrow {\Delta}^{\prime},\varphi(t_{i}) \qquad\qquad (\dagger)\hspace{10pt} {\Gamma} \Rightarrow \varphi_{0},{\Delta}^{\prime},\varphi(t_{i}) \end{array} $$ for every i. By applying the Ψ-clause (viii) to all sequents of the form (⋆) and (†) respectively, we obtain that \({\Gamma }, \varphi _{1} \Rightarrow {\Delta }^{\prime },\forall x \varphi (x)\) and \({\Gamma } \Rightarrow \varphi _{0},{\Delta }^{\prime },\forall x \varphi (x)\) are in \(\mathsf {I}_{\Psi }^{\alpha +1}\), as desired. □ Finally, we show that every stage of the construction of IΨ is closed under cut. Proposition 6 (Closure under cut) For every α, if \({\Gamma }\Rightarrow {\Delta },\varphi \) and \(\varphi ,{\Gamma }\Rightarrow {\Delta }\) are in \(\mathsf {I}_{\Psi }^{\alpha }\), then also \({\Gamma }\Rightarrow {\Delta }\) is in \(\mathsf {I}_{\Psi }^{\alpha }\). The proof is by induction. The case for \(\mathsf {I}_{\Psi }^{0}\) is trivial. The case for \(\mathsf {I}_{\Psi }^{1}\) is also immediately obtained since, for \({\Gamma }\Rightarrow {\Delta },\varphi \) and \(\varphi ,{\Gamma }\Rightarrow {\Delta }\) to be in \(\mathsf {I}_{\Psi }^{1}\), Γ or Δ have to contain at least one atomic arithmetical falsity or truth respectively.
Let us suppose that, for α > 0, \({\Gamma }\Rightarrow {\Delta },\varphi \) and \(\varphi ,{\Gamma }\Rightarrow {\Delta }\) are in \(\mathsf {I}_{\Psi }^{\alpha +1}\). There are three main cases to be considered: (a) in the first, \({\Gamma }\Rightarrow {\Delta },\varphi \) and \(\varphi ,{\Gamma }\Rightarrow {\Delta }\) are obtained by means of a Ψ-clause that introduces φ; (b) in the second, only one of \({\Gamma }\Rightarrow {\Delta },\varphi \) and \(\varphi ,{\Gamma }\Rightarrow {\Delta }\) is obtained via a Ψ-clause that introduces φ; (c) in the third, neither \({\Gamma }\Rightarrow {\Delta },\varphi \) nor \(\varphi ,{\Gamma }\Rightarrow {\Delta }\) is obtained via a Ψ-clause that introduces φ. (a) We consider the case in which φ is of the form \(\mathsf {Val}(\ulcorner \varphi _{0}\urcorner ,\ulcorner \varphi _{1}\urcorner )\). Therefore the sequents $$\begin{array}{@{}rcl@{}} \text{(I)}\hspace{10pt}{\Gamma},\varphi_{0}\Rightarrow\varphi_{1}, {\Delta} \qquad\qquad \text{(II)}\hspace{10pt} {\Gamma}\Rightarrow \varphi_{0},{\Delta} \qquad\qquad \text{(III)}\hspace{10pt} {\Gamma},\varphi_{1}\Rightarrow {\Delta} \end{array} $$ are in \(\mathsf {I}_{\Psi }^{\alpha }\). By the weakening lemma applied to (II), also \({\Gamma }\Rightarrow \varphi _{0},\varphi _{1},{\Delta }\) is in \(\mathsf {I}_{\Psi }^{\alpha }\). By induction hypothesis, since (I) is in \(\mathsf {I}_{\Psi }^{\alpha }\), also \({\Gamma }\Rightarrow \varphi _{1},{\Delta }\) is in \(\mathsf {I}_{\Psi }^{\alpha }\). Therefore, since (III) is also in \(\mathsf {I}_{\Psi }^{\alpha }\), \({\Gamma }\Rightarrow {\Delta }\) will be in \(\mathsf {I}_{\Psi }^{\alpha }\subseteq \mathsf {I}_{\Psi }^{\alpha +1}\) as well, as desired. (b) We only consider the case in which φ is \(\mathsf {Val}(\ulcorner \varphi _{0}\urcorner ,\ulcorner \varphi _{1}\urcorner )\). We assume, moreover, that \({\Gamma }\Rightarrow {\Delta }, \mathsf {Val}(\ulcorner \varphi _{0}\urcorner ,\ulcorner \varphi _{1}\urcorner )\) is obtained via the Ψ-clause (viii) from sequents in \(\mathsf {I}_{\Psi }^{\alpha }\) of the form $$\text{(IV)}\hspace{10pt}{\Gamma}\Rightarrow {\Delta}^{\prime}, \varphi(t_{i}), \mathsf{Val}(\ulcorner \varphi_{0}\urcorner,\ulcorner\varphi_{1}\urcorner) $$ for all i ∈ ω, and that \(\mathsf {Val}(\ulcorner \varphi _{0}\urcorner ,\ulcorner \varphi _{1}\urcorner ),{\Gamma }\Rightarrow {\Delta }\) is obtained via the Ψ-clause (xi) from \({\Gamma }\Rightarrow \varphi _{0},{\Delta }\) and \({\Gamma },\varphi _{1}\Rightarrow {\Delta }\) in \(\mathsf {I}_{\Psi }^{\alpha }\). On the one hand, by the inversion lemma applied to all sequents of the form (IV), we obtain that all sequents of the form \({\Gamma },\varphi _{0}\Rightarrow {\Delta }^{\prime }, \varphi (t_{i}),\varphi _{1}\) are in \(\mathsf {I}_{\Psi }^{\alpha }\). By the weakening lemma, since \({\Delta }={\Delta }^{\prime },\forall x\varphi (x)\), $$\text{(V)}\hspace{10pt}{\Gamma},\varphi_{0}\Rightarrow {\Delta}, \varphi(t_{i}),\varphi_{1} $$ is in \(\mathsf {I}_{\Psi }^{\alpha }\) for all i ∈ ω. On the other, from the fact that \({\Gamma }\Rightarrow \varphi _{0},{\Delta }\) and \({\Gamma },\varphi _{1}\Rightarrow {\Delta }\) are in \(\mathsf {I}_{\Psi }^{\alpha }\), the weakening lemma gives us $$\begin{array}{@{}rcl@{}} \text{(VI)}\hspace{10pt}{\Gamma}\Rightarrow\varphi(t_{i}),\varphi_{0}, \varphi_{1},{\Delta} \qquad\qquad \text{(VII)}\hspace{10pt} {\Gamma},\varphi_{1}\Rightarrow\varphi(t_{i}), {\Delta} \end{array} $$ in \(\mathsf {I}_{\Psi }^{\alpha }\).
By induction hypothesis, since, for all i ∈ ω, (V) and (VI) are in \(\mathsf {I}_{\Psi }^{\alpha }\), also \({\Gamma }\Rightarrow \varphi (t_{i}), \varphi _{1},{\Delta }\) is in \(\mathsf {I}_{\Psi }^{\alpha }\) for all i ∈ ω. Therefore, since for all i ∈ ω (VII) is also in \(\mathsf {I}_{\Psi }^{\alpha }\), \({\Gamma }\Rightarrow \varphi (t_{i}), {\Delta }\) will be in \(\mathsf {I}_{\Psi }^{\alpha }\) for all i ∈ ω. An application of the Ψ-clause (viii) gives us the desired result. (c) We consider the case in which \({\Gamma }\Rightarrow {\Delta },\varphi \) and \(\varphi ,{\Gamma }\Rightarrow {\Delta }\) are obtained by application of the Ψ-clause (viii) to sequents in \(\mathsf {I}_{\Psi }^{\alpha }\) of the form $$\begin{array}{@{}rcl@{}} \text{(VIII)}\hspace{10pt}{\Gamma}\Rightarrow{\Delta}^{\prime},\varphi_{0}(t_{i}),\varphi \qquad\qquad \text{(IX)}\hspace{10pt}\varphi,{\Gamma}\Rightarrow{\Delta}^{\prime\prime},\varphi_{1}(t_{i}) \end{array} $$ for all i ∈ ω. By the groundedness lemma applied to all sequents of the form (VIII), we obtain, for each i ∈ ω, that either there is a sequent \(\psi _{i}\Rightarrow \varnothing \) in \(\mathsf {I}_{\Psi }^{\alpha }\) with ψ_i ∈ Γ, or there is a sequent \(\varnothing \Rightarrow \chi _{i}\) in \(\mathsf {I}_{\Psi }^{\alpha }\) with \(\chi _{i}\in {\Delta }^{\prime },\varphi _{0}(t_{i}),\varphi \). In the former case, we are done by the weakening lemma; in the latter case, if \(\chi _{i}\in {\Delta }^{\prime }\) we are also done by weakening, otherwise we reason as follows. If χ_i is φ_0(t_i) for all i, an application of the Ψ-clause (viii) gives us that \(\varnothing \Rightarrow \forall x\varphi _{0}(x)\) is in \(\mathsf {I}_{\Psi }^{\alpha +1}\), therefore the claim follows by the weakening lemma. If χ_i is φ for some i, we apply the groundedness lemma to all sequents of the form (IX). By the consistency of IΨ, the induction hypothesis cannot give us \(\varphi \Rightarrow \varnothing \) in \(\mathsf {I}_{\Psi }^{\alpha }\). In all other possible outcomes of the groundedness lemma applied to all sequents of the form (IX), we reason as we did in the corresponding cases of (VIII). Reflexivity is the only structural rule that does not hold unrestrictedly in IΨ (the proof is a slight variant of the V-Curry derivation sketched in the introduction). IΨ cannot contain all the instances of $$\textup{(Ref)}\qquad \varphi \Rightarrow \varphi $$ for φ an arbitrary \(\mathcal {L}_{V}\)-sentence. Lemma 8 shows also that dropping reflexivity is 'best possible': we have a single structural rule that cannot be consistently accepted.24 IΨ, however, features a weaker form of reflexivity, which follows immediately from the weakening lemma. For every \(\varphi \in \mathcal {L}_{V}\), we can always find Γ and \( {\Delta } \subseteq \mathsf {Sent}_{\mathcal {L}_{V}}\) such that \({\Gamma }, \varphi \Rightarrow \varphi , {\Delta } \in \mathsf {I}_{\Psi }\), where Γ and Δ can be taken to be disjoint. 4.1 Naïve Validity in IΨ Several principles for naïve validity (including the (Val-Schema)+ formulated and discussed in [9]) are recovered in IΨ, in the sense made precise by the following result.
For every \(\varphi , \psi \in \mathcal {L}_{V}\), and every \({\Gamma }_{0}, {\Gamma }_{1}, {\Delta }_{0}, {\Delta }_{1} \subseteq \mathsf {Sent}_{\mathcal {L}_{V}}:\) $$\begin{array}{ll} (\mathsf{VDm}) & \text{if } {\Gamma}_{0} \Rightarrow \varphi, {\Delta}_{0} \text{ is in } \mathsf{I}_{\Psi} \text{ and } {\Gamma}_{1} \Rightarrow \mathsf{Val}(\ulcorner \varphi \urcorner, \ulcorner \psi \urcorner), {\Delta}_{1} \text{ is in } \mathsf{I}_{\Psi},\\ & \text{then } {\Gamma}_{0}, {\Gamma}_{1} \Rightarrow \psi, {\Delta}_{0}, {\Delta}_{1} \text{ is in } \mathsf{I}_{\Psi}. \\ (\mathsf{Val}\text{-}\mathsf{Schema})^{+} & {\Gamma}, \varphi \Rightarrow \psi, {\Delta} \text{ is in } \mathsf{I}_{\Psi} \text{ if and only if } {\Gamma} \Rightarrow \mathsf{Val}(\ulcorner \varphi \urcorner, \ulcorner \psi \urcorner), {\Delta} \text{ is in } \mathsf{I}_{\Psi}. \end{array} $$ Since (Val-Schema)+ holds in IΨ, it is clear that (VP) and (Val-Schema) are recovered in IΨ as well. (VDm) is not to be understood as a 'weaker version' of (VD), since there are theories that validate (VD) but for which (VDm) is too strong and yields triviality. The non-transitive approach of Ripley [29] and Cobreros et al. [6] is a case in point: adapting the theory developed there to the validity predicate, we see that (VD) holds, while an unrestricted acceptance of (VDm) would trivialize the theory. Essentially, this is because (VDm) incorporates a form of cut, which is clearly inadmissible in a non-transitive approach.25 (VD) is the only validity-theoretical principle that does not hold unrestrictedly in IΨ. In fact, if IΨ contained all instances of (VD), then it would also contain its instance $$\begin{array}{@{}rcl@{}} \nu, \mathsf{Val}(\ulcorner \nu \urcorner, \ulcorner \bot \urcorner) \Rightarrow \bot \end{array} $$ where ν is the V-Curry sentence \(\mathsf {Val}(\ulcorner \nu \urcorner , \ulcorner \bot \urcorner )\). The derivation of the V-Curry paradox (outlined in the introduction) would then give us the sequent \(\varnothing \Rightarrow \varnothing \) in IΨ, against the consistency of IΨ. In Section 3.1 we suggested that a uniform way to avoid V-Curry-driven triviality consists in restricting our acceptance of initial sequents, avoiding the acceptance of Val-sentences that express inferences that we cannot control. Rules of inference, on the other hand, are safe, since the construction of IΨ operates a selection over the sequents that are accepted in the first place. This view sits naturally with a restriction of (Ref) and (VD) (the only schematic inferences amongst the structural rules and the validity-theoretical principles) and an unrestricted acceptance of weakening, contraction, cut, (VP), (VDm), and (Val-Schema)+. The results 4, 5, 6, 8, and 9 establish that IΨ realizes the solution to the paradoxes described in Section 3. It is easy to turn IΨ into a proper model of the language \(\mathcal {L}_{V}\), as it is standardly done with Kripke fixed points.
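To see the construction at work on a concrete, finite example, the following sketch iterates a quantifier-free fragment of the Ψ-clauses of Definition 4 (the atomic arithmetical clauses (ii)–(iii) and the Val-clauses (x)–(xi)), restricted to a fixed finite stock of sentences, until a fixed point is reached. The encoding of sentences, the finite universe, and the handling of the self-referential sentence ν via a name table are simplifying assumptions made only for illustration; the official definition quantifies over all \(\mathcal{L}_{V}\)-sequents.

```python
from itertools import combinations

# Assumed toy encoding of L_V-sentences; names index the dictionary CODE.
# Equations are ('eq', m, n); Val-sentences are ('val', a, b) with a, b names.
CODE = {
    'T': ('eq', 0, 0),           # an atomic arithmetical truth
    'F': ('eq', 0, 1),           # an atomic arithmetical falsity
    'nu': ('val', 'nu', 'F'),    # the V-Curry sentence: nu is Val(<nu>, <0=1>)
    'vTT': ('val', 'T', 'T'),    # Val(<0=0>, <0=0>)
}
UNIVERSE = list(CODE)

def subsets(xs):
    return [frozenset(c) for r in range(len(xs) + 1) for c in combinations(xs, r)]

SEQUENTS = [(g, d) for g in subsets(UNIVERSE) for d in subsets(UNIVERSE)]

def psi(S):
    """One application of (a quantifier-free fragment of) the operator Psi."""
    out = set(S)                                      # clause (i)
    for gamma, delta in SEQUENTS:
        for name in UNIVERSE:
            sentence = CODE[name]
            if sentence[0] == 'eq':
                _, m, n = sentence
                if name in delta and m == n:          # clause (ii)
                    out.add((gamma, delta))
                if name in gamma and m != n:          # clause (iii)
                    out.add((gamma, delta))
            else:                                     # a Val-sentence
                _, a, b = sentence
                if name in delta and (gamma | {a}, (delta - {name}) | {b}) in S:
                    out.add((gamma, delta))           # clause (x)
                if name in gamma and (gamma - {name}, delta | {a}) in S \
                        and ((gamma - {name}) | {b}, delta) in S:
                    out.add((gamma, delta))           # clause (xi)
    return out

# Iterate Psi from the empty set until the (finite) least fixed point is reached.
S = set()
while True:
    nxt = psi(S)
    if nxt == S:
        break
    S = nxt

print((frozenset(), frozenset()) in S)                # empty sequent absent: False
print((frozenset(), frozenset({'vTT'})) in S)         # ∅ ⇒ Val(<0=0>, <0=0>): True
print((frozenset({'nu'}), frozenset({'nu'})) in S)    # (Ref) for nu: False
```

Even in this tiny fragment the three properties established above are visible: the empty sequent is never generated (consistency), ∅ ⇒ Val(⌜0=0⌝, ⌜0=0⌝) is generated after two iterations, and the (Ref)-instance ν ⇒ ν is never generated.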
Let the extension of the validity predicate generated by IΨ, in symbols EΨ, be the set of those pairs of \(\mathcal {L}_{V}\)-sentences 〈φ, ψ〉 such that \(\varnothing \Rightarrow \mathsf {Val}(\ulcorner \varphi \urcorner , \ulcorner \psi \urcorner )\) is in IΨ, and the anti-extension of the validity predicate generated by IΨ, in symbols AΨ, be the set of pairs of \(\mathcal {L}_{V}\)-sentences 〈φ, ψ〉 such that \(\mathsf {Val}(\ulcorner \varphi \urcorner , \ulcorner \psi \urcorner ) \Rightarrow \varnothing \) is in IΨ. The model of \(\mathcal {L}_{V}\) naturally associated with IΨ, thus, is \((\mathbb {N}, \mathsf {E}_{\Psi }, \mathsf {A}_{\Psi })\), and its evaluation clauses can be read off Definition 4. In particular, \((\mathbb {N}, \mathsf {E}_{\Psi }, \mathsf {A}_{\Psi })\) can be associated with a three-valued semantics, say with values {0,1/2,1}, where the logical vocabulary is interpreted as in strong Kleene semantics (with Val being treated as the strong Kleene conditional), and where \({\Gamma } \Rightarrow {\Delta }\) has a tolerant-strict reading, that is, whenever every φ ∈ Γ has value 1/2 or 1, there is a ψ ∈ Δ with value 1.26 We conclude this subsection by noticing that there are close relations between IΨ and the least fixed point of Kripke's construction for truth (strong Kleene version) from [16]. These relations can be made precise by defining ¬φ and \(\mathsf {T}(\ulcorner \varphi \urcorner )\), respectively, as \(\mathsf {Val}(\ulcorner \varphi \urcorner , \ulcorner 0=1 \urcorner )\) and \(\mathsf {Val}(\ulcorner 0=0 \urcorner , \ulcorner \varphi \urcorner )\), and by constructing the least Kripke fixed point, in the usual way, for the language \(\mathcal {L}_{V}\). For every sentence \(\varphi \in \mathcal {L}_{V}\) we will have that: $$\begin{array}{l} \text{if } \varphi \text{ is in the extension of } \mathsf{T} \text{ in Kripke's least fixed point, then } \varnothing \Rightarrow \varphi \text{ is in } \mathsf{I}_{\Psi};\\ \text{if } \varphi \text{ is in the anti-extension of } \mathsf{T} \text{ in Kripke's least fixed point, then } \varphi \Rightarrow \varnothing \text{ is in } \mathsf{I}_{\Psi}. \end{array} \qquad (12) $$ (12) indicates that Kripke's least fixed point for truth constitutes a proper fragment of IΨ. Clearly, the other direction of (12) does not hold. 4.2 Non-Minimal Fixed Points and Extensions Lemma 5 shows that every fixed point of Ψ is closed under contraction. However, this is not so for weakening. The fixed point \(\{(\varnothing \Rightarrow \mu )\}_{\Psi }\), where μ is the sentence \(\mathsf {Val}(\ulcorner \mu \urcorner , \ulcorner \mu \urcorner )\), for example, is not closed under weakening (μ is the validity-theoretical analogue of the truth-teller). This shortcoming, however, can be easily fixed. Let Ψ+ be the monotone operator that results by adding to items (i)-(xi) of Definition 4 the following positive elementary clause as a further disjunct: (xii) n is \(({\Gamma }, {\Gamma }^{\prime } \Rightarrow {\Delta }^{\prime }, {\Delta })\), and \(({\Gamma } \Rightarrow {\Delta }) \in S\). Let's adapt the notation adopted for Ψ to Ψ+. The following result is immediate (the first item follows from Lemma 4 and the proof of Proposition 6, and the second by an induction on the build-up of \(S_{{\Psi }^{+}}\)). Lemma 10 \(\mathsf {I}_{\Psi } = \mathsf {I}_{{\Psi }^{+}}\). For every \(S \subseteq \omega \), SΨ is consistent if and only if \(S_{{\Psi }^{+}}\) is consistent.
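In terms of the toy sketch given earlier, the passage from Ψ to Ψ+ amounts to adding a single clause. The fragment below (reusing `psi` and `SEQUENTS` from that sketch) is again only an illustrative assumption about how one might code it, not part of the official definition.

```python
def psi_plus(S):
    """psi together with the weakening clause (xii): any sequent extending a member of S is added."""
    out = psi(S)
    for gamma, delta in SEQUENTS:
        if any(g0 <= gamma and d0 <= delta for (g0, d0) in S):   # clause (xii)
            out.add((gamma, delta))
    return out
```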
The properties of IΨ transfer to \(\mathsf {I}_{{\Psi }^{+}}\), and the consistency of a fixed point SΨ transfers to the fixed point of Ψ+ generated by the same set S.27 The operator Ψ+, however, guarantees closure under weakening, contraction, and cut.28 For every \(S \subseteq \omega \), every \(\varphi \in \mathcal {L}_{V}\), and every \({\Gamma }, {\Delta } \subseteq \mathsf {Sent}_{\mathcal {L}_{V}}\!\!:\) $$\begin{array}{@{}rcl@{}} \begin{array}{ll} \mathsf{(L}\text{-}\mathsf{Wkn)} \;&\; \text{ If } ~{\Gamma} \!\Rightarrow\! {\Delta} \text{ is in } S_{{\Psi}^{+}}, \text{ then } {\Gamma}, \varphi \Rightarrow {\Delta} \in S_{{\Psi}^{+}}. \\ \mathsf{(R}\text{-}\mathsf{Wkn)} \;&\; \text{ If } ~{\Gamma} \!\Rightarrow\! {\Delta} \text{ is in } S_{{\Psi}^{+}}, \text{ then } {\Gamma} \Rightarrow \varphi, {\Delta} \text{ is in } S_{{\Psi}^{+}}. \\ \mathsf{(L}\text{-}\mathsf{Ctr)} \;&\; \text{ If } ~{\Gamma}, \varphi, \varphi \Rightarrow {\Delta} \text{ is in } S_{{\Psi}^{+}},\text{ then } {\Gamma}, \varphi \Rightarrow {\Delta} \text{ is in } S_{{\Psi}^{+}}. \\ \mathsf{(R}\text{-}\mathsf{Ctr)} \;&\; \text{ If } ~{\Gamma} \!\Rightarrow\! \varphi, \varphi, {\Delta} \text{ is in } S_{{\Psi}^{+}}, \text{ then } {\Gamma} \Rightarrow \varphi, {\Delta} \text{ is in } S_{{\Psi}^{+}}. \\ \mathsf{(Cut)} \;&\; \text{ If } ~{\Gamma} \!\Rightarrow\! \varphi, {\Delta} \text{ is in } S_{{\Psi}^{+}} \text{ and } {\Gamma}, \varphi \Rightarrow {\Delta} \text{ is in } S_{{\Psi}^{+}}, \text{ then } {\Gamma} \Rightarrow {\Delta} \text{ is in } S_{{\Psi}^{+}}. \end{array} \end{array} $$ 5 From Logical to Grounded Consequence In this paper, we investigated different combinations and modifications of the principles (VP) and (VD), and the corresponding notions of consequence. Starting with a restriction of (VP) corresponding to the notion of logical consequence, we explored ways of keeping the full (VP), thereby restricting (VD), carving out a notion of self-applicable consequence grounded in truths and falsities of the base theory. The key findings of the paper are summarized in Table 1.
Table 1: Key findings.
Logical validity: (i) Conservativity for all \(B \supseteq \mathsf {EA}\); (ii) Local \(\mathcal {L}\)-embeddability for all \(B \supseteq \mathsf {EA}\).
Arithmetical validity: Analysis of Field's hierarchy via local reflection principles.
Grounded validity: (i) Consistency of (VP), (VDm), (Val-Schema)+; (ii) Kripke-style theory for Val: development of Ψ; (iii) Make sense of Val-principles via IΨ and grounded validity.
The rules (VP) and (VD) support a strict reading of ⤙ as logical derivability, and therefore of Val as the class of the logically valid inferences. On this approach, (VP), namely (VP1) in the formalism of Section 2, is restricted to purely logical inferences, while the full (VD) – (VD1) in Section 2 – can be consistently kept: from this point of view, we can naturally read (VD) as preservation of truth in logically valid arguments. A sub-theory of our theory of logical validity is the theory given by (VD) and a suitable version of (VP): this theory meets the criterion (suggested by Field in [9]) of giving the same reading to both the meta-theoretic notion expressed by \(\vdash \) and to the predicate Val. The theory of logical validity is therefore simple and well-behaved: it's then natural to study in more depth the corresponding notion of object-linguistic validity by comparing it to the inferential resources of the underlying base theory.
Corollary 1 tells us that logical validity does not have any impact on the underlying syntactic structure: for any theory extending a very weak arithmetical system, (VD) and the restricted version of (VP) do not enable us to prove new syntactic or arithmetical facts. This extends to further principles for logical consequence, like internalized modus ponens, as shown in Corollary 3. These results seem to suggest that 'deflationary' approaches to logical consequence, such as Shapiro's [34], may endorse conservativity requirements for logical validity. This is not to say, however, that the predicate of logical consequence does not play an indispensable expressive role: although the theory of logical validity is uniformly locally interpretable in the base theory, our results do not show that it is relatively interpretable in the object theory and therefore, arguably, expressively reducible to it. As we have seen, the consistency of the theory of logical validity follows from a restriction of (VP) to purely logical derivations: consistency, however, is preserved even if we internalize purely arithmetical derivations. This led us, following [9], to investigate a hierarchy of arithmetical consequence predicates. We have shown that every stage in this hierarchy corresponds to a stage of a parallel hierarchy of local reflection principles; as a consequence, the hierarchical notion of validity can be read as iterated arithmetical provability. The hierarchical approach to consequence suffers from several well-known problems: in particular, there seem to be notions of 'following from' that cannot be accounted for in the stratified picture, such as implication. Transcending the hierarchy calls for an unrestricted (VP). We have achieved this via a Kripke-style construction, the KV-construction, in which the unrestricted (VP) is balanced by a rule form of (VD). The least fixed point of the KV-construction, IΨ, embodies a notion that, following Kripke's theory of truth, we might call grounded validity, i.e. validity grounded in truths and falsities of the base language, in our case arithmetical truths and falsities.29 The main intuition behind grounded validity is that first we have inferences involving non-semantic facts, which we can then combine and iterate to express more complex inferences, crucially including nested occurrences of the consequence relation. At the fixed point IΨ, the process of generating more and more acceptable inferences reaches a halt: the set IΨ realizes in full the idea of iterating arbitrarily the grounded consequences, and of expressing them in the object-language. To see this, let FΨ be the set of \(\mathcal {L}_{V}\)-sentences such that φ is in FΨ if and only if \(\varnothing \Rightarrow \varphi \) is in IΨ. It is immediate to see that, thanks to the fixed-point property of IΨ, for all sentences φ, ψ we have that: $$(13)\qquad \mathsf{Val}(\ulcorner \varphi \urcorner, \ulcorner \psi \urcorner) \in \mathsf{F}_{\Psi} \text{ if and only if } \mathsf{Val}(\ulcorner \varnothing \urcorner, \ulcorner \mathsf{Val}(\ulcorner \varphi \urcorner, \ulcorner \psi \urcorner) \urcorner) \in \mathsf{F}_{\Psi}. $$ (13) follows immediately by Lemma 9, which shows that (Val-Schema)+ holds unrestrictedly in IΨ.
In his [9], Field rejects (Val-Schema)+ on the grounds of an example that, informally, reads thus:
(14) It follows from 'snow is white' that it follows from 'grass is green' that 'snow is white'.
If 'follows from' is intended as logical consequence, (14) clearly does not hold. However, (14), and more generally (Val-Schema)+, cease to be troubling if we adopt the grounded consequence reading: since snow is white, this truth about the non-semantic vocabulary grounds and justifies the consequence expressed in (14).30 Desirable features such as that expressed in (13) come at a cost: we cannot consistently accept all sequents of the form \(\varphi \Rightarrow \varphi \) in IΨ. As a consequence, not all sentences of the form \(\mathsf {Val}(\ulcorner \varphi \urcorner , \ulcorner \varphi \urcorner )\) are in FΨ. However, this restriction is not so implausible in the context of a grounded consequence relation. Some sequents of the form \(\varphi \Rightarrow \varphi \), in fact, are to be rejected because they are ungrounded. A paradigmatic case of failure of (Ref) involves the V-Curry sentence itself: we do not have \(\nu \Rightarrow \nu \) in IΨ. In other words, we do not accept that
(15) from the fact that (from this sentence it follows that 0=1), it follows that (from this sentence it follows that 0=1).
If we only accept grounded Val-sentences, we want to unpack the 'it follows' used in (15), to see from where it derives. Given the ungrounded nature of ν, such unpacking does not lead us to non-semantic inferences, but to an endless, circular regress. Cases such as (15) are the only kind of sequents of the form \(\varphi \Rightarrow \varphi \) that are not in IΨ, which makes the restriction of reflexivity less drastic.31 Similarly, ungrounded instances of (VD) are not in IΨ, and this restriction is justified as in the case of reflexivity. We conclude by sketching some directions for further work. In the context of logical validity, the main open problem is the question of the global interpretability and, most importantly, of the \(\mathcal {L}\)-embedding of the theory of logical validity in the base theory. As for grounded consequence, the construction described by IΨ can be turned into an axiomatic theory. It would then be natural to study the relationships between this theory and the class of models extending \((\mathbb {N}, \mathsf {E}_{\Psi }, \mathsf {A}_{\Psi })\). Finally, it would be interesting to relate irreflexive validity with paracomplete theories of truth and validity.32 To emphasize even more the analogy with truth, we note that an object-linguistic predicate for consequence is needed for the same purposes that motivate a truth predicate, such as blind ascriptions ('all the derivations made this morning in the Logic lecture are valid') or generalizations. Suppose in fact that (VP) is logically valid, that there is a logical derivation of φ ⤙ ψ, and that f is a function in the language from names of sentences to terms that do not name sentences. Then applying (VP) and uniform substitutivity we conclude that \(\mathsf {Val}(f(\ulcorner \varphi \urcorner ), f(\ulcorner \psi \urcorner ))\) comes out as logically derivable, which is absurd. This is clearly remarked in [7]. Ketland's results will turn out to be special cases of our findings. See [28].
For a quirk of terminology, 'reflexive' will be employed in two different senses: (i) as referred to arithmetical systems that prove the consistency of all their finite subsystems, and (ii) as referred to theories that satisfy the structural rule of reflexivity. We apologize for the possible, but unavoidable, confusion. It is also clear that the paradox can be derived by avoiding any syntactic/arithmetical assumption. To see this, one can employ a familiar trick due to Montague [21]. A finite formulation of the axioms of a finitely axiomatizable T (see below for the choice of T) can be pushed into the instance of the diagonal lemma needed in the derivation of the paradox. Starting with the instance $$\nu \leftrightarrow \mathsf{Val}(\ulcorner \nu\land { A}\urcorner,\bot) $$ of the diagonal lemma, where A is a finite reaxiomatization of T, one easily obtains a version of the paradox that does not rely on the assumption of the logical validity of the underlying arithmetical theory but only of the principles for logical consequence. For more details on EA and elementary functions the reader may consult [3, 33]. The axioms of EA are the universal closures of the following: $$\begin{array}{llllll} \mathsf{EA1}\quad& 0\neq \mathsf{S}x \\ \mathsf{EA2}\quad& \mathsf{S}x=\mathsf{S}y\rightarrow x=y\\ \mathsf{EA3}\quad& x+0=x\\ \mathsf{EA4}\quad& x+\mathsf{S}y=\mathsf{S}(x+y)\\ \mathsf{EA5}\quad& x\times 0=0\\ \end{array} \hspace{1cm} \begin{array}{llllll} \mathsf{EA6}\quad& x\times \mathsf{S}y=(x\times y)+x\\ \mathsf{EA7}\quad& \mathsf{exp}(0)=1\\ \mathsf{EA8}\quad& \mathsf{exp}(\mathsf{S}x)=\mathsf{exp}(x)+\mathsf{exp}(x)\\ \mathsf{EA9}\quad& x\leq 0 \,\leftrightarrow\,x=0\\ \mathsf{EA10}\quad& x\leq \mathsf{S}y\,\leftrightarrow\,(x\leq y\vee x=\mathsf{S} y) \end{array} $$ In addition, EA features the principle of induction for elementary formulas φ(x): $$\varphi(0)\land \forall x(\varphi(x)\rightarrow \varphi(\mathsf{S}x))\rightarrow \forall x\,\varphi(x). $$ The notion of relative interpretation goes back to Tarski et al. [36]. Although it is a syntactic notion, it has a natural semantic counterpart: if U is interpretable in W, in any model of W we can construct an internal model of U. Feferman's theorem on the interpretability of inconsistency (see [38]) represents a separating example between relative interpretation and \(\mathcal {L}\)-embeddings. Let T be ω-consistent: by Feferman's theorem, T + ¬Con(T) is interpretable in T, but it cannot be \(\mathcal {L}\)-embeddable in T because \(\mathcal {L}\)-embeddability clearly preserves ω-inconsistency. For a study of how to separate \(\mathcal {L}\)-embeddings from stronger notions of equivalence, see [26]. We refer to [13, Section 5.3] for motivation and to [33, Section 2.6.1] for a proof of the theorem in EA. This is the standard objection against classical theories such as a version of the well-known theory Kripke-Feferman KF. See [8, 13]. The same proof applies to \(T^{\mathsf {V}}\!\!\upharpoonright \). The theorem states that if V is reflexive and any finite \(U_{0}\subseteq U\) is interpretable in V, also U is interpretable in V (see [12, Section 3]). By employing the sequence encoding sketched above, one can estimate for the translated proof: \(|\mathcal {D}^{\mathfrak {b}}_{j}|\leq G(n)\), where \(G(n)=\underbrace {\langle n(n+1){2^{n}},\ldots ,n(n+1){2^{n}}\rangle }_{n\text {-times}}=\left (n(n+1){2^{n}}+1\right ){2^{(n(n+1){2^{n}})}}\) We adapt to the present setting a more general strategy suggested to us by Albert Visser.
In the definition of C_n, (x)_y = z is an elementary functional expression corresponding to the projection of the y-th element of x. For a study of the asymmetry between finitely axiomatized and reflexive theories in the context of theories of truth, see [25]. In particular, OR is an elementary set of ordinal codes and \(\prec \) an elementary relation isomorphic to the usual well-ordering of ordinals up to Γ0. It is nonetheless possible to develop axiomatic theories that are adequate for these models and therefore avoid any arbitrariness in choosing a natural halting point for progressions of theories of stratified validity. Some substructural theories compatible with Beall and Murzi's principles are available in the literature: the non-contractive theory in [41] (see also [5]) and the non-transitive approach of [29] and [6] (see also [37]) support both (VP) and (VD), and block the derivation of ⊥. Strictly speaking, these are theories of naïve truth that feature a conditional obeying conditional versions of (VP) and (VD), but they can be easily adapted to naïve validity. For the theory of inductive definitions, see [22]. Non-reflexive approaches to paradoxes have not received extensive attention in the literature: some works on the topic include [10, 11, 31, 32]. For a strengthening of Meadows' approach, see [35]. Consistency is typically defined as the absence of a contradiction, but our definition is equivalent to that. We could introduce a connective ¬, interpreting ¬φ as \(\mathsf {Val}(\ulcorner \varphi \urcorner , \ulcorner 0=1 \urcorner )\), and show that the classical rules for negation hold for the so-defined ¬ in IΨ. Then, it would be easy to show that \(\varnothing \Rightarrow \varnothing \notin \mathsf {I}_{\Psi }\) if and only if there is no \(\mathcal {L}_{V}\)-sentence φ s.t. \(\varnothing \Rightarrow \varphi \wedge \neg \varphi \in \mathsf {I}_{\Psi }\). We note that the resulting negation is weaker than classical negation, and indeed even weaker than intuitionistic negation (not all the ex falso sequents \(\varphi \wedge \neg \varphi \Rightarrow \bot \) are in IΨ, although the corresponding rule of inference is admissible in IΨ). [32] has recently remarked that a similar restriction of reflexivity in the context of rules for naïve comprehension can avoid paradoxes and, at the same time, make both cut and contraction admissible. The choice between (VD) and (VDm) reflects an ongoing debate in the truth-theoretical literature (especially concerning substructural theories of truth) on the 'correct' form of modus ponens (see [42] and [30]). Two of the main contenders are analogous to (VD) and (VDm): $$\varphi, \varphi \rightarrow \psi \vdash \psi \quad\text{and} \quad \text{from } {\Gamma} \vdash \varphi \text{ and } {\Delta} \vdash \varphi \rightarrow \psi \text{ infer } {\Gamma}, {\Delta} \vdash \psi. $$ We are grateful to Paul Egré, Francesco Paoli, and Robert van Rooij for suggesting this point to us. Tolerant-strict consequence is also employed in [20]. We did not use Ψ+ in the first place in order to simplify the proofs of the properties of IΨ: this has no practical effects, however, since IΨ and \(\mathsf {I}_{{\Psi }^{+}}\) are identical. This result improves on the structural rules recovered by Meadows in [20]. See [16], p. 694 and p. 701. For an analysis of Kripkean groundedness, see [40]. For arguments for Kripkean grounded truth, see [17, 19], and [4]. Grounded validity is clearly distinct from analytical validity, i.e.
validity based on analytical truths and falsities. In fact, it is possible to start our construction from non-analytic claims, e.g. about snow being white and grass being green. Thanks to Andreas Fjellstad for bringing this point to our attention. The standard counterpart of reflexivity in natural deduction is the rule of assumption (see [24], Section 1.3). If we accept a grounded picture of consequence, we can safely assume all the sentences of the base language, and apply all rules of inference to them: the only sentences we cannot assume are the ungrounded ones, in line with our conception of consequence. Moreover, it is easy to see that the notion of consequence captured by IΨ includes all the logical consequences grounded in the base language. Many thanks to Kit Fine, Lionel Shapiro, and Luca Tranchini for useful discussions on these points. The latter research question has also been suggested in [10]. Open access funding provided by Austrian Science Fund (FWF). We are grateful to Paul Egré, Kit Fine, Andreas Fjellstad, Volker Halbach, Julien Murzi, Francesco Paoli, Lionel Shapiro, Luca Tranchini, Robert van Rooij, Albert Visser, and an anonymous referee for several useful comments and discussions about the material presented in this paper.
References
Beall, J.C., & Murzi, J. (2013). Two Flavours of Curry's Paradox. Journal of Philosophy, CX(3), 143–65.
Beklemishev, L. (1995). Iterated reflection versus iterated consistency. Annals of Pure and Applied Logic, 75, 25–48.
Beklemishev, L. (2005). Reflection principles and provability algebras in formal arithmetic. Russian Mathematical Surveys, 60, 197–268.
Burgess, J.P. (2014). Friedman and the Axiomatization of Kripke's Theory of Truth. In Tennant, N. (Ed.) Foundational Adventures. Essays in Honour of Harvey Friedman. College Publications.
Caret, C., & Weber, Z. (2015). A Note on Contraction-Free Logic for Validity. Topoi, 34, 63–74.
Cobreros, P., Egré, P., Ripley, D., & van Rooij, R. (2014). Reaching transparent truth. Mind, 122(488), 841–866.
Cook, R. (2014). There is No Paradox of Logical Validity. Logica Universalis. Published online 11 Jan 2014.
Field, H. (2008). Saving truth from paradox. Oxford: Oxford University Press.
Field, H. (2017). Disarming a Paradox of Validity. Notre Dame Journal of Formal Logic, 58(1), 1–19.
French, R. (2016). Structural Reflexivity and the Paradoxes of Self-Reference. Ergo, 3, 5.
Greenough, P. (2001). Free Assumptions and the Liar Paradox. American Philosophical Quarterly, 38(2), 115–135.
Hájek, P., & Pudlák, P. (1998). Metamathematics of First-Order Arithmetic. Berlin: Springer.
Halbach, V. (2011). Axiomatic Theories of Truth. Cambridge: Cambridge University Press.
Horsten, L. (2011). The Tarskian turn: deflationism and axiomatic truth. Cambridge: MIT Press.
Ketland, J. (2012). Validity as a primitive. Analysis, 72(3), 421–430.
Kripke, S. (1975). Outline of a theory of truth. Journal of Philosophy, 72(19), 690–716.
Leitgeb, H. (2005). What Truth Depends On. Journal of Philosophical Logic, 34, 155–192.
Mares, E., & Paoli, F. (2014). Logical Consequence and the Paradoxes. Journal of Philosophical Logic, 43(2), 439–469.
Martin, D.A. (2011). Field's Saving Truth from Paradox: Some Things It Doesn't Do. Review of Symbolic Logic, 4(3), 339–347.
Meadows, T. (2014). Fixed Points for Consequence Relations. Logique et Analyse, 57(227), 333–357.
Montague, R. (1963). Syntactical treatments of modality, with corollaries on reflection principles and finite axiomatizability. Acta Philosophica Fennica, 16, 153–67.
Moschovakis, Y.N. (1974). Elementary Induction on Abstract Structures. Amsterdam, London, New York: North-Holland and Elsevier.
Murzi, J., & Shapiro, L. (2015). Validity and Truth-Preservation. In Achourioti, T., Fujimoto, K., Galinon, H., Martinez-Fernandez, J. (Eds.) Unifying the Philosophy of Truth. Springer.
Negri, S., & von Plato, J. (2001). Structural Proof Theory. Cambridge: Cambridge University Press.
Nicolai, C. (2016). On expressive power over arithmetic. Unpublished manuscript.
Nicolai, C. (2017). Equivalence for truth predicates. Review of Symbolic Logic. Online first, doi: 10.1017/S1755020316000435.
Priest, G., & Routley, R. (1982). Lessons from Pseudo Scotus. Philosophical Studies, XLII(2), 189–99.
Quine, W.V.O. (1961). Reply to Professor Marcus. Synthese, 13(4), 323–330.
Ripley, D. (2012). Conservatively Extending Classical Logic with Transparent Truth. Review of Symbolic Logic, 5(2), 354–378.
Ripley, D. (2015). Comparing Substructural Theories of Truth. Ergo, 2, 299–328.
Schroeder-Heister, P. (2012). Paradoxes and Structural Rules. In Dutilh Novaes, C., & Hjortland, O. (Eds.), Insolubles and Consequences: Essays in honour of Stephen Read (pp. 203–211). London: College Publications.
Schroeder-Heister, P. (2016). Restricting Initial Sequents: The Trade-Offs Between Identity, Contraction and Cut. In Kahle, R., Strahm, T., Studer, T. (Eds.) Advances in Proof Theory. Birkhäuser, 339–351.
Schwichtenberg, H., & Wainer, S.S. (2012). Proofs and Computation. Cambridge: Cambridge University Press.
Shapiro, L. (2011). Deflating Logical Consequence. The Philosophical Quarterly, LXI(243), 320–42.
Tajer, D., & Pailos, F. (2017). Validity in a dialetheist framework. Logique et Analyse, 60, 238.
Tarski, A., Mostowski, A., & Robinson, R.M. (1953). Undecidable Theories. Amsterdam: North Holland.
Tennant, N. (2015). A new unified account of truth and paradox. Mind, 124, 571–605.
Visser, A. (forthcoming). The interpretability of inconsistency. Feferman's theorem and related results. Forthcoming in the Bulletin of Symbolic Logic.
Whittle, B. (2004). Dialetheism, Logical Consequence and Hierarchy. Analysis, LXI, 320–42.
Yablo, S. (1982). Grounding, Dependence, and Paradox. Journal of Philosophical Logic, 11, 117–137.
Zardini, E. (2011). Truth without contra(di)ction. Review of Symbolic Logic, 4(4), 498–535.
Zardini, E. (2013). Naive Modus Ponens. Journal of Philosophical Logic, 42, 575–593.
A bulk silicon PV module consists of multiple individual solar cells connected, nearly always in series, to increase the power and voltage above that from a single solar cell. The voltage of a PV module is usually chosen to be compatible with a 12V battery. An individual silicon solar cell has a voltage at the maximum power point around 0.5V under 25 °C and AM1.5 illumination. Taking into account an expected reduction in PV module voltage due to temperature and the fact that a battery may require voltages of 15V or more to charge, most modules contain 36 solar cells in series. This gives an open-circuit voltage of about 21V under standard test conditions, and an operating voltage at maximum power and operating temperature of about 17 or 18V. The remaining excess voltage is included to account for voltage drops caused by other elements of the PV system, including operation away from the maximum power point and reductions in light intensity. In a typical module, 36 cells are connected in series to produce a voltage sufficient to charge a 12V battery. The voltage from the PV module is determined by the number of solar cells, and the current from the module depends primarily on the size of the solar cells. At AM1.5 and under optimum tilt conditions, the current density from a commercial solar cell is typically between 30 mA/cm2 and 36 mA/cm2. Single crystal solar cells are often 15.6 × 15.6 cm2, giving a total current of about 9–10 A from a module. The table below shows the output of typical modules at STC. IMP and ISC do not change that much, but VMP and VOC scale with the number of cells in the module.
Cells | Power | V_MP | I_MP | V_OC | I_SC | Efficiency
72 | 340 Wp | 37.9 V | 8.97 A | 47.3 V | 9.35 A | 17.5%
36 | 170 Wp | 19.2 V | 8.85 A | 23.4 V | 9.35 A | 17%
Modules for residential or large fields usually contain either 60 or 72 cells. There are other sizes such as 96-cell modules, but they are much less common. If all the solar cells in a module have identical electrical characteristics, and they all experience the same insolation and temperature, then all the cells will be operating at exactly the same current and voltage. In this case, the IV curve of the PV module has the same shape as that of the individual cells, except that the voltage and current are increased. The equation for the circuit becomes: $$I_{T} = M I_{L} - M I_{0}\left[\exp\left(\frac{q V_{T}}{N\, n\, k\, T}\right) - 1\right]$$ where N is the number of cells in series; M is the number of cells in parallel; IT is the total current from the circuit; VT is the total voltage from the circuit; I0 is the saturation current from a single solar cell; IL is the short-circuit current from a single solar cell; n is the ideality factor of a single solar cell; and q, k, and T are constants as given in the constants page. The overall IV curve of a set of identical connected solar cells is shown below. The total current is simply the current of an individual cell multiplied by the number of cells in parallel, and the total voltage is the voltage of an individual cell multiplied by the number of cells in series: $$I_{SC}(total) = I_{SC}(cell)\times M$$ $$I_{MP}(total) = I_{MP}(cell)\times M$$ $$V_{OC}(total) = V_{OC}(cell)\times N$$ $$V_{MP}(total) = V_{MP}(cell)\times N$$ If the cells are identical then the fill factor does not change when the cells are in parallel or series. However, there is usually mismatch in the cells, so the fill factor is lower when the cells are combined. The cell mismatch may come from manufacturing or from differences in light on the cells, where one cell has more light than another.
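As a quick numerical illustration of the series scaling described above, the following sketch evaluates the module equation for a 36-cell series string and locates the maximum power point. The single-cell parameters (IL, I0, n) are placeholder values chosen only to produce realistic-looking numbers, not data for any particular product.

```python
import numpy as np

# Assumed single-cell parameters (illustrative only)
q = 1.602e-19        # C, elementary charge
k = 1.381e-23        # J/K, Boltzmann constant
T = 298.15           # K (25 °C)
IL = 9.35            # A, light-generated (≈ short-circuit) current of one cell
I0 = 1e-10           # A, saturation current of one cell
n = 1.0              # ideality factor

N = 36               # cells in series
M = 1                # strings in parallel

def module_current(VT):
    """Total module current I_T at total voltage V_T for N series x M parallel identical cells."""
    return M * IL - M * I0 * (np.exp(q * VT / (n * k * T * N)) - 1.0)

V = np.linspace(0.0, 23.5, 2000)
I = module_current(V)
P = V * I

Voc = N * n * k * T / q * np.log(IL / I0 + 1.0)      # open-circuit voltage of the string
mpp = np.argmax(np.where(I > 0, P, 0.0))             # crude maximum-power-point search
print(f"Isc ≈ {module_current(0.0):.2f} A, Voc ≈ {Voc:.1f} V")
print(f"Pmp ≈ {P[mpp]:.0f} W at Vmp ≈ {V[mpp]:.1f} V, Imp ≈ {I[mpp]:.2f} A")
```

With these assumed parameters the script reproduces the general shape of the 36-cell row in the table above: the short-circuit current equals the single-cell value, while the open-circuit and maximum-power voltages are roughly 36 times those of one cell.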
I-V curve for N cells in series x M cells in parallel.
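The scaling rules above translate directly into a few lines of code. The sketch below assumes the standard one-diode cell model written out above; the cell parameters (IL, I0, n) and the 72-cell datasheet values are illustrative placeholders rather than values taken from this page.

```python
import numpy as np

V_T = 0.02569  # thermal voltage kT/q at 25 degC, in volts

def module_current(v_module, n_series=72, m_parallel=1,
                   i_l=9.35, i_0=7.3e-11, n_ideality=1.0):
    """Current of a module of identical cells: N in series, M parallel strings.

    Uses the scaled one-diode equation I_T = M*IL - M*I0*(exp(V_T_total/(N*n*Vt)) - 1),
    i.e. each cell sees V_module / N and each parallel string adds its current.
    I0 is chosen here so that the single-cell open-circuit voltage is near 0.66 V.
    """
    v_cell = v_module / n_series
    i_cell = i_l - i_0 * (np.exp(v_cell / (n_ideality * V_T)) - 1.0)
    return m_parallel * i_cell

# Scaling of the datasheet quantities quoted in the table (illustrative cell values)
cell = {"Isc": 9.35, "Imp": 8.97, "Voc": 0.657, "Vmp": 0.526}
N, M = 72, 1
module = {"Isc": cell["Isc"] * M, "Imp": cell["Imp"] * M,
          "Voc": cell["Voc"] * N, "Vmp": cell["Vmp"] * N}
print(module)  # roughly Voc ~ 47 V, Vmp ~ 38 V, Isc ~ 9.35 A for a 72-cell module

# Sample the module IV curve between short circuit and open circuit
v = np.linspace(0.0, module["Voc"], 200)
i = module_current(v, n_series=N, m_parallel=M)
```

Plotting i against v reproduces the stretched IV curve described in the text: the shape is that of a single cell, with the voltage axis scaled by N and the current axis scaled by M.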
The roles of maturation delay and vaccination on the spread of Dengue virus and optimal control Lin-Fei Nie ORCID: orcid.org/0000-0001-5960-80781 & Ya-Nan Xue1 A mathematical model of Dengue virus transmission between mosquitoes and humans, incorporating a control strategy of imperfect vaccination and a vector maturation delay, is proposed in this paper. Using analytical techniques, we obtain the threshold conditions for the global attractiveness of two disease-free equilibria and prove the existence of a positive equilibrium for this model. Further, we carry out a sensitivity analysis of the threshold conditions. Additionally, using the Pontryagin maximum principle, we obtain the optimal control strategy for the disease. Finally, numerical simulations are presented to verify the correctness of the theoretical results, the feasibility of a vaccination control strategy, and the influences of the controlling parameters on the control and elimination of this disease. Theoretical results and numerical simulations show that the vaccination rate and the effectiveness of the vaccine are two key factors for the control of Dengue spread, and that the development and manufacture of Dengue vaccines is therefore of central importance. Dengue is a vector-borne disease that transcends international borders and is the most important arboviral disease currently threatening human populations. By current estimates, approximately 50-100 million people are affected by the Dengue virus each year [1]. The Dengue virus is transmitted to humans by mosquitoes, mostly Aedes aegypti and Aedes albopictus. As far back as 1981, Jousset [2] studied the susceptibility of geographic Aedes aegypti strains to the Dengue-2 virus and their ability to transmit it. To better describe the spread of the Dengue virus, many scholars have investigated Dengue transmission using mathematical models (see [3–5] and the references therein). In particular, Esteva and Vargas [6] proposed a Dengue virus transmission model, analyzed the global stability of its equilibria, and discussed control measures for the vector population in terms of threshold conditions. Further, Wang and Zhao [7] proposed a nonlocal and time-delayed reaction-diffusion model of the Dengue virus, and established threshold dynamics in terms of the basic reproduction number. In addition, Garba et al. [8] proposed a deterministic model for the transmission dynamics of a strain of Dengue, which allows for transmission by exposed humans and mosquitoes. They proved the existence and local asymptotic stability of the disease-free equilibrium when the basic reproduction number is less than unity. The authors also examined the phenomenon of backward bifurcation. How to control and eliminate the Dengue virus has long been a central question. Until now, the only available strategy for controlling the spread of the Dengue virus has been control of the vector. Despite combined community participation and vector control, together with active disease surveillance and insecticides, the examples of successful Dengue prevention and control on a national scale are few [1]. Moreover, as vector resistance increases, the intervals between insecticide treatments become shorter, and, as a result of the high costs of development and registration and the low returns, only a few insecticide products are on the market [9]. Considering these realities, vaccination could be a more effective way to protect against the Dengue virus [10]. It is a well-known fact that vaccination has already been successfully applied to control and eliminate various infectious diseases.
Particularly, in 1760, the Swiss mathematician Daniel Bernoulli published an investigation on the impact of immunization with cowpox. Then, the means of protecting people from infection through immunization began to be widely used. In addition, the method has already successfully decreased both mortality and morbidity [11–13]. In fact, Dengue vaccines were already under development during the 1940s. In recent years, however, with the increase in Dengue infections and a serious need for faster development of a vaccine [14], the development of Dengue vaccines has accelerated remarkably. To guide public support for vaccine development in both industrialized and developing countries, economic analyses have been conducted, including previous cost-effectiveness studies of Dengue [15–17]. In these analyses, the cost of the disease burden is compared with the cost of a vaccination campaign, and the authors conclude that Dengue vaccines, as a means of intervention, have a potential economic benefit. On the other hand, the life cycle of mosquitoes consists of three successive aquatic juvenile phases (egg, larva and pupa) and one adult phase [18]. The duration of the development from egg to adult (1-2 weeks) is comparable to the average life span of an adult mosquito (about 3 weeks). The size of the mosquito population is strongly affected by temperature, and the number of female mosquitoes changes accordingly due to seasonal variations [19, 20]. Therefore, it is vital to consider the maturation time of mosquitoes [21], that is, the length of the development from egg to adult mosquito, and its impact on the spread of the Dengue virus. Based on the above-mentioned information and the fact that Dengue vaccines are still imperfect, a delayed mathematical model of Dengue transmission dynamics between mosquitoes and humans, incorporating a control strategy of imperfect vaccination, is proposed in this paper, aiming to discuss the influences of vaccination and a maturation delay on controlling and eliminating the Dengue virus. The rest of the paper is structured as follows. Section 2 describes an imperfect vaccination model with the maturation time of mosquitoes, and the basic properties of this model are presented in Section 3. In Section 4, the threshold conditions and the existence and attractiveness of equilibria of the model are discussed. In Section 5, we investigate the sensitivity of our threshold conditions. In Section 6, we discuss the optimal control strategy for the disease. Finally, we give numerical simulations in Section 7, and present some concluding comments in Section 8. Model formulation In this section, we present a mathematical model to study the transmission dynamics of the Dengue virus. The model is based on a susceptible, infectious, recovered and vaccinated structure and describes the transmission process between humans and mosquitoes. Let \(S_{h}(t)\), \(I_{h}(t)\), \(R_{h}(t)\) and \(V_{h}(t)\) denote the numbers of susceptible (individuals who can contract the disease), infectious (individuals who are capable of transmitting the disease), resistant (individuals who have recovered and acquired immunity) and vaccinated (individuals who were vaccinated and are now immune) individuals at time t, respectively. Similarly, \(S_{m}(t)\) and \(I_{m}(t)\) represent the numbers of susceptible (mosquitoes able to contract the disease) and infectious (mosquitoes capable of transmitting the disease to humans) adult female mosquitoes at time t.
Here the total numbers of humans and mosquitoes are denoted by \(N_{h}(t)=S_{h}(t)+I_{h}(t)+V_{h}(t)+R_{h}(t)\) and \(N_{m}(t)=S_{m}(t)+I_{m}(t)\), respectively. Since the development of mosquitoes from eggs to adults is density dependent, a Ricker type function is taken to ensure the birth rate into the adult mosquitoes. Additionally, let the positive constant τ be the maturation time of the mosquito, that is, the average time needed for an egg to develop into an adult mosquito. Therefore, the birth rate function of mosquitoes is taken as \(r_{m}N_{m}(t-\tau )e^{-d_{j}\tau}e^{-\alpha N_{m}(t)}\), where the meanings of the parameters can be found in Table 1. For more biological explanation, we refer to [22]. Table 1 Parameter interpretations, value ranges and sources of model ( 1 ) Additionally, for some potentially human infections (such as measles, hepatitis B, influenza, polio, etc.), there has been considerable focus on vaccinating newborns or infected individuals. Therefore, Dengue can be a serious candidate for this type of vaccination. Further, we suppose that a mass vaccination program may be initiated whenever there is an increase of the risk of an epidemic, and the vaccination may reduce but not completely eliminate susceptibility to infection, or the immunity, which is obtained by the vaccination process, is temporary. The new model for the transmission between humans and mosquitoes is given in the flowchart (Figure 1). Flowchart of the transmission of Dengue virus transmission between mosquitoes and humans. Based on these considerations, a mathematical model with maturation and imperfect vaccination can be described as $$ \textstyle\begin{cases} \frac{\mathrm{d} S_{h}(t)}{\mathrm{d} t}= \mu_{h}N_{h}+\theta V_{h}(t)- ( b \beta_{mh}\frac{I_{m}(t)}{N_{h}(t)}+\psi+\mu _{h} ) S_{h}(t), \\ \frac{\mathrm{d} V_{h}(t)}{\mathrm{d} t}= \psi S_{h}(t)- ( \sigma b \beta_{mh}\frac{I_{m}(t)}{N_{h}(t)}+\theta+\mu_{h} ) V_{h}(t), \\ \frac{\mathrm{d} I_{h}(t)}{\mathrm{d} t}= b\beta_{mh}\frac {I_{m}(t)}{N_{h}(t)} (S_{h}(t)+ \sigma V_{h}(t) )-(\eta_{h}+\mu _{h})I_{h}(t), \\ \frac{\mathrm{d} R_{h}(t)}{\mathrm{d} t}= \eta_{h} I_{h}(t)-\mu_{h} R_{h}(t), \\ \frac{\mathrm{d} S_{m}(t)}{\mathrm{d} t}= r_{m}S_{m}(t-\tau )e^{-d_{j}\tau}e^{-\alpha N_{m}(t)}-b \beta_{hm}\frac {I_{h}(t)}{N_{h}(t)}S_{m}(t)-d_{m}S_{m}(t) \\ \hphantom{\frac{\mathrm{d} S_{m}(t)}{\mathrm{d} t}=}{} +(1-q)r_{m}I_{m}(t-\tau)e^{-d_{j}\tau}e^{-\alpha N_{m}(t)}, \\ \frac{\mathrm{d} I_{m}(t)}{\mathrm{d} t}= qr_{m}I_{m}(t-\tau )e^{-d_{j}\tau}e^{-\alpha N_{m}(t)}+b\beta_{hm}\frac {I_{h}(t)}{N_{h}(t)}S_{m}(t)-d_{m}I_{m}(t). \end{cases} $$ The meanings of parameters of model (1) are shown in Table 1. The initial conditions of model (1) are given as $$ \begin{gathered} S_{h}(0)>0,\quad\quad V_{h}(0)\geq0, \quad\quad I_{h}(0)\geq0, \quad\quad R_{h}(0)\geq0,\\ S_{m}(\theta)=\phi_{s}(\theta)>0, \quad\quad I_{m}(\theta)= \phi_{i}(\theta)>0, \end{gathered} $$ where \(\phi_{s}(\theta)\) and \(\phi_{i}(\theta)\) are positive continuous functions for \(\theta\in[-\tau,0]\). In this section, the basic dynamical features of model (1) will be explored. First, from the first to the fourth equation of this model, we have \(\mathrm{d} N_{h}/\mathrm{d} t=0\). Then the total number of humans \(N_{h}(t):=N_{h}\) is constant. 
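For readers who want to experiment with model (1) numerically, the right-hand side can be coded directly. The following is a minimal sketch (not the authors' code): the parameter values are placeholders rather than those of Table 1, and the fixed-step Euler scheme with a history buffer is only an illustrative way of handling the maturation delay τ.

```python
import numpy as np

# Placeholder parameter values; see Table 1 of the paper for meanings and ranges
P = dict(mu_h=0.00004, theta=0.01, psi=0.6, sigma=0.2, eta_h=0.2,
         b=0.8, beta_mh=0.375, beta_hm=0.375,
         r_m=5.0, d_j=0.05, d_m=0.02, alpha=1e-5, q=0.01, tau=10.0,
         N_h=480000.0)

def rhs(y, y_lag, p):
    """Right-hand side of model (1); y_lag is the state at time t - tau."""
    S_h, V_h, I_h, R_h, S_m, I_m = y
    S_m_lag, I_m_lag = y_lag[4], y_lag[5]
    N_m = S_m + I_m
    birth = p["r_m"] * np.exp(-p["d_j"] * p["tau"]) * np.exp(-p["alpha"] * N_m)
    f_h = p["b"] * p["beta_mh"] * I_m / p["N_h"]   # force of infection on humans
    f_m = p["b"] * p["beta_hm"] * I_h / p["N_h"]   # force of infection on mosquitoes
    dS_h = p["mu_h"] * p["N_h"] + p["theta"] * V_h - (f_h + p["psi"] + p["mu_h"]) * S_h
    dV_h = p["psi"] * S_h - (p["sigma"] * f_h + p["theta"] + p["mu_h"]) * V_h
    dI_h = f_h * (S_h + p["sigma"] * V_h) - (p["eta_h"] + p["mu_h"]) * I_h
    dR_h = p["eta_h"] * I_h - p["mu_h"] * R_h
    dS_m = birth * (S_m_lag + (1 - p["q"]) * I_m_lag) - (f_m + p["d_m"]) * S_m
    dI_m = p["q"] * birth * I_m_lag + f_m * S_m - p["d_m"] * I_m
    return np.array([dS_h, dV_h, dI_h, dR_h, dS_m, dI_m])

def simulate(y0, p, t_end=365.0, dt=0.01):
    """Euler integration with a constant history on [-tau, 0]."""
    lag_steps = int(round(p["tau"] / dt))
    hist = [np.asarray(y0, dtype=float)] * (lag_steps + 1)
    y = np.asarray(y0, dtype=float)
    for _ in range(int(t_end / dt)):
        y = y + dt * rhs(y, hist[0], p)
        hist.append(y.copy())
        hist.pop(0)
    return y

print(simulate([470000, 0, 100, 0, 1.4e6, 50], P))
```

More careful experiments would use a dedicated delay-differential-equation solver, but this is enough to reproduce the qualitative behaviour discussed in the numerical section below.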
Further, it follows from model (1) that the total number of adult female mosquitoes \(N_{m}(t)=S_{m}(t)+I_{m}(t)\) satisfies $$ \frac{\mathrm{d} N_{m}(t)}{\mathrm{d} t}=r_{m}N_{m}(t-\tau )e^{-d_{j}\tau}e^{-\alpha N_{m}(t)}-d_{m}N_{m}(t) $$ with the initial condition $$ N_{m}(\theta)=\phi_{s}(\theta)+ \phi_{i}(\theta)>0 \quad\text{for all } \theta\in[-\tau,0]. $$ $$ N_{m}^{*}=\frac{1}{\alpha}\ln \biggl( \frac{r_{m}e^{-d_{j}\tau }}{d_{m}} \biggr) , $$ it follows that \(N_{m}^{*}\) is a unique positive equilibrium of equation (3), and it exists if and only if \(r_{m}e^{-d_{j}\tau}>d_{m}\). Now, we define a threshold condition for the mosquito population $$ \mathcal{R}_{01}=\frac{r_{m}e^{-d_{j} \tau}}{d_{m}}. $$ In fact, \(\mathcal{R}_{01}\) is the threshold condition of the existence of a positive equilibrium with model (3). The following theorem describes the global dynamical behavior of model (3). Solution \(N_{m}(t)\) of model (3) with the initial condition (4) is positive for any finite time \(t\geq0\). Further, if \(\mathcal{R}_{01}\leq1\), then solution \(N_{m}(t)\) is bounded and the trivial equilibrium \(N_{m}=0\) is globally asymptotically stable; if \(\mathcal{R}_{01}>1\), then \(h< N_{m}(t)< H\) for any \(t\geq0\), where $$ h=\frac{1}{2}\min \Bigl\{ \min_{\theta\in[-\tau, 0]} \bigl\{ \phi _{s}(\theta)+\phi_{i}(\theta) \bigr\} , N_{m}^{*} \Bigr\} ,\quad\quad H=1+\max \Bigl\{ N_{m}^{*}, \max _{\theta\in[-\tau, 0]} \bigl\{ \phi _{s}(\theta)+\phi_{i}( \theta) \bigr\} \Bigr\} . $$ Moreover, model (3) has a unique positive equilibrium \(N_{m}^{*}\) which is globally asymptotically stable. Noting that \(N_{m}(\theta)>0\) for any \(\theta\in[-\tau, 0]\), if there is a \(t^{*}>0\) such that \(N_{m}(t^{*})=0\) and \(N_{m}(t)>0\) for all \(t< t^{*}\), then \(\mathrm{d} N_{m}(t^{*})/\mathrm{d} t\leq0\). It follows from (3) that $$ \frac{\mathrm{d} N_{m}(t^{*})}{\mathrm{d} t}=r_{m}N_{m} \bigl(t^{*}-\tau \bigr)e^{-d_{j}\tau}>0, $$ which leads to a contradiction with \(\mathrm{d} N_{m}(t^{*})/\mathrm{d} t\leq0\). Hence \(N_{m}(t)>0\) for any finite time \(t\geq0\). Now we prove (i). Assume that \(\mathcal{R}_{01}\leq1\). We claim that \(N_{m}(t)\leq H\). Otherwise, there is a \(t_{1}>0\) such that \(N_{m}(t_{1})=H\) and \(N_{m}(t)< H\) for any \(t< t_{1}\). Then we have \(\mathrm{d} N_{m}(t_{1})/\mathrm{d} t\geq0\). From (3), we have $$ \begin{aligned} \frac{\mathrm{d} N_{m}(t_{1})}{\mathrm{d} t}&=r_{m}N_{m}(t_{1}-\tau )e^{-d_{j}\tau}e^{-\alpha H}-d_{m}H\leq H \bigl(r_{m}e^{-d_{j}\tau }e^{-\alpha H}-d_{m} \bigr) \\ &< H \bigl(r_{m}e^{-d_{j}\tau}-d_{m} \bigr)\leq0, \end{aligned} $$ which leads to a contradiction. Hence \(N_{m}(t)\leq H\) for any \(t\geq0\). Next we turn to (ii). Assume that \(\mathcal{R}_{01}>1\). We claim that \(h< N_{m}(t)< H\) for any \(t\geq0\). Otherwise, there is a \(t_{2}>0\) such that \(N_{m}(t_{2})=H\) and \(N_{m}(t)< H\) for any \(t< t_{2}\). From (3), we have $$ \frac{\mathrm{d} N_{m}(t_{2})}{\mathrm{d} t}=r_{m}N_{m}(t_{2}-\tau )e^{-d_{j}\tau}e^{-\alpha H}-d_{m}H< H \bigl(r_{m}e^{-d_{j}\tau}e^{-\alpha H}-d_{m} \bigr)\leq0. $$ The last inequality is true since \(H>N_{m}^{*}\). But the definition of \(t_{2}\) implies that \(\mathrm{d} N_{m}(t_{2})/\mathrm{d} t\geq0\), a contradiction. Hence \(N_{m}(t)< H\) for any \(t\geq0\). Similarly, we assume there is a \(\tilde{t}>0\) such that \(N_{m}(\tilde{t})=h\) and \(N_{m}(t)>h\) for any \(t<\tilde{t}\), and \(\mathrm{d} N_{m}(\tilde {t})/\mathrm{d} t\leq0\). 
Again from (3), since \(h\leq N_{m}^{*}\), we have $$ \frac{\mathrm{d} N_{m}(\tilde{t})}{\mathrm{d} t}=r_{m}N_{m}(\tilde {t}- \tau)e^{-d_{j}\tau}e^{-\alpha h}-d_{m}h> h \bigl(r_{m}e^{-d_{j}\tau }e^{-\alpha h}-d_{m} \bigr)\geq0, $$ which leads to a contradiction. Therefore, \(h< N_{m}(t)< H\) for any \(t\geq0\). In order to prove that the global stability of equilibria \(N_{m}=0\) and \(N_{m}^{*}\), we denote the right hand side of (3) as functions \(f(N_{m}(t)\text{ and }N_{m}(t-\tau))\). Since \(\partial f(x, y)/\partial y>0\), it follows that (3) generates an eventually strongly monotone semiflow on the space \(\mathcal{C}\) of a continuous function on \([-\tau, 0]\) with the usual pointwise ordering (see Smith [24]). If \(\mathcal{R}_{01}\leq1\), there is only a single trivial equilibrium \(N_{m}=0\). By Theorem 2.3.1 in [24], the equilibrium \(N_{m}=0\) is globally asymptotically stable. If \(\mathcal{R}_{01}>1\), there are two equilibria \(N_{m}=0\) and \(N_{m}^{*}\). By Theorem 2.3.2 in [24], solutions of (3) converge to one of two equilibria. To eliminate the possibility of \(N_{m}=0\) as an attractor, we linearize the system about \(N_{m}=0\) and use Theorem A2 in [25] to conclude that it is unstable when \(\mathcal{R}_{01}>1\). Hence \(N_{m}(t)\rightarrow N_{m}^{*}\) as \(t\rightarrow\infty\). □ Existence and attractiveness of equilibria We define, firstly, a threshold condition for the full model (1) as follows: $$ \mathcal{R}_{02}=\frac{(\theta+\sigma\psi+\mu_{h})b^{2} \beta_{mh}\beta_{hm}N_{m}^{*}}{d_{m}(1-q)(\mu_{h}+\eta_{h})(\theta +\psi+\mu_{h})N_{h}}. $$ In fact, the value of \(\mathcal{R}_{02}\) determines the existence of a positive equilibrium of model (1). For model (1), we get two nontrivial disease-free equilibria, that is, the disease-free equilibrium without mosquitoes \(E_{01}\) for \(\mathcal{R}_{01}\leq1\), and the disease-free equilibrium with mosquitoes \(E_{02}\) for \(\mathcal{R}_{01}>1\) and \(\mathcal {R}_{02}<1\), where \(E_{01}\) and \(E_{02}\) are given by $$ \begin{gathered} E_{01}= \biggl( \frac{(\theta+\mu_{h})N_{h}}{\psi+\theta+\mu_{h}}, \frac{\psi N_{h}}{\psi+\theta+\mu_{h}}, 0, 0, 0, 0 \biggr) ,\\ E_{02}= \biggl( \frac{(\theta+\mu_{h})N_{h}}{\psi+\theta+\mu_{h}}, \frac{\psi N_{h}}{\psi+\theta+\mu_{h}}, 0, 0, N_{m}^{*}, 0 \biggr) . 
\end{gathered} $$ Further, model (1) admits endemic equilibria \(E^{*}(S_{h(1,2)}^{*}, V_{h(1,2)}^{*}, I_{h(1,2)}^{*}, R_{h(1,2)}^{*}, S_{m(1,2)}^{*}, I_{m(1,2)}^{*})\) for \(\mathcal{R}_{01}>1\) and \(\mathcal {R}_{02}>1\), where $$ \begin{aligned} &S_{h(1,2)}^{*}= \biggl( \frac{\sigma b^{2}\beta_{mh}\beta _{hm}N_{m}^{*}I_{h(1,2)}^{*}}{\psi N_{h}[d_{m}(1-q)N_{h}+b\beta _{hm}I_{h(1,2)}^{*}]}+\frac{\mu_{h}+\theta}{\psi} \biggr) V_{h(1,2)}^{*}, \qquad R_{h(1,2)}^{*}=\frac{\eta_{h}}{\mu _{h}}I_{h(1,2)}^{*}, \\ &I_{m(1,2)}^{*}=\frac{b\beta _{hm}N_{m}^{*}I_{h(1,2)}^{*}}{d_{m}(1-q)N_{h}+b\beta_{hm}I_{h(1,2)}^{*}}, \quad\quad S_{m(1,2)}^{*}=N_{m}^{*}-I_{m(1,2)}^{*}, \\ &V_{h(1,2)}^{*}=\frac{(\eta_{h}+\mu_{h})\psi I_{h(1,2)}^{*}}{(\sigma M+\theta+\mu_{h}+\sigma\psi)M}, \qquad M=\frac{b^{2}\beta_{mh}\beta _{hm}N_{m}^{*}I_{h(1,2)}^{*}}{N_{h}[d_{m}(1-q)N_{h}+b\beta _{hm}I_{h(1,2)}^{*}]}, \end{aligned} $$ and \(I_{h(1,2)}^{*}\) is obtained by the solutions \(I_{h}\) of the following equation: $$ AI_{h}^{2}+BI_{h}+C=0 $$ $$\begin{aligned}& A=b^{2}\beta_{hm}^{2}( \mu_{h}+\eta_{h}) \bigl[ \bigl(\sigma b\beta _{mh}N_{m}^{*}+(\theta+\mu_{h})N_{h} \bigr) \bigl(b\beta_{mh}N_{m}^{*}+(\psi + \mu_{h})N_{h} \bigr)+\theta\psi N_{h}^{2} \bigr], \\& \begin{aligned} B&= b\beta_{hm}N_{h} \bigl\{ -\sigma\mu_{h}b^{3} \beta_{mh}^{2}\beta_{hm}N_{m}^{*2} +2d_{m}(1-q)\mu_{h}(\mu_{h}+\eta_{h}) (\psi+\theta+\mu _{h})N_{h}^{2} \\ &\quad{} +b\beta_{mh}N_{m}^{*}N_{h} \bigl[d_{m}(1-q) (\mu_{h}+\eta_{h}) \bigl(\theta+ \mu _{h}+\sigma(\psi+\mu_{h}) \bigr) -\mu_{h}b \beta_{hm}(\theta+\sigma\psi+\mu_{h}) \bigr] \bigr\} , \end{aligned} \\& \begin{aligned} C&= \mu_{h}d_{m}(1-q)N_{h}^{3} \bigl[d_{m}(1-q) (\mu_{h}+\eta_{h}) (\psi + \theta+\mu_{h})N_{h} -(\theta+\sigma\psi+ \mu_{h})b^{2}\beta_{mh}\beta_{hm}N_{m}^{*} \bigr] \\ & =\mu_{h}d_{m}(1-q)N_{h}^{3} \frac{1}{d_{m}(1-q)(\mu_{h}+\eta _{h})(\psi+\theta+\mu_{h})N_{h}}(1-\mathcal{R}_{02}). \end{aligned} \end{aligned}$$ It is obvious that \(A>0\) for positive parameters, and \(\mathcal {R}_{02}\geq1\) if and only if \(C\leq0\). Further, if \(B>0\) and \(C>0\), there is no positive root of equation (5); if \(B<0\) and \(B^{2}-4AC>0\), there are two positive roots of equation (5); if \(C<0\), there is a unique positive root of equation (5). According to the above-mentioned discussion, we have a conclusion as follows. If \(\mathcal{R}_{01}\leq1\), then model (1) has a unique disease-free equilibrium without mosquitoes \(E_{01}\); if \(\mathcal {R}_{01}>1\) and \(\mathcal{R}_{02}<1\), then model (1) has a unique disease-free equilibrium with mosquitoes \(E_{02}\). Furthermore, if \(\mathcal{R}_{01}>1\), the following statements are valid: if \(C\leq0\), then model (1) has a unique endemic equilibrium; if \(B<0\) and \(B^{2}-4AC>0\), then model (1) has two endemic equilibria; if \(B>0\) and \(C\geq0\), then model (1) has no endemic equilibrium. Noting that \(C\leq0\) if and only if \(\mathcal{R}_{02}\geq1\). It is clear from Theorem 2 (Case (i)) that the model has a unique endemic equilibrium if \(\mathcal{R}_{01}\geq1\) and \(\mathcal {R}_{02}>1\). Further, Case (ii) indicates the possibility of backward bifurcation (where a local asymptotically stable disease-free equilibrium co-exists with a locally asymptotically stable endemic equilibrium) in model (1) for \(\mathcal{R}_{01}\geq1\) and \(\mathcal{R}_{02}<1\). To check for this, the discriminant \(B^{2}-4AC\) is set to zero and solved for the critical value of \(\mathcal{R}_{02}\), denoted by \(\mathcal{R}_{02}^{c}\). 
Thus, backward bifurcation would occur for values of \(\mathcal{R}_{02}\) such that \(\mathcal{R}_{01}\geq 1\) and \(\mathcal{R}_{02}^{c}<\mathcal{R}_{02}<1\). To obtain the stability of the equilibria of model (1), we take out the variate of \(R_{h}(t)\) and linearize model (1) about equilibria \((S_{h}^{*},V_{h}^{*},I_{h}^{*},S_{m}^{*},I_{m}^{*})\) and we get the following Jacobian matrix: $$ J= \begin{pmatrix} a_{11}-\lambda&\theta&0&0&-b\beta_{mh}\frac{S_{h}^{*}}{N_{h}}\\ \psi&a_{22}-\lambda&0&0&-\sigma b\beta_{mh}\frac{V_{h}^{*}}{N_{h}}\\ b\beta_{mh}\frac{I_{m}^{*}}{N_{h}}&\sigma b\beta_{mh}\frac {I_{m}^{*}}{N_{h}}&-(\eta_{h}+\mu_{h})-\lambda&0&b\beta_{mh}\frac {S_{h}^{*}+\sigma V_{h}^{*}}{N_{h}}\\ 0&0&-b\beta_{hm}\frac{S_{m}^{*}}{N_{h}}&a_{44}-\lambda&a_{45}\\ 0&0&b\beta_{hm}\frac{S_{m}^{*}}{N_{h}}&a_{54}&a_{55}-\lambda \end{pmatrix} , $$ $$\begin{aligned} &a_{11}=- \biggl( b\beta_{mh} \frac{I_{m}^{*}}{N_{h}}+\psi+\mu_{h} \biggr) ,\qquad a_{22}=- \biggl( \sigma b\beta_{mh}\frac {I_{m}^{*}}{N_{h}}+\theta+ \mu_{h} \biggr) , \\ &a_{45}=r_{m}e^{-(d_{j}\tau+\alpha N_{m}^{*})} \bigl[ (1-q)e^{-\lambda \tau}- \alpha N_{m}^{*}+\alpha qI_{m}^{*} \bigr] , \\ &a_{54}=b\beta_{hm}\frac{I_{h}(t)}{N_{h}}-\alpha qI_{m}^{*}(t-\tau )e^{-(d_{j}\tau+\alpha N_{m}^{*}}, \\ &a_{44}=r_{m}e^{-(d_{j}\tau+\alpha N_{m}^{*})} \bigl( e^{-\lambda\tau }- \alpha N_{m}^{*}+\alpha qI_{m}^{*} \bigr) -d_{m}, \\ &a_{55}=qr_{m}e^{-(d_{j}\tau+\alpha N_{m}^{*})} \bigl( e^{-\lambda\tau }- \alpha I_{m}^{*} \bigr) -d_{m}, \end{aligned}$$ and λ is an eigenvalue. We obtain the characteristic equation about \(E_{01}\) according to the Jacobian matrix of model (1) $$ \begin{aligned} F(\lambda)&= ( \lambda+\eta_{h}+ \mu_{h} ) \bigl[ \lambda ^{2}+(\theta+\psi+2 \mu_{h})\lambda+(\theta+\psi+\mu_{h})\mu _{h} \bigr] \\ &\quad{} \times \bigl( \lambda+d_{m}-qr_{m}e^{-(d_{j}+\lambda)\tau} \bigr) \bigl( \lambda+d_{m}-r_{m}e^{-(d_{j}+\lambda)\tau} \bigr) . \end{aligned} $$ To continue, we recall Theorem 4.7 in [26], which states that \(\lambda=A+Be^{-\lambda\tau}\) has a root with positive real part if \(A+B>0\), and has no roots with nonnegative real parts if \(A+B<0\) and \(B\geq A\). By this theorem, we see that all roots of the above characteristic equation have negative real parts for \(\mathcal {R}_{01}<1\). Therefore, \(E_{01}\) is asymptotically stable. Now, on the globally asymptotically stable disease-free equilibrium without mosquitoes \(E_{01}\) of model (1), we have Theorem 3. If \(\mathcal{R}_{01}<1\), then model (1) has a unique disease-free equilibrium without mosquitoes \(E_{01}\), which is globally asymptotically stable. It obvious that \(\lim_{t\rightarrow\infty} S_{m}(t)=\lim_{t\rightarrow\infty} I_{m}(t)=0\) for \(\mathcal{R}_{01}<1\) depending on Theorem 1. So we merely prove that $$ \lim_{t\rightarrow\infty} S_{h}(t)= \frac{(\theta+\mu _{h})N_{h}}{(\psi+\theta+\mu_{h})}, \qquad\lim_{t\rightarrow \infty} V_{h}(t)= \frac{\psi N_{h}}{(\psi+\theta+\mu_{h})}, $$ $$ \lim_{t\rightarrow\infty}I_{h}(t)=\lim_{t\rightarrow\infty} R_{h}(t)=0. $$ Due to \(\lim_{t\rightarrow\infty} I_{m}(t)=0\), for a small enough positive constant ϵ, there is a constant \(T>0\) such that \(I_{m}(t)<\epsilon\), for all \(t>T\). Then, from the third equation of model (1), we have $$ \frac{\mathrm{d} I_{h}(t)}{\mathrm{d} t}< b\beta_{mh}\epsilon-(\mu _{h}+ \eta_{h})I_{h}(t), \quad\text{for all }t>T. $$ By the comparison theorem and the arbitrariness of ϵ, we have \(\lim_{t\rightarrow\infty} I_{h}(t)=0\). 
Further, it follows that \(\lim_{t\rightarrow\infty} R_{h}(t)=0\). From the first and second equations of model (1), we have $$ \begin{gathered} \mu_{h}N_{h}+\theta V_{h}(t)- \biggl( b \beta_{mh}\frac{\epsilon }{N_{h}}+\psi+\mu_{h} \biggr) S_{h}(t)\\ \quad \leq\frac{\mathrm{d}S_{h}(t)}{\mathrm{d}t}\leq\mu_{h}N_{h}+ \theta V_{h}(t)-(\psi+\mu_{h})S_{h}(t) \end{gathered} $$ $$ \psi S_{h}(t)- \biggl( \sigma b\beta_{mh} \frac{\epsilon}{N_{h}}+\theta +\mu_{h} \biggr) V_{h}(t)\leq \frac{\mathrm{d}V_{h}(t)}{\mathrm {d}t}\leq\psi S_{h}(t)-(\theta+\mu_{h})V_{h}(t). $$ Then it is easy to see that (6) is valid, that is, \(E_{01}\) is globally attractive. This completes the proof. □ Finally, we give a conclusion on the global attractiveness of the disease-free equilibrium with mosquitoes \(E_{02}\) of model (1). Supposing that \(\mathcal{R}_{01}>1\). If $$ \mathcal{R}_{02}^{*}:=\frac{b^{2}\beta_{mh}\beta_{hm}N_{m}^{*}}{d_{m}(1 -qe^{d_{m}\tau})(\mu_{h}+\eta_{h})N_{h}}< 1, $$ then model (1) has a unique disease-free equilibrium with mosquitoes \(E_{02}\), which is globally attractive. From the expressions of \(\mathcal{R}_{02}\) and \(\mathcal{R}_{02}^{*}\), we get \(\mathcal{R}_{02}<1\) for \(\mathcal{R}_{02}^{*}<1\). Therefore, model (1) has a unique disease-free equilibrium with mosquitoes \(E_{02}\) for \(\mathcal{R}_{02}^{*}<1\) and \(\mathcal {R}_{01}>1\). From the sixth equation of model (1) we get $$ \frac{\mathrm{d} I_{m}(t)}{\mathrm{d} t}\geq-d_{m}I_{m}(t). $$ By integrating the above inequality from \(t-\tau\) to t, we obtain \(I_{m}(t-\tau)\leq e^{d_{m}\tau}I_{m}(t)\). Then $$ \textstyle\begin{cases} \frac{\mathrm{d} I_{m}(t)}{\mathrm{d} t}\leq d_{m} ( qe^{d_{m}\tau }-1 ) I_{m}(t)+b\beta_{hm} \frac{N_{m}^{*}}{N_{h}}I_{h}(t), \\ \frac{\mathrm{d} I_{h}(t)}{\mathrm{d} t}\leq b\beta _{mh}I_{m}(t)-( \eta_{h}+\mu_{h})I_{h}(t). \end{cases} $$ Consider the following auxiliary system: $$ \textstyle\begin{cases} \frac{\mathrm{d} u(t)}{\mathrm{d} t}=d_{m} ( qe^{d_{m}\tau }-1 ) u(t)+b\beta_{hm}\frac{N_{m}^{*}}{N_{h}}v(t), \\ \frac{\mathrm{d} v(t)}{\mathrm{d} t}=b\beta_{mh}u(t)-(\eta_{h}+ \mu_{h})v(t). \end{cases} $$ It is obvious that the equilibrium \((0, 0)\) always exists. The characteristic equation of model (7) about \((0, 0)\) is $$ I(\lambda)=\lambda^{2}+(b_{2}-a_{1}) \lambda-(b_{2}a_{1}+a_{2}b_{1})=0, $$ where \(a_{1}=d_{m} ( qe^{d_{m}\tau}-1 ) \), \(a_{2}=b\beta _{hm}N_{m}^{*}/N_{h}\), \(b_{1}=b\beta_{mh}\) and \(b_{2}=\eta_{h}+\mu _{h}\). To obtain two negative solutions about (8), it is required that $$ \lambda_{1}+\lambda_{2}=a_{1}-b_{2}< 0, \quad \quad \lambda_{1}\cdot\lambda_{2}=-(b_{2}a_{1}+a_{2}b_{1})>0. $$ So we see that the equilibrium \((0, 0)\) of model (7) is globally asymptotically stable for \({\mathcal{R}_{02}^{*}<1}\). According to the above discussion and the comparison theorem of differential equations, we know that \(\lim_{t\rightarrow\infty} I_{m}(t)=0\) and \(\lim_{t\rightarrow\infty} I_{h}(t)=0\) for \(\mathcal {R}_{01}>1\) and \(\mathcal{R}_{02}^{*}<1\). Finally, in the light of Theorem 1, we get \(\lim_{t\rightarrow\infty }(S_{h}(t),I_{h}(t),V_{h}(t),R_{h}(t),S_{m}(t),I_{m}(t))=E_{02}\). This completes the proof. □ Obviously, \(qe^{d_{m}\tau}\approx q\) due to the small vertical transmission probability q according to existing literature, therefore \(\mathcal{R}_{02}\approx\mathcal{R}_{02}^{*}\). 
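The threshold quantities \(\mathcal{R}_{01}\), \(\mathcal{R}_{02}\) and \(\mathcal{R}_{02}^{*}\), and the root count of the quadratic (5) from Theorem 2, are easy to evaluate for a given parameter set. Here is a self-contained sketch; the parameter names follow the text, but the numerical values are placeholders, not the paper's Table 3.

```python
import numpy as np

def thresholds(p):
    """Return (R01, R02, R02*) as defined in the text, with N_m* = ln(R01)/alpha."""
    R01 = p["r_m"] * np.exp(-p["d_j"] * p["tau"]) / p["d_m"]
    N_m_star = np.log(R01) / p["alpha"] if R01 > 1 else 0.0
    common = p["b"] ** 2 * p["beta_mh"] * p["beta_hm"] * N_m_star
    denom = (p["mu_h"] + p["eta_h"]) * p["N_h"]
    R02 = ((p["theta"] + p["sigma"] * p["psi"] + p["mu_h"]) * common
           / (p["d_m"] * (1 - p["q"]) * denom * (p["theta"] + p["psi"] + p["mu_h"])))
    R02_star = common / (p["d_m"] * (1 - p["q"] * np.exp(p["d_m"] * p["tau"])) * denom)
    return R01, R02, R02_star

def count_endemic_equilibria(A, B, C):
    """Positive roots of A*I^2 + B*I + C = 0 with A > 0 (boundary cases ignored)."""
    if C < 0:                      # corresponds to R02 > 1: a unique endemic equilibrium
        return 1
    if B < 0 and B * B - 4 * A * C > 0:
        return 2                   # the backward-bifurcation regime
    return 0

params = dict(mu_h=0.00004, theta=0.01, psi=0.6, sigma=0.2, eta_h=0.2,
              b=0.8, beta_mh=0.375, beta_hm=0.375,
              r_m=5.0, d_j=0.05, d_m=0.02, alpha=1e-5, q=0.01, tau=10.0,
              N_h=480000.0)
print(thresholds(params))
```

Scanning τ or ψ through such a helper is a quick way to explore the regimes of Theorems 3 and 4 before running full simulations.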
To discuss the stability of the endemic equilibrium \(E^{*}\), we write the corresponding characteristic equation for \(E^{*}\) as follows: $$\begin{aligned} H(\lambda)&= \bigl[ \lambda+d_{m} \bigl( 1+\alpha N_{m}^{*}-e^{-\lambda \tau} \bigr) \bigr] \biggl\{ (\lambda+\mu_{h}) \biggl\{ \biggl[ (\lambda+ \mu_{h}+\eta_{h}) \biggl( \lambda+d_{m} \bigl( 1-qe^{-\lambda\tau} \bigr) \\ &\quad{} +b\beta_{hm}\frac{I_{h(1,2)}^{*}}{N_{h}} \biggr) -b^{2} \beta_{mh}\beta_{hm}\frac{S_{m(1,2)}^{*}(S_{h(1,2)}^{*}+\sigma V_{h(1,2)}^{*})}{N_{h}^{2}} \biggr] \biggl[ \lambda^{2}+ \biggl( (1+\sigma )b\beta_{mh} \frac{I_{m(1,2)}^{*}}{N_{h}} \\ &\quad{} +\psi+\theta+\mu_{h} \biggr) \lambda+b \beta_{mh} \frac {I_{m(1,2)}^{*}}{N_{h}} \biggl( \sigma b\beta_{mh} \frac {I_{m(1,2)}^{*}}{N_{h}}+\theta+ \sigma\psi+\mu_{h} \biggr) \biggr] \\ &\quad{} +\sigma b^{3}\beta_{mh}^{2} \beta_{hm}\frac {V_{h(1,2)}^{*}I_{m(1,2)}^{*}S_{m(1,2)}^{*}}{N_{h}^{3}} \biggl[ \sigma \biggl( \lambda +b \beta_{mh}\frac{I_{m(1,2)}^{*}}{N_{h}} +\psi \biggr) +\theta+\mu_{h} \biggr] \biggr\} \\ &\quad{} -\mu_{h}b\beta_{mh}\frac{I_{m(1,2)}^{*}}{N_{h}}(\lambda+\mu _{h}+\eta_{h}) \biggl[ \lambda +d_{m} \bigl( 1-qe^{-\lambda\tau} \bigr) +b\beta_{hm}\frac {I_{h(1,2)}^{*}}{N_{h}} \biggr] \\ &\quad{} \times \biggl[ \lambda+\sigma b\beta_{mh}\frac {I_{m(1,2)}^{*}}{N_{h}}+ \theta+ \sigma\psi+\mu_{h} \biggr] \biggr\} =0. \end{aligned}$$ Nevertheless, the study of solving this transcendental equation (9) is very difficult. And though we get the conditions by math software, it is not difficult to imagine that the conditions are very complex. Of course, it is very difficult to make a rational interpretation on biology. So the solving of (9) is insignificant, and we omit it. Description of sensitivity analysis Sensitivity indices enable us to measure the relative change in a state variable when a model parameter changes. The normalized forward sensitivity index of a variable to a model parameter is the ratio of the relative change in the variable to the relative change in the parameter. When the variable is a differentiable function of the parameter, the sensitivity index may be alternatively defined using partial derivatives. Sensitivity index [27] The normalized forward sensitivity index of a variable, u, that depends differentiably on a parameter, p, is defined as $$ \gamma_{p}^{u}:=\frac{\partial u}{\partial p}\times \frac{p}{u}. $$ Table 2 represents sensitivity indices of model parameters to \(\mathcal{R}_{02}\) and \(\mathcal{R}_{02}^{*}\), as the values of parameters for model (1) are fixed as: \(b=0.8\), \(\beta_{mh}=\beta_{hm}=0.375\), \(\mu_{h}=0.00004\), \(\eta _{h}=0.2\), \(\tau=10\), \(d_{m}=0.02\), \(q=0.01\), \(\sigma=0.2\), \(\theta =0.01\) and \(\psi=0.6\). Table 2 Sensitivity indices of \(\pmb{\mathcal{R}_{02}}\) and \(\pmb{\mathcal {R}_{02}^{*}}\) to the parameter values for model ( 1 ) Note from Table 2 that \(\mathcal{R}_{02}\) and \(\mathcal {R}_{02}^{*}\) all show the greatest sensitivities to the biting rate b, followed by the transmission probabilities \(\beta_{mh}\) and \(\beta_{hm}\). Accordingly, a reduction of 1% in the biting rate b decreases \(\mathcal{R}_{02}\) by 2%, which equals the decrease in \(\mathcal{R}_{02}^{*}\) when identically varying the biting rate parameter b; further, a reduction of 1% in the transmitting rate \(\beta_{mh}\) or \(\beta_{hm}\) decreases \(\mathcal{R}_{02}\) and \(\mathcal{R}_{02}^{*}\) both by 1%. 
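Before going through the remaining entries of Table 2, note that such indices can be recomputed directly from the definition above with a central finite difference. The sketch below is self-contained; the toy function is only a stand-in used to check the helper, and for the model one would instead pass a function returning \(\mathcal{R}_{02}\) or \(\mathcal{R}_{02}^{*}\) in terms of the parameter dictionary.

```python
def sensitivity_index(u, params, name, rel_step=1e-6):
    """Normalized forward sensitivity index (du/dp) * (p/u) by central differences."""
    base = u(params)
    h = abs(params[name]) * rel_step
    up = u({**params, name: params[name] + h})
    um = u({**params, name: params[name] - h})
    return (up - um) / (2 * h) * params[name] / base

# Sanity check on a function with known indices: for u = b^2 * beta_mh, the index
# with respect to b is exactly 2 and with respect to beta_mh is 1, matching the
# pattern reported for the biting rate and transmission probabilities in Table 2.
toy_params = {"b": 0.8, "beta_mh": 0.375}
u = lambda p: p["b"] ** 2 * p["beta_mh"]
print(sensitivity_index(u, toy_params, "b"), sensitivity_index(u, toy_params, "beta_mh"))
```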
Next, a reduction of 1% in the waning rate θ decreases \(\mathcal{R}_{02}\) by 0.06051%, a reduction of 1% in the infection rate of vaccinated members σ decreases \(\mathcal{R}_{02}\) by 0.92280%, and a reduction of 1% of the vaccinated fraction of the susceptible class ψ increases \(\mathcal{R}_{02}\) by 0.06075%. Lastly, a reduction of 1% in the vertical transmission probability q decreases \(\mathcal{R}_{02}\) and \(\mathcal{R}_{02}^{*}\) by 0.01010% and 0.01237%, respectively; a reduction of 1% in the maturation time of the mosquito τ decreases \(\mathcal{R}_{02}^{*}\) by 0.0025%. Obviously, the sensitivity index of the infection rate of vaccinated members σ exceeds that of the fraction ψ of the susceptible class that was vaccinated, though the value of σ (\(\sigma=0.2\)) is smaller than the value of ψ (\(\psi=0.6\)). The sensitivity index of the fraction ψ of the susceptible class that was vaccinated is substantial near the sensitivity index of the waning rate θ for the values above. Then the sensitivity index of the vertical transmission probability q is very small. This is perhaps related to the small value of q (\(q=0.01\)). The sensitivity level of τ is the smallest, that is, the maturation time of the mosquito has less effect on the variation of \(\mathcal{R}_{02}^{*}\). Analysis of optimal vaccination Optimal control techniques are of great use in developing the optimal strategies to prevent the spread of the Dengue virus. To face the challenges of obtaining an optimal control strategy, we make the following notational conventions. Suppose \(t_{f}\) and △ are given constants and define an admissible control set \(U=\{\psi(t) \text{ is measurable},0\leq\psi(t)\leq\triangle, t\in[0, t_{f}]\}\). Here \(\psi(t)\) is called a control variable, to reduce or even eradicate the disease, and to find a suitable compromise between minimal number of the infected individuals and the costs of the campaign. The objective function is given by $$ \min J[\psi]= \int_{0}^{t_{f}}{ \bigl[ \gamma_{D}I_{h}(t)^{2}+ \gamma _{V}\psi(t)^{2} \bigr] \mathrm{d}t}, $$ $$ \textstyle\begin{cases} \frac{\mathrm{d} S_{h}(t)}{\mathrm{d} t}= \mu_{h}N_{h}+\theta V_{h}(t)- ( b \beta_{mh}\frac{I_{m}(t)}{N_{h}(t)}+\psi(t)+\mu _{h} ) S_{h}(t), \\ \frac{\mathrm{d} V_{h}(t)}{\mathrm{d} t}= \psi(t) S_{h}(t)- ( \sigma b \beta_{mh}\frac{I_{m}(t)}{N_{h}(t)}+\theta+\mu_{h} ) V_{h}(t), \\ \frac{\mathrm{d} I_{h}(t)}{\mathrm{d} t}= b\beta_{mh}\frac {I_{m}(t)}{N_{h}(t)} (S_{h}(t)+ \sigma V_{h}(t) )-(\eta_{h}+\mu _{h})I_{h}(t), \\ \frac{\mathrm{d} R_{h}(t)}{\mathrm{d} t}= \eta_{h} I_{h}(t)-\mu_{h} R_{h}(t), \\ \frac{\mathrm{d} S_{m}(t)}{\mathrm{d} t}= r_{m}S_{m}(t-\tau )e^{-d_{j}\tau}e^{-\alpha N_{m}(t)}-b \beta_{hm}\frac {I_{h}(t)}{N_{h}(t)}S_{m}(t)-d_{m}S_{m}(t) \\ \hphantom{\frac{\mathrm{d} S_{m}(t)}{\mathrm{d} t}=}{}+(1-q)r_{m}I_{m}(t-\tau)e^{-d_{j}\tau}e^{-\alpha N_{m}(t)}, \\ \frac{\mathrm{d} I_{m}(t)}{\mathrm{d} t}= qr_{m}I_{m}(t-\tau )e^{-d_{j}\tau}e^{-\alpha N_{m}(t)}+b\beta_{hm}\frac {I_{h}(t)}{N_{h}(t)}S_{m}(t)-d_{m}I_{m}(t), \end{cases} $$ with the initial condition (2). Here, positive constants \(\gamma_{D}\) and \(\gamma_{V}\) represent the weights of the costs of treatment of infected individuals and vaccination, respectively. Since the state variables are continuous, the solutions of the control system are bounded. Also, the objective function is convex in the control \(\psi(t)\). Hence, the existence of the optimal control comes as a direct result from the Filippove-Cesari theorem [28–31]. 
We, therefore, have the following result. There is an optimal control \(\psi^{*}(t)\) such that \(J(\psi ^{*}(t))=\min J(\psi(t))\), subject to the control system (12) with the initial condition (2). In order to find the optimal solution, we find that the Lagrangian and Hamiltonian methods serve for the optimal control problem (11) with (12). In fact, the Lagrangian of the optimal problem is given by $$ \widetilde{L}(I_{h}, \psi)=\gamma_{D}I_{h}(t)^{2}+ \gamma_{V}\psi(t)^{2}. $$ To find the optimal control function for the optimal control problem, we define the corresponding Hamiltonian as $$ \begin{aligned}[b] &H(S_{h}, V_{h}, I_{h}, R_{h}, S_{m}, I_{m}, \lambda, \psi) \\ &\quad =\gamma_{D}I_{h}^{2}+\gamma_{V} \psi^{2}+\lambda_{1} \biggl[ \mu _{h}N_{h}+ \theta V_{h}(t)- \biggl( b\beta_{mh}\frac {I_{m}(t)}{N_{h}}+\psi+ \mu_{h} \biggr) S_{h}(t) \biggr] \\ &\quad\quad{} +\lambda_{2} \biggl[ \psi S_{h}(t)- \biggl( \sigma b \beta_{mh}\frac {I_{m}(t)}{N_{h}}+\theta+\mu_{h} \biggr) V_{h}(t) \biggr] \\ &\quad\quad{} +\lambda_{3} \biggl[ b\beta_{mh}\frac{I_{m}(t)}{N_{h}} \bigl(S_{h}(t)+\sigma V_{h}(t) \bigr)-(\eta_{h}+ \mu_{h})I_{h}(t) \biggr] \\ &\quad\quad{} +\lambda_{4} \bigl( \eta_{h} I_{h}(t)- \mu_{h} R_{h}(t) \bigr) +\lambda_{5} \biggl[ r_{m}S_{m}(t-\tau)e^{-d_{j}\tau}e^{-\alpha N_{m}(t)} \\ &\quad\quad {}-b\beta_{hm}\frac {I_{h}(t)}{N_{h}}S_{m}(t)-d_{m}S_{m}(t)+(1-q)r_{m}I_{m}(t- \tau )e^{-d_{j}\tau}e^{-\alpha N_{m}(t)} \biggr] \\ &\quad\quad{} +\lambda_{6} \biggl[ qr_{m}I_{m}(t- \tau)e^{-d_{j}\tau}e^{-\alpha N_{m}(t)}+b\beta_{hm}\frac{I_{h}(t)}{N_{h}}S_{m}(t)-d_{m}I_{m}(t) \biggr] , \end{aligned} $$ where \(\lambda_{i}(\cdot)\), \(i=1, \ldots, 6\), are the adjoint functions to be determined suitably. Now, let us derive a necessary condition for the optimal control strategy by means of the Pontryagin maximum principle [32]. Similar proof methods can also be found in [33–35] and the references therein. 
Given an optimal control variable \(\psi^{*}(t)\) and the corresponding solution \((\widetilde{S}_{h}(\cdot) , \widetilde{V}_{h}(\cdot) , \widetilde{I}_{h}(\cdot) , \widetilde{R}_{h}(\cdot) , \widetilde {S}_{m}(\cdot) , \widetilde{I}_{m}(\cdot))\) of state system (12), there are adjoint functions \(\lambda_{i}(\cdot)\), \(i=1, \ldots, 6\), satisfying $$ \textstyle\begin{cases} \frac{\mathrm{d}\lambda_{1}(t)}{\mathrm{d}t}= ( \lambda_{1}-\lambda _{3})b\beta_{mh} \frac{\widetilde{I}_{m}(t)}{N_{h}}+(\lambda_{1}-\lambda_{2})\psi ^{*}+\lambda_{1}\mu_{h}, \\ \frac{\mathrm{d}\lambda_{2}(t)}{\mathrm{d}t}= (\lambda_{2}-\lambda _{1})\theta +( \lambda_{2}-\lambda_{3})\sigma b\beta_{mh} \frac{\widetilde {I}_{m}(t)}{N_{h}}+\lambda_{2}\mu_{h}, \\ \frac{\mathrm{d}\lambda_{3}(t)}{\mathrm{d}t}= -2\gamma_{D}\widetilde{I}_{h}(t) +( \lambda_{3}-\lambda_{4})\eta_{h}+ \lambda_{3}\mu_{h}+(\lambda _{5}- \lambda_{6})b\beta_{hm} \frac{\widetilde{S}_{m}(t)}{N_{h}}, \\ \frac{\mathrm{d}\lambda_{4}(t)}{\mathrm{d}t}= \lambda_{4}\mu_{h}, \\ \frac{\mathrm{d}\lambda_{5}(t)}{\mathrm{d}t}= (\lambda_{5}-\lambda_{6}) ( b \beta_{hm}\frac{\widetilde{I}_{h}(t)}{N_{h}}-\alpha qr_{m} \widetilde{I}_{m}(t-\tau)e^{-d_{j}\tau}e^{-\alpha\widetilde {N}_{m}(t)} ) \\ \hphantom{\frac{\mathrm{d}\lambda_{5}(t)}{\mathrm{d}t}=}{} +\lambda_{5} [ r_{m}e^{-d_{j}\tau}e^{-\alpha\widetilde {N}_{m}(t)} (\alpha\widetilde{N}_{m}(t-\tau)-1 )+d_{m} ] \\ \hphantom{\frac{\mathrm{d}\lambda_{5}(t)}{\mathrm{d}t}=}{} -\Phi_{[0, t_{f}-\tau]}(t)r_{m}e^{-(d_{j}\tau+\alpha\widetilde {N}_{m}(t))}\lambda_{5}(t+ \tau), \\ \frac{\mathrm{d}\lambda_{6}(t)}{\mathrm{d}t}= (\lambda_{1}-\lambda _{3})b \beta_{mh} \frac{\widetilde{S}_{h}(t)}{N_{h}}+(\lambda_{2}- \lambda_{3})\sigma b\beta_{mh}\frac{\widetilde{V}_{h}(t)}{N_{h}}+\lambda _{5}r_{m}e^{-d_{j}\tau}e^{-\alpha\widetilde{N}_{m}(t)} [\alpha \widetilde{S}_{m}(t-\tau)\hspace{-20pt} \\ \hphantom{\frac{\mathrm{d}\lambda_{6}(t)}{\mathrm{d}t}=}{} +(1-q) (\alpha\widetilde{I}_{m}(t-\tau)-1 ) ]+\lambda _{6} [qr_{m}e^{-d_{j}\tau}e^{-\alpha\widetilde{N}_{m}(t)} (\alpha \widetilde{I}_{m}(t-\tau)-1 )+d_{m} ] \\ \hphantom{\frac{\mathrm{d}\lambda_{6}(t)}{\mathrm{d}t}=}{}-\Phi_{[0, t_{f}-\tau]}(t)r_{m}e^{-(d_{j}\tau+\alpha\widetilde {N}_{m}(t))} [(1-q) \lambda_{5}(t+\tau)+q\lambda_{6}(t+\tau) ], \end{cases} $$ and the transversality conditions \(\lambda_{i}(t_{f})=0\), \(i=1, \ldots, 6\). Here \(\Phi_{[0, t_{f}-\tau]}(t)=1\) if \(t\in[0, t_{f}-\tau]\). Otherwise \(\Phi_{[0, t_{f}-\tau]}(t)=0\). Furthermore, $$ \psi^{*}(t)=\min \biggl\{ \triangle, \max \biggl\{ 0, \frac{(\lambda _{1}-\lambda_{2})\widetilde{S}_{h}(t)}{2\gamma_{V}} \biggr\} \biggr\} . $$ To determine the adjoint equations and transversality conditions, we use the Hamiltonian (13). 
We obtain the adjoint system as follows: $$ \begin{aligned} &\frac{\mathrm{d}\lambda_{1}(t)}{\mathrm{d}t}=\frac{\partial H}{\partial S_{h}}- \Phi_{[0, t_{f}-\tau]}(t)\frac{\partial H}{\partial S_{h}(t-\tau)}(t+\tau), \\ &\frac{\mathrm{d}\lambda_{2}(t)}{\mathrm{d}t}=\frac{\partial H}{\partial V_{h}}-\Phi_{[0, t_{f}-\tau]}(t) \frac{\partial H}{\partial V_{h}(t-\tau)}(t+\tau), \\ &\frac{\mathrm{d}\lambda_{3}(t)}{\mathrm{d}t}=\frac{\partial H}{\partial I_{h}}-\Phi_{[0, t_{f}-\tau]}(t) \frac{\partial H}{\partial I_{h}(t-\tau)}(t+\tau), \\ &\frac{\mathrm{d}\lambda_{4}(t)}{\mathrm{d}t}=\frac{\partial H}{\partial R_{h}}-\Phi_{[0, t_{f}-\tau]}(t) \frac{\partial H}{\partial R_{h}(t-\tau)}(t+\tau), \\ &\frac{\mathrm{d}\lambda _{5}(t)}{\mathrm{d}t}=\frac{\partial H}{\partial S_{m}}-\Phi_{[0, t_{f}-\tau]}(t) \frac{\partial H}{\partial S_{m}(t-\tau)}(t+\tau), \\ &\frac{\mathrm{d}\lambda_{6}(t)}{\mathrm{d}t}=\frac{\partial H}{\partial I_{m}}-\Phi_{[0, t_{f}-\tau]}(t) \frac{\partial H}{\partial I_{m}(t-\tau)}(t+\tau). \end{aligned} $$ Thus, the adjoint system can be rewritten as system (14). By the optimal conditions, we have $$ \frac{\partial H}{\partial\psi}\bigg|_{\psi=\psi^{*}(t)}=2\gamma _{V}\psi^{*}(t)-( \lambda_{1}-\lambda_{2})\widetilde{S}_{h}(t)=0. $$ It follows that Considering the feature of the admissible control set U, we obtain (15). Thus we complete the proof. □ Numerical simulation and discussion We perform some numerical simulations to illustrate the main theoretical results above for stability of equilibria using the Runge-Kutta method with the software MATLAB. According to the possible values of model (1) from Table 1, the values of the model parameters are listed in Table 3. We choose the parameters \(N_{m}^{*}\approx1452300\) and \(N_{h}\approx480000\). Table 3 The parameter values for model ( 1 ) First, from the values of the parameters in Table 3, it is easy to see that \(\mathcal{R}_{01}\approx0.9330<1\). Thus from the theoretical conclusion of Theorem 3, we know that the disease-free equilibrium without mosquito \(E_{01}\) of model (1) is globally attractive. That is, infectious individuals and the total number of mosquitoes are all decreasing to zero eventually for any initial value. The plots in Figures 2(a) and 2(b) coincide with the theoretical result. We choose, however, model parameters \(\tau=10\), \(b=0.5\), \(\beta_{mh}=0.1\) and \(\beta _{hm}=0.1\), and other parameters are fixed as in Table 3. It is easy to calculate \(\mathcal{R}_{01}\approx5.9336>1\) and \(\mathcal {R}_{02}^{*}\approx0.4591<1\). Then the conditions of Theorem 4 are valid. Therefore, the disease-free equilibrium with mosquitoes \(E_{02}\) is globally attractive. Theoretical result and numerical simulations in Figures 3(a) and 3(b) imply that infectious individuals and infectious mosquitoes are decreasing to zero eventually, whereas the number of susceptible mosquitoes are not decreasing to zero. The global attractiveness of the disease-free equilibrium without mosquitoes \(\pmb{E_{01}}\) of model ( 1 ), where \(\pmb{\mathcal{R}_{01}\approx0.9330<1}\) . Further, letting \(\tau=5\), \(b=1\) and \(\theta=0.375\), while other parameters are fixed as in Table 3, we get \(\mathcal {R}_{01}\approx37.7369>1\) and \(\mathcal{R}_{02}\approx1.3698>1\) by direct calculation. The plots in Figures 4(a) and 4(b) show that the infected classes (including individuals and mosquitoes) have an obvious explosion in the early phase. 
Additionally, from Figure 4(a), we also notice the fact that the number of susceptible individuals directly determines the strength and time of Dengue outbreaks, as more susceptible individuals correspond to more violent and earlier outbreaks. Similar results also can be found in Figure 4(b). It really shows that immunization of susceptible individuals is an effective strategy to control outbreaks of the Dengue virus. The global attractiveness of the disease-free equilibrium with mosquitoes \(\pmb{E_{02}}\) of model ( 1 ), where \(\pmb{\mathcal {R}_{01}\approx5.9336>1}\) and \(\pmb{\mathcal{R}_{02}^{*}\approx0.4591<1}\) . Next, we consider how the maturation delay τ and vaccinated fraction ψ affect the prevention and control of the Dengue virus. We fix \(b=1\) and τ to be 25, 20, 15, 12 and 10, and other parameters are fixed as in Table 3. Obviously, the plots in Figure 5(a) show that the maturation time directly determines the scale of the mosquito population. This confirms that the change of weather plays an important role in the spread of mosquito-transmitted infectious diseases. Further, to study the effects of the vaccinated fraction of ψ, we fix \(\tau=5\), \(b=1\), and ψ to be 0.1, 0.2, 0.3, 0.4, 0.5 and 0.6, and other parameters are fixed as in Table 3. Figure 5(b) indicates that the number of infected individuals is falling fast with the increase of the vaccinated fraction ψ. These facts imply that we can prevent the spread of the Dengue virus by adjusting the vaccination rate ψ. The influences of the maturation delay and the vaccinated fraction to eliminate Dengue: (a) \(\pmb{b=1}\) , \(\pmb{\tau=25}\) , 20, 15, 12, and 10; (b) \(\pmb{\tau=5}\) , \(\pmb{b=1}\) , \(\pmb{\psi=0.1}\) , 0.2, 0.3, 0.4, 0.5, and 0.6. Next, we simulate the effects of the waning rate θ and the infection rate of vaccinated members σ for eliminating Dengue. We choose \(\tau=5\), \(b=1\), and other parameters are fixed as in Table 3. From the plots in Figure 6(a), we clearly see that the number of infected humans reaches a peak at day ≈126, and the amplitude of the peak is large. Further, both the amplitude and the peaking time vary with the waning rate of immunity θ. That is, the peaking time decreases and the amplitude of the peak decreases as θ reduces. In addition, as the infection rate of vaccinated members σ increases, the peaking time decreases and the amplitude of the peak decreases. This is shown in Figure 6(b). Numerical simulations demonstrate that the validity period and the effectiveness of vaccination are two key factors to control the spread of the Dengue virus. The influences of the waning rate θ and the infection rate of vaccinated members σ on the elimination of Dengue, where \(\pmb{\tau=5}\) , \(\pmb{b=1}\) , and other parameters are fixed as in Table 3 : (a) \(\pmb{\theta=1/20}\) , \(\pmb{1/50}\) , \(\pmb{1/100}\) , \(\pmb{1/200}\) , and \(\pmb{1/300}\) ; (b) \(\pmb{\sigma=0.05}\) , 0.1, 0.15, 0.2, and 0.25. Finally, we also simulate the relationships of ψ and σ, and ψ and θ. The plots in Figure 7(a) show that, due to the high rate of vaccines losing effect, though the vaccinated rate is high, the Dengue virus can outbreak in a short span of time. Further, we notice that, as the rate of loss of vaccine effectiveness decreases, though the vaccination rate decreases, the number of infected humans is kept in a lower range. 
Numerical simulations indicate that the improvement of the rate of loss of vaccine effectiveness is more effective than the improvement of the vaccination rate for controlling Dengue. Meanwhile, Figure 7(b) also implicates that the improvement of the period of validity of the vaccine is more effective than the improvement of the vaccination rate for controlling Dengue. Theoretical results and numerical simulations show that the development of highly effective vaccines is the most effective method to control the spread of the disease. The relationships of ψ and σ , and ψ and θ : (a) \(\pmb{\psi=0.6}\) and \(\pmb{\sigma=0.25}\) , \(\pmb{\psi=0.5}\) and \(\pmb{\sigma=0.2}\) , \(\pmb{\psi=0.4}\) and \(\pmb{\sigma=0.15}\) , \(\pmb{\psi=0.3}\) and \(\pmb{\sigma =0.1}\) , \(\pmb{\psi=0.2}\) and \(\pmb{\sigma=0.05}\) ; (b) \(\pmb{\psi=0.1}\) and \(\pmb{\theta =1/365}\) , \(\pmb{\psi=0.2}\) and \(\pmb{\theta=1/200}\) , \(\pmb{\psi=0.3}\) and \(\pmb{\theta =1/100}\) , \(\pmb{\psi=0.4}\) and \(\pmb{\theta=1/50}\) , \(\pmb{\psi=0.5}\) and \(\pmb{\theta=1/20}\) . In this paper, we propose a mathematical model to describe Dengue virus transmission between mosquitoes and humans, where imperfect vaccination and vector maturation delay are introduced. The notation used in our mathematical model includes the compartment \(V_{h}\), which represents the group of human population that is vaccinated, in order to distinguish the resistance obtained through vaccination and the one achieved by disease recovery. By using some analytical skills, the dynamical behavior of this model is discussed. This includes the global attractiveness of two disease-free equilibria, the existence of a positive equilibrium, the sensitivity analysis of threshold conditions, and the optimal control strategy for the disease. In addition, numerical simulations are also carried out to verify the correctness of the theoretical results and the feasibility of the vaccination control strategy. Theoretical results and numerical simulations show that the vaccination rate and effectiveness of the vaccine are two key factors for control of the spread of Dengue. It is well known that there are four distinct serotypes of Dengue virus (DEN1, DEN2, DEN3 and DEN4), according to clinical data collected during the past years. Therefore, one person in an endemic area can suffer from four Dengue infections during his lifetime, one with each serotype. Epidemiological studies [36] support the hypothesis that recovered0 people can be re-infected with a different serotype, and face an increased risk of developing Dengue hemorrhagic fever and Dengue shock syndrome. In recent publications, some multi-strain Dengue fever transmission models have been discussed (see [37–40] and the references therein). However, all individuals who are capable of transmitting the disease are in one class in our model for the purpose of mathematical analysis. Therefore, for a more detailed understanding of the transmission of four Dengue virus strains between mosquitoes and humans, we intend to study the influences of vaccination and maturation delay for a multi-strain Dengue model in the future. Dengue is a tropical vector-borne disease, difficult to prevent and manage. Researchers agree that the development of a vaccine for Dengue is a question of high priority. Recently, a novel method to fight mosquitoes is using a bacterium called Wolbachia, which exists in spiders and up to 75% of the insects, including ticks and mites. 
Stable Wolbachia strains in Aedes aegypti have also been established. And subsequent studies have shown that, very importantly, Wolbachia blocks the replication of Dengue viruses in mosquitoes. Thence, an increasing number of people realize that replacing the wild mosquitoes with Wolbachia infected mosquitoes is safer, and more feasible than vaccination to some extent. Based on this, there are many mathematical models (including discrete-time and continuous-time models) that are used to investigated the spread of Wolbachia infection (see [41–46] and the references therein). As future work we intend to compare the advantages and disadvantages of the two control strategies (Wolbachia and vaccination). It would also be interesting to investigate what happens if two control strategies are taken at the same time. Dengue and dengue haemorrhagic fever. WHO factsheets No. 117 (2009) Jousset, FX: Geographic Aedes aegypti strains and dengue-2 virus: susceptibility, ability to transmit to vertebrate and transovarial transmission. Ann. Inst. Pasteur., Virol. 132, 357-370 (1981) Aldila, D, Götz, T, Soewono, E: An optimal control problem arising from a dengue disease transmission model. Math. Biosci. 242, 9-16 (2013) Kooi, BW, Maira, A, Nico, S: Analysis of an asymmetric two-strain dengue model. Math. Biosci. 248, 128-139 (2014) Coutinho, FAB, Burattini, MN, Lopez, LF, Massad, E: Threshold conditions for a non-autonomous epidemic system describing the population dynamics of dengue. Bull. Math. Biol. 68, 2263-2282 (2006) Esteva, L, Vargas, C: Analysis of a dengue disease transmission model. Math. Biosci. 150, 131-151 (1998) Wang, WD, Zhao, XQ: A nonlocal and time-delayed reaction-diffusion model of dengue transmission. SIAM J. Appl. Math. 71, 147-168 (2011) Garba, SM, Gumel, AB, Bakar, MRA: Backward bifurcations in dengue transmission dynamics. Math. Biosci. 215, 11-25 (2008) Keeling, MJ, Rohani, P: Modeling Infectious Diseases in Humans and Animals p. 415. Princeton University Press, Princeton (2008) Rodrigues, HS, Monteiro, MTT, Torres, DFM: Vaccination models and optimal control strategies to dengue. Math. Biosci. 247, 1-12 (2014) Scherer, A, McLean, A: Mathematical models of vaccination. Br. Med. Bull. 62, 187-199 (2002) Kar, TK, Jana, S: Application of three controls optimally in a vector-borne disease: a mathematical study. Commun. Nonlinear Sci. Numer. Simul. 18, 2868-2884 (2013) Nelson, OO: Determination of optimal vaccination strategies using an orbital stability threshold from periodically driven systems. J. Math. Biol. 68, 763-784 (2014) Murrel, S, Butler, SCW: Review of dengue virus and the development of a vaccine. Biotechnol. Adv. 29, 239-247 (2011) Clark, DV, Mammen, MPJ, Nisalak, A, Puthimethee, V, Endy, TP: Economic impact of dengue fever/dengue hemorrhagic fever in Thailand at the family and population levels. Am. J. Trop. Med. Hyg. 72, 786-791 (2005) Shepard, DS, et al.: Cost-effectiveness of a pediatric dengue vaccine. Vaccine 22, 1275-1280 (2004) Suaya, JA, et al.: Cost of dengue cases in eight countries in the Americas and Asia: a prospective study. Am. J. Trop. Med. Hyg. 80, 846-855 (2009) http://www.cdc.gov/ncidod/dvbid/westnile/ Bayoh, MN, Lindsay, SW: Effect of temperature on the development of the aquatic stages of Anopheles gambiae sensu stricto (Diptera: Culicidae). Bull. Entomol. Res. 93, 375-381 (2003) Shaman, J, Spiegelman, M, Cane, M, Stieglitz, M: A hydrologically driven model of swamp water mosquito population dynamics. Ecol. Model. 
194, 395-404 (2006) Zhao, XQ, Zou, XF: Threshold dynamics in a delayed SIS epidemic model. J. Math. Anal. Appl. 257, 282-291 (2001) Fan, GH, Wu, JH, Zhu, HP: The impact of maturation delay of mosquitoes on the transmission of West Nile virus. Math. Biosci. 228, 119-126 (2010) Tridip, S, Sourav, R, Joydev, C: A mathematical model of dengue transmission with memory. Commun. Nonlinear Sci. Numer. Simul. 22, 511-525 (2015) Smith, HJ: Monotone Dynamical Systems: An Introduction to the Theory of Competitive and Cooperative Systems. Mathematical Surveys and Monographs, vol. 41. Am. Math. Soc., Providence (1995) Cooke, KL, Driessche, P: Analysis of an SEIRS epidemic model with two delays. J. Math. Biol. 32, 240-260 (1996) Smith, HL: An Introduction to Delay Differential Equations with Applications to the Life Sciences, vol. 57. Springer, New York (2011) Nakul, C, James, MH, Jim, MC: Determining important parameters in the spread of malaria through the sensitivity analysis of a mathematical model. Bull. Math. Biol. 70, 1272-1296 (2008) Cesari, L: Optimization-Theory and Applications, Problems with Ordinary Differential Equations. Applications and Mathematics, vol. 17. Springer, New York (1983) Kamien, MI, Schwartz, NL: Dynamics Optimization: The Calculus of Variations and Optimal Control in Economics and Management. Elsevier, Amsterdam (2000) Nababan, S: A Filippov-type lemma for functions involving delays and its application to time-delayed optimal control problems. J. Optim. Theory Appl. 27(3), 357-376 (1979) Seierstad, A, Sydsaeter, K: Optimal Control Theory with Economic Applications. Elsevier, Amsterdam (1975) Fleming, WH, Rishel, RW: Deterministic and Stochastic Optimal Control. Springer, New York (1975) Bashier, EBM, Patidar, KC: Optimal control of an epidemiological model with multiple time delays. Appl. Math. Comput. 292, 47-56 (2017) Chen, LJ, Hattaf, K, Sun, JT: Optimal control of a delayed SLBS computer virus model. Physica A 427, 244-250 (2015) Zhu, QY, Yang, XF, Yang, LX, Zhang, CM: Optimal control of computer virus under a delayed model. Appl. Math. Comput. 218, 11613-11619 (2012) Stech, H, Williams, M: Alternate hypothesis on the pathogenesis of dengue hemorrhagic fever (DHF)/dengue shock syndrome (DSS) in dengue virus infection. Exp. Biol. Med. 233(4), 401-408 (2008) Chung, KW, Lui, R: Dynamics of two-strain influenza model with cross-immunity and no quarantine class. J. Math. Biol. 73(6), 1-23 (2016) Esteva, L, Vargas, C: Coexistence of different serotypes of dengue virus. J. Math. Biol. 46(1), 31-47 (2003) Feng, Z, Velasco-Hernández, JX: Competitive exclusion in a vector-host model for the dengue fever. J. Math. Biol. 35(5), 523-544 (1997) Hartley, LM, Donnelly, CA, Garnett, GP: The seasonal pattern of dengue in endemic areas: mathematical models of mechanisms. Trans. R. Soc. Trop. Med. Hyg. 96, 387-397 (2002) Zheng, B, Tang, MX, Yu, JS: Modeling Wolbachia spread in mosquitoes through delay differential equations. SIAM J. Appl. Math. 74(3), 743-770 (2014) Zheng, B, Tang, MX, Yu, JS, Qiu, JX: Wolbachia spreading dynamics in mosquitoes with imperfect maternal transmission. J. Math. Biol. 2017 (2017, in press). doi:10.1007/s00285-017-1142-5 Huang, MG, Tang, MX, Yu, JS: Wolbachia infection dynamics by reaction-diffusion equations. Sci. China Math. 58(1), 77-96 (2015) Huang, MG, Yu, JS, Hu, LC, Zhang, B: Qualitative analysis for a Wolbachia infection model with diffusion. Sci. China Math. 
59(7), 1249-1266 (2016) Hu, LC, Huang, MG, Tang, MX, Yu, JS, Zheng, B: Wolbachia spread dynamics in stochastic environments. Theor. Popul. Biol. 106, 32-34 (2015) Turelli, M, Bartom, NH: Deploying dengue-suppressing Wolbachia: robust models predict slow but effective spatial spread in Aedes aegypti. Theor. Popul. Biol. 115, 45-60 (2017) This research is supported by the Natural Science Foundation of Xinjiang (Grant No. 2016D01C046). College of Mathematics and Systems Science, Xinjiang University, Urumqi, 830046, P.R. China Lin-Fei Nie & Ya-Nan Xue Lin-Fei Nie Ya-Nan Xue Correspondence to Lin-Fei Nie. The authors declare that the study was realized in collaboration with the same responsibility. All authors read and approved the manuscript. Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. Nie, LF., Xue, YN. The roles of maturation delay and vaccination on the spread of Dengue virus and optimal control. Adv Differ Equ 2017, 278 (2017). https://doi.org/10.1186/s13662-017-1323-y DOI: https://doi.org/10.1186/s13662-017-1323-y Dengue vaccination maturation delay disease-free equilibrium and endemic equilibrium attractiveness and bifurcation sensitivity and optimal control
An uncertainty inequality involving $L^1$ norms
by Enrico Laeng and Carlo Morpurgo
We derive a sharp uncertainty inequality of the form \begin{equation*}\|x^{2} f\|_{1}^{} \|\xi \; \hat {f}\|_{2}^{2}\ge {\frac {\Lambda _{0}}{4\pi ^{2}}} \|f\|_{1}^{} \|f\|_{2}^{2},\end{equation*} with $\Lambda _{0}=0.428368\dots$. As a consequence of this inequality we derive an upper bound for the so-called Laue constant, that is, the infimum $\lambda _{0}^{}$ of the functional $\lambda (p)=4\pi ^{2} \|x^{2} p\|_{1}^{}\|x^{2} \hat p\|_{1}^{}/(p(0)\hat p(0))$, taken over all $p\ge 0$ with $\hat p\ge 0\;$ ($p\not \equiv 0$). Precisely, we obtain that $\lambda _{0}^{}\le 2\Lambda _{0}=0.85673673\dots ,$ which improves a previous bound of T. Gneiting.
References
William Beckner, Geometric inequalities in Fourier analysis, Essays on Fourier analysis in honor of Elias M. Stein (Princeton, NJ, 1991), Princeton Math. Ser., vol. 42, Princeton Univ. Press, Princeton, NJ, 1995, pp. 36–68. MR 1315541
Eric A. Carlen and Michael Loss, Sharp constant in Nash's inequality, Internat. Math. Res. Notices 7 (1993), 213–215. MR 1230297, DOI 10.1155/S1073792893000224
I. Dreier, On the uncertainty principle for positive definite densities, Z. Anal. Anwendungen 15 (1996), no. 4, 1015–1023. MR 1422654, DOI 10.4171/ZAA/743
Gerald B. Folland and Alladi Sitaram, The uncertainty principle: a mathematical survey, J. Fourier Anal. Appl. 3 (1997), no. 3, 207–238. MR 1448337, DOI 10.1007/BF02649110
T. Gneiting, On the uncertainty relation for positive definite probability densities, to appear in Statistics.
Elliott H. Lieb, Sharp constants in the Hardy-Littlewood-Sobolev and related inequalities, Ann. of Math. (2) 118 (1983), no. 2, 349–374. MR 717827, DOI 10.2307/2007032
Elliott H. Lieb and Michael Loss, Analysis, Graduate Studies in Mathematics, vol. 14, American Mathematical Society, Providence, RI, 1997. MR 1415616, DOI 10.2307/3621022
D. S. Mitrinović, J. E. Pečarić, and A. M. Fink, Inequalities involving functions and their integrals and derivatives, Mathematics and its Applications (East European Series), vol. 53, Kluwer Academic Publishers Group, Dordrecht, 1991. MR 1190927, DOI 10.1007/978-94-011-3562-7
Leonard Eugene Dickson, New First Course in the Theory of Equations, John Wiley & Sons, Inc., New York, 1939. MR 0000002
H.-J. Rossberg, Positive definite probability densities and probability distributions, J. Math. Sci. 76 (1995), no. 1, 2181–2197. MR 1356657, DOI 10.1007/BF02363232
Enrico Laeng, Affiliation: Dipartimento di Matematica, Politecnico di Milano, 20133 Milano, Italy. Email: [email protected]
Carlo Morpurgo, Affiliation: Dipartimento di Matematica, Università degli Studi di Milano, 20133 Milano, Italy. Email: [email protected]
Received by editor(s): February 13, 1998
Published electronically: May 17, 1999
Additional Notes: The second author was partially supported by NSF grant DMS-9622891.
Communicated by: Christopher D. Sogge
MSC (1991): Primary 26D15, 42A82
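As a quick plausibility check (not part of the paper), one can test the displayed inequality numerically on a single function. The sketch below assumes dimension one and the Fourier convention $\hat f(\xi)=\int f(x)e^{-2\pi i x\xi}\,dx$, under which the Gaussian $f(x)=e^{-\pi x^2}$ is its own transform; it only verifies that this particular $f$ satisfies the bound, and says nothing about sharpness.

```python
# Numerical sanity check of ||x^2 f||_1 * ||xi fhat||_2^2 >= (Lambda0 / 4 pi^2) * ||f||_1 * ||f||_2^2
# for the Gaussian test function f(x) = exp(-pi x^2).  Assumptions: one dimension and the
# 2*pi-in-the-exponent Fourier convention, so that fhat = f for this Gaussian.
import numpy as np
from scipy.integrate import quad

f = lambda x: np.exp(-np.pi * x**2)
fhat = f  # self-dual under the assumed convention

x2f_L1 = quad(lambda x: x**2 * f(x), -np.inf, np.inf)[0]             # ||x^2 f||_1
xifhat_L2sq = quad(lambda t: t**2 * fhat(t)**2, -np.inf, np.inf)[0]  # ||xi fhat||_2^2
f_L1 = quad(f, -np.inf, np.inf)[0]                                   # ||f||_1
f_L2sq = quad(lambda x: f(x)**2, -np.inf, np.inf)[0]                 # ||f||_2^2

Lambda0 = 0.428368
lhs = x2f_L1 * xifhat_L2sq
rhs = Lambda0 / (4 * np.pi**2) * f_L1 * f_L2sq
print(f"LHS = {lhs:.6f}, RHS = {rhs:.6f}, ratio = {lhs / rhs:.3f}")
```

For this Gaussian the ratio comes out around 1.17, so the test function comfortably satisfies the inequality without saturating it under these assumptions.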
Is there a limit as to how fast a black hole can grow? Astronomers find ancient black hole 12 billion times the size of the Sun. According to the article above, we observe this supermassive black hole as it was 900 million years after the formation of the universe, and scientists find its extreme specifications mysterious because of the relatively young age of the Universe at that time. Why would the 12 billion Solar Masses mass value be mysterious, unless there was a limit of sorts to the rate of mass consumption by a black hole? (naive point: Why would 900 million years not suffice for this much accumulation, keeping in mind that most supermassive stars which form black holes have life-spans of a few tens of millions of years at most?) general-relativity classical-mechanics black-holes astrophysics Hritik Narayan Hritik NarayanHritik Narayan The accretion of matter onto a compact object cannot take place at an unlimited rate. There is a negative feedback caused by radiation pressure. If a source has a luminosity $L$, then there is a maximum luminosity - the Eddington luminosity - which is where the radiation pressure balances the inward gravitational forces. The size of the Eddington luminosity depends on the opacity of the material. For pure ionised hydrogen and Thomson scattering $$ L_{Edd} = 1.3 \times 10^{31} \frac{M}{M_{\odot}}\ W$$ Suppose that material fell onto a black hole from infinity and was spherically symmetric. If the gravitational potential energy was converted entirely into radiation just before it fell beneath the event horizon, the "accretion luminosity" would be $$L_{acc} = \frac{G M_{BH}}{R}\frac{dM}{dt},$$ where $M_{BH}$ is the black hole mass, $R$ is the radius from which the radiation is emitted (must be greater than the Schwarzschild radius) and $dM/dt$ is the accretion rate. If we say that $L_{acc} \leq L_{Edd}$ then $$ \frac{dM}{dt} \leq 1.3 \times10^{31} \frac{M_{BH}}{M_{\odot}} \frac{R}{GM_{BH}} \simeq 10^{11}\ R\ kg/s \sim 10^{-3} \frac{R}{R_{\odot}}\ M_{\odot}/yr$$ Now, not all the GPE gets radiated, some of it could fall into the black hole. Also, whilst the radiation does not have to come from near the event horizon, the radius used in the equation above cannot be too much larger than the event horizon. However, the fact is that material cannot just accrete directly into a black hole without radiating; because it has angular momentum, an accretion disc will be formed and will radiate away lots of energy - this is why we see quasars and AGN -, thus both of these effects must be small numerical factors and there is some maximum accretion rate. To get some numerical results we can absorb our uncertainty as to the efficiency of the process and the radius at which the luminosity is emitted into a general ignorance parameter called $\eta$, such that $$L_{acc} = \eta c^2 \frac{dM}{dt}$$ i.e what fraction of the rest mass energy is turned into radiation. Then, equating this to the Eddington luminosity we have $$\frac{dM}{dt} = (1-\eta) \frac{1.3\times10^{31}}{\eta c^2} \frac{M}{M_{\odot}}$$ which gives $$ M = M_{0} \exp[t/\tau],$$ where $\tau = 4\times10^{8} \eta/(1-\eta)$ years (often termed the Salpeter (1964) growth timescale). The problem is that $\eta$ needs to be pretty big in order to explain the luminosities of quasars, but this also implies that they cannot grow extremely rapidly. 
I am not fully aware of the arguments that surround the work you quote, but depending on what you assume for the "seed" of the supermassive black hole, you may only have a few to perhaps 10 e-folding timescales to get you up to $10^{10}$ solar masses. I guess this is where the problem lies. $\eta$ needs to be very low to achieve growth rates from massive stellar black holes to supermassive black holes, but this can only be achieved in slow-spinning black holes, which are not thought to exist! A nice summary of the problem is given in the introduction of Volonteri, Silk & Dubus (2014). These authors also review some of the solutions that might allow Super-Eddington accretion and shorter growth timescales - there are a number of good ideas, but none has emerged as a front-runner yet. Rob JeffriesRob Jeffries 73k77 gold badges156156 silver badges255255 bronze badges $\begingroup$ Good answer. I would just note that "speculative" means that we aren't sure which details are right, not that we have no good ideas. Overcoming Eddington is easy in principle -- just break spherical symmetry, letting matter flow inward in some places and radiation flow outward elsewhere. It's not like accretion disks are spherically symmetric anyway. $\endgroup$ – user10851 Feb 26 '15 at 20:54 $\begingroup$ @ChrisWhite Of course. But most such get-outs are small numerical factors, not the order(s) of magnitude required. But you are correct - no shortage of ideas. $\endgroup$ – Rob Jeffries Feb 26 '15 at 21:03 $\begingroup$ The Eddington radiation would keep out gas. I don't see how it could stop heavy infalling objects, though--say an area of really massive stars that left behind neutron stars and black holes. Or even galactic mergers. $\endgroup$ – Loren Pechtel Feb 27 '15 at 3:05 $\begingroup$ @LorenPechtel You are right, though I have not heard that suggested as a solution. I think the problem with the idea is that you need most of the gas to have already turned into stars in the first 900 million years. This sounds like an even bigger problem than growing the black hole. It takes most galaxies much longer to assemble even a fraction of their gas into stars. $\endgroup$ – Rob Jeffries Feb 27 '15 at 7:06 $\begingroup$ @HritikNarayan Well you still have to grow the smaller black holes. So you have a slightly smaller individual growth timescale, but then you have to factor in some sort of collisional timescale. I don't think there is ever a problem in explaining one particular object in a variety of ways; but there are actually a population of these things. $\endgroup$ – Rob Jeffries Feb 27 '15 at 15:06 A 12 billion Solar mass black hole sounds massive, but actually it's not all that big. The radius of the event horizon (the Schwarzschild radius) is given by: $$ r_s = \frac{2GM}{c^2} $$ and for a 12 billion Solar mass black hole this works out to be about $3.5 \times 10^{13}$ m. This seems big, but it's only about 0.004 light years. For comparison, the radius of the Milky way is 50,000 to 60,000 light years, so the black hole is only about 0.000007% of the size of the Milky Way. Black holes can't just suck in stars. A star orbiting in a galaxy has an orbital angular momentum, and it can't dive into the centre of the galaxy where the black hole is unless it can shed that angular momentum. In fact, given what a small target a 0.004 light year black hole makes, a star would have to shed almost all its angular momentum to hit the event horizon. But shedding angular momentum is hard because angular momentum is conserved.
You can't just make angular momentum disappear, you have to transfer it to something else. Typically a star does this by interacting with other stars. Generally speaking, in an interaction the more massive star emerges with less angular momentum and the lighter star with a higher angular momentum. This process is known as dynamical friction. And all this takes time. The interactions are random and you need lots of them. Interactions are far more frequent in the central bulge of galaxies than out where we are in the suburbs, but even so the surprise is that there has been enough time for billions of stars to hit the black hole and merge with it. John RennieJohn Rennie $\begingroup$ Plausible, but not relevant. $\endgroup$ – Rob Jeffries Feb 26 '15 at 18:36 $\begingroup$ The reason I say it cannot be relevant is that we have known for some considerable length of time that it is possible for AGN/quasars to have huge luminosities that require them to be fed by huge amounts of mass at a rapid rate. So funnelling huge quantities of matter into a small volume does not appear to be a major obstacle. The real difficulty is in growing the black hole because the Eddington rate is smaller for smaller black holes and the seeds for SMBH cannot have been more than of order 1000 solar masses. Radiation pressure is likely what limits the growth of a black hole. $\endgroup$ – Rob Jeffries Feb 26 '15 at 19:30 $\begingroup$ Interesting answer, although I do agree with @RobJeffries $\endgroup$ – Hritik Narayan Feb 27 '15 at 7:23 $\begingroup$ A better comparison for the size of the event horizon might be about 240 AU (roughly eight times Neptune's semi-major axis). $\endgroup$ – Raidri Feb 27 '15 at 12:11
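To make the growth-timescale tension in the first answer concrete, here is a back-of-the-envelope sketch (mine, not from the thread) that plugs numbers into $M = M_{0}\exp[t/\tau]$ with the Salpeter time $\tau \approx 4\times10^{8}\,\eta/(1-\eta)$ years quoted above; the radiative efficiency $\eta = 0.1$ and the $10^3\,M_{\odot}$ seed are illustrative assumptions.

```python
# Eddington-limited (Salpeter) growth: M(t) = M0 * exp(t / tau),
# with tau ~ 4e8 * eta / (1 - eta) years, as derived in the answer above.
# eta and the seed mass below are assumed values for illustration only.
import numpy as np

eta = 0.1                      # assumed radiative efficiency
tau = 4e8 * eta / (1 - eta)    # e-folding (Salpeter) time in years
M_seed, M_final = 1e3, 1.2e10  # solar masses: assumed seed; final mass from the article

n_efolds = np.log(M_final / M_seed)
t_needed = n_efolds * tau
print(f"e-folding time ~ {tau:.2e} yr")
print(f"{n_efolds:.1f} e-folds needed -> ~{t_needed / 1e6:.0f} Myr of continuous "
      "Eddington-limited accretion, vs ~900 Myr available")
```

With these assumptions roughly 16 e-folds are required, i.e. on the order of 700 Myr of uninterrupted accretion at the Eddington limit, which is uncomfortably close to the ~900 Myr available; a larger $\eta$ (as expected for rapidly spinning black holes) only lengthens $\tau$ and makes the problem worse.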
Neoreactionary Society and Dysgenics On the one hand, neoreactionaries believe in reviving the traditional society (with a hereditary aristocracy and the ability to execute heretics) that governed much of the human race for the past ten thousand years. On the other hand, they also believe in eugenics, especially for the purpose of increasing brain power. On the gripping hand, the traditional-society era also saw shrinking brain sizes. For that matter, brain sizes having been coming back up in the less traditional society of the past few centuries. Wait a moment … I'm not sure why the shrinkage occurred. Is it because the most successful men were Pointy-Haired Bosses of yesteryear? (ObSF: the Sooners in Brightness Reef by David Brin) Is it because the brightest people were burnt at the stake? If neoreactionaries argue the same way leftists do, they will no doubt claim that traditional society wasn't traditional enough. Did This "Study" Actually Exist? The Answer The study discussed here (from there) turned out to exist. I still cannot find the actual publication but the reference seems detailed enough for it to exist. As far as I can tell, there does not seem to be a control group in the study. In addition, there were several dietary changes and they may have focused on on irrelevant ones. Changes in micronutrients sound more plausible than changes in food additives. After all, the students would later go home and eat Twinkies but the school meal might have been their only meal with real food. One of the authors was later involved in a study that did have a control group and found that micronutrients are important. I won't more than mention that, if I recall correctly, the use of mind-altering chemicals was dropping rapidly in New York City at the time. Maybe second-hand toke had an effect. I have started a log at Ask for Evidence but their system appears to be down now. The Latest in Ignorant Paranoia The NRC got around to releasing the radiation readings for Florida produce in the aftermath of the Fukushima meltdown over three years later and low-information environmentalists are going nuts: Fukushima fallout on vegetation in South Florida exceeded gov't notification limit by over 1,000% — Nearly triple the highest level reported anywhere on West Coast First, the notification level is not the same as the danger level. It merely means it's enough to be reasonably sure it's out of the ordinary. Second, it's only the I-131 level that was extraordinary. The total radioactivity was within the normal range. For example, the highest radioactivity level mentioned in the article was 1,220 pCi/kg whereas bananas have 3,520 pCi/kg and brazil nuts have up to 12,600 pCi/kg. Third, it went away in a few weeks. It's not enough to know what's going on in the event you're looking at; you also have to know what else is going on. Speaking of low-information paranoid activists … The news that the violators of a social contract in restraint of trade will now have the opportunity to pay Social-Security taxes is being misinterpreted by the Other Ignorant Army as implying that the aforesaid competitors will get Social-Security benefits immediately, even despite the fact that there is a ten-year delay (the deferred action on immigration might get past the courts but attempts to get around Social-Security law certainly won't) and even despite the fact that immigrants tend to be young. 
By the way, some nativists claim the immigrants are coming over to go on welfare and some claim they're taking our highly-skilled jobs. There should be a debate between the two sides but somehow I doubt if there will be one. Explaining the Ferguson Riots There's only one explanation of why the reaction to a slightly-dubious grand-jury decision was to loot stores and burn down neighborhoods: This story is emblematic of something I've noticed seems increasingly common in the 21st century---political movements that appear exceedingly stupid. On the other hand, it appears that this was stupidity imported from outside Ferguson. Come to think of it, this might explain why the prosecutors waited until nightfall to announce the decision instead of announcing it in narrow daylight. (In November, daylight isn't broad.) Two Bulshytt Claims That "white people" will never riot. Wrong. That law enforcement would never shoot a "white" person for no good reason. Wrong. Time to Buy Google? If it was time to sell Google a few years ago, was it time to buy Google more recently? On the other hand, I noticed they were much quieter about this than the initial announcement. (I only heard of it quite recently.) Maybe they're embarrassed at being sensible. An IQ Speculation What if looking down on others raises IQ? That might explain the odd swings in measured IQs for different groups. If we could just get everybody to look down at everybody else, then that would raise everybody's IQ! Except then we would be smart enough to realize how silly that method is and then become dim again. Maybe that's why the Flynn effect might have leveled off… Dance, Puppets, Dance … It should be obvious to the meanest intelligence that the purpose of the proposed amnesty is to ensure that people in the Republican base say things that can be spun as racist. Dance, puppets, dance … Meanwhile, one possible way to defuse this (besides charging admission) is to give the "green light" to immigration from areas with people who are particularly supportive of capitalism. The top three are Vietnam, Bangladesh, and South Korea. Promoting immigration from Vietnam will also help make Democrats look like idiots. How to Avoid Being Too Partisan Pick a couple of issues you feel strongly about, one on each side. When you're too close to being a partisan, remind yourself "There are the people who are wrong about X." Picking two issues will prevent you from being partisan on the other side, in case you switch. It might even be helpful to pick four issues, two where each side is wrong about the facts and two where it's wrong about the morals. Make sure you read something on the Other Side regularly, preferably from people you have something in common with. Immigration Amnesty and the Contraception Mandate The contraception mandate may have been inserted into the "Affordable" Care Act in order to provoke opposition. The game plan was quite simple: Tell low-information voters that Republicans hate s*x. Add a preposterous mandate for something used while doing you-know-what. Wait for Republicans to oppose it. Claim that opposition is evidence for Step 1. Get re-elected. Alternative possibility: Wait for Republicans to go along with it. Let the Republican base get disgusted enough to stay home. Both of these were avoided when Republicans backed over-the-counter birth control. It was in accordance with current Republican principles and disproved the initial claim. 
The immigration amnesty might be an attempt at the same strategy: Tell low-information voters that Republicans are racists. Issue an executive decree of dubious constitutionality. The Republicans will have to come up with a plan that is in accordance with current Republican principles and disproves the initial claim. One suggestion: Charge admission to the US. Hand over $1000 (is that a reasonable amount?) and you're on the way to citizenship. A Few Thoughts on Net Neutrality At first, I thought the net-neutrality controversy was about the standard left-wing line that we can bring the millennium by passing the right regulations. That turned out not to be what it was about. They're saying it's a matter of stopping a horrible situation. My second thought was they were talking about real problems. Leftists sometime identify real problems (e.g., stagflation in the 1970s) and propose absurd solutions. I thought that was the matter here. That turned out not to be what it was about either. They're saying they want to keep the current system. As far as I can tell, they're assuming that all good things come from regulations and if the present unregulated system is good it must be due to the regulations yet to be passed. The future regulations will be so beneficial that their good effects extended back in time. I must admit they have a dangling shred of evidence for potential exploitation: the Comcast vs. Netflix controversy. OTOH, it would be more believable if the same people hadn't been recommending the same policy for years. OTGH, it sure looks like Netflix was a much bigger violator of the Internet spirit (one content provider hogging 30% of the bandwidth?). Query: If net-neutrality regulations are passed, how long will it take for them to be repealed? GMO potatoes might prevent cancer. This is almost as good as the news that beets have more anti-oxidants after being microwaved. Evidence for the KOK Hypothesis There appears to be more infrared light in the universe than can be accounted for by current theories: Prof Jamie Bock from Nasa's Jet Propulsion Laboratory, one of the report's authors, described the extragalactic background light (EBL) as "kind of a cosmic glow". "It's very faint - but basically the spaces between the stars and galaxies aren't dark. And this is the total light made by stars and galaxies during cosmic history," Prof Bock told the BBC. Earlier measurements from rockets and satellites had shown that there was more fluctuation in this background than the sum total of known galaxies could explain. At least two proposals were made to account for the extra light: it might come from very early, distant galaxies that formed when the universe was much younger, or it might come from stray stars outside galactic boundaries. There's also the possibility the light might come from civilizations in apparently-empty parts of the universe. Bug Facts A gamergate is a type of ant It's a reproductively viable female worker ant. Gamergate ants have an odd habit Even feminists don't do this: Or consider the gamergate ants, whose females capture a male and snip off his genitals during copulation. They discard the male's body, but his severed genitals continue to fertilize for an hour. Eeeewwww!!!! Parents of infants might prefer this In some species of hymenoptera, the larvae have a lack of a habit: [The larvae] are also unable to defecate until they reach adulthood due to having an incomplete digestive tract, presumably to avoid contaminating their environment. 
Spiders can apparently build webs in six dimensions According to a British newspaper: Experts estimated there were around 35,176 spiders per cubic square metre of space. … the "big data" that was supposed to ensure Democratic dominance? Did "gamergate" alienate the programming community? Did the programmers spend the last few months sabotaging the effort? Did they try to arrange for the get-out-the-vote people to call conservatives? That would explain many of the phone calls I received … The Mickey Kaus explanation of the recent election reminds me of how the Village Voice covered a conservative victory in a Swedish election. I had thought it was a rejection of absurdly-high taxes but the Village Voice attributed it to the Socialist support for nuclear energy. The lesson of the comments at the Instapundit thread on the above is that statists never accept that Big Government has been rejected. Any attempt to reject Big Government is interpreted to mean a rejection of the isolated attempts at limiting government by the losing side. As for whether this election means Republicans must turn nativist: In 2008, the Republicans nominated a pro-immigration candidate. The resulting loss was attributed to that. In 2012, the Republicans nominated someone more neutral. It didn't work. Not THAT Nuts! Okay. Libertarians are supposed to be eccentric but the Libertarian Party candidate for State Senate in my district is just plain nuts. Does this mean I have to vote for that hack Marcellino? I'm Considering a Boycott I'm considering a boycott of Lightlife and Smart Balance for bowing to modern superstitions about GMOs. There are other candidates that I'm looking at. It's possible to make a case that GMOs "aren't necessary." If a business rejects GMOs on that ground, it's as though they had a mouse infestation problem and controlled it by acquiring brown cats, yellow cats, and gray cats. (Black cats aren't necessary.) Black cats might not be necessary to control mice, but if they're excluded, one might wonder what other superstitions are being taken seriously. Besides, such a rejection implies they're selling to idiots. What If Math Is Politicized? After reading about the politicization of absolutely everything, I've wondered what might happen if mathematics is politicized. Is the Axiom of Choice libertarian? Is the Axiom of Determinacy part of the regulatory state? Is the Power Set Axiom fascist? (It's a collectivist axiom that treats the individual points on a line as part of an amorphous blob that is given without regard to the points that make it up.) Meanwhile, it looks like Ezra Klein ended the essay with an attempt to inspire fear on the Right. Please recall the Left tried to get Phil Robertson fired. They failed. They had earlier tried to get Rugh Limbaugh fired. They failed. Addendum: Since the context of "amorphous blob" was not present in the Google search, I'll include it below: … set theory with \(\mathsf{V}=\mathsf{L}\) does not take the continuum as an amorphous blob whose existence is provided by the power set axiom. … What is emphasized here is that the abstract power set axiom is the basis of Zermelo set theory, while the notion of transfinite ordinal number is the basis of constructible set theory. … It is in questions concerning the axiom of choice that these two approaches begin to differ. If one is to put his faith in an amorphous blob, why should it be well-ordered. 
Alternatively, if the real line is something which arises from faith in transfinite iteration, there had better be a definable well-ordering. Points just want to be free!
Why is a book on a table not an example of Newton's third law? My textbook explains Newton's Third Law like this: If an object A exerts a force on object B, then object B exerts an equal but opposite force on object A It then says: Newton's 3rd law applies in all situations and to all types of force. But the pair of forces are always the same type, eg both gravitational or both electrical. And: If you have a book on a table the book exerts a force on the table (weight due to gravity), and the table reacts with an equal and opposite force. But the force acting on the table is due to gravity (is this the same as a gravitational force?), and the force acting from the table to the book is a reaction force. So one is gravitational, and the other is not. Therefore this is not Newton's Third Law as the forces must be of the same type. newtonian-mechanics forces free-body-diagram Jonathan.Jonathan. $\begingroup$ You've been given a rather confusing and imprecise explanation. The answer to this question is wrapped up in the same issues as the answer to your question about the ball. The Newtonian pair are the force of the book on the table and the force of the table on the book. They are both equal in magnitude to the weight of the book, but that is because the problem is static (nothing undergoing acceleration). I recommend that you try to understand the other question first, and then come back to this one. $\endgroup$ – dmckee♦ May 24 '12 at 20:12 $\begingroup$ Sorry I got the question slightly wrong, gravity is acting on the book, and the table pushing upwards is acting on the book. So they are both acting on the book. $\endgroup$ – Jonathan. May 24 '12 at 20:15 $\begingroup$ @dmckee, I have edited my question and I think it is different? $\endgroup$ – Jonathan. May 24 '12 at 20:18 $\begingroup$ Yes. And because the book is not accelerating you know the $F_g = -F_N$. You also know that the table feels a force from the book equal to $-F_N = F_g$. Got it? $\endgroup$ – dmckee♦ May 24 '12 at 20:19 $\begingroup$ @dmckee, I've ended up getting confused so I rewrote the question from scratch. $\endgroup$ – Jonathan. May 24 '12 at 20:26 And: If you have a book on a table the book exerts a force on the table (weight due to gravity), That's where you went wrong. The force that the book exerts on the table is not a gravitational force, it's a normal force. and the table reacts with an equal and opposite force. That's also a normal force. So the book exerts a (normal) force on the table, and the table exerts a (normal) force on the book. But the force acting on the table is due to gravity (is this the same as a gravitational force?), No, it's not, and in fact this force (the normal force) is only indirectly due to gravity. The only relevant gravitational force is the force exerted by the Earth on the book. And the book also exerts a gravitational force back on the Earth, but because the Earth is so heavy, that force has no noticeable effect. (The Earth also exerts a gravitational force on the table, and the table on the Earth, but those don't matter so much in this particular scenario.) David Z♦David Z This is a common misconception with my students too, and the only way to understand it is to draw all forces that act on both objects (five forces in total)! In order to make things clearer, I will label the force with which the table acts on the book as $F_{12}$ and not $F_\text{N}$! Also suppose that the $z$ axis is vertically up, so positive forces push upward and negative forces push downward.
There are two forces acting on book, its gravitational force $-F_\text{g,book}$ (downward) and the force of table on the book $F_{12}$ (upward). According to first Newton law for the book they are equal by magnitude $$F_{12} - F_\text{g,book} = 0.$$ According to the third Newton law book must be acting on table with the force $-F_{12}$ (downward). So there are three forces acting on table: its gravitational force $-F_\text{g,table}$, force of the book $-F_{12}$ (both downward) and the force of the ground $F_\text{N}$ (upward)! Now let's write the first Newton's law for the table $$F_\text{N} - F_{12} - F_\text{g,table} = 0.$$ Consequently $$F_\text{N} = F_{12} + F_\text{g,table} = F_\text{g,book} + F_\text{g,table}$$ The ground force must support both book and table! Isn't that obvious? Conclusion: So third Newton's law is perfectly valid for this case as well! If you still do not understand, write on the paper book, table, and all five forces (two acting on the book and three acting on the table). AndrewC PygmalionPygmalion $\begingroup$ Why isn't $F_g$ and $F_N$ the same force, as gravity cuases the book to push down on the table. $\endgroup$ – Jonathan. May 24 '12 at 20:28 $\begingroup$ $-F_\text{g,book}$ is gravitational (downward) force of the book and $F_\text{N}$ is (upward) force of the table. According to first Newton's law, they are equal by the magnitude and opposite in direction. These are two separate forces. $\endgroup$ – Pygmalion May 24 '12 at 20:32 $\begingroup$ @Jonathan. I edited the answer to distinguish between inter-force $F_{12}$ between book and table and ground force to table. $\endgroup$ – Pygmalion May 24 '12 at 20:36 One way to make it obvious is think about how the down-momentum is flowing. The book is getting down-momentum from the Earth (through action-at-a-distance gravity), and this down-momentum then flows downwards to the table, and across the table to the legs, then through the legs of the table back down to the Earth, making a closed circuit of down-momentum, like a closed electrical circuit. Each time momentum leaves an object A and enters another object B, we say a force is acting from A to B, and simultaneously that a reaction force is acting from B to A (since the momentum gained by B is the momentum lost by A). This is Newton's third law. In this circuit, the down-momentum goes Earth $\rightarrow$ book $\rightarrow$ table $\rightarrow$ Earth So there is an action/reaction pair from the Earth to the book (the Earth is pulling the book and transferring down-momentum to it, and the book is pulling the Earth, transferring an equal amount of negative down-momentum--- or up momentum--- to the Earth). There is an action reaction pair from the book to the table ( the book is transferring down-momentum to the table through a contact normal force, and the table is transferring negative down-momentum to the book by the same contact normal force), then the table has an action/reaction pair with the Earth (the table sends the down-momentum into the Earth, and the Earth sends negative down-momentum into the table) Each of these flows is describing how a conserved quantity, namely down-momentum is going from place to place. It is easiest to sort this out with flows of charge, because unlike charge, momentum is a vector. Ron MaimonRon Maimon Newton's third law is about pairs of objects interacting. The force that acts on one object is equal and opposite to the force acting on the other object. So you can never have a third law pair acting on the same object. 
The equality of the reaction force and the weight force is nothing to do with the third law, and is just as a result of the first law applied to the forces acting on the book. Let's look at some third law pairs in this scenario: The weight of the book and the weight of the earth. Yup, the earth is pulled up by the book, but because $F=ma$ and the earth is more than a little heavier, it doesn't result in a great deal of movement on the earth's part when the book is released! The normal force of the table on the book and the book on the table. The force that the book exerts on the table is a normal force, not a weight force. (The book's weight doesn't act on the table, it acts on the book.) It's equal in magnitude to the weight of the book, again, because of the first law. The book and the table press on each other. It's probably better to think of the normal force as being generated by the electromagnetic forces between molecules in the table and book. You get a normal pair like this in the man-leaning-on-wall example. The normal forces between the desk and the earth The weight forces between the desk and the earth (The gravitational forces between the book and the table are negligable.) Force 1=Force 2 in magnitude by law 1, not by law 3. (Same for forces 3 and 4.) AndrewCAndrewC A lot of questions here talk about "normal force", but I get the feeling that you're still confused about what that is. First consider the book - Whether it is resting on the table or not, it has a weight. Here weight is different from mass. The weight is the mass $m$ times the acceleration due to Earth's gravity $g$, or more familiarly $$F = mg$$ The same goes for the table. Now this is the important part - The weight isn't gravitational force. The gravitational force that you are thinking of is expressed as $$F_g = \frac{Gm_1 m_2}{r^2}$$ and that is the force due to the gravitational attraction between two bodies. In the case of the table and the book, the gravitational attraction is absolutely negligible, since they are both so tiny. The force that the table experiences because of the book is what is being called normal force. The table then exerts an equal and opposite force. This is also clearly seen, because if the table didn't exert an equal and opposite force, the book would be accelerating downward. But the whole system is at rest, therefore the total force on the book-table system must be zero. EDIT: @AndrewC has mentioned in the comments below why my earlier reasoning was wrong. Basically normal force is only indirectly due to gravity. Khan Academy has a brilliant explanation of these concepts. KitchiKitchi $\begingroup$ Nonono, the "if the table didn't exert an equal and opposite force" argument is Newton's first law. If that's what Newton's third law said (every action has an equal and opposite reaction), it would mean nothing ever moved! My trailer exerts an equal and opposite tension force on my car, even when I'm accelerating. $\endgroup$ – AndrewC Nov 28 '12 at 21:38 $\begingroup$ Would you like to explain your interesting statement about Weight force not being gravitational force? $\endgroup$ – AndrewC Nov 28 '12 at 21:39 $\begingroup$ Newton's first law says that anything that's moving keeps moving, and anything that's at rest stays at rest, unless you have an external force. In this case, the external force is gravity, which is trying to pull the book down. That force is nicely cancelled with the force the table exerts on the book. 
$\endgroup$ – Kitchi Nov 29 '12 at 5:57 $\begingroup$ My point is that your last paragraph sounds like it's talking about Newton's third law by using the phrase equal and opposite, but you're actually using Newton's first law. That's exactly the confusion the textbook was trying to avoid and the question is trying to unpick, so it's unhelpful in this context. $\endgroup$ – AndrewC Nov 29 '12 at 22:44 $\begingroup$ I thought you were making an interesting point in distinguishing weight force from gravitational force (perhaps about the discrepancy between $g=9.81m/s^2$ and $Gm_E/r_E^2$ in practice) but actually I think you were just making a mistake. Weight is the force due to gravity in the sense you're using it in your answer, calling the distinction important is misleading in this context. $\endgroup$ – AndrewC Nov 29 '12 at 22:45 You need to sort out these ideas. 1. Free body diagrams: book, table, book and earth, table and earth. 2. Sort the force pairs by 'kind' of force: interaction is contact (due to electric forces); gravity is a force due to each of the bodies. So book-table has force pairs due to interaction forces, balanced and opposite; call them normal due to book, normal due to table. Both same kind. Sorted. Book-earth has a force pair due to the gravity of each acting on the other. Both same kind of forces, equal and opposite, and on different bodies. Table-earth: there is contact, which is electric interaction at the electronic charge level. Equal, opposite, yet same kind of force. Finally, each mass has gravity and each mass exerts a force on the other mass - NOTE: "on the other mass!!!!" Same kind of force again. Conditions for N3: equal magnitude, opposite direction, same kind of force. KandahariKandahari
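As a purely numerical restatement of the force bookkeeping in the answers above (a sketch with made-up masses, not anyone's posted code), the two equilibrium conditions give the table-on-book force and the ground-on-table force directly:

```python
# Book resting on a table, both in equilibrium.
# Table on book:   F12 = m_book * g             (equal to the book's weight by the 1st law)
# Ground on table: FN  = (m_book + m_table) * g (the ground supports book + table)
g = 9.81        # m/s^2
m_book = 1.0    # kg (assumed)
m_table = 20.0  # kg (assumed)

F12 = m_book * g             # table on book; by the 3rd law the book pushes down on the table with the same magnitude
FN = (m_book + m_table) * g  # ground on table

print(f"table on book  F12 = {F12:.1f} N")
print(f"ground on table FN = {FN:.1f} N")
```

The point the answers make survives the arithmetic: the equality of $F_{12}$ and the book's weight comes from the first law (equilibrium), while the third-law pairs are book-on-table/table-on-book and book-on-Earth/Earth-on-book.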
Lung cancer mortality of residents living near petrochemical industrial complexes: a meta-analysis Cheng-Kuan Lin1, Huei-Yang Hung2, David C. Christiani1,3, Francesco Forastiere4 & Ro-Ting Lin ORCID: orcid.org/0000-0002-2687-203X5 Environmental Health volume 16, Article number: 101 (2017) This article has been updated; the Correction to this article has been published in Environmental Health 2017 16:122. Lung cancer, as the leading cause of cancer mortality worldwide, has been linked to environmental factors, such as air pollution. Residential exposure to petrochemicals is considered a possible cause of lung cancer for the nearby population, but results are inconsistent across previous studies. Therefore, we performed a meta-analysis to estimate the pooled risk and to identify possible factors leading to the heterogeneity among studies. The standard process of selecting studies followed the Cochrane meta-analysis guideline of identification, screening, eligibility, and inclusion. We assessed the quality of selected studies using the Newcastle-Ottawa scale. Reported point estimates and 95% confidence intervals were extracted or calculated to estimate the pooled risk. Air quality standards were summarized and treated as a surrogate of exposure to air pollution in the studied countries. Funnel plots, Begg's test and Egger's test were conducted to diagnose publication bias. Meta-regressions were performed to identify explanatory variables of heterogeneity across studies. A total of 2,017,365 people living nearby petrochemical industrial complexes (PICs) from 13 independent study populations were included in the analysis. The pooled risk of lung cancer mortality for residents living nearby PICs was 1.03 times that of people living in non-PIC areas (95% CI = 0.98–1.09), with low heterogeneity among studies (I² = 25.3%). The effect was stronger by 12.6% for each year earlier that the follow-up started (p-value = 0.034). Our meta-analysis of the current evidence suggests only a slightly higher risk of lung cancer mortality among residents living nearby PICs, although this association did not reach statistical significance. The higher risks associated with earlier residential exposure to PICs might be attributable to the lack of, or less stringent, air pollution regulations in earlier periods. Lung cancer is the leading cause of cancer deaths globally [1]. The Global Burden of Disease Study estimated that 1.7 million people died from lung cancer in 2015 [1]. Although tobacco smoking acts as one of the major risk factors for the disease [2, 3], there is still a considerable fraction of lung cancer mortality that remains unexplained [4]. This is particularly noticeable in many high-income countries, which have shown a clear downward trend in smoking prevalence [5]. Therefore, research in the past two decades has focused on environmental determinants of lung cancer [4, 6]. The petrochemical manufacturing industry, defined as petroleum refining (Standard Industrial Classification code [SIC] 2911) or industrial organic chemicals manufacturing (SIC 2869), involves processes that produce and potentially emit hazardous chemicals into the surrounding air, soil, and water. These petrochemical manufacturing factories are usually clustered in an industrial area together with other manufacturing processes or industries, such as steel, coking, and thermoelectric plants [7, 8], and are called petrochemical industrial complexes (PICs). Several studies have detected environmental air pollutants near petrochemical manufacturing plants [9,10,11,12] and also after occasional fire accidents at petrochemical plants [13]. Long-term exposure to poor air quality, as well as to radon, chemicals, and arsenic compounds, among residents living near petrochemical manufacturing complexes has raised general awareness and the need to understand the possible adverse health effects among nearby residents [4, 14].
Several studies have detected environmental air pollutants near petrochemical manufacturing plants [9,10,11,12] and also after occasional fire accidents at petrochemical plants [13]. Long-term exposure to the poor air quality, as well as radon, chemicals, and arsenic compounds among residents living near petrochemical manufacturing complexes raised general awareness and the need to understand the possible adverse health effects among nearby residents [4, 14]. Several epidemiological studies have explored associations between the PICs and lung cancer risks of nearby people. Given high public concerns of health, the US started several investigations of suspected cancer risks for people living nearby PICs back to the 1970s [15,16,17]. For example, US white males living in petroleum industry counties had 1.10- to 1.17-fold higher risks of lung cancer mortality than males in other counties [17]. Subsequent studies in Italy and UK also revealed similar results with relative risks of 1.26 and 1.04, respectively, among white females [7, 8]. Fast-growing economies in Asia stimulated by the increasing demand for petrochemicals in manufacturing sectors also faced corresponding increases of lung cancer mortalities among residents nearby PICs [18]. However, several studies reported different results. For instance, Tsai and colleagues reported that male residents living in Louisiana's Industrial Corridor had lower risks of lung cancer compared to other Louisiana citizens, even after adjusting for age [19]. Similarly, Simonsen and colleagues reported that the risk of lung cancer was not elevated significantly in accordance with the residence proximity to the industrial area [20]. Due to the inconsistent results, our study aimed to estimate lung cancer mortality risk associated with the PICs by combining cross-country data from different studies via a systematic review and meta-analysis. Data source and study selection We selected exclusively articles from PubMed, Cochrane Library, Web of Science, Science Direct, and other sources that published before July 11, 2017. We used "(Lung cancer OR Lung neoplasm) AND (Refinery OR Petroleum OR Petrochemical OR Oil and Gas Industry)" as the search term. Two researchers—HY Hung and RT Lin—selected independently articles that met the inclusion and exclusion criteria as below. The inclusion criteria were: (1) original articles that clearly defined exposure group as residents living nearby PICs; (2) original articles that clearly defined lung cancer mortality according to International Classification of Diseases (ICD); (3) original articles that reported either confidence intervals (CI), standard errors (SE), or both; and (4) original articles that were written in English and full-texts were available. The exclusion criteria were: (1) studies with subjects overlapped with other publication; (2) studies that focused on occupational exposure in petrochemical plants only; and (3) studies that reported lung cancer incidence only and lack of mortality data. Review process and data extraction Figure 1 shows the selection process of the articles, including four steps: identification, screening, eligibility, and included. First, we identified 1249 articles from library databases and excluded 131 duplicated articles. Second, we screened articles by titles and abstracts. We chose 30 of them as relevant to our study objective for full-text review. 
Third, we carefully reviewed and checked whether those articles clearly defined exposure and health outcome and also reported estimates and CI or SE. Considering that a study population might appear in different articles, we selected the latest article to avoid bias towards the specific population. Finally, we included seven articles that reported 13 estimates for meta-analysis: three articles reporting sex-specific mortality rate ratios of lung cancer [7, 18, 21]; one article reporting sex-specific age-adjusted mortality rates of industrial corridors and Louisiana, respectively [19]; one article reporting odds ratios for both sexes combined [22]; one article reporting standardized mortality ratios by sex [8] and another one reporting standardized mortality ratios for both sexes combined [23]. The ratio of Belli's study was regarded as for males in subgroup analysis because males accounted for 84% of the study group [22]. Flow of systematic literature search on lung cancer mortality for residents living nearby petrochemical sites. N = number of studies; n = number of estimates included into meta-analysis; RR = relative risk (rate ratio or risk ratio); OR = odds ratio; SMR = standardized mortality ratio Since lung cancer mortalities were less than $10^{-3}$ per year [24], we could appropriately interpret estimated odds ratios as relative risks [25, 26]. The adjusted standardized mortality ratios could be interpreted as relative risks as well because the estimates were derived from the comparison to the general population in Rome [8, 27]. For the study not reporting CI or SE [19], we estimated the variances and SE of lnRR using the following equations: $$ Var(lnRR)= Var\left({lnR}_1-{lnR}_0\right) $$ $$ = Var\left({lnR}_1\right)+ Var\left({lnR}_0\right) $$ $$ ={\left(\frac{1}{R_1}\right)}^2\times Var\left({R}_1\right)+{\left(\frac{1}{R_0}\right)}^2\times Var\left({R}_0\right) $$ $$ SE(lnRR)=\sqrt{Var(lnRR)} $$ where Var(lnRR) represents the variance of the natural log of the relative risk (RR); R1 and R0 represent the mortality rates of the studied group and the reference group, respectively; and SE(lnRR) represents the standard error of the natural log of the relative risk. The second equality holds under the assumption that the two rates are independent. We applied a random-effects model to examine whether there were within- and between-study heterogeneities using the I² test [28]. We set I² less than 10% as no heterogeneity, 10%–30% as low heterogeneity, 30%–60% as moderate heterogeneity, and more than 60% as high heterogeneity based on the Cochrane handbook [29]. We further did subgroup analysis by different characteristics [30], including sex, location, ethnicity, PM10 standard, latency period (first year of study period more than 20 years after operation year of PICs vs. less or equal to 20 years), and bona fide observation (defined as 10 or more years of observation after 20 years of PIC operations vs. less than 10 years). Then, we applied meta-regressions to investigate the possible factors of heterogeneity, including sex, ethnicity, location, year of publication, and the starting year of follow-up. We also conducted sensitivity analyses to assess the influence of each individual study on the overall RR by gradually adding one estimate at a time into the pooled estimate. Finally, we used a funnel plot and Begg's and Egger's asymmetry tests to examine whether there was publication bias or small-study bias. All analyses were performed using the Stata Software version 11.2 (StataCorp, TX, US). We set the statistical significance level at 0.05, using a two-sided test.
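For readers who want to reproduce this type of calculation, the sketch below (not the authors' code; the study-level numbers are hypothetical) implements the delta-method SE of ln(RR) from two independent rates, as in the equations above, together with a DerSimonian-Laird random-effects pool and the I² statistic:

```python
# Sketch of the two computations described above (hypothetical inputs, not data from the paper):
# (1) SE of ln(RR) from two independent mortality rates via the delta method;
# (2) DerSimonian-Laird random-effects pooling with Cochran's Q and I^2.
import numpy as np

def se_ln_rr(r1, var_r1, r0, var_r0):
    """SE of ln(R1/R0), assuming the two rates are independent."""
    return np.sqrt(var_r1 / r1**2 + var_r0 / r0**2)

def dersimonian_laird(log_rr, se):
    """Return pooled RR, 95% CI, and I^2 (%) under a random-effects model."""
    w = 1.0 / se**2                          # fixed-effect (inverse-variance) weights
    theta_fe = np.sum(w * log_rr) / np.sum(w)
    q = np.sum(w * (log_rr - theta_fe)**2)   # Cochran's Q
    df = len(log_rr) - 1
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)            # between-study variance
    w_re = 1.0 / (se**2 + tau2)              # random-effects weights
    theta = np.sum(w_re * log_rr) / np.sum(w_re)
    se_theta = np.sqrt(1.0 / np.sum(w_re))
    i2 = 100 * max(0.0, (q - df) / q) if q > 0 else 0.0
    ci = np.exp([theta - 1.96 * se_theta, theta + 1.96 * se_theta])
    return np.exp(theta), ci, i2

# Hypothetical age-adjusted rates (per 100,000 person-years) and their variances:
print("SE(lnRR) =", round(se_ln_rr(60.0, 9.0, 55.0, 4.0), 3))

# Hypothetical set of study-level RRs with standard errors on the log scale:
rr = np.array([1.30, 0.90, 1.25, 1.02, 1.00])
se = np.array([0.08, 0.07, 0.10, 0.05, 0.06])
pooled, ci, i2 = dersimonian_laird(np.log(rr), se)
print(f"pooled RR = {pooled:.2f}, 95% CI = {ci[0]:.2f}-{ci[1]:.2f}, I^2 = {i2:.1f}%")
```

Standard meta-analysis packages (for example, the metafor package in R or user-written Stata commands) implement the same inverse-variance and DerSimonian-Laird weighting.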
Assessment of data quality To assess the risk of bias in each study, the quality of each study was recorded and assessed using the Newcastle-Ottawa Quality Assessment Scale [31]. Records on data quality for each study were reviewed by CK Lin and HY Hung. We evaluated potential bias based on three categories (selection, comparability, and outcome) with eight measurements [31]. Although the discussion on the validity of the Scale remained inconclusive, the reliability of the Scale is quite fair and widely used in meta-analysis [32, 33]. Pollutants emitted from PIC might vary over time, likely due to the change of manufacturing process and pollution control technology. Since data on air quality around studied petrochemical areas were limited, we reviewed national or regional ambient air quality standards in studied countries or regions: European Union (EU), Taiwan, and the US. We summarized three air quality standards for studied countries, including total suspended particles (TSP), PM10, and PM2.5 [34,35,36,37,38,39,40,41]. Table 1 shows the basic characteristics of studies included in our meta-analysis. A total of 13 study groups were extracted, covering around 2,017,365 people living near petrochemical areas in Taiwan, Louisiana in the US, Teesside, West Glamorgan in the UK, and Brindisi, Sicily, and Rome in Italy. Seven out of 13 study groups reported RRs for males, five for females, and one for both sexes combined. The follow-up years ranged widely from 1960 to 2002. Most PICs operated at least 14 years. Table 1 Basic characteristics of studies included in the meta-analysis Figure 2 shows the pooled estimate of mortality risk for lung cancer among residents living nearby PICs. The estimated overall RR of 1.03 indicated that lung cancer mortality among residences might be associated with exposure to PICs, but it didn't reach statistical significance (95% CI = 0.98–1.09). Although Belli's study (study ID = H in Fig. 2) reported point estimate of lung cancer risk as high as 3.10, its broad CI ranging from 0.82 to 11.79 led to the smallest weighting factor of 0.17% in our meta-analysis. Among the selected studies, the highest weighting factor of 23.35% (study ID = K in Fig. 2) indicated that Michelozzi's study on males in Rome contributed to the largest proportion of the pooled estimate, mainly because this study had the narrowest CI. The overall I 2 was 25.3%, indicating low heterogeneity existed among these studies. Forest plot of studies on lung cancer risks of residents living nearby petrochemical industrial complexes. RR = relative risk Table 2 shows the results of pooled estimates and 95% CI by different characteristics, including sex, location, ethnicity, PM10 standard, latency period, and bona fide observation. For each characteristic, there was no significant difference among pooled estimates between subgroups based on overlapping 95% CIs. However, we found a higher risk of lung cancer associated with residential exposure to PICs in the era of looser PM10 standard (RR = 1.12, 95% CI = 0.97–1.29 vs. RR = 1.01, 95% CI = 0.96–1.06). Table 2 Pooled estimates of relative risks of lung cancer mortality for residents living nearby petrochemical industrial complexes, by different characteristics Except for the starting year of follow-up, we did not find any possible heterogeneous factor from the meta-regression analysis. 
The slope of the meta-regression line suggested that for an increment in the starting year of follow-up, the RR of lung cancer would be 0.874-fold lower (p-value = 0.034, Fig. 3). The relationship between natural log of relative risk of lung cancer mortality and starting year of follow-up. ln(RR) = natural log of relative risk Figure 4 shows the sensitivity analysis for the effect of individual study on pooled results. We gradually added each study into the sensitivity analysis—from studies published in the earlier period to studies published in the later period. None of them significantly affected the pooled results. There was no significant publication bias among the studies for 13 study groups (Egger's test: p-value = 0.059; Begg's test: p-value = 0.051). The funnel plot also indicated no asymmetry for the estimates for the 13 study groups was observed (Fig. 5). Sensitivity analysis of random effects estimates after adding each additional study according to the publication year. RR = relative risk Funnel plot for lung cancer mortality relative rates associated with residential exposure to petrochemical industrial complexes of the 13 study groups. ln(RR) = Natural log of relative risks; SE of ln(RR) = standard error of natural log of relative risks Additional file 1 listed details of the quality assessment for cohort and case-control study, respectively. All studies reported sex-specific, age-adjusted point estimates. Some studies further adjusted ethnicity, socioeconomic levels (e.g., school levels, job collars categories, unemployment, number of family members, overcrowding, and ownership of dwellings), or study periods. Four studies had full score of nine stars [18, 21,22,23]; two had 8 out of 9 stars [7, 8]; and one study had seven out of nine stars [19] (see Additional file 2). Air quality standards in the EU, Taiwan, and the US were summarized in Fig. 6. The earliest standard of ambient air quality was for TSP, followed by PM10 and PM2.5. All countries have set stricter air quality standards over the years. For example, the standard for annual average TSP concentration was 150 μg/m3 in the EU in 1983. The EU tightened the regulation by setting up annual PM10 standard at 60 μg/m3 in 1996, and then lowering it to 40 μg/m3 in 1999. In 2008, the EU set up the annual PM2.5 standard at 25 μg/m3. Similarly, the US set the annual TSP standard at 75 μg/m3 in 1971, and further tightened the limits to 50 μg/m3 in 1987. In contrast, Taiwan adopted the US's 1971 standard for TSP and PM10 and announced the regulation in 1992, but the limits have not been changed since then. Historical air quality standards of studied regions. TSP = total suspended particles; 1' = primary pollutant; 2' = secondary pollutant To our best knowledge, this is the first meta-analysis that estimated the pooled RR of lung cancer mortality for residents living nearby PICs. We aggregated lung cancer risks for 13 study groups from seven published papers in the US, the UK, Italy, and Taiwan. Based on these studies, people living in the PICs had higher lung cancer mortality risks than residents in non-PICs by a factor of 1.03, despite such associations didn't reach statistically significant (95% CI = 0.98–1.09). Stratification analysis by different characteristics, such as sex and ethnicity, did not change the magnitude of this association. In contrast, the starting year of follow-up affected the association between lung cancer mortality and exposure to PICs by a factor of 0.874. 
That is, the estimated risk of lung cancer mortality was higher among subjects recruited in earlier periods, and the risk decreased by 12.6% for every 1-year delay in the start of follow-up. The scientific evidence of this study is robust from several perspectives. First, the outcome variable was based on pathological samples and/or the ICD-9. Individual data were obtained by linking to governmental databases. Second, the large sample size (n = 2,017,365) and diverse populations (e.g., by sex, ethnicities, and locations) made the pooled estimate more representative and enhanced its generalizability. Third, by applying the random-effects model, we were able to address the heterogeneity between studies and report the pooled effects. We found higher lung cancer mortality risks among residents near PICs by a factor of 1.03, although this adjusted RR did not reach statistical significance. We identified the following possible limitations of the study. First, the definition of exposure varied slightly between studies. Most studies defined the exposure based on the geographical locations or distances of residences from the PIC [7, 8, 19, 21,22,23], while one study compared the exposed group and reference group by matching job categories in PIC and non-PIC towns [18]. Misclassification of exposure and non-exposure might exist and bias the pooled estimates towards the null. Second, the operation of PICs started as early as 1960 and some PICs are still in operation. Exposure to pollutants emitted from PICs might be quantitatively and qualitatively different in each period. Third, although our subgroup analysis did not show different risks for residents in different latency periods, not everyone in the selected studies had a sufficient latency period or an adequate follow-up period. Estimates of the latency period for lung cancer diagnosis vary widely but usually range from several years to decades [42, 43]. Inclusion of residents with insufficient latency might bias the result toward the null in the original studies. An effective air quality intervention involves a series of steps, including establishing regulations, reducing pollution, and realizing the anticipated improvements in health [44]. Although data on ambient pollution monitoring around PICs in the early periods were very limited and hard to obtain, previous studies have documented that pollution reductions could be attributed to changing regulations [45, 46]. We could reasonably assume that most petrochemical factories followed the local regulations to some extent. Therefore, the historical air quality standards for TSP, PM10, and PM2.5 could reflect the relative trends of exposure to air pollutants emitted from PICs. Most air quality standards became stricter over the years [34,35,36,37, 41]. This trend partially explains our findings in the heterogeneity regression; that is, studies on populations with earlier exposure to PICs were associated with a significantly higher risk of lung cancer mortality. Some limitations need to be addressed when interpreting our results. First, not all potential confounders were adjusted for in the seven articles, such as smoking, radon exposure, meteorological factors, and socioeconomic status. However, these unadjusted confounders posed an unknown or even lower risk of lung cancer to the exposure group compared to the reference group. For example, the smoking rate of the exposure group was lower than that of the reference group in Bhopal and colleagues' study [7]. 
Similarly, people living in the Industrial Corridor had higher socioeconomic status (lower unemployment, higher income, and higher educational attainment) compared with the Louisiana average [19]. Since a lower smoking rate and higher neighborhood socioeconomic status are associated with lower lung cancer incidence [47, 48], health benefits from the improvement of socioeconomic status accompanying industrial development were likely to outweigh the negative effects of exposure to the petrochemical industry. Data on radon exposure, one of the risk factors for lung cancer, were absent in all selected papers. However, there is no evidence of higher radon exposure in PIC areas than in non-PIC areas [49, 50]. Similarly, seasonal variations in wind direction might either increase or decrease the effect of PIC exposure on residents' health. Since all studies had exposure and reference groups from both upwind and downwind locations, subject-selection bias and meteorological effects due to location variance were reduced. Second, the studies with available data for meta-analysis originated from the US, the UK, Italy, and Taiwan. The generalization of the impact of the petrochemical industry on lung cancer might be restricted to these countries. However, these four countries represent the majority of the countries with the largest petrochemical industries in terms of ethylene production capacity [51], ethylene being the major base of petrochemicals and a common index to estimate the production capacity of a petrochemical company. Third, each PIC might involve other manufacturing processes (such as steel, coking, and power plants), and the exposure level could also be affected by geographical factors across different countries. Limited by the lack of corresponding exposure data, our findings were not able to address the heterogeneity between PICs. Fourth, a certain portion of residents living near PICs might have occupational exposure as well. Some studies separated the environmental exposure from the occupational exposure (study ID = A, B, G, H) or at least considered job categories in the analysis (study ID = M) to reduce the influence of occupational exposure. Our meta-analysis of the current evidence suggests only a slightly higher risk of lung cancer mortality among residents living near PICs. Our analysis also underlines the role of stringent regulations in improving air quality and reducing residential exposure to air pollution, which can further contribute to lowering the risk of lung cancer. After publication of the article [1], it was brought to our attention that the original version of this Article contained a typo in the 3rd paragraph of the section 'Review process and data extraction'. It concerns the equation published as "Var(lnRR) = Var(lnR1 + lnR0)". On the right-hand side, the "+" within the parentheses should be "-", as defined and derived from the left-hand side. As a result, Var(lnRR) = Var(lnR1 + lnR0) should be revised to Var(lnRR) = Var(lnR1 - lnR0). ICD: International Classification of Diseases; PIC: Petrochemical industrial complex; SIC: Standard Industrial Classification code; TSP: Total suspended particles Wang H, Naghavi M, Allen C, Barber RM, Bhutta ZA, Carter A, et al. Global, regional, and national life expectancy, all-cause mortality, and cause-specific mortality for 249 causes of death, 1980-2015: a systematic analysis for the Global Burden of Disease Study 2015. Lancet. 2016;388(10053):1459–544. Doll R, Peto R, Boreham J, Sutherland I. Mortality in relation to smoking: 50 years' observations on male British doctors. Br Med J. 
2004;328(7455):1519–28. Hecht SS. Tobacco smoke carcinogens and lung cancer. J Natl Cancer Inst. 1999;91(14):1194–210. Field RW, Withers BL. Occupational and environmental causes of lung cancer. Clin Chest Med. 2012;33(4):681–703. Bilano V, Gilmour S, Moffiet T, d'Espaignet ET, Stevens GA, Commar A, et al. Global trends and projections for tobacco use, 1990-2013: an analysis of smoking indicators from the WHO Comprehensive Information Systems for Tobacco Control. Lancet. 2015;385(9972):966–76. Dockery DW, Pope CA, Xu XP, Spengler JD, Ware JH, Fay ME, et al. An association between air pollution and mortality in six U.S. cities. N Engl J Med. 1993;329(24):1753–9. Bhopal RS, Moffatt S, Pless-Mulloli T, Phillimore PR, Foy C, Dunn CE, et al. Does living near a constellation of petrochemical, steel, and other industries impair health? Occup Environ Med. 1998;55(12):812–22. Michelozzi P, Fusco D, Forastiere F, Ancona C, Dell'Orco V, Perucci CA. Small area study of mortality among people living near multiple sources of air pollution. Occup Environ Med. 1998;55(9):611–5. Na K, Kim YP, Moon KC, Moon I, Fung K. Concentrations of volatile organic compounds in an industrial area of Korea. Atmos Environ. 2001;35(15):2747–56. Yuan TH, Chio CP, Shie RH, Pien WH, Chan CC. The distance-to-source trend in vanadium and arsenic exposures for residents living near a petrochemical complex. J Expo Sci Environ Epidemiol. 2016;26(3):270–6. Cetin E, Odabasi M, Seyfioglu R. Ambient volatile organic compound (VOC) concentrations around a petrochemical complex and a petroleum refinery. Sci Total Environ. 2003;312(1–3):103–12. Kalabokas PD, Hatzianestis J, Bartzis JG, Papagiannakopoulos P. Atmospheric concentrations of saturated and aromatic hydrocarbons around a Greek oil refinery. Atmos Environ. 2001;35(14):2545–55. Shie RH, Chan CC. Tracking hazardous air pollutants from a refinery fire by applying on-line and off-line air monitoring and back trajectory modeling. J Hazard Mater. 2013;261:72–82. Downey L, Van Willigen M. Environmental stressors: the mental health impacts of living near industrial activity. J Health Soc Behav. 2005;46(3):289–305. Blot WJ, Fraumeni JFJr. Geographic patterns of lung cancer: industrial correlations. Am J Epidemiol 1976;103(6):539-550. Gottlieb MS, Pickle LW, Blot WJ, Fraumeni JFJr. Lung cancer in Louisiana: death certificate analysis. J Natl Cancer Inst 1979;63(5):1131-1137. Blot WJ, Brinton LA, Fraumeni JF Jr, Stone BJ. Cancer mortality in U.S. counties with petroleum industries. Science. 1977;198(4312):51–3. Yang CY, Chiu HF, Chiu JF, Kao WY, Tsai SS, Lan SJ. Cancer mortality and residence near petrochemical industries in Taiwan. J Toxicol Environ Health. 1997;50(3):265–73. Tsai SP, Cardarelli KM, Wendt JK, Fraser AE. Mortality patterns among residents in Louisiana's industrial corridor, USA, 1970-99. Occup Environ Med. 2004;61(4):295–304. Simonsen N, Scribner R, Su LJ, Williams D, Luckett B, Yang T, et al. Environmental exposure to emissions from petrochemical sites and lung cancer: the lower Mississippi interagency cancer study. J Environ Public Health. 2010;2010:759645. Pasetto R, Zona A, Pirastu R, Cernigliaro A, Dardanoni G, Addario SP, et al. Mortality and morbidity study of petrochemical employees in a polluted site. Environ Health. 2012;11:34. Belli S, Benedetti M, Comba P, Lagravinese D, Martucci V, Martuzzi M, et al. Case-control study on cancer risk associated to residence in the neighbourhood of a petrochemical plant. Eur J Epidemiol. 2004;19(1):49–54. 
Sans S, Elliott P, Kleinschmidt I, Shaddick G, Pattenden S, Walls P, et al. Cancer incidence and mortality near the Baglan Bay petrochemical works, South Wales. Occup Environ Med. 1995;52(4):217–24. United States Cancer Statistics: 1999–2013. Incidence and Mortality Web-based Report https://nccd.cdc.gov/uscs/toptencancers.aspx. Accessed 5 Apr 2017. Tsuang MT, Hsieh CC, Fleming JA. Appendix: odds ratio and its interpretation in case-control studies. In: Hsu LKG, Hersen M, editors. Research in psychiatry. US: Springer; 1992. p. 125–32. Accessed 5 Apr 2017. Schmidt CO, Kohlmann T. When to use the odds ratio or the relative risk? Int J Public Health. 2008;53(3):165–7. Symons MJ, Taulbee JD. Practical considerations for approximating relative risk by the standardized mortality ratio. J Occup Med. 1981;23(6):413–6. Higgins JP, Thompson SG, Deeks JJ, Altman DG. Measuring inconsistency in meta-analyses. Br Med J. 2003;327(7414):557–60. Jonathan J Deeks, Julian PT Higgins, Altman DG. Identifying and measuring heterogeneity. In: Julian PT Higgins, Green S, editors. Cochrane handbook for systematic reviews of interventions version 5.1.0. 2011. http://handbook-5-1.cochrane.org/. Accessed 5 Apr 2017. Schabath MB, Cress D, Munoz-Antonia T. Racial and Ethnic Differences in the Epidemiology and Genomics of Lung Cancer. Cancer Control. 2016;23(4):338–46. The Newcastle-Ottawa Scale (NOS) for assessing the quality of nonrandomised studies in meta-analyses. http://www.ohri.ca/programs/clinical_epidemiology/oxford.asp. Accessed 5 Apr 2017. Hootman JM, Driban JB, Sitler MR, Harris KP, Cattano NM. Reliability and validity of three quality rating instruments for systematic reviews of observational studies. Res Synth Methods. 2011;2(2):110–8. Hartling L, Hamm M, Milne A, Vandermeer B, Santaguida PL, Ansari M, et al. Validity and inter-rater reliability testing of quality assessment instruments [internet]. In: Introduction. Agency for Healthcare Research and Quality: Rockville, MD; 2012. Particulate matter (PM) standards-table of historical PM NAAQS. https://www3.epa.gov/ttn/naaqs/standards/pm/s_pm_history.html. Accessed 14 Apr 2017. Air quality standards (In Chinese). http://law.moj.gov.tw/LawClass/LawAll.aspx?PCode=O0020007. Accessed 14 Apr 2017. European Union. Ambient air quality and cleaner air for Europe (Directive 2008/50/EC). http://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32008L0050&from=en. Accessed 14 Apr 2017. European Union. Limit values for sulphur dioxide, nitrogen dioxide and oxides of nitrogen, particulate matter and lead in ambient air (Directive 1999/30/EC). http://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:31999L0030&from=EN. Accessed 14 Apr 2017. Decreto Legislativo 13 Agosto 2010, n.155 (In Italian). http://www.minambiente.it/normative/dlgs-13-agosto-2010-n-155-attuazione-della-direttiva-200850ce-relativa-alla-qualita. Accessed 17 Apr 2017. Decreto Ministeriale 2 Aprile 2002 (In Italian). http://www.gazzettaufficiale.it/atto/serie_generale/caricaDettaglioAtto/originario?atto.dataPubblicazioneGazzetta=2002-04-13&atto.codiceRedazionale=002G0089&elenco30giorni=false. Accessed 17 Apr 2017. Decreto Ministeriale 25 Novembre 1994 (In Italian). http://www.gazzettaufficiale.it/eli/id/1994/12/13/094A7814/sg. Accessed 17 Apr 2017. Decreto del Presidente del Consiglio dei ministri 28.03.1983 (In Italian). http://www.arpab.it/aria/normativa/D.P.C.M.28marzo1983.pdf. Accessed 17 Apr 2017. Howard J: Minimum latency & types or categories of cancer. 
In World Trade Center Health Program 2015. Weiss W. Cigarette smoking and lung cancer trends. A light at the end of the tunnel? Chest. 1997;111(5):1414–6. van Erp AM, Kelly FJ, Demerjian KL, Pope CA, Cohen AJ. Progress in research to assess the effectiveness of air quality interventions towards improving public health. Air Qual Atmos Health. 2011;5(2):217–30. Hedley AJ, Wong C-M, Thach TQ, Ma S, Lam T-H, Anderson HR. Cardiorespiratory and all-cause mortality after restrictions on sulphur content of fuel in Hong Kong: an intervention study. Lancet. 2002;360(9346):1646–52. Clancy L, Goodman P, Sinclair H, Dockery DW. Effect of air-pollution control on death rates in Dublin, Ireland: an intervention study. Lancet. 2002;360(9341):1210–4. Hystad P, Carpiano RM, Demers PA, Johnson KC, Brauer M. Neighbourhood socioeconomic status and individual lung cancer risk: evaluating long-term exposure measures and mediating mechanisms. Soc Sci Med. 2013;97:95–103. Sidorchuk A, Agardh EE, Aremu O, Hallqvist J, Allebeck P, Moradi T. Socioeconomic differences in lung cancer incidence: a systematic review and meta-analysis. Cancer Causes Control. 2009;20(4):459–71. Catalano R, Immè G, Mangano G, Morelli D, Tazzer AR. Indoor radon survey in Eastern Sicily. Radiat Meas. 2012;47(1):105–10. Dubois G. An overview of radon surveys in Europe. European Communities: In. Italy; 2005. Nakamura D. Ethylene capacity rising, margins continue to suffer. Oil Gas J. 2002;100(10):66–79. We are grateful to Professor Chung-Cheng Hsieh of the Department of Epidemiology at the Harvard T.H. Chan School of Public Health for his support for methodology of meta-analysis and Professor Jung-Der Wang of the Department of Public Health, College of Medicine at the National Cheng Kung University for his comments on our response letter. We also acknowledge the great support of Professor Andrea Baccarelli of the Department of Environmental Science at Columbia University on air quality regulations in Europe. This research received no financial support from any agency in the public, commercial, or not-for-profit sectors. No funding source or external agency played any role in study design, data collection, data analysis, data interpretation, or writing of this report. The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request. Department of Environmental Health, Harvard T.H. Chan School of Public Health, 665 Huntington Avenue, Building 1, Room 1401, Boston, MA, 02115, USA Cheng-Kuan Lin & David C. Christiani Department of General Medicine, Kaohsiung Medical University Hospital, No. 100, Tzyou 1st Road, Kaohsiung, 807, Taiwan Huei-Yang Hung Department of Epidemiology, Harvard T.H. Chan School of Public Health, 665 Huntington Avenue, Building 1, Room 1401, Boston, MA, 02115, USA David C. Christiani Department of Epidemiology Lazio Regional Health Service, Via Cristoforo Colombo, 112, Rome, Italy Francesco Forastiere Department of Occupational Safety and Health, College of Public Health, China Medical University, 91 Hsueh-Shih Road, Taichung, 40402, Taiwan Ro-Ting Lin Cheng-Kuan Lin CKL contributed to study design, data analysis, reporting results, data interpretation, and writing of the manuscript. HYH contributed to literature reviews, data preparation, data interpretation, and writing of the manuscript. DCC and FF contributed to data interpretation and writing of the manuscript. 
RTL contributed to study design, literature reviews, data interpretation, and writing of the manuscript. All authors read and approved the final manuscript. Correspondence to Ro-Ting Lin. A correction to this article is available online at https://doi.org/10.1186/s12940-017-0339-9. Assessment of study quality using the Newcastle-Ottawa Quality Assessment Scale for cohort and case-control studies. (DOCX 36 kb) Data quality assessment on the Newcastle-Ottawa Quality Scale. (DOCX 15 kb) Lin, CK., Hung, HY., Christiani, D.C. et al. Lung cancer mortality of residents living near petrochemical industrial complexes: a meta-analysis. Environ Health 16, 101 (2017). https://doi.org/10.1186/s12940-017-0309-2 Received: 26 May 2017 Accepted: 21 September 2017 Lung neoplasm
CommonCrawl
Statics of solids There are two simple ways of movement in a rigid system: translation and rotation. Any other possible form of movement, as complex as it may be, can always be considered as the combination of a rotation and a translation. It is not always possible to consider a body as a point particle; in general, we are interested not only in the displacement of an object but also in its rotation. The following definition is important: Rigid Body The model adopted for large objects considers that their size and shape are practically unaltered when subjected to external forces, although the actual molecular bonds are not perfectly rigid. Bridges, airplanes and many other objects are good examples of rigid bodies. When subjected to very strong forces, they deform only slightly or, in an extreme case, break. The two possible types of motion of a rigid body can be defined as: Translation: the movement which changes the position of an object, i.e., all points of the body are moved by the same distance in the same direction. Rotation: in rotational motion all points of the body move in circular arcs whose centers lie on the same axis, called the axis of rotation. Equilibrium of solids For a solid to be in equilibrium in an inertial frame, it is necessary to satisfy two conditions: one relating to translational equilibrium and the other to rotational equilibrium, defined as follows: Translational equilibrium The condition of translational equilibrium of a rigid body is that the center of mass is at rest or in uniform rectilinear motion, i.e., the resultant of the external forces acting on the body is zero. In mathematical form: $$\vec{F}_{net} = \vec{0}$$ where \(\vec{F}_{net}\) is the resultant of the forces on the system. Rotational equilibrium The condition of rotational equilibrium is that the body does not rotate or rotates uniformly (with constant angular velocity). For this to happen, in a rigid body under the action of a system of forces, the sum of the moments of all forces about any axis must be zero. In mathematical form: $$\vec{M}_{net} = \vec{0},$$ where \(\vec{M}_{net}\) is the resultant torque on the system. The figure above illustrates an extended body, a bridge, under the action of various forces. Since the system of interest is the bridge and we want it to remain in static equilibrium, the forces acting on it must meet the following conditions: for no translation, the resultant must be zero, $$\vec{N}^{(1)} + \vec{N}^{(2)} + \vec{F}_{T,B}^{(1)} + \vec{F}_{T,B}^{(2)} + \vec{P} = \vec{0},$$ and, for no rotation, the net torque on the bridge must also be zero, $$-N^{(1)} d_{1} + N^{(2)} d_{2} + F_{T,B}^{(1)} d_{3} + F_{T,B}^{(2)} d_{4} + P d_{5} = 0,$$ where each \(d_i\) is the distance of the corresponding force from the bridge's center of mass (the point of application of \(\vec{P}\)), so that \(d_5 = 0\). Lamy's Theorem If a rigid system is in equilibrium under the action of only three external forces, \(F_1\), \(F_2\) and \(F_3\), not parallel, the magnitude of each is proportional to the sine of the angle between the other two, namely: $$ \frac{F_1}{\sin(a)} = \frac{F_2}{\sin(b)} = \frac{F_3}{\sin(c)},$$ where \(a\), \(b\) and \(c\) are the angles between the forces, as shown in the figure below. Poinsot's Theorem Any system of forces, as complex as it seems, can always be reduced to a single force, known as the net force, and a torque. 
The torque and net force are orthogonal, always.
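As a small illustration of the two equilibrium conditions above, the sketch below sums the forces and their moments about a chosen point for a planar force system; the numerical values are made up purely for demonstration.

```python
# Check static equilibrium of a planar force system: the sum of forces and
# the sum of moments (torques) about a reference point must both vanish.
forces = [  # (point of application (x, y), force (Fx, Fy)) -- illustrative values only
    ((0.0, 0.0), (0.0, +600.0)),   # support reaction N1
    ((10.0, 0.0), (0.0, +400.0)),  # support reaction N2
    ((4.0, 0.0), (0.0, -1000.0)),  # weight P acting at the center of mass
]

def net_force(system):
    return (sum(f[0] for _, f in system), sum(f[1] for _, f in system))

def net_moment(system, pivot=(0.0, 0.0)):
    # 2D moment about `pivot`: M = rx*Fy - ry*Fx for each force.
    px, py = pivot
    return sum((x - px) * fy - (y - py) * fx for (x, y), (fx, fy) in system)

Fx, Fy = net_force(forces)
M = net_moment(forces, pivot=(4.0, 0.0))  # moments taken about the center of mass
print(f"net force = ({Fx:.1f}, {Fy:.1f}) N, net moment = {M:.1f} N*m")
# Both come out zero, so this force system keeps the body in static equilibrium.
```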
CommonCrawl
On a backward problem for two-dimensional time fractional wave equation with discrete random data
Nguyen Huy Tuan 1,2, Tran Ngoc Thach 3 and Yong Zhou 4,5
Institute of Research and Development, Duy Tan University, Da Nang 550000, Vietnam
Department of Mathematics and Computer Science, University of Science, VNU-HCM, Ho Chi Minh City, Vietnam
Applied Analysis Research Group, Faculty of Mathematics and Statistics, Ton Duc Thang University, Ho Chi Minh City, Vietnam
Faculty of Information Technology, Macau University of Science and Technology, Macau 999078, China
Faculty of Mathematics and Computational Science, Xiangtan University, Hunan 411105, China
*Corresponding author: Tran Ngoc Thach
Received February 2019; Revised May 2019; Published June 2020; Early access December 2019
This paper is concerned with a backward problem for a two-dimensional time fractional wave equation with discrete noise. In general, this problem is ill-posed; therefore, the trigonometric method in nonparametric regression, combined with the Fourier truncation method, is proposed to solve the problem. We also give some error estimates and convergence rates between the regularized solution and the sought solution under some assumptions.
Keywords: Backward problem, ill-posed, discrete random data, time fractional wave equation, truncation method.
Mathematics Subject Classification: 35K99, 47J06, 47H10, 35K05.
Citation: Nguyen Huy Tuan, Tran Ngoc Thach, Yong Zhou. On a backward problem for two-dimensional time fractional wave equation with discrete random data. Evolution Equations & Control Theory, 2020, 9 (2) : 561-579. doi: 10.3934/eect.2020024
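To give a flavor of the truncation idea mentioned in the abstract, the sketch below shows generic spectral truncation regularization for a linear ill-posed problem g_k = sigma_k * f_k + noise with rapidly decaying factors sigma_k: only the first N modes are inverted and the rest are discarded. This is an illustration of the general principle only, not the authors' scheme for the time fractional wave equation, and all numbers are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 1D example: recover Fourier coefficients f_k from data
# g_k = sigma_k * f_k + eps_k, where sigma_k decays quickly (ill-posedness).
K = 200
k = np.arange(1, K + 1)
f_true = 1.0 / k**2                      # smooth "sought solution" coefficients
sigma = np.exp(-0.1 * k)                 # decaying forward factors
eps = 1e-6 * rng.standard_normal(K)      # discrete random noise
g = sigma * f_true + eps

def truncated_inverse(g, sigma, n_modes):
    """Invert only the first `n_modes` coefficients; set the rest to zero."""
    f_hat = np.zeros_like(g)
    f_hat[:n_modes] = g[:n_modes] / sigma[:n_modes]
    return f_hat

# Naive inversion amplifies noise at high frequencies; truncation suppresses it.
f_naive = g / sigma
f_reg = truncated_inverse(g, sigma, n_modes=12)

print("error, naive inversion   :", np.linalg.norm(f_naive - f_true))
print("error, truncated at N=12 :", np.linalg.norm(f_reg - f_true))
```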
CommonCrawl
Simulation of leaf curl disease dynamics in chili for strategic management options Buddhadeb Roy1, Shailja Dubey1, Amalendu Ghosh1, Shalu Misra Shukla1, Bikash Mandal1 & Parimal Sinha1 Scientific Reports volume 11, Article number: 1010 (2021) Leaf curl, a whitefly-borne begomovirus disease, is the cause of frequent epidemics in chili. In the present study, transmission parameters involved in the tripartite interaction are estimated to simulate disease dynamics in a population dynamics model framework. The epidemic is characterized by a rapid conversion of the healthy host population into the infectious type. The infection rate, expressed as the basic reproduction number R0 = 13.54, indicated a high rate of virus transmission. Equilibrium populations of infectious hosts and viruliferous vectors were observed to be sensitive to the immigration parameter. A small increase in the immigration rate of viruliferous vectors increased the populations of both infectious hosts and viruliferous vectors. Migrant viruliferous vectors and the acquisition and transmission rates, as the major parameters in the model, indicate that the leaf curl epidemic is predominantly a vector-mediated process. Based on the underlying principles of temperature influence on vector population abundance and transmission parameters, the predicted spatio-temporal pattern of disease risk corresponds with the leaf curl distribution pattern in India. Temperature in the range of 15–35 °C plays an important role in the epidemic, as both vector population and virus transmission are influenced by temperature. Assessment of leaf curl dynamics would be a useful guide to crop planning and the evolution of efficient management strategies. Begomoviruses (family Geminiviridae) are the most destructive plant viruses and cause diseases like leaf curl, mosaic, yellow mosaic and yellow vein mosaic in numerous crop plants in the tropical and subtropical world. Leaf curl disease in chili is known to be caused by several begomoviruses, of which Chili leaf curl virus (ChiLCV) is the most predominant in India1,2,3. ChiLCV is efficiently spread by whitefly (Bemisia tabaci), which is abundant year-round in tropical and subtropical climates where a wide variety of hosts serves as reservoirs3,4,5,6,7. Leaf curl in chili (ChiLCD) is a common disease, but the frequent epidemic outbreaks witnessed recently in the chili growing areas of Central and Southern India are now a growing concern3,8,9,10. ChiLCD is the most damaging in terms of yield loss across the cropping regions11. In severe cases, even up to 100% losses of marketable fruit have been reported12. In the absence of epidemiological information, no definite management strategy has so far been evolved. The interaction or contact rate between viruliferous vectors and the growing host at initial stages is crucial in epidemic development, and thus the period of high contact rate appears to be the critical point for interventions13,14. Simulation of epidemics resulting from complex host–vector interaction is of great interest for on-farm management and assessment of agricultural policies and practices15,16,17. The timing of whitefly population abundance, especially the growth of viruliferous flies and its effect on host-associated leaf curl risk at regional and global scales, is of great interest for precise interventions. Interventions that ensure low contact rates and keep the proportion of infectious hosts at a minimum level are useful in designing precise management strategies18. 
As chili is cultivated in a wide variety of agro-ecosystems in India, disease dynamics in relation to the tripartite interaction and transmission behavior are useful for evolving management strategies universally applicable to all climates. Understanding the complex host–vector interactions is also necessary to assess emerging disease risk under rapidly changing cropping patterns as well as the global temperature rise scenario6,19,20. The leaf curl epidemic results from a complex tripartite interaction between host, vector, and virus in which virus transmission plays a central role15. Population dynamics models have been used to characterize epidemics resulting from tripartite interaction, especially for whitefly-transmitted begomoviruses in cassava21,22,23,24 and tomato25. Environmental factors, particularly temperature, play an important role in transmission and vector movements17,22,26. Virus transmission, latent period, infectious period, replication, titer, movement and symptom expression are directly linked to temperature27,28,29. For prediction of whitefly population abundance, a temperature-dependent model explaining the developmental rates of each life stage of B. tabaci is used30,31,32,33. The availability of large-scale geo-spatial temperature data facilitates prediction of the spatio-temporal distribution of whitefly abundance. To unfold the tripartite interaction in the agroecosystem, it is necessary to link the timing of whitefly abundance to virus transmission and actual disease risk. Establishment of disease dynamics in terms of epidemic parameters such as transmission and acquisition rates, and prediction of the time of vector abundance and the associated disease risk period, may help to develop a testing framework for evolving precise intervention strategies15,34. Further, assessment of temperature influence on transmission parameters and prediction of potential geographical distribution in terms of whitefly abundance are likely to shed light on the tripartite interaction, which may provide insights for developing a rational management policy. The current paper addresses the estimation of transmission parameters for simulation of leaf curl dynamics and explanation of the tripartite interaction in a population dynamics modeling framework. The established leaf curl disease dynamics were used to identify the most sensitive parameters contributing to the growth of the infectious host and viruliferous whitefly populations in the field. Temperature influence on transmission parameters and whitefly population abundance was used to approximate disease risk and to compare it with known leaf curl disease distribution patterns in the chili growing areas of the subcontinent. Leaf curl virus (ChiLCV) transmission in chili Transmission parameters, estimated under semi-controlled conditions, were observed to vary in response to temperature. The transmission or inoculation rate, expressed as the number of plants/vector/week, was 0.12 at 25 °C as compared to 0.04 and 0.07 at 15 °C and 35 °C, respectively (Fig. 1). The number of vectors used in the inoculation experiment influenced the transmission rate. The transmission rate was nine times higher when plants were inoculated with one viruliferous vector as compared to ten. Therefore, the transmission rate was affected by vector aggregation as well as temperature. The acquisition rate at 25 °C was observed to be 0.70 vector/plant/week, which was higher than that at 15 °C and 35 °C, where the values were in the range of 0.17 to 0.28. 
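To illustrate how a transmission rate of this kind can be backed out of weekly cage or plot counts, the sketch below rearranges the host equation dS/dt = −aVS (used for the model fitting described below) into a per-interval estimate of a. The counts are invented for demonstration, not the actual experimental data.

```python
import math

# Weekly proportions of healthy plants S(t) and viruliferous vectors V(t)
# observed over one interval -- illustrative numbers only.
s0, s1 = 0.95, 0.84       # healthy host proportion at the start/end of the week
v_mean = 0.70             # mean viruliferous vector level over the interval
dt = 1.0                  # interval length in weeks

# Integrating dS/dt = -a*V*S over the interval (V treated as roughly constant)
# gives ln(S0/S1) = a * V * dt, hence:
a_hat = math.log(s0 / s1) / (v_mean * dt)
print(f"estimated transmission rate a ~ {a_hat:.3f} per vector per week")
```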
Within 20 min, vectors were observed to acquire the virus, and by a 6 h feeding access period almost all the vectors were found to have acquired ChiLCV, as judged by positive results in the PCR test. The temperature response typically followed a unimodal pattern, as the transmission parameter values were observed to increase with temperature up to a favorable point. Higher transmission and acquisition efficiency at a moderate temperature (25 °C) than at a lower temperature (15 °C), with a decrease towards the higher range (35 °C), indicated that the transmission parameters are governed by optimal and suboptimal ranges of temperature. Transmission parameter estimates at 15 °C, 25 °C and 35 °C on susceptible chili cultivar (cvHPH-1041) under semi-controlled conditions (eight plants grown in an insect-proof cage; ten viruliferous whiteflies released through a suction device; 7 days after release, flies were killed by spraying Confidor (@ 1 ml/L)); blue bar = transmission rate (a) and red bar = acquisition rate (b); a and b estimated by fitting the equations dS/dt = −aVS and dV/dt = bXI = b(1 − V)I. It appeared that leaf curl disease or incidence distribution would likely be affected by the prevalent temperature in time and location, as virus transmission is influenced by temperature. Leaf curl incidence as proportion of leaf curl infection and viruliferous vector in chili field In the experimental field, chili plants were noted to have infection from the very first week after transplantation, as a proportion (0.01) of samples tested PCR-positive for ChiLCV (Table 1). The proportion of infection was observed to increase rapidly to 0.52 by the fifth week and reached 0.76 by the 10th week. The proportion of viruliferous whiteflies (V) at the time of transplantation was 0.65 and went on increasing, reaching 0.98 by the end of the 12th week. The whitefly population sampled from the field (brinjal, cucurbits, tomato and Leucaena leucocephala) was observed to be PCR-positive even at the time of transplantation, and the proportion of viruliferous vectors was estimated to be 0.65. Table 1 Proportion (P) of infected chilli plants (I) and viruliferous whitefly (V) population tested PCR-positive at the IARI-New Delhi experimental field, on cvHPH-1041 (Syngenta), transplanted in 2019. It appeared that chili planted during August was at a very high-risk period, when a high proportion of viruliferous vectors already existed in nearby fields or on border plants. Population dynamics model for simulation of leaf curl epidemic in chili For simulation of epidemics, population dynamics models were fitted to leaf curl incidence data observed under field conditions. Transmission parameters estimated under semi-controlled conditions were tuned to match the populations of healthy hosts and viruliferous vectors. Values of model parameters and variables tuned or adjusted for leaf curl epidemic analysis are given in Table 2. The population of healthy hosts (S) was predicted by fitting the host dynamics model (dS/dt = −aVS) and found to correspond with the observed proportion of healthy chili plants in the field (Fig. 2). The transmission or inoculation rate (a = 0.12) was adequate to simulate the healthy host population in the field, as indicated by the low RMSE. The acquisition rate (b = 0.70), estimated under semi-controlled conditions and used to simulate the viruliferous vector population by fitting the vector population dynamics model (dV/dt = b[1 − V]I/(Kx + 1 − V) + iv − ev − uV), was in good agreement, as indicated by the low RMSE value. 
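As a rough illustration of what this fitted two-equation system produces, the sketch below integrates dS/dt = −aVS and dV/dt = b(1 − V)I/(Kx + 1 − V) + iv − ev − uV with the tuned values quoted here (a = 0.12, b = 0.70, iv = 0.34, u = 0.34, Kx = 2.8), taking I = 1 − S. Treating the emigration term ev as negligible is an assumption made only for this sketch; the initial proportions follow Table 1.

```python
# Forward Euler integration of the fitted host/vector proportions over 12 weeks.
a, b = 0.12, 0.70            # transmission and acquisition rates (per week)
iv, ev, u = 0.34, 0.0, 0.34  # immigration, emigration (assumed 0 here), mortality
Kx = 2.8                     # Michaelis-Menten constant

S, V = 0.99, 0.65            # initial healthy-host and viruliferous-vector proportions
steps_per_week = 10
dt = 1.0 / steps_per_week

for week in range(1, 13):
    for _ in range(steps_per_week):
        I = 1.0 - S          # infectious host proportion
        dS = -a * V * S
        dV = b * (1.0 - V) * I / (Kx + 1.0 - V) + iv - ev - u * V
        S += dS * dt
        V = min(1.0, max(0.0, V + dV * dt))  # keep the proportion in [0, 1]
    print(f"week {week:2d}: healthy S = {S:.2f}, viruliferous V = {V:.2f}")
```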
The immigration rate (iv) was approximated to be 0.34 flies plant−1 week−1 with high infectivity (0.65), which led to a high infection rate in chili within a 2-month period of transplantation. Adjustment of the death rate to 0.34 indicated higher mortality than the usual rate. The dynamics of the vector population were observed to be better explained by the Michaelis–Menten parameter (Kx = 2.8) than by the simple kinetic parameter. Table 2 Model parameters and variables used for leaf curl epidemic analysis in chili. Simulated dynamics of healthy host (S) and viruliferous whitefly population (V) on susceptible chili cultivar (cvHPH-1041) grown (August–December, 2019) under natural infection conditions at the experimental field, New Delhi; model fitting parameters: transmission rate a = 0.12; acquisition rate b = 0.70; immigration rate (iv) = 0.34; mortality rate = 0.34; Kx = 2.8; RMSE = 0.0105 (host) and 0.0167 (vector). Estimation of the basic reproduction number (R0) indicated that from one infected host about 13.54 plants are infected in the next generation. The high transmission rate resulted in quick growth of the leaf curl epidemic. The population dynamics model fitted to the field data, based on the estimated transmission parameters (a and b), appeared to represent the dynamic tripartite interaction in the field. Thus the tripartite interaction captured in the model is likely to represent the dynamics of the host and viruliferous vector populations involved in the epidemic process. Critical epidemic parameters in leaf curl disease dynamics: sensitivity analysis The tripartite interaction captured in the dynamic model was used to assess the behavior of other parameters, such as immigration and emigration and the death and birth rates of whiteflies, which are difficult to observe but can be determined analytically. Sensitivity analysis was performed to determine how changes in parameter values affected the epidemic process, underpinning disease management clues. Changes in parameter values, particularly for the transmission rate (a), immigration rate (iv) and acquisition rate (b), were noted to have a marked influence on the populations of the host as well as the viruliferous vector. The transmission rate parameter (a) was observed to be responsive throughout the crop period, indicating the importance of vector activity in disease spread (Fig. 3A). The equilibrium populations of the infectious host as well as the viruliferous vector were sensitive to the immigration parameter (iv), as both populations increased with the increase in parameter value. The increase in the infectious host population was greater at a lower level (0.05 and 0.1) than at a higher level (0.4) of vector population (Fig. 3B). This indicated that immigration of viruliferous vectors is more important at the initial stage of virus transmission than at a later stage of epidemic development, when a high level of the viruliferous population may already be present. Measures to check migration of viruliferous vectors, or avoiding migrant viruliferous vectors, would be important in disease management. The immigration rate was also found to be a responsive parameter for the increase in the viruliferous vector population. A sudden jump in the viruliferous vector population is likely in the presence of a large infectious host population (I = 0.4), even at a low rate of vector migration (Fig. 3C). Therefore, immigration of viruliferous vectors was the most important parameter, acting concurrently in the flare-up of both the infectious host and vector populations. 
The acquisition rate was sensitive in increasing the viruliferous vector population at a higher level of infectious host (I = 0.4) as compared to the low level (I = 0.05–0.1) of population (Fig. 3D). Parameter sensitivity on host and viruliferous whitefly populations; A = transmission rate on healthy host; B = immigration rate on infectious host population; C = immigration rate on viruliferous vector and D = acquisition rate on viruliferous vector population. Therefore, the tripartite interaction for leaf curl virus in chili was simplified through a schematic diagram maintaining the qualitative behavior of the process, which is useful in understanding the dynamics of the transmission process and possible management intervention cues (Fig. 4). Schematic representation of dynamic tripartite interactions in the leaf curl epidemic on a susceptible chili cultivar. Left = interaction between host and vector leading to virus transmission; Right = migrant viruliferous vectors infect the host, which serves as a source of virus for aviruliferous vectors, which in turn infect healthy plants. 
Marked increase in temperature index observed from July and continued till October. Chili planting during this season demands attention to check potential vector activity. During January–June period, Northwestern, Eastern and parts of Central India appeared to be relatively less favorable for whitefly development and likely to be less favorable for disease transmission. Prediction of spatio-temporal distribution of whitefly abundance (measured as growth and development index) in relation to monthly temperature index based on the prevalent temperature data (2001–2018) across the major chili growing areas in India; spatial interpolation (IDW in ArcGis 10.0 http://www.arcgis.com); red and dark green colors indicate high and low risk, respectively for whitefly population as well as leaf curl disease in chili. Because of overlapping threshold temperatures of population abundance (11 °C, 22 °C and 35 °C) and transmission ability (measured at 15 °C, 25 °C and 35 °C) spatio-temporal distribution of vector abundance was appropriated as the measure of leaf curl risk. Therefore, spatio-temporal distribution pattern of vector abundance can be used as a reflection of probable disease risk or otherwise disease transmission ability. Seasonal variation in leaf curl incidence in the major chili growing areas was confirmed based on preliminary survey and geographical distribution pattern for the disease reported8,9. In Northern India the chili crop grown during August-November period is affected by severe leaf curl epidemic, whereas the crop transplanted during December–January receives fewer incidences when whitefly population is comparatively low. Southern region is perpetually affected by severe epidemics as favorable temperature prevails almost throughout the year. It became evident that the period of high vector population and leaf curl incidence are overlapping as whitefly abundance and virus transmission are favored by similar temperature ranges. Leaf curl disease dynamics has been simulated taking chili-ChiLCV-whitefly interaction into an epidemiological modeling frame work. Real process in tripartite interaction has been simplified based on mathematical models while maintaining the qualitative behavior of the mechanism useful in understanding the kinetics of transmission process. Immigration of viruliferous vectors and transmission parameters are the major parameters in the model noted to have a strong influence on the leaf curl epidemic. Otherwise, tripartite interaction is predominantly a vector mediation process. Current finding provides an important clue for emphasizing prevention of migrant vectors rather than the vector control normally thought for. Based on underlying principles of temperature influence on vector abundance and virus transmission spatio-temporal pattern of disease risk has been predicted. Spatio-temporal patterns for disease risk mapped across wide variants of agro-ecosystems may be used for crop planning if possible to avoid the period of high vector population. Modeling framework assumes susceptible chili cultivar and the viruliferous vectors are present in the agroecosystem. Therefore, parameter estimates are based on susceptible hosts. Adjustment of parameter values for host resistance may be required for general application of the finding. Epidemic analysis framework can be followed for testing and evolution of management strategies. A model framework of tripartite interaction necessary for evaluation of chili germplasms against leaf curl disease is now available. 
For leaf curl dynamics in chili, SI model frame is good enough to explain epidemics as E (latently infected) and R (removed or post-infectious) categories are not essential elements in the model. Transmission studies have indicated inoculated plants 2-days onwards become PCR- positive. It appears virus multiplication in infected chili plants is quick and can serve as a potent source within a short period. Weekly observation is taken for estimation of infection proportion which covered enough multiplication time. Therefore, it may be the reason the model could explain dynamics without consideration of E type as latent period got covered in the observation interval. Based on the present finding it is surmised that latent period for leaf curl virus in chilli is shorter as far as transmission capability is concerned. Infected chili plants (maintained in isolation) are observed to remain infectious for more than 5 months as both acquisition and transmission of the virus is possible from those plants. Chili being a crop of 4–5-month duration, SI model serves the purpose for epidemic analysis without inclusion of R type as post-infectious category does not exist. A high population of viruliferous vector as well as high disease incidence within a short period was an important feature in the epidemic. For such large number, both infectious host and vector are to be borne out of the field. It is possible under a situation where both the population goes increasing simultaneously as they are inter-dependent. Whitefly as persistent-circulative vector, requires virus acquisition from available infectious hosts in the field. Present finding suggests a gradual increase in both the population occurs in the field simultaneously. Basic reproduction number estimated has reflected a higher number of new infections in the next generation. Viruliferous migrant vectors make new infections or infectious hosts and new generation fly acquires virus from the newly infected host. Therefore, migrant viruliferous vector is the driving force to operate the tripartite interaction. We hypothesize a positive feedback mechanism is in work where few migratory viruliferous vectors produce few infectious hosts which in turn facilitates more viruliferous vectors and the process continues till restrictions are imposed in the agro-ecosystem. Positive feedback is a process in which the end products of action cause more of that action to occur in a feedback loop and leads to exponential growth35,36. Therefore, complete avoidance or intervention to prevent viruliferous vector population in the field is very important. As gradual built-up of both infectious host and vector happens at early stages of crop, complete prevention of migrant vectors at the start of chili planting is expected to keep the plants infection-free. Therefore, management of leaf curl is dependent on how best measures are taken to check migrant vector's entry into the field without interference in intercultural operations. Prevention of infectious vector's entry from the very beginning or complete avoidance of migrant vectors from the crop-start may be possible by covering the rows with synthetic plant covers that prevents insects but not interfering plant growth. Now-a-days plant covers are available. But the crop requires irrigation, fertilization and intercultural operations. Before going for management trials, there are important issues to resolve. Firstly, how long the plants are to be under complete cover as it hinders intercultural? 
Testing is required to establish the minimum period of cover after which only minimal tripartite interaction can occur without affecting yield and quality. Second, what precautions (say, mulching to check weeds) should be taken at transplanting so that plants are protected from the vector while intercultural operations remain possible without allowing the vector's entry? Third, the costs of the alternatives used for vector prevention require evaluation. Without sorting out these issues, management trials are again poised to remain open-ended and inconclusive. Vector aggregation affects the transmission parameters21. Variation in the transmission rate was observed in the current study: the parameter estimate (a) obtained with one whitefly was nine times higher than that obtained with ten flies (for which some degree of aggregation is expected). Host density also affects vector mobility and therefore virus transmission17. Inclusion of an aggregation parameter for both host and vector would therefore have given a more precise simulation of the dynamics. Both transmission and acquisition rates are affected by temperature, and the unimodal response may well be a general phenomenon for vector transmission of viruses. Similar transmission behaviour is reported for aphid-transmitted viruses such as banana bunchy top in banana27 and leaf roll in potato37, where temperature thresholds (below 10 °C and above 35 °C) affected both events, with the maximum transmission rate obtained at 25 °C, the normal temperature for aphid growth. For chili leaf curl, the greater virus transmission ability (transmission and acquisition efficiency) at 25 °C and the depressing effect of temperatures at or below 15 °C and at or above 35 °C suggest that vector population growth, as well as virus transmission, is strongly influenced by temperature. Studies of thermotolerance and gene expression following heat stress in the whitefly Bemisia tabaci B and Q biotypes have shown low sensitivity of transmission to low or high temperature38,39. The influence of temperature on virus transmission ability, and its approximation by vector abundance, is important information that enabled mapping of the spatio-temporal distribution. The geographical distribution of begomoviruses has been reported to be analogous to the occurrence of whitefly across the world40. It remains to be seen whether a similar pattern of temperature influence on transmission applies to other plant viruses; if it does, prediction of vector abundance and transmission ability may be used as a critical input for evolving management strategies against most virus diseases. Seasonal whitefly population abundance was predicted from the temperature thresholds for developmental rate. Potential vector growth, mapped as the potential risk of pest occurrence, is an important component of management decisions18. The spatio-temporal distribution of whitefly abundance and virus transmission ability, taken as an approximation of leaf curl risk, showed good correspondence with empirical observation as well as with available reports on leaf curl incidence3,12. For predicting potential disease risk from temperature thresholds, the characteristic unimodal developmental response to temperature was captured by a three-parameter beta function41. The three-parameter beta function is more robust for capturing the developmental rate, as its parameters have a meaningful biological interpretation, than two-parameter models30,33,42,43,44.
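To make the temperature-response calculation concrete, the following is a minimal sketch of the three-parameter beta developmental-rate function r(T), using the cardinal temperatures for whitefly development reported later in the Methods (11 °C, 22 °C and 35 °C). The analysis in this paper was carried out with R and GIS tools; this Java sketch, with placeholder hourly temperatures, is for illustration only and is not the authors' code.

```java
/** Minimal sketch of the three-parameter beta developmental-rate function r(T). */
public class BetaRate {
    // Cardinal temperatures for whitefly development taken from the Methods section (°C).
    static final double T_LOWER = 11.0, T_OPT = 22.0, T_UPPER = 35.0;

    /** Developmental rate at an hourly air temperature t (°C); 0 outside the thresholds. */
    static double r(double t) {
        if (t < T_LOWER || t > T_UPPER) return 0.0;
        double left     = (T_UPPER - t) / (T_UPPER - T_OPT);
        double right    = (t - T_LOWER) / (T_OPT - T_LOWER);
        double exponent = (T_OPT - T_LOWER) / (T_UPPER - T_OPT);
        return left * Math.pow(right, exponent);
    }

    public static void main(String[] args) {
        // Placeholder hourly temperatures; in the study these come from the Tmin/Tmax conversion.
        double[] hourlyTemps = {18.5, 21.0, 24.0, 27.5, 30.0, 33.0, 36.0};
        double index = 0.0;
        for (double t : hourlyTemps) index += r(t);   // accumulate the temperature index
        System.out.printf("Accumulated temperature index = %.3f%n", index);
    }
}
```

Summing r(T) over the hours of a week or a month gives the weekly (WTI) or monthly (MTI) temperature index used for the risk maps.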
To sum up, the leaf curl dynamics has been established and migrant viruliferous vectors are identified as the initiators of the epidemic process. For efficient management of the disease, emphasis must be on preventing the vector's entry right from the stage of seedling preparation and transplanting. Temperature plays an important role, particularly in the transmission process; leaf curl risk prediction based on temperature may therefore be a useful guide for evolving intervention strategies. The assessment framework may also be applicable to other viruses transmitted by whiteflies.

Population dynamical models for simulation of leaf curl epidemic
For population dynamics modeling, the host population is divided into healthy (S, virus free), latent (E, carrying virus), infectious (I, serving as virus source) and post-infectious or removed (R, no longer a virus source) types. Similarly, the whitefly population is divided into aviruliferous (X, virus free) and viruliferous (V, capable of transmission) types. Through the interaction, healthy hosts (S) are converted into the latent category (E) once visited and probed by a viruliferous whitefly (V), and after a period latent plants are transferred into the infectious (I) type. Plants that are past the infectious stage or dead are termed post-infectious or removed (R). For leaf curl dynamics, the following population dynamics models, previously used to explain tripartite interactions for plant viruses, were considered21,23,24,25,26.

$$\frac{dS}{dt} = -aVS$$

$$\frac{dI}{dt} = aVS$$

$$\frac{dX}{dt} = cX + cV - \frac{bXI}{K_x + X} - uV + i_x - e_x$$

$$\frac{dV}{dt} = \frac{bXI}{K_x + X} + i_v - e_v - uV = \frac{b(1 - V)I}{K_x + 1 - V} + i_v - e_v - uV$$

For chili, the host population was divided into only two categories, healthy (S) and infectious (I). Chili plants, once infected, were observed to remain infectious till the end of the crop season, so the post-infectious or removed (R) type was not included in the model. Similarly, the vector population was categorized into aviruliferous (X) and viruliferous (V) types. Since the vector was capable of virus transmission after a 20 min inoculation feeding period, a latent category was not included in the model. In the tripartite interaction, virus transmission between host and vector is governed by the parameters a (transmission or inoculation rate) and b (acquisition rate). Parameter a determines the rate of conversion of healthy hosts (S) into the infectious category (I), which depends on the number of healthy hosts (S) and the viruliferous vector population (V). The virus acquisition rate by an aviruliferous vector (X) is defined by parameter b, which depends on the number of healthy vectors (X) and the number of available infectious hosts (I). Other parameters included in the model are the rates of immigration and emigration of viruliferous (iv and ev, respectively) and aviruliferous (ix and ex, respectively) vectors, the birth and death rates c and u (for both aviruliferous and viruliferous vectors), and the Michaelis–Menten constant (Kx). The estimated parameters were tuned by fitting the models to field data in which infections were observed under natural conditions. With the model calibrated to the real situation, the behavior of the other parameters was assessed. Parameter sensitivity analysis was performed on the equilibrium points of the host and vector populations to see how changes in parameter values change the host (S* and I*) and vector (V*) populations.
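The models were fitted to field data in R (see the parameter-estimation subsection below). Purely as an illustration of how the reduced host–vector system behaves, the following sketch integrates dS/dt, dI/dt and the simplified dV/dt with a forward-Euler step; all parameter values are placeholders, not the estimates reported in this study.

```java
/** Forward-Euler sketch of the simplified chili–ChiLCV–whitefly SI model described above. */
public class LeafCurlModel {
    public static void main(String[] args) {
        // Placeholder parameter values for illustration only (not the paper's estimates).
        double a = 0.02;              // transmission (inoculation) rate
        double b = 0.05;              // acquisition rate
        double u = 0.1;               // vector death rate
        double iv = 0.01, ev = 0.005; // immigration/emigration of viruliferous vectors
        double kx = 1.0;              // Michaelis–Menten constant

        // State as proportions: S healthy hosts, I infectious hosts, V viruliferous vectors.
        double s = 1.0, i = 0.0, v = 0.01;
        double dt = 0.1;              // time step (days)

        for (int step = 0; step <= 1200; step++) {
            double dS = -a * v * s;
            double dI =  a * v * s;
            double dV =  b * (1 - v) * i / (kx + 1 - v) + iv - ev - u * v;
            s += dt * dS;
            i += dt * dI;
            v += dt * dV;
            if (step % 100 == 0)
                System.out.printf("t=%5.1f  S=%.3f  I=%.3f  V=%.3f%n", step * dt, s, i, v);
        }
    }
}
```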
Equilibrium points:

$$S^{*} = 1 - I^{*}$$

$$I^{*} = \frac{u\,e_v + u^{2} V - u\,i_v}{b\left(i_v - e_v + i_x - e_x - uV\right)}$$

$$X^{*} = \frac{e_x - i_x + V\left(u - c\right)}{c - bI}$$

$$V^{*} = \frac{u\,e_v - u\,i_v - I\left(i_x - e_x\right) + I\left(i_v - e_v\right)}{uI - u^{2}}$$

The equilibrium points for the host and vector populations (S*, I* and V*) were derived from the simpler equations of the tripartite interaction to avoid quadratic roots in the system solution. The influence of temperature on the major transmission parameters was assessed for predicting the probable geographical distribution of the disease, to explain the epidemic process and predict disease risk.

Parameter estimation: transmission (a) and acquisition (b) rates
The transmission or inoculation rate for ChiLCV was estimated under semi-controlled conditions26. Ten viruliferous whiteflies were allowed to feed on eight chili plants (susceptible cv HPH-1041, Syngenta) grown in earthen pots (diameter 10″) and maintained in an insect-proof cage. The experiment was performed under three levels of temperature exposure, viz. 15 °C, 25 °C and 35 °C. Whiteflies were killed after 7 days by spraying insecticide (@1 ml/L Confidor, Bayer). Leaves from each plant were collected for virus detection through PCR (ChiLCV-specific primer pairs) at 1, 2, 3, 4, 5, and 7 days post inoculation. The transmission rate (a) was estimated by fitting the dynamical model Eq. (1). For estimation of the acquisition rate (b), ten aviruliferous whiteflies were released on an infectious plant maintained in an insect-proof cage. The experiment was performed at three temperatures, viz. 15 °C, 25 °C and 35 °C, to assess the temperature influence on virus acquisition. Vectors were collected at 0, 10 and 20 min and 1, 2, 4, 8, 12, 24, and 48 h intervals for PCR detection (ChiLCV-specific primers). The acquisition rate b was estimated by fitting the dynamical model:

$$\frac{dV}{dt} = bXI = b\left(1 - V\right)I$$

where X = number of aviruliferous vectors at a given time; V = number of viruliferous vectors at a given time; I = number of infectious plants in the exposure at a given time; and b = acquisition rate for encounters of X with I. For fitting the dynamic models, R software was used. The precision of the model was tested by comparing simulated values of the healthy host and viruliferous vector populations with observed values estimated through sampling in the field. The RMSE was used as a measure of the discrepancy between observed values and model simulations:

$$RMSE = \sqrt{\frac{\sum\nolimits_{i=1}^{n}\left(x_{sim,i} - x_{obs,i}\right)^{2}}{n}}$$

where xsim,i is the simulated value for the healthy host or viruliferous vector, xobs,i is the observed value for the corresponding population, and n is the number of observations, with a value of 12 in the present evaluation.

Assessment of infectious host (I) and whitefly vector (V) populations in the experimental field
In a field plot experiment at IARI New Delhi (18.6317° N, 77.2241° E, 219.7 m), 500 seedlings of susceptible chili (cv HPH-1041) were transplanted in the second week of August 2019 and grown under the normal package of practices without any management interventions. For estimation of the host (infectious) and vector (viruliferous) populations, samples of leaves and flies were taken from the first week post transplanting. A weekly composite of 200 leaves was prepared, taking one leaf from the top of each of 200 randomly selected plants.
Similarly, a weekly composite of 100 whiteflies was prepared from flies collected (using a suction device) from five randomly selected plants. Both leaf and whitefly samples were grouped for testing for ChiLCV through PCR. Group testing was done following the scheme given in Table S1. A uniform PCR protocol (ChiLCV-specific primers) was followed for leaf and whitefly samples45 (Table S1). The proportion of infected leaves or whiteflies (P) was estimated using the following formula46,47,48,49:

$$P = 1 - \left(\frac{n - X}{n}\right)^{1/m}$$

where n = number of groups made for leaf and whitefly samples; m = number of leaves or whiteflies in each group; and X = number of groups testing PCR-positive. At transplanting, leaf as well as whitefly samples from the resident crops (cucurbits, brinjal and tomato) and the border tree (Leucaena leucocephala) were also collected and tested using similar sampling methods. The weekly proportions of infected leaves and flies based on the PCR test were taken as the proportions of the infectious host (I) and viruliferous vector (V) populations in the field. The estimated proportions of infected hosts and vectors were considered as I and V since, once infected, they remain infectious.

Basic reproduction number (R0) for leaf curl in chili
The basic reproduction number is calculated as the total number of infections arising from one newly infected host and one whitefly introduced into the healthy host population50. It represents the ratio between subsequent generations at the population level. R0 was calculated as the dominant eigenvalue of the next-generation matrix (A), where the time step of the matrix is the generation time51:

$$A = \begin{pmatrix} 0 & aS \\ bX & 0 \end{pmatrix}\begin{pmatrix} \tau_I & 0 \\ 0 & \tau_Y \end{pmatrix}$$

where a = transmission rate, b = acquisition rate, \(\tau_I\) = time period an infected host remains infectious, \(\tau_Y\) = time period the vector remains viruliferous, and S and X = healthy host and non-viruliferous vector population densities (number/m2), respectively. The R0 value was calculated from the equation50,52:

$$R_{0}^{2} - aS\tau_Y \cdot bX\tau_I = 0$$

Prediction of whitefly population abundance based on temperature influence
The whitefly population was observed during 2016–2019 in the IARI New Delhi experimental field, where regular planting of chili was followed. Flies from five plants were collected with a suction device at weekly intervals (Julian weeks) and the mean count was expressed as a normalized population per week53:

$$\text{Normalized population count} = \frac{Mean\ count - Min_{mean\ count}}{Max_{mean\ count} - Min_{mean\ count}}$$

To assess the effect of temperature on the developmental rate r(T) of the whitefly population, a non-linear beta function utilizing the cardinal temperatures of whitefly growth and development was used41,54:

$$r(T) = \left[\frac{T_{upper} - T_{air\text{-}h}}{T_{upper} - T_{opt}}\right]\left[\frac{T_{air\text{-}h} - T_{lower}}{T_{opt} - T_{lower}}\right]^{\left(T_{opt} - T_{lower}\right)/\left(T_{upper} - T_{opt}\right)} \quad \text{if } T_{lower} \le T \le T_{upper}, \text{ and } 0 \text{ otherwise}$$
where Tupper (35 °C), Tlower (11 °C) and Topt (22 °C) are the upper, lower and optimum threshold temperatures, respectively, for egg laying and adult development4,31,55,56, and Tair-h is the hourly air temperature (°C). Daily temperatures for the years 2016–2019 were collected from the Institute's meteorological observatory at IARI New Delhi (28.7041° N, 77.1025° E). Daily minimum (Tmin) and maximum (Tmax) temperatures were converted into hourly temperatures (Tair-h) using the standard formula57, which provides a smooth transformation from the daily minimum to the daily maximum air temperature:

$$T_{air\text{-}h} = \frac{T_{max} + T_{min}}{2} + \frac{T_{max} - T_{min}}{2}\cos\left(0.2618\left(h - TimeVar\right)\right)$$

where TimeVar is the hour of the day corresponding to the time of occurrence of Tmax. For estimating the temperature influence, the hourly temperature (Tair-h) was corrected for the upper and lower threshold limits of whitefly growth and development. Hourly estimates of r(T) were calculated, and daily r(T) values were obtained by summing the hourly values for the day. The weekly temperature index (WTI) was calculated by summing the daily r(T) values for the week:

$$WTI = \sum_{t_1}^{t_2} r(T)$$

where t1 and t2 are the first and last days of the week over which the daily r(T) values are summed. The accumulated temperature index (WTI) for the crop period (August–November) was related to the accumulated whitefly population count as a measure of environmental suitability for population abundance.

Prediction of spatio-temporal distribution of whitefly population abundance
For prediction of whitefly population abundance based on environmental suitability, daily minimum and maximum temperatures were downloaded from the IMD portal (http://mausam.imd.gov.in). Temperature data for the period 2001–2018 from 85 geo-referenced meteorological stations in important chili growing states of India were considered. For the probable distribution of the vector population, a monthly cumulative temperature index (MTI) was estimated:

$$MTI = \sum_{t_1}^{t_2} r(T)$$

where t1 and t2 are the first and last days of the month over which the daily r(T) values are summed. The cumulative temperature indices (MTI) were plotted using ArcGIS 10.0 (http://www.arcgis.com). For a continuous surface, the IDW (inverse distance weighting) interpolation technique was applied and spatial maps were generated. For verification of the predicted distribution, a preliminary survey for leaf curl incidence was conducted across the major chili growing areas.

Chattopadhyay, B. et al. Infectivity of the cloned components of a begomovirus: DNA beta complex causing chilli leaf curl disease in India. Adv. Virol. 153, 533–539 (2008).
Khan, M. S., Raj, S. K. & Singh, R. First report of tomato leaf curl New Delhi virus infecting chilli in India. Plant Pathol. 55, 289 (2006).
Senanayake, D. M. J. B., Mandal, B., Lodha, S. & Varma, A. First report of Chilli leaf curl virus affecting chilli in India. Plant Pathol. 56, 343 (2007).
Ellango, R. et al. Distribution of Bemisia tabaci genetic groups in India. Environ. Entomol. 44, 1258–1264 (2015).
Mishra, M. D., Raychaudhuri, S. P. & Jha, A. Virus causing leaf curl of chilli (Capsicum annuum L.). Indian J. Microbiol. 3, 73–76 (1963).
Navas-Castillo, J., Fiallo-Olive, E. & Sanchez-Campos, S. Emerging virus diseases transmitted by whiteflies. Annu. Rev. Phytopathol. 49, 219–248 (2011).
Ram, K., Singh, B. P. & Parihar, S. B. S. Population dynamics of whitefly (Genn.) on potato crop in relation to weather factors. Proc. Natl Acad. Sci. India 75, 43–46 (2005).
Chaubey, A. N. & Mishra, R. S. Survey of chilli leaf curl complex disease in eastern part of Uttar Pradesh. Biomed. J. Sci. Tech. Res. https://doi.org/10.26717/BJSTR.2017.01.000589 (2017).
Kumar, R., Kumar, V., Kadiri, S. & Palicherla, S. R. Epidemiology and diagnosis of chilli leaf curl virus in central India, a major chilli growing region. Indian Phytopathol. 69(4s), 61–64 (2016).
Uday, B. O. & Jayanta, T. Occurrence and distribution of chilli leaf curl complex disease in West Bengal. Biomed. J. Sci. Tech. Res. https://doi.org/10.26717/BJSTR.2018.03.000948 (2018).
Nigam, K., Suhail, S., Verma, Y., Singh, V. & Gupta, S. Molecular characterization of begomovirus associated with leaf curl disease in chilli. World J. Pharm. Res. 4, 1579–1592 (2015).
Thakur, H., Jindal, S. K., Sharma, A. & Dhaliwal, M. S. Chilli leaf curl virus disease: A serious threat for chilli cultivation. J. Plant Dis. Prot. 125, 239–249 (2018).
Cohen, S. & Antignus, Y. Tomato yellow leaf curl virus, a whitefly-borne geminivirus of tomatoes. Adv. Dis. Vector Res. 10, 259–288 (1994).
Cohen, S. Epidemiology of whitefly-transmitted viruses. In Whiteflies, their Bionomics, Pest Status and Management (ed. Gerling, D.) 211–225 (Intercept, Andover, 1990).
Jeger, M. J., Bosch, F. V. D., Madden, L. V. & Holt, J. K. A model for analysing plant-virus transmission characteristics and epidemic development. IMA J. Math. Appl. Med. Biol. 15, 1–18 (1998).
Jeger, M. J., Madden, L. V. & Bosch, F. V. D. Plant virus epidemiology: Applications and prospects of mathematical modeling and analysis to improve understanding and disease control. Plant Dis. 102, 837–854 (2018).
Zhang, X. S., Holt, J. J. & Colvin, J. A general model of plant-virus disease infection incorporating vector aggregation. Plant Pathol. 49, 435–444 (2000).
Lapidot, M. & Polston, J. E. Biology and epidemiology of Bemisia-vectored viruses. In Bemisia: Bionomics and Management of a Global Pest (eds Stansly, P. A. & Naranjo, S. E.) 227–231 (Springer, Berlin, 2010).
Anderson, P. K. et al. Emerging infectious diseases of plants: Pathogen pollution, climate change and agrotechnology drivers. Trends Ecol. Evol. 19, 535–544 (2004).
Jones, R. A. C. Plant virus ecology and epidemiology: Historical perspectives, recent progress and future prospects. Ann. Appl. Biol. 164, 320–347 (2014).
Chan, M. & Jeger, M. J. An analytical model of plant virus disease dynamics with roguing and replanting. J. Appl. Ecol. 3, 413–427 (1994).
Holt, J. & Chancellor, T. C. B. Simulation modelling of the spread of rice tungro virus disease: The potential for management by roguing. J. Appl. Ecol. 33, 927–936 (1996).
Holt, J. K., Jeger, M. J., Thresh, J. M. & Otim-Nape, G. W. An epidemiological model incorporating vector population dynamics applied to African cassava mosaic virus disease. J. Appl. Ecol. 34, 793 (1997).
Kinene, T., Luboobi, L., Nannyonga, B. & Mwanga, G. D. A mathematical model for the dynamics and cost effectiveness of the current controls of cassava brown streak disease in Uganda. J. Math. Comput. Sci. 5, 567–600 (2015).
Holt, J., Colvin, J. & Muniyappa, V. Identifying control strategies for tomato leaf curl virus disease using an epidemiological model. J. Appl. Ecol. 36, 625–633 (1999).
Nakasuji, R., Miyai, S., Kawamoto, H. & Kiritani, K. Mathematical epidemiology of rice dwarf virus transmitted by green rice leafhoppers: A differential equation model. J. Appl. Ecol. 22, 839–847 (1985).
Anhalt, M. D. & Almeida, R. P. Effect of temperature, vector life stage, and plant access period on transmission of banana bunchy top virus to banana. Phytopathology 98, 743–748 (2008).
Chellappan, P., Vanitharani, R. & Fauquet, C. M. MicroRNA-binding viral protein interferes with Arabidopsis development. Proc. Natl. Acad. Sci. USA 102, 10381–10386 (2005).
Kassanis, B. Therapy of virus infected plants. J. R. Agric. Soc. Engl. 126, 105–114 (1965).
Bonato, O., Amandine, L., Claire, V. & Jacques, F. Modeling temperature-dependent bionomics of Bemisia tabaci (Q-biotype). Physiol. Entomol. 32, 50–55 (2007).
Butler, G. D. J., Henneberry, T. J. & Clayton, T. E. Bemisia tabaci (Homoptera: Aleyrodidae): Development, oviposition, and longevity in relation to temperature. Ann. Entomol. Soc. Am. 76, 310–313 (1983).
Gilioli, G., Pasquali, S., Parisi, S. & Winter, S. Modelling the potential distribution of Bemisia tabaci in Europe in light of the climate change scenario. Pest Manag. Sci. 70, 1611–1623 (2014).
Han, E. J., Choi, B. R. & Lee, J. H. Temperature-dependent development models of Bemisia tabaci (Gennadius) (Hemiptera: Aleyrodidae) Q biotype on three host plants. J. Asia-Pac. Entomol. 16, 5–10 (2013).
Jones, R. A. C. Trends in plant virus epidemiology: Opportunities from new or improved technologies. Virus Res. 186, 3–19 (2014).
Crespi, B. J. Vicious circles: Positive feedback in major evolutionary and ecological transitions. Trends Ecol. Evol. 19, 627–633 (2004).
Thomas, R., Thieffry, D. & Kaufman, M. Dynamical behavior of biological regulatory networks. 1. Biological role of feedback loops and practical use of the concept of the loop-characteristic state. Bull. Math. Biol. 57, 247–276 (1995).
Singh, N. P., McCoy, M. T., Tice, R. R. & Schneider, E. L. A simple technique for quantitation of low levels of DNA damage in individual cells. Exp. Cell Res. 175, 184–191 (1988).
Mahadav, A., Kontsedalov, S., Czosnek, H. & Ghanim, M. Thermotolerance and gene expression following heat stress in the whitefly Bemisia tabaci B and Q biotypes. Insect Biochem. Mol. Biol. 39, 668–676 (2009).
Pusag, J. C., Hemayet, J. S. M., Lee, K. S., Lee, S. & Lee, K. Y. Upregulation of temperature susceptibility in Bemisia tabaci upon acquisition of Tomato yellow leaf curl virus (TYLCV). J. Insect Physiol. 58, 1343–1348 (2012).
Brown, J. K., Frolich, D. R. & Rosell, R. C. The sweet potato or silver leaf whiteflies: Biotypes of B. tabaci or a species complex? Ann. Rev. Entomol. 40, 511–534 (1995).
Yan, W. & Hunt, L. A. An equation for modelling the temperature response of plants using only the cardinal temperatures. Ann. Bot. 84, 607–614 (1999).
Briere, J. F., Pracros, P., Le Roux, A. Y. & Pierre, J. S. A novel rate model of temperature-dependent development for arthropods. Environ. Entomol. 28, 22–29 (1999).
Shi, P., Feng, G., Sun, Y. & Chen, C. A simple model for describing the effect of temperature on insect developmental rate. J. Asia-Pac. Entomol. 16, 5–10 (2011).
Tsai, J. H. & Wang, K. H. Development and reproduction of Bemisia argentifolii (Homoptera: Aleyrodidae) on five host plants. Environ. Entomol. 25, 810–816 (1996).
Mandal, B. Epidemics, transmission, characterisation and diagnosis of begomoviruses in chilli leaf curl in India. In Asian Solanaceous Round Table-2017 (ASRT-2): 'Challenges and Future Trends in R&D for Solanaceous Crops in Asia', February 23–25, 2017, Ruangkaw Room, Vajiranusorn Building, Faculty of Agriculture, Kasetsart University, Bangkok, Thailand. https://apsaseed.org/wp-content/uploads/2017/02/5.-Epidemics-Transmission-Characterisation-and-Diagnosis-of-begomoviruses-in-chilli-leaf-curl-in-India-Bikash-Mandal.pdf (2017).
Hepworth, G. Exact confidence intervals for proportions estimated by group testing. Biometrics 52, 1134–1146 (1996).
Hughes, G. & Gottwald, T. R. Survey methods for assessment of citrus tristeza virus incidence. Phytopathology 88, 715–723 (1998).
Moran, J. R., Wilson, J. M., Garrett, R. G. & Smith, P. R. ELISA indexing of commercial carnations for carnation mottle virus using a urease-antibody conjugate. Plant Pathol. 34, 467–471 (1985).
Walter, S. D., Stephen, W. H. & Barry, J. B. Estimation of infection rates in populations of organisms using pools of variable size. Am. J. Epidemiol. 112, 124–128 (1980).
van den Bosch, F. & Jeger, M. J. The basic reproduction number of vector-borne plant virus epidemics. Virus Res. 241, 196–202 (2017).
Heesterbeek, J. A. P. A brief history of R0 and a recipe for its calculation. Acta Biotheor. 50, 189–204 (2002).
van den Bosch, F., McRoberts, N., van den Berg, F. & Madden, L. V. The basic reproduction number of plant pathogens: Matrix approaches to complex dynamics. Phytopathology 98, 239–249 (2008).
Youn, E. & Jeong, M. K. Class dependent feature scaling method using naïve Bayes classifier for text data mining. Pattern Recogn. Lett. 30, 477–485 (2009).
Magarey, R. D., Sutton, T. B. & Thayer, C. L. A simple generic infection model for foliar fungal plant pathogens. Phytopathology 95, 92–100 (2005).
Horowitz, M., Shimoni, Y., Parnes, S., Gotsman, M. S. & Hasin, Y. Heat acclimation: Cardiac performance of isolated rat heart. J. Appl. Physiol. 60, 9 (1986).
Wang, K. & Tsai, J. H. Temperature effect on development and reproduction of silverleaf whitefly (Homoptera: Aleyrodidae). Ann. Entomol. Soc. Am. 89, 375–384 (1996).
Bregaglio, S., Donatelli, M., Confalonieri, R., Acutis, M. & Orlandini, S. An integrated evaluation of thirteen modelling solutions for the generation of hourly values of air relative humidity. Theor. Appl. Climatol. 102, 429–438 (2010).

The authors are thankful to the Head, Division of Plant Pathology, the Joint Director (R) and the Director, IARI New Delhi, for providing support and encouragement in the study. The authors are indebted to DBT for the financial support and encouragement provided for the work. Funding was provided by the Department of Biotechnology, Ministry of Science and Technology, India (BT/PR28199/AGIII/103/990/2018).
Division of Plant Pathology, ICAR-Indian Agricultural Research Institute, New Delhi, 110012, India: Buddhadeb Roy, Shailja Dubey, Amalendu Ghosh, Shalu Misra Shukla, Bikash Mandal & Parimal Sinha.
P.S., B.M. and A.G. conceived the idea and designed the experiments. B.R. and S.D. carried out the experiments. B.R., S.M.S. and P.S. interpreted the results and P.S. analysed the data. P.S. and B.R. wrote the manuscript. B.M. and A.G. reviewed the manuscript. Correspondence to Parimal Sinha. Supplementary Information.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. Roy, B., Dubey, S., Ghosh, A. et al. Simulation of leaf curl disease dynamics in chili for strategic management options. Sci Rep 11, 1010 (2021). https://doi.org/10.1038/s41598-020-79937-0
Multi-dimensional geospatial data mining in a distributed environment using MapReduce
Mazin Alkathiri (ORCID: orcid.org/0000-0003-1195-5550)1, Abdul Jhummarwala2 & M. B. Potdar2
Journal of Big Data volume 6, Article number: 82 (2019)
Data mining and machine learning techniques for processing raster data consider a single spectral band of data at a time. The individual results are combined to obtain the final output. The essence of the related multi-spectral information is lost when the bands are considered independently. The proposed platform is based on the Apache Hadoop ecosystem and supports performing analysis on large amounts of multispectral raster data using MapReduce. A novel technique of transforming the spectral space to the geometrical space is also proposed. The technique allows multiple bands to be considered coherently. The results of clustering 10^6 pixels of multiband imagery have been tested against widely used GIS software, and other machine learning methods are planned to be incorporated in the platform. The platform is scalable to support tens of spectral bands. The results from our platform were found to be better and are also available faster due to the application of distributed processing.
Satellites orbiting the Earth with their remote sensing capabilities capture information about the geography of the Earth in the form of remotely sensed images. These images are representations of the Earth's surface as seen from space and contain intensities of physical quantities such as the solar radiance reflected from the ground, emitted infrared radiation or backscattered radar intensity [14]. This information is captured by multiple sensors on board the satellites, which record radiation at various wavelengths and provide it in the form of multispectral raster data. The use of multiple sensors for the same geographic area captures various types of information, including thermal imaging (infrared), visible radiation (blue, green and red), etc., which is stored as individual bands [2]. Multi-spectral and multi-dimensional data is usually available in the form of multi-band georeferenced tagged image file format (GeoTIFF) files, an extension of the TIFF format. A Landsat 7 image comes in the form of a GeoTIFF file consisting of 8 spectral bands, and each spectral band stores a different wavelength scattered or emitted from the Earth's surface. The earlier GeoTIFF standard was limited to supporting 4 GB of raster data; it has been superseded by the current Big GeoTIFF standard, which allows storage of image files larger than 4 GB in the TIFF container [17]. This was required due to the increasing spatial resolution and number of concurrent bands that need to be stored for a geographic area. There is also a large availability of giga-pixel resolution (10^9 pixels) images from domains such as biotechnology and forensics, which are also stored in the Big GeoTIFF format. Organizing and managing this kind of data is in itself a huge task, and processing it requires designing parallel and distributed systems which allow faster processing of terabytes of data and provide the results in a limited amount of time. A raster image is a representation of geographic objects in a two-dimensional scene: a two-dimensional array of individual picture elements called pixels arranged in columns and rows [45]. Each pixel individually represents information about an area on the Earth's surface.
The information about the area is represented by an intensity value and a location address in a two dimensional image. While the intensity value is represented by the measured reflectance, the location is represented by (longitude, latitude) value for a geo-referenced image [43]. A single pixel in a multiband image has several values depending on the number of sensors which captured information for that geographic location. The individual bands are usually used independently depending upon the geospatial analysis required and the intermediary outputs combined to form the final results. All of these bands when used in conjunction for geospatial analysis will provide more accurate representation about the phenomena on the Earth's surface. There are many techniques available to store and organize the multiband data (pixels) of an image in binary files such as band sequential (BSQ), band interleaved by pixel (BIP), and band interleaved by line (BIL). The BIL format stores the data of the first pixel from all the different bands in the first row, and the data of the second pixel from all the different bands in the second row and so on [20]. One example of such format is the sensor's data that comes from French satellite [also known as SPOT (Satellite Pour l'Observation de la Terre which translates to Satellite for observation of Earth) data] [64]. This study uses a custom input format which is similar to BIL to overcome some of the difficulties which are faced when processing such binary data formats in a MapReduce environment. There is a separate section ("Geometrical space to spectral space (preparation phase)") discussing the details and the data format required as input to the developed mining framework. The paper has the following structure. A review of the advancements in Big Geospatial data mining has been presented in "Related works" section. The novel approach of converting multispectral data to geometrical space is discussed and developed in "Proposed methodology" section. "Result and discussion" section provides an analysis summary of the obtained results. Finally, "Conclusion and future work" section concludes the paper and provides directions for further research. Due to the requirements for various applications related to planning and decision making, the Landsat 7 program was launched in 1999 [29, 36]. The planning and decision making include landuse change analysis, environment conservation and impact assessment, wildlife habitat mapping, disaster management, urban sprawl analysis, agriculture and horticulture, natural resource management and monitoring, etc. The Landsat 7 program served to make a complete temporal archive of cloud free images of the Earth and is still active after the launch of its successor Landsat 8 in 2013 [15]. The Earth's surface as depicted by true colour on widely used web mapping services like Google Maps/Earth, Bing Maps, Yahoo Maps, etc., is based on colour enhanced Landsat 7 satellite imagery. In addition to the satellite imagery, geospatial data is also being acquired by use of aircrafts, unmanned aerial vehicles (drones) and ground based operations such as land surveys. Big-geospatial data Collectively the geospatial data available from several sources has grown into petabytes and increases by terabytes in size every day [24]. The increase in the sources of data and its acquisition have been exponential as compared to the development of processing systems which can process the data in real time. 
Large amount of processed geospatial data is available for development of virtual globe applications from Nasa World Wind, temporal datasets archived at Google Earth Engine, etc. Beside these crowd sourcing online efforts such as OpenStreetMaps [5, 30] and Wikimapia have also assimilated terabytes of geospatial data. The data available from these efforts may have been derived from satellite imagery but is only applicable for a few applications such as for routing and navigation purposes. USGS [47], an organization established for the development of public maps and geo-sciences expertise has started providing access to applications and data related to disaster management during earthquakes, landslides, volcanoes, etc. but is limited in providing support for processing related to other planning and decision making applications. Private organizations such as earth observation system (EOS) have started providing automated on-the-fly earth observation (EO) imagery processing and analysis. Their products include providing realtime processing of classic GIS algorithms [21] on several of the open data sets available from the earth observation satellites (EOS). The amount of geospatial data available is not just an increase in size but with availability of higher resolution has also increased the complexity of processing it and led to the geospatial "Big Data" phenomenon. According to Bhosale and Gadekar [8], the term 'Big Data' describes innovative techniques and technologies to capture, store, distribute, manage and analyze petabytes or larger-sized datasets with high-velocity and different structures. It is the data sets or combinations of data sets whose size (volume), complexity (variability), and rate of growth (velocity) make them difficult to be captured, managed, processed or analyzed by conventional technologies and tools. It has been stated for geospatial data in [48] that, "the size of spatial big data tends to exceeds the capacity of commonly used spatial computing systems owing to their volume, variety and velocity", which truly encompasses the amount of spatial data available today and the complexity of the operations to be performed into the boundaries of the big data problem. The authors in [44] have supported that spatial data are large in quantity and are complex in structures and relationships. The study in [49] draws our attention to spatial interaction, spatial structure and spatial processes in addition to the spatial location which forms the basis of any spatial processing system. The richness of information contained in raster data is only limited by the number of captured bands and its resolution. To derive the full benefits by processing such data it has become of utmost importance to overhaul existing multi-dimensional approaches and consider the geospatial characteristic of the data. This will not only ease and simplify the way geospatial data is processed and analyzed but will also allow to further exploit the available richness of data. The variety in attributes that can be gathered from multiple spectrums for a geographic feature must be studied, visualized, interpreted and mined so as to extract qualitative, meaningful, useful information and new relationships. The results can provide insights into accurate geographic phenomena which is not available from analysis of individual bands. With the accumulation of large amount of data comes the difficult challenge of processing it and derive meaningful information which can be used for planning and decision making. 
The main aim of this work is to discover hidden knowledge from big geospatial data by considering multiple dimensions collectively for a geographical area rather than processing the bands individually. The novel approach of converting from spectral space to geometrical space preserves the essential multispectral characteristics of the data. The work addresses the shortcomings of existing approaches to processing big geospatial data, and new distributed techniques required for processing both raster and vector data are presented. In the present work, k-means clustering is described in detail. The developed techniques can be adapted to several spatial data mining tasks, including spatial prediction, spatial association rule mining, and temporal geo-visualization. For processing raster data, image processing techniques are well developed and are available in open source packages such as OpenCV and Scilab and in other closed source packages and libraries. These are limited in scale, and processing giga-pixel images such as large multiband GeoTIFF files requires tens of hours if not days. This inhibits the discovery of important knowledge, the real-time provision of which may be highly useful in applications such as disaster relief. Knowledge Discovery in Databases (KDD) is defined as the process of discovering useful knowledge from a collection of data; it is closely related to data mining and is important for spatial data as well [35, 45]. The data mining process has been depicted in Fig. 1. It includes data preparation and selection, data cleansing, incorporating prior knowledge on data sets and interpreting accurate solutions from the observed results [63]. The data mining life cycle (DMLC) starts with understanding the inputs or requirements, proceeds to the formation of the system, and ends with the final stage of deployment. Each of these phases may be repeated in case the requirements change. Geospatial data mining is an extension of the classical data mining approach with the addition of the geospatial component, which requires the application of complex image processing and spatial data processing techniques.
(Figure 1: Data mining life cycle.)

Big-geospatial data processing
The classical data mining approach is no longer fit for the processing of big data and has been modified and adapted by many frameworks developed to utilize the computing and storage available from distributed computing devices [37]. Big geospatial data adds another level of complexity to this big data ecosystem, which now also requires considering the spatial and geographical location. This has furthered the complexity of the big data challenge [66]. A framework such as GS-Hadoop [31] can process millions of geospatial vector features in a matter of minutes but is limited to processing only vector data. De Smith et al. [19] have addressed the full spectrum of spatial analysis and associated modeling techniques that were available at the time with widely used geographic information systems (GIS) and associated software. The existing geospatial data processing systems are overwhelmed by the amount of data available, and the complex operations required to be performed demand the urgent development of tools capable of managing and analyzing such big geospatial data [32]. Bradley et al. [12] aim to reduce the size of the data to be processed by identifying regions of the data that can be compressed, regions that must be kept, and regions that can be discarded.
The transmission of high resolution raster images over low-bandwidth connections requires a great amount of time. This problem can be mitigated to a small extent by transmitting a series of low resolution approximations which converge to the final image [52]. Low bandwidth connections are no longer of much concern due to the development of faster networks, with internet bandwidth available to organizations at gigabit speeds [33, 54]. The above mentioned studies address a few of the shortcomings of traditional geoprocessing, while some others [25, 68] extend the geoprocessing functionality to work upon parallel and distributed processing systems. Besides the complex processing of geospatial data, a considerable amount of work has been done on the use of multidimensional data structures in information processing systems (IPS), with applications in the fields of business analysis, astronomy, geomatics, bioinformatics, etc. [9]. The term "multidimensional" essentially describes the way in which numerical information can be categorized and viewed [16]. It has already been established that large geospatial databases consist of multidimensional information. Several multidimensional models have also been proposed for establishing multidimensional databases (MDB) and on-line analytical processing (OLAP) [55]. It has also been stated that traditional database systems are inappropriate for the storage and analysis of multidimensional data, since these systems are optimized for online transactional processing (OLTP), in which an enormous number of concurrent transactions, each containing normally few records, are involved. Multidimensional data cannot be stored in OLTP databases. Geodatabases store multidimensional geospatial data with associated vector attributes and features for the raster data [6].

Distributed processing of geospatial data
The distributed data processing framework MapReduce was first introduced by Google, and it was later incorporated into Hadoop as one of its strongest capabilities [46, 60]. Apache Hadoop is open-source software for reliable, scalable, distributed computing on commodity hardware [27, 56]. Hadoop is one of the most widely used distributed processing frameworks developed to address the challenges of big data. The framework is extensible and can be adapted to support big geospatial data. The main concept of the framework is segregated into two parts, viz. the Hadoop Distributed File System (HDFS) for storing data and the MapReduce programming model for processing the data, which is usually stored on HDFS. The framework has subsequently been developed by the Apache Software Foundation. Apache defines Hadoop as a software library framework that allows for the distributed processing of large datasets across clusters of computers using simple programming models [26]. It is important to highlight that Hadoop initially supported only MapReduce-type applications but has later been extended to support other programming paradigms [28]. Hadoop is capable of storing and managing large volumes of data efficiently, using a large number of commodity hardware nodes forming a Hadoop cluster. The same cluster is used for processing the data stored locally on the nodes to reduce network communication. Hadoop can be used for many applications involving large volumes of geospatial as well as spatio-temporal data analysis, biomedical imagery analysis, and the simulation of various physical, chemical and computationally intensive biological processes [2, 13, 18, 38].
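To make the programming model concrete, the following is a minimal sketch of a Hadoop MapReduce job driver of the kind such a platform would use. The mapper and reducer class names are hypothetical placeholders and not classes published with the paper; input and output are paths on HDFS.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

/** Skeleton of a Hadoop MapReduce driver wiring a mapper and a reducer to HDFS paths. */
public class MiningJobDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "multiband-raster-mining");
        job.setJarByClass(MiningJobDriver.class);
        job.setMapperClass(KMeansMapper.class);      // hypothetical mapper class
        job.setReducerClass(KMeansReducer.class);    // hypothetical reducer class
        job.setOutputKeyClass(IntWritable.class);
        job.setOutputValueClass(Text.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));   // indexed pixel data on HDFS
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // output directory on HDFS
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```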
The MapReduce model of programming has become one of the best ways to process big data, which is inherently stored by Hadoop on its own distributed file system (HDFS) [10, 11]. The MapReduce model takes care of managing the whole process, from receiving the data, through processing it, to aggregating the results to form a single output. It takes care of distributing the data and managing the distributed resources throughout the whole process. Applications which are required to work with big data benefit hugely from HDFS, as it provides high-throughput access and streaming capabilities for large amounts of data. It has been developed to be fault-tolerant, can run on cheap commercial off-the-shelf (COTS) hardware, and supports streaming data. The main MapReduce phases are the Map and the Reduce phases, which have been depicted in Fig. 2.
(Figure 2: Schematic diagram for processing big data using Map and Reduce.)
The key-value approach used by the Map phase groups input values with their associated keys. The keys, along with their sets of values, are sent to the Reduce phase, where the required functions are applied to those groups of values to obtain the needed output. There are other phases in MapReduce, such as the Shuffle, Sorting, Partitioner and Combine phases. The Combiner collects the different (key, value) pairs, groups similar keys and sends them to the required node for the reducer. Keys and values are sorted during the sorting phase. Appropriate partitioning logic can also be supplied to ease the transfer of data between the nodes. To identify spatial patterns, most well-known statistical techniques are based on the concept of intra- and inter-cluster variances (like the k-means algorithm or the Empirical Orthogonal Function) [7]. There are various classification and clustering algorithms supported by Mahout, a data mining platform built on Hadoop. It supports k-means, canopy, fuzzy k-means, naive Bayes, etc. [26]. It should also be noted that these algorithms can easily be used with any framework based on Hadoop, such as Apache Spark. The k-means algorithm groups a set of data into K sub-groups, or K clusters, where the data can be in N dimensions and within each cluster the sum of squares is minimized. Zhang et al. [67] improve the choice of the initial focal points and determine the K value through simulation experiments, while [1] propose a new cost function and distance measure based on the co-occurrence of values that works well for data with mixed numeric and categorical features. Sarode and Mishra [50] have mentioned that "It is not practical to require that the solution has minimal sum of squares against all partitions", except if the size of the data and its dimensionality are very small and the number of clusters K is two. Eldawy and Mokbel [23] developed SpatialHadoop, a comprehensive extension to Hadoop supporting geo-spatial vector data. It provides spatial constructs and awareness of spatial data inside the Hadoop code base. SpatialHadoop is composed of four main layers: the language, storage, MapReduce and operations layers. The language layer provides Pigeon, a high-level SQL-like language. The storage layer employs a two-level index structure of global and local indexing. Two components, spatialFileSplitter and spatialRecordReader, are introduced through the MapReduce layer. Finally, the operations layer includes some basic spatial operations like range query, k-nearest neighbours (kNN), spatial join, etc.
SpatialHadoop is meant for spatial data, but it supports only single-dimension vector data, and it does not have any of the data mining (classification and clustering) techniques listed above which may be required for processing satellite imagery [3]. Mennis and Guo [39] described the urgent need for effective and efficient methods to extract unknown and unexpected information from datasets of unprecedentedly large size, having millions of observations and high dimensionality of hundreds of variables, coming from heterogeneous data sources and having other complex attributes. Yao [65] stressed the development of spatial data infrastructure and of efficient and effective spatio-temporal data mining methods. CLARANS [42] is based on randomized search, building on PAM and CLARA, and is used for cluster analysis. Vatsavai et al. [58] studied the IO and CPU requirements of spatial data mining algorithms for analyzing big spatial data and presented the applications of biomass monitoring, complex object recognition, climate change studies, social media mining and mobility applications. STING [61] uses a hierarchical statistical information grid based approach for spatial data mining, and STING+ [62] extends the approach by suspending the effects of updates in the hierarchy until their cumulative effect triggers multiple layers of the hierarchy. Bédard et al. [4] highlighted the requirement of efficient spatial data mining methods to cope with the huge size of spatial data, which is increasing rapidly in spatial data warehouses. To analyse the collected data at multiple resolutions, it is necessary to develop techniques whose performance is independent of the size of the spatial data. A clustering model represented using choropleth maps to identify spatial relationships between the clusters obtained by spatial data mining has been developed using ArcView (a desktop GIS) and highlights the importance of correct visualization of geospatial data [41]. The PixelMap [34] technique combines kernel-density-based clustering with visualization for displaying large amounts of spatial data. Visualization of big spatial datasets at various levels is an important requirement. The output from the proposed mining techniques can be scaled to various levels and passed to a desktop GIS for visualization.

Proposed methodology
In this paper, the development and implementation of a distributed framework for mining multiband raster geospatial data is described. The framework has been evaluated using the k-means clustering function, which has also been updated to support our multi-dimensional data format in a MapReduce environment. The proposed framework also supports multiple distance functions, such as the Euclidean distance and the Manhattan distance, and it is simple to extend it to support other distance calculations, such as the Mahalanobis distance, depending on the number of dimensions involved in the data processing [51]. K-means clustering is a method commonly used to automatically partition a dataset into k groups. It proceeds by selecting k initial cluster centers and then iteratively refining them as follows [59]:
1. Each instance di is assigned to its closest cluster center.
2. Each cluster center cj is updated to be the mean of its constituent instances.
3. The algorithm converges when there is no further change in the assignment of instances to clusters.
In the present work, multi-spectral (multi-dimensional) geospatial data derived from Landsat 8 have been used.
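As a reference point for the distributed version described later, the following is a minimal single-node sketch of the refinement loop listed above, using Euclidean distance over an arbitrary number of spectral bands. It is an illustration only and not the authors' implementation.

```java
/** Single-node sketch of the k-means refinement loop for multi-dimensional points. */
public class KMeansSketch {
    /** Index of the centroid closest to point p (squared Euclidean distance). */
    static int nearest(double[] p, double[][] centroids) {
        int best = 0;
        double bestDist = Double.MAX_VALUE;
        for (int c = 0; c < centroids.length; c++) {
            double d = 0;
            for (int j = 0; j < p.length; j++) {
                double diff = p[j] - centroids[c][j];
                d += diff * diff;
            }
            if (d < bestDist) { bestDist = d; best = c; }
        }
        return best;
    }

    /** Refines the centroids in place and returns the final cluster assignment. */
    static int[] cluster(double[][] points, double[][] centroids, int maxIter) {
        int[] assign = new int[points.length];
        for (int iter = 0; iter < maxIter; iter++) {
            boolean changed = false;
            // Step 1: assign each point to its closest centroid.
            for (int i = 0; i < points.length; i++) {
                int c = nearest(points[i], centroids);
                if (c != assign[i]) { assign[i] = c; changed = true; }
            }
            if (!changed) break;   // Step 3: converged, no assignment changed
            // Step 2: recompute each centroid as the mean of its assigned points.
            int dims = points[0].length;
            double[][] sums = new double[centroids.length][dims];
            int[] counts = new int[centroids.length];
            for (int i = 0; i < points.length; i++) {
                counts[assign[i]]++;
                for (int j = 0; j < dims; j++) sums[assign[i]][j] += points[i][j];
            }
            for (int c = 0; c < centroids.length; c++)
                if (counts[c] > 0)
                    for (int j = 0; j < dims; j++) centroids[c][j] = sums[c][j] / counts[c];
        }
        return assign;
    }
}
```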
To derive the experimental results, four to six spectral bands have been taken from several satellite images for the experimentations. The data is transformed for use with the developed mining platform. In the first stage, the pixel values of the four different bands are taken from the spectral space into the geometrical space; this is discussed further in the "Geometrical space to spectral space (preparation phase)" section of the paper. In the second stage, k-means clustering is applied in the MapReduce distributed mode, and finally the data is returned to its initial form so that it can be used for visualization. There are several implementations aimed at increasing the efficiency of k-means, either supervised or unsupervised, and there are several others which support multi-dimensional k-means clustering, but none of them directly consider the information available in multispectral format. The present work can easily be extended to apply any other classification or clustering technique and any number of bands, with simple modifications to support the proposed index file format. The proposed work forms one of the first distributed implementations for mining multi-spectral data (supporting multiple dimensions) collectively, and the techniques described are a candidate for inclusion in the Mahout machine learning framework or for raster processing support in other distributed geospatial processing systems. The performance of the mining platform has also been found to be satisfactory with respect to the amount of resources allocated, and this is highlighted in the succeeding sections.

Geometrical space to spectral space (preparation phase)
SpatialHadoop supports working with the geometrical location of different features and implements spatial operations according to the type of shape of the geo-spatial data, which may be a point, line, or polygon (rectangle). It does not support raster data. This study deals with the special case of working with multi-spectral raster data available from Landsat 8 imagery, in which every pixel can have 11 different values available from the different bands. We use a subset of these bands. All of these values are considered as the positional values of that pixel in different dimensions. In this way, all the different values of a single pixel are used to form a multidimensional spatial shape, e.g. a polygon in a multi-dimensional space. The data can then be used in the geo-spatial mining process to perform the desired spatial operation. Table 1 describes the sample dataset.
Table 1: Subset of the bands selected from the Landsat 8 image for the experimentations.
Dataset description: predominantly limestone mining area
No. of bands: 6 (subset taken from a Landsat 8 image)
Area: Aravalli Fort Hills (North of Gujarat, India)
Geographic location (Lat, Lon): 24° 00′ North, 72° 54′ East
Landsat 8 path: (148, 149); Landsat 8 row: (43, 44)
For the implementation of distributed k-means supporting multi-dimensional data, four bands from a raster image have been considered initially. The polygon thus formed from four points (one in each dimension) can also be reduced to a two-dimensional rectangle. Spatial operations can then be applied simply to this form of data. The following formula represents the pixel values for four bands, which are converted to a polygon and represented as a rectangle in two dimensions; this is called indexed pixel data, and the process has been depicted in Fig. 3.
$$\left( {{\text{X}}_{1} ,{\text{Y}}_{1} ,{\text{X}}_{2} ,{\text{Y}}_{2} } \right) \to {\text{Polygon}} \;\left( {{\text{X}}_{1} ,{\text{Y}}_{1} ,{\text{X}}_{2} ,{\text{Y}}_{1} ,{\text{X}}_{2} ,{\text{Y}}_{2} ,{\text{X}}_{1} ,{\text{Y}}_{2} ,{\text{X}}_{1} ,{\text{Y}}_{1} } \right)$$
$$\left( {45,46,47,48} \right) \to {\text{Polygon}}\;\left( {45,46,47,46,47,48,45,48,45,46} \right)$$
Phase-1 indexing
Figure 4 represents the total workflow, which is divided into three main stages. In the first stage, the data is transformed into a 4-dimensional data set in which each pixel is essentially transformed into a rectangle owing to the '4' values obtained from the bands of the image. The '4' values for each pixel from every band are put into a resultant file which contains a matrix resembling the image's pixels. The process can quickly iterate over thousands of rows and columns, and in the resultant file each row contains the pixel values for the same geographic location from the '4' different bands. These '4' different values are represented as shown in Fig. 4a, which also contains an index value unique to every pixel. The process is extensible to support 'N' bands.
The image after the preparation (a, c) and after being clustered (b, d)
The proposed mining model demonstrates the application of k-means clustering to multi-dimensional images in a MapReduce environment. Figure 5 represents the clustering and editing functions available for the input format. This work can further be extended to support other geospatial operations available with SpatialHadoop or can alternatively be integrated with Mahout. The MapReduce implementation of the workflow and the support for working with data stored on the Hadoop Distributed File System (HDFS) open opportunities mainly for supporting the big data ecosystem.
Phase-2 K-means clustering
Index all the pixels and assign geographic location
Map Phase
The proposed k-means clustering model goes through the two main phases of the MapReduce programming module, which are the Map and the Reduce phases [69]. In the Map phase, the k-means model takes two different files as inputs: the data set file and the initial centroid file. It calculates the distance from each point in the data input to each of the initial centroids. This way the nearest centroid to each point in the whole data set is obtained. The Map phase then sends to the reducer the values obtained for each point along with the nearest centroid to that point. The output from the Mapper to the Reducer task includes the nearest key (point) together with the centroid values.
Multi-spectral k-means clustering depicting transformation of values from Map to Reduce phases.
Input: a list of <key 1, value 1> pairs and k global centroids, where value 1 is the content of one line of the input file, which contains the multiple values of a pixel and its location.
Output: a list of <key 2, value 2> pairs, where key 2 is the index of the cluster and value 2 is the point values and the location belonging to that cluster (key 2).
Reduce phase
The Reduce phase receives each key with its attached group of values, which are all the points from the data input for which the corresponding key is the nearest centroid. The main job of the Reducer is to calculate the optimal centroid for each group of points, i.e., the point that has the average distance to all the elements of that group of points.
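The paper does not list the implementation itself; the sketch below illustrates how one iteration of the Map and Reduce phases just described could look as a Hadoop Streaming script. It assumes the comma-separated indexed-pixel line format shown later in the "New format" subsection (band values followed by the row and column index), and the hard-coded centroid list merely stands in for the initial centroid file.

```python
# kmeans_step.py -- one k-means iteration as a Hadoop-Streaming-style map/reduce pair
# (illustrative sketch; run as "python kmeans_step.py map" or "python kmeans_step.py reduce")
import sys

# In the real job the k global centroids would be read from the centroid file
# shipped with the job; a small hard-coded list stands in for it here.
CENTROIDS = [(8000, 8000, 8000, 14000, 11000, 9000),
             (10000, 10500, 11000, 17000, 16000, 14000)]

def map_phase(stream):
    # Input line: comma-separated band values followed by the row,col pixel index.
    for line in stream:
        fields = line.strip().split(',')
        values = [float(x) for x in fields[:-2]]
        row, col = fields[-2], fields[-1]
        # nearest centroid by squared Euclidean distance
        cluster = min(range(len(CENTROIDS)),
                      key=lambda c: sum((v - m) ** 2
                                        for v, m in zip(values, CENTROIDS[c])))
        # key = cluster index, value = pixel values plus location
        print('%d\t%s,%s,%s' % (cluster, ','.join(fields[:-2]), row, col))

def reduce_phase(stream):
    # Streaming delivers lines sorted by key; average the points of each cluster.
    key, sums, count = None, [], 0
    def emit():
        if key is not None and count:
            print('%s\t%s' % (key, ','.join(str(s / count) for s in sums)))
    for line in stream:
        k, value = line.rstrip('\n').split('\t', 1)
        values = [float(x) for x in value.split(',')[:-2]]   # drop row,col
        if k != key:
            emit()
            key, sums, count = k, [0.0] * len(values), 0
        sums = [s + v for s, v in zip(sums, values)]
        count += 1
    emit()   # new centroid of the last cluster

if __name__ == '__main__':
    (map_phase if sys.argv[1] == 'map' else reduce_phase)(sys.stdin)
```

With Hadoop Streaming, such a script would typically be supplied through the -mapper and -reducer options, and a driver would re-run the job until the centroids stop changing.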
The reducer produces the final output, which is the new set of optimal centroids; these, along with the data input file, are again taken through the Map and Reduce phases for the next iteration. The process is repeated until all the centroids converge.
Input: a list of <key 3, value 3> pairs, where key 3 is the index of the cluster and value 3 is the list of point values belonging to that cluster.
Output: there are two cases, one if the running iteration is any iteration but the last, and the other if the running iteration is the last iteration. They can be described as follows: in the case of any other iteration, the output is the newly calculated set of centroids; in the case of the last iteration, a list of <key 4, value 4> pairs is added as an output, where key 4 is the index of the cluster and value 4 is the position of the pixel in the image, as shown in Fig. 4c.
The final output of the model consists of two main files. The first file is the set of final centroids (Fig. 4c). The second file contains the coordinates of each point in the input dataset along with the cluster number which that point belongs to, as shown in Fig. 4d. Using the last output file from the Reduce phase, we can get the clustered image back again for visualization purposes, which is also done using MapReduce. Figure 6 represents the final clustering output from the MapReduce model. Figure 7 describes the phases for processing the image.
8k image file after being clustered and plotted (cluster visualization from ENVI)
Phase-3 filtering and topology study
Spectral space to geometrical space
The input multiband raster image is converted from the geometrical space to the spectral space for processing purposes. Values from all the available bands of the image are considered. Those values represent the different values of one pixel in the image, as explained earlier. The location of that pixel is also added as the index to the same line that holds the pixel's values, similar to the BIL format. With the present work, it has become possible to study and analyze multiband raster images without the need to process the different bands individually and infer the phenomena for a particular area. After processing the image, it is required to convert the obtained output from ASCII to image, from the geometrical space to the spectral space, to be able to visualize the image or perform further processing and annotations that might be required after the mining process. The next part of the paper discusses several image processing functions that have been developed to work with multiband raster data on Hadoop.
Image filtering and after processing phase
Mode filtering
Mode filtering [57] is applied to the raster image; it involves assigning to the central pixel the most common value inside the window around that pixel. Programs have been developed to apply mode filtering to the image on a distributed platform. The window size needed by the algorithm can be specified at runtime. Mode filtering works to smooth the edges of the polygons and, at the same time, to reduce noise. Figure 8 shows the working of the filter. The window size here is 5 × 5 pixels; the filter finds out which value inside the window is the most common value and assigns it to the window's central cell. The most common value in the window, i.e., 8, will replace the 9 values in the 5th column.
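A minimal, non-distributed sketch of the mode (majority) filter just described; the window size is a parameter and ties are broken arbitrarily. This only shows the per-window logic, not the MapReduce version used by the platform.

```python
from collections import Counter

def mode_filter(image, window=5):
    """Replace each pixel of a 2-D list of cluster labels with the most common
    value in the window x window neighbourhood centred on it (clamped at borders)."""
    half = window // 2
    rows, cols = len(image), len(image[0])
    out = [row[:] for row in image]
    for r in range(rows):
        for c in range(cols):
            neighbourhood = [image[i][j]
                             for i in range(max(0, r - half), min(rows, r + half + 1))
                             for j in range(max(0, c - half), min(cols, c + half + 1))]
            out[r][c] = Counter(neighbourhood).most_common(1)[0][0]
    return out

# Toy example: isolated 9s surrounded by 8s are smoothed away with a 3 x 3 window
labels = [[8, 8, 8, 8],
          [8, 9, 8, 8],
          [8, 8, 9, 8],
          [8, 8, 8, 8]]
print(mode_filter(labels, window=3))
```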
Mode filtering process
To execute the mode filtering, the following four arguments are required:
<The input files> — the clustered image
<File size> — the dimensions of the image
<The window size>
<The output file name>
In Fig. 9, a considerable amount of difference can be noticed after applying the mode filter with different window sizes. The mode filtering can also be applied iteratively. An example image of size 8000 × 8000 pixels is depicted in Fig. 9a. Figure 9b, c, d shows the results of mode filtering with windows of size 5 × 5, 9 × 9 and 11 × 11 pixels respectively on the same input image. One may select a window size appropriate for the data and for removing the desired amount of noise from the input image.
Image after applying mode filtering with different window sizes
Boundaries highlighting enhancement
An application for vectorization has been built to derive the boundaries of all the identified clusters so that the output can be used with desktop GIS software for further analysis. Options to filter certain clusters or a group of clusters according to the requirements are available and can be specified as parameters when executing the vectorization tool. Two examples are presented in Figs. 10 and 11. Figure 10 shows the polygons of Cluster No. 7 after filtering the results of the image of size 8000 × 8000 pixels. Figure 11 shows the polygons of Cluster No. 3 filtered from the image of size 4000 × 4000 pixels. In both figures, the left side contains the whole image and the right side contains a small part of the image (shown zoomed in). The visualization is accomplished using QGIS.
Highlighting the boundaries of Cluster No. 7 in 8k image
Highlighting the boundaries of Cluster No. 3–4k image
An application has also been developed for further editing, including clipping any part of an image, splitting an image (horizontally/vertically), and joining adjoining images. Figure 12 shows two different examples after clipping two different parts of an image. The left clipped part is 1000 × 1000 pixels, and this small part on the top left corner of the complete image is also highlighted with red boundaries. On the far right side, another polygon with blue boundaries is clipped. The size of this clipped raster is 2000 × 1000 pixels.
Clipping two parts of different sizes from an image
An editing tool for splitting images into multiple parts is also integrated with the data mining framework. This tool has two different modes; the first one is for vertical splitting and the second one is for horizontal splitting. After selecting the appropriate splitting method, the column(s) and/or row(s) for the split can be specified. The number of partitions or splits can also be specified, and the application will automatically calculate the relevant rows and columns to be passed as arguments. Examples for both cases have been presented. Figure 13 shows vertical splitting in three different windows. The center window (highlighted in blue) shows the full image, whereas the right and left windows show the new images created after splitting the original image. Figure 14 depicts the same using the horizontal splitting mode, in which the left image represents the original image and, on the right, the horizontally split parts are represented.
Vertical splitting of the image in the middle
Horizontal-split of the image in the left
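The clipping, splitting, and joining tools described in this subsection are, at their core, array-slicing operations; the small numpy sketch below is illustrative only and is not the authors' distributed implementation.

```python
import numpy as np

def clip(image, top, left, height, width):
    """Cut a height x width window out of the image, e.g. a 1000 x 1000 patch."""
    return image[top:top + height, left:left + width]

def split_vertical(image, columns):
    """Split at the given column indices, returning the left/right strips."""
    return np.hsplit(image, columns)

def split_horizontal(image, rows):
    """Split at the given row indices, returning the top/bottom bands."""
    return np.vsplit(image, rows)

def join_right(first, second):
    """Attach the second image to the right of the first (one of the join modes)."""
    return np.hstack([first, second])

image = np.zeros((4000, 4000), dtype=np.uint16)   # placeholder clustered image
patch = clip(image, 0, 0, 1000, 1000)             # top-left 1000 x 1000 clip
strips = split_vertical(image, [2000])            # two 4000 x 2000 halves
doubled = join_right(image, image)                # duplicate-copy join
```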
Join is the editing tool made for joining any two images, and it also has different modes, which are right, left, up, and down. Two input images and the mode for joining them are passed as arguments, and the second image is attached to the first one accordingly. This can be done to the images before the mining phase or after it, according to need. It has been made possible to process several images together after obtaining them from different sources, converting them to the proposed format and finally joining them together in a distributed environment. Figure 15 shows an example of an image that has been joined with itself using the Duplicate copy → MapReduce join process.
Joining two copies of the same image
Cleaning the image and removing small polygons (or clusters) is a technique called salt-and-pepper removal [53]. This technique has been modified to work in a distributed environment and has been integrated with the mining platform. To perform cleaning of the clustered image(s), a limit for the minimum size of polygons has to be specified, and all sub-clusters below this limit are removed. Those small sub-clusters can be safely ignored while performing spatial operations or calculating statistics for large geographic areas. This function can remove all the sub-clusters below a specified threshold so that a clean output is obtained. In Fig. 16, an image of 8000 × 8000 pixels is represented after application of the above discussed techniques. Clustering has been applied and the noise in the resultant image has been cleaned by applying the mode filter with a window of size 11 × 11 pixels. All the small sub-polygons of size 100 × 100 pixels or less have been cleaned. The resultant output is still left with several small polygons which may not be needed for further study. An appropriate threshold can also be decided automatically depending on certain parameters which can be specified by the user. A percentage of sub-clusters can be removed, keeping only the required polygons which cover the maximum amount of geographical area.
After removing shapes of size 100 × 100 pixels and less
Figure 17 represents results from the same image (of size 8000 × 8000 pixels) after application of the above discussed techniques. As discussed above, after clustering, mode filtering is performed with a window size of 11 × 11 pixels. The application automatically decides to clean all the small sub-clusters, keeping the rest, which cover more than 90% of the geographic area. The clustered data is filtered by automatically calculating the minimum polygon size, which may be different for each cluster. E.g., in Cluster No. 1 (Blue), the threshold size of polygons which are not removed or cleaned is 1000 pixels; in Cluster No. 2 (Green), the minimum size of polygons is found to be 1694; thus, the polygons which represent more than 90% of the area in each cluster are not discarded.
Removing the smallest 10% of shapes from each cluster
Figure 18 represents output from the same input image with the same techniques applied. For this sample, the biggest 10 sub-polygons from each cluster are kept, as those are most likely to represent the area for further study. In each cluster, these top ten polygons represent a different percentage of the area of the cluster. E.g., in Cluster No. 4 (Purple), the top ten polygons represent 48% of the cluster's total area, and the top ten polygons from Cluster No. 10 (Blue) represent 77% of the area of the cluster.
The biggest 10 shapes from each cluster
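A compact sketch of the cleaning step described above, which removes connected components (sub-polygons) smaller than a pixel threshold from a clustered label image. scipy's connected-component labelling is used here purely for illustration; the paper's implementation runs in a distributed environment.

```python
import numpy as np
from scipy import ndimage

def remove_small_polygons(labels, min_pixels):
    """Set connected components smaller than min_pixels to 0, cluster by cluster."""
    cleaned = labels.copy()
    for cluster in np.unique(labels):
        if cluster == 0:                               # 0 is treated as background here
            continue
        mask = labels == cluster
        components, n = ndimage.label(mask)            # 4-connected components
        sizes = np.bincount(components.ravel())
        small = np.isin(components, np.where(sizes < min_pixels)[0])
        cleaned[mask & small] = 0                      # drop polygons below the threshold
    return cleaned

# e.g. remove every sub-polygon smaller than 100 x 100 pixels
# cleaned = remove_small_polygons(clustered_image, min_pixels=100 * 100)
```

Keeping only the largest N polygons per cluster, or the polygons covering a given percentage of a cluster's area, follows the same pattern by sorting the component sizes instead of comparing them against a fixed threshold.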
Studying the polygons in the clusters
The binary topological relation [22] between two objects A and B is based on the intersection of the three parts of each object, namely the interior, the boundary, and the exterior, using the nine-intersection matrix shown below:
$$\Gamma_{9} \left( {{\text{A}},{\text{B}} } \right) = \left( {\begin{array}{*{20}ll} {A^{o} \cap B^{o} } & {A^{o} \cap \partial B} & {A^{o} \cap B^{ - } } \\ {\partial A \cap B^{o} } & {\partial A \cap \partial B} & {\partial A \cap B^{ - } } \\ {A^{ - } \cap B^{o} } & {A^{ - } \cap \partial B} & {A^{ - } \cap B^{ - } } \\ \end{array} } \right) .$$
where A and B are the objects, \(A^{o}\) and \(B^{o}\) their interiors, \(\partial A\) and \(\partial B\) their boundaries, and \(A^{-}\) and \(B^{-}\) their exteriors.
In Fig. 19, two polygons are taken from the 4k data set after clustering. The first polygon is highlighted in yellow and the other one is gray. The topological relationship is established by the application, and it can be identified that polygon number 114 is inside polygon number 739 and that they are not touching the boundaries. The binary topological relationship between these two objects is found to be the following:
$${\text{Case}}\;1{:} \,\,\Gamma_{9} \left( {114,739} \right) = \left( {\begin{array}{*{20}ll} {1 0 0} \\ {1 0 0} \\ {1 1 1} \\ \end{array} } \right).$$
Topological relationship for Case 1
In Fig. 20, the polygon numbered 245 on the left side of the diagram belongs to Cluster No. 2 and is outside polygon number 1277, which belongs to Cluster No. 4 and is shown on the right side of the diagram. These polygons are touching each other's boundaries, and we find the binary topological relationship between the two objects to be the following:
$${\text{Case}}\;2{:} \,\,\Gamma_{9} \left( {245 ,1277 } \right) = \left( {\begin{array}{*{20}ll} {0 0 1} \\ {0 1 1} \\ {1 1 1} \\ \end{array} } \right).$$
In Fig. 21, two polygons have been selected and highlighted in yellow. Polygon No. 245 is outside Polygon No. 1270 and they are not touching each other's boundaries. The binary topological relationship between those two objects is realized from the following:
$${\text{Case}}\;3{:} \,\,\Gamma_{9} \left( {245 ,1270 } \right) = \left( {\begin{array}{*{20}ll} {0 0 1} \\ {0 0 1} \\ {1 1 1} \\ \end{array} } \right).$$
An exhaustive list of relations between the largest polygons can be iteratively computed for a topological study of the area. This list can then be further used to perform statistical studies and to apply other machine learning approaches to derive interactions between different environmental factors and conditions.
Result and discussion
Technical specification of the Hadoop cluster used for the experiments
The Hadoop cluster was set up on an HP ProLiant DL580 G7 server with 4 × Intel® Xeon® CPU E7-4870 @ 2.40 GHz, totalling 80 cores. The server is equipped with 512 GB of RAM. An HPE 3PAR StoreServ served as the storage backend with 2 × 8 Gbps connectivity. A Hadoop (v2.6.0) cluster was configured on the server with 50 virtual machines. The storage capacity of the cluster is 2.7 TB, which has been represented in Fig. 22. The cluster consisted of one Master machine (Name Node) and 50 data machines (Data Nodes).
The Name Node is configured with 4 virtual processors and 12 GB RAM, whereas the Data Nodes were heterogeneously configured as follows:
1 data node (on the Name Node) with 4 virtual processors and 12 GB RAM
38 data nodes with 1 virtual processor and 8 GB RAM
The storage capacity of the Hadoop cluster
Data set preparation
The data used to test the functionality of the mining framework consists of two multispectral images of different sizes for the same location, as described in the "Geometrical space to spectral space (preparation phase)" section.
The pre-preparation phase: it consists of converting the multispectral data set into a two-dimensional image by indexing all the pixels in the image obtained from the different spectral bands using each pixel's location. This generalization of the multiple values of a single pixel obtained from multiple bands into a single file also makes it easier for the data to be processed irrespective of the number of bands in the image. As an example, the values of the first pixel taken from 6 different files (in the case of a 6-band image) are gathered in a single line along with the number of the row and column, which together represent the index for that pixel. The process is repeated for all the remaining pixels in the image. Everything is performed using the MapReduce paradigm to utilize the potential of the Hadoop cluster.
New format: from the pre-preparation phase, the example image with 6 bands is converted into a two-dimensional image stored in an ASCII file which looks like the following:
#Hadoop fs -cat bigdata-256bs/part-r-00000 | head -n 10
8650,8075,7650,13779,11029,8582,1,10001
10275,10484,11249,17210,16363,13831,1,1003
The ASCII file is a comma-separated file; the first six values (in red) represent the values of a pixel gathered from the different bands, whereas the last two values (in black) represent the index of that pixel (geographic location). With this format it becomes easier to process the data of each pixel collectively using MapReduce.
Benchmarking the Hadoop cluster (I/O)
As the storage backend consists of a single Storage Area Network (HPE 3PAR StoreServ) with multiple disks, it is also important to test and benchmark the throughput of HDFS. The benchmarking has been performed in conjunction with several suggestions provided by Mukherjee et al. [40]. It is appropriate to use a distributed file system such as HDFS on top of a shared disk infrastructure such as a storage area network. It is also possible to reduce the replication factor to 1, as redundancy and fault tolerance are not required with such an enterprise storage system. Table 2 shows various statistics from the DFSIO benchmark which ships with Hadoop.
Table 2 Benchmarking the cluster
Running K-means clustering
In the above sections, several functions provided by the mining framework have been tested with many multiband images. The current section provides a detailed discussion of the proposed MapReduce extensions to the k-means algorithm for working with multiband data in a geometric space. The results of clustering two multiband images for the same geographical location but with different resolutions using our approach to k-means are described in detail. The proposed approach does not just perform clustering but, with support for various image processing techniques, allows describing the geographical features in the image and in particular the topological relationships between the different objects that exist in the image. Those functions have already been described earlier.
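For illustration, a non-distributed sketch of the pre-preparation step described above, which gathers the per-band values of each pixel into one comma-separated line ending with the pixel's row and column index. The platform performs this with MapReduce; the exact index convention used below is only assumed from the description.

```python
import numpy as np

def write_indexed_image(bands, path):
    """bands: list of 2-D arrays, one per spectral band, all of the same shape."""
    stack = np.dstack(bands)                      # rows x cols x n_bands
    rows, cols, _ = stack.shape
    with open(path, 'w') as out:
        for r in range(rows):
            for c in range(cols):
                values = ','.join(str(v) for v in stack[r, c])
                # band values first, then the pixel index (row, column)
                out.write('%s,%d,%d\n' % (values, r + 1, c + 1))

# e.g. six synthetic 16-bit bands of 10 x 10 pixels
bands = [np.random.randint(0, 2 ** 16, (10, 10), dtype=np.uint16) for _ in range(6)]
write_indexed_image(bands, 'indexed.csv')
```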
The multiband images are uploaded to HDFS and an indexed file is generated using the technique discussed in the "Data set preparation" section using MapReduce. A parameter file containing the list of initial centroids is also provided for the uploaded image. In the subsequent testing, as the two different images are for the same geographic location with different resolutions, the same set of initial centroids is used for clustering both images. This also helps to compare the output from both images and to assess the quality of our clustering model. This is further demonstrated in Table 7. To test the performance of clustering, each image was clustered several times with a different block size. The number of blocks that an image is divided into depends on the size of the image and the block size specified in the cluster configuration. A small block size will lead to a large number of blocks even for a small image, while a large block size is configured if it is desired that the image not be split into a large number of blocks. A small block size is desired in the case of small images so that several blocks are distributed across the cluster and thus the cluster's storage and computing capacity can be used. A large block size would reduce the network communication, as many small blocks need not be communicated across the cluster of 50 nodes. The clustering approach has been tested twice: once on a full Hadoop cluster consisting of 50 nodes and a second time with just two nodes. The performance of the approach for clustering multispectral raster images in the Hadoop framework (a distributed environment), together with the related image processing techniques, is represented in Tables 3, 4, 5 and 6. Each of the tables shows the elapsed time of the k-means clustering process and the average time used for each of the mapping, shuffling, merging, and reduce phases, in addition to the total mapping time, with respect to the block size.
Table 3 Clustering (1.9 GB) raster image with a 50 node Hadoop cluster
Table 4 Clustering (1.9 GB) raster image with a 2 node Hadoop cluster
Table 5 Clustering (17.79 GB) raster image using a 50 nodes Hadoop-cluster
Table 6 Clustering 17.79 GB raster image using a 2 node Hadoop cluster
K-means using 50 node Hadoop cluster for Image No. 1
Table 3 shows several statistics for clustering Image No. 1, with 8000 × 6000 pixels, using a full Hadoop cluster of 50 nodes. It is evident that the minimum elapsed time was recorded when the image was stored using a block size of 32 MB, followed by the block size of 64 MB and so on up to the maximum block size of 256 MB. The same is clearly illustrated in Fig. 23, and it can be seen that the average mapping time, the average shuffling time, and the total mapping time all increase with the block size. The average merge and reduce times were not affected by the change in the block size of the image.
Clustering 1.9 GB raster image with a 50 nodes Hadoop cluster
Figure 23 clearly shows the increase in the execution time with respect to the block size. From the statistics we can infer that, by increasing the block size, the number of blocks created from the image is reduced and, due to the smaller number of blocks, only a few Hadoop nodes (processing machines) compute over the data. Several Data Nodes are not utilized at all owing to the non-availability of data local to them, and this leads to an increase in the execution time.
K-means using 2 node Hadoop cluster for Image No. 1
The Hadoop cluster was resized, keeping only two Data Nodes. Table 4 presents the average mapping, shuffling, merging, and reduce times, in addition to the total mapping time, with respect to the block size. It is found that the execution time was the least in the case of blocks of size 64 MB, followed by the 96, 128 and 256 MB block sizes respectively. It can be noticed from Fig. 24 that the execution time increases with the increase in the block size.
Clustering 1.9 GB raster image with a 2 node Hadoop cluster
From the statistics of executing the k-means clustering model over the Hadoop cluster with 50 nodes, available in Table 3, it is evident that the fastest execution time is 4.1 min with a block size of 32 MB. From the statistics available in Table 4, it can be noticed that the execution time has increased by about 3 times due to the limited number of nodes (2), and the minimum execution time is 11.5 min with a block size of 64 MB. Figure 25 illustrates the elapsed time for both clusters for varying block sizes.
Comparing the elapsed time for running k-means with 1.9 GB raster image in 2 and 50 nodes Hadoop cluster
Comparing results of K-means between 2 nodes and 50 nodes Hadoop cluster for Image No. 1
Figure 26 shows a large increase in the time needed for shuffling the blocks as the size of the blocks increases. The shuffle time remains roughly the same up to a 64 MB block size for both the 2-node and the 50-node Hadoop clusters. For the 50-node cluster, the shuffle time varies from 1 min to 6.1 min for 64 MB to 256 MB block sizes respectively. For the 2-node Hadoop cluster, the average shuffling time goes up to 13.5 min in the case of 256 MB.
Comparing the average shuffle time for running k-means with 1.9 GB raster image in 2 and 50 nodes Hadoop cluster
It has been mentioned before that the processing time increases in the case of bigger blocks of data. With bigger blocks there will be fewer blocks. E.g., if an image is 1 GB in size, a block size of 256 MB will result in only 4 blocks. This means that only four mappers can ever run for processing the data from this image. For a large cluster, this leaves resources unutilized and leads to an increase in the execution time.
K-means using 50 nodes Hadoop cluster for Image No. 2
The second image (Image No. 2) is for the same geographical location but with a different resolution. It is a 6-band raster image and each band has 24,000 rows and 18,000 columns. In an uncompressed form this image requires 4.83 GB of storage with a pixel depth of 2 bytes (24,000 rows × 18,000 columns × 6 bands × 2 bytes of data). After running the indexing method upon this binary file we get a plain-text ASCII representation (an indexed image) of size 17.76 GB. For further processing, the initial set of centroids is provided (these are the same centroids which were provided for the previous image, considering the same geographical location and to make sure that the k-means clustering technique receives the same input). The values presented in Table 5 and represented in Fig. 27 have been averaged by running the clustering technique four times upon a full cluster of 50 nodes. For each run, the block size is changed and the file is updated to follow the new block size. As the current image is much larger than Image No. 1, the block sizes have also been increased accordingly. The block sizes used for the experiments are 128 MB, 256 MB, 512 MB, and 1024 MB.
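The relationship between file size, block size, and the maximum number of map tasks discussed above can be checked with a short calculation; the sizes below are taken from the text.

```python
import math

GB = 1024 ** 3
MB = 1024 ** 2

def hdfs_blocks(file_size, block_size):
    # with the default input split equal to the block size, each block feeds one mapper
    return math.ceil(file_size / block_size)

# Image No. 2: 24,000 rows x 18,000 columns x 6 bands x 2 bytes per pixel
raw_size = 24000 * 18000 * 6 * 2
print(round(raw_size / GB, 2))            # ~4.83 GB uncompressed, as stated

indexed = 17.76 * GB                      # indexed ASCII representation
for bs in (128 * MB, 256 * MB, 512 * MB, 1024 * MB):
    print(bs // MB, 'MB ->', hdfs_blocks(indexed, bs), 'blocks / map tasks')

print(hdfs_blocks(1 * GB, 256 * MB))      # the 1 GB example above -> 4 blocks
```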
Clustering 17.79 GB raster image with a 50 nodes Hadoop-cluster
From the values in Table 6 it can be observed that the most appropriate block size for Image No. 2 is 512 MB, which requires the minimum execution time. The values have been averaged over four consecutive runs to minimize the error margin. For very large block sizes on this ~18 GB image, as represented in Fig. 28, the shuffle time increases exponentially, as the phase requires data to be transferred from one node to another and is heavily dependent on the network.
Clustering 17.79 GB raster image using a 2 node Hadoop cluster
From the results represented in Fig. 29, the advantage of utilizing the Hadoop distributed environment for processing large raster images is evident. It can be seen that the execution time decreases considerably when the same image (No. 2) is processed upon a full cluster of 50 nodes rather than a two-node cluster. In Fig. 29, the red line represents the time needed to cluster the image using a Hadoop cluster of two machines, whereas the blue line represents the time needed to do the same processing using 50 nodes. Even for larger block sizes, the results remain consistent, as is evident.
Comparing the elapsed time for running k-means over (17.76 GB) raster image in (1 and 50) nodes Hadoop cluster
Figure 29 also shows that, when running the algorithm using two nodes on this dataset, a block size of 512 MB provided the best performance, while for processing the same dataset using 50 machines a 256 MB block size is preferable. A quick comparison with clustering Image No. 1 (size: 1.9 GB) shows that, for that image, the 32 MB and 64 MB block sizes provided the minimum execution time for the 50-node cluster and the 2-node cluster respectively. It is concluded that the block size is an important factor to be considered when preparing the dataset, along with other factors such as the number of nodes (machines) in the Hadoop cluster, when deploying a geospatial big data processing framework. The 50-node Hadoop cluster is configured with about 360 GB of memory. From Fig. 30, it is visible that a large amount of memory is used with 128 MB data blocks, and this can be attributed to the fact that all of the data nodes in the cluster are contributing. The use of memory decreases as the data blocks grow because several of the data nodes remain idle and do not contribute, as no blocks are available local to them for processing. Similar results have been obtained with Image No. 1 with block sizes ranging from 16 to 256 MB and have been excluded for brevity.
Resources utilized in the case of 1 node cluster
Evaluating the results of clustering using the proposed technique with widely used geospatial image analysis software
Table 7 represents the percentage of the number of pixels in each cluster from the two different images. As the images are for the same geographic area but with a different resolution, the results of clustering with the proposed technique and using ENVI, a widely used image processing application, are comparable. The percentage difference for both images has been calculated and verified over multiple runs as described in the above sections. The percentage difference is found to be negligible except in the case of Cluster No. 4 and Cluster No. 10, which validates the use of the proposed technique for multi-dimensional data.
Table 7 Comparing the number of pixels for the important clusters
Conclusion and future work
The main objective of the present work is to utilize the advances in distributed processing, specifically the MapReduce programming paradigm, to facilitate big geospatial data processing and mining. The multispectral essence of the raster images has been preserved by converting the multispectral space to a multi-dimensional geometric space. Existing data mining and machine learning techniques are limited by scale, and even the ones which are available for use in a distributed environment for processing raster data at scale only support processing of a single spectral band at a time. With the development of this work, it has now become easier to port existing image processing techniques for distributed processing of giga-pixel imagery and to apply custom logic for the development of applications. A big geospatial data mining platform based on the Apache Hadoop distributed processing environment has been developed in this work. The developed mining platform comes with a group of editing, filtering and other image processing techniques to help better extract the geographical features from the image data. The processing (mining) capabilities and other image processing facilities of the framework have been tested using several images of unconventional size, in the scale of giga-pixels, and this shows the advantages of using the developed framework in a distributed environment as compared to a desktop GIS application running on a single machine. In the distributed environment, clustering was performed on sample images over a 50-node (machine) cluster and over 2 nodes (machines). ENVI, a desktop GIS application, was also used on the same image at a reduced resolution to analyze the results of the proposed clustering technique. There is a gain of a factor of about 2 to 2.5 in data processing time, depending on the block size employed. The comparison of clustering results shows a maximum deviation of 2.7 percent, which is negligible. The analysis of the times of the various sub-processes clearly shows the advantages of this processing in the MapReduce programming model. The classified image data can be used further for spatially as well as temporally characterizing the geospatial objects in an area, for scene modelling, and also for modelling geospatial processes. The results of clustering have been tested by going from spatial to geometrical space, and similarly other methods can also be adapted to support processing of multiband data coherently. The results derived also highlight the importance of the compute, memory, storage and network infrastructure required for processing such large datasets. Appropriate data storage mechanisms are also required for fast access to large amounts of data, as in a distributed environment the data is distributed across a number of nodes. The nodes where the data blocks are stored contribute to the overall performance of the system; this has also been evaluated by using different block sizes when storing the data. The proposed work can be extended to support other spatial mining techniques. The distributed processing techniques developed for clustering can be extended to support other types of processing. Workflows are an important component of any geospatial data mining system, and the current system can be extended to support workflow-type applications. The current system does not have a visualization interface; one could be provided and would allow viewing big geospatial data at various levels of abstraction.
A workflow pipeline which can insert the data into a database for an application server such as GeoServer is highly desirable and is currently being worked on.
The data set used in this paper is derived from Landsat 8 multispectral images for the location of Aravalli Fort Hills, an area in the north of Gujarat state, India, with approximate geographic location Lat: 24° 00′ North, Lon: 72° 54′ East. The dataset has been processed and provided by BISAG, Gandhinagar, Gujarat, India.
Abbreviations
GIS: geographic information system
GeoTIFF: georeferenced tagged image file format
GB: gigabyte
BSQ: band sequential
BIP: band interleaved by pixel
BIL: band interleaved by line
SPOT: Satellite Pour l'Observation de la Terre
GS-Hadoop: Geo Spatial Hadoop
IPS: information processing systems
MDB: multidimensional databases
OLAP: on-line analytical processing
OLTP: online transactional processing
KDD: knowledge discovery in databases
DMLC:
HDFS: Hadoop Distributed File System
COTS: commercial off-the-shelf
KNN: K-Nearest Neighbours
References
Ahmad A, Dey L. A k-mean clustering algorithm for mixed numeric and categorical data. Data Knowl Eng. 2007;63:503–27.
Alarabi L, Mokbel MF, Musleh M. St-hadoop: a mapreduce framework for spatio-temporal data. GeoInformatica. 2018;22:785–813.
Alkathiri M, Jhummarwala A, Potdar M. Geo-spatial big data mining techniques. Int J Comput Appl. 2016;135:28–36.
Bédard Y, Merrett T, Han J. Fundamentals of spatial data warehousing for geographic knowledge discovery. Geogr Data Min Knowl Discov. 2001;2:53–73.
Bennett J. OpenStreetMap. Birmingham: Packt Publishing Ltd.; 2010.
Bereta K, Koubarakis M. Ontop of geospatial databases. In: International semantic web conference. Cham: Springer; 2016. p. 37–52.
Bernard E, Naveau P, Vrac M, Mestre O. Clustering of maxima: spatial dependencies among heavy rainfall in France. J Clim. 2013;26:7929–37.
Bhosale HS, Gadekar DP. A review paper on big data and hadoop. Int J Sci Res Publ. 2014;4:1.
Borodin A, Mirvoda S, Porshnev S. Analysis of multidimensional data with high dimensionality: data access problems and possible solutions. In: ITM web of conferences. Les Ulis: EDP Sciences; 2016. p. 01005.
Borthakur D. The hadoop distributed file system: architecture and design. Hadoop Proj Website. 2007;11:21.
Borthakur D. HDFS architecture guide. Hadoop apache project. 2008. http://hadoopapache.org/common/docs/current/hdfsdesign.pdf. p. 39.
Bradley PS, Fayyad UM, Reina C. Scaling clustering algorithms to large databases. KDD. 1998;98:9–15.
Calimeri F, Caracciolo M, Marzullo A, Stamile C. BioHIPI: biomedical hadoop image processing interface. In: International workshop on machine learning, optimization, and big data. Cham: Springer; 2017. p. 540–8.
Campbell JB, Wynne RH. Introduction to remote sensing. New York: Guilford Press; 2011.
Council NR. Landsat and beyond: sustaining and enhancing the nation's land imaging program. Washington, D.C: National Academies Press; 2013.
Coveney M. Corporate performance management (CPM). 2003. http://www.businessforum.com/Comshare04B.html.
Crema S, Cavalli M. SedInConnect: a stand-alone, free and open source tool for the assessment of sediment connectivity. Comput Geosci. 2018;111:39–45.
Dagade V, Lagali M, Avadhani S, Kalekar P. Big data weather analytics using hadoop. Int J Emerg Technol Comput Sci Electron (IJETCSE). 2015;14:0976–1353.
de Smith MJ, Goodchild MF, Longley P. Geospatial analysis: a comprehensive guide to principles, techniques and software tools. Leicester: Troubador Publishing Ltd; 2009.
Ding Q, Khan M, Roy A, Perrizo W. The P-tree algebra. In: Proceedings of the 2002 ACM symposium on applied computing. ACM; 2002. p. 426–31.
EARTH_OBSERVATION_SYSTEM. 2019. EOS processing—classic gis algorithms. https://eos.com/eos-processing/.
Egenhofer MJ, Herring J. Categorizing binary topological relations between regions, lines, and points in geographic databases. Technical report, Department of Surveying Engineering, University of Maine, Orono, ME; 1990.
Eldawy A, Mokbel MF. Spatialhadoop: a mapreduce framework for spatial data. In: 2015 IEEE 31st international conference on data engineering (ICDE). New York: IEEE; 2015. p. 1352–63.
Eldawy A, Niu L, Haynes D, Su Z. Large scale analytics of vector + raster big spatial data. In: Proceedings of the 25th ACM SIGSPATIAL international conference on advances in geographic information systems. New York: ACM; 2017. p. 62.
Evans MR, Oliver D, Yang K, Zhou X, Ali RY, Shekhar S. Enabling spatial big data via CyberGIS: challenges and opportunities. In: CyberGIS for geospatial discovery and innovation. Dordrecht: Springer; 2019.
Foundation AS. Apache hadoop. The Apache Software Foundation. 2018. https://hadoop.apache.org/.
Ghazi MR, Gangodkar D. Hadoop, MapReduce and HDFS: a developers perspective. Procedia Comput Sci. 2015;48:45–50.
Gopalani S, Arora R. Comparing apache spark and map reduce with performance analysis using k-means. Int J Comput Appl. 2015;113:8–11.
Goward SN, Masek JG, Williams DL, Irons JR, Thompson R. The Landsat 7 mission: terrestrial research and applications for the 21st century. Remote Sens Environ. 2001;78:3–12.
Haklay M, Weber P. Openstreetmap: user-generated street maps. IEEE Pervas Comput. 2008;7:12–8.
Jhummarwala A, Mazin A, Potdar M. Geospatial hadoop (GS-Hadoop) an efficient mapreduce based engine for distributed processing of shapefiles. In: Proceedings of the 2nd international conference on advances in computing, communication, & automation, Bareilly, India; 2016. p. 1–7.
Jo J, Lee K-W. High-performance geospatial big data processing system based on MapReduce. ISPRS Int J Geo-Inf. 2018;7:399.
Johnson L. Fiber optics: mature and growing fast. Tech Dir. 2016;76:22.
Keim DA, Panse C, Sips M, North SC. Pixel based visual data mining of geo-spatial data. Comput Gr. 2004;28:327–44.
Koperski K. A progressive refinement approach to spatial data mining. Canada: Simon Fraser University; 1999.
Lauer DT, Morain SA, Salomonson VV. The Landsat program: its origins, evolution, and impacts. Photogramm Eng Remote Sens. 1997;63:831–8.
Lausch A, Schmidt A, Tischendorf L. Data mining and linked open data—new perspectives for data analysis in environmental research. Ecol Model. 2015;295:5–17.
Lenka RK, Barik RK, Gupta N, Ali SM, Rath A, Dubey H. Comparative analysis of SpatialHadoop and GeoSpark for geospatial big data analytics. In: 2016 2nd international conference on contemporary computing and informatics (IC3I). New York: IEEE; 2016. p. 484–8.
Mennis J, Guo D. Spatial data mining and geographic knowledge discovery—an introduction. Comput Environ Urban Syst. 2009;33:403–8.
Mukherjee A, Datta J, Jorapur R, Singhvi R, Haloi S, Akram W. Shared disk big data analytics with apache hadoop. In: 2012 19th international conference on high performance computing. New York: IEEE; 2012. p. 1–6.
Murray AT, Shyy T-K. Integrating attribute and space characteristics in choropleth display and spatial data mining. Int J Geogr Inf Sci. 2000;14:649–67.
Ng RT, Han J. CLARANS: a method for clustering objects for spatial data mining. IEEE Trans Knowl Data Eng. 2002;14:1003–16.
Nijmeijer R, de Haas A, Dost R, Budde P. ILWIS 3.0 academic: user's guide; 2001.
Ooi B, Sacks-Davis R, Han J. Indexing in spatial databases. Unpublished/Technical Papers; 1993.
Parker JR. Extracting vectors from raster images. Comput Gr. 1988;12:75–9.
Peralta D, del Río S, Ramírez-Gallego S, Triguero I, Benitez JM, Herrera F. Evolutionary feature selection for big data classification: a mapreduce approach. Math Probl Eng. 2015. https://doi.org/10.1155/2015/246139.
Rabbitt MC. The United States geological survey, 1879–1989. US Government Printing Office; 1989.
Samson G, Lu J, Xu Q. Large spatial datasets: present challenges, future opportunities. In: Proceedings of the international conference on change, innovation, informatics and disruptive technology ICCIIDT'16, London, UK, October 11–12, 2016; 2016. p. 204–17.
Samson GL, Lu J, Wang L, Wilson D. An approach for mining complex spatial dataset. In: Proceedings of the international conference on information and knowledge engineering (IKE), 2013. The Steering Committee of The World Congress in Computer Science, Computer Engineering and Applied Computing (WorldComp). p. 1.
Sarode AJ, Mishra A. Audit and analysis of impostors: an experimental approach to detect fake profile in online social network. In: Proceedings of the sixth international conference on computer and communication technology 2015. New York: ACM; 2015. p. 1–8.
Shahid R, Bertazzon S, Knudtson ML, Ghali WA. Comparison of distance measures in spatial analytical modeling for health service planning. BMC Health Serv Res. 2009;9:200.
Sloan KR, Tanimoto SL. Progressive refinement of raster images. IEEE Trans Comput. 1979;28:871–4.
Szeliski R. Computer vision: algorithms and applications. New York: Springer Science & Business Media; 2010.
Talbot D, Warner E, Anderson C, Hessekiel K, Jones D. A Massachusetts municipal light plant seizes internet access business opportunities; 2015.
Trujillo J, Palomar M. An object oriented approach to multidimensional database conceptual modeling (OOMD). In: Proceedings of the 1st ACM international workshop on data warehousing and OLAP. New York: ACM; 1998. p. 16–21.
Uzunkaya C, Ensari T, Kavurucu Y. Hadoop ecosystem and its analysis on tweets. Procedia-Soc Behav Sci. 2015;195:1890–7.
van de Weijer J, van den Boomgaard R. Local mode filtering. Intelligent Sensory Information Systems, Faculty of Science, University of Amsterdam, Kruislaan 403, 1098 SJ Amsterdam, The Netherlands.
Vatsavai RR, Ganguly A, Chandola V, Stefanidis A, Klasky S, Shekhar S. Spatiotemporal data mining in the era of big spatial data: algorithms and applications. In: Proceedings of the 1st ACM SIGSPATIAL international workshop on analytics for big geospatial data; 2012. p. 1–10.
Wagstaff K, Cardie C, Rogers S, Schrödl S. Constrained k-means clustering with background knowledge. In: ICML; 2001. p. 577–84.
Wang C-S, Lin S-L, Chang JY. MapReduce-based frequent pattern mining framework with multiple item support. In: Asian conference on intelligent information and database systems. New York: Springer; 2017. p. 65–74.
Wang W, Yang J, Muntz R. STING: a statistical information grid approach to spatial data mining. In: VLDB; 1997. p. 186–95.
Wang W, Yang J, Muntz R. STING+: an approach to active spatial data mining. In: Proceedings 15th international conference on data engineering (Cat. No. 99CB36337). New York: IEEE; 1999. p. 116–25.
Witten IH, Frank E, Hall MA, Pal CJ. Data mining: practical machine learning tools and techniques. Burlington: Morgan Kaufmann; 2016.
Xiaoke Z, Chao M, Haifeng H, Fangfang L. Radiometric correction based on multi-temporal SPOT satellite images. In: 2009 international conference on wireless communications & signal processing. New York: IEEE; 2009. p. 1–6.
Yao X. Research issues in spatio-temporal data mining. In: Workshop on geospatial visualization and knowledge discovery, University Consortium for Geographic Information Science, Virginia; 2003. p. 1–6.
Yoon I, Yi S, Oh C, Jung H, Yi Y. Distributed video decoding on hadoop. IEICE Trans Inf Syst. 2018;101:2933–41.
Zhang Z, Zhang J, Xue H. Improved K-means clustering algorithm. In: 2008 congress on image and signal processing. New York: IEEE; 2008. p. 169–72.
Zhao T, Zhang C, Anselin L, Li W, Chen K. A parallel approach for improving Geo-SPARQL query performance. Int J Digit Earth. 2015;8:383–402.
Zhao W, Ma H, He Q. Parallel k-means clustering based on mapreduce. In: IEEE international conference on cloud computing. New York: Springer; 2009. p. 674–9.
Acknowledgements
We are grateful to Shri T. P. Singh, Director, BISAG, for his keen interest in and support of this work. We are also grateful to the Apache Software Foundation and the open source community for making a plethora of software open source, without which the current development would not have been possible.
Author information
Administrative Sciences College Hadhramout, University of Science and Technology, Hadhramout, Yemen: Mazin Alkathiri
Bhaskaracharya Institute for Space Applications and Geo-Informatics, Gandhinagar, 382007, India: Abdul Jhummarwala & M. B. Potdar
MA developed the proposed mining platform, including the development of the required programs and test cases, analysis and interpretation of data, and drafting of the manuscript. AJ served as advisor in study conception and for critical revision. MBP also served as advisor and critically reviewed the complete developments. All authors read and approved the final manuscript.
Correspondence to Mazin Alkathiri.
Alkathiri, M., Jhummarwala, A. & Potdar, M.B. Multi-dimensional geospatial data mining in a distributed environment using MapReduce. J Big Data 6, 82 (2019). doi:10.1186/s40537-019-0245-9
Keywords: Multiband raster processing; Multi-dimensional data processing; Geospatial processing; Spectral to geometrical space
Peering below the diffraction limit: robust and specific sorting of viruses with flow cytometry
Shea T. Lance, David J. Sukovich, Kenneth M. Stedman & Adam R. Abate
Virology Journal volume 13, Article number: 201 (2016)
Viruses are incredibly diverse organisms and impact all forms of life on Earth; however, individual virions are challenging to study due to their small size and mass, precluding almost all direct imaging or molecular analysis. Moreover, like microbes, the overwhelming majority of viruses cannot be cultured, impeding isolation, replication, and study of interesting new species. Here, we introduce PCR-activated virus sorting (PAVS), a method to isolate specific viruses from a heterogeneous population. Specific sorting opens new avenues in the study of uncultivable viruses, including recovering the full genomes of viruses based on genetic fragments in metagenomes, or identifying the hosts of viruses. PAVS enables specific sorting of viruses with flow cytometry. A sample containing a virus population is processed through a microfluidic device to encapsulate it into droplets, such that the droplets contain different viruses from the sample. TaqMan PCR reagents are also included targeting specific virus species such that, upon thermal cycling, droplets containing the species become fluorescent. The target viruses are then recovered via droplet sorting. The recovered virus genomes can then be analyzed with qPCR and next generation sequencing.
Results and Conclusions
We describe the PAVS workflow and demonstrate its specificity for identifying target viruses in a heterogeneous population. In addition, we demonstrate recovery of the target viruses via droplet sorting and analysis of their nucleic acids with qPCR.
Background
Viruses impact every form of life on earth, from applying evolutionary stresses to enhancing the transfer of genes between organisms [1–4]. Many human diseases are caused by viruses, including acute diseases like Ebola [5] and influenza [6], and chronic diseases caused by Epstein-Barr Virus (EBV) [7], Human Immunodeficiency Virus (HIV) [8], and Zika virus [9]. Studying viruses is thus important to human health, but also for elucidating the incredible mechanisms they have evolved to survive, replicate, and spread; these discoveries may lead to new molecular techniques and methods for treating disease. Studying viruses, however, can be challenging. They are usually much smaller than the diffraction limit of light and thus not directly visible with optical microscopy. They contain minuscule amounts of nucleic acid and protein, making direct sequencing or proteomic characterization of individual virus particles challenging [10]. To overcome these issues, the standard strategy is to culture the virus of interest to produce sufficient quantities for biological assays, such as gel electrophoresis, infection assays, or visualization with super-resolution or electron microscopy. However, like microbes, most viruses cannot be cultured [11], as this requires knowledge of which host cells the virus replicates in, which, for most viruses, are also likely uncultivable [12, 13]. When a virus cannot be cultured, molecular methods are valuable. For example, viruses can be purified from a sample using filtration, flocculation, or density-dependent centrifugation, to recover particles of the appropriate size range, and the nucleic acids purified for PCR or next generation sequencing [14, 15].
This can be applied directly to environmental viruses and provides a genetic snapshot of the organisms in that environment, and it has yielded numerous insights into virus phylogeny and fundamental biology [14, 15]. However, viruses are also the most diverse organisms on the planet, and viral samples often comprise sequences from trillions of entities, exceeding by orders of magnitude the limits of modern sequencers to sequence them [16]. As a result, such "shotgun" sequencing provides a sparse sampling of the system, recovered as billions of short, hundred-base reads [17]. To extract meaningful biological insight from this complex data, the reads must be pieced into viral genomes, introducing substantial bioinformatic challenges that, often, cannot be overcome [10, 18, 19]. Most often, only genomic sequences for the most abundant organisms can be completely recovered, and relatively little is learned about the vast number of new viruses present at low-to-moderate levels [20, 21]. To enhance the investigation of viral ecosystems, a method for culture-free purification of specific species would be valuable; however, as of yet, no method exists for specific sorting of viruses. In this paper, we present specific and high throughput sorting of viruses, PCR-Activated Virus Sorting (PAVS). Using microfluidics, we encapsulate single particles from a population of diverse viruses into monodisperse double emulsion droplets. PCR reagents targeting specific genetic loci are also included, interrogating every droplet for these sequences. If a particle contains them, PCR signals are generated that cause the droplet to become fluorescent, making it sortable by double emulsion flow cytometry [22]. The recovered droplets can be ruptured and the material subjected to additional analyses, such as quantitative PCR or digital PCR and sequencing. The approach is simpler than antibody-based labeling and sorting of cells [23] because designing PCR TaqMan assays for specific detection of sequences is much easier than generating high-affinity antibodies with which to label and sort single virus particles by flow cytometry. Moreover, the implementation of TaqMan PCR allows multiplexing to interrogate each virus for distinct sequences. Additionally, TaqMan assays can be designed to incorporate "degenerate" sequences [24], allowing for the identification and recovery of diverse viral genomes. As we show, multiplexing can be used to measure the length distributions of viral genomes in a sample and is extendable to sequences that are not physically connected, such as the genomic segments of viruses like influenza, or the 16S ribosomal RNA (rRNA) sequence of a bacterial cell harboring the target virus. Flow cytometric sorting has become a universal tool in cell biology and microbiology, and PAVS allows it to be applied to viruses for the first time.
Preparation of viral samples
Bacteriophage T4 (T4) and bacteriophage ФX174 (ФX174) were obtained from Carolina Biological Supply. T4 is propagated by infection of Escherichia coli (E. coli) B (ATCC 11303) and ФX174 by infection of E. coli C (ATCC 13706). Bacteriophage lambda cI857ts is obtained from the lambda lysogen E. coli strain KL470 and propagated by infection of E. coli C600. Viral particles are collected from the supernatant of the cultures and stored at 4 °C until experimentation. Initial viral stock concentrations are: T4, 1 × 10^12 pfu/mL; ФX174, 1 × 10^10 pfu/mL; lambda, 5 × 10^9 pfu/mL.
Microfabrication of devices
The microfluidic devices are fabricated using soft lithography in poly(dimethylsiloxane) (PDMS) [25]. SU-8 masters are fabricated by photolithography and used to mold PDMS devices by mixing PDMS polymer and cross-linker at a ratio of 11:1, pouring over the master, degassing to remove air bubbles, and baking at 75 °C for 4 h to solidify. The device is extracted from the master with a scalpel, and inlet and outlet ports are added with a 0.75 mm biopsy punch (Harris, Unicore). The device is washed with isopropyl alcohol and patted with Scotch® tape to remove debris prior to plasma bonding. The flow-focus drop maker is bonded to a glass slide, baked at 75 °C for 15 min, and treated with Aquapel to render the channels hydrophobic for water-in-oil emulsification. The double emulsion device is bonded and baked at 75 °C for 48 h to completely revert the wettability to its native hydrophobic state. To pattern the channel wettability for double emulsification, select ports are blocked with Scotch tape, leaving others open for an oxygen plasma treatment of 1 min [26].
Encapsulation, PCR, and identification of viruses in single emulsions
To make single emulsions for viral detection, quantification, and genome length determination, the samples are first diluted in phosphate buffered saline and mixed with PCR reagents (Platinum Multiplex PCR Master Mix, Thermo Fisher) and PCR primers and TaqMan probes (Integrated DNA Technologies; IDT) specific for the species of interest. For detection or quantification, T4 viral particles are first diluted from the stock sample to ranges of 1 × 10^6–5 × 10^8 viruses per sample prior to mixing with the PCR reagent. For ApaI restriction enzyme digestion or Fragmentase digestion (New England Biolabs; NEB), lambda DNA is treated enzymatically prior to mixing with the PCR reagents. The oil phase of the emulsion consists of HFE-7500 fluorinated oil (3M) with 2% (w/w) PEG-PFPE amphiphilic block copolymer surfactant [27]. These solutions are loaded into syringes (BD 1 mL luer lock; 27G ½″ needle), with the virus and PCR solution loaded into a syringe atop 200 μL of HFE-7500 oil; the oil acts as a hydraulic to push the solution into the device and compensate for dead volumes, allowing nearly all of the solution to be used. The syringes are mounted onto pumps (New Era) with needles (BD), polyethylene tubing (PE-2) is affixed to the needles, and the syringes are primed by flowing at 5,000 μL/h prior to connecting them to the device. Flow rates are controlled with a custom Python script and set to 300 μL/h for the virus sample and 700 μL/h for the oil. The single emulsion droplets exit the device through PE-2 tubing and are collected into a 1.5 mL microcentrifuge tube. Single emulsions produced by this method are ~20 μm in diameter and monodisperse. To prepare the single emulsion droplets for thermal cycling, the sample is transferred from the 1.5 mL microcentrifuge tube into 0.2 mL PCR tubes. The HFE-7500 oil is removed using a needle and is replaced with FC-40 fluorinated oil with 5% polyethylene glycol–perfluoropolyether (PEG-PFPE) amphiphilic block copolymer surfactant. The sample is cycled on a T100 thermal cycler (Bio-Rad) according to the Platinum Multiplex Master Mix instructions. After thermal cycling, subsamples of the single emulsions are visualized using a 6D High Throughput microscope (Nikon) with a 10× objective. Bright-field, Cyanine 5 (Cy5) and fluorescein (FITC or FAM) images are acquired for every field of view.
After image acquisition, ImageJ (National Institutes of Health; NIH) is used to identify droplets based on their circular boundaries in the bright field images and then measure their fluorescence in the Cy5 and FITC images. The fraction of positive droplets is determined by counting the number with fluorescence signal above a threshold value, divided by the total number of imaged droplets. Samples are tested in triplicate and comprise a minimum of 5000 droplets. Encapsulation, PCR, and enrichment of viruses in double emulsion droplets To make double emulsions, the virus samples are first diluted in phosphate buffered saline. Approximately 1200 T4 virions are mixed with 1.4 × 10^5 ФX174 before combining with PCR reagents (Platinum Multiplex PCR Master Mix, Thermo Fisher) and PCR primers (IDT) specific for the species of interest. The middle phase of the double emulsion consists of HFE-7500 fluorinated oil (3M) with 2% (w/w) PEG-PFPE amphiphilic block copolymer surfactant [27], and the carrier aqueous phase of 4% (v/v) Tween 20, 1% (v/v) Pluronic F-68 (Gibco), and 10% (w/v) PEG (molecular weight 35 K) in water [26]. These solutions are loaded into syringes and processed through a microfluidic double emulsion maker using syringe pumps, similar to the single emulsions. The flow rates are 90 μL/h for the virus sample, 80 μL/h for the oil, and 250 μL/h for the outer aqueous phase. The double emulsion droplets exit the device through PE-2 tubing and are collected into a 1.5 mL microcentrifuge tube. Double emulsions produced by this method are ~35 μm in diameter and monodisperse. To prepare the double emulsion droplets for thermal cycling, the sample is transferred from the 1.5 mL microcentrifuge tube into 0.2 mL PCR tubes, such that each contains 90 μL of emulsion and 10 μL of fresh PCR buffer; the PCR buffer consists of 30 μL of 50 mM MgCl2 and 100 μL of 200 mM Tris pH 8.0 and 500 mM KCl and is essential for preventing PCR components from leaching out of the droplets into the carrier phase, in which they are soluble. The sample is cycled on a T100 thermal cycler (Bio-Rad) according to the Platinum Multiplex Master Mix instructions. After thermal cycling, 1× SYBR Green I (Life Technologies) is loaded into the carrier phase, permeating through the double emulsion shell and staining the droplets that have undergone PCR amplification. A fluorescence-activated cell sorter (FACS) Aria II (BD) is used to sort the emulsions to recover droplets that contain the virus of interest. The FACS chamber temperature is set to 4 °C and agitation speed to the highest setting to prevent droplets from sedimenting during the sort. The droplets strongly scatter the FACS laser, requiring a 2× Neutral Density (ND) filter to decrease signal into the detectable range. The microfluidic device produces uniform double emulsions and, consequently, the droplets appear as a compact cluster in forward versus side scatter, making them easy to distinguish from particulate and small oil droplets, which the FACS is instructed to ignore. The sample is analyzed in batches by diluting 100 μL of emulsion into 200 μL of 2% (v/v) Pluronic F-68 and 1% (w/v) PEG (molecular weight 35 K) in water, and gently mixing using a 200 μL pipette tip. The sample is loaded into the FACS and the double emulsions gated in the Forward Scatter (FSC) and Side Scatter (SSC) channels [22].
To read the SYBR channel relating to amplification, we use a 488 nm laser and a 505LP optical filter (BD Biosciences); the population has two peaks, one with low average intensity representing empty or negative droplets, and another with high average intensity representing SYBR positive droplets, which we gate to recover in either Eppendorf tubes (bulk recovery of target virus from a mixed population) or 96-well plates (recovery of single virion from a mixed sample). We use the strict "purity" setting of the instrument which discards events in which multiple droplets pass through the detection window at the same time. Amplification of recovered viral DNA Sorted droplets are briefly centrifuged to the bottom of the tube. To release nucleic acids, the droplets are ruptured by adding 20 μL of DI water and 50 μL of perfluoro-1-octanol (PFO), and vortexing for 1 min. The sample is centrifuged again, and the aqueous top phase containing the viral DNA removed using a micropipette. To confirm enrichment of T4 phage in the sorted emulsion, we use quantitative PCR (qPCR). The concentration of viral DNA is too low post-sorting to be reliably detected by bulk qPCR. To address this, we non-specifically amplify the material using digital droplet multiple displacement amplification (ddMDA) prior to qPCR analysis using the Qiagen REPLI-g Single Cell Kit. ddMDA is a non-specific method that amplifies all nucleic acids in a sample uniformly [28]. The sample is incubated with 3 μL of the Denaturation Solution for 10 min at 65°C. After heating, the reaction is halted by adding 3 μL of Stop Solution Mix. 20 μL of the REPLI-g sc Master Mix containing 14.5 μL of REPLI-g sc Reaction Buffer, 4.5 μL of water, and 1 μL of REPLI-g sc Polymerase is added to each sample. The sample is encapsulated into droplets using a 20 μm flow-focus drop maker [29, 30] and HFE-7500 fluorinated oil with 2% (w/w) PEG-PFPE amphiphilic block copolymer surfactant. The emulsion is collected into a 1.5 mL microcentrifuge tube and the reaction incubated at 30°C for 16 h. After incubation, the droplets are ruptured by adding 10 μL of PFO, vortexing, and spinning as above. Quantitative PCR analysis of sorted droplets To confirm that PAVS enriches for bacteriophage T4 over bacteriophage ФX174, we estimate concentrations of both viruses in the sorted and unsorted pools using qPCR (Stratagene Mx3005P, Agilent). The qPCR primer sequences are different from the ones for PAVS detection, so that in-droplet amplification products do not skew qPCR results. Cross-threshold (C t ) values for T4 and ФX174 in pre- and post-sorted samples are used to compute an enrichment factor. The amplification reagent for all the qPCR measurements is Maxima SYBR Green Master Mix (Thermo Scientific), and the qPCR primers are listed in Additional file 1: Table S1. PAVS enriches for specific viral species from a heterogeneous sample PCR-Activated Virus Sorting allows specific viruses in a mixed population to be detected and recovered by sorting. This is accomplished by encapsulating the viruses in double emulsion droplets using microfluidic technology and performing PCR in each droplet to probe for sequences of interest. Because the viruses are encapsulated at 0.1 per droplet, most droplets are empty or contain a single virion, in accordance with Poisson statistics (Fig. 1). If the target virus is present in a droplet, PCR amplification occurs, generating a fluorescent signal that can be detected and recovered by FACS. 
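As an illustrative aside, the short Python sketch below works out what Poisson loading at an average of 0.1 virions per droplet implies for droplet occupancy. The loading value comes from the text; the droplet geometry used for the back-of-the-envelope dilution estimate is an assumption of ours (it treats the full ~35 μm double emulsion diameter as the encapsulated volume), so the concentration figure is only indicative.

```python
from math import exp, factorial, pi

lam = 0.1  # average virions per droplet (loading stated in the text)

def poisson_pmf(k, lam):
    """Probability of finding k virions in one droplet under Poisson loading."""
    return lam**k * exp(-lam) / factorial(k)

p_empty = poisson_pmf(0, lam)
p_single = poisson_pmf(1, lam)
p_multiple = 1 - p_empty - p_single
print(f"empty: {p_empty:.3f}, single: {p_single:.3f}, multiple: {p_multiple:.4f}")

# Illustrative dilution estimate (assumed geometry, not a value from the paper):
droplet_diameter_um = 35.0  # approximate double emulsion diameter
radius_cm = droplet_diameter_um / 2 * 1e-4
droplet_volume_ml = (4 / 3) * pi * radius_cm**3  # 1 cm^3 == 1 mL
print(f"target loading ~ {lam / droplet_volume_ml:.1e} virions/mL of aqueous phase")
```

At this loading, roughly 90% of droplets are empty, about 9% hold a single virion, and fewer than 0.5% hold more than one, which is why a fluorescent droplet can be attributed to a single particle with high confidence.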
Due to the rapid rate at which microfluidics can encapsulate individual virions in droplets (>1 KHz), millions of single virus particles can be sorted in a few hours. PCR-Activated Virus Sorting (PAVS) workflow. Virus suspensions are encapsulated with PCR reagent and probe in double emulsion droplets, then thermal cycled and stained with SYBR Green. FACS recovers fluorescent droplets containing the viral species of interest, generating an enriched sample that is ready for downstream processing Specific detection and quantification of viral genomes To identify a virus in a droplet, PAVS uses a PCR assay interrogating for sequences that exist within the target species. In this way, the PCR primers are analogous to antibodies when sorting cells with FACS, providing a detectable fluorescence signal only when the target species is present. However, whereas generating high affinity antibodies against a target virus can be challenging, especially if it is uncultivable, designing specific PCR primers is straightforward. This makes PAVS general, allowing it to recover any virus of interest to which PCR primers can be designed. To illustrate this, we perform digital TaqMan PCR on samples containing T4, ФX174, and lambda virus, using probes specific for only bacteriophage T4 (Additional file 1: Table S1). The droplets are visualized using epifluorescence imaging after thermal cycling. As expected we observe TaqMan positive droplets in the T4 sample, demonstrating successful amplification when this virus is present (Fig. 2a). By contrast, TaqMan fluorescence is absent in the ФX174 and lambda negative controls (Fig. 2a), confirming that the reaction is specific. This shows that our TaqMan PCR assay can be used to differentiate between single virus particles of these species. a T4, ФX174, and lambda virions are partitioned into droplets with TaqMan primers and probe specific for T4. After thermal cycling, the T4 sample has TaqMan signal while the ФX174 and lambda negative controls have no signal, demonstrating that digital droplet PCR specifically detects target viruses. b The fraction of TaqMan positive droplets in digital PCR for T4 is closely related to the input T4 concentration, showing that digital droplet PCR quantitatively measures viral concentration. Error bars indicate the standard deviation for triplicate measurements In addition to enabling the detection of specific viruses in a sample, PAVS can count individual virus particles. To demonstrate this, we analyze a dilution series of T4 bacteriophage, reading out the results with fluorescence microscopy and image analysis. We find that, as expected, the fraction of positive droplets is directly proportional to T4 concentration (Fig. 2b). This is due to the viruses being loaded at limiting dilution, such that most droplets are empty but a small fraction contain virus particles. Under such conditions, the viruses are encapsulated individually and the number of droplets containing a virus is approximately equal to the number of viruses in the sample, in accordance with random Poisson encapsulation. As with qPCR, the minimum number of viruses necessary for PAVS depends on the specificity of the TaqMan assay. A strength of TaqMan assays is that they are highly specific, allowing confident detection of rare virus species. For example, in initial tests of this approach, we found that the rate of non-specific amplification in a droplet is 1 in ~100,000, so that viruses less rare than this can be confidently detected. 
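The quantification logic just described can be written down in a few lines. The sketch below applies the standard Poisson correction used in digital PCR and treats the reported 1-in-100,000 nonspecific amplification rate as a background to subtract; the function name, the example numbers, and the background handling are illustrative assumptions rather than the authors' analysis code.

```python
import numpy as np

def estimate_virus_count(n_positive, n_droplets, false_positive_rate=1e-5):
    """Poisson-corrected digital PCR estimate of the number of target genomes
    loaded into an emulsion, given the observed count of positive droplets."""
    # Subtract the expected number of spurious (nonspecific) positives.
    corrected = max(n_positive - false_positive_rate * n_droplets, 0)
    frac_positive = corrected / n_droplets
    # Under Poisson loading, fraction positive = 1 - exp(-lambda).
    lam = -np.log(1 - frac_positive)
    return lam * n_droplets

# Example: 2,000 positive droplets observed among 2 million analyzed.
print(f"{estimate_virus_count(2_000, 2_000_000):,.0f} target genomes")

# Detection floor: at ~1e-5 nonspecific amplification, 2 million droplets
# yield about 20 background positives, so targets rarer than a few tens of
# particles per sample cannot be distinguished from noise.
print(f"expected background positives: {1e-5 * 2_000_000:.0f}")
```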
Since we can routinely sort >2 million droplets in a single FACS run, this allows us to detect as few as ~20 virus particles in a sample. Multiplexed digital PCR can detect full-length virus genomes A unique and valuable property of PAVS is that it can differentiate between viruses that contain just one target sequence and others that contain multiple. This is possible because TaqMan PCR can be multiplexed using probes targeting different sequences labeled with fluorescent dyes of different color. Hence, viruses containing one sequence will be positive only at one color, whereas those with two sequences will be positive for two colors. These populations can then be separated by gating the fluorescence measurements to recover single- or double-positive droplets. To demonstrate the ability to multiplex the reaction, we synthesize primers and Cy5 TaqMan probes targeting a genomic region near the 5' end of the lambda genome, and others targeting regions at increasing distances away from the 5' end. The primer and probe sequences are listed in Additional file 1: Table S1, and a graphical representation of the probe locations with the Cy5 TaqMan probe in red and the FAM TaqMan probes in green is provided in Fig. 3a. Lambda DNA is combined with the PCR reagents and the sample is emulsified using the microfluidic device. After thermal cycling, the droplets are imaged using fluorescence microscopy (Fig. 3b) and analyzed to measure their intensity on the Cy5 and FAM channels (Fig. 3c). The droplets are characterized as positive for both targets (Cy5+FAM+), positive for one target (FAM + Cy5-, FAM−Cy5+), or negative for both (FAM−Cy5−). Each multiplexed PCR is performed in triplicate, containing 5000–8000 droplets. a Location of TaqMan PCR Cy5 probe in the lambda genome is shown in red, FAM probes are shown in green, and the location of the ApaI restriction site is shown in black. b Representative image of multiplexed PCR emulsion on Lambda DNA. c Representative scatterplot of Cy5 and FAM intensities for Lambda DNA. d Fraction of multiplexed droplets for Lambda DNA undigested (blue), ApaI digested (red), and Fragmentase digested (green) We observe less multiplexing when probe pairs are far apart, indicating that the probability that two target sequences exist within a given genome decreases for sequences that are more separated (Fig. 3d, blue curve); this implies that the genomes might be partially fragmented. To investigate this further, we perform a negative control in which we digest the genome with a restriction endonuclease cleaving at position 10,086 base pairs (bp), which is between the first and second FAM probes. If fragmentation is the source of lowered multiplexing signal, then the fraction of double-positives should fall precipitously beyond the cleavage point; indeed, this is what we observe, as shown by the red curve in Fig. 3d. As an additional negative control, we digest the lambda genome using a non-specific endonuclease (Fragmentase) producing ~500 bp products, and observe that double-positives are rare for all probe pairs (green curve). This demonstrates that PAVS can characterize the length distributions of viral genomes in a solution and, more generally, the presence of multiple genetic loci in a target virus; this should be useful for studying correlations between loci in single viruses that are on the same linear molecule or on entirely different molecules, such as in segmented virus genomes. PAVS can also be used to characterize the integrity of viral genomes. 
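To make the multiplexed readout concrete, a minimal sketch of the linkage calculation is shown below: droplets are classified on each color channel, and the fraction of anchor-probe-positive droplets that are also positive for the second probe is reported per probe pair. The array names and intensity cutoffs are placeholders for values measured from the droplet images, not part of the published protocol.

```python
import numpy as np

def linkage_fraction(cy5_intensity, fam_intensity, cy5_cutoff, fam_cutoff):
    """Fraction of Cy5-positive (anchor probe) droplets that are also
    FAM-positive for a second probe.

    A drop in this fraction for probe pairs that are far apart on the genome
    suggests the two loci are no longer on the same intact molecule."""
    cy5_pos = np.asarray(cy5_intensity) > cy5_cutoff
    fam_pos = np.asarray(fam_intensity) > fam_cutoff
    double_pos = cy5_pos & fam_pos
    return double_pos.sum() / max(cy5_pos.sum(), 1)
```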
PAVS allows target virus to be sorted out of a mixed population The PAVS workflow consists of two steps, a first in which target viruses are detected using single virus PCR in droplets, and a second in which the droplets are sorted to recover the target viruses. To demonstrate this, we construct a mixed sample of two bacteriophages, T4 and ФX174, at a ratio of 1:999, respectively. This 0.1% T4 spike-in is encapsulated at limiting dilution in droplets with PCR primers specific for T4 phage, thermally cycled, and stained with SYBR Green. If a particular droplet contains T4, the nucleic acids targeted by the PCR primers are amplified and the SYBR stain produces a fluorescent signal that fills the droplet. The fluorescence signal is detected with FACS and the positive droplets sorted into a 1.5 mL microcentrifuge tube. To validate that the PAVS workflow enriches for T4 over ФX174, we quantify virus concentrations in the sorted and unsorted pools using qPCR. The sorted droplets are ruptured and the viral genomes amplified by ddMDA. Equal concentrations of T4 and ФX174 DNA from the unsorted and sorted emulsions are subjected to qPCR (Fig. 4). The primers used to detect T4 target a different locus than the ones for PAVS sorting (Additional file 1: Table S1). The qPCR curve for T4 shifts to lower cycles post-sorting, demonstrating that T4 has been enriched. By contrast, the curve shifts to higher cycle numbers for ФX174, indicating that this virus has been de-enriched by sorting, as expected.
qPCR detection of bacteriophages ФX174 and T4 before and after FACS sorting. The shifts in the curves reflect the 2-fold change of the DNA quantity according to the specific primers being tested. Samples tested in triplicate.
To quantify the degree of enrichment, we compute an enrichment factor, e, defined as
$$ e = \frac{\left(n+1\right)\left(\frac{1}{2^{\Delta C_t^{\mathrm{T4}}}}\right)}{\left(\frac{1}{2^{\Delta C_t^{\mathrm{T4}}}}\right) + n\left(\frac{1}{2^{\Delta C_t^{\Phi\mathrm{X}174}}}\right)}, $$
where n is the ratio of the two viral species with respect to one another and ΔCt(T4) and ΔCt(ФX174) are the differences in cross-threshold (Ct) values for T4 and ФX174 between the pre- and post-sorted samples, respectively. For this experiment, n = 999, ΔCt(T4) is 2.16, and ΔCt(ФX174) is 5.45, yielding e = 9.69, indicating that the final sample is enriched by about tenfold for T4 from an initial concentration of 0.1%. The degree of enrichment is tunable over a large range, as the rarer the target is in the droplets before sorting, the more it is enriched thereafter. Conversely, if the target is abundant, then many droplets will be positive and only a minor enrichment is possible. To increase enrichment, the sample is thus diluted prior to partitioning in droplets, which reduces the rate of co-encapsulation of different viruses and false-positive recovery of off-target species. The achievable enrichment is also limited by the false-positive rate of droplet detection, which sets an upper bound on how much the sample can be diluted. The false-positive rate for our TaqMan assay is ~1/100,000 droplets, setting a theoretical upper enrichment limit of ~100,000×; however, the maximum enrichment achieved in practice can also be limited by other considerations, such as the specificity of assays or the number of positive viruses that must be recovered for downstream characterization.
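As a quick numerical check, the enrichment factor defined above can be computed directly from the reported ΔCt values. The short Python function below is an illustrative sketch (the names are ours) and reproduces the reported value of e = 9.69.

```python
def enrichment_factor(n, delta_ct_target, delta_ct_offtarget):
    """Enrichment factor e from the equation above.

    n is the off-target:target ratio in the input mixture; the delta-Ct values
    are the pre- vs post-sort shifts measured by qPCR for the target (T4) and
    off-target (phiX174) assays."""
    target_term = 2.0 ** (-delta_ct_target)
    offtarget_term = 2.0 ** (-delta_ct_offtarget)
    return (n + 1) * target_term / (target_term + n * offtarget_term)

# Values reported in the text: n = 999, dCt(T4) = 2.16, dCt(phiX174) = 5.45
e = enrichment_factor(999, 2.16, 5.45)
print(f"enrichment factor: {e:.2f}")  # ~9.7, i.e. roughly tenfold enrichment of T4
```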
PAVS recovery of single virions from a mixed sample Common FACS instruments can pool all positive droplets into one well or dispense controlled numbers into different wells, including down to single droplets. This is commonly used to isolate cells for single cell analysis (Fig. 5a). When combined with PAVS, this allows a heterogeneous mixture of viruses to be sorted, to isolate specific virions in the sample, which can then be subjected to additional analyses, such as qPCR. To illustrate this, we sort a sample of lambda virus with PAVS and dispense the positive droplets into wells in controlled numbers (Additional file 1: Table S1, Lambda FWD 2, Lambda REV 2, and Lambda probe 2). We load 1, 10, or 50 positive droplets into each well and analyze the recovered material with qPCR for primers targeting a different portion of the lambda genome than was amplified in the PAVS detection (Additional file 1: Table S1). The C t values decrease as the number of viruses dispensed increases, indicating that the viruses are present at higher numbers (Fig. 5b). When fewer than 50 viruses are sorted, it is difficult to reliably detect them in the sorted wells; wells with 10 viruses show amplification at C t values of 33, while single viruses do not amplify above the negative controls. a Workflow schematic for targeted virus sorting into well plates using PAVS. b qPCR detection of Lambda for 1, 10, or 50 viruses sorted into a well. Positive control of Lambda DNA shown in inset. c qPCR curves for 50 sorted or unsorted Lambda viruses per well. Positive control of Lambda DNA shown in inset. Samples tested in duplicate To confirm that the sorting is specific, we generate qPCR curves for wells containing 50 positive droplets and wells containing 50 unsorted droplets. The qPCR curve shifts left by an average of 4.24 C t values, demonstrating that the Lambda virus is more abundant in the sorted sample (Fig. 5c). While our results show that single viruses provide too little DNA for detection with standard qPCR, other post-sorting amplification methods could be implemented to improve sensitivity, such as higher efficiency PCR reagents, nested PCR [31], or non-specific ddMDA followed by qPCR [28]. Because DNA can fragment under flow through narrow channels, the ability to perform multiplexed TaqMan assays in the droplets can be used to identify and dispense only intact viral genomes into the wells. Discussion and conclusions PCR-activated virus sorting enables specific detection and isolation of target viruses from a mixed sample, analogous to what is possible with FACS for cells. PAVS has significant advantages over conventional approaches to virus genome study. For example, just as when studying microbes, specific and high throughput virus sorting should be valuable whenever large populations must be analyzed to recover a target species; this requires that only ~ 100 bp of the virus's genome be known with which to generate identifying TaqMan probes. For a presently-unknown species, this information may be obtained by performing short-read shotgun sequencing on a sample, or consulting existing metagenomic databases. PAVS can detect viruses residing within host microbes, including bacteria and eukaryotic cells. This is accomplished through the encapsulation of host cells followed by lysis, and subjection to TaqMan PCR for virus detection, which produces a positive signal if the virus is present within the host. 
The sorted hosts can then be subjected to SSU rRNA profiling or next generation sequencing to identify the species. This capability should be valuable for characterizing virus-host relationships in microbial ecologies or the human microbiome, for example, to determine which viruses infect which hosts – something that is presently extremely challenging due to the inability to culture most viruses and host microbes. PAVS opens new avenues in the study of virus-mediated diseases, such as those caused by HIV and EBV, where it enables detection of viral infection within host cells. Unlike oligonucleotide staining, PAVS reliably detects single-molecule viral genomes of interest. Moreover, the cells that are positive for infection can be recovered by sorting, allowing their genomes and transcriptomes to be sequenced. For example, the recovered cell lysates can be sequenced to characterize viral insertion sites, epigenetic correlations with virus infection, genome mutations of the specific virus of interest, or modulation of the host cell transcriptome. This will be valuable for studying the basic biology of the virus and for better understanding how it survives, replicates, and evades the host immune response.
Abbreviations
Cy5: Cyanine 5
ddMDA: digital droplet multiple displacement amplification
E. coli: Escherichia coli
EBV: Epstein-Barr virus
FACS: fluorescence-activated cell sorter
FITC/FAM: fluorescein (fluorescein isothiocyanate/6-carboxyfluorescein)
FSC: Forward Scatter
HIV: human immunodeficiency virus
IDT: Integrated DNA Technologies
ND: Neutral Density
NEB: New England Biolabs
NIH: National Institutes of Health
PAVS: PCR-Activated Virus Sorting
PCR: polymerase chain reaction
PDMS: poly(dimethylsiloxane)
PEG-PFPE: polyethylene glycol-perfluoropolyether
PFO: perfluoro-1-octanol
qPCR: quantitative PCR
rRNA: ribosomal RNA
SSC: Side Scatter
References
Walther W, Stein U. Viral vectors for gene transfer: a review of their use in the treatment of human diseases. Drugs. 2000;60:249–71.
Kreppel F, Kochanek S. Modification of adenovirus gene transfer vectors with synthetic polymers: a scientific review and technical guide. Mol Ther. 2008;16:16–29.
Bohannan BJM, Lenski RE. Linking genetic change to community evolution: insights from studies of bacteria and bacteriophage. Ecol Lett. 2000;3:362–77.
Suttle CA. Viruses in the sea. Nature. 2005;437:356–61.
Carroll MW, et al. Temporal and spatial analysis of the 2014-2015 Ebola virus outbreak in West Africa. Nature. 2015;524:97–101.
Treanor JJ. In: Kaslow RA, Stanberry LR, Le Duc JW, editors. Viral Infections of Humans: Epidemiology and Control. Springer; 2014. p. 455–78. doi:10.1007/978-1-4899-7448-8.
Young LS, Rickinson AB. Epstein-Barr virus: 40 years on. Nat Rev Cancer. 2004;4:757–68.
Trono D, et al. HIV Persistence and the Prospect of. 174–180. 2010.
Peterson LR, Jamieson DJ, Powers AM, Honein MA. Zika virus. N Engl J Med. 2016;374:1552–63.
Beerenwinkel N, Gunthard HF, Roth V, Metzner KJ. Challenges and opportunities in estimating viral genetic diversity from next-generation sequencing data. Front Microbiol. 2012;3:1–16.
Mokili JL, Rohwer F, Dutilh BE. Metagenomics and future perspectives in virus discovery. Curr Opin Virol. 2012;2:63–77.
Fuhrman JA, Campbell L. Marine ecology: microbial microdiversity. Nature. 1998;393:410–1.
Hugenholtz P. Exploring prokaryotic diversity in the genomic era. Genome Biol. 2002;3:REVIEWS0003.
Breitbart M, et al. Genomic analysis of uncultured marine viral communities. Proc Natl Acad Sci U S A. 2002;99:14250–5.
Angly FE, et al. The marine viromes of four oceanic regions. PLoS Biol. 2006;4:2121–31.
Edwards RA, Rohwer F. Viral metagenomics. Nat Rev Microbiol. 2005;17:504–10.
Shendure J, Ji H. Next-generation DNA sequencing. Nat Biotechnol. 2008;26:1135–45.
Metzker ML. Sequencing technologies - the next generation. Nat Rev Genet. 2010;11:31–46.
Alkan C, Sajjadian S, Eichler EE. Limitations of next-generation genome sequence assembly. Nat Methods. 2010;8:61–5.
Mardis ER. The impact of next-generation sequencing technology on genetics. Trends Genet. 2008;24:133–41.
Kunin V, Copeland A, Lapidus A, Mavromatis K, Hugenholtz P. A bioinformatician's guide to metagenomics. Microbiol Mol Biol Rev. 2008;72:557–78.
Lim SW, Abate AR. Ultrahigh-throughput sorting of microfluidic drops with flow cytometry. Lab Chip. 2013;13:4563–72.
Pappas D, Wang K. Cellular separations: a review of new challenges in analytical chemistry. Anal Chim Acta. 2007;601:26–35.
Xia Y, Whitesides GM. Soft lithography. Annu Rev Mater Sci. 1998;28:153–84.
Kim SC, Sukovich DJ, Abate AR. Patterning microfluidic device wettability with spatially-controlled plasma oxidation. Lab Chip. 2015;15:3163–9.
Holtze C, et al. Biocompatible surfactants for water-in-fluorocarbon emulsions. Lab Chip. 2008;8:1632–9.
Sidore AM, Lan F, Lim SW, Abate AR. Enhanced sequencing coverage with digital droplet multiple displacement amplification. Nucleic Acids Res. 2015;gkv1493. doi:10.1093/nar/gkv1493.
Anna SL, Bontoux N, Stone HA. Formation of dispersions using 'flow focusing' in microchannels. Appl Phys Lett. 2003;82:364–6.
Christopher GF, Anna SL. Microfluidic methods for generating continuous droplet streams. J Phys D Appl Phys. 2007;40:R319–36.
Tran TM, et al. A nested real-time PCR assay for the quantification of Plasmodium falciparum DNA extracted from dried blood spots. Malar J. 2014;13.
Huang Z, Buckwold VE. A TaqMan PCR assay using degenerate primers for the quantitative detection of woodchuck hepatitis virus DNA of multiple genotypes. Mol Cell Probes. 2005;19:282–9.
Acknowledgements
The authors thank B. Demaree for performing enzyme digest of viral samples. This work was supported by the National Science Foundation through a CAREER Award [grant number DBI-1253293] and MCB-1243963; the National Institutes of Health (NIH) [grant numbers HG007233-01, R01-EB019453-01, DP2-AR068129-01]; and the Defense Advanced Research Projects Agency Living Foundries Program [contract numbers HR0011-12-C-0065, N66001-12-C-4211, HR0011-12-C-0066]. Funding for open access charge: [NIH grant number DP2-AR068129-01] and funds from Portland State University to K.M.S.
Authors' contributions
SL, DS, KS, and AA designed the research. SL and DS performed experiments. SL, DS, KS, and AA wrote the paper.
Competing interests
The PAVS technology is the subject of a patent that is licensed to Mission Bio, Inc., which is commercializing the technology and of which A.R.A. is a founder.
Bioengineering and Therapeutic Sciences, University of California San Francisco, San Francisco, California, USA: Shea T. Lance, David J. Sukovich & Adam R. Abate
California Institute for Quantitative Biosciences, University of California San Francisco, San Francisco, California, USA
UC Berkeley-UCSF Graduate Program in Bioengineering, University of California San Francisco, San Francisco, CA, USA
Center for Life in Extreme Environments, Biology Department, Portland State University, Portland, Oregon, USA: Kenneth M. Stedman
Correspondence to Adam R. Abate.
Supplemental information. (PDF 3622 kb)
Lance, S.T., Sukovich, D.J., Stedman, K.M. et al. Peering below the diffraction limit: robust and specific sorting of viruses with flow cytometry. Virol J 13, 201 (2016). doi:10.1186/s12985-016-0655-7
Cloudera Fast Forward Labs
Inferring Concept Drift Without Labeled Data
FF22 · Aug 2021
This is an applied research report by Cloudera Fast Forward. We write reports about emerging technologies, and conduct experiments to explore what's possible. Read our full report about Inferring Concept Drift Without Labeled Data below, or download the PDF. You can view and download the code accompanying our concept drift experiments on Github.
What is concept drift?
What is a data stream?
Addressing the problem
Methods for inferring concept drift
1. Statistical test for change in feature space
2. Statistical test for change in response variable
3. Statistical test for change in margin density of response variable
4. Detect change in margin density of response distribution using a learned threshold
Inducing concept drift
Experimental Setup
Other Prediction Tasks
Semi-supervised Learning
After iterations of development and testing, deploying a well-fit machine learning model often feels like the final hurdle for an eager data science team. In practice, however, a trained model is never final. This milestone marks just the beginning of the perpetual maintenance race that is production machine learning. This is because most machine learning models are static, but the world we live in is dynamic. More specifically, the ability of a trained model to generalize relies on an important assumption of stationarity - meaning the data upon which a model is trained and tested are independent and identically distributed (i.i.d.). In real-world environments, this assumption is often violated, as human behavior - and consequently the systems we aim to model - is dynamically changing all the time.[1]
Figure 1: Examples of machine learning tasks where the effects of concept drift are prominent
Take, for instance, the impact of the COVID-19 pandemic on algorithm-driven businesses like inventory management. Instacart's model for forecasting in-store product availability dropped from 93% to 61% accuracy, due to a drastic change in shopping behavior as consumers stockpiled what previously were infrequently purchased goods. The model was forced to adapt to this transitory shift in its prior understanding of the world. Not all changes are this sudden, though. Consider the task of maintaining an email spam filtering service. The core technology consists of a text classification model that picks up on keywords in email content to block spammers. Over time, users will begin to manually report more messages as spam that are not caught by the filter. In this adversarial environment, spammers are continuously adjusting terminology to outwit the deployed spam filters, so models must relearn what language constitutes the evolving concept of spam to remain effective. Or think about the job of forecasting energy consumption, in which historical demand is just one piece of the puzzle. In practice, future demand is driven by a slew of non-stationary forces - like climate fluctuations, population growth, or disruptive clean energy tech - that necessitate both gradual and sudden domain adaptation.
Domain Adaptation
Domain adaptation (a subcategory of transfer learning) is the ability to apply an algorithm trained in one or more "source domains" to a different, but related "target domain."
In domain adaptation, the source and target domains share the same feature space, but different distributions.[2] Changes in environmental conditions like these are referred to as concept drift, and will cause the predictive performance of a model to degrade over time, eventually making it obsolete for the task it was initially intended to solve. Figure 2: Production model performance will decay over time without adaptation to drifting concepts To combat this divergence between static models and dynamic environments, teams often adopt an adaptive learning strategy that is triggered by the detection of a drifting concept. Supervised drift detection is generally achieved by monitoring a performance metric of interest (such as accuracy) and alerting a retraining pipeline when the metric falls below some designated threshold. While this strategy proves to be effective in theory, there are several limitations that often prevent its use in practice. Namely, it requires immediate access to an abundance of labels at inference time to quantify a change in system performance - a requirement that may be cost-prohibitive, or even outright impossible, in many real-world machine learning applications. In this report, we explore broadly applicable approaches for dealing with concept drift when labeled data is not readily accessible. We'll start by defining what we mean by concept drift and frame the limitations of supervised methods for detecting it. Then, we'll discuss why true unsupervised concept drift detection is not possible, and explore several alternative methods for dealing with it. Finally, we'll share the results of our experiments with the proposed methods, and wrap up with a discussion of considerations and limitations. Most machine learning systems today operate in a batch paradigm; they probe a historical data set to develop a model that reflects the world as it was at the time of training. But, as we've seen, the world is always changing, and the complex relationships that a model abstracts are also likely to change over time - causing model performance to deteriorate, if not accounted for. This phenomenon in which the statistical properties of a target domain change over time is considered concept drift.[3] Formally, concept drift between time \(t\) and \(t+1\) can be defined as: $$P_{t}(X,y) \not= P_{t+1}(X,y)$$ where \(P_t\) denotes the joint probability distribution at time \(t\) between the set of input variables \(X\) and the target variable \(y\). Since the joint probability can be decomposed as the product of the probability of \(X\) and the conditional probability of \(y\) given \(X\), changes in a data stream can therefore be characterized by changes in the components of this relationship according to the equation below. $$P_t(X,y) = P_t(X) \times P_t(y|X)$$ This decomposition yields two underlying sources of drift - feature drift and real concept drift. Source 1: Feature Drift Feature drift (also referred to as covariate shift, feature change, input drift) characterizes the scenario where the distribution of one or more input variables change over time (i.e., \(P(X)\) changes). Figure 3: Forms of feature drift. The classification boundary depicted at time (t+1) represents the previously learned relationship between features and targets at time (t). Colors represent ground truth classes of the data points at the specified time step. This is seen in both Figure 3.a & 3.b above, where the distribution of features has changed from time \(t\). 
In Figure 3.a, feature drift has occurred in a region that directly affects the outcome of the learned classification boundary, causing model performance to decrease (and thus making it classified as both feature drift and real concept drift). However, feature drift can also occur where \(P(X)\) changes over time, but the changes do not affect the learned decision boundary. This describes a specific type of feature drift called virtual drift, as seen in Figure 3.b. This is an important distinction because, as we see here, only the changes in \(P(X)\) that affect the prediction decision actually warrant a model adaptation. For example, consider a clothing brand that is looking to recommend items for a given customer as relevant or not relevant. Suppose this customer lives in a tropical climate. Lightweight, breathable clothing items are relevant to them - while heavy, cold weather apparel is not. In this scenario, the independent features \(X\) are both the customer's preferences (e.g., age, size, location) and the brand's product line. The dependent variable, \(y\), is the relevance of a clothing item to the customer. If the brand's lead designer quits and is replaced, the brand's design style will naturally change as a result (i.e., change in \(P(X)\)). However, warm weather clothing still remains relevant for this customer, despite the stylistic differences (i.e., no change in \(P(y|X)\)). This scenario corresponds to a virtual drift.[4] Suppose now that, due to a shift in brand strategy, the company alters their product focus to sell mostly cold weather gear and fewer warm weather items, but the designer (and style) stay the same. This scenario also corresponds to a feature drift (Source 1); however, it's one that does impact the decision boundary (i.e., \(P(y|X)\)). Therefore, this scenario is also categorized as Source 2 drift. Source 2: Real Concept Drift The second source of drift, called real concept drift (also commonly referred to as actual drift, concept shift, conditional change), refers to changes in \(P(y|X)\) and signals that a previously learned relationship between features and targets no longer holds true. Unlike feature drift, this type of drift will always cause a drop in model performance. Figure 4: Forms of real concept drift. The classification boundary depicted at time (t+1) represents the newly learned relationship between features and targets at time (t+1). Colors represent ground truth classes of the data points at the specified time step. It's important to note that real concept drift can happen either with or without a change in \(P(X)\).[5] This nuance is shown in Figure 4.a, where both the input feature distributions and the learned decision boundary have changed in the new time step. In contrast, Figure 4.b demonstrates a scenario where input distributions remain constant, while the ground truth class labels have actually evolved. Continuing with our previous example, suppose that the customer moves from their tropical paradise to the Alaskan tundra, while the clothing brand makes no changes to their offerings or staff. In this case, the very meaning of "relevance" flips, making cold weather gear relevant and warm weather clothing irrelevant. This describes another example of real concept drift, but with no change in \(P(X)\). However, the real world is rarely ever this clean-cut, and oftentimes both sources of drift are at play simultaneously. 
Let's now imagine a situation where the customer moves to a temperate climate with cold nights and warm days, and the company slightly alters their product mix towards cold weather gear. Here, we observe changes in both \(P(X)\) and \(P(y|X)\), making it difficult to attribute concept drift to any single source. Additional Classifications Both feature drift and real concept drift can be further classified based on the rate at which the concept evolves. For instance, the drift could occur abruptly, resulting in a quick change in the distribution. (Think of drifts induced by a sensor or an equipment failure.) Such cases are considered sudden concept drift. There are other instances where the drift occurs slowly over time - like the drift induced by rising temperatures in the atmosphere. These are deemed gradual concept drift. In addition, there are also recurring concept drifts, which are patterns or trends that tend to repeat themselves at intervals, and are commonly found in seasonal data. Before moving on, let's define some additional terminology that will be mentioned throughout this report. To discuss the idea of concept drift in production, we must consider dynamic data environments. For this reason, we reference concept drift detection and adaptation with regard to data streams, where instances arrive continuously and sequentially over time. Streaming data is often generated on the fly - potentially at a fast and variable rate, and with infinite range - making it a prime candidate for evolving data distributions. Despite this, concept drift is not exclusive to data streams. And, in order to frame a discussion involving both stream and batch contexts, concept drift detection methods commonly employ the notion of sliding windows, or groups of sequentially ordered observations. Figure 5: Data streams are decomposed into windows of observations to establish context upon which concept drift occurs. In general, one window contains the instances belonging to the most recent known concept, which were used to train or update the deployed model, and one window contains instances which may have suffered a concept drift. We refer to these windows as the reference window and detection window, respectively.[6] With these definitions in mind, we see that real concept drift in a data stream (Source 2) poses the main concern for production models, since it directly impacts model performance. The most effective solution to address this issue involves detecting when the learned relationship between features and targets is no longer appropriate for incoming data, and then training a new model to learn the novel concept. An adaptive workflow like this is shared among common supervised methods like Drift Detection Method (DDM), Early Drift Detection Method (EDDM), and ADaptive WINdowing (ADWIN). We describe this workflow in Figure 6, below. Figure 6: General workflow of supervised drift detection methods that use significant changes in performance metrics to signal concept drift. In general, these techniques monitor a task-dependent performance metric like accuracy, F-score, or precision/recall. If the metric of interest deviates from an acceptable level (as determined during training evaluation on the reference window), a drift is signaled. Figure 7: Impact of supervised concept drift detection on machine learning system performance over time. The cumulative effect of this approach over the lifetime of a machine learning system is highlighted in Figure 7. 
Initially, the system celebrates strong performance because the model has learned from recent data. After some time, accuracy declines as concepts evolve, until ultimately a metric threshold is crossed, and drift is detected. System performance then realizes an immediate boost after retraining, as the new concept is absorbed. Despite the ample research and proven effectiveness of these supervised methods, they all suffer from a shared, impractical assumption - that true labels are instantaneously available after inference. In most use cases, the immediate availability of true labels is infeasible for several reasons. First, annotating data is expensive, in both cost and labor, as it often requires hired domain expertise. The issue is described succinctly by the authors of On the Reliable Detection of Concept Drift from Streaming Unlabeled Data: "To highlight the problem of label dependence, consider the task of detecting hate speech from live tweets, using a classification system facing the Twitter stream (estimated at 500M daily tweets). If 0.5% of the tweets are requested to be labeled, using crowdsourcing websites such as Amazon's Mechanical Turk2, this would imply a daily expenditure of $50K (each worker paid $1 for 50 tweets), and a continuous availability of 350 crowdsourced workers (assuming each can label 10 tweets per minute, and work for 12 hours/day), every single day, for this particular task alone. The scale and velocity of modern day data applications makes such dependence on labeled data a practical and economic limitation." Second, in addition to label scarcity, verification latency - or the period between the availability of an unlabeled test instance and the availability of its true label - is application-dependent and often variable.[7] For example, it can take several months for an act of credit card fraud to be reported (i.e., ground truth) from the time the fraudulent transaction occurred. If F1 score is the only metric being used to track model performance (and thus detect concept drift), there may be several months of higher than normal fraudulent activity without any signal that something is wrong. Finally, some use cases operate in an extreme case of infinite verification latency, where ground truth labels are impossible to ever obtain. Consider a bank which uses machine learning to power its lending decisions. If a model predicts a loan will default for a given applicant, the loan is never granted; therefore, it can never be determined if the loan would have actually defaulted or been repaid. Use cases like this demand an alternative solution. Due to these limitations, there is a clear need for effective methods that can detect real concept drift (Source 2) in an unsupervised manner. Unfortunately, this proves to be an impossible task, as the only way to confirm a change in \(P(y|X)\) with certainty is to have some access to ground truth labels - there is no free lunch. In the absence of labeled data, the best we can do is attempt to infer real concept drift by detecting feature drift (Source 1). That is, we are interested in quantifying visible changes in \(P(X)\) and surmising that those changes correspond to meaningful change in the classification boundary \(P(y|X)\). Of course, this approach is prone to error because as we've seen: Not all changes in \(P(y|X)\) are visible from \(P(X)\), resulting in false negative detections where real drift occurs but is not signaled. 
Not all changes in \(P(X)\) actually affect \(P(y|X)\), resulting in overly sensitive detectors that trigger costly false positive detections. Inferring real concept drift in an unsupervised fashion thus becomes a delicate balancing act, in order to minimize the number of false positive detections (and therefore labels needed) while remaining sensitive enough to pick up on meaningful changes in the feature space that likely contribute to a change in concept. In comparison to recent research on supervised drift detectors, much less attention has been paid to unsupervised methods.[8] However, detecting shifts in data distributions is a well-explored field of data mining, with solutions ranging from multiple hypothesis testing and novelty detection to discriminative distance and algorithm-specific techniques. In our exploration, we focused on methods that are model-agnostic and truly unsupervised, to ensure broad applicability in practice. In this section, we present four methods for inferring concept drift without labels, and use a binary classification task to frame the discussion. The ultimate goal of feature drift detection is to determine if two distributions are different. Therefore, the first and most basic approach to infer concept drift applies a hypothesis test to flag if a statistically significant change has occurred between the reference and detection windows for each feature in a given data stream. For continuous features, we use a two-sample Kolmogorov-Smirnov (KS) test, which is a non-parametric hypothesis test used to check whether two samples originate from the same distribution. For categorical features, we make use of a Chi-Squared goodness of fit test. Figure 8: Hypothesis tests are performed feature-wise when dealing with multivariate tabular data. Correction is applied to the tests to arrive at overall determination of significance. In the case of multivariate tabular data, we can test each feature independently (while accounting for multiple tests) to arrive at an overall signal of drift or no-drift, as seen in Figure 8. The Bonferroni test is a notable approach for correcting multiple hypothesis tests - while making conservative assumptions about the (in)dependence among them - to arrive at a final result. In contrast to this feature-wise approach, we could also apply a multivariate two-sample hypothesis test like the kernel-based technique called Maximum Mean Discrepancy (MMD). MMD allows us to distinguish between two probability distributions, based on the mean embeddings of those distributions. While this method does side-step the need for multiple tests, the choice of kernel is critical to ensuring its correctness, and a linear time complexity imposes a potential bottleneck in streaming applications.[9] High Dimensional Data When it comes to high dimensional data, (e.g., images, text) best practices for detecting drift with two sample tests are an area of active research. Recent work proposes combining dimensionality reduction techniques (e.g., PCA, randomly initialized auto-encoders) with subsequent two-sample testing.[10] The overall idea is that these dimensionality reduction techniques yield either a uni- or multi-dimensional representation of the data. We can then choose a suitable statistical test to apply to the reduced data stream to detect drift. While feature-wise and multivariate hypothesis testing is broadly applicable across machine learning use cases, it has several limitations as a real concept drift inference tool. 
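Before turning to those limitations, here is a minimal sketch of this feature-wise testing approach, assuming the reference and detection windows are held in pandas DataFrames and that the caller supplies the split between continuous and categorical columns; the function name and defaults are ours.

```python
import pandas as pd
from scipy import stats

def feature_drift_detected(reference: pd.DataFrame, detection: pd.DataFrame,
                           continuous_cols, categorical_cols,
                           alpha: float = 0.05) -> bool:
    """Method 1: flag drift if any feature-wise test is significant after a
    Bonferroni correction across all features tested."""
    corrected_alpha = alpha / (len(continuous_cols) + len(categorical_cols))

    for col in continuous_cols:
        # Two-sample Kolmogorov-Smirnov test on the raw feature values.
        _, p_value = stats.ks_2samp(reference[col], detection[col])
        if p_value < corrected_alpha:
            return True

    for col in categorical_cols:
        # Chi-squared goodness of fit: compare detection-window counts with
        # counts expected under the reference window's category proportions.
        # (Categories unseen in the reference window are ignored here.)
        ref_counts = reference[col].value_counts()
        det_counts = detection[col].value_counts().reindex(ref_counts.index, fill_value=0)
        expected = ref_counts / ref_counts.sum() * det_counts.sum()
        _, p_value = stats.chisquare(f_obs=det_counts, f_exp=expected)
        if p_value < corrected_alpha:
            return True

    return False
```

A single significant feature is enough to raise the alarm here, which is exactly the sensitivity problem discussed next.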
Because these methods consider drift in each feature to be equally important (despite p-value correction across tests), they are prone to false positive detections. Imagine the case where several features in a datastream exhibit drift, but none of them are of high importance to a classifier's decision-making process. It's likely that the present drift will not actually impact the learned decision boundary, despite the hypothesis test's ringing alarm. This limitation arises because we've excluded the classifier from the detection process and are making decisions solely on the distribution characteristics of incoming features - resulting in increased sensitivity to change and a high number of false alarms. Unlike the previous method, where only the feature space is analyzed, our second approach infers concept drift by tacitly involving the classifier in the detection process, making change detection relevant to the prediction task at hand. To do so, we apply a model that's been trained on the reference window to generate predicted class probabilities (a response distribution) for observations in the detection window. Then, we use a k-fold procedure to obtain probability estimates for the reference window. K-fold Procedure In the k-fold procedure, the entire dataset is sequentially divided into k bands of samples. In the first iteration, the first k-1 bands serve as the training set, to learn a model that is used to generate predictions over the kth band of observations. This process is repeated k times where each band functions as the test set exactly once, yielding a response distribution for the entire reference window.[11] With our two populations in hand, we can apply a Kolmogorov-Smirnov hypothesis test to see if the response distributions between reference and detection windows differ significantly. In effect, the trained model serves as a dimensionality-reducing preprocessing step. It leverages its learned relationship between features and targets (i.e., \(P(y|X)\)) to generate a response distribution that is sensitive to feature space changes that will likely affect the performance of the model in question. If important features in the detection window have drifted from those the model learned on, we would expect the classifier to produce significantly different response distributions, as depicted in Figure 9. Figure 9: Example response distributions between reference and detection windows for a binary classification task. The plot on the left shows nearly identical distributions resulting from a case where feature drift is not present, while the plot on the right depicts divergent distributions. Although it's a step in the right direction, this method is still overly sensitive. That's because, by design, a KS test is responsive to changes across the entire response distribution. But do we really care about changes in regions of high confidence? For example, if the density shape between 0 and 0.25 confidence level changes a bit, it doesn't impact the classification outcome of those points, because they're still well below the 0.5 decision threshold. This leads us to the next approach. Rather than test for changes across the entire cumulative response distribution, we can instead focus on just the regions of uncertainty around our decision threshold, where slight variations in confidence lead to different classification outcomes. To do so, we must introduce a parameter that specifies a desired margin width around the decision boundary, to define a region of uncertainty. 
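Before moving on, a compact sketch of the response-distribution test just described: a classifier is fit on the reference window, k-fold predictions provide the reference response distribution, and a two-sample KS test compares it against scores on the detection window. The model and significance level are placeholders (the hyperparameters simply mirror those used in the experiments later on), and the returned score arrays are the raw material for the margin-based variants that follow.

```python
from scipy.stats import ks_2samp
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict

def response_drift_detected(X_ref, y_ref, X_det, alpha=0.05, k=5):
    """Method 2: compare the response (predicted probability) distributions
    of the reference and detection windows with a two-sample KS test."""
    model = RandomForestClassifier(n_estimators=5, max_depth=5, random_state=0)

    # k-fold scores over the reference window: each observation is scored
    # by a model that did not train on it.
    ref_scores = cross_val_predict(model, X_ref, y_ref, cv=k,
                                   method="predict_proba")[:, 1]

    # A model trained on the full reference window scores the detection window.
    det_scores = model.fit(X_ref, y_ref).predict_proba(X_det)[:, 1]

    _, p_value = ks_2samp(ref_scores, det_scores)
    return p_value < alpha, ref_scores, det_scores
```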
Margin here is the portion of the prediction space which is most vulnerable to misclassification.[12] Then, for both windows, we classify each observation as in-margin or out-of-margin, based on its predicted confidence score. We compare these categorical populations between windows, using a Chi-square goodness of fit test to check for significant changes in the margin density. The underlying assumption here is that a significant change in the number of samples in the margin is indicative of a drifting concept. Figure 10: Response distributions that diverge only at tail ends do not impact classification results (left), whereas changes of distribution within the margin do (right). The decision boundary here corresponds to a confidence of 0.5. The impact of this approach is highlighted in Figure 10, above. On the left, we see the case where a divergence exists towards the tail ends of the distribution, but the rest of the probability space remains congruent. This example would fail the KS test (described in Method 2), signaling a feature drift, and consequently request costly new labels for retraining. However, because the divergence exists far from the decision boundary, it would likely not have impacted the classification results, making it a false positive detection. In contrast, the margin density approach would tolerate this inconsequential change. Only when a statistically significant divergence occurs inside the margin will Method 3 raise an alarm, as shown in Figure 10 (on the right). Introducing a margin of uncertainty to desensitize feature drift detection does help reduce the number of false positive detections. However, there is still room for improvement. Each method we have discussed so far relies on hypothesis testing to signal drift. Unfortunately, the mere falsity of a null hypothesis doesn't say much about our window samples, other than that they don't come from an identical population. But do we really care if the populations are identical? If our goal is to reduce the sensitivity of feature drift detections, we probably care more about quantifying how different two populations are, which is something that a statistical test cannot provide. A quantitative measure of similarity affords us the flexibility to set our own threshold, depending on our tolerance for error. In essence, we need a way to distinguish what level of change is statistically significant from what is practically significant. Our final method uses a learned threshold to detect change in the margin density of a response distribution. Building upon the previous two methods, we first obtain a response distribution for each window, and introduce a margin to classify predictions as in or out of the region of uncertainty. However, rather than applying a Chi-square test (as in Method 3), we establish an expected value for margin density based on the reference window. This is accomplished during the k-fold procedure by calculating the percentage of instances falling in margin, relative to the total instances in the window (i.e., margin density) for each fold. The cross-validation procedure produces a population of \(k\) margin density values from which we can calculate a mean (\(MD_{ref}\)) and standard deviation (\(\sigma_{ref}\)), providing a strong estimate of the expected margin density value and an acceptable deviation. These values are then used to signal change in the detection window based on a desired level of sensitivity, \(S\). 
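Both the chi-square comparison of Method 3 and the learned-threshold rule formalized next rest on the same margin bookkeeping. The sketch below computes margin membership and margin density from predicted positive-class probabilities (for example, the ref_scores and det_scores produced above) and implements the Method 3 check; the margin half-width of 0.1 and the use of a 2x2 homogeneity test are illustrative choices on our part.

```python
import numpy as np
from scipy.stats import chi2_contingency

def in_margin(scores, threshold=0.5, half_width=0.1):
    """Boolean mask of predictions falling inside the uncertainty margin."""
    return np.abs(np.asarray(scores) - threshold) <= half_width

def margin_density(scores, threshold=0.5, half_width=0.1):
    """Fraction of predictions inside the margin."""
    return float(in_margin(scores, threshold, half_width).mean())

def margin_drift_detected(ref_scores, det_scores, alpha=0.05,
                          threshold=0.5, half_width=0.1):
    """Method 3: chi-squared test on in-margin vs. out-of-margin counts."""
    ref_in = int(in_margin(ref_scores, threshold, half_width).sum())
    det_in = int(in_margin(det_scores, threshold, half_width).sum())
    table = np.array([
        [ref_in, len(ref_scores) - ref_in],  # reference: in / out of margin
        [det_in, len(det_scores) - det_in],  # detection: in / out of margin
    ])
    _, p_value, _, _ = chi2_contingency(table)
    return p_value < alpha
```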
A practically significant change is signaled when the margin density of the detection window differs by more than \(S\) standard deviations from the expected margin density value, as seen in the equation below. $$ |MD_{det} - MD_{ref}| > S \times \sigma_{ref} $$ By setting the expected margin density value from the population observed in the reference window, we establish a baseline specific to the problem at hand. Adding the sensitivity parameter offers control over the detection process. Larger values of \(S\) will reduce the number of false positive detections, but possibly at the cost of increasing false negatives - a decision that might make sense when the cost/impact of a false negative is low. Inversely, lowering \(S\) might be a good idea for critical applications, where the cost of a real drift could be harmful if undetected.[13] To gain a deeper understanding of these unsupervised concept drift detection methods, we needed to experiment. In particular, we aimed to understand the tradeoff between false positive and false negative drift detections produced by each of these methods over a machine learning system's lifetime. To do so, we designed an experimental setup - which consisted of an adaptive learning workflow with a synthetic dataset - to simulate the lifecycle of a model in production. Experimenting on production-related issues like concept drift is challenging. Concept drift research is often performed on purely synthetic datasets, where variables are randomly generated according to predefined rules to allow for control over the type, timing, and magnitude of drift. However, these datasets do not truly mimic the relationships present in real world data. In contrast, real world datasets lack precise flags for the start and end of drifting concepts and often include mixed drift types, making it difficult to cleanly evaluate drift detection methods. For our experimentation, we decided to induce drift into a real dataset, as it allowed us to retain genuine data properties while ensuring significant drift was actually present. To do so, we applied an extended version of the drift induction process that was used by the authors of On the Reliable Detection of Concept Drift from Streaming Unlabeled Data to the Covertype Data Set from the UCI Machine Learning Repository​. Before inducing drift, the dataset was reduced to a binary classification problem, by considering only the two most populous classes, and all features were normalized in the range of [0,1]. Additionally, all soil type variables were dropped, to simplify the problem. This resulted in a dataset with 14 features, one binary target variable, and ~495,000 observations. The drift induction process works by first shuffling the entire dataset in an attempt to remove any existing concept drift. We then create changepoints in the data stream, by selecting a subset of features and randomly rotating their values for all examples after a given changepoint (the equivalent of randomly swapping columns). This basic approach ensures that feature drifts are induced, while also maintaining the original properties of the dataset. We created three changepoints evenly spaced across the entire dataset. For the first changepoint, we selected the three most impactful features to rotate (determined by ranking features based on their impurity-based feature importance). For the second changepoint, we selected the fourth to tenth most important ranked features to rotate. 
And finally, we selected the remaining three least important features for the third changepoint. At each changepoint, drift was forward chained through the remainder of the dataset to provide consistency across concepts. This process resulted in four unique "concepts" (~124k observations each) with varying degrees of drift. The entire data preparation and drift induction process we followed can be referenced in this notebook. With a drifting dataset in hand, we then implemented a rudimentary adaptive learning workflow, in order to evaluate the proposed detection methods (discussed above) in a lifecycle context. The workflow consisted of two sliding windows (reference and detection) of fixed size passing over the drift induced datastream, where the decision to retrain at each timestep is made by the given drift detection method. Figure 11: The adaptive learning workflow used to evaluate various concept drift detection methods. The workflow described in Figure 11 ran until predictions were generated for every observation in the data set. Throughout each experiment, we recorded the incremental accuracy of predictions on the datastream, as well as the number of requested true labels. Incremental accuracy provides a cumulative measure of performance of the classification system over the entire datastream, since several different "deployed" models likely exist. The number of requested true labels corresponds directly to the number of drift detections, and thus to the number of retrainings demanded. In addition to accuracy and number of retrainings, we also captured if a real concept drift occurred at each window timestep (irrespective of what the drift detection method indicates). Of course, this is a luxury we are only afforded in an experimental setting because we have access to all ground truth labels, which allows us to evaluate our various drift detection methods. This source of truth is determined using a k-fold approach on the reference window, similar to that used in Method 4 above, except rather than gathering a population of k margin density values, we collect a population of accuracy measures to establish an expected accuracy and acceptable deviation. If the accuracy on the detection window falls outside three standard deviations of the expected accuracy, we conclude that a real concept drift occurred (i.e., significant change in \(P(y|X)\)). This approach for quantifying real concept drift (versus just using the three drift induced changepoints) allows us to account for unknown drifts that may exist in the underlying data, despite our attempt at removing it via random shuffle. This ground truth indicator serves as the basis for classifying drift detections as false positives or false negatives. Using this experimental setup, we evaluated detection methods 2, 3, and 4 from above. We compared the results against a baseline and topline scenario. The baseline case is simply a classifier that never adapts to drift (i.e., it is trained only on the initial reference window and used to evaluate on the entire remaining datastream). The topline scenario greedily retrains a new model at each window timestep. All our experiments shared a common set of parameters - including model type (random forest classifier), model hyperparameters (n_estimators=5, max_depth=5) , and window size (35,000 observations). 
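For concreteness, the margin-density signal of Method 4 (and the analogous k-fold estimate used for the accuracy-based ground truth) might be sketched as follows. This is a minimal sketch assuming scikit-learn; the margin width and helper names are illustrative choices, not values taken from the report, while the random forest hyperparameters mirror the ones stated above.

```python
# Minimal sketch of the margin-density drift signal (Method 4), assuming scikit-learn.
# MARGIN_WIDTH and the helper names are illustrative, not taken from the report.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold

MARGIN_WIDTH = 0.1  # predictions with |p - 0.5| < MARGIN_WIDTH count as "in margin"

def margin_density(model, X, width=MARGIN_WIDTH):
    """Fraction of predictions whose confidence falls inside the uncertainty margin."""
    proba = model.predict_proba(X)[:, 1]
    return np.mean(np.abs(proba - 0.5) < width)

def reference_margin_stats(X_ref, y_ref, k=5):
    """k-fold estimate of the expected margin density (MD_ref) and its deviation."""
    densities = []
    for train_idx, test_idx in StratifiedKFold(n_splits=k).split(X_ref, y_ref):
        model = RandomForestClassifier(n_estimators=5, max_depth=5)
        model.fit(X_ref[train_idx], y_ref[train_idx])
        densities.append(margin_density(model, X_ref[test_idx]))
    return np.mean(densities), np.std(densities)

def drift_signaled(model, X_det, md_ref, sigma_ref, sensitivity=1.0):
    """Signal drift when |MD_det - MD_ref| > S * sigma_ref."""
    return abs(margin_density(model, X_det) - md_ref) > sensitivity * sigma_ref
```

The accuracy-based ground-truth check described above follows the same pattern, with per-fold accuracy in place of margin density and a fixed threshold of three standard deviations.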
The window size and model hyperparameters were empirically selected by finding a combination that did not result in overfitting between reference and detection sets, while allowing multiple window timesteps to fit within each induced concept. Our entire set of experiments and supporting code can be found here. The cumulative accuracy of each drift detection experiment is visualized and summarized over the full datastream in Figure 12, below. Here, the vertical lines represent the equally spaced changepoints where drift was systematically induced in the datastream. Figure 12: Summary of experimental results of six drift detection methods used to signal retraining in an adaptive learning simulation. Vertical lines represent the equally spaced changepoints where drift was systematically induced in the datastream. Note that the Topline (orange line) and Method 2 (green line) experiment lines overlap, so only green is visible. By observing the baseline case (blue line), where no model retraining occurs after the first window, we see a steep decline in accuracy immediately after the first changepoint. This makes sense because the changepoint introduces a severe drift (i.e., rotation of the most important features) and, without adaptation, the model fails to reflect the new concepts - resulting in a drop of ~18% accuracy from initial training to the end of the data stream. In contrast, we see that by retraining at each window timestep, the topline case (orange line overlapping with green line) is able to recover from each concept change, resulting in a cumulative accuracy drop of just ~2% over the datastream. Of course, this comes at the cost of 13 retrainings (or 99% of the total labels requested), of which only three coincided with actual concept drift - resulting in 10 false positives. As mentioned above, Method 2 (KS test on response distribution) produces identical results to our topline case; the test signals a drift at every window timestep. Intuitively, this does not make sense, because there are three distinct windows (of 35,000 records each) within each induced concept - that is, three opportunities for the model to learn and adapt to each new concept. This points to the major flaw of Method 2: comparing entire response distributions with a KS test proves overly sensitive to small differences that may actually be acceptable in practice (thus prompting requests for unnecessary retraining). This is further evidenced by the results for Method 3 (Chi-squared test on response margin), where only 10 drifts were signaled - capturing all six actual drift occurrences while producing four false positive detections. In this case, by statistically testing for significant changes inside a defined margin rather than across the entire distribution, we required just 77% of the total labels while maintaining a cumulative accuracy within 1 percentage point of the topline case, without incurring any false negative (missed) detections. Finally, we see that Method 4 (using a learned threshold of margin density) relaxes drift detections even further. With the sensitivity value set to one, we observe only five drift detections demanding just 42% of the total labels, while producing a cumulative accuracy just 2 percentage points lower than the topline case. However, by stepping away from statistical tests, we notice that this experiment actually misses three windows of actual drift (false negatives).
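For reference, the two hypothesis tests compared above (Method 2's KS test on the full response distribution and Method 3's chi-square test on margin counts) can be sketched roughly as follows, assuming SciPy. Both operate on confidence scores produced by the currently deployed model; the 0.1 margin width and the 0.05 significance level are illustrative choices.

```python
# Rough sketches of the two statistical drift tests, assuming SciPy.
import numpy as np
from scipy.stats import ks_2samp, chisquare

def ks_drift(scores_ref, scores_det, alpha=0.05):
    """Method 2: two-sample KS test on the full response distributions."""
    _, p_value = ks_2samp(scores_ref, scores_det)
    return p_value < alpha

def margin_counts(scores, width=0.1):
    """Counts of in-margin vs. out-of-margin confidence scores."""
    in_margin = np.sum(np.abs(np.asarray(scores) - 0.5) < width)
    return np.array([in_margin, len(scores) - in_margin])

def chi2_margin_drift(scores_ref, scores_det, width=0.1, alpha=0.05):
    """Method 3: chi-square goodness-of-fit test on in-margin vs. out-of-margin counts."""
    ref, det = margin_counts(scores_ref, width), margin_counts(scores_det, width)
    expected = ref / ref.sum() * det.sum()  # scale reference proportions to detection-window size
    _, p_value = chisquare(f_obs=det, f_exp=expected)
    return p_value < alpha
```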
Increasing the sensitivity parameter to a value of two exacerbates this problem, decreasing the cumulative accuracy even further, as seven actual drift occurrences are missed. As mentioned previously, experimenting with concept drift is challenging, because it requires us to simulate a production environment and make assumptions in an attempt to emulate it. One assumption that we made while designing our dataset is that randomly shuffling all records upfront would eliminate existing drift and provide a clean slate upon which we could introduce controlled drift. This false assumption was brought to light as we saw more than three actual drift occurrences in several of the experiments. Because we chose to define real drifts by comparing change in accuracy between reference and detection windows, we saw an unequal number of "actual drifts" across experiments, making head-to-head comparison of results difficult. Additionally, we chose to implement non-overlapping, fixed-size windows that advance in full for all experiments. Many drift detection methods today operate in an incremental or online fashion where detection windows are advanced for each incoming observation rather than batches. This strategy eliminates the period of inactivity until a new detection window reaches the minimum number of samples, and may decrease the delay in response time for sudden drift detections. Another limitation is seen in our rudimentary retraining scheme, where labels are requested for the entire detection window upon a drift signal and the existing model architecture is retrained in place. In practice, there are a variety of retraining options - like using all available historical data, weighting newer observations, dropping outdated records, or requesting just a portion of labels from the newest window. In addition, naive retraining of the same model with new data might not be enough to adapt to an evolved concept.[14] A manually selected model architecture may perform better upon each retraining, which points to another limiting assumption of our experimental setup. So far we have discussed and experimented with approaches for dealing with concept drift when the ground truth of the newly available data instances isn't readily available. While we covered methods that are model agnostic and truly unsupervised, detecting concept drift in practice is not a straightforward task. As noted by the researchers of Learning under Concept Drift: A Review, handling concept drift is generally coupled with other machine learning problems. In this section, we cover a few of these overlapping issues along with some considerations when designing a drift detection strategy. It's impossible to design a machine learning system and know everything about the domain upfront. As we've seen, concept drift occurs by default, as a result of static models operating in dynamic environments. Therefore, deployed models will naturally have unintended consequences. For this reason, it's imperative that teams plan for uncertainty post-deployment and establish robust monitoring and detection processes , so as to understand when something has gone wrong and take corrective action. However, the act of monitoring a feature or metric just to "check the box" is not enough. Blindly optimizing and maintaining a poorly selected set of metrics will result in far from optimal outcomes. This is because metric optimization often leads to manipulation, gaming, and a focus on short-term quantities at the expense of longer-term concerns. 
When developing a post-production strategy, it's important to use a slate of metrics to gain a fuller picture of a model's true impact, combine metrics with qualitative accounts, and involve a range of stakeholders - including those who will be impacted downstream by the model's decisions.[15] Planning for and monitoring a model's impact on a wider set of concerns than just predictive performance adds an additional layer of complexity to the already difficult task of production machine learning - but it's a requirement, not an option. Up until now, all of the methods discussed for inferring drift have been in the context of binary classification. But how can these approaches be extended to other tasks, like multiclass classification and regression? At the core of our approaches, we are simply using a trained model to produce a distribution of uncertainty and then comparing that distribution between reference and detection windows. Therefore, we can apply this same idea to other tasks by reformulating our definition of uncertainty. In the binary classification task, uncertainty exists as the difference between the two class probabilities. In a multiclass classification task, uncertainty could be defined as the difference between the top two class probabilities. Similarly, for regression tasks, the notion of uncertainty could take the form of absolute error of each prediction. The common issue of class imbalance is exacerbated when it comes to drift detection. Class imbalance occurs when the proportion of data instances belonging to each class varies, causing certain classes to be underrepresented. It is usually the underrepresented classes in such situations that end up having higher misclassifications. Detecting drift between populations with imbalanced classes is complicated, and becomes more challenging when the data between windows cannot be stored due to memory issues. As such, approaches that cater to both concept drift and class imbalance in data streams are relatively less studied.[16] That said, CDS (Concept Drift with SMOTE (Synthetic Minority class Oversampling Technique)) is one of the more recent batch-based incremental learning algorithms that strategically uses the minority class data to tackle this problem.[17] Another related problem that is largely underexplored is dealing with multi-label classification, where a particular data instance could be associated with one or more labels. For example, a news article may have overlapping classes like "politics," "environment," and "energy." Multi-label data streams contain independent relationships for each label - where each concept is likely to have its own drift pattern that may drift asynchronously from its peers. In addition, label proportions may not be consistent across detection windows. To address these challenges, a notable approach[18] associates each label with two fixed size instance-windows, one for positive examples and the other for negative examples for the training data. The size of the positive window is a user-specified parameter, and it should be large enough to learn an accurate model, but small enough to reduce the probability of drift within the window. The number of negative examples in the negative window is determined based on the ratio of the number of positive examples to another user-specified parameter: distribution ratio. This parameter plays the role of balancing the distribution of positive and negative examples in the union of the two windows and ranges typically from 0.3 to 0.7. 
This approach effectively oversamples the positive examples and undersamples the negative examples for each label. The authors further build and use k-nearest neighbor (k-NN) classifiers to determine the label of an unlabeled test instance. Normally, a k-NN classifier assigns the positive class when the estimated probability of belonging to it is >= 0.5. This default threshold is a poor choice for imbalanced classes, so the authors instead propose a batch-specific thresholding approach. Active learning is a set of machine learning techniques that reduces the number of labeled examples required to train a model. In settings where labeled examples are available only initially or are scarce, active learning approaches utilize these labels to build an initial model, and then use this model to request labels (from a human) for the data points that the model finds hard to predict on. A scenario where the unlabeled data stream is drifting poses additional challenges. For instance, active learning strategies that request labels for the most uncertain instances typically concentrate around the decision boundary. Changes that occur further from the boundary may be missed, and models may fail to adapt. Some solutions to effectively tackle such challenges include learning strategies that are guided by drift detection to save labeling costs for difficult and evolving instances.[19][20] Semi-supervised and transductive learning techniques leverage both labeled and unlabeled examples to learn more generalized models when limited labeled data is available for training. Because these techniques naturally learn from unlabeled data, it may be assumed that models of this type can easily track drifting concepts. That is not the case, however, because with concept drift, training data and test data are generated from different underlying distributions. Due to this unique learning paradigm, there have been many new developments specific to semi-supervised and transductive learning in the presence of concept drift. For instance, the weight estimation algorithm (an ensemble-based classifier approach[21]) uses unlabeled test data along with a set of mixture models to adjust classifier voting weights. The approach helps detect gradual or incremental drifts. There is also the COMPOSE (COMPacted Object Sample Extraction) approach, which can handle multi-class data, including the scenario of new classes or new subpopulations for gradual drift detection.[22] Data in big data streaming environments is often generated at a fast rate in large quantities, and is highly volatile - a scenario ripe for drifting concepts. Due to the high throughput nature, it may not always be feasible to capture, store, and process all the data. This complication has led to the development of scalable and parallel algorithmic implementations that need only one pass through the data, and thus train and adapt to concept drift in real-time scenarios. For instance, the Online MapReduce Drift Detection Method (OMR-DDM)[23] detects drift using the error rate of a collection of classifiers executed concurrently. Approaches like Micro-Cluster Nearest Neighbor (MC-NN)[24] do not need all data to reside in memory, process observations incrementally, and adapt to concept drift by monitoring classification error.
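Tying together the task-specific notions of uncertainty discussed earlier (the binary margin, the gap between the top two class probabilities for multiclass, and absolute error for regression), a minimal sketch might look like this; the exact scaling of the scores is an illustrative choice, and the margin-based detectors above can then operate on these scores unchanged.

```python
# Illustrative reformulations of "uncertainty" per task type; NumPy only.
import numpy as np

def binary_uncertainty(proba):
    """Binary classification: closeness of the positive-class probability to 0.5."""
    return 1.0 - 2.0 * np.abs(proba[:, 1] - 0.5)

def multiclass_uncertainty(proba):
    """Multiclass: one minus the gap between the top two class probabilities."""
    top2 = np.sort(proba, axis=1)[:, -2:]
    return 1.0 - (top2[:, 1] - top2[:, 0])

def regression_uncertainty(y_pred, y_true):
    """Regression: absolute error of each prediction (requires observed targets)."""
    return np.abs(np.asarray(y_pred) - np.asarray(y_true))
```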
As we've learned, maintaining a static representation of an ever-changing environment is challenging, and requires diligent performance monitoring to signal when a machine learning model is no longer suited for its original task. This issue becomes even more difficult when the cost or availability of ground truth labels make performance-based drift detection methods infeasible - which is often the case in real world applications. In this scenario, teams must monitor and detect changes purely from independent variables as a means to infer concept drift. Unfortunately, monitoring changes in input distributions produces many false positive detections, because not all changes in the feature space of a population actually correspond to a meaningful drift in relation to the target variable. In this report, we presented four ways to infer concept drift in an unsupervised manner, with the goal of reducing false positive drift detections. We reported experimental results comparing and contrasting the nuances of each method, and conclude that the best approach for detecting drift without labels will depend on your specific application's tolerance for error. We hope this report has brought to light a few practical challenges associated with production machine learning, and we look forward to continued research in this space! On the Reliable Detection of Concept Drift from Streaming Unlabeled Data ↩︎ Domain adaptation ↩︎ Learning under Concept Drift: A Review ↩︎ A Survey on Concept Drift Adaptation ↩︎ An overview of unsupervised drift detection methods ↩︎ Fast Unsupervised Online Drift Detection Using Incremental Kolmogorov-Smirnov Test ↩︎ Optimal kernel choice for large-scale two-sample tests ↩︎ Failing Loudly: An Empirical Study of Methods for Detecting Dataset Shift ↩︎ Machine Learning Monitoring, Part 5: Why You Should Care About Data and Concept Drift ↩︎ Reliance on Metrics is a Fundamental Challenge for AI ↩︎ Incremental learning imbalanced data streams with concept drift: The dynamic updated ensemble algorithm ↩︎ Incremental Learning of Concept Drift from Streaming Imbalanced Data ↩︎ Dealing with Concept Drift and Class Imbalance in Multi-Label Stream Classification ↩︎ Active Learning With Drifting Streaming Data ↩︎ Combining active learning with concept drift detection for data stream mining ↩︎ Semi-supervised Learning in Nonstationary Environments ↩︎ COMPOSE: A Semisupervised Learning Framework for Initially Labeled Nonstationary Streaming Data ↩︎ Parallel Concept Drift Detection with Online Map-Reduce ↩︎ Scalable real-time classification of data streams with concept drift ↩︎
Lower bounds for the total stopping time of $3x + 1$ iterates
Authors: David Applegate and Jeffrey C. Lagarias
Journal: Math. Comp. 72 (2003), 1035-1049
MSC (2000): Primary 11B83; Secondary 11Y16, 26A18, 37A45
DOI: https://doi.org/10.1090/S0025-5718-02-01425-4
Published electronically: June 6, 2002
MathSciNet review: 1954983
Abstract: The total stopping time $\sigma_{\infty}(n)$ of a positive integer $n$ is the minimal number of iterates of the $3x+1$ function needed to reach the value $1$, and is $+\infty$ if no iterate of $n$ reaches $1$. It is shown that there are infinitely many positive integers $n$ having a finite total stopping time $\sigma_{\infty}(n)$ such that $\sigma_{\infty}(n) > 6.14316 \log n$. The proof involves a search of $3x+1$ trees to depth 60. A heuristic argument suggests that for any constant $\gamma < \gamma_{BP} \approx 41.677647$, a search of all $3x+1$ trees to sufficient depth could produce a proof that there are infinitely many $n$ such that $\sigma_{\infty}(n) > \gamma \log n$. It would require a very large computation to search $3x+1$ trees to a sufficient depth to produce a proof that the expected behavior of a "random" $3x+1$ iterate, which is $\gamma = \frac{2}{\log 4/3} \approx 6.95212$, occurs infinitely often.
Author affiliations: David Applegate, AT&T Laboratories, Florham Park, New Jersey 07932-0971; Jeffrey C. Lagarias (MR Author ID: 109250)
Received by editor(s): February 6, 2001; received in revised form: June 7, 2001
Article copyright: © Copyright 2002 American Mathematical Society
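As a small illustration (not taken from the paper), the total stopping time can be computed directly and compared against the constants quoted in the abstract, using the convention T(x) = x/2 for even x and (3x+1)/2 for odd x; the sample values of n below are arbitrary.

```python
# Illustrative sketch only: total stopping time sigma_inf(n) of the 3x+1 function.
import math

def total_stopping_time(n, max_iter=10**6):
    """Iterates of T(x) = x/2 (x even), (3x+1)/2 (x odd) needed to reach 1."""
    steps = 0
    while n != 1 and steps < max_iter:
        n = n // 2 if n % 2 == 0 else (3 * n + 1) // 2
        steps += 1
    return steps if n == 1 else float("inf")  # no non-terminating n is known

gamma_random = 2 / math.log(4 / 3)  # ~ 6.95212, expected behaviour of a "random" iterate
for n in (27, 703, 26623):
    s = total_stopping_time(n)
    print(n, s, round(s / math.log(n), 3), round(gamma_random, 5))
```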
Identification of an early diagnostic biomarker of lung adenocarcinoma based on co-expression similarity and construction of a diagnostic model Zhirui Fan2 na1, Wenhua Xue1 na1, Lifeng Li1,2 na1, Chaoqi Zhang2, Jingli Lu1, Yunkai Zhai3,4, Zhenhe Suo2 & Jie Zhao1,3,4 The purpose of this study was to achieve early and accurate diagnosis of lung cancer and long-term monitoring of the therapeutic response. We downloaded GSE20189 from GEO database as analysis data. We also downloaded human lung adenocarcinoma RNA-seq transcriptome expression data from the TCGA database as validation data. Finally, the expression of all of the genes underwent z test normalization. We used ANOVA to identify differentially expressed genes specific to each stage, as well as the intersection between them. Two methods, correlation analysis and co-expression network analysis, were used to compare the expression patterns and topological properties of each stage. Using the functional quantification algorithm, we evaluated the functional level of each significantly enriched biological function under different stages. A machine-learning algorithm was used to screen out significant functions as features and to establish an early diagnosis model. Finally, survival analysis was used to verify the correlation between the outcome and the biomarkers that we found. We screened 12 significant biomarkers that could distinguish lung cancer patients with diverse risks. Patients carrying variations in these 12 genes also presented a poor outcome in terms of survival status compared with patients without variations. We propose a new molecular-based noninvasive detection method. According to the expression of the stage-specific gene set in the peripheral blood of patients with lung cancer, the difference in the functional level is quantified to realize the early diagnosis and prediction of lung cancer. Among patients with non-small cell lung cancer(NSCLC), which accounts for approximately 85% of all lung cancer cases, nearly 50% have lung adenocarcinoma, which is the most common lung cancer [1]. Lung adenocarcinoma usually begins at the outer part of the lung tissue. Early lung cancer does not yield significant clinical manifestations; only in the late stage will there be a gradual emergence of chronic coughs, bloody sputum, and other symptoms. In that case, some early symptoms (such as fatigue, shortness of breath, or upper back and chest pain) are likely to be overlooked [2]. The clinical diagnosis of lung cancer is mainly dependent on an X-ray; lung cancer lesions often appear as abnormal shadows on X-rays. However, in up to 25% of lung cancer cases, the chest X-ray does not reveal any abnormal lesions and returns a perfect "normal" diagnosis. If you still suspect cancer, you can also use other more sensitive diagnostic methods, including computed tomography (CT) or MRI scans [3, 4]. According to the results, the doctor may wish to obtain a lung tissue sample by using a puncture to confirm. However, there are many controversies associated with this invasive detection method. First, nobody can guarantee that puncture sampling will always obtain tumour cells, and the invasion of the sampling process has a potential risk of cancer metastasis. Therefore, a new blood detection method called liquid biopsy has emerged, and it can accurately detect the expression of specific genes in lung adenocarcinoma under noninvasive conditions. 
According to the appearance of particular genes in the peripheral blood of patients, this new detection method will achieve the early diagnosis of lung cancer with 91% accuracy and long-term monitoring of therapeutic response [5]. Beyond that, this gene detection method has a high sensitivity and specificity compared with traditional detection methods while avoiding the risk of cancer cell proliferation due to invasive exposure. Once the doctor clinically confirms that a patient has lung cancer, the doctor will classify the result into different stages based on the progress of lung cancer, such as whether it has spread, and if any other articles may be involved [6]. Multi-staging helps the direct treatment in a more appropriate manner, neither under treating a malignancy or over treating it and causing more harm than good. According to the malignant degree of carcinoma, there will be four stages: stage I, stage II, stage III and stage IV [7]. Stage I: the cancer is localized and has not spread to any lymph nodes. Stage 2: the cancer has spread to the lymph nodes, the lining of the lungs, or the significant passageways of the lungs. Stage 3: the cancer has spread to nearby tissue. Stage 4: the cancer has spread (metastasized) to farther reaches of the body. Lung adenocarcinoma is classified by the qualitative analysis of cancer tissue after diagnosis, which means that in the diagnosis stage, there is no distinction between the stages of lung adenocarcinoma [8]. This process also leads to many lung cancer patients not knowing whether they are probably in an advanced stage. By combining the genetic analyses at the cellular and functional level, experimenters identify specific genes and mutations in the four stages. These genes may be the markers of different stages of lung cancer or the target for personalized treatment. Additionally, the variability function explains the biological difference between the various stages and the malignant mechanism of the fatal development of cancer to a certain extent. In conclusion, the traditional method of finding disease markers is mostly dependent on the expression level of single genes. The assumption is that each gene is relatively independent, so the doctor will use the selected gene's characteristics to design diagnostic kits, make the combination of chips or sequencing, and predict the risk of illness based on gene expression [9]. However, in organisms, the gene is not relatively independent, and there is a functional interaction. By considering the patient's molecular-level and functional-level changes from a higher perspective, this study converts gene expression information into a functional imbalance. On the one hand, this method overcomes the instability of cross-sample single gene markers. On the other hand, it prompts the underlying pathogenesis from the functional level, while the gene with the specific function we collected is likely to be a clinically significant therapeutic target or diagnostic marker. More importantly, our analysis found 12 genes that not only can predict lung cancer patients in the early phase but can also present dynamic changes during cancer development. Moreover, patients carrying variations in these genes tend to have a poor outcome in terms of survival status compared with patients without variations. Data remodelling and grouping The RNA-seq data GSE20189 were downloaded from the GEO database [10]. We first split the lung cancer data into groups based on the clinical data. 
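A minimal, illustrative sketch of this download-and-group step is shown below, assuming the GSE20189 expression matrix and clinical annotations have already been exported to CSV; the file names and column names used here are hypothetical.

```python
# Hypothetical file and column names; only the grouping logic is of interest.
import pandas as pd

expr = pd.read_csv("GSE20189_expression.csv", index_col=0)    # genes x samples
clinical = pd.read_csv("GSE20189_clinical.csv", index_col=0)  # one row per sample

control_ids = clinical.index[clinical["group"] == "control"]
early_ids = clinical.index[clinical["stage"] == "I"]
late_ids = clinical.index[clinical["stage"].isin(["II", "III", "IV"])]

groups = {"control": expr[control_ids], "early": expr[early_ids], "late": expr[late_ids]}
print({name: frame.shape for name, frame in groups.items()})
```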
The original data consisted of 22,277 genes and 162 samples: 81 controls, 28 patients in the early phase (stage I) and 53 patients in the late phase (stage II–stage IV). We first performed data pre-processing and standardization, removing genes and samples with a missing-value ratio higher than 10%. Missing values in the remaining samples were replaced with the mean values of the corresponding genes in the other samples. We calculated the mean value and standard deviation of each gene in the control group. Then, we applied Z-score normalization [11] to all samples, after which the expression of each gene in the control group follows the standard normal distribution, with a mean of 0 and a variance of 1. Therefore, for gene i, if there is no difference between the different stages of the lung adenocarcinoma samples, the case groups should also follow a normal distribution. Otherwise, we consider gene i in a specific stage to be significantly different from the control group; such a differentially expressed gene is likely to act as a crucial biomarker for early diagnosis.
Stage grading-specific gene extraction
Most genes are evolutionarily conserved and usually do not show an apparent change in expression under disease stimulation. If a gene is significantly associated with lung adenocarcinoma, then differential expression should be observed in at least one stage group. Therefore, we used the coefficient of variation to assess the fluctuation of genes across the lung adenocarcinoma samples. The coefficient of variation can be calculated by Eq. 1. $$CV = \frac{sd}{mean}$$ Here, mean is the mean expression of the gene in all lung adenocarcinoma samples, and sd is the corresponding standard deviation. Based on the distribution of the coefficients of variation over all genes, we retained only the genes whose absolute CV ranks in the top 50% as genes that may be associated with lung adenocarcinoma. The remaining 50% of the genes fluctuate only slightly around 0 and are therefore assumed to have little impact on lung adenocarcinoma. To identify the genes specific to the early or late phase, we used the limma method [12] to evaluate the differences compared with the control group. The significance threshold of the P value was set to 0.05, and the cutoff of the absolute value of the fold change was set to 1. The significantly differentially expressed genes in the early phase were marked as Δ0. The differentially expressed genes related to the late phase were marked as Δ1. The intersection between the early and late phase specific genes was marked as Δ2. These genes can distinguish normal and cancer patients in the early phase and present dynamic changes with cancer development.
Co-expression correlation analysis
Since the interactions between genes change from one stage of lung adenocarcinoma to the next, two correlated genes tend to share a common biological effect in the non-disease state, appearing as mutual synergy or interaction. In the disease state, however, the function of the genes becomes abnormal, which may result in a change of correlation. Therefore, we examined the expression correlations within Δ0, Δ1 and Δ2 using the Pearson correlation coefficient [13] (greater than 0.5 is considered positively correlated, and below −0.5 negatively correlated).
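Continuing the sketch above, the pre-processing and filtering steps described in this section could look roughly as follows. The 10% missing-value cutoff, top-50% CV filter, and 0.5 correlation thresholds follow the text; the implementation details are our assumptions, and the limma differential-expression step is not shown (it would normally be run in R).

```python
# Sketch of pre-processing: imputation, control-based z-scores, CV filter, correlations.
import numpy as np
import pandas as pd

def preprocess(expr, control_ids, case_ids, max_missing=0.10):
    # Drop genes and samples with more than 10% missing values, impute the rest with gene means
    expr = expr.loc[expr.isna().mean(axis=1) <= max_missing,
                    expr.isna().mean(axis=0) <= max_missing]
    expr = expr.T.fillna(expr.mean(axis=1)).T
    control_ids = [s for s in control_ids if s in expr.columns]
    case_ids = [s for s in case_ids if s in expr.columns]
    # Z-score each gene with the control-group mean and standard deviation (Z ~ N(0,1) in controls)
    mu, sd = expr[control_ids].mean(axis=1), expr[control_ids].std(axis=1)
    z = expr.sub(mu, axis=0).div(sd, axis=0)
    # Keep genes whose |CV| across the cancer samples ranks in the top 50%
    cases = z[case_ids]
    cv = (cases.std(axis=1) / cases.mean(axis=1)).abs()
    return z.loc[cv >= cv.median()]

def correlated_pairs(z_genes, pos=0.5, neg=-0.5):
    """Pearson correlations between genes; keep pairs above 0.5 or below -0.5."""
    corr = z_genes.T.corr(method="pearson")
    return corr.where((corr > pos) | (corr < neg))
```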
Unsupervised clustering analysis
We use the correlation analysis to construct a correlation coefficient matrix between genes and apply hierarchical clustering [14] to achieve unsupervised clustering of samples and genes. From the results of the unsupervised clustering, we observe and analyse how well cancer patients are distinguished from controls at the gene level. On the one hand, we can verify the ability of the Δ2 genes to identify the specific stage of a lung adenocarcinoma sample; on the other hand, we can observe the gene expression patterns at different stages, such as genes that are highly expressed in stage i but lowly expressed in stage j. Such changes in expression pattern also indicate that disease progression affects gene regulation, which leads to changes at the functional level. The clustering results are visualized using heatmaps [15].
Specific and non-specific co-expression network analysis
Because the correlations between gene expression differ across disease states, the state-specific networks should also show significant differences in network characteristics from a systems perspective. We constructed specific networks for the control, early and late phases based on the co-expression relationships between genes from Δ0 and Δ1. The non-specific network is constructed using genes from Δ2. If two genes have a co-expression relationship, an edge is placed between them. Because the co-expression networks are built from gene co-expression in different disease states, they show significantly different topological properties, suggesting that the signal transmission efficiency of the system network differs markedly between malignancy grades. Hence, we analyse four topological properties of network connectivity [16], namely the ASLP (average shortest path), closeness centrality, clustering coefficient, and degree distribution. If edges are missing from the network, that is, co-expression relationships between genes disappear, the average shortest path of the network increases while the degree distribution, clustering coefficient, and closeness centrality decrease, so the signal transmission performance of the network declines as well. The specific network of the control status reflects intrinsic relationships among genes. The specific network of the early phase reflects the changes in the initial stage of cancer. The specific network of the late phase reflects the larger number of variations occurring among genes. On the other hand, as the overlapping genes (Δ2) present deviations in both the early and late phase, these genes can be regarded as biomarkers associated with cancer progression. Therefore, we also construct a non-specific network using Δ2. Finally, we use the degree distribution of gene nodes in the network to evaluate the importance of genes; the higher the degree of a gene, the more adjacent genes can be affected when that gene is abnormal. We convert the degree of each gene into a weight between 0 and 1 through the sigmoid function [17]; the weight of a gene not in the network defaults to the minimum. $$\text{sigmoid}(degree) = \frac{1}{1 + e^{-degree}}$$
Functional pathway enrichment
To further analyse, at the functional level, the biological functions involving the specific genes in the different disease states, we apply a functional enrichment analysis to the Δ2 genes.
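Before moving on to enrichment, here is a minimal sketch of the network construction, the four topological measures, and the sigmoid-based degree weighting described above, assuming networkx. The 0.5 threshold mirrors the text; the handling of disconnected components and of genes absent from the network is our assumption.

```python
# Sketch of co-expression network construction, topology measures, and degree weights.
import numpy as np
import networkx as nx

def build_network(corr, threshold=0.5):
    """corr: gene-by-gene Pearson correlation matrix (NaN where not significant)."""
    g = nx.Graph()
    genes = list(corr.index)
    g.add_nodes_from(genes)
    for i, gi in enumerate(genes):
        for gj in genes[i + 1:]:
            r = corr.loc[gi, gj]
            if np.isfinite(r) and abs(r) > threshold:
                g.add_edge(gi, gj, weight=float(r), sign="positive" if r > 0 else "negative")
    return g

def topology_summary(g):
    # Average shortest path is computed on the largest connected component
    giant = g.subgraph(max(nx.connected_components(g), key=len))
    return {
        "average_shortest_path": nx.average_shortest_path_length(giant),
        "degree": dict(g.degree()),
        "closeness_centrality": nx.closeness_centrality(g),
        "clustering_coefficient": nx.clustering(g),
    }

def degree_weights(g, all_genes):
    """sigmoid(degree) in (0, 1); genes absent from the network get the minimum observed weight."""
    w = {gene: 1.0 / (1.0 + np.exp(-deg)) for gene, deg in g.degree()}
    minimum = min(w.values()) if w else 0.5
    return {gene: w.get(gene, minimum) for gene in all_genes}
```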
Using Fisher's exact test [18], we select the significant pathways as the functions regulated by these specific genes. Since these genes show dissimilar expression patterns in the different stages, and genes with co-expression correlation tend to participate in the same biological processes, we deduce that these pathways better indicate the ability of the genes to distinguish stages. Beyond that, the pathways with abnormal functional levels in different stages can be used to explain the mechanisms of disease progression, and they may contain potential drug targets or diagnostic markers.
Identification of significantly deviated pathways
Suppose the intersection genes yield N pathways through functional enrichment. We first identify the differentially expressed genes in each pathway's gene set and then use the inverse cumulative distribution function to convert the ANOVA P value [19] into a Z value, multiplied by the weight of the gene. The differentially expressed genes in pathway P are then used to calculate the deviation score A(P) of the pathway [20], as follows: $$\begin{aligned} A(p_{j}) &= \max\left\{ \frac{1}{\sqrt{t}}\sum_{i=1}^{t} Z'_{j_{i}} \;\middle|\; 1 \le t \le k \right\} \\ A_{\text{corrected}}(p_{j}) &= \frac{A(p_{j}) - \mu_{k}}{\sigma_{k}} \end{aligned}$$ In the calculation, we first sort the differentially expressed genes from large to small, so the larger the Z value, the more significantly the gene is differentially expressed. Suppose there are k differentially expressed genes in pathway P; then the top t genes (t = 1, 2, …, k) are successively used to calculate the scaled Z-score sum. When this statistic reaches its maximum for some t, the corresponding t genes contribute most to pathway P, and that maximum is taken as the deviation score A(P) of pathway P in the disease state. To eliminate the influence of the size of the pathway itself, we calibrate the A(P) score using a random permutation correction. For pathway P with deviation score A(P), we recalculate a new A(P)' from randomly selected sets of k genes. After 10,000 random cycles, we calculate the mean $\mu_k$ and standard deviation $\sigma_k$ of the A(P)' background distribution. The calibrated $A_{\text{corrected}}$ is obtained using Eq. 3. Some issues must be considered when calculating the deviation score of pathway P. First, the number of differentially expressed genes in a pathway can be large, but not all of them have a significant impact on the pathway; for example, some genes lie downstream in the pathway, and their differential expression is likely due to abnormal upstream signals. In contrast, certain upstream genes, important enzymes, transporters, and other genes with important regulatory effects may have a greater impact on the pathway. Second, the number of genes in each pathway differs considerably; to eliminate the impact of the size of a given pathway, we use the random permutation process described above.
Early screening gene filtering using RFE
Significant pathways identified from the random permutation analysis present functional differences in the early and late phases of lung cancer development. This further suggests that the genes regulating these pathways have important roles in lung cancer. On the one hand, lung adenocarcinoma-related genes exhibit differences in the level of expression under the disease signal stimulation.
On the other hand, the co-expression relation changes between the genes and thus affects the levels of downstream functional pathways. These genes involved in the regulation may be potential targets for lung cancer treatment or new clinical monitoring and diagnostic indicators. To accurately identify the optimal combination of gene features, we used the recursive feature elimination (RFE) algorithm [21] to filter genes. Finally, we screened out the risk-related genes of lung cancer to train the diagnostic model. Establishing an early diagnostic model Actually, as the SVM performs better in the binary classification, we combined stage I and stage II as the early benign group, while stage III and stage IV as the advanced malignant group. To distinguish patients belonging to benign and malignant groups using risk-related genes as features, we utilized the supervised classifier support vector machine (SVM) to train a diagnostic model [22]. Default parameters were used to initialize the model, including the RBF nonlinear kernel function, gamma of 0 and so on. A grid search algorithm was used to optimize all parameters [23]. Cross validation was used to calculate true positive rates and false positive rates, and an ROC curve was drawn to estimate the model's performance. Survival analysis of independent validation data To further validate the early risk genes, our screen can not only achieve the early diagnosis of lung cancer but also estimate the prognosis of patients to a certain extent, to provide the basis for the individual treatment strategy. We downloaded the lung adenocarcinoma samples from the TCGA database as independent validation data. We used a Cox regression analysis of the lung adenocarcinoma sample's overall survival data based on the risk genes. The P value was calculated to estimate the correlation between the survival time and risk gene variance. Phase-specific gene recognition We used lung adenocarcinoma samples of the early phase and late phase late to compare with healthy control groups, respectively. According to the result of the limma algorithm, 866 early phase-related differentially expressed genes were identified, including 136 up-regulated genes and 730 down-regulated genes; 913 late phase-related differentially expressed genes were identified, of which 419 were down-regulated genes and 494 were up-regulated genes. The distribution of the two groups of differentially expressed genes at the P value and logFC levels is shown in Fig. 1a. The distribution of differentially expressed genes in the early and late phase. a The horizontal axis is logFC, and the vertical axis is the P-value after the negative logarithm conversion. The left panel shows the distribution of early-phase lung cancer related genes, and the right panel shows the distribution of late-phase lung cancer related genes. Up- and down-regulated genes are marked in red and green, respectively. b Up/down-regulated genes in the early and late phases are marked with four colour markers. c Each colour block corresponds to the correlation coefficient of two genes: the red represents a positive correlation, and the blue represents a negative correlation. The heatmap matrix from left to right corresponds to the control and the early and late phases In Fig. 1a, the down-regulated genes are labelled with green, and the up-regulated genes are labelled with red. 
It can be observed from the figure that the down-regulated genes are dominant in the early stage, whereas the up-regulated genes are dominant in the late phase as the cancer progresses. This suggests that an increasing number of genes are up-regulated during the malignant process of lung adenocarcinoma. The intersection genes of the early phase and late phase are shown in Fig. 1b. We found 108 shared down-regulated genes in early and late lung cancer-related genes as well as 90 shared up-regulated genes. This indicates that the expression levels of these 198 genes in early and late lung cancer patients are both different. In the process of the malignant progression of lung adenocarcinoma, the 198 genes showed a dynamic change. On the one hand, this dynamic pattern of genes associated with the disease can be used as a clinical indicator to achieve an early diagnosis. On the other hand, the function of these genes is also likely to regulate the progress of cancer. If the intrinsic correlation of a pair of genes is absent after entering a disease stage, this indicates one of these genes is associated with lung cancer. In contrast, if a pair of non-related genes, appears to be correlated in disease status, then it suggests that this two genes are likely initially divided into two parallel pathways. In the process of lung cancer progression, one of the pathways is dysfunctional; therefore, another pathway is activated as a compensatory pathway, thus reflecting the new gene co-expression relationship [24]. Other than these two groups of co-expression patterns, the remaining groups in any stage in the stable co-expression are those genes that always maintain the relationship along with disease progression. Therefore, it is essential to identify the co-expressed genes of these three different patterns, which is useful for explaining the specificity of the lung adenocarcinoma stage. In cancer studies, co-expression correlations between genes are more important because the correlation between genes varies dynamically with cancer progression. This dynamic change, on the one hand, provides a basis for the pathogenesis of cancer progression and, on the other hand, is also an important feature of dynamic monitoring of patient status. We use the Pearson algorithm to internally compute the correlation coefficient between any two genes for each stage-specific gene set. Table 1 calculates the number of genes in the phase-specific gene sets that have a remarkable correlation. Most of the relevant genes are positively correlated, and a few genes are negatively correlated. At the same time, we found that there are 3015 gene pairs in the three phases' intersections across the normal and cancer status. These gene pairs are stable in the expression of relevance within the three phases; 3015 gene pairs involve 164 genes in total, and therefore, we investigate the 164 genes in the three phases of the co-expression state, as shown in Fig. 1c. Table 1 Correlated gene pair numbers in three phases We observe that the 164 genes of the intersection reflect apparent differences in the four stages. Most genes are still positively correlated, but the degree and type of correlation between any two genes are not consistent in the different stages. This result again suggests that the covariant correlation between the two genes changes with the progression of lung adenocarcinoma. 
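For illustration, the correlation heatmaps above and the hierarchical clustering applied to these 164 genes (described in the Methods) could be produced roughly as follows, assuming seaborn and SciPy; the linkage method, distance metrics, and colours are assumptions, not specified in the text.

```python
# Sketch of correlation-based hierarchical clustering with a clustered heatmap.
import scipy.cluster.hierarchy as sch
from scipy.spatial.distance import pdist
import seaborn as sns

def cluster_heatmap(z_overlap, sample_labels):
    """z_overlap: 164 overlapping genes x samples; sample_labels: control/early/late per sample."""
    gene_linkage = sch.linkage(pdist(z_overlap.values, metric="correlation"), method="average")
    sample_linkage = sch.linkage(pdist(z_overlap.values.T, metric="euclidean"), method="average")
    col_colors = sample_labels.map({"control": "blue", "early": "green", "late": "red"})
    return sns.clustermap(z_overlap, row_linkage=gene_linkage, col_linkage=sample_linkage,
                          col_colors=col_colors, cmap="RdYlGn_r", center=0)
```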
Since the 164 specific genes shared by the early and late phases are expressed differently during the cancer process, it is an important feature to monitor the progression of the disease. Therefore, we take advantage of the 164 genes of the intersection to perform a cluster analysis on the three groups. By using the Pearson correlation coefficient, we construct a correlation matrix, and for the clustering method, we use hierarchical clustering. Unsupervised clustering analysis is performed on all the samples to examine the discriminant effects of these genes on different phase samples. The results are shown in Fig. 2a. The expression and function property of the genes. a The horizontal axis represents the sample; the vertical axis represents the genes. We use three colours to mark the different levels of classification of lung adenocarcinoma: blue for the control group, green for the early phase, and red for the late phase. The red and green blocks represent the expression pattern of the gene; green represents low expression, and red represents high expression. b shows the analysis of four topological network properties, including the average shortest path, node degree distribution, closeness centrality, and clustering coefficient. Different groups are marked in different colours. c A–C corresponds to the pathway enrichment results of three phases. The horizontal axis is the pathway term, and the vertical axis is the P-value after the negative logarithmic transformation. We label the number of genes in the pathway by dark blue and light blue markers. The brighter the colour, the more genes that are enriched in the pathway; the darker the colour, the fewer genes that are enriched We can intuitively observe that almost all of the cancer samples are clustered together, and the control samples are clustered together. Therefore, we can conclude that at the 164 genes' molecular level, the control and cancer samples have remarkable differences. However, early- and late-phase cancer samples are mixed together and are not easy to distinguish. It is also found that the expression pattern of the gene changes significantly from the normal state to the tumour state. The genes that are highly expressed in the control group have a low expression level in lung cancer samples, and vice versa. At the same time, we also found that a certain proportion of lung adenocarcinoma samples and other cancer samples are not clustered together but are mixed in the control sample. This suggests that some samples, although defined as being from lung cancer patients, have a molecular level that is close to the healthy control sample and that these samples are likely to have a better prognosis. Specific and non-specific co-expression of network analysis We continue to construct specific and non-specific networks in different phases. In networks, the gene is a node, and the correlation is an edge; if the two genes are positively correlated, then the edge is red, and if the correlation is negative, the edge becomes green. The Cytoscape software realizes the network construction, and the network topology is analysed by using the network analysis plug-in. The edge between nodes represents the correlation coefficient; the stronger the correlation is, the thicker the edge. It is clear that in each phase-specific network, some genes assemble into clusters. There is a significant co-expression correlation between the genes within each cluster, indicating that the genes within the cluster may be functionally consistent. 
We use the network analysis plug-into do the topology analysis for the four networks separately, and the results are shown in Fig. 2b. The average shortest path measures the average state of the shortest path of a node to the other nodes in the network. Therefore, the shorter the average shortest path is, the more convergent the network and the higher the signal transmission efficiency. The degree distribution is a measure of the number of adjacent nodes in the network. The higher the degree, the higher the number of adjacent nodes that can affect the gene and the higher the signal transmission efficiency. Closeness centrality reflects the proximity of a node to other nodes in the network; the closer to the centre, the stronger the network shrinkage is and the closer the distance is between genes. The clustering coefficient is a sub-module that represents the ability of adjacent nodes to form a complete graph in a graph, a high clustering coefficient, and a sub-module that may exist in the network. It can be seen from Fig. 2b that the changes in the early phase of lung adenocarcinoma in three specific networks are the most obvious, which is reflected in the average shortest path, the increase in the degree, the clustering coefficient and the closeness centrality. This series of network topological features suggests that in patients with early lung cancer signal stimulation, the patient's body produces a significant stress response to resist and compensate for abnormal molecular function. In the early stage of lung cancer, the specific network is obviously contracted, thus reducing the average shortest path, increasing the clustering coefficient and approaching the centre, and improving the performance of the network signal transmission as a whole. However, with the further progress of lung cancer, this stress response in the early phase disappears or is insufficient to compensate for functional abnormalities and reflected in a gradually decreased network performance. The average performance of a non-specific network is between the normal and disease states, and because of the greater number of nodes and edges in nonspecific networks, the degree distributions are relatively more discrete. Detailed network structures and genes information of specific or non-specific networks can be seen from Fig 3. The specific and nonspecific coexpression networks. The specific and non-specific networks of different phases, from a to d, corresponding to control, early-, and late-phase related genes and the overlapping gene set. The closer the node colour is to the blue, the higher degree the node has in the network; the closer the node colour is to red, and the lower degree the node has Functional pathway enrichment analysis We apply a functional enrichment analysis to each stage-specific gene set. For the analytical method, we adopt the Fisher method, and the significance threshold is P < 0.05. We obtain the significance (P) of the path involved in each stage and the number of genes enriched in the corresponding pathway. The results are shown in Fig. 2c. The three graphs in Fig. 2c correspond to early phase genes, late phase genes and their intersections. By observation, the significant functions involved in the early phase include primary immunodeficiency and RNA transport. The specific functions of the late phase concentrate in the T/B cell receptor signalling pathway, chemokine signalling pathway, NF-kappa B signalling pathway and primary immunodeficiency. 
Functions related to the intersection genes concentrate on primary immunodeficiency. The functional enrichment analysis suggests that the mechanisms of immune regulation change significantly during the progression of lung adenocarcinoma. The abnormal immune functions include innate immunity, T/B lymphocyte-regulated specific adaptive immunity, natural killer cell-regulated nonspecific immunity, and other infection- and inflammation-related functions. This further suggests that abnormalities of the immune system are essential causes of the progression of lung adenocarcinoma. Moreover, combining these results with the changes in network topology between the early and late phases of lung cancer, we speculate that the immune system plays an important role in the stress response during the early phase of the disease. Functions related to the initiation of innate and adaptive immunity become abnormal in the early phase, and the performance of the system network therefore shows a brief increase. With disease progression, however, the immune system becomes insufficient to compensate for the abnormal function. The functional level of the immune system is therefore an important factor in determining the risk of lung adenocarcinoma and an important indicator for its early diagnosis.
Functional pathway imbalance score
We use Eq. 3 to calculate the imbalance (variance) score for each enriched functional pathway. To investigate whether these functional abnormalities are significantly present in the different phases, we analyse the imbalance scores in each of the three groups of samples. Twelve pathways are significant in the ANOVA analysis, as shown in Table 2.
Table 2 The functional significance in ANOVA
These 12 pathways differ clearly among the three phases, with P-values below 0.05. To analyse the imbalance of each pathway across the three phases more intuitively, we use scatter plots to visualize the dynamic behaviour of the 12 pathways, as shown in Fig. 4.
Fig. 4 Pathway dynamics across the three phases. Red, green, and blue mark the normal, early-phase and late-phase groups, respectively. A non-parametric linear fit is used to show the trend of each pathway across the phases.
The functional levels in the normal phase are inconspicuous and fluctuate near 0, whereas from the early phase onwards they begin to fluctuate significantly. To further clarify the direction of the imbalance and the variation of each pathway across the three phases, we compare the score distributions of each pathway using boxplots, as shown in Fig. 5.
Fig. 5 Boxplots of the 12 pathway scores in the three phase groups, showing the median and confidence interval; the three groups are again marked in red, green, and blue.
It can be seen that in most of the pathways the scores of the three groups decrease progressively. This suggests that these functions are inhibited during the progression of lung cancer and that the functional level of the immune system is reduced by cancer invasion. The functional level of the actin cytoskeleton pathway rises in the early phase and then declines further as the cancer progresses, which may be caused by the stress response of the body under the stimulation of the cancer.
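As a hedged illustration of the two statistical steps just described (Fisher-based enrichment testing and a one-way ANOVA of pathway scores across the three groups), a minimal Python sketch might look as follows; the 2 x 2 counts and the score vectors are hypothetical, and Eq. 3 itself is not reproduced here.

```python
# Fisher exact test for pathway enrichment and one-way ANOVA of imbalance scores.
from scipy.stats import fisher_exact, f_oneway

# in gene set vs. background: genes in pathway / genes not in pathway (hypothetical counts)
table = [[12, 152],
         [300, 19000]]
odds_ratio, p_enrich = fisher_exact(table, alternative="greater")

# imbalance scores of one pathway in control / early / late samples (hypothetical values)
control = [0.01, -0.02, 0.00, 0.03]
early   = [0.35, 0.41, 0.28, 0.33]
late    = [-0.22, -0.30, -0.18, -0.25]
f_stat, p_anova = f_oneway(control, early, late)

print(p_enrich, p_anova)
```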
Functional abnormalities can therefore already be found in the early phase of lung cancer, and using the genes that regulate these functions as diagnostic features can enable the early diagnosis of lung cancer.
Early-phase diagnostic marker screening using the RFE algorithm
Compared with the normal state, variation in functional level occurs in the pathways enriched from the early- and late-phase specific genes. To achieve early diagnosis of lung cancer and identify the genetic markers that change significantly with its progression, we applied the recursive feature elimination (RFE) algorithm to the 198 genes. Twelve genes were finally screened as diagnostic markers. Figure 6a shows the RFE optimization process; the model reaches its highest accuracy when the number of feature genes is 12. We therefore used these 12 genes as the features to train the diagnostic model.
Fig. 6 Performance of the biomarkers. a The horizontal axis is the number of selected features and the vertical axis the corresponding accuracy. b The red curve is the initial model accuracy without optimization; the green curve is the accuracy of the model after feature selection and parameter optimization; the blue curve is the average accuracy calculated with fivefold cross-validation. c Survival analysis: the horizontal axis is survival time (months) and the vertical axis the percentage of surviving patients in the different groups.
Risk diagnostic model construction based on the early screening genes
We take the 12 genes as features and use a support vector machine (SVM) to construct the classification model. During initialization, all model parameters are set to their default values, and the initial accuracy of the model is tested on the training set. The parameters are then optimized with a grid search algorithm, which iterates to find the optimal combination of parameters. The final classification results are shown in Fig. 6b, which evaluates the classification efficiency of the model with ROC curves. In the cross-validation, the samples are randomized each time and four of the five folds are used for training. In the prediction graph, the horizontal axis is the false positive rate and the vertical axis the true positive rate; the final model achieves an average accuracy of 0.91. The fivefold cross-validation accuracy is similar to the accuracy of the model on the training set, indicating that the model does not exhibit significant over-fitting. With this trained predictive model, we can achieve early prediction of lung adenocarcinoma and distinguish benign from malignant progression. Notably, this not only provides new insight into the pathogenesis of lung adenocarcinoma but also suggests new diagnostic markers or therapeutic targets.
Survival analysis for validation
We downloaded lung adenocarcinoma samples from the TCGA database as independent data for survival analysis. The 12 marker genes we identified can predict the diagnosis of early lung cancer patients, suggesting that these genes are significantly altered by disease signal stimulation in the early stage of cancer. Functional analysis showed that these genes are involved in the innate and adaptive immune systems; we therefore speculate that, as lung cancer progresses, the functional level of the patient's immune system becomes correspondingly abnormal.
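A compact sketch of the marker-screening and model-building pipeline described above (RFE, an SVM tuned by grid search, and fivefold cross-validation) is given below; it is an assumed reconstruction with synthetic data, not the authors' actual code or parameter grid.

```python
# Recursive feature elimination, SVM grid search and fivefold cross-validation
# on a synthetic stand-in for the 198-gene expression matrix.
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=120, n_features=198, n_informative=12,
                           random_state=0)

# keep 12 features, mirroring the 12 diagnostic genes reported in the paper
selector = RFE(SVC(kernel="linear"), n_features_to_select=12).fit(X, y)
X_sel = selector.transform(X)

grid = GridSearchCV(SVC(), {"C": [0.1, 1, 10], "gamma": ["scale", 0.01]}, cv=5)
grid.fit(X_sel, y)

acc = cross_val_score(grid.best_estimator_, X_sel, y, cv=5).mean()
print(selector.get_support(indices=True), grid.best_params_, acc)
```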
To investigate whether these genes affect patient survival by interfering with the patients' immune level, we defined the samples carrying variations in these 12 genes as the high-risk group and the samples without significant variations as the low-risk group, and combined this grouping with Cox regression. The survival analysis results are shown in Fig. 6c. The significance (P) value was 0.03, indicating that the two groups of samples differ significantly in survival.
X-ray remains the primary tool for the clinical diagnosis of lung cancer, but for early lung cancers misdiagnosis is unavoidable because of its low sensitivity. Another way to confirm the diagnosis is tissue puncture, a procedure that may itself cause cancer cells to spread and metastasize. Early-stage lung adenocarcinoma is still restricted to local lesions, and ultrasound, X-ray and other detection methods may therefore lead to a missed diagnosis [25]. In the early stages of cancer development, however, molecular expression levels have already changed. This change occurs because the microenvironment of the cancerous lesion is usually accompanied by a local inflammatory response, which activates the body's stress response and immune response. In this initial stage, under the regulation of innate and adaptive immunity, multiple genes are differentially expressed to compensate for function and resist the cancer [26]. However, low sensitivity and specificity impede the broad clinical application of such biomarkers. Developing new methods and novel diagnostic biomarkers is therefore necessary for the early detection of lung adenocarcinoma. In the late phase of lung cancer, malignancy increases under the impact of decompensation, so more genes tend to show altered expression. At the same time, because of the different stages of pathogenesis and degrees of malignancy, the gene expression patterns differ remarkably [27]. Hence, precisely distinguishing early- and late-phase patients from their gene expression patterns, as well as from the level of functional abnormality, is a milestone for the early diagnosis of lung adenocarcinoma.
In this study, we identified early- and late-phase specific genes and the shared genes that are differentially expressed throughout the cancer process. The flow chart of the whole study is shown in Fig. 7. Combined with unsupervised clustering analysis, we found significant differences in the expression patterns of the 164 shared genes between early-phase lung cancer (stage I) and advanced lung cancer (stages II–IV). To further uncover the specificity of the two phases, we analysed each phase-specific gene set separately. We used the Pearson correlation coefficient to calculate the similarity between any two genes in each phase-specific gene set [13]. From these gene relationships we constructed the co-expression networks and analysed their topology. By comparing the specific characteristics of the co-expression networks, we found that the network structure changes as the degree of malignancy increases [28].
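The risk-group survival comparison described at the start of this passage could be set up roughly as follows; the lifelines package, the toy data frame, the binary risk label and the added log-rank comparison are assumptions made for illustration only, not the authors' procedure.

```python
# High- vs. low-risk survival comparison with a log-rank test and a Cox model.
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import logrank_test

df = pd.DataFrame({
    "months":    [10, 24, 36, 48, 60, 8, 15, 30, 44, 55],
    "event":     [1, 1, 0, 0, 0, 1, 1, 1, 0, 1],    # 1 = death observed
    "high_risk": [0, 0, 0, 0, 0, 1, 1, 1, 1, 1],    # 12-gene variation status
})

res = logrank_test(df.months[df.high_risk == 1], df.months[df.high_risk == 0],
                   event_observed_A=df.event[df.high_risk == 1],
                   event_observed_B=df.event[df.high_risk == 0])

cph = CoxPHFitter().fit(df, duration_col="months", event_col="event")
print(res.p_value)
cph.print_summary()
```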
Fig. 7 Workflow summary of the analysis process: the flow chart runs from the downloaded datasets and the different stages, through the methods used for each analysis, to the gene sets generated by one analysis and used as input to another.
As the degree of malignancy of lung cancer rises, the average shortest path of the system network gradually increases, suggesting that the network is becoming looser and that signal transmission efficiency is gradually falling [29]. Similarly, the closeness centrality decreases as the disease progresses, again indicating declining network performance [30]. These changes in network structure suggest that, during the progression of lung cancer, the interactions between genes change dynamically under the regulation of the immune system. This interaction under the immune response reflects the complex changes in the biological system from stress compensation to decompensation during tumour progression [31]. Some genes are correlated at an early stage, but in the course of tumour mutation the inherent correlation between them is lost, suggesting that at least one of them is a tumour-related gene whose expression level becomes abnormal through mutation [32]. Conversely, some genes are not correlated in the early phase but become correlated in the late phase, suggesting that these genes are likely to be functionally consistent but initially silent; only when the activated gene becomes abnormal does the silent gene assume its role, which results in a new correlation.
We analysed the functional enrichment of each phase-specific gene set, and the results showed that the mechanisms of immune regulation change dramatically during the progression of lung adenocarcinoma. The abnormal immune functions include innate immunity, T/B lymphocyte-regulated specific adaptive immunity, natural killer cell-regulated nonspecific immunity, and other infection- and inflammation-related functions. This further suggests that abnormalities of the immune system are essential causes of the progression of lung adenocarcinoma [33]. Compared with the normal phase, the early-phase-related and late-phase-related genes are differentially expressed. To further screen out markers with diagnostic efficacy from these genes, we applied machine learning algorithms and finally obtained 12 genes, which we used as features to construct a diagnostic model. The model's diagnostic accuracy for early lung cancer patients reaches 91%.
The innovation of this paper lies in identifying the phase-specific genes and functions of lung adenocarcinoma, which can serve as a guide for selecting personalized diagnostic markers or therapeutic targets. Its limitation is that we were not able to explore the dynamics of gene co-expression further. In fact, more than two genes can be involved in the same function, which means that using "gene clusters" instead of "gene pairs" might provide more information. However, to guarantee that a correlation within a gene cluster is not random, adequate annotation data are needed to support the gene correlations. Meanwhile, using functionality as the feature makes the explanation of the specific mechanisms driving the changes between stages more intuitive. If stabilized gene pairs can be screened out, the method remains more stable than detection methods that use single genes as features.
In this study, we demonstrated that the co-expression between genes, and its reconstruction, reflects the progression of lung adenocarcinoma and can serve as a feature to distinguish lung cancers with different levels of risk. Based on co-expression similarity and the construction of a diagnostic model, we identified an early diagnostic biomarker of lung adenocarcinoma. Overall, this may provide new insight into the diagnosis and prediction of lung cancer.
NSCLC: non-small cell lung cancer; TCGA: The Cancer Genome Atlas; RFE: recursive feature elimination; ROC: receiver operating characteristic curve.
Chansky K, Sculier JP, Crowley JJ, Giroux D, Van Meerbeeck J, Goldstraw P, International Staging C, Participating I. The International Association for the Study of Lung Cancer Staging Project: prognostic factors and pathologic TNM stage in surgically managed non-small cell lung cancer. J Thorac Oncol. 2009;4:792–801. Nakaya A, Kurata T, Yokoi T, Takeyasu Y, Niki M, Kibata K, Satsutani N, Torii Y, Katashiba Y, Ogata M, et al. Retrospective analysis of single-agent nab-paclitaxel in patients with platinum-resistant non-small cell lung cancer. Mol Clin Oncol. 2017;7:803–7. Cheng CY, Hsu CY, Tsai YH, Lin KL, Huang CE, Fan YH, Chin SC, Huang YC. Novel anterior brainstem magnetic resonance imaging findings in non-small cell lung cancer with leptomeningeal carcinomatosis. Front Neurol. 2017;8:579. International Early Lung Cancer Action Program I, Henschke CI, Yankelevitz DF, Libby DM, Pasmantier MW, Smith JP, Miettinen OS. Survival of patients with stage I lung cancer detected on CT screening. N Engl J Med. 2006;355:1763–71. Hughey JJ, Butte AJ. Robust meta-analysis of gene expression using the elastic net. Nucleic Acids Res. 2015;43:e79. Schiller JH, Gandara DR, Goss GD, Vokes EE. Non-small-cell lung cancer: then and now. J Clin Oncol. 2013;31:981–3. Zhang C, Kuang M, Li M, Feng L, Zhang K, Cheng S. SMC4, which is essentially involved in lung development, is associated with lung adenocarcinoma progression. Sci Rep. 2016;6:34508. Aramini B, Casali C, Stefani A, Bettelli S, Wagner S, Sangale Z, Hughes E, Lanchbury JS, Maiorana A, Morandi U. Prediction of distant recurrence in resected stage I and II lung adenocarcinoma. Lung Cancer. 2016;101:82–7. Wang Y, Wu W, Zhu M, Wang C, Shen W, Cheng Y, Geng L, Li Z, Zhang J, Dai J, et al. Integrating expression-related SNPs into genome-wide gene- and pathway-based analyses identified novel lung cancer susceptibility genes. Int J Cancer. 2018;142:1602–10. Barrett T, Troup DB, Wilhite SE, Ledoux P, Rudnev D, Evangelista C, Kim IF, Soboleva A, Tomashevsky M, Marshall KA, et al. NCBI GEO: archive for high-throughput functional genomic data. Nucleic Acids Res. 2009;37:D885–90. Farooq M, Sazonov E. Detection of chewing from piezoelectric film sensor signals using ensemble classifiers. Conf Proc IEEE Eng Med Biol Soc. 2016;2016:4929–32. He B, Yin J, Gong S, Gu J, Xiao J, Shi W, Ding W, He Y. Bioinformatics analysis of key genes and pathways for hepatocellular carcinoma transformed from cirrhosis. Medicine. 2017;96:e6938. Li L, Ma X, Pandey S, Fan A, Deng X, Cui D. Bibliometric analysis of journals in the field of endoscopic endonasal surgery for pituitary adenomas. J Craniofac Surg. 2018;29:e83–7. Singh AK, Rooge SB, Varshney A, Vasudevan M, Bhardwaj A, Venugopal SK, Trehanpati N, Kumar M, Geffers R, Kumar V, Sarin SK.
Global microRNA expression profiling in the liver biopsies of hepatitis B virus-infected patients suggests specific microRNA signatures for viral persistence and hepatocellular injury. Hepatology. 2018;67:1695–1709. Hummel M, Edelmann D, Kopp-Schneider A. Clustering of samples and variables with mixed-type data. PLoS ONE. 2017;12:e0188274. Bandyopadhyay S, Ray S, Mukhopadhyay A, Maulik U. A review of in silico approaches for analysis and prediction of HIV-1-human protein–protein interactions. Brief Bioinform. 2015;16:830–51. Lebron S, Yan G, Li J, Lu B, Liu C. A universal parameterized gradient-based method for photon beam field size determination. Med Phys. 2017;44:5627–37. Bultijnck R, Van de Caveye I, Rammant E, Everaert S, Lumen N, Decaestecker K, Fonteyne V, Deforche B, Ost P. Clinical pathway improves implementation of evidence-based strategies for the management of androgen deprivation therapy-induced side effects in men with prostate cancer. BJU Int. 2018;121:610–8. Haverkamp N, Beauducel A. Violation of the sphericity assumption and its effect on type-I error rates in repeated measures ANOVA and multi-level linear models (MLM). Front Psychol. 1841;2017:8. Nam H, Lee J, Lee D. Computational identification of altered metabolism using gene expression and metabolic pathways. Biotechnol Bioeng. 2009;103:835–43. Gholami B, Norton I, Tannenbaum AR, Agar NY. Recursive feature elimination for brain tumor classification using desorption electrospray ionization mass spectrometry imaging. Conf Proc IEEE Eng Med Biol Soc. 2012;2012:5258–61. Miura S, Takazawa J, Kobayashi Y, Fujie MG. Accuracy to detection timing for assisting repetitive facilitation exercise system using MRCP and SVM. Robotics Biomim. 2017;4:12. Kong X, Sun Y, Su R, Shi X. Real-time eutrophication status evaluation of coastal waters using support vector machine with grid search algorithm. Mar Pollut Bull. 2017;119:307–19. Wang X, Xu N, Hu S, Yang J, Gao Q, Xu S, Chen K, Ouyang P. d-1,2,4-Butanetriol production from renewable biomass with optimization of synthetic pathway in engineered Escherichia coli. Bioresour Technol. 2017;250:406–12. Hocker JR, Deb SJ, Li M, Lerner MR, Lightfoot SA, Quillet AA, Hanas RJ, Reinersman M, Thompson JL, Vu NT, et al. Serum monitoring and phenotype identification of stage I non-small cell lung cancer patients. Cancer Invest. 2017;35:573–85. Zhao Y, Varn FS, Cai G, Xiao F, Amos CI, Cheng C. A P53-deficiency gene signature predicts recurrence risk of patients with early stage lung adenocarcinoma. Cancer Epidemiol Biomarkers Prev. 2018;27:86–95. Subramanian J, Simon R. Gene expression-based prognostic signatures in lung cancer: ready for clinical use? J Natl Cancer Inst. 2010;102:464–74. Ostlund G, Lindskog M, Sonnhammer EL. Network-based Identification of novel cancer genes. Mol Cell Proteomics. 2010;9:648–55. Zhang W, Zhang Q, Zhang M, Zhang Y, Li F, Lei P. Network analysis in the identification of special mechanisms between small cell lung cancer and non-small cell lung cancer. Thorac Cancer. 2014;5:556–64. Bhattacharjee A, Richards WG, Staunton J, Li C, Monti S, Vasa P, Ladd C, Beheshti J, Bueno R, Gillette M, et al. Classification of human lung carcinomas by mRNA expression profiling reveals distinct adenocarcinoma subclasses. Proc Natl Acad Sci USA. 2001;98:13790–5. Roepman P, Jassem J, Smit EF, Muley T, Niklinski J, van de Velde T, Witteveen AT, Rzyman W, Floore A, Burgers S, et al. An immune response enriched 72-gene prognostic profile for early-stage non-small-cell lung cancer. 
Clin Cancer Res. 2009;15:284–90. Yamauchi Y, Muley T, Safi S, Rieken S, Bischoff H, Kappes J, Warth A, Herth FJ, Dienemann H, Hoffmann H. The dynamic pattern of recurrence in curatively resected non-small cell lung cancer patients: experiences at a single institution. Lung Cancer. 2015;90:224–9. Lavin Y, Kobayashi S, Leader A, Amir ED, Elefant N, Bigenwald C, Remark R, Sweeney R, Becker CD, Levine JH, et al. Innate immune landscape in early lung adenocarcinoma by paired single-cell analyses. Cell. 2017;169(750–765):e717.
JZ and ZS designed and edited this study; LL and WX searched the databases and collected full-text papers; ZF and YZ extracted and analysed the data; CZ and JL wrote the manuscript. All authors read and approved the final manuscript.
The authors would like to express their sincere thanks for the sharing of data from The Cancer Genome Atlas (TCGA) database. All data generated or analysed during this study are included either in this article or in the supplementary information files. Ethical approval for this investigation was obtained from the Research Ethics Committee of the First Affiliated Hospital of Zhengzhou University, and the study was conducted in accordance with the 1964 Helsinki Declaration and its later amendments.
This study was supported by grants from the National Natural Science Foundation of China (Grant Nos. 31670895 and 71673254); the National Key Research and Development Program of China (construction and promotion of a demonstration system based on the telemedicine/mHealth network; Grant No. 2017YFC0909900); the Program of Science & Technology of Henan Province (Grant No. 201602037); special funds from the central government for the guidance of local science and technology development (construction and demonstration of a telemedicine big data application system for precision medicine); special funds of a major science and technology project in Henan Province (2016), construction and demonstration application of a medical and health big data analysis system based on a telemedicine cloud platform (Grant No. 151100310800); and the doctoral research team fund of the First Affiliated Hospital of Zhengzhou University for in-hospital interdisciplinary collaboration research, on the impact of air pollution exposure levels on lung cancer and its potential molecular mechanism based on satellite remote sensing big data (Grant No. 2016-BSTDJJ-15).
Zhirui Fan, Wenhua Xue and Lifeng Li contributed equally to this work.
Department of Pharmacy, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, 450052, Henan, China: Wenhua Xue, Lifeng Li, Jingli Lu & Jie Zhao. Cancer Center, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, 450052, Henan, China: Zhirui Fan, Lifeng Li, Chaoqi Zhang & Zhenhe Suo. Center of Telemedicine, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, 450052, Henan, China: Yunkai Zhai & Jie Zhao. Engineering Laboratory for Digital Telemedicine Service, Zhengzhou, 450052, Henan, China.
Correspondence to Jie Zhao.
Fan, Z., Xue, W., Li, L. et al. Identification of an early diagnostic biomarker of lung adenocarcinoma based on co-expression similarity and construction of a diagnostic model. J Transl Med 16, 205 (2018). https://doi.org/10.1186/s12967-018-1577-5
Keywords: lung adenocarcinoma; early diagnosis; diagnostic model; medical bioinformatics.
Let the production function for the firm be Cobb-Douglas, with fixed capital (K):
Y = zF(K, N^d) = z K^alpha (N^d)^(1 - alpha), where 0 < alpha < 1.
a. Solve for labor demand as a function of K, z, w, and alpha.
b. How does labor demand change if total factor productivity doubles?
c. How does labor demand change if capital doubles?
Demand for Labor: Demand for labor is a type of derived demand because firms' demand for labor is driven by consumers' demand for goods, which are produced using labor. When firms decide how many workers to hire, they compare the marginal product of labor to the wage rate.
Answer and Explanation:
a. Firms hire labor until the marginal product of labor is equal to the wage rate, i.e., z K^alpha (1 - alpha)(N^d)^(-alpha) = w, which ...
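The original answer is cut off above. As a worked sketch (not the site's full solution), the first-order condition quoted in the partial answer can be solved for labor demand as follows:

\[
\frac{\partial Y}{\partial N^d}
  = z(1-\alpha)K^{\alpha}(N^d)^{-\alpha} = w
\;\Longrightarrow\;
(N^d)^{\alpha} = \frac{z(1-\alpha)K^{\alpha}}{w}
\;\Longrightarrow\;
N^d = K\left(\frac{z(1-\alpha)}{w}\right)^{1/\alpha}.
\]

From this closed form, doubling z multiplies labor demand by 2^(1/alpha) > 2 (since 0 < alpha < 1), while doubling K exactly doubles labor demand, because N^d is linear in K.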
Spheroidal weathering of basalt from Gebel Qatrani, Fayum Depression, Egypt
Essam El-Hinnawi, S. D. Abayazeed & A. S. Khalil
Bulletin of the National Research Centre, volume 45, Article number: 1 (2021)
Three stages of basalt weathering are known: early or incipient weathering, intermediate weathering and advanced weathering. The Late Oligocene basalt of Gebel Qatrani in the Fayum Depression, Egypt, shows signs of early weathering, exhibited in particular by basalt spheroids found at the top of the basalt flow. The present paper gives the results of a detailed petrographical, mineralogical and geochemical study of the weathering of these basalt spheroids. The core-stones of the basalt spheroids are composed of phenocrysts of plagioclase feldspars and clinopyroxenes set in a groundmass of tiny feldspars and pyroxenes, partly altered olivine and opaque minerals. The basalt is subalkali (tholeiitic). The outer weathered shells surrounding the core-stones are composed of partly altered feldspars and pyroxenes. The calculated weathering indices show that there is a marked weathering trend from the core-stones of the spheroids to the outer shells. The chemical mobility of the elements shows marked depletion of Mg, Ca, Na and K from the core-stones to the outer shells due to the weathering of olivine, pyroxene and feldspars. The trace elements Rb, Sr, Ni, V, Cr and Zn are also depleted. The weathering of the basalt spheroids from Gebel Qatrani, Fayum Depression, Egypt, is of the incipient type. The degree of weathering from the core-stones of the basalt spheroids to the corresponding weathered shells indicates that the weathering occurred under predominantly semiarid to arid conditions.
The Fayum Depression has an area of about 12,000 km2 and lies about 100 km south-west of Cairo (Fig. 1). The geology of the area was studied in detail by Beadnell (1905), and his work has remained the basis for subsequent studies (Said 1962; Vondra 1974; Bown and Vondra 1974; Bown and Kraus 1988; Gingerich 1992, 1993, among others). These studies indicate that the sedimentary successions encountered in the Fayum Depression are of middle and late Eocene and Oligocene age and can be divided into the following formations, from base to top (Fig. 1):
Wadi Rayan Formation
This formation is of middle Eocene age, forms the base of the Fayum Depression and is exposed in its southern part, reaching a thickness of about 130 m. It is mainly composed of limestone, with Nummulites gizehensis, argillaceous sand and sandy shale (Beadnell 1905; Said 1962). The formation was deposited on an open marine continental shelf. Abd-Elshafy et al. (2007) analyzed the faunal community of the formation at three locations in Wadi Rayan and concluded that the faunal content, as well as the lithology, indicates the alternation of two transgressive sedimentary cycles enclosing a regressive phase.
Gehannam Formation
Also known as the ravine beds, this formation, which is of middle Eocene age, conformably overlies the Wadi Rayan Formation and attains a thickness of about 70 m. It is mainly composed of gypsiferous shale, marl, limestone and sands and was deposited on a shallow but open marine shelf (Gingerich 1992).
Birket Qarun Formation
This formation is of late Eocene age and conformably overlies the Gehannam Formation. It has a thickness of about 50 m and is composed of sandstones and shale, with a few bands of limestone.
Lithologically, the Birket Qarun Formation is indistinguishable from the underlying one, but it is paleontologically distinct, as it carries different fossil species (Beadnell 1905). The Birket Qarun Formation was a submerged barrier bar rather than a beach complex (Bown and Kraus 1988; Gingerich 1992).
Qasr El-Sagha Formation
This formation is also of late Eocene age and conformably overlies the Birket Qarun Formation. It varies in thickness from one location to another in the Fayum Depression and is generally in the range of 175–200 m. Four distinct facies have been recognized in the Qasr El-Sagha Formation: (a) interbedded claystone, siltstone and quartz sandstone facies; (b) quartz sandstone facies; (c) arenaceous bioclastic carbonate facies; and (d) gypsiferous and carbonaceous laminated claystone and siltstone facies (Vondra 1974). Facies (a) and (b) have been called the Dir Abu Lifa Member, and they overlie facies (c) and (d), which have been called the Temple Member (Bown and Kraus 1988). Both the Dir Abu Lifa and the Temple Members represent the upper part of the Qasr El-Sagha Formation. At locations south and west of Qasr El-Sagha, Gingerich (1992) introduced two additional members: the Harab Member, which forms the middle part of the Qasr El-Sagha Formation, and the Umm Rigl Member, which forms the lower part of the Formation. The Harab Member is composed of brown shale, whereas the Umm Rigl Member is of similar facies as the Temple Member but is separated from it by the Harab Member. The Umm Rigl Member was deposited in a shallow outer lagoonal environment; the Harab Member in a deeper central lagoonal environment; the Temple Member in a shallow inner lagoonal environment; and the Dir Abu Lifa Member in a non-lagoonal (deltaic or inter-deltaic) environment (Vondra 1974; Bown and Kraus 1988; Gingerich 1992).
Jebel Qatrani Formation
This formation is of Oligocene age and is separated from the underlying Qasr El-Sagha Formation by an unconformity that involved erosion of up to 70 m of Qasr El-Sagha strata in places before the deposition of the Gebel Qatrani Formation (Gingerich 1992). The lithology of the Gebel Qatrani Formation is rather complex; it comprises about 340 m of variegated alluvial rocks: fine to coarse sandstone, granule and pebble conglomerate, sandy mudstone, carbonaceous mudstone and limestone. The lower part of the Gebel Qatrani Formation is assumed to have been deposited by meandering streams, whereas the upper part of the formation was deposited under the influence of an encroaching marine strandline (Bown and Vondra 1974; Bown and Kraus 1988).
Fig. 1 Geological map of Fayum Depression (simplified after Said 1962 and the CONOCO Geological Map of Egypt)
In later Oligocene times, a tensional tectonic episode occurred and was accompanied by northwest-trending normal faults throughout the northern parts of the Eastern and Western Deserts of Egypt. Basaltic lavas were extruded from these fissures at different locations (Rittmann 1954). In the northern part of the Fayum Depression, these basaltic lavas capped the upper Qatrani escarpment, varying in thickness from 2 to 25 m (the latter thickness has been recorded at Widan El Faras). In outcrops where the basalt is thinnest, it appears that it was a single flow; however, some authors believe that there were two or more flows (Fleagle et al. 1986; Bown and Kraus 1988). The flows overlie the Gebel Qatrani Formation with a pronounced erosional unconformity.
Meneisy and Abdel Aal (1984) determined the whole-rock K/Ar ages of two samples of these basalts as 23.8 ± 0.7 Ma and 24.4 ± 9.7 Ma, and Bown and Kraus (1988) reported two ages of 24.7 ± 0.4 Ma and 27.0 ± 3 Ma. Meneisy (1990) compiled further K/Ar age data of 27 ± 1 Ma, 25 ± 1 Ma and 23 ± Ma. Bown and Kraus (1988) indicated, however, that a sample of basalt taken from the lowest part of the flow near the top of the Gebel Qatrani Formation gave an age of 31 ± 1 Ma and therefore stated that the basalt might be of late Early Oligocene to Late Oligocene age. On average, the data mentioned above indicate that the age of the Gebel Qatrani basalt is 25.7 Ma, i.e., Late Oligocene according to the International Chronostratigraphic Chart (Cohen et al. 2013; updated 2017). The petrography and geochemistry of the Gebel Qatrani basalt are described in several studies (El-Hinnawi 1965; El-Hinnawi and Abdel Maksoud 1968, 1972; Heikal et al. 1983; Sharara 1984; Abdel Meguid et al. 1992; Endress et al. 2011). The basalt generally has an ophitic texture, with plagioclase feldspar and clinopyroxene phenocrysts set in a fine groundmass of plagioclase, clinopyroxene, occasional olivine (mostly altered), opaque minerals and altered glass. The basalt of Gebel Qatrani is unconformably overlain by Miocene alluvial sediments (the Khashab Formation). The contact between the basalt and the Khashab Formation is markedly erosional, and scours in the top of the basalt are filled with coarse sand, sandstones containing basalt debris and other pebble conglomerate (Said 1962; Bown and Kraus 1988). The basalt itself is not markedly weathered; no weathering profiles have been detected, and the weathering observed is predominantly rind weathering. At the top of the basalt flows of Gebel Qatrani, spheroids of basalt are occasionally encountered. The present paper describes the mineralogy and geochemistry of the cores and weathered rims of these basalt spheroids and discusses the paleo-environmental conditions of their weathering.
Samples and methods
The term "spheroidal weathering" is used if concentric shells completely surround the core-stone (Ollier 1971). A difference exists between spheroidal weathering and exfoliation: in spheroidal weathering, the weathering process acts all around the spheroid, attacking the underside as well as the top. Patino et al. (2003) pointed out that basalt flows have cooling features that serve as hydrologic discontinuities, which function as major pathways for the fluids that are the agents of weathering. In the incipient and intermediate weathering stages, the concentric shells surrounding the basalt spheroids still contain part of the primary minerals and/or their secondary derivatives. In advanced stages of weathering, however, the shells are completely transformed into argillaceous material. The basalt spheroids encountered at Gebel Qatrani (Fig. 2) vary in size and have a gray to greenish-gray color. In the present work, several samples were collected, taking care that their outer weathered shells more or less completely surround the core-stones. The collected samples were generally egg-shaped; the longest dimension varied from 8 to 15 cm, the intermediate dimension from 6 to 12 cm, and the shortest dimension from 6 to 10 cm. The weathered shells varied in thickness between 2 and 5 mm.
Fig. 2 Photograph of a basalt spheroid showing the core-stone and the outer weathered shell
In the laboratory, the outer weathered shells were carefully detached from the core-stones.
Thin sections of the latter were prepared for petrographic study. The shell material was powdered for analysis by X-ray diffraction. (The analyses were carried out at the Metallurgical Institute, Tebbin, Cairo, using a Philips X-ray diffractometer; scanning was carried out at a 2θ rate of 1° per minute between 4° and 60°.) The major and trace element compositions of both the powdered core-stones and the shells were determined by X-ray fluorescence (XRF) at the Central Laboratories of the Egyptian Geological Survey, Dokki, Cairo. Petrography and mineralogy of the basalt spheroids The examination of thin sections of the core-stones of the basalt spheroids under the polarizing microscope revealed that they exhibit an ophitic texture, with plagioclase feldspar and clinopyroxene phenocrysts within a fine groundmass of plagioclase, clinopyroxene, opaque minerals, glass and altered minerals. The modal composition of the core-stones was determined by point counting. It consists, on average, of 56% plagioclase feldspar, 27% clinopyroxene, 6% opaque minerals and 11% glass and alteration products. The plagioclase phenocrysts occur as tabular subhedral to euhedral crystals with an average size of 1 × 0.35 mm. The crystals exhibit albite and albite-Carlsbad twinning, often with oscillatory zoning. Some of the crystals show tiny inclusions of pyroxene. The composition of the plagioclase varies from one crystal to another (El-Hinnawi and Abdel Maksoud 1968; Endress et al. 2011); on average, it is An68Ab30Or2. In contrast to the plagioclase phenocrysts, the plagioclase in the groundmass is present as tiny laths, about 0.5 × 0.1 mm, mingled with clinopyroxene, opaque and altered grains. Clinopyroxene is present as anhedral to subhedral tabular phenocrysts about 1.7 × 1.3 mm. The crystals exhibit lamellar twinning, and some are zoned. The composition of the clinopyroxene was determined optically and found to range from pigeonite to augite, in agreement with the findings of Endress et al. (2011). Some clinopyroxene crystals contain tiny inclusions of plagioclase, and some are corroded at the edges. The clinopyroxene in the groundmass is in the form of tiny grains intermixed with plagioclase, opaque and altered material. A few samples show the presence of altered olivine (iddingsite) in the groundmass. X-ray powder diffraction analysis of the weathered shells separated from the basalt spheroids showed sharp peaks of plagioclase feldspars as the predominant minerals present. In one sample (sample No. 1), peaks of pyroxene were also detected. This indicates that the weathering of the core-stones of the basalt spheroids was at an initial (incipient) stage; the main minerals (the plagioclase and the ferromagnesian minerals) were not completely altered as in advanced weathering stages. No marked peaks of clay minerals were detected in the studied samples, although Morsy and Attia (1983) reported the presence of montmorillonite-vermiculite, chlorite and illite in two samples. Geochemistry of the basalt spheroids Table 1 gives the chemical analyses of the core-stones and shells of seven representative basalt spheroids, and Table 2 gives the distribution of some trace elements in the studied samples. Figure 3 shows the relationship between total alkalis (Na2O + K2O) and silica for these samples and indicates that all samples fall below the Macdonald and Katsura (1964) boundary line, in the subalkali (i.e., tholeiitic) field.
The figure indicates that the shells have a lower alkali content, which illustrates the loss of these elements due to weathering. On the other hand, there is a slight increase in the silica content, indicating the conservative behavior of silica during incipient weathering.
Table 1 Chemical analyses and weathering indices
Table 2 Trace elements (ppm)
Total alkali-silica diagram; the solid line is the boundary line of Macdonald and Katsura; the core-stones (black squares) and their shells (triangles) lie below the solid line in the field of subalkali (tholeiitic) basalts
To assess the extent of weathering, several weathering indices have been proposed, e.g., the weathering index of Parker (WIP, Parker 1970), the chemical index of alteration (CIA, Nesbitt and Young 1982), the chemical index of weathering (CIW, Harnois 1988) and, more recently, the mafic index of alteration under oxidizing and reducing conditions (MIA(O), MIA(R)), which includes the Fe and Mg oxides (Babechuk et al. 2014). Each uses the molecular proportions of major element oxides, with variations on which oxides are included and assumptions about element mobility. In the present work, the following indices have been calculated and used to describe the changes that accompanied the spheroidal weathering of the Gebel Qatrani basalt: \({\text{WIP}} = \left[ (2\,{\text{Na}}_2{\text{O}}/0.35) + ({\text{MgO}}/0.9) + (2\,{\text{K}}_2{\text{O}}/0.25) + ({\text{CaO}}/0.7) \right] \times 100\) (Parker 1970). \({\text{CIA}} = \left[ {\text{Al}}_2{\text{O}}_3/\left( {\text{Al}}_2{\text{O}}_3 + {\text{CaO}}^{*} + {\text{Na}}_2{\text{O}} + {\text{K}}_2{\text{O}} \right) \right] \times 100\) (Nesbitt and Young 1982). \({\text{CIW}} = \left[ {\text{Al}}_2{\text{O}}_3/\left( {\text{Al}}_2{\text{O}}_3 + {\text{CaO}}^{*} + {\text{Na}}_2{\text{O}} \right) \right] \times 100\) (Harnois 1988). \({\text{MIA(O)}} = \left[ \left( {\text{Al}}_2{\text{O}}_3 + {\text{Fe}}_2{\text{O}}_{3\text{T}} \right)/\left( {\text{Al}}_2{\text{O}}_3 + {\text{Fe}}_2{\text{O}}_{3\text{T}} + {\text{MgO}} + {\text{CaO}} + {\text{Na}}_2{\text{O}} + {\text{K}}_2{\text{O}} \right) \right] \times 100\) (Babechuk et al. 2014). The weathering index of Parker (WIP) is considered relatively sensitive to chemical variations in the early stages of weathering because it allows for differential mobility of cations based on their bond strength with oxygen. WIP values range between 100 and 0, with smaller numbers indicating enhanced weathering. In all the studied samples (Table 1), there is a decrease of WIP from the core-stones to the corresponding shells, to varying degrees. Sample No. 3 shows the highest difference in WIP from the core-stone to the weathered shell, indicating stronger weathering than in the other samples, whereas sample No. 4 shows the least difference, indicating minimum weathering. On average, the WIP decreases from 71 in the core-stone to 58 in the weathered shell. The CIA monitors the progressive alteration of feldspars in the samples. High CIA values reflect the removal of mobile cations (Ca, Na, K) relative to Al during weathering (Nesbitt and Young 1982). Table 1 shows that the CIA values of the core-stones of the basalt spheroids range from 37 to 41. The weathered shells have higher CIA values than their corresponding core-stones, ranging from 45 to 50.
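As an aside, all four indices are simple functions of molar oxide proportions. The short Python sketch below (not part of the original study) shows how they can be computed from major-oxide wt% analyses such as those in Table 1; the composition used is hypothetical, and CaO* is taken equal to total CaO, i.e., no carbonate or apatite correction is applied.

# Minimal sketch (not the authors' code) of the weathering indices described above.
# Oxide wt% values are converted to molar proportions before the indices are formed.

MOLAR_MASS = {  # g/mol of the oxides entering the indices
    "Al2O3": 101.96, "Fe2O3": 159.69, "MgO": 40.30,
    "CaO": 56.08, "Na2O": 61.98, "K2O": 94.20,
}

def weathering_indices(wt_pct):
    """Return WIP, CIA, CIW, MIA(O) and molar A-CN-K coordinates from oxide wt%."""
    m = {ox: wt_pct[ox] / MOLAR_MASS[ox] for ox in MOLAR_MASS}
    al, fe = m["Al2O3"], m["Fe2O3"]
    mg, ca, na, k = m["MgO"], m["CaO"], m["Na2O"], m["K2O"]
    # CaO* should be silicate-bound CaO only; here CaO* = CaO (fresh, carbonate-free basalt).
    wip = 100 * (2 * na / 0.35 + mg / 0.9 + 2 * k / 0.25 + ca / 0.7)
    cia = 100 * al / (al + ca + na + k)
    ciw = 100 * al / (al + ca + na)
    mia_o = 100 * (al + fe) / (al + fe + mg + ca + na + k)
    acnk_total = al + (ca + na) + k  # molar A-CN-K coordinates (cf. Fig. 6)
    acnk = (100 * al / acnk_total, 100 * (ca + na) / acnk_total, 100 * k / acnk_total)
    return {"WIP": wip, "CIA": cia, "CIW": ciw, "MIA(O)": mia_o, "A-CN-K": acnk}

# Hypothetical core-stone analysis (wt%), chosen only to give values of the right order:
core = {"Al2O3": 14.0, "Fe2O3": 12.0, "MgO": 5.5, "CaO": 9.5, "Na2O": 2.6, "K2O": 0.8}
for name, value in weathering_indices(core).items():
    print(name, value)

For this illustrative composition the sketch returns WIP of about 70, CIA of about 38, CIW of about 39 and MIA(O) of about 37, i.e., values of the same order as the averages reported for the core-stones.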
On average, the CIA increases from 39 in the core-stone to 48 in the weathered shell. This slight increase reflects the low degree of alteration characteristic of incipient weathering: the feldspars in the core-stones were not extensively altered. As a matter of fact, the X-ray diffraction analysis of the weathered shells indicated the presence of feldspar peaks. Harnois (1988) pointed out that potassium cations leached during weathering can be fixed by adsorption on weathering products, especially clays, and this may disturb the geochemical trend of potassium. He therefore proposed the chemical index of weathering (CIW) as an improvement over the WIP and CIA for gauging the degree of weathering. The CIW index increases with the degree of depletion in Na and Ca relative to Al. Table 1 shows that the CIW increased from the core-stones of the basalt spheroids (average CIW of 40) to the corresponding weathered shells (average CIW of 48). Figures 4 and 5 show the relationship between WIP and CIA, and between CIW and CIA, respectively. Figure 4 shows the decrease in WIP and the increase in CIA from the core-stone samples to the corresponding weathered shells. Figure 5 shows that there is a linear correlation between CIW and CIA; both indices increase from the core-stone samples to the corresponding weathered shells.
Relationship between WIP and CIA. The WIP decreases from the core-stones of the basalt spheroids to the corresponding weathered shells, whereas the CIA increases
Relationship between CIW and CIA. A positive correlation exists between the two indices; both increase from the core-stones of the basalt spheroids to the corresponding weathered shells
Babechuk et al. (2014) proposed the mafic index of alteration (MIA) as a chemical weathering index that extends the equation of the CIA to include the mafic elements Mg and Fe. Many of the mafic minerals (especially pyroxene and olivine) are susceptible to chemical weathering, resulting in the loss of Mg from the weathering products. The fate of Fe during the weathering of most mafic minerals is, however, redox-dependent. In reducing environments, ferrous iron can be mobile and is leached along with Mg. In oxidative weathering environments, however, Fe is usually retained as insoluble ferric iron oxide or oxyhydroxide and is thus enriched along with Al. Two equations were suggested by Babechuk et al. (2014) for the calculation of the MIA, one for an oxidizing environment and the second for a reducing environment. Table 1 gives the calculated values of MIA(O), as it is expected that an oxidizing environment prevailed during the weathering of the Gebel Qatrani basalt. MIA(O) increases from the core-stones of the basalt spheroids (average MIA of 37) to the corresponding weathered shells (average MIA of 46), indicating that iron from the mafic minerals in the core-stones was conservatively preserved as ferric oxide and/or oxyhydroxide with progressive weathering. Because of the complexity of geological systems and the weathering process, Nesbitt and Young (1982) and Nesbitt and Wilson (1992) used the A-CN-K ternary diagram to illustrate the trend of weathering, rather than depending on single indices like WIP, CIA or CIW. This diagram portrays the molar proportions of Al2O3 (A apex), CaO* + Na2O (CN apex) and K2O (K apex). Figure 6 shows the weathering trend from the core-stones of the basalt spheroids to the corresponding weathered shells.
This trend is adjacent and parallel to the CN-A axis, as predicted by Nesbitt and Wilson (1992) for basalts. Had weathering progressed further, clay minerals would have been produced at the expense of feldspars and the bulk composition of the weathered samples would have moved up the trend in the diagram towards the A apex. Since the weathered shells of the basalt spheroids plot at the lower part of the weathering trend (Fig. 6), the weathering had been at the initial (incipient) stage.
A-CN-K ternary diagram showing the weathering trend from the core-stones of the basalt spheroids to the weathered shells. The trend is adjacent and parallel to the CN-A axis
The depletion of the various elements from the weathering regime is related to the nature of the host primary minerals, the mobility of these elements once they are released by the weathering of these minerals, and the susceptibility of the elements to retention in new phases in their immediate environment. To evaluate the chemical changes during weathering, it is necessary to assume that one chemical component is stable and not removed during the weathering process. For this purpose, TiO2 has been chosen by several authors (e.g., Nesbitt et al. 1980; Nesbitt and Wilson 1992; Kurtz et al. 2000; Patino et al. 2003; Ma et al. 2007). In the present work, we studied the variability of the element concentrations in the shells of the basalt spheroids after normalizing them to TiO2 concentrations and then compared them to the core-stones. The percentage change in the TiO2-normalized concentration of an element in the shell relative to that in the core-stone was calculated as follows (Nesbitt 1979; Nesbitt and Wilson 1992): $$\%\ \text{change in ratio} = 100 \times \left[\frac{R_{\text{sh}} - R_{\text{cs}}}{R_{\text{cs}}}\right]$$ where $R_{\text{sh}}$ and $R_{\text{cs}}$ are the elemental ratios (element/TiO2) in the shell and the core-stone, respectively. The assumption behind this type of calculation is that there is no volume change between the weathering product and the unweathered rock (Nahon and Merino 1996). Figure 7 shows the average percentage change in the ratios of the major elements in the shell relative to the core-stones of the basalt spheroids. The Mg/Ti, Ca/Ti, Na/Ti and K/Ti ratios decrease from the core-stone to the weathered shell. The decrease in Mg is attributed to the rapid weathering of olivine in the core-stone. Ca resides primarily in plagioclase, clinopyroxene and glassy material, all of which weather less rapidly than olivine; therefore, Ca/Ti decreases less rapidly than Mg/Ti in incipient weathering. Na and K reside in plagioclase feldspars, which are less readily weathered than olivine and clinopyroxenes. In contrast to Mg, Ca, Na and K, Al and Si are more conservative in incipient weathering, and the Al/Ti and Si/Ti ratios show slight or no increase during weathering. The Fe/Ti ratio shows an increase in early weathering, but a rapid decrease during more advanced weathering (Nesbitt and Wilson 1992). The Mn/Ti ratio decreases with increased weathering, and the P/Ti ratio increases in early weathering but decreases in advanced weathering.
Percent change in the ratio of major oxides/TiO2 between average weathered shell and average core-stone
Figure 8 shows the average percentage change in the ratios of trace elements in the shell relative to the core-stones of the basalt spheroids.
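As an illustration of how these TiO2-normalized changes are computed, the following Python sketch (not from the original study; the analyses are hypothetical, not the values of Tables 1 and 2) applies the formula above to one shell/core-stone pair.

def pct_change_ti_normalized(core, shell):
    """100 * (R_shell - R_core) / R_core, with R = element concentration / TiO2."""
    changes = {}
    for element, conc_core in core.items():
        if element == "TiO2":
            continue
        r_core = conc_core / core["TiO2"]
        r_shell = shell[element] / shell["TiO2"]
        changes[element] = 100 * (r_shell - r_core) / r_core
    return changes

# Hypothetical analyses of one spheroid (oxides in wt%, Sr in ppm); the units only need
# to be consistent between core and shell for each element, since they cancel in R.
core  = {"TiO2": 2.6, "MgO": 5.5, "CaO": 9.5, "Na2O": 2.6, "K2O": 0.8, "Sr": 350}
shell = {"TiO2": 2.8, "MgO": 4.6, "CaO": 8.6, "Na2O": 2.2, "K2O": 0.6, "Sr": 300}

for element, pct in pct_change_ti_normalized(core, shell).items():
    print(f"{element}: {pct:+.1f} %")

For these made-up numbers, Mg/Ti, Ca/Ti, Na/Ti, K/Ti and Sr/Ti all come out negative, which is the qualitative pattern described for Figures 7 and 8.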
The V/Ti, Cr/Ti, Ni/Ti, Zn/Ti, Rb/Ti and Sr/Ti ratios show varying rates of decrease during weathering. V and Cr reside in clinopyroxenes, glass and oxides, and V/Ti and Cr/Ti show a slight decrease in incipient weathering. Ni is primarily located in olivine and Zn in clinopyroxene and glass; both the Ni/Ti and Zn/Ti ratios decrease in early weathering. Sr resides primarily in plagioclase and, like Na, the Sr/Ti ratio decreases with increased weathering. Rb also resides in plagioclase and in glass, and the Rb/Ti ratio decreases with weathering, like the K/Ti ratio. On the other hand, the remaining trace elements (Co, Y, Zr, La, Ce, Pb and Th) are more conservative, and their ratios show no change or a slight increase. Ba released during weathering may be rapidly precipitated by sorption onto secondary minerals, hence the high Ba/Ti ratio (Nesbitt and Wilson 1992).
Fig. 8: Percent change in the ratio of trace elements/TiO2 between average weathered shell and average core-stone
The petrographical, mineralogical and chemical characteristics of the core-stones and the weathered shells of the basalt spheroids indicate that the weathering of the Gebel Qatrani basalt is at the incipient stage. The chemical weathering indices WIP, CIA and CIW indicate slight weathering and rather low rates of release of mobile elements. Such weathering of the basalt must have taken place under semiarid to arid conditions. This interpretation is, more or less, in agreement with that reached by Frey et al. (2013) in their study of the weathering of basalt in the semiarid Deschutes Basin of central Oregon. Novikov et al. (1993) studied the basalt crusts from Syria and pointed out that they reflect significant climate fluctuations in the region, from semiarid (Miocene basalt) to humid warm (Pliocene basalt) and finally to arid conditions (Quaternary basalt). In the Fayum area, Bown et al. (1982) and Bown and Kraus (1988) pointed out that in Oligocene times the area was a subtropical to tropical lowland coastal plain with damp soils and seasonal rainfall that supported an abundance and variety of vegetation and a large and varied vertebrate fauna. In post-Oligocene times, after the eruption of the Gebel Qatrani basalt, the climatic conditions must have changed. El-Saadawi et al. (2014) studied the silicified fossil wood found in the Early Miocene Khashab Formation and found that the woods are characterized by few wide vessels, which are indicative of a warm humid climate. However, they pointed out that the mode of occurrence of the fossil trunks and the absence of other plant remains such as twigs and roots indicate that the fossil woods were not preserved in situ, but were transported from where they grew before silicification. Consequently, the anatomical features of the fossil woods cannot be taken as an indication of the climate at the locality of Gebel Khashab. More recently, Zhang et al. (2014) studied the aridification of the Sahara desert in North Africa using different models and found that North Africa experienced pronounced aridification from the Early Miocene to the Late Miocene. They indicated that in the Late Oligocene and Early Miocene, North Africa was dominated by a semiarid steppe climate with restricted areas of arid desert climate. In the Late Miocene, the arid desert climate expanded across much of North Africa, with a greater resemblance to today's conditions. Zhang et al. (2014) pointed out that this aridification was due to a reduction in North African precipitation.
If a subtropical to tropical humid environment had prevailed in the Gebel Qatrani area after the eruption of the basalt, it would have led to the formation of soil weathering profiles on the basalt substrate. No such profiles have been encountered on top of the Gebel Qatrani basalt. Therefore, it can be concluded that the incipient weathering of the Gebel Qatrani basalt must have taken place under semiarid to arid conditions. The present study on the spheroidal weathering of basalt from Gebel Qatrani, Fayum Depression, Egypt, indicates that weathering was at the initial (incipient) stage. The indices of chemical weathering (WIP, CIA, CIW and MIA(O)) and element mobility studies indicate that there was a loss of Mg, Ca, Na and K during the weathering of the core-stones of the basalt spheroids to the corresponding weathered outer shells. Fe mainly resided with Al, indicating that weathering took place under oxidizing conditions. There was also a depletion of V, Cr, Ni, Zn, Rb and Sr due to the weathering of the core-stone. On the other hand, Co, Y, Zr, La, Ce, Pb and Th behaved essentially conservatively. The low degree of weathering of the basalt spheroids indicates that weathering took place mainly in an arid to semiarid environment.
Abdel Meguid A, El-Metwally A, Morsy M (1992) Tectonic evolution of continental basalts of Egypt: geochemical evidences. Egypt Mineral 4:141–158
Abd-Elshafy E, Metwally MH, Abdel Azwwm S, Mohammed MS (2007) Paleoenvironments of Wadi El-Rayan Eocene, southwest Fayum, Egypt, by using community faunal analyses. In: Proceedings of the second scientific environmental conference, Zagazig, Egypt, pp 125–141
Babechuk MG, Widdowson M, Kamber BS (2014) Quantifying chemical weathering intensity and trace element release from two contrasting basalt profiles, Deccan Traps, India. Chem Geol 363:56–75
Beadnell HJL (1905) The topography and geology of the Fayum Province of Egypt. Survey Department of Egypt, Cairo
Bown TM, Kraus MJ (1988) Geology and paleoenvironment of the Oligocene Jebel Qatrani Formation and adjacent rocks, Fayum Depression, Egypt. U.S. Geological Survey Professional Paper 1452
Bown TM, Vondra CF (1974) Paleontological interpretation of the Oligocene Gebel Qatrani Formation, Fayum Depression, Egypt. Ann Geol Surv Egypt IV:115–138
Bown TM, Kraus MJ, Wing SL, Fleagle JG, Tiffney BH, Simons EL, Vondra CF (1982) The Fayum primate forest revisited. J Hum Evol 11:603–632
Cohen KM, Finney SC, Gibbard PL, Fan J-X (2013) The ICS International Chronostratigraphic Chart. Episodes 36:199–204 (chart updated 15 Jan 2017)
El-Hinnawi E (1965) Petrographical and geochemical studies on Egyptian basalts. Bull Volcanol 28:3–12
El-Hinnawi E, Abdel Maksoud MA (1968) Petrography of Cenozoic volcanic rocks of Egypt. Geol Rundsch 57:879–890
El-Hinnawi E, Abdel Maksoud MA (1972) Geochemistry of Egyptian Cenozoic volcanic rocks. Chem Erde 31:93–112
El-Saadawi W, Kamal El-Din M, Wheeler E, Osman R, El-Faramawi M, El-Noamani Z (2014) Early Miocene woods of Egypt. Int Assoc Wood Anat J 35:35–50
Endress CA, Furman T, Abu El-Rus MA, Hanan BB (2011) Geochemistry of 24 Ma basalts from NE Egypt: source components and fractionation history. Geol Soc Lond Spec Publ 357:265–283
Fleagle JG, Bown TM, Obradovich JD, Simons EL (1986) Age of the earliest African anthropoids. Science 234:1247–1249
Frey HM, Szramek KJ, Manon MK, Kisane MT (2013) Slow chemical weathering in a semi-arid climate: changes in the mineralogy and geochemistry of subaerial basalt flows in the Deschutes River Basin, Central Oregon.
Chem Geol 347:135–152
Gingerich PD (1992) Marine mammals (Cetacea and Sirenia) from the Eocene of Gebel Mokattam and Fayum, Egypt: stratigraphy, age and paleoenvironments. University of Michigan Papers on Paleontology No. 30, 79 pp
Gingerich PD (1993) Oligocene age of the Gebel Qatrani Formation, Fayum, Egypt. J Hum Evol 24:207–218
Harnois L (1988) The CIW index: a new chemical index of weathering. Sedim Geol 55:319–322
Heikal MA, Hassan MA, El-Sheshtawi Y (1983) The Cenozoic basalt of Gebel Qatrani, Western Desert, Egypt, as an example of continental tholeiitic basalt. Ann Geol Surv Egypt 13:193–209
Kurtz AC, Derry LA, Chadwick OA, Alfano MJ (2000) Refractory element mobility in volcanic soils. Geology 28(8):683–686
Ma JL, Wei GL, Xu WG, Lang YG, Sun WD (2007) Mobilization and redistribution of major and trace elements during extreme weathering of basalt in Hainan Island, South China. Geochim Cosmochim Acta 71:3223–3237
Macdonald GA, Katsura T (1964) Chemical composition of Hawaiian lavas. J Petrol 5:82–133
Meneisy MY (1990) Vulcanicity. In: Said R (ed) The geology of Egypt. A.A. Balkema, Rotterdam, pp 157–174
Meneisy MY, Abdel Aal AY (1984) Geochronology of Phanerozoic volcanic activity in Egypt. Bull Fac Sci Ain Shams Univ 25:163–175
Morsy A, Attia MS (1983) Effects of weathering on the mineralogy and chemical composition of some Egyptian basalts. Mineral Pol 14:101–110
Nahon D, Merino E (1996) Pseudomorphic replacement versus dilation in laterites: petrographic evidence, mechanisms and consequences for modeling. J Geochem Explor 57:217–225
Nesbitt HW (1979) Mobility and fractionation of rare earth elements during weathering of granodiorite. Nature 279:206–210
Nesbitt HW, Wilson RE (1992) Recent chemical weathering of basalts. Am J Sci 292:740–771
Nesbitt HW, Young GM (1982) Early Proterozoic climates and plate motions inferred from major element chemistry of lutites. Nature 299:715–717
Nesbitt HW, Markovics G, Price RC (1980) Chemical processes affecting alkalis and alkaline earths during continental weathering. Geochim Cosmochim Acta 44:1659–1666
Novikov VM, Sharkov EV, Chernyshev IV (1993) Geochronology of weathering crusts on flood basalts in Syria and the evolution of regional paleoclimate during the last 20 Ma. Stratigr Geol Correl 1:627–635
Ollier CD (1971) Causes of spheroidal weathering. Earth Sci Rev 7:127–141
Parker A (1970) An index of weathering for silicate rocks. Geol Mag 107:501–504
Patino LC, Velbel MA, Price JR, Wade JA (2003) Trace element mobility during spheroidal weathering of basalts and andesites in Hawaii and Guatemala. Chem Geol 202:343–364
Rittmann A (1954) Remarks on the eruptive mechanism of the Tertiary volcanoes of Egypt. Bull Volcanol 15:109–117
Said R (1962) The geology of Egypt. Elsevier Publ. Co., Amsterdam
Sharara NAF (1984) Petrochemical and geochemical studies on some mid-Tertiary basalts from Egypt. Egypt J Geol 28:83–112
Vondra CF (1974) Upper Eocene transitional and near-shore marine Qasr El-Sagha Formation, Fayum Depression, Egypt. Ann Geol Surv Egypt IV:79–94
Zhang Z, Ramstein G, Schuster M, Li C, Contoux C, Yan Q (2014) Aridification of the Sahara desert caused by Tethys Sea shrinkage during the Late Miocene. Nature 513:401–404
Funding was provided from the budget of the National Research Centre. Department of Geological Sciences, National Research Centre, Tahrir Street, Dokki, Cairo, Egypt: Essam El-Hinnawi, S. D. Abayazeed & A. S. Khalil.
Author contributions: E. El-Hinnawi was responsible for field work and assistance in the interpretation of results, S. D. Abayazeed for the analytical work, and A. S. Khalil for the petrography and mineralogy. All three authors were responsible for the preparation of the final manuscript submitted for publication. All authors read and approved the final manuscript. Correspondence to Essam El-Hinnawi. All authors have expressed their ethics approval and consent to participate in the present research work. All authors have expressed their consent to publish the present work. There are no competing interests in carrying out the present research. El-Hinnawi, E., Abayazeed, S.D. & Khalil, A.S. Spheroidal weathering of basalt from Gebel Qatrani, Fayum Depression, Egypt. Bull Natl Res Cent 45, 1 (2021). https://doi.org/10.1186/s42269-020-00453-2
Keywords: basalt spheroids; Gebel Qatrani; Fayum Depression; weathering indices; element mobility
CommonCrawl
Public procurement share on total purchases
Public procurement as a share of total purchases is one of the indicators used to calculate the zIndex score for evaluating contracting authorities. What does this evaluate? This indicator measures the share of the contracting authority's total purchases that is made through public procurement. It is used to identify deliberate splitting of contracts (in order to push contracts below the value at which certain procedures have to be followed and the details of the contract published in the Journal) and excessive misuse of legal exceptions. The indicator compares the value of contracts published in the Journal with the total volume of controllable operating costs, which is calculated as the sum of selective investment and non-investment expenses according to the State Treasury. Why do we evaluate this? If a large volume of expenses falls outside the scope of the law (and the public procurement information system), this implies a less transparent environment and more room for the contracting authority to make arbitrary decisions. A low value for this indicator implies that the contracting authority's purchases are subject to less control, both by the public and under statutory provisions. Best practice guides collectively agree that transparency and traceability are essential instruments in fighting corruption, not only in public procurement, but also in institutional budget management more generally. How do we evaluate? The indicator is calculated as the value of contracts published in the Journal in the reference period, together with expenses tendered through dynamic purchasing systems and electronic marketplaces, measured against the total volume of controllable operating costs (as defined in the Ministry of Finance's methodology, presented here (in Czech)) in the same reference period. $$z_1 = \sqrt{\frac{\text{value of public procurement contracts}}{\text{value of controllable operating costs}}}$$ Controllable operating costs are defined as the sum of selected cost items that are dependent on the contracting authority's management decisions (these include consumption of energy and materials, repairs and maintenance, other services, etc.), which are obtained from the profit and loss account of the respective institution, together with expenditures on the acquisition of capital assets from the cash flow statement. Both documents are available on the State Treasury web. For institutions not directly covered by the State Treasury, controllable operating costs are identified from the financial statements as the sum of non-personnel operating expenditure and investments. Where particular data points are missing from the Treasury web, the value is calculated from indirect figures, taken from annual reports or, in a few exceptional cases, linearly extrapolated. Methodology adjustments for the different types of contracting authorities: For municipalities, this indicator is only calculated for the 2012-2013 period, because their accounting methodology underwent significant changes during 2010-2011. Who may be disadvantaged by this indicator? Smaller contracting authorities may be put at a disadvantage here, as they typically have a relatively larger share of small-scale contracts. This is one of the reasons why scores should only be compared for authorities of similar size and type.
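A minimal computational sketch of the z1 sub-indicator follows. The CZK figures are hypothetical, and capping the ratio at 1 is our own simplifying assumption for authorities whose tendered volume temporarily exceeds their controllable costs (e.g., because of multi-year contracts); it is not stated in the methodology above.

from math import sqrt

def z1_procurement_share(journal_contracts, dps_and_emarketplace, controllable_opex):
    """sqrt(value of public procurement / controllable operating costs)."""
    share = (journal_contracts + dps_and_emarketplace) / controllable_opex
    return sqrt(min(share, 1.0))  # cap at 1: an assumption, see the note above

# A hypothetical authority tendering 420 M CZK out of 800 M CZK of controllable costs:
print(round(z1_procurement_share(380e6, 40e6, 800e6), 2))  # prints 0.72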
According to a financial audit summary (in Czech) conducted by the Ministry of Finance, the most frequent reasons for purchases outside the scope of the Public Procurement Act (and not published in the Journal) are the following: The associated contract is either small-scale, outside the scope of the Public Procurement Act, or classified. Comparable contracting authorities should, however, have a comparable share of such contracts. The purchase is for additional work beyond the value of an original contract. Extra work of this sort is considered by best practice guides to be a major public procurement issue. The associated contract was concluded before the observed reference period. As long as the contracting authority tenders a similar volume of contracts each year, the volume of historic contracts for which controllable operating costs are carried over into the reporting period should be equal to the volume of contracts awarded in the reporting period whose costs are carried over into the following period. A contracting authority will, however, be at a disadvantage if it awarded an unusually large volume of contracts (with a value of over 50% of its controllable operating costs) prior to the reference period in question and the costs of those contracts are thus not counterbalanced in the manner described. How to improve this rating: Do not split contracts. Review the legality of your conduct with every supplier to whom annual payments exceed the threshold applicable to small-scale contracts, and consider launching a tender invitation for those purchases. Reduce the amount of extra work. Strictly permit only well-justified extra work, and make the conditions for such extra work clear in the contract wording (see the Competitive contracting indicator). Avoid using supplementary agreements if not economically necessary, and do not exceed the contractual price set in the original contract. Avoid taking advantage of legal exceptions if not strictly necessary. Re-tender historic contracts wherever economically appropriate - typically this includes mobile operator services, ICT and energy supplies. The most questionable contracts are those that were concluded in the 1990s, before substantially stricter statutory regulations were introduced.
Author: Jiří Skuhrovec
CommonCrawl
R. Abbott. Citations: 12,797. Highly Influential Citations: 1,008. Co-authors include P. Astone, P. Fritschel, A. Freise, L. Cadonati.
Gravitational Waves and Gamma-Rays from a Binary Neutron Star Merger: GW170817 and GRB 170817A. B. Abbott, R. Abbott, +1,148 authors, P. Ubertini. On 2017 August 17, the gravitational-wave event GW170817 was observed by the Advanced LIGO and Virgo detectors, and the gamma-ray burst (GRB) GRB 170817A was observed independently by the Fermi…
Predictions for the rates of compact binary coalescences observable by ground-based gravitational-wave detectors. LIGO Scientific Collaboration, Virgo Collaboration, J. Abadie, +708 authors, K. Belczynski. We present an up-to-date, comprehensive summary of the rates for all types of compact binary coalescence sources detectable by the initial and advanced versions of the ground-based gravitational-wave…
GWTC-1: A Gravitational-Wave Transient Catalog of Compact Binary Mergers Observed by LIGO and Virgo during the First and Second Observing Runs. The LIGO Scientific Collaboration, T. Abbott, +1,147 authors, J. Zweizig. We present the results from three gravitational-wave searches for coalescing compact binaries with component masses above 1$\mathrm{M}_\odot$ during the first and second observing runs of the…
Binary Black Hole Mergers in the First Advanced LIGO Observing Run. The LIGO Scientific Collaboration, T. Abbott, +971 authors, J. Zweizig. The first observational run of the Advanced LIGO detectors, from September 12, 2015 to January 19, 2016, saw the first detections of gravitational waves from binary black hole mergers. In this paper…
Astrophysical Implications of the Binary Black Hole Merger GW150914. B. Abbott, R. Abbott, +959 authors, J. Zweizig. The discovery of the gravitational-wave source GW150914 with the Advanced LIGO detectors provides the first observational evidence for the existence of binary black-hole systems that inspiral and…
The Rate of Binary Black Hole Mergers Inferred from Advanced LIGO Observations Surrounding GW150914. A transient gravitational-wave signal, GW150914, was identified in the twin Advanced LIGO detectors on September 14, 2015 at 09:50:45 UTC. To assess the implications of this discovery, the detectors…
GW170608: Observation of a 19 solar-mass binary black hole coalescence. On 2017 June 8 at 02:01:16.49 UTC, a gravitational-wave (GW) signal from the merger of two stellar-mass black holes was observed by the two Advanced Laser Interferometer Gravitational-Wave…
Exploring the Sensitivity of Next Generation Gravitational Wave Detectors. B. Abbott, R. Abbott, +720 authors, J. Harms. The second generation of gravitational-wave detectors are just starting operation, and have already yielded their first detections. Research is now concentrated on how to maximize the scientific…
A gravitational-wave standard siren measurement of the Hubble constant. B. Abbott, R. Abbott, +1,308 authors, M. Serra-Ricart. On 17 August 2017, the Advanced LIGO and Virgo detectors observed the gravitational-wave event GW170817, a strong signal from the merger of a binary neutron-star system. Less than two seconds after…
Localization and broadband follow-up of the gravitational-wave transient GW150914. B. Abbott, R. Abbott, +1,514 authors, S. Rosswog. A gravitational-wave (GW) transient was identified in data recorded by the Advanced Laser Interferometer Gravitational-wave Observatory (LIGO) detectors on 2015 September 14. The event, initially…
CommonCrawl
arXiv:2110.10375v1 [math.PR] (Mathematics > Probability)
Title: A Heavy Traffic Theory of Two-Sided Queues
Authors: Sushil Mahavir Varma, Siva Theja Maguluri
(Submitted on 20 Oct 2021)
Abstract: Motivated by emerging applications in online matching platforms and marketplaces, we study a two-sided queue. Customers and servers that arrive into a two-sided queue depart as soon as they are matched. It is known that a state-dependent control is needed to ensure the stability of a two-sided queue. However, analytically studying the steady-state behaviour of a two-sided queue is, in general, challenging. Therefore, inspired by the heavy-traffic regime in classical queueing theory, we study a two-sided queue in an asymptotic regime where the control decreases to zero. It turns out that there are two different ways the control can be sent to zero, and we model these using two parameters, viz., $\epsilon$ that goes to zero and $\tau$ that goes to infinity. Intuitively, $\epsilon$ modulates the magnitude of the control and $\tau$ is the threshold after which we modulate the control. We show that, depending on the relative rates of $\epsilon$ and $\tau$, there is a phase transition in the limiting regime. We christen the regime when $\epsilon \tau \to 0$ the quality-driven regime, and the limiting behaviour is a Laplace distribution. The phase transition starts in the regime when $\epsilon \tau$ goes to a nonzero constant, where the limiting distribution is a Gibbs distribution, and so we call it the critical regime. When $\epsilon \tau \to \infty$, we conjecture that the limiting distribution is uniform and prove that in a special case. We call this the profit-driven regime. These results are established using two related proof techniques. The first one is a generalization of the characteristic function method, and the second is a novel inverse Fourier transform method.
Subjects: Probability (math.PR)
Cite as: arXiv:2110.10375 [math.PR] (or arXiv:2110.10375v1 [math.PR] for this version)
From: Sushil Mahavir Varma
[v1] Wed, 20 Oct 2021 05:07:00 GMT (115kb)
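For intuition only, the following Python sketch mimics a two-sided queue under a generic state-dependent admission control parameterized by eps and tau. This is an illustrative stand-in, not the control analysed in the paper: here arrivals to the longer side are simply turned away with probability eps once that side's queue exceeds the threshold tau, and the printed averages are merely suggestive of how the imbalance grows as the control is weakened.

import random

def simulate(eps, tau, horizon=200_000, seed=0):
    """Average absolute imbalance of a two-sided queue under a toy (eps, tau) control."""
    rng = random.Random(seed)
    q = 0            # q > 0: customers waiting, q < 0: servers waiting; matches are instant
    total = 0.0
    accepted = 0
    for _ in range(horizon):
        side = 1 if rng.random() < 0.5 else -1   # customer (+1) or server (-1) arrival
        # Generic control (an assumption, not the paper's policy): beyond the threshold
        # tau, reject arrivals that would lengthen the already-long side with prob. eps.
        if side * q >= tau and rng.random() < eps:
            continue
        q += side
        total += abs(q)
        accepted += 1
    return total / accepted

for eps, tau in [(0.05, 10), (0.01, 50), (0.001, 200)]:
    print(f"eps={eps}, tau={tau}, eps*tau={eps * tau:g}: mean |imbalance| ~ {simulate(eps, tau):.1f}")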
CommonCrawl