A Secure Image Encryption Algorithm Based on Composite Chaos Theory

Qiuru Cai, School of Computer Engineering, Jiangsu University of Technology, Changzhou 213001, China. [email protected]

This paper introduces chaotic image encryption to realize the secure transmission of digital images, combining a 1D and a 3D chaotic system into a single image encryption algorithm. Firstly, two logistic chaotic sequences are generated iteratively for pixel permutation, and the initial values of the Bao system are derived from them. Then, the chaotic sequence for pixel replacement and diffusion is generated from the plaintext pixel values and the Bao system. After that, the plaintext image is encrypted and decrypted by the encryption and decryption formulas. Finally, the encryption process is simulated, and the algorithm performance is analyzed in terms of key space and correlation of adjacent pixels. The results show that the proposed algorithm achieves better security than traditional methods.

Keywords: image encryption, permutation, diffusion, composite chaotic system

The development of network technology has produced images with huge data volume, high redundancy and strong correlation, which are increasingly important in fields such as business, the military and medical aid. Therefore, more and more attention has been paid to protecting the security of image transmission. Among the various protection methods, the easiest, most reliable and most effective is to encrypt the digital information of the target image before transmission and decrypt it after transmission. Existing image encryption techniques, ranging from partial encryption and complete encryption to scrambling [1, 2], attempt to make the image appear chaotic, disordered and noisy, so that an attacker cannot extract any information from it. Most mature encryption algorithms, notably the Advanced Encryption Standard (AES) and the Data Encryption Standard (DES), were developed for text. However, an image differs from text in many respects; for example, adjacent pixels often have strong correlation and high redundancy. Applied directly to images, these algorithms suffer from heavy computation and low efficiency, failing to meet the real-time demands of image transmission; worse, they may destroy the storage format of the original image. To solve this problem, this paper designs a secure image encryption algorithm based on composite chaos theory. Chaos is a complex dynamic behavior of nonlinear dynamical systems [3]. It reflects the inherent randomness of deterministic systems and has natural similarities with cryptography, so chaos theory has been applied widely in image encryption. Compared with traditional encryption, chaotic encryption focuses on the generation of the encryption sequence rather than on the cipher algorithm itself, and emphasizes selecting a chaotic system that generates a highly random sequence. This approach is much more efficient and secure for images than traditional algorithms. Chaotic encryption was first proposed by Matthews in 1989 [4]. Later, Jakimoski et al. [5] developed a chaotic encryption structure of confusion and diffusion; the former disturbs pixel positions to eliminate the correlation between adjacent pixels, and the latter evens out the distribution of the pixel values.
This structure has been accepted as a classical framework of chaotic image encryption. Chaotic image encryption algorithms roughly fall into four categories: low-dimensional chaotic systems, multi-dimensional chaotic systems, hyper-chaotic systems and composite chaotic systems. Low-dimensional chaotic systems stand out for their simplicity and ease of implementation; typical examples include logistic mapping [6], tent mapping [7] and Chebyshev mapping [8]. However, low-dimensional chaotic systems cannot withstand decoding attacks such as spectrum analysis and phase-space reconstruction, because they have too few control parameters and a small key space. To remedy this defect, many scholars have designed high-dimensional chaotic systems with complex dynamic features, such as the Lorenz system [9], the Chen system [10] and the Bao system [11]. These high-dimensional systems can effectively resist common attacks like phase-space reconstruction. Nonetheless, the emerging chosen-plaintext attack goes beyond their capacity: with a few plaintext pairs, an attacker can recover the control parameters of a high-dimensional chaotic system. In recent years, chaos theory has been integrated with concepts and theories from other disciplines for image encryption. For instance, Reference [12] puts forward a quantum chaotic encryption system in conjunction with quantum mechanics, Reference [13] designs an image encryption algorithm based on DNA encoding, and Reference [14] presents a hybrid algorithm for image compression and encryption. These integrated methods greatly enrich the theory of chaotic image encryption. For secure and efficient encryption, composite chaotic encryption algorithms have been created [15-17], either coupling low- and high-dimensional chaos or integrating low-dimensional and hyper-chaotic systems. Two or more chaotic systems can complement each other: their coupling can protect chaotic orbit information, reduce computing complexity and support real-time communication. Seyedzadeh et al. [18] combined three chaotic maps (Logistic, Arnold and Kent) into a fast multi-chaotic encryption algorithm, which achieves a good encryption effect at the expense of a small key space. Haroun et al. [19] formulated a composite encryption algorithm of low- and multi-dimensional chaos, which produces a good random sequence but has weak differential resistance. To improve encryption performance, it is necessary to combine hyper-chaotic and multi-dimensional chaotic systems properly, and to design a rational plaintext association strategy.

3. Digital Image Encryption Based on Chaos Theory

3.1 Judgment of chaos

To judge whether a system is chaotic, an important criterion is the Lyapunov exponent, which characterizes the local stability of the orbits of a dynamical system. Let $x_{n+1}=f(x_n)$ and $y_{n+1}=f(y_n)$ be two state equations of a 1D discrete chaotic system, and let $|x_0-y_0|$ be the error between their initial values. After one iteration, the error can be expressed as $|x_1-y_1| = |f(x_0)-f(y_0)| \approx \left|\frac{df}{dx}\Big|_{x_0}\right| \cdot |x_0-y_0|$, where $\left|\frac{df}{dx}\Big|_{x_0}\right| = \lim_{y_0 \to x_0} \frac{|f(x_0)-f(y_0)|}{|x_0-y_0|}$. The error after the second iteration is $|x_2-y_2| \approx \left|\frac{df}{dx}\Big|_{x_0} \cdot \frac{df}{dx}\Big|_{x_1}\right| \cdot |x_0-y_0|$, and the error after the n-th iteration is $|x_n-y_n| \approx \left|\prod_{i=0}^{n-1}\frac{df}{dx}\Big|_{x_i}\right| \cdot |x_0-y_0|$. The mean deviation per iteration is therefore $\left|\prod_{i=0}^{n-1}\frac{df}{dx}\Big|_{x_i}\right|^{1/n}$. Any error in the initial values will thus grow or shrink as the iterations proceed.
The degree of deviation can be expressed as:

$\lambda=\frac{1}{n}\ln\left(\left|\prod_{i=0}^{n-1}\frac{df}{dx}\Big|_{x_i}\right|\right)$ (1)

where $x_i$ is the i-th iterate of the system. When $n\to\infty$, the Lyapunov exponent $\lambda$ can be calculated by:

$\lambda=\lim_{n\to\infty}\frac{1}{n}\sum_{i=0}^{n-1}\ln\left|\frac{df}{dx}\Big|_{x_i}\right|$ (2)

The Lyapunov exponent measures how fast nearby orbits of the system diverge or converge: if $\lambda$ is positive, nearby orbits diverge; if it is negative, they converge. For multi-dimensional chaotic systems, Eq. (2) should be replaced with a formulation based on the Jacobian matrix, which computes the Lyapunov exponents of multi-dimensional systems more accurately. For an m-dimensional chaotic system, the system equation can be expressed as:

$\left\{\begin{aligned} x_{n+1} &=f_{1}\left(x_{n}, y_{n}, \ldots, z_{n}\right) \\ y_{n+1} &=f_{2}\left(x_{n}, y_{n}, \ldots, z_{n}\right) \\ &\;\;\vdots \\ z_{n+1} &=f_{m}\left(x_{n}, y_{n}, \ldots, z_{n}\right) \end{aligned}\right.$ (3)

The Jacobian matrix of the system equation is:

$J=\begin{bmatrix} \frac{\partial f_1}{\partial x_n} & \frac{\partial f_1}{\partial y_n} & \cdots & \frac{\partial f_1}{\partial z_n} \\ \frac{\partial f_2}{\partial x_n} & \frac{\partial f_2}{\partial y_n} & \cdots & \frac{\partial f_2}{\partial z_n} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{\partial f_m}{\partial x_n} & \frac{\partial f_m}{\partial y_n} & \cdots & \frac{\partial f_m}{\partial z_n} \end{bmatrix}$ (4)

Given the initial value $(x_0, y_0, \ldots, z_0)$ of the system equation, the first n Jacobian matrices can be obtained as:

$J_{0}=J\left(x_{0}, y_{0}, \ldots, z_{0}\right),\; J_{1}=J\left(x_{1}, y_{1}, \ldots, z_{1}\right),\; \ldots,\; J_{n-1}=J\left(x_{n-1}, y_{n-1}, \ldots, z_{n-1}\right)$ (5)

Then, each Lyapunov exponent of the system can be calculated as:

$\lambda_{1}=\frac{1}{n} \sum_{i=0}^{n-1} \ln \lambda_{1i},\; \lambda_{2}=\frac{1}{n} \sum_{i=0}^{n-1} \ln \lambda_{2i},\; \ldots,\; \lambda_{m}=\frac{1}{n} \sum_{i=0}^{n-1} \ln \lambda_{mi}$ (6)

where $\lambda_{1i}, \lambda_{2i}, \ldots, \lambda_{mi}$ ($i\in[0,n-1]$) are the eigenvalues of the i-th Jacobian matrix, and $\lambda_1, \lambda_2, \ldots, \lambda_m$ are the Lyapunov exponents of the m-dimensional system. To judge whether a system is chaotic, the first step is to compute all of its Lyapunov exponents. If the maximum Lyapunov exponent is positive, the system is chaotic; otherwise, it is not. For multi-dimensional systems there may be more than one positive exponent; if so, the system is hyper-chaotic.

3.2 Basic design method of chaotic encryption algorithm

The information of a digital image covers two aspects: the values and the positions of its pixels, the smallest units of an image. A pixel value generally falls in the interval [0, 255]. The pixel positions, i.e. the arrangement of all the pixel values, are essentially unique to the image. Hence, the key task of an image encryption algorithm is to conceal both the pixel values and the pixel positions of the target image. Popular image encryption algorithms include pixel value permutation [20] and position permutation [21].
Here, permutation is a way to make the image visually chaotic. It disrupts the original order of the image pixels and reduces the correlation of adjacent pixels as much as possible, making it impossible to obtain useful information by visual inspection. Generally, the information of an image can be represented as a 2D matrix, in which each element corresponds to one pixel of the image, recording its position and value. For an image of size M×N, the corresponding M×N matrix can be defined as:

$P=\begin{pmatrix} P_{0,0} & P_{0,1} & \cdots & P_{0,N-1} \\ \vdots & \vdots & \ddots & \vdots \\ P_{M-1,0} & \cdots & \cdots & P_{M-1,N-1} \end{pmatrix}, \quad i \in [0, M-1],\ j \in [0, N-1]$ (7)

where $P_{i,j}$ is the pixel value in the i-th row and j-th column of the image. The essence of permutation is to transform the rows and columns of the matrix in Eq. (7), i.e. to change the placement rules of the pixels of the whole image. For example, a 512×512 image was subjected to the Arnold transform 0, 1, 3, 7, 356, 370, 373 and 374 times in turn; the results are presented in Figure 1.

Figure 1. Example images of permutation

However, permutation only conceals the image visually; it does not change its statistical features. Taking the gray histogram as an example, permutation reduces the pixel correlation, but it is not sufficient to remove the correlation completely. This requires a complete change of the pixel values and their statistical features, which is known as pixel replacement. Generally, the original plaintext information is changed by simple addition, multiplication or exclusive-OR operations. If the selected key sequence has good randomness, there will be no correlation between the pixels. Through diffusion, the effect of a change in a single plaintext pixel or in the key spreads to the whole image.

4. Composite Image Encryption Based on Logistic Mapping and Bao System

4.1 Composite chaotic system theory

An image encryption algorithm based on a single chaotic system suffers from a small key space and low security. When a single chaotic system fails to meet the requirements, several chaotic systems should be combined; this idea underlies composite chaotic encryption. Accordingly, this paper proposes an image encryption algorithm combining a 1D and a 3D chaotic system.

Figure 2. Logistic mapping bifurcation graph

The system equation of logistic mapping, the most common 1D chaotic system, can be expressed as:

$x_{n+1}=\mu x_n (1-x_n)$ (8)

where $x\in[0,1]$. When $3.5699 < \mu \le 4$, the system is in a chaotic state (Figure 2). The system equation of the 3D Bao system, a continuous autonomous dissipative system with a quadratic nonlinearity, can be expressed as:

$\left\{\begin{array}{l} x' = \alpha\left(x - y\right) \\ y' = xz - \gamma y \\ z' = x^{2} - \beta z \end{array}\right.$ (9)

Being a continuous chaotic system, it must be discretized before application. Here, the fourth-order Runge-Kutta algorithm [22, 23] is used to discretize the Bao system.
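As an illustration of this discretization step (and of Eq. (10) that follows), the sketch below applies one classical fourth-order Runge-Kutta step to system (9). The parameter values α=20, β=3, γ=32, the step length h=0.001 and the initial value (10, 10, 10) are taken from the paper; the function and variable names are my own and not part of the original work.

```python
def bao_rhs(x, y, z, alpha=20.0, beta=3.0, gamma=32.0):
    """Right-hand side of the Bao system as written in Eq. (9)."""
    return alpha * (x - y), x * z - gamma * y, x * x - beta * z

def rk4_step(x, y, z, h=0.001):
    """One fourth-order Runge-Kutta step, matching Eq. (10)."""
    fx1, fy1, fz1 = bao_rhs(x, y, z)
    fx2, fy2, fz2 = bao_rhs(x + h * fx1 / 2, y + h * fy1 / 2, z + h * fz1 / 2)
    fx3, fy3, fz3 = bao_rhs(x + h * fx2 / 2, y + h * fy2 / 2, z + h * fz2 / 2)
    fx4, fy4, fz4 = bao_rhs(x + h * fx3, y + h * fy3, z + h * fz3)
    x_new = x + h * (fx1 + 2 * fx2 + 2 * fx3 + fx4) / 6
    y_new = y + h * (fy1 + 2 * fy2 + 2 * fy3 + fy4) / 6
    z_new = z + h * (fz1 + 2 * fz2 + 2 * fz3 + fz4) / 6
    return x_new, y_new, z_new

# Iterating from the initial value (10, 10, 10) used for Figure 3:
state = (10.0, 10.0, 10.0)
for _ in range(5000):
    state = rk4_step(*state)
```

Collecting the successive states of this loop is what produces a trajectory like the one shown in Figure 3.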
The equations of the discretized system can be expressed as:

$\left\{\begin{array}{l}{x_{n+1}=x_{n}+h\left(f_{x1}+2 f_{x2}+2 f_{x3}+f_{x4}\right) / 6} \\ {y_{n+1}=y_{n}+h\left(f_{y1}+2 f_{y2}+2 f_{y3}+f_{y4}\right) / 6} \\ {z_{n+1}=z_{n}+h\left(f_{z1}+2 f_{z2}+2 f_{z3}+f_{z4}\right) / 6}\end{array}\right.$ (10)

The system is chaotic when the control parameters are α=20, β=3 and γ=32. Figure 3 shows the trajectory of the 3D Bao system when the initial value of (x, y, z) is (10, 10, 10), the step length is h=0.001, and the number of iterations is n=5,000.

Figure 3. Trajectory map of the 3D Bao system

4.2 Algorithm design

To set up the chaos-based image encryption algorithm, the initial values of the chaotic systems must first be determined so that chaotic sequences can be generated. These sequences are then processed and modified into sequences suitable for image encryption. Finally, the formulas for pixel permutation, replacement and diffusion must be designed. Let M×N be the size of the target image, and $P_{i,j}$ the pixel value in the i-th row and j-th column.

Step 1: Determine the two initial values of the logistic mapping system. First, perform $N_0$ preliminary iterations to eliminate the transient effect. Then, generate two chaotic sequences $\{X_i\}_{i=0}^{M-1}$ and $\{Y_j\}_{j=0}^{N-1}$ of lengths M and N through M and N further iterations, respectively. Next, extract the $N_1$-th, $N_2$-th and $N_3$-th iteration values from both sequences, and calculate the initial values $x_{bao}$, $y_{bao}$ and $z_{bao}$ of the x, y and z variables of the Bao system:

$\left\{\begin{array}{l}{x_{bao}=\left(X_{N_{1}}+Y_{N_{2}}\right) / 2 \times 10} \\ {y_{bao}=\left(X_{N_{2}}+Y_{N_{2}}\right) / 2 \times 10} \\ {z_{bao}=\left(X_{N_{3}}+Y_{N_{3}}\right) / 2 \times 10}\end{array}\right.$ (11)

The initial values of the chaotic systems and the selected index pair can be used as the key of the algorithm.

Step 2: Compute, from the two chaotic sequences,

$X_i=\mathrm{mod}\left(\left(\mathrm{fabs}(X_i)-\mathrm{floor}(\mathrm{fabs}(X_i))\right)\times 10^{14}, M\right)$ (12)

$Y_j=\mathrm{mod}\left(\left(\mathrm{fabs}(Y_j)-\mathrm{floor}(\mathrm{fabs}(Y_j))\right)\times 10^{14}, N\right)$ (13)

where fabs() returns the absolute value of a floating-point number and floor() returns the largest integer not exceeding its argument; $X_{i} \in[0, M-1]$, $Y_{j} \in[0, N-1]$. Swap the pixel value of point (i, j) with that of point $(X_{i}, Y_{j})$:

$P_{i, j}=P_{X_{i}, Y_{j}}$ (14)

Performing this swap on all pixels in turn completes the permutation.

Step 3: Input $x_{bao}$, $y_{bao}$ and $z_{bao}$ into the Bao system. Iterate the chaotic system with the Runge-Kutta algorithm, so that a set of new values $(b_1, b_2, b_3)$ is generated in each iteration. Denote the processed sequence as $\left\{B_{k}\right\}_{k=0}^{M \times N-1}$.

Step 4: Calculate the following:

$\begin{aligned} b_{i+1} &=\mathrm{mod}\left(\left(\mathrm{fabs}\left(x_{i}\right)-\mathrm{floor}\left(\mathrm{fabs}\left(x_{i}\right)\right)\right) \times 10^{14}, 256\right) \\ b_{i+2} &=\mathrm{mod}\left(\left(\mathrm{fabs}\left(y_{i}\right)-\mathrm{floor}\left(\mathrm{fabs}\left(y_{i}\right)\right)\right) \times 10^{14}, 256\right) \\ b_{i+3} &=\mathrm{mod}\left(\left(\mathrm{fabs}\left(z_{i}\right)-\mathrm{floor}\left(\mathrm{fabs}\left(z_{i}\right)\right)\right) \times 10^{14}, 256\right) \end{aligned}$ (15)

Then, compute $t=\mathrm{mod}(P_{i,j},3)$.
If t=0, insert $b_{i+1}$, $b_{i+2}$ and $b_{i+3}$ into $B_k$ in turn; if t=1, insert $b_{i+2}$, $b_{i+3}$ and $b_{i+1}$; if t=2, insert $b_{i+3}$, $b_{i+1}$ and $b_{i+2}$. Repeat this process until $M×N$ values $b_i$ have been generated.

Step 5: Form the sequence of permutated image pixels $P_i$, and denote the encrypted pixel sequence as $C_i$. Define the replacement and diffusion operations as:

$C_i=B_k \oplus (\mathrm{mod}(P_i+S,256)) \oplus C_{i-1}$ (16)

where S is the sum of all image pixel values. Each encrypted pixel value thus depends on the previous encrypted pixel value and on S.

Step 6: Repeat the permutation, replacement and diffusion process T times to complete the encryption of the entire image.

5. Experimental Simulation and Analysis

The proposed encryption algorithm was verified through experiments on an 8-bit, 256-level image. The initial values of the logistic mapping system were set to (0.75, 0.85), with $N_0$=500, $C_0$=120, S=11,520 and T=3. The encryption effect is presented in Figure 4.

Figure 4. The encryption effect

As shown in Figure 4, after permutation, replacement and diffusion the encrypted image is completely confused and presents no useful information. Figure 5 compares the gray histograms of the original image and the encrypted image. The encrypted image exhibits a uniform distribution of pixel values, with essentially equal probabilities for the different values, while the original image shows an uneven distribution with vastly different probabilities. This means the encrypted image can better resist attacks based on statistical analysis.

Figure 5. Gray histograms

Next, the horizontal and vertical adjacent pixels before and after encryption were subjected to correlation analysis; the results are displayed in Figure 6.

Figure 6. Correlation of horizontal and vertical adjacent pixels before and after encryption

Figure 6 shows a strong linear correlation between both horizontal and vertical adjacent pixels in the pre-encryption image, whereas the pixel values of the encrypted image are scattered in the adjacent-pixel space, with weak correlation and a random distribution.

This paper presents an image encryption algorithm based on 1D and 3D chaotic systems, and verifies its performance through experiments. The results show that the proposed algorithm has a large key space and strong resistance to various attacks, including statistical analysis by an attacker, so it can greatly improve the security of chaotic encryption. Future research will further improve the algorithm, trying to meet the security requirements with fewer iterations.

Sponsored by the National Natural Science Foundation of China (Grant No. 61401226), the MOE (Ministry of Education in China) Project of Humanities and Social Sciences (Grant No. 14YJAZH023), the Basic Research Program of Jiangsu University of Technology (Grant No. KYY14007), the Natural Science Foundation of Universities of Jiangsu Province (No. 13KJB520005) and the State Key Laboratory of Information Security (Grant No. 2015-MSB-10).

[1] Norouzi B, Mirzakuchaki S, Seyedzadeh S, Mosavi MR. (2014). A simple, sensitive and secure image encryption algorithm based on hyper-chaotic system with only one round diffusion process. Multimedia Tools & Applications 71(3): 1469-1497. https://doi.org/10.1007/s11042-012-1292-9 [2] Norouzi B, Mirzakuchaki S. (2014). A fast color image encryption algorithm based on hyper-chaotic systems. Nonlinear Dynamics 78(2): 995-1015.
https://doi.org/10.1007/s11071-014-1492-0 [3] El-Sayed AMA, Salman SM. (2016). Dynamic behavior and chaos control in a complex Riccati-type map. Quaestiones Mathematicae 39(5): 665-681. https://doi.org/10.2989/16073606.2015.1115441 [4] Matthews R. (1989). On the derivation of a "Chaotic" encryption algorithm. Cryptologia 13(1): 29-42. https://doi.org/10.1080/0161-118991863745 [5] Jakimoski G, Kocarev L. (2001). Analysis of some recently proposed chaos-based encryption algorithms. Physics Letters A 291(6): 381-384. https://doi.org/10.1016/S0375-9601(01)00771-X [6] Liu B, Liu N, Li JX, Liang W. (2011). Research of image encryption algorithm base on chaos theory. Proceedings of 2011 6th International Forum on Strategic Technology 2: 1096-1098. https://doi.org/10.1109/IFOST.2011.6021211 [7] Wei Y, Dai Y, Zhang Y, Chen J, Ding J. (2013). Adaptive chaotic embedded particle swarm optimization algorithm based on tent mapping. Computer Engineering & Applications 49(10): 45-49. [8] Yoon EJ, Jeon IS. (2011). An efficient and secure diffie–hellman key agreement protocol based on chebyshev chaotic map. Communications in Nonlinear Science & Numerical Simulation 16(6): 2383-2389. https://doi.org/10.1016/j.cnsns.2010.09.021 [9] Lee KW, Singh SN. (2011). Non-certainty-equivalent adaptive control of chaos in Lorenz system. International Journal of Modelling, Identification and Control 13(4): 310-321. http://dx.doi.org/10.1504/IJMIC.2011.041786 [10] Yassen MT. (2003). Chaos control of Chen chaotic dynamical system. Chaos, Solitons and Fractals 15(2): 271-283. https://doi.org/10.1016/S0960-0779(01)00251-X [11] Khellat F. (2014). Delayed feedback control of bao chaotic system based on HOPF bifurcation analysis. Journal of Engineering Science & Technology Review 8(2): 7-11. https://doi.org/10.25103/jestr.082.02 [12] Ellatif AAA, Li L, Wang N, Han Q, Niu XM. (2013). A new approach to chaotic image encryption based on quantum chaotic system, exploiting color spaces. Signal Processing 93(11): 2986-3000. https://doi.org/10.1016/j.sigpro.2013.03.031 [13] Chai X, Gan Z, Lu Y, Chen YR, Han D. (2017). A novel image encryption algorithm based on the chaotic system and DNA computing. International Journal of Modern Physics C 28(5): 1-13. https://doi.org/10.1142/S0129183117500693 [14] Zhang Y, Xu B, Zhou N. (2017). A novel image compression–encryption hybrid algorithm based on the analysis sparse representation. Optics Communications 392: 223-233. https://doi.org/10.1016/j.optcom.2017.01.061 [15] Liu Z, Zeng G, Xie FS. (2012). Chaotic image encryption method based on pixel value composite scrambling. Computer Engineering & Applications 48: 122-126. https://doi.org/10.3778/j.issn.1002-8331.2012.25.026 [16] Sun G, Bin S. (2018). A new opinion leaders detecting algorithm in multi-relationship online social netwoks. Multimedia Tools and Applications 77(4): 4295-4307. [17] Zhu H, Zhang X, Yu H. (2016). A novel image encryption scheme using the composite discrete chaotic system. Entropy 18(8): 276-289. https://doi.org/10.3390/e18080276 [18] Seyedzadeh SM, Mirzakuchaki S. (2012). A fast color image encryption algorithm based on coupled two-dimensional piecewise chaotic map. Signal Processing 92(5): 1202-1215. https://doi.org/10.1016/j.sigpro.2011.11.004 [19] Haroun MF, Gulliver TA. (2015). Real-time image encryption using a low-complexity discrete 3D dual chaotic cipher. Nonlinear Dynamics 82(3): 1523-1535. https://doi.org/10.1007/s11071-015-2258-z [20] Li Y, Wang C, Chen H. (2017). 
A hyper-chaos-based image encryption algorithm using pixel-level permutation and bit-level permutation. Optics & Lasers in Engineering 90: 238-246. https://doi.org/10.1016/j.optlaseng.2016.10.020 [21] Lang J. (2015). Color image encryption based on color blend and chaos permutation in the reality-preserving multiple-parameter fractional Fourier transform domain. Optics Communications 338(338): 181-192. https://doi.org/10.1016/j.optcom.2014.10.049 [22] Yang D, Wang N, Chen S, Song GJ. (2009). An explicit method based on the implicit runge-kutta algorithm for solving wave equations. Bulletin of the Seismological Society of America 99(6): 3340-3354. https://doi.org/10.1785/0120080346
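As a compact, simplified illustration of the replacement-and-diffusion stage of Eq. (16) in Section 4.2 above, here is a sketch of a forward and inverse diffusion pass. It is not the paper's exact implementation: the permutation of Step 2 is omitted, and the Bao-system keystream $\{B_k\}$ is replaced by a simple logistic-map stand-in; the names, parameters and the C0 value shown are assumptions for illustration only.

```python
import random

def keystream(length, x=0.75, mu=3.99):
    """Illustrative chaotic keystream (logistic map), a stand-in for {B_k};
    the paper instead derives this sequence from the Bao system."""
    out = []
    for _ in range(length):
        x = mu * x * (1 - x)
        out.append(int((x * 1e14) % 256))
    return out

def encrypt(pixels, B, S, c0=120):
    """Replacement + diffusion in the spirit of Eq. (16):
    C_i = B_i XOR ((P_i + S) mod 256) XOR C_{i-1}."""
    cipher, prev = [], c0
    for p, b in zip(pixels, B):
        c = b ^ ((p + S) % 256) ^ prev
        cipher.append(c)
        prev = c
    return cipher

def decrypt(cipher, B, S, c0=120):
    """Inverse of encrypt(): recover (P_i + S) mod 256, then remove S."""
    plain, prev = [], c0
    for c, b in zip(cipher, B):
        plain.append(((b ^ c ^ prev) - S) % 256)
        prev = c
    return plain

# Round-trip check on a small random "image" row.
pixels = [random.randrange(256) for _ in range(16)]
S = sum(pixels)                      # S: sum of plaintext pixel values
B = keystream(len(pixels))
assert decrypt(encrypt(pixels, B, S), B, S) == pixels
```

Because each ciphertext byte feeds into the next one, flipping a single plaintext pixel (or S) changes every subsequent ciphertext byte, which is the diffusion property the paper relies on.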
How does a magnetic field affect electronic spin?

As I understand it, in the Einstein-de Haas experiment we apply a magnetic field to a ferromagnetic material and the spins of its electrons align with the magnetic field, producing a magnetic dipole. But in the Stern-Gerlach experiment, spin isn't affected by the magnetic field, otherwise the beam of atoms wouldn't split in two. If the electrons in the E-dH experiment behaved the same way, there wouldn't be a macroscopic dipole. I'm probably misunderstanding how one of these experiments works, but in case I'm not: why does spin act differently in these two experiments? (Tags: quantum-mechanics, electromagnetism, quantum-spin) – Julian L

Comment: "But in the Stern-Gerlach experiment, spin isn't affected by the magnetic field...": Technically that's not true. The S-G apparatus "measures" the spin of the electron, and a measurement on a quantum system affects its dynamics. What you get at the output of the measurement is the result of your measurement. – UKH

Answer: In both cases, the magnetic field doesn't change the electron spin. The difference is that the electrons in the Einstein-de Haas experiment are part of a lattice, and the ones in the Stern-Gerlach experiment are not. In the Stern-Gerlach experiment, the electrons in the beam are effectively isolated, meaning that whatever spin state they had when they were put into the beam stays that way. The magnetic field gradient doesn't change the spin direction, it just exerts a force in whatever direction the spin dictates. In the Einstein-de Haas experiment, the electrons are part of a lattice with tons of other electrons at a non-zero temperature. Therefore, due to lattice interactions, the spin of each electron is constantly fluctuating, regardless of the presence of a magnetic field. In the absence of a magnetic field, there is an equal probability of detecting the electron in any spin configuration. An applied magnetic field makes some spin configurations (namely, those parallel to the field direction) lower in energy than others, so it shifts the probability distribution toward the direction parallel to the field*. The stronger the applied field, the more heavily the distribution is weighted in that direction. So the magnetic field doesn't really change the direction of the spin, it just changes how much time the fluctuating spin spends in a particular configuration. *In some cases (see antiferromagnetism), interactions between adjacent electrons can be more important than an external field, and lead to unusual effective potentials that make for weird spin arrangements. Typically things happen as above, though. – probably_someone

Answer: A spin generates a magnetic moment $\mu$ which can be translated and rotated by a magnetic field $\textbf{B}$ (you can think of it classically), with: $$F=\nabla(\mu\cdot\textbf{B})$$ and $$T=\mu\times\textbf{B}$$ where $F$ is the force and $T$ the torque acting on the moment. In the Stern-Gerlach experiment the beam does split into as many sub-beams as there are possible projections of the spin (2, if s=1/2), unless a previous measurement set all the projections to the same value. – JalfredP

Comment: Yes, I understand that; my question is why the Stern-Gerlach experiment only measures the projection of the spin, while the Einstein-de Haas experiment forces all spins to have the same projection.
If it measured the spins, like the first one, half of the electrons would have spin up and the other half spin down, giving a total magnetic dipole moment of zero. But that isn't what happens. – Julian L

Answer: I suppose that this is due to the fact that in the ferromagnetic case the electrons are not free to move. They are part of the metallic lattice and in a bound state with respect to the nucleus. Therefore their only possible reaction is to flip the spin (and stay bound). – Nobody-Knows-I-am-a-Dog
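To put a rough number on probably_someone's point that the field merely biases the probability distribution of a fluctuating spin, here is a small sketch of my own (not taken from the thread) of the thermal-equilibrium polarization of an isolated spin-1/2 moment, which follows a tanh law from the Boltzmann weights of the two Zeeman levels.

```python
import math

MU_B = 9.274e-24   # Bohr magneton, J/T
K_B = 1.381e-23    # Boltzmann constant, J/K

def spin_half_polarization(B, T, mu=MU_B):
    """Equilibrium polarization of a two-level (spin-1/2) moment.

    The up/down states have energies -mu*B and +mu*B, so the
    Boltzmann-weighted average moment is mu * tanh(mu*B / (k_B*T));
    this returns the dimensionless factor tanh(mu*B / (k_B*T)).
    """
    return math.tanh(mu * B / (K_B * T))

# At room temperature even a 1 T field only slightly biases the distribution,
# consistent with the picture of a fluctuating spin that spends a bit more
# time aligned with the field than against it.
for B in (0.1, 1.0, 10.0):
    print(B, "T ->", round(spin_half_polarization(B, 300.0), 4))
```

The stronger the field (or the lower the temperature), the more heavily the distribution is weighted toward the parallel configuration, exactly as the answer describes qualitatively.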
Vacuum of space [closed] Want to improve this question? Update the question so it's on-topic for Astronomy Stack Exchange. Closed 1 year ago. Firstly I'm genuinely interesting in a working explanation for this question. It is for this reason that I am editing the question to fine tune the question. In essense the question has remained the same. In order for me to check mark the best answer I'm going to ask for citations for the claims made in the answers because I'm getting lots of theory and responders are not in agreement, as far as I can tell. Also I cannot accept circular reasoning i.e. if we assume x then y, since y therefore x. That's fallacious logical reasoning because x was never proven it was assumed and y might be independent of x. The vacuum of space is incredibly powerful, 1 x 10-17 torr, and the vacuum between the Earth and the Moon is 1 x 10-11 torr. How can such a vacuum (very low pressure) in close proximity to the Earth's atmosphere (high pressure) that goes to 8.5km elevation, coexist with the open system of the Earth's atmosphere? How does this not defy the second law of thermodynamics and still remain true? Space is low pressure and the earth's atmosphere is high pressure. In order to have any pressure the gas requires (demands) that it press upon something. Pressure is a force exerted by the substance per unit area on another substance. The pressure of a gas is the force that the gas exerts on the walls of its container. When you blow air into a balloon, the balloon expands because the pressure of air molecules is greater on the inside of the balloon than the outside. Pressure is a property which determines the direction in which mass flows. If the balloon is released, the air moves from a region of high pressure to a region of low pressure and the balloon deflates. 1 Earth is an open system that presses against the vacuum of space, why therefore is the second law of thermodynamics suspended if indeed it is a law. Space is low pressure, therefore the atmosphere which is not in a container should disperse into the low pressure space. VACUUM (There's nothing to it…) Vacuum Technology PHY451 October 22, 2014 cosmology atmosphere space vacuum asked Mar 3 '19 at 5:42 AutodidactAutodidact $\begingroup$ I've voted to close for off-topic because... This question does not appear to be about astronomy, within the scope defined in the help center. This has become clear in a series of comments 1, 2 This is about the physics of the Earth's atmosphere (paraphrasing: why they have them, how gravity keeps them there even though vacuum is pulling at them) not astronomy $\endgroup$ – uhoh Mar 3 '19 at 9:37 $\begingroup$ You shouldn't change a question significantly after an answer has been posted. This is not a forum. The proper way is to post a new question. $\endgroup$ – Peter Mortensen Mar 3 '19 at 9:55 $\begingroup$ I see, the second law of thermodynamics would demand that the rare gas which has a torr value reach equilibrium with the atmosphere immediately next to it, which has a higher torr value. My question is why can they remain in proximity and maintain their torr values thereby defying the second law of thermodynamics. I don't have the answer which is why I asked. Pressure demands a gas fill the space. Yet it stops at that tangential point but only partially. Why? $\endgroup$ – Autodidact Mar 3 '19 at 13:25 $\begingroup$ "The vacuum of space is incredibly powerful" - you're looking at it backwards. The pressure of the atmosphere you're under is immense, due to gravity. 
In 'space', that pressure is much less, but it's non-zero no matter how far you go. There is no demarcation where "positive pressure touching a negative pressure" is. $\endgroup$ – Mazura Mar 3 '19 at 19:57 $\begingroup$ The atmosphere presses on the Earth, and everything else in it due to displacement (and it's not moving into the vast space full of low pressure) because of gravity. $\endgroup$ – Mazura Mar 3 '19 at 20:05 Your assertion that our atmosphere doesn't escape is wrong. Helium and Hydrogen atoms have a low enough mass that they do have an escape velocity at the temperatures on the edge of our atmosphere. This means that when those gasses are released, if they fail to react on their way out, then they will be lost forever to the planet. This is why when you look at our atmosphere we just don't have any. You also seem to assert that there's a "line" where it's the high pressure of our atmosphere on one side, and a low pressure of space on the other; that line doesn't exist. The pressure is a gradient, in the same way as you swim down in a pool, at the top the pressure is low; at the bottom it's high; and it changes steadily as you traverse between the two. Hence why climbers of Everest need to carry oxygen. The heavier gasses don't escape for the same reason that a rock you throw doesn't escape. It requires energy to escape the gravitational well of Earth; and they don't have it; currently. Of course, as the atmosphere heats up from global warming, heavier and heavier particles will gain the energy to leave our atmosphere... UKMonkeyUKMonkey $\begingroup$ Let us continue this discussion in chat. $\endgroup$ – Autodidact Mar 4 '19 at 1:22 $\begingroup$ I've deleted the comments here, some of which were constructive and some of which were, uh, not quite as constructive. There's a chat room for any further productive discussion; please go there unless you'd like to suggest an edit to this answer or request a clarification. Thanks. $\endgroup$ – HDE 226868♦ Mar 4 '19 at 16:20 $\begingroup$ @HDE226868 :) thanks $\endgroup$ – UKMonkey Mar 4 '19 at 16:25 The underlying reason that the molecules of Earth's atmosphere do not fly away into the surrounding vacuum is that they are slower than the escape velocity, which would be 11200 m/s. The typical molecule speed at ground level and room temperature appears to be 500 m/s. If it had a free path such a molecule could fly vertically for $t = v/a = \frac{500m/s}{9.81m/s^2} \approx 50s$ before starting to fall back, with an average velocity of 250 m/s, thus reaching an altitude of 12 or 13 km. (In reality it would collide with other molecules on the way, transferring some kinetic energy to them, so that they could in turn rise higher. Obviously, molecules at the outer fringes of the atmosphere are the real escape candidates.) Molecules which are fast enough surely do escape Earth's gravity well. Some may have been accelerated by particles of the solar wind, some may just have been on the long tail of the standard distribution. The latter is more likely for light atoms and molecules which are faster, like Helium and Hydrogen. Hydrogen's average speed at room temperature is perhaps 2000 m/s. These gases have indeed mostly left Earth for good long ago. (By the way, the solar wind would likely "blow away" our atmosphere in the long run — as it did on Mars — if it weren't deflected by the Earth's magnetic field.) Peter - Reinstate MonicaPeter - Reinstate Monica update: This answer was written before the question was modified. 
I've tried to explain where a value like 10-17 Torr for deep space might come from, but it's since been dropped in lieu of 10-11 Torr at the Moon, which is probably a better way to formulate the question. I think the answer is the same, two points very far away can have very different pressures. They can coexist in the same solar system, just not right next to each other. I think "Why doesn't the Moon have at least a small atmosphere?" could also be an excellent, but very different question. In a comment the OP links to the presentation VACUUM (There's nothing to it… ) written from the perspective of an engineer in the semiconductor manufacturing industry. Slide 6 gives examples of vacuum levels in different situations: Going down: Low vacuum: 760 Torr to 1 x 10-3 Torr Vacuum cleaner: to 600 Torr Thermos bottle 10-3 Torr High vacuum: 10-3 to 10-9 Torr Ion Implanter – Evaporator – Sputterer Ultra high vacuum: 10-9 to 10-12 Torr CERN LHC: 1 x 10-10 Torr Moon's surface: 1 x 10-11 Torr Deep Space 1 x 10-17 Torr = 0.000,000,000,000,000,01 Torr So we can see that the value of 1 x 10-17 Torr is associated with a place in "Deep Space" which is (probably) beyond that of the Moon. Let's see if we can figure out where the author is getting that number. According to the Wikipedia article on the interstellar medium (space between stars, far away from solar systems and other things): In all phases, the interstellar medium is extremely tenuous by terrestrial standards. In cool, dense regions of the ISM, matter is primarily in molecular form, and reaches number densities of 106 molecules per cm3 (1 million molecules per cm3). In hot, diffuse regions of the ISM, matter is primarily ionized, and the density may be as low as 10−4 ions per cm3. Compare this with a number density of roughly 1019 molecules per cm3 for air at sea level, and 1010 molecules per cm3 (10 billion molecules per cm3) for a laboratory high-vacuum chamber. It is harder to talk about pressure than density because pressure is related to both number density and temperature. The atmosphere is more than 10x hotter than interstellar medium, so let's look for a number density ratio of 1/10 of 1000 Torr versus 10-17 Torr, or a ratio of 1019. If (according to Wikipedia) Earth's atmosphere has a density of 1019 per cm3, we're looking for a density of 1 per cm3. Checking Wikipedia we can see that there are components of the interstellar medium with number densities between 106 and 10-4. It looks like the value in the presentation a rough ballpark estimate, but isn't off by more than a handful of orders of magnitude ;-) How can such a vacuum coexist with the open system of earth's atmosphere whereby debris from space can enter in? While these two pressures can coexist in the same universe, they don't coexist in proximity at all. The interstellar medium is very, very far away from Earth's atmosphere, on the order of a lightyear. Gravity keeps Earth's atmosphere nearby Earth, the solar system has gas produced by (and attracted by) the Sun's gravity. In interstellar space, there just isn't any source of gas, and what might have been there at one time has moved away, towards sources of gravity, over billions of years. Peter - Reinstate Monica $\begingroup$ Comments are not for extended discussion; this conversation has been moved to chat. $\endgroup$ – HDE 226868♦ Mar 4 '19 at 16:21 Yes, and that 'something' is Earth's gravity. Gas molecules are pulled down toward the surface just like all other matter. 
So there's an equilibrium between gravity and gas pressure. At the upper end of the atmosphere, some gas escapes (molecules reach escape velocity), this is predominantly the light gases. $\begingroup$ I fully understand the assertion @Hobbes, it's been said several times in different forms in this questions. In order that I give you the check mark for best answer please provide scientific evidence where this is also proven to be the case. Because right now gravity is strong enough to hold down water but weak enough that water vapor rises. And at some point beyond the gradient atmosphere the low pressure vacuum of space can take up minute number of molecules. We also have the moon pulling on the oceans to create tides but too weak to pull the molecules at the top of the atmosphere. You see? $\endgroup$ – Autodidact Mar 4 '19 at 12:57 $\begingroup$ Earth's atmosphere is not enough evidence for you? It's here despite the vacuum of space, it's been here for a long time and we have good data on atmospheric losses. Any experiment would require building a planet with an atmosphere, and that isn't feasible. So basic physics is all you'll get. $\endgroup$ – Hobbes Mar 4 '19 at 13:26 $\begingroup$ Ok so your evidence for how the earth's atmosphere interacts with the vacuum of space is the earth's atmosphere interacting with the vacuum of space. That's a faith based claim if I ever heard one. How about we don't know but we assume...? Basic physics would ask you demonstrate gravity holding gas inside of a container next to a vacuum and the gas not filling the space. Basic physics and The 2nd LAW of thermodynamics is being suspended to account for an unproven faith based opinion. We can show gradient of the atmosphere but not of the vacuum of space? Isn't mars far enough to test space? $\endgroup$ – Autodidact Mar 4 '19 at 13:52 $\begingroup$ No, my evidence for there being a force that prevents Earth's atmosphere from escaping is Earth's atmosphere still being there. $\endgroup$ – Hobbes Mar 4 '19 at 14:05 $\begingroup$ Voted to close, I'm not going to spend time explaining all of physics to you. $\endgroup$ – Hobbes Mar 4 '19 at 14:41 I think uhoh covered proximity but just to induce equilibrium to further elaborate: First of all, positive pressure and negative pressure are just terminology based on where we started i.e. 1 atm and above/below, we just went along. There is zero pressure and gradually moving up from that. Say, perfect vacuum starts at zero pressure and move up as comes across high pressure open system. Things are always at equilibrium and if you want to move something from low pressure to high, you do some work as you have mentioned second law of thermodynamics. Gravity does that work in this case till some point. Ignore everything, lets say there is one good looking blue dot aka earth, and as you move close, gravity starts to get stronger. So, it will invite more molecule to be cuddly and at same time pressure gradient will move gas the other way. Eventually they will reach at equilibrium i.e. same transference. This is true for any height from earth. In absence of such equilibrium earth would loose its atmosphere or gain more (may be case at astronomical time scale). We start at one (gravity rules) to continuously (important) merge to space vacuum with equilibrium all along the line. Once gas truly escape earth gravity (i.e. temperature movement is way stronger than earth gravity), it has no reason to hang around. Same way we can gain by unsuspecting wandering gas molecules. 
Earth atmosphere looks at equilibrium at human timescale. But earth loose and gain as these forces continuously change with distance. Edit: let's try to come up based on fundamental property of universe: Let's take two points A and B sealed in perfect vacuum by a long tube. 1 insert some gas at point A. You would expect, given enough time, gas will diffuse uniformly between point A and B. That is one property of universe: entropy always maximize. (Remember pressure is not a force). There is no answer why but it does and give rise to interesting open physics problem: arrow of time. Wikipedia has nice article on it. Now second property: fundamental force gravity which attracts everything. Say gas is diffused and you switch on gravity at point A. You expect gas to move towards point A. And it will. Now both at play. At any cross section , gas would cross to go to A due to gravity and B to maximize the entropy. Given enough time there will be equilibrium. Equal transference at any length between A and B. But you would expect A to be more concentrated since gravity wants all gas at A where entropy wants all uniformly distributed between A and B. So there will be gradient between A and B. Now think of tube between earth and space and make it disappear. Now B at low concentration, there can be escape but is it significant at human scale. In short atmosphere gradually thins out. Answer based on escape velocity are fine but couple of things: Escape velocity is consequence of fundamental force gravity. You don't need escape velocity to leave earth. Only true for projectile thrown from ground. Gas velocity is function of temperature. Earth atomposhere temperate decreases and then increases and then again decreases and guess -increases again. (With height).See Wikipedia. So will Gas velocity. Where escape velocity will decrease with height. Gas in space is hot because of high kinetic energy and lack of collision to transfer heat. Speed is not due to lack of gravity. You can argue lack of gravity explains lack of concentration but that feels like going in circles. All of this is in Wikipedia and basic science Hope it is more clear.. $\begingroup$ At this point I really neeed citations to decide on the best answer. Could you cite your response or are you merely theorizing? $\endgroup$ – Autodidact Mar 3 '19 at 19:55 $\begingroup$ I will edit to include one simple thought experiment . $\endgroup$ – HR04375439 Mar 4 '19 at 7:29 $\begingroup$ With due respect I don't need thought experiments, I'll read what you have to say but why reinvent the wheel, show me peer reviewed experiments that have been repeated multiple times and are in themselves repeatable. A thought experiment is highly subjective. $\endgroup$ – Autodidact Mar 4 '19 at 12:11 Not the answer you're looking for? Browse other questions tagged cosmology atmosphere space vacuum or ask your own question. Is it accurate to compare comets to clouds and rain? Could vacuum energy dim standard candles? If water vapor is always blown away into space, how is it able to create chemical compounds on Venus?
Pressure dependency in Haber Bosch ammonia synthesis Haber-Bosch ammonia synthesis reaction: $$\ce{3H2 +N2 -> 2NH3}$$ According to the ideal gas law: $pV=nRT$ where constant volume implies $\displaystyle \frac nV=\frac p{RT}$ Then $\displaystyle K_\mathrm c=\frac{[\ce{NH3}]^{2}}{[\ce{H2}]^3\cdot [\ce{N2}]}$ When I substitute $\displaystyle \frac nV=c=\frac {p_{\text{gas}}}{RT}$ I get: $$ K_\mathrm c=\frac{\left(\frac{p(NH_3)}{RT}\right)^2}{\left(\frac{p(H_2)}{RT}\right)^3\left(\frac{p(N_2)}{RT}\right)}=\frac{R^2T^2}{p^2}$$ But according to Le Chatelier's principle, increased pressure should result in an increased $K_\mathrm c$, but my $\displaystyle K_\mathrm c\approx \frac 1{p^2}$. Where is the error? Is it impossible to combine the partial pressures like that because they scale differently with total pressure? Can one derive the pressure dependency of this reaction from the gas law? Edit after solved: The first mistake was that the formula had $K_c$ (a constant) instead of $Q$, $$ Q=\frac{\left(\frac{p(NH_3)}{RT}\right)^2}{\left(\frac{p(H_2)}{RT}\right)^3\left(\frac{p(N_2)}{RT}\right)}=\frac{R^2T^2}{p^2}$$ where $Q$ is the current value/ratio of concentrations, so the reaction tries to bring $Q$ back to $K_c$; for greater pressure $p$, $Q$ is smaller, so the reaction goes in the direction of NH3... physical-chemistry equilibrium gas-laws pressure Beny Benz How do you know that $p_{\ce{NH_3}}$, $p_{\ce{H_2}}$ and $p_{\ce{N_2}}$ are all the same and all equal $p$? You can't just substitute the same $p$ for all of them, because according to Le Chatelier's principle the increase in the total pressure of the system distorts the equilibrium, but your $p$ is not the total pressure at all. So you can't conclude anything from your result. By the formula $$K_p = K_x (P)^{\Delta n}$$ you can say something. Here $P$ is the total pressure of the system. (This is because the formula is derived by using Dalton's law of partial pressures, which says $p_i = x_i P$ where $p_i$ is the partial pressure of the $i^{th}$ gas, $x_i$ is its mole fraction and $P$ is the total pressure of the system.) Here $\Delta n = -2$, so $$K_p = K_x/P^2$$ If you now increase $P$ at constant temperature, $K_p$ remains constant as it depends only on temperature. So $K_x$ must increase to keep $K_p$ constant, but $$K_x = \frac{x^2_{\ce{NH3}}}{x^3_{\ce{H2}}\, x_{\ce{N2}}}=\frac{n^2_{\ce{NH3}}\, N^2}{n^3_{\ce{H2}}\, n_{\ce{N2}}}$$ where $N = n_\ce{NH3} + n_\ce{N2} + n_\ce{H2}$. $K_x$ increasing means that the moles of $\ce{NH3}$ increase while the moles of $\ce{H2}$ and $\ce{N2}$ decrease, which favours the forward reaction. And as $$K_p = K_c (RT)^{\Delta n}$$ with $\Delta n = -2$ and the temperature unchanged, $$K_c= K_p (RT)^2 \implies K_c \propto K_p$$ so $K_c$, like $K_p$, stays constant; it is $K_x$ that increases. Soumik Das According to Le Chatelier's principle, a pressure increase should result in the equilibrium shifting to the right, but it should not - at least to first approximation - affect the equilibrium constant. $$K = \frac{\ce{[NH3]}^2}{\ce{[H2]}^3 \ce{[N2]}}$$ Using partial pressures, this can be rewritten as $$K = \left(\frac{RT}{P}\right)^2 \frac{y_{\ce{NH3}}^2}{y_{\ce{H2}}^3 y_{\ce{N2}}}$$ Then for $K$ to be constant, an increase in the denominator, given by the pressure for example, has to be followed by an increase in the numerator, which means the reaction working in the direction of formation of ammonia.
Vinícius Godim $\begingroup$ Oh I see, so if I have two containers where one has double the pressure of the other, then at the start of my reaction c(H2) and c(N2) are increased (as well as c(NH3)), but the momentary reaction quotient is changed by a factor of 1/4, and so the system tries to get back to equilibrium / to Kc by synthesising NH3. Thank you @Vinícius. $\endgroup$ – Beny Benz Feb 21 '18 at 20:50 It is relatively easy to work out what happens in a gas phase reaction when the total pressure is changed by considering the degree of dissociation $\alpha$. $K_p$ may be defined in terms of free energies of the species in their standard states, i.e. 1 bar pressure, and thus it is independent of pressure. The changes due to dissociation are shown under the equation for the Haber process $$3H_2 + \;\;N_2 \leftrightharpoons \;\;2NH_3\\ 3(1-\alpha) \;\; 1-\alpha \qquad 2\alpha$$ Here $\displaystyle K_p= \frac{p_A^2}{p_H^3p_N}$, where the subscript A is for ammonia. To work out the partial pressures $p_A$ etc. in terms of the total pressure $p$ and $\alpha$, recall that each partial pressure is that species' share of the total moles (in this case $3-3\alpha+1-\alpha+2\alpha=4-2\alpha=f$) multiplied by the total pressure, which gives $\displaystyle p_A= \frac{2\alpha}{f}p, \; p_N=\frac{1-\alpha}{f}p,\; p_H=\frac{3(1-\alpha)}{f}p$. This produces the equilibrium constant in terms of $\alpha$ as $$K_p=\frac{4\alpha^2p^2 (4-2\alpha)^4}{(4-2\alpha)^2\cdot 3^3(1-\alpha)^3p^3(1-\alpha)p}= \frac{4\alpha^2(4-2\alpha)^2}{3^3(1-\alpha)^4}\frac{1}{p^2}$$ To find the pressure dependence, rearrange the equation as $\displaystyle p=\sqrt{ \frac{4\alpha^2(4-2\alpha)^2}{3^3(1-\alpha)^4K_p} }$ and work out what the pressure is at small and large $\alpha$. In this case when $\alpha =0,\; p=0$, so as pressure increases $\alpha$ increases, since $\alpha$ cannot be negative. If you plot a graph this is clear also. (Incidentally, it does not matter what value $K_p$ has, as it only multiplies the curve by a constant value.) Thus in this case, as pressure is increased, the extent of dissociation increases, and more product is formed. Le Chatelier's principle is in accord with this conclusion. porphyrin $\begingroup$ Thank you for your answer @porphyrin, but why is the formula for the degree of dissociation alpha for $3H_2$ equal to $3-a$ and not $3-3a$ or $3*(1-a)$? Degree of dissociation is an unknown topic for me, so I'm sorry if it is obvious... I found chemistry.stackexchange.com/questions/531/… and if I understand it correctly, alpha is the fraction of (reacted substance/substance to start with (constant)). So it should be between 0 and 1... $\endgroup$ – Beny Benz Feb 26 '18 at 0:49 $\begingroup$ Thank you for noticing my error. I have corrected the equations. The conclusions are unchanged. $\endgroup$ – porphyrin Feb 26 '18 at 14:52
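A small numerical sketch of the rearranged expression above makes the trend explicit; the $K_p$ value used here is purely illustrative, not a measured constant.

```python
# Evaluate the rearranged expression p(alpha) derived above for the Haber process.
# The K_p value is an arbitrary illustrative number, not a measured constant.
import math

K_p = 1.0e-4  # illustrative equilibrium constant

def pressure_for_alpha(alpha):
    """Total pressure required for a given extent of reaction alpha."""
    return math.sqrt(4 * alpha**2 * (4 - 2 * alpha)**2
                     / (27 * (1 - alpha)**4 * K_p))

for alpha in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(f"alpha = {alpha:.1f}  ->  p = {pressure_for_alpha(alpha):9.1f}")
```

Reading the output in the other direction, a higher total pressure corresponds to a larger equilibrium value of $\alpha$, which is the conclusion of the answer and what Le Chatelier's principle predicts.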
Plant Methods Measuring the compressive modulus of elasticity of pith-filled plant stems Loay A. Al-Zube1,2, Daniel J. Robertson3, Jean N. Edwards1, Wenhuan Sun1 & Douglas D. Cook1 Plant Methods volume 13, Article number: 99 (2017) The compressional modulus of elasticity is an important mechanical property for understanding stalk lodging, but this property is rarely available for thin-walled plant stems such as maize and sorghum because excised tissue samples from these plants are highly susceptible to buckling. The purpose of this study was to develop a testing protocol that provides accurate and reliable measurements of the compressive modulus of elasticity of the rind of pith-filled plant stems. The general approach was to rely upon standard methods and practices as much as possible, while developing new techniques as necessary. Two methods were developed for measuring the compressional modulus of elasticity of pith-filled node–node specimens. Both methods had an average repeatability of ± 4%. Natural plant morphology and architecture were used to avoid buckling failure. Both methods relied upon spherical compression platens to accommodate inaccuracies in sample preparation. The effect of sample position within the test fixture was quantified to ensure that sample placement did not introduce systematic errors. Reliable measurements of the compressive modulus of elasticity of pith-filled plant stems can be performed using the testing protocols presented in this study. Recommendations for future studies were also provided. The measurement of mechanical properties of plant stems helps in investigating early and late season stalk lodging [1]. But in spite of the economic significance of plants with thin-walled stems (e.g., maize, sorghum, wheat, etc.), few studies have investigated reliable methods for obtaining their mechanical properties under compressive loading. One of the most important mechanical properties is the modulus of elasticity, which provides a linear relation between stress and strain [2]. This mechanical property is essential for calculating stress states as well as physical deformation of a structure or a plant [3, 4]. The modulus of elasticity can be measured in a number of ways, including bending, tension, compression, vibration, and acoustic excitation tests. Bending tests have been used in a number of studies, including those focused on the mechanical properties of wood [5,6,7,8], sunflower stalks [9], sorghum stalks [10], wheat stems [11], and maize stalks [12, 13]. Bending tests are popular because they involve low loads, easily measurable deformation, and require little sample preparation. Bending tests can only be performed on test samples that are long and slender [14], and produce one estimate of the modulus of elasticity for each sample. As a result, this method produces rather poor spatial resolution for the modulus of elasticity. The accuracy of the modulus obtained by bending tests is also adversely affected by the nonlinear form of the bending equations, which tends to amplify measurement uncertainty. Tensile testing is another common technique for obtaining the modulus of elasticity. This approach has been used to measure the modulus of elasticity of wood [7], excised rind sections of maize stems [15, 16], excised longitudinal sections of switchgrass stems [17], rice stems, and Arabidopsis stems [18].
However, sample preparation is more laborious as compared to bending and specimens must be gripped securely without inducing tissue damage. The gripping aspect of tensile test is often quite challenging. Compression testing is very common in the wood literature [7, 19], but is not commonly used in the testing of thin-walled plant stems. This is because the plant rind tends to be highly susceptible to buckling deformation. Consequently, information on the compressive modulus of thin-walled plant stems is often not available. Studies have reported that the tensile and compressive modulus of elasticity values can be different for lumber, wheat straw, and barley straw [6, 20]. This indicates that tensile testing alone may be insufficient for measuring the modulus of elasticity, and that bending tests (which induce both bending and compression) may yield modulus values which are unreliable. Techniques for measuring the compressive modulus of elasticity of plant stems are therefore needed. For thin-walled plant stems, bending loads are primarily borne by longitudinal stresses in the rind tissue [21, 22], so this study focuses on the longitudinal modulus of elasticity. The goal of this study was to develop a robust method for obtaining the compressive modulus of elasticity of the rind of pith-filled plant stems, and to study the factors that influence the accuracy and reliability of this method. For the sake of brevity, the abbreviated term "modulus of elasticity" will be used in place of the more precise term "longitudinal compressive modulus of elasticity" in the remainder of this paper. Stalk samples Dry maize stalks were used as test specimens in this study. Maize can be highly susceptible to late-season stalk lodging, which occurs due to compression-induced buckling of the rind [13]. Maize stalks were sampled from 2 replicates of four commercially available hybrids of dent corn (maize) seeded at 5 planting densities (119,000, 104,000, 89,000, 74,000, and 59,000 plants ha−1) [23]. Stalks were cut just above the ground and just above the ear node immediately before harvest. To prevent fungal growth, stalks were placed in forced-air dryers to reduce stalk moisture to approximately 10–15% moisture by weight, which closely mimics the state of stalks in the field just prior to harvest. To avoid confounding factors, only stalks found to be free of disease and pest damage were included in the study. One hundred (100) samples were selected for compression testing. CT scanning X-ray computed tomography was used to quantify cross-sectional areas of the rind and pith regions (Fig. 1). Stalks were scanned using an X5000 scanner (NorthStar Imaging, Rogers, MN, USA). The scanning process produced 2D cross-sectional images of the maize stalks. A customized computer program was used to extract the cross-sectional area of each stalk from the CT data. The scanning and morphology extraction are described in more detail in a previous study [23]. Transverse cross-section of a maize stalk as obtained by X-ray computed tomography. a X-ray CT image, b X-ray CT image overlaid with lines used to segment the image into rind and pith regions. Segmentation was performed with a custom computer algorithm [23] Technical standards have been developed for compression tests on metals [24], plastics [25], and biomaterials [26]. Each of these standards specifies that sample geometry is critical for accurate assessment of compressional stiffness. 
These standards require that samples are prepared with end faces that are planar and perpendicular to the loading axis (Fig. 2a). This insures that stresses applied during testing are evenly distributed throughout the specimen. Compressive testing setup: a schematic diagram depicting geometric features of an ideal compression test; b a photograph of one specimen situated for testing In this study, test specimens were cut from stalks with an abrasive saw (Bosch GCO2000, Gerlingen, Germany). The face of the rotating saw blade insured that end faces were planar. Because the rind is thickest just below the node line [13], specimens were cut just below each node, as shown in Fig. 3. This approach utilizes the natural architecture of the stalk to minimize the stresses applied to each end during testing. Specimens prepared in this manner were found to be very durable, thus enabling the performance of multiple tests on individual specimens without induced permanent tissue damage. The prepared specimens contained three distinct tissue regions, each of which differed in anatomy and geometry; internode tissue, elongation zone tissue, and a subapical primary elongation meristem region [27]. (Top) Maize rind thickness as a function of axial distance. (Bottom) X-ray computed tomography image of the corresponding maize stalk. Dashed lines indicate the locations where the rind is thickest. Samples were prepared by cutting near the dashed lines. A prepared sample is shown in Fig. 2b Self-aligning compression platens are used in situations where the perpendicularity of sample end-faces is difficult to achieve. As a load is applied, these platens rotate until they are in alignment with the testing surface, thus accommodating any discrepancies in the angle of the end-face. Self-aligning platens (Cat No: S5722A, Instron Corp., Norwood, MA, USA) were therefore used at both ends of the specimens to accommodate any angular inaccuracies in the cutting process. Figure 2 provides a diagram and a photograph of a specimen situated for testing. Compression testing equipment Compression tests were performed using a universal testing machine (Instron 5965, Instron Corp., Norwood, MA, USA). Loads were measured with a 5 kN Instron load cell. Instrumentation control and data acquisition were managed with Instron software (Bluehill 3.0). Two types of strain were measured for each sample in this study; overall strain (ε overall ) and local strain (ε local ). Overall strain was based on the total displacement of the universal testing machine, (i.e., the displacement between the two spherical platens) divided by the total initial length of the sample prior to loading. Local strain was measured using an Instron extensometer, which recorded the displacement of two points on the surface of the specimen (see Fig. 2). The extensometer had a reference length of 50 mm (Instron 2630 Series Dynamic Extensometer, Instron Corp., Norwood, MA, USA) (Fig. 2). Compression testing procedure When testing biological tissues, a preload and repeated application of load cycles is commonly used to bring the samples to a repeatable reference state [28]. This procedure is used to reduce measurement variability and is referred to as pre-conditioning [28,29,30,31]. The loading process is described below. An initial load of 200 N was applied to each specimen. Five loading cycles were then applied. In each loading cycle, the load increased from 200 to 700 N and then returned to the 200 N initial state. The first cycle was used as a conditioning cycle. 
Only measurements from the latter four cycles were employed in the modulus of elasticity calculations. A strain rate of 0.1 mm/s and a sampling frequency of 33 Hz were used in this study. This rate is similar to that used in a previous report (0.0833 mm/s), where corn stalk specimens with a length to diameter ratio of 1:1 were tested [20]. Lower rates have been used in testing wheat/barley straw (0.04 mm/s) [20], lumber (0.005 mm/s) [7] and timber (0.042 mm/s) specimens [32]. Further investigation is needed in the future to elucidate the effect of strain rate on the compressive elastic moduli values of pith-filled plant stems. Modulus of elasticity calculations Compressive modulus is defined as the slope of a uniaxial stress–strain curve. Because the rind is the primary load-bearing tissue of the maize stalk [33], the compressional stress, σ, was obtained by dividing the applied force, F, by the cross-sectional area of the rind, A r (Eq. 1). Cross-sectional areas were measured 5 cm below the node. $$\sigma = \frac{F}{{A_{r} }}$$ This approach neglects the structural contribution of the pith tissue, but allows the estimation of the rind stiffness from a single test. As will be shown in the results section, this assumption introduces relatively minor errors. For small deformations, strain is obtained by dividing the change in length by the original length: $$\varepsilon = \frac{\Delta L}{{L_{0} }} = \left( {\frac{{L_{f} - L_{0} }}{{L_{0} }}} \right)$$ The slope of the stress–strain curve, or compressive modulus, E, was calculated as follows: $$E = \frac{\Delta \sigma }{\Delta \varepsilon } = \frac{{\sigma_{2} - \sigma_{1} }}{{\varepsilon_{2} - \varepsilon_{1} }} = \frac{{\left( {\frac{{F_{2} - F_{1} }}{{A_{r} }}} \right)}}{{\left( {\frac{{L_{2} - L_{1} }}{{L_{0} }}} \right)}} = \left( {\frac{\Delta F}{\Delta L}} \right) \frac{{L_{0} }}{{A_{r} }}$$ In this equation, ΔF and ΔL values in this study corresponded to the changes measured between F1 = 200 N and F2 = 700 N. Equation 3 was used to compute overall and local compressive modulus of each sample. The above equations represent a standard approach to measuring the compressive modulus. Although the self-aligning platens accommodated non-perpendicular end-faces, the self-aligning nature of these platens in combination with the complex geometry of the stalk was found to cause circumferential variation in strain distribution within the specimens. To account for potential variations in stress, local strain measurements were obtained from 4 equally spaced positions around the circumference of each specimen, denoted by the angular position of each measurement: ε 0, ε 90, ε 180, and ε 270 (see Fig. 4). Equation 3 was used to calculate corresponding Compressive Modulus values (E 0, E 90, E 180, and E 270). Top view of the self-aligning platen with a maize stalk cross-section and angular directions of strain measurement These individual strain values were combined to obtain a more accurate assessment of the compressive modulus. Because strain is inversely related to the compressive modulus, special attention must be paid to the manner in which averaging is performed [34,35,36,37]. Instead of inserting individual strain values into Eq. 3, the strain values were first averaged to obtain a single average strain value (εlocal) representing the average cross-sectional strain: $$\varepsilon_{local} = \frac{1}{4}\left( {\varepsilon_{0} + \varepsilon_{90} + \varepsilon_{180} + \varepsilon_{270} } \right)$$ This strain can be substituted into Eq. 
3 as follows to obtain a final expression for the local compressive modulus, E local : $$E_{local} = \frac{\Delta \sigma }{{\Delta \varepsilon_{local} }} = \frac{{\left( {\frac{{F_{2} - F_{1} }}{{A_{r} }}} \right)}}{{\frac{1}{4}\left\{ {\left( {\frac{{L_{2} - L_{1} }}{{L_{0} }}} \right)_{0} + \left( {\frac{{L_{2} - L_{1} }}{{L_{0} }}} \right)_{90} + \left( {\frac{{L_{2} - L_{1} }}{{L_{0} }}} \right)_{180} + \left( {\frac{{L_{2} - L_{1} }}{{L_{0} }}} \right)_{270} } \right\}}}$$ As before, the subscript indices 1 and 2 refer to the test conditions at loads of 200 and 700 N, respectively. Assessing the contribution of pith tissue After all specimens were tested, the contribution of pith tissue to overall stiffness was assessed by carefully drilling a hole of 5 mm in diameter through the nodal tissue at the end-face of each specimen. A common wood drill bit was used for this purpose. A round wood file was then used to gently abrade the pith tissue until only rind tissue remained. These hollow samples were then re-tested using the techniques described above. Sensitivity of the compressive modulus to sample placement The cross-sectional shape of the maize stalk is somewhat irregular (see Fig. 4). Placement of each specimen on the two self-aligning platens is therefore somewhat subjective. The sensitivity of the compressive modulus measurements to specimen placement was therefore assessed to determine if sample placement affected compressive modulus results. These tests were performed by first placing a specimen at the apparent center of each self-aligning platen. After measuring the compressive modulus in the typical fashion, the specimen was shifted away from the center and the test was repeated. This process was repeated for shift distances of 2 mm and 4 shift directions (0°, 90°, 180°, and 270°). The compressive modulus was therefore measured at each of the 12 resulting shift locations. These measurements were balanced by 12 tests performed with the specimen in the center position. Testing alternated between centered and shifted positions to avoid potential bias caused by temporal effects. Measurement repeatability Repeatability of the compression test methodologies in this paper was performed according to standard procedures [38]. A set of 10 specimens were tested repeatedly according to the protocols described above. Each specimen was tested 5 times, and both methods for obtaining compressive modulus were used for each test. The standard deviation was used to quantify the test repeatability for each specimen. Representative stress–strain curves of pith-filled maize samples The stress–strain curves for both overall and local compressive modulus values were linear in nature. The loads used in this study typically resulted in strain values less than 0.5%. Representative curves are shown in Fig. 5, which illustrates that the stiffness measured via local strain measurements was generally higher than the overall stiffness. For all tests in this study, the coefficient of determination (R 2) between stress and strain was above 0.99. Representative stress–strain curves for local and overall measurements. Slopes of each curve represent the respective compressive modulus values, E local , and E overall Sensitivity of the compressive moduli to pith-filled sample placement Preliminary testing revealed that compressive modulus values were sensitive to sample placement—but only when the sample was shifted more than 2 mm from the center of the platens. 
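To make the calculations of Eqs. 3-5 concrete, the following is a minimal computational sketch. Every numerical value below is a hypothetical placeholder chosen only to produce moduli of a realistic order of magnitude; none of it is data from this study.

```python
# Sketch of the overall and local modulus calculations (Eqs. 3-5).
# All input values are hypothetical placeholders, not measurements from the study.

F1, F2 = 200.0, 700.0        # reference loads, N
A_rind = 1.2e-4              # rind cross-sectional area, m^2 (hypothetical)
L0 = 0.25                    # initial specimen length, m (hypothetical)

# Overall modulus: change in platen-to-platen displacement between F1 and F2
delta_L_overall = 1.0e-4     # m (hypothetical)
E_overall = ((F2 - F1) / delta_L_overall) * (L0 / A_rind)    # Eq. 3

# Local modulus: extensometer strain changes at four circumferential positions;
# the strains are averaged first (Eq. 4) and only then divided into the stress (Eq. 5)
local_strains = [3.1e-4, 3.4e-4, 3.0e-4, 3.3e-4]             # hypothetical
eps_local = sum(local_strains) / len(local_strains)          # Eq. 4
E_local = ((F2 - F1) / A_rind) / eps_local                   # Eq. 5

print(f"E_overall = {E_overall / 1e9:.1f} GPa, E_local = {E_local / 1e9:.1f} GPa")
```

Note the order of operations for the local modulus: the four strains are averaged before the division, as in Eq. 5, which is not the same as averaging four separately computed moduli; this is why the strains are combined first.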
The authors' experience in performing these tests is that a shift of more than 1 mm from the center is easily detectable to the human eye. To assess the influence of spatial position, 10 specimens were tested at the centered position and 2 mm shift positions. Each specimen was tested a total of 8 times: 4 times at the center location, and 4 tests with a 2 mm shift. Each of the 4 "shifted" tests involved shifting the specimen in a different direction, as shown in Fig. 4. The resulting data is shown in Fig. 6, which demonstrates that sample placement within ± 2 mm of the platen center had no significant effect on compressive modulus measurements. Box plots illustrating the effect of spatial position on specimen placement. All data for a single specimen was normalized by the mean modulus value from the tests conducted at the centered position Repeatability analysis for pith-filled samples Recall that the test repeatability (i.e., test-to-test variation of a single specimen) was quantified by using the standard deviation for each of 10 samples. Both compression tests methods (using local strain or overall strain) were found to have a mean repeatability of 3.9%. Additional repeatability information is summarized in Table 1. The final column of Table 1 provides an upper bound on test-to-test variation at the 95% confidence level. Table 1 Repeatability statistics obtained from repeated tests on a set of 10 specimens Averaging local strains of pith-filled samples Self-aligning platens induced slight circumferential variation in strain. This variation was captured by taking strain measurements at 4 circumferential locations for each specimen. To examine the effect of averaging circumferential strains, compressive modulus values were calculated by using 1, 2, 3, and 4 strain values. Equation 3 describes the calculation of compressive modulus for a single strain measurement while Eq. 4 describes the calculation process for four strain measurements. Similar expressions can be obtained for two and three strain measurements. The effect of strain averaging is shown in Fig. 7. As expected, variation in the calculated compressive modulus decreased as the number of utilized strain measurement increased. This trend was evident at both the individual and group levels. The effect of averaging local strain measurements around the circumference of the test specimens from 1, 2, 3, or 4 sides. Measuring strain from all sides reduces circumferential variation caused by structural asymmetry of the test specimens. Data in this chart is from 94 specimens, with 4 strain measurements per specimen (6 specimens were damaged during testing and therefore were excluded). Sample sizes reflect the number of different combinations for averaging strain measurements (e.g., given 4 strain measurements per specimen, there are 6 possible combinations when using groups of two, 4 combinations when using groups of three, etc.) Neglecting the contribution of pith tissue Several specimens were damaged either during the pith removal process or during subsequent testing, thus reducing the sample size for this portion of the study. The contribution of the pith tissue had a statistically significant effect on the compressive modulus values, which was found to be of approximately the same magnitude as specimen repeatability. The overall mean reduction in stiffness after pith removal was found to be approximately 4%. The variation in this effect was relatively high, which was likely due to the imprecise nature of the pith removal process. 
At a 95% confidence level, the mean effect of pith tissue on stiffness was found to be less than or equal to 6.3%. We therefore concluded that although the process of neglecting the pith tissue does introduce a consistent error, the magnitude of this error is not substantial. Table 2 provides a summary of the statistics related to pith removal. It is worth noting that the pith prevents failure due to buckling and may therefore significantly contribute to overall stalk strength but not stiffness [39]. Table 2 Statistical effects of pith removal Local versus overall compressive moduli of pith-filled samples We now examine differences between local and overall compressive moduli (E overall and E local ). These values were calculated for all specimens in this study. Recall that E overall is the stiffness of the entire specimen; whereas E local is the stiffness obtained near the center of each specimen (see Fig. 2). Figure 8 provides distribution plots for all specimens tested in this study. The mean and standard deviation values for E overall and E local were (10.1 ± 1.5 and 12.8 ± 1.5 GPa, respectively). Figure 8 also provides distributions which were shifted downward by 4% to account for the effect of neglecting the pith tissue. Finally, Fig. 8 also provides comparisons to published data on the distribution of the compressive modulus values for dried wood from angiosperms and gymnosperms. Overall versus local compressive modulus distributions for maize and the two major types of wood. The narrower, gray boxes indicate modulus that have been decreased by the average pith effect of 4%. Wood data from [40, 41] This study involved the compressional testing of dry, non-diseased maize stalk segments consisting of two nodes and the intervening internode (see Fig. 1). Specimens were cut just below the node lines because tough nodal tissues and thicker rind in this region effectively distribute stresses, thus preventing premature tissue failures that can occur during compression testing when test specimens involve only internode tissues. Certain challenges were encountered in this study. One of these was the difficulty of cutting two parallel end faces on maize stalk specimens, both of which should (according to compression testing standards) be perpendicular to the stalk axis. This challenge was addressed by using two self-aligning compression platens. However, this solution then generated a new challenge: the lack of structural symmetry induced circumferential variation in strain, thus necessitating the measurement of strain at multiple locations. These strains were averaged to obtain the compressive modulus of internodal tissues. Accuracy, reliability, and test duration Two different compressive modulus values were obtained for each specimen in this study: E overall and E local . The overall compressive modulus value is based on deformation that occurs throughout the entire specimen, including at the end faces, meristematic tissue, and internodal tissues. As such, the overall compressive modulus should be considered as an aggregate stiffness value, with tissue stiffness within the specimen varying above and below this value. The local modulus approach measures tissue strain in a region where tissue is regular and uniform and thus is likely more accurate. Deformation of the testing apparatus was negligible as compared to deformation in test specimens. The repeatability values of both tests were comparable. 
The local compressive modulus values were higher than overall modulus values for every specimen in this study. Although spatial variation in stiffness was not the focus of this study, we believe that this is due to a lower tissue stiffness near each node and in the meristematic region [42]. More detailed studies will be necessary to confirm this. The calculation of the compressive modulus values was based on an assumption that the pith tissues have a negligible effect on stalk stiffness. The removal of pith tissue was found to decrease modulus values by an average of 4%. Thus, the values obtained for rind stiffness in this study are (on average) 4% higher than their true values. As shown in Fig. 7, the reliability of local compressive modulus values improved as the number of circumferential sample points increased. However, unless multiple circumferential samples can be acquired simultaneously, each circumferential sample point increases the testing duration. Excluding sample preparation (which was similar for both test types), local modulus testing required approximately 10 min per specimen. As shown in Fig. 7, the mean value is relatively insensitive to the number of circumferential strain values. However, a decrease in circumferential measurements also decreases test-to-test repeatability, thus artificially increasing the observed variation in the compressive modulus. The use of fewer circumferential measurements may be suitable in certain situations where the mean value is the primary objective. If relative differences between plants is of primary concern, absolute accuracy may not be a primary concern. In such a case, the overall modulus may be a better choice. The overall modulus provides a single, average rind stiffness value for the entire specimen with reasonable reliability. Test duration for prepared samples was approximately 2 min per specimen. Recommendations for future studies One of the most important considerations when performing compression tests is the perpendicularity of end faces. This is a particular challenge when dealing with plant stems, which typically do not have straight edges that can be used as a reference. Spherical platens can be used to address this issue, and are recommended for future studies. If for some reason, spherical platens cannot be used, special attention should be paid to the preparation of end faces as well as the resulting load/deformation curves. An alternative approach is to embed each end of the sample in polymethyl methacrylate (PMMA) or some other kind of resin, a technique used in the testing of bone specimens [43, 44]. In the current study, rind thicknesses were obtained from X-ray computed tomography 2D images, but this approach requires special equipment and software. A more accessible technique is to obtain areas of rind and pith areas based on cross-sectional images obtained with a flatbed scanner [45]. Two methods were developed for measuring the compressive elastic modulus values of the rind of pith-filled plant stems such as maize. The two elastic modulus values were calculated using two different strain measurements. These test methodologies did not require that end faces were strictly parallel, and both methods produced consistent results (mean repeatability of 4%). Both methods utilized the natural shape of the plant stem to avoid stress concentrations and buckling failure which are common challenges when performing compression tests, especially with thin-walled specimens. 
Both elastic moduli measurements presented in this study neglected the contribution of the pith tissue. This assumption had a mean effect of overestimating the rind stiffness by 4%, which was deemed to be acceptable for these purposes. Each of these methods possesses unique advantages and disadvantages. The overall compressive modulus technique provides a single, average value for all rind tissue in the specimen, but can be obtained relatively quickly. In contrast, the measurement of local modulus required multiple strain measurements, thus requiring additional tests, but provided results which are likely more accurate. The modulus of elasticity values reported in this study are relevant from the stalk-level down to scales of a few centimeters. At scales smaller than this, the cellular architecture of the stalk tissue should be considered. Finally, although these measurements were developed and tested for dry maize specimens, the methods and principles introduced in this study are likely applicable for other types of plant stems, such as sorghum, reed, bamboo, etc. Von Forell G, Robertson D, Lee SY, Cook DD. Preventing lodging in bioenergy crops: a biomechanical analysis of maize stalks suggests a new approach. J Exp Bot. 2015;66:4367–71. Beer FP, Russell Johnston E, DeWolf JT, Mazurek DF. Mechanics of materials. 6th ed. New York: Mc Graw Hill; 2012. Boresi AP, Schmidt RJ. Advanced mechanics of materials. 6th ed. New York: Wiley; 2003. Gurtin ME. The linear theory of elasticity. In: Truesdell C, editor. Linear theories of elasticity and thermoelasticity. Berlin: Springer; 1973. p. 1–295. Buchanan AH. Bending strength of lumber. J Struct Eng ASCE. 1990;116:1213–29. Kin K, Shim K: Comparison between tensile and compressive Young's modulus of structural size lumber. In: World conference on timber engineering. Riva del Garda, Italy, 20–24 June 2010. Kretschmann DE. The influence of juvenile wood content on shear parallel, compression, and tension perpendicular to grain strength and mode I fracture toughness of loblolly pine at various ring orientation. For Prod J. 2008;58:89–96. Lindstrom H, Harris P, Nakada R. Methods for measuring stiffness of young trees. Holz Als Roh-Und Werkstoff. 2002;60:165–74. Ince A, Ugurluay S, Guzel E, Ozcan MT. Bending and shearing characteristics of sunflower stalk residue. Biosyst Eng. 2005;92:175–81. Bashford LL, Maranville JW, Weeks SA, Campbell R. Mechanical-properties affecting lodging of sorghum. Trans ASAE. 1976;19:962–6. Esehaghbeygi A, Hoseinzadeh B, Khazaei M, Masoumi A. Bending and shearing properties of wheat stem of alvand variety. World Appl Sci J. 2009;6:1028–32. Robertson DJ, Smith SL, Cook DD. On measuring the bending strength of septate grass stems. Am J Bot. 2015;102:5–11. Robertson DJ, Julias M, Gardunia BW, Barten T, Cook DD. Corn stalk lodging: a forensic engineering approach provides insights into failure patterns and mechanisms. Crop Sci. 2015;55:2833–41. Robertson D, Smith S, Gardunia B, Cook D. An improved method for accurate phenotyping of corn stalk strength. Crop Sci. 2014;54:2038–44. Zhang LX, Yang ZP, Zhang Q, Guo HL. Tensile properties of maize stalk rind. BioResources. 2016;11:6151–61. Yu M, Igathinathane C, Hendrickson J, Sanderson M, Liebig M. Mechanical shear and tensile properties of selected biomass stems. Trans ASABE. 2014;57:1231–42. Yu M, Womac AR, Igathinathane C, Ayers PD, Buschermohle MJ. Switchgrass ultimate stresses at typical biomass conditions available for processing. Biomass Bioenergy. 2006;30:214–9. 
Varanasi P, Katsnelson J, Larson DM, Sharma R, Sharma MK, Vega-Sánchez ME, Zemla M, Loque D, Ronald PC, Simmons BA, et al. Mechanical stress analysis as a method to understand the impact of genetically engineered rice and Arabidopsis plants. Ind Biotechnol. 2012;8:238–44. Young SA, Clancy P. Compression mechanical properties of wood at temperatures simulating fire conditions. Fire Mater. 2001;25:83–93. Wright CT, Pryfogle PA, Stevens NA, Steffler ED, Hess JR, Ulrich TH. Biomechanics of wheat/barley straw and corn stover. Appl Biochem Biotechnol. 2005;121:5–19. Robertson DJ, Julias M, Lee SY, Cook DD. Maize stalk lodging: morphological determinants of stalk strength. Crop Sci. 2017;57:926–34. Stubbs CJ, Baban NS, Robertson DJ, Al-Zube LA, Cook DD. Bending stress in plant stems: models and assumptions. In: Geitmann A, Gril J, editors. Plant biomechanics—from structure to function at multiple scales. Berlin: Springer; 2018. Robertson DJ, Lee SY, Julias M, Cook DD. Maize stalk lodging: flexural stiffness predicts strength. Crop Sci. 2016;56:1711–8. ASTM-E9: Standard test methods of compression testing of metallic materials at room temperature. ASTM International, West Conshohocken, PA. http://www.astm.org (2009). Accessed 1 Aug 2017. ASTM-D695: Standard test method for compressive properties of rigid. ASTM International, West Conshohocken, PA. http://www.astm.org (2015). Accessed 1 Aug 2017. ASTM-F2150: Standard guide for characterization and testing of biomaterial scaffolds used in tissue-engineered medical products. ASTM International, West Conshohocken, PA. http://www.astm.org (2013). Accessed 1 Aug 2017. Sachs RM. Stem elongation. Annu Rev Plant Physiol. 1965;16:73–96. Cheng SK, Clarke EC, Bilston LE. The effects of preconditioning strain on measured tissue properties. J Biomech. 2009;42:1360–2. Bowman SM, Keaveny TM, Gibson LJ, Hayes WC, Mcmahon TA. Compressive creep-behavior of bovine trabecular bone. J Biomech. 1994;27:301–10. Caler WE, Carter DR. Bone creep-fatigue damage accumulation. J Biomech. 1989;22:625–35. Keaveny TM, Guo XE, Wachtel EF, Mcmahon TA, Hayes WC. Trabecular bone exhibits fully linear elastic behavior and yields at low strains. J Biomech. 1994;27:1127–36. ASTM-D143: Standard test methods for small clear specimens of timber. ASTM International, West Conshohocken, PA. http://www.astm.org (2014). Accessed 1 Aug 2017. Maranville J, Clegg M: Morphological and physiological factors associated with stalk strength. In: Rosenberg G, editor. Sorghum root and stalk rots: a critical review. Proceedings of the consultative group discussion on research needs and strategies for control of sorghum root and stalk rot diseases, Bellagio, Italy: ICRISAT, Patancheru, India; 1984. p. 111–8. Westfall PH, Henning KS. Understanding advanced statistical methods. Boca Raton, FL: Taylor and Francis Group; 2013. Robertson D, Cook D. Unrealistic statistics: how average constitutive coefficients can produce non-physical results. J Mech Behav Biomed Mater. 2014;40:234–9. Robertson DJ, Cook DD. Hyperelasticity and the failure of averages. In: Kruis J, Tsompanakis Y, Topping BHV, editors. Proceedings of the fifteenth international conference on civil, structural and environmental engineering computing. Stirlingshire: Civil-Comp Press; 2015. Cook DD, Robertson DJ. The generic modeling fallacy: average biomechanical models often produce non-average results! J Biomech. 2016;49:3609–15. NIST-TN1297: Guidelines for evaluating and expressing the uncertainty of NIST measurement results. 
National Institute of Standards and Technology. http://www.nist.gov (1994). Accessed 1 Aug 2017. Zuber MS, Colbert TR, Darrah LL. Effect of recurrent selection for crushing strength on several stalk components in maize. Crop Sci. 1980;20:711–7. Green DW, Winandy JE, Kretschmann DE. Mechanical properties of wood. In: Wood handbook: wood as an engineering material. Gen Tech Rep FPL-GTR-113. Madison, WI: USDA, Forest Services, Forest Products Laboratory; 1999. p 45. Kretschmann DE: Mechanical properties of wood. In: Wood handbook: wood as an engineering material. Gen Tech Rep FPL-GTR-113. Madison: WI: USDA, Forest Service, Forest Products Laboratory; 1999. p 41–4. Niklas KJ. Responses of hollow, septate stems to vibrations: biomechanical evidence that nodes can act mechanically as spring-like joints. Ann Bot. 1997;80:437–48. Keller TS, Liebschner MA. Tensile and compression testing of bone. In: Yuehuei HA, Robert AD, editors. Mechanical testing of bone and the bone-implant interface. Boca Raton: CRC Press; 1999. p. 181. Untaroiu CD. A numerical investigation of mid-femoral injury tolerance in axial compression and bending loading. Int J Crashworthiness. 2010;15:83–92. Heckwolf S, Heckwolf M, Kaeppler SM, de Leon N, Spalding EP. Image analysis of anatomical traits in stalk transections of maize and other grasses. Plant Methods. 2015;11:26. LA, DR, and DC designed the research and wrote the manuscript. LA, DR, and DC developed the experimental procedure of the approach. LA, JE, and WS performed the experimental procedure. All authors read and approved the final manuscript. We thank Monsanto Company, St. Louis, MO, USA for providing the maize stalk samples used in this study. This work was funded in part by the National Science Foundation (Award # 1400973), and the U.S. Department of Agriculture (Award # 2016-67012-24685). The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request. No consent was required in this study. No human subjects or animals were used in this study. This study was supported by the National Science Foundation, Arlington, VA, USA (Grant # 1400973) and the U.S. Department of Agriculture, Washington, DC, USA (Grant # 2016-67012-24685). The funding agencies did not have any role in designing the study or in collecting, analyzing, interpreting the data or in writing the manuscript. Guidelines and legislation The authors confirm following local UAE import regulations to get the stalk samples imported from the USA. No permissions and/or licenses for the study are required. Division of Engineering, New York University-Abu Dhabi, P.O. Box 129188, Abu Dhabi, United Arab Emirates Loay A. Al-Zube, Jean N. Edwards, Wenhuan Sun & Douglas D. Cook Faculty of Engineering, The Hashemite University, P.O. Box 330127, Zarqa, Hashemite Kingdom of Jordan Loay A. Al-Zube Department of Mechanical Engineering, University of Idaho, 875 Perimeter Drive, MS 0902, Moscow, Idaho, 83844-0902, USA Daniel J. Robertson Jean N. Edwards Wenhuan Sun Douglas D. Cook Correspondence to Douglas D. Cook. Al-Zube, L.A., Robertson, D.J., Edwards, J.N. et al. Measuring the compressive modulus of elasticity of pith-filled plant stems. Plant Methods 13, 99 (2017). https://doi.org/10.1186/s13007-017-0250-y DOI: https://doi.org/10.1186/s13007-017-0250-y Corn stalk tissue Compression stiffness Strain measurement
The Journal of Chinese Sociology Multi-channelled forceful intervention, frames and protest success—a fuzzy-set qualitative comparative study of 40 anti-demolition protests in China Ronggui Huang1, Wen Zheng2 and Yong Gui1 The Journal of Chinese Sociology 2016, 3:23 Having incorporated the characteristics of Chinese politics, this article puts forward an exploratory analytic framework for understanding protest success and points out how political opportunities and protest frames can explain protest success. Political opportunities not only include direct intervention by the central government but also support from state-sponsored media and favourable policies and laws. This article uses the method of fuzzy-set qualitative comparative analysis (fsQCA) to compare 40 socially influential cases of anti-demolition protests in China. The results show that the co-presence of central government intervention and supportive reports from central state-sponsored media—which this study calls "multi-channelled forceful intervention"—is a sufficient condition for protest success. Further, "multi-channelled forceful intervention" depends on a favourable institutional environment and protestors' strategic use of multiple frames. This article not only enriches the studies on protest results but also expands on the theory of political opportunity structures and the study of protest frames. Political Opportunity Multi-channelled forceful intervention Protest Success In recent years, research on protest has garnered increasing academic attention. Relevant studies include environmental and "Not-In-My-Back-Yard" protests (Sun and Zhao 2007), peasants' protests (O'Brien and Li 2006; Ying 2007), workers' protests (Cheng 2012; Tong 2006), homeowners' protests (Chen 2010; Zhang 2005), and anti-demolition protests (Lü 2012). These studies have drawn on the insights of social movement theories to study the organization, mobilization, and strategy of various protests from the perspectives of resource mobilization, political opportunities, and framing strategies. Although these studies have enhanced our understanding of protest organization and mobilization, they have neglected the study of protest outcomes. Having reviewed studies on the outcomes and consequences of Chinese mass incidents, Tangbiao and Kong (2011) point out that Chinese scholarship on this topic is inadequate because it lacks clear research direction, analytic framework, and systematic empirical investigation. Some researchers have suggested that studies of popular protests should shift their focus from "using what weapon" to "why a weapon is effective" (Huang 2011). In this light, the question of the present study is: in China, what are the factors that shape the success of protests?1 Here success refers to achieving protestors' intended goals. By comparing cases of protests against residential demolition to acquire land, this article aims to provide a fairly systematic discussion about what factors can shape the success of socially influential protests in China. As of now, anti-demolition protests have become an archetypal phenomenon of China's socio-economic transformation.
Based on the statistics of civil rights activism-related posts from the Tianya online community in the "Annual Report on Social Mentality of China (2012–2013)", forced demolition incidents have occupied 20.1 % of China's online civil rights activism discourse. Among the letters the Chinese Ministry of Construction had received between January and August of 2002, 28 % were related to housing demolition; among batches of petitions, 70 % were about demolition problems; and among group petitions, 83.7 % were related to demolition (Zhao 2003). In China, residential demolition to acquire land is regarded as one of the "troika" mass incidents.2 Given this background, investigating anti-demolition protests can deepen our understanding of the forces that drive successful protests in China. We have collected, via media reports and the internet, 40 cases of anti-demolition protests (2003–2012). Our comparative analysis of these cases attempts to go beyond the characteristics of a particular protest and to reveal the conditions for the success of socially influential protests. Though comparative case studies do not possess the kind of generalizability manifested in statistical analyses, when compared to existing single case studies and small-N analyses (e.g. Zhang 2005; Yu 2012; Cai 2010), this study can deepen our understanding of the factors that shape protest success (Cress and Snow 2000). Specifically, this study attempts to make three contributions to existing studies of protests: (1) it tests, albeit in a preliminary way, how well the existing literature on social movements can explain protest success in China; (2) by incorporating characteristics of China's political system and the theoretical insights provided by the existing studies of protests in China, it sums up the conditions for successful protests in an authoritarian state, while also pointing out the importance of political opportunities and protest frames; (3) it proposes and examines the influence of supportive reports from central state-sponsored media and favourable institutions on protest success so as to enhance our understanding of political opportunity structures in China. Factors shaping protest success in Western and Chinese societies Western literature has explained the outcomes of social movements from the perspectives of organizational features, protest tactics, frames, public opinion, and political opportunities (Amenta & Caren 2004; Giugni 1998). Specifically, the theory of resource mobilization emphasizes the importance of the organizational aspects of social movements. Gamson (1990, in Giugni 1998) points out that social movement organizations possessing the following features are likely to achieve success: single issue demands, the use of selective incentives, the use of disruptive strategies, and being bureaucratized, centralized, and unfactionalized. Research on the American Civil Rights Movement shows that organizational density and tactical diversity influence policy outcomes, as measured by the annual federal budget for the Commission on Civil Rights (Olzak and Ryo 2007). Piven and Cloward (1979) state that disruptive tactics help increase the chance that the poor can launch a successful protest and that the so-called cadre organizations, rather than social movement organizations, play a major role in protests launched by the poor (Cloward and Piven 1984). Having reanalyzed Gamson's data, Goldstone (1980) argues that political crisis is the key to the success of social movements. 
Follow-up studies also show that political opportunity structures are key to explaining social movement outcomes (Kitschelt 1986; McCammon et al. 2007; Rootes 2006). Yet, not all studies support the theory of political opportunity structures (Olzak and Ryo 2007). In addition, Burstein (1999), from the perspective of representative democracy, argues that public opinion is the foundation for understanding the working mechanism of political opportunities. Although one study has confirmed the importance of public opinion (Burstein and Linton 2002), scholarship in general has not come to a definite conclusion (Amenta et al. 2005; McCammon et al. 2007). Framing strategies also influence protest outcomes. Cress and Snow's (2000) study on social movement organizations in the USA that focus on homelessness reveals the importance of frames: among the six causal paths to achieved outcomes, three paths simultaneously contain diagnostic and prognostic frames, while two paths contain prognostic frames. Similarly, frames are a crucial factor influencing the outcome of women's suffrage movement (McCammon 2001). Recent studies have developed the concept of discursive opportunity structures to explain the effects of frames: as for the US women jury movement, frames aligned with dominant legal discourses are more effective than others (McCammon et al. 2007). Researchers have gradually come to understand the complex relationship between a protest and its outcomes (Cress and Snow 2000; Giugni 2007). The political mediation model contends that political environment is a mediating variable between protests and policy outcomes. If long-term structural political conditions are favourable, social movements per se can influence public policy; if short-term political opportunity structures are relatively favourable, low levels of movement mobilization can influence public policy; if short-term political opportunity structures are unfavourable, social movements must adopt assertive actions to influence public policy (Amenta et al. 2005). The question of what factors shape protest outcomes has slowly gained attention from Chinese researchers. Studies on property owner protests show that key factors in rights-defending activism include the following: leadership from prominent rights activists, establishment of homeowners' committees, effective mobilization, well-chosen strategies, homeowners' rich social network resources, local government support, highlighting the legality of rights and interests, and the relatively weak power of real estate developers (Zhang 2005). Analysis of environmental protests shows that neither state nor society is monolithic; instead, the results of protest actions are determined by the configuration of contender alliances among different government departments, different levels of governments, mass media, and civil societies and their interactions with opposing alliances (Sun and Zhao 2007). Yu (2012) points out that the relationship between protesters and authorities is key to the success of protests; meanwhile, she stresses the importance of media reports and political opportunities. Cai (2010) analyses the success and failure of protests from a cost-benefit perspective, pointing out that the cost of governments meeting protesters' demands, protesters' issue-linking strategies, social ties between protesters and high level authorities, the forcefulness of protests, and the absence of violence are crucial factors shaping protest outcomes. 
Overall, existing studies show that, due to the absence of movement organizations, activists and social ties play a vital role in China's popular protests, while appropriate strategies and political opportunities help protestors achieve success. Analytic perspective Although existing studies have enhanced our understanding of protest success, they have some limitations. To begin, political opportunities are derived not only from structural changes but also from the signals that political systems emit (Meyer and Minkoff 2004). Studies of protests in China mostly emphasize the importance of higher-level government interventions (Cai 2010). This suggests that it is necessary to broaden the conception of political opportunity structures by contextually analysing and incorporating the characteristics of China's political system. To understand protest success, one must consider the role of the state: no citizen is immune from the influence of the state given its penetration into every corner of the society; its monopoly over most resources, and that their redistribution deeply influences the structures of interests of all social classes; and that political power ultimately determines the status and structural position of each and every social group, and, through legislation, may co-opt or reject certain groups (Xie 2010, p. 4). In this way, not only are there large differences between different actors in terms of resources and influences but also the most powerful actors often reside within the political system. This is particularly evident in the politics of residential demolition and land acquisition. In 1997, the Chinese State Council announced "Notice on the Further Deepening Reform of Housing System and the Acceleration of Housing Construction", which clearly positioned real estate as one of the nation's pillar industries; from this point onwards, land development has become a major driving force in provincial economic development (Li and Fan 2013). Since the tax-sharing reform of 1994, land development and transfer have not only become a means for the local governments to consolidate their own power but have also become major sources of local finance (Hsing 2010; Zhou 2007). This provides the local governments with a strong motive to participate in activities of residential demolition and land acquisition. As land becomes increasingly valuable, the desire for residents affected by demolition to protect their own interests increases, which intensifies conflicts between demolition contractors and those affected by demolition. In a given demolition dispute, opponents of anti-demolition protesters often have the advantage in terms of resources, organization, and policy; without external support, protestors rarely achieve their desired outcomes. Furthermore, because anti-demolition protests are closely related to people' basic livelihoods, any mishandling of these cases may incur vast social influences and consequences; thus, the state tends to offer a balance in the game of clashing social forces to maintain social stability. The state's leading role means that the central government can become a balancing power in interest disputes among various social groups, and intervention from the central government is crucial to protest success. 
Within a multi-layered and flexible political structure (Cai 2008), the central government conditionally grants the local governments the autonomy to respond to protests so that the majority of protests are contained at the local level; at the same time, the central government retains the power to restrain the local governments so that the central government selectively provides expressive outlets for social demands while maintaining state legitimacy and social stability. This article argues that the central government, the embodiment of the state, can shape protest results in at least three ways. First, the central government can directly intervene in a protest, and this determines the success or failure of that protest. Existing literature has provided ample discussion of this scenario (Cai 2010), and thus, we will not discuss it in detail. Second, central state-sponsored media's supportive reports on protests often have an influence on the success of protests, which we see as a political opportunity that differs from direct intervention by the central government. The central government's direct intervention is mostly bureaucratic and organizational. However, central state-sponsored media can be regarded as both "public institutions" and "market entities" (Li 2003), and their attitudes in news reports are to a certain degree independent of the central government. Based on our observations, the cases reported by central state-sponsored media were not always the ones that the central government chose to intervene in; on the other hand, just because the central government chooses to intervene in some cases does not mean that state-sponsored media will choose to report on them. This article argues that state-sponsored media's, and especially central-level media's, supportive coverage of protests can reflect state authorities' attitudes towards these protests: it is a public signal from state authorities. Although such signals are not a direct indication of a willingness to intervene, they help protestors discover political opportunities, thereby strengthening protestors' confidence and improving their ability to mobilize further support from the public. This dynamic may affect the local governments' responding strategies and the central government's intervention paths, eventually helping protests achieve success. Notably, supportive reports from central-level state-sponsored media are more likely to reflect the relationship between actors within the political system and protesters than the relationship between the public and protesters. Therefore, supportive reports should be understood as political opportunities rather than social influences. Third, intervention by the central government is affected by changes in laws and regulations. A review of the changes in the Chinese demolition system shows that the "Urban Housing Units Demolition Management Regulations" of 2001 stipulate that "when demolishing houses meets with residents' protest, demolition must be done forcefully"; this stipulation essentially strengthened the demolition policies of the local governments and land developers. Though various demolition policies have been constantly readjusted over the previous decade, the institutional environment for the aforementioned "double standard" continues to exist, and different actors (e.g. protesters and local governments) cite different legal rules to defend themselves.
The constitutional amendment of 2004 and the Property Law of 2007 both make clear that Chinese citizens have rights to their private properties, and this marks an obvious improvement to the institutional environment for anti-demolition protests. Although property law is a higher-ranking law than is "Urban Housing Units Demolition Management Regulations", the state had not systematically unified the laws and regulations pertinent to demolition until 2011. After the promulgation of "Regulations on the Expropriation and Compensation of Houses on State-owned Land" of 2011, the institutional environment regarding demolition significantly improved. Given these changes in policies, regulations, and laws, the relationship between the institutional environment and popular protests deserves scholarly analysis. This article argues that not only can the institutional environment shape the manner of protest, it also shapes protest outcomes. Given that the legitimacy of protest is often a challenging issue (Ying 2007), favourable policies and laws strengthen a claim's legitimacy and thus decrease the possibility that the local governments will take suppressive measures. If earlier protests lead to readjustments in laws and regulations (Cai 2010), such readjustments would signal that the central government wants the local governments to handle social conflicts appropriately; in this case, changes in laws and regulations represent the central government's attitudes towards protesters and can be seen as "signals" of political opportunities. In addition, protest-supported demolition litigation brings pressure to the courts, which in turn pushes the courts to develop coping strategies to constrain the housing demolition authorities (He 2014). Because protesters in socially influential protests will try their utmost to utilize all possible opportunities and resources, significant changes in the institutional environment can provide new forms of resources, indicating the expansion of political opportunities. Because the law can be understood and interpreted in multiple ways, its role in social contestation has been a controversial issue (McCann 2004). In the field of housing demolition, local governments tend to cite "Urban Housing Units Demolition Management Regulations" to support forced demolition, whereas anti-demolition protestors cite new laws and regulations that prohibit forced demolition. Readjustments in laws and regulations not only change the legal resources available to both local governments and anti-demolition protestors but also prompt different parties to have divergent interpretations about the applicability of laws and regulations. Within an "unstable" institutional environment, intervention from the central government once again becomes a key factor influencing protest results. In addition to the aforementioned flexible political structure, the central government's intervention in anti-demolition protests depends on the frames employed by protestors. The justification for financial compensation has been the core issue in many demolition and relocation disputes; given that every demolition operation involves different levels of compensation, ranging from a few hundreds of thousands yuan to a few millions yuan to a few tens of millions yuan, all of which are large numbers. In a sense, during the process of demolition and relocation there has been a conflict of interest between local governments and civilians—which can be regarded as important to interest redistribution. 
When handled inappropriately, demolition can cause massive societal impacts, even affecting social stability and state legitimacy. Since many anti-demolition protests focus on economic demands rather than ideological appeals, interventions by the central government bear little political risk. The fact that anti-demolition protests exert enormous impacts on Chinese society in turn gives the central government incentives to intervene. Under such circumstances, when anti-demolition protestors use multiple frames to demonstrate the legitimacy of their actions and claims, and the employed frames are congruent or compatible with the central government's ideology, protestors are more likely to gain attention and successfully convince the central government to intervene. Although protest frames have strong explanatory power regarding the success of social movements in the West (Cress and Snow 2000; McCammon et al. 2007; McCammon 2001), Chinese academia has not yet systematically examined whether frames can influence protest success; the few studies that do focus on protest frames have mainly concentrated on the relationship between frames and mobilization (Cheng 2012; Tong 2006; Xia 2014). Nevertheless, the existing literature provides valuable insights about the relationship between frames and protest success. Because protesters usually have explicit interest-based claims, the primary task of framing is to justify and legitimize these claims. For instance, worker protesters tend to use the discourse of socialist cultural traditions (Tong 2006), political appeals, and state policies and regulations (Cheng 2012) to defend their demands. Moreover, the visibility, legitimacy, and public resonance of protests to a large degree depend on the process of framing and discursive opportunity structures (McCammon et al. 2007). Based on the above discussion, we argue that frames may increase the chance of intervention from the central government by generating resonance between the state and protest demands, and this is more likely to occur when frames are derived from fundamental socio-political cultures. Consequently, frame resonance may open up new political opportunities for protesters. As for anti-demolition protests, before a dispute enters the public view, protestors often use "weapons of the weak" to protest; after a dispute enters the public view and has gained attention from the media, however, the core position shifts to highlighting and criticizing demolition policies (Lü 2012). Because China's urban land development is facilitated by relatively comprehensive regulatory changes (Weinstein and Ren 2009), "rule violation" has become a forceful and resonant frame. Furthermore, there has been a fracture between constitution- and tradition-based property systems; specifically, the re-demarcation of properties built before the establishment of the modern property system was, in the memories and historical records of property holders, "exploitative" in nature, causing both cognitive and interest conflicts. Similarly, before the housing reform, housing property rights experienced major changes, resulting in some ownership claims being disputed (Zhou and Logan 1996). The resentment caused by the aforementioned historical legacies has prompted protestors to use a historical perspective to legitimize their demands through the discourses of collectivism and socialism (Hsing 2010; Shin 2013). Because China's rural lands are owned collectively, peasants have come to think of the state as "parents".
This perception, together with the weighing of interests, livelihoods and village customs, as well as the "reason things out" approach common to village societies, determine peasants' choices as they protest against land acquisition (Zhu 2011). Based on the literature and our own observations of anti-demolition protests, we suggest that "rule violation", "the weak identity", "socialism", and "collectivism" are common frames deployed by anti-demolition protestors. Based on the above discussion, this article contends that protest frames can influence protest success by affecting the probability of "state" intervention, whereas the political influence of protest frames to a large extent depends on the legal-political institution. Since existing research has rarely explained Chinese anti-demolition protests from the perspective of frames, this article does not directly put forward specific propositions in regard to the effects of frames; instead, we aim to reveal associational patterns between frames and protest success through cross-case comparison. Data source and methods of data collection The study consists of 40 anti-demolition protests that happened between 2003 and 2012. Anti-demolition protests were chosen as a study subject for the following reasons: anti-demolition contestation includes not only protests by village residents but also by urban residents; protesters come from multiple social classes, including but not limited to peasants, workers, marginal urbanites, and new urban middle class; and this diversity has led to a range of protest tactics. We chose 2003 as the starting point for three reasons: firstly, 2003 is regarded as the year when new forms of protest began to emerge in the 21st century (Zhao 2012, p. 4); secondly, a landmark demolition event in China happened in 2003 (Zhu 2009); thirdly, in these 10 years, there had been a series of readjustments in laws and regulations in relation to demolition, and these changes provide an opportunity to investigate the relationship between institutional environment and protest success. We identified the cases through media and internet reports, a method that has been widely used in studies of social movements in Western societies (Earl et al. 2004) and studies of protests in China (Cai 2010). Although case selection through media reports might lead to bias, it does not mean that we should completely abandon this method. Instead, we must contextually assess whether this method is better than others (Earl et al. 2004, p. 69). Considering that existing studies are primarily single case studies, cross-case comparison helps more systematically assess the explanatory conditions for protest success. In addition, the selection bias associated with this method has been empirically investigated in a previous study (McCarthy et al. 2008), which helps clarify potential bias and the generalizability of present findings. In fact, we do not attempt to reveal the conditions for all successful protests but merely aim to explain the success of socially influential protests. Moreover, consistent with the approach of qualitative comparison analysis (Rihoux and Lobe 2009), our case selection method helps improve the comparability of cases. Last but not least, as Chinese media has undergone marketization (Li and Liu 2009), the space for reporting controversial events has expanded (Stockmann 2010). 
Our interviews also show that Chinese media can report controversies through two channels: first, there is ample space for news reports before the authorities explicitly prohibit reporting protests; second, even if bans do exist in a province, the news agency can publish reports through affiliated agencies or partners in other provinces. Based on the above discussion, we argue that our data collection method is reasonable. Our data collection procedure is as follows: (1) we used the keyword "demolition" to conduct a full-text search on "Chinese Core Newspapers Full-Text Data Base" from China National Knowledge Infrastructure (www.cnki.net) and retrieved 13,024 news reports; (2) we read through all the reports and filtered out cases which had been reported by at least two media outlets to be included in our dataset; (3) considering that a few protests were primarily exposed through the internet and had significant impacts on Chinese society but failed to receive mainstream media coverage, we synthesized information about these cases via mainstream web portals such as Sina.com.cn to supplement our database. All selected cases have the following features: the selected protests were caused by land acquisition for a public project or commercial development and the core demand was demolition compensation; the targets of the protests were governments or developers; there were antagonistic relations between anti-demolition protestors and their opponents because solving the disputes would alter both sides' interests; and every selected protest involved more than two people. The coding of our cases was based on reading through relevant media reports and reviewing the second-hand literature and documents such as documented interview records, court pleadings (and rulings), banner slogans, pictures, open letters, texts from blogs/microblogs, and academic articles. Depending on information availability, the number of available documents for most cases ranged from ten to dozens, and the number for a few cases even reached hundreds. Our data collection process lasted more than 6 months, during which time we triangulated the information. Therefore, our data is both credible and valid. The analytic technique: fuzzy-set qualitative comparison Qualitative comparison analysis (QCA) is suitable for systematically comparing small to medium numbers of cases. This method uses a set-theoretic approach to establish the necessary and sufficient relationship between explanatory conditions and outcome variables. In the analysis of sufficient conditions, QCA can discover multiple conjectural causes of a particular result, which means that the occurrence of the result can be explained by different causes, while each cause is comprised of multiple explanatory conditions. In qualitative comparative analysis, capital letters indicate the presence of conditions, lowercase letters indicate the absence of conditions, operator "*" means co-presence, and operator "+" links two alternative causal paths. For instance, "A * b + B * c = Y" means that two paths lead to the presence of Y; the first path A * b means the presence of A and the absence of b, whereas the second path B * c means the presence of B and the absence of c. In order to overcome the limitations of crisp-set qualitative comparison analysis, which requires that all variables be dichotomous, Ragin (2008) puts forward a fuzzy-set qualitative comparison analysis (fsQCA). 
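To make the set-theoretic notation above concrete, the following is a minimal Python sketch (not the authors' analysis software; fsQCA is typically run in dedicated tools such as the fs/QCA program or R's QCA package) of how an expression like "A * b + B * c = Y" is evaluated once crisp sets are replaced with the fuzzy memberships used in fsQCA: intersection becomes the minimum, union the maximum, and negation one minus the score.

```python
# Fuzzy-set versions of the QCA operators: "*" (co-presence) -> min,
# "+" (alternative paths) -> max, lowercase (absence) -> 1 - score.
# The membership scores below are hypothetical and purely illustrative.

def f_and(*scores):   # co-presence, "*"
    return min(scores)

def f_or(*scores):    # alternative causal paths, "+"
    return max(scores)

def f_not(score):     # absence (lowercase letters)
    return 1 - score

# Hypothetical memberships of one case in conditions A, B, C
A, B, C = 0.8, 0.4, 0.6

# Membership of the case in the solution "A * b + B * c"
membership = f_or(f_and(A, f_not(B)), f_and(B, f_not(C)))
print(membership)  # 0.6 with these illustrative scores
```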
This approach uses fuzzy-set scores to present the degree of membership in explanatory conditions and results. Because a fuzzy-set score can be any number between 0 and 1, it can avoid information loss in the process of data transformation and more accurately reflect the situations of the chosen cases. This approach has been utilized in studies of social movements (Amenta et al. 2005). To proceed with fsQCA analysis, researchers must designate a coding scheme with qualitative anchors to assign fuzzy-set scores to cases and then evaluate the necessary or sufficient relations between explanatory conditions and results based on a consistency index. Consistency can be used to evaluate whether a particular condition or combination of conditions can be regarded as a sufficient or necessary condition of the result. If an explanatory condition (or combination of conditions) X is a sufficient condition of result Y, then the fuzzy-set score of X is consistently lower than or equal to the fuzzy-set score of Y; the corresponding consistency is measured as follows: $$ \mathrm{Consistency}\left(X_i \le Y_i\right) = \sum \min\left(X_i, Y_i\right) \Big/ \sum X_i $$ When the index is greater than 0.8, it roughly indicates that more than 80 % of the cases are consistent and X is a sufficient condition of Y. When consistency is satisfied, researchers can move on to calculate the coverage index: $$ \mathrm{Coverage}\left(X_i \le Y_i\right) = \sum \min\left(X_i, Y_i\right) \Big/ \sum Y_i $$ This index depicts the explanatory power of X for result Y.3 The greater the coverage, the greater the empirical explanatory power of X for Y. Similarly, we can calculate $\mathrm{Consistency}\left(Y_i \le X_i\right)$ to evaluate whether X can be regarded as a necessary condition of Y. If the index is greater than 0.9, we regard X as a necessary condition. When doing exploratory analysis, one could use the above indexes to assess the necessity and sufficiency of one explanatory condition. However, when analysing multiple conjectural causes, one needs to build truth tables based on consistency, which present the connections between the combinations of explanatory conditions and the outcome, and then use a Boolean minimization algorithm to simplify the truth tables so as to reveal the causal paths leading to the result (Ragin 2008). In QCA, the number of combinations of explanatory conditions increases exponentially with the number of selected conditions, resulting in complicated causal paths that are difficult to interpret. The existing methodological literature recommends that one should clarify the causal mechanisms through which different conditions interact with each other to influence the outcome, and then choose the relevant conditions for QCA analysis (Amenta and Poulsen 1994). Given that existing studies on protests mainly focus on the effects of specific factors on protest results (the few exceptions are Amenta et al. 2005; Cress and Snow 2000), this article will first examine the explanatory conditions of each theory, and then proceed to evaluate the combinational effects of conditions of different theories. Because the comparative method itself cannot provide a guideline for selecting explanatory conditions, researchers must choose these conditions based on existing theories (Caramani 2009, pp. 52–55).
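Written out in code, the two indices follow directly from the formulas above. This is a hedged sketch with invented scores for a handful of cases, intended only to show how the computation proceeds:

```python
# Sufficiency consistency and coverage of a condition X for an outcome Y:
#   Consistency(X <= Y) = sum(min(X_i, Y_i)) / sum(X_i)
#   Coverage(X <= Y)    = sum(min(X_i, Y_i)) / sum(Y_i)
# Necessity is assessed analogously by swapping the roles of X and Y.
# The scores below are hypothetical.

def consistency(x, y):
    return sum(min(a, b) for a, b in zip(x, y)) / sum(x)

def coverage(x, y):
    return sum(min(a, b) for a, b in zip(x, y)) / sum(y)

X = [1.0, 0.6, 0.0, 0.8, 0.4]   # e.g. membership in "central government intervention"
Y = [1.0, 0.6, 0.0, 1.0, 0.0]   # membership in "protest success"

print(consistency(X, Y))  # above roughly 0.8 -> X is (close to) sufficient for Y
print(coverage(X, Y))     # share of the outcome accounted for by X
```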
According to the existing literature, we focus on explanatory conditions such as political opportunities, resource mobilization, protest tactics, and protest frames. Although a protest's levels of social influence might shape protest success, we have chosen not to include it as an explanatory condition for two reasons. First, our case selection method implies that the levels of their social influence are similar, and they can be regarded as a constant. Second, a protest's levels of social influence to a large extent depend on media reports and the involvement of opinion leaders. Yet, most of the cases selected in this study have been reported by mainstream media, and new media reports (e.g. social media, internet) and the involvement of opinion leaders have been included as explanatory conditions to assess the theory of resource mobilization. We adopted a six-value coding scheme.4 In order to reduce the subjectivity of the fuzzy-set score assignment, this study follows the credibility principle of qualitative text analysis (Kuckartz 2014). Three authors discussed the rules of score assignment in detail, based on which we coded all cases, and further discussed discrepancies so as to achieve a consensus. It should be noted that some variables only had limited variance, and the actual fuzzy-set scores might not cover all six values. In our study, the explained variable of protest success indicates the degree of achievement of protest demands, where "1" represents protest demands being fully met, "0.6" represents protest demands being met with substantial costs such as a "tragic victory", and "0" indicates failure. Descriptive analysis shows that 35 % of cases achieved success, 22.5 % were tragic victories, and 42.5 % failed. Political opportunities are measured with three variables, namely central government intervention (CGOV), supportive reports from central state-sponsored media outlets (CMEDIA), and favourable institutional framework (OBOPP5). As for central government intervention, the fuzzy-set score 1 represents that the central government intervenes in protest events by making public announcements or deploying a state council appointed task force, issuing new policies and regulations, explicitly supporting anti-demolition protesters or punishing local governments; 0.6 represents the central government's direct intervention in the events, but upholding a neutral stance; and 0 represents non-involvement. Among our cases, 32.5 % have a fuzzy-set score of 1, 5 % have a fuzzy-set score of 0.6, and 62.5 % have a fuzzy-set score of 0. The fuzzy-set score assignment of supportive reports from central state-sponsored media has not only considered the levels of social influence of media outlets and their stances but also guaranteed that their reports appeared after protests had occurred and before protests had been settled. Here, 1 represents supportive reports from central state-sponsored media such as Xinhua News Agency Head Office, People's Daily, CCTV, or Xinhua Daily Telegraph; 0.8 represents supportive reports from China Youth Daily, Procuratorate Daily, or Legal Daily; given that state-sponsored media reports can heighten the influence of protests and have a positive effect on conflict resolution, we used 0.6 to represent impartial reports from the above media outlets; and 0 represents the absence of reports from any of the above state-sponsored media outlets. 
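As an illustration of how such qualitative anchors turn into analysable data, the sketch below records a few invented cases; the case labels and scores are hypothetical and do not reproduce our dataset:

```python
# Hypothetical illustration of fuzzy-set coding. Scores follow the anchors
# described in the text, e.g. CGOV: 1 = forceful central intervention,
# 0.6 = intervention with a neutral stance, 0 = non-involvement;
# SUCCESS: 1 = demands fully met, 0.6 = "tragic victory", 0 = failure.

cases = {
    "case_01": {"CGOV": 1.0, "CMEDIA": 1.0, "SUCCESS": 1.0},
    "case_02": {"CGOV": 0.6, "CMEDIA": 0.8, "SUCCESS": 0.6},
    "case_03": {"CGOV": 0.0, "CMEDIA": 0.0, "SUCCESS": 0.0},
}

# Column vectors of this kind are what the consistency/coverage
# calculations sketched earlier operate on.
cgov    = [c["CGOV"] for c in cases.values()]
success = [c["SUCCESS"] for c in cases.values()]
```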
The variable favourable institutional framework represents the degree to which laws and regulations are conducive to protestors' claim-making. This variable is used to assess whether the central government indirectly shapes protest success by amending laws and regulations and thus measures the expansion of political opportunity (Tarrow 2011). For this variable, 0 represents that the institutional framework is disadvantageous to protestors, who face forced demolition without effective lawful weapons to self-defend (2001–2004); 0.4 represents that private properties were recognized in principle but without specific protective ordinances (that is, from the fourth amendment to the Constitution in 2004 until the introduction of Property Law in 2007); 0.6 represents that the rights and interests of anti-demolition protestors have to some extent been safeguarded because the 2007 amendment to Urban Real Estate Administration Law has put forward the need to protect the legal rights and interests of those being relocated due to residential demolition and guarantees standard residential conditions after relocation (2007–2010); 0.8 represents a relatively favourable institutional framework with the abolishment of "Urban Housing Units Management Regulations"; 1 represents a favourable institutional framework (since January 2011) with the introduction of "Regulations on the Expropriation and Compensation of Houses on State-Owned Land", which provides detailed ordinances regarding the standards for compensation and the legal responsibilities of demolition contractors. Measures of resource mobilization include mobilization networks (Yu 2012; Zhang 2005), the support of opinion leaders, and the involvement of new media (Lü 2012). Mobilization networks measure how many social ties were mobilized by protesters to advance their demands. Here, 1 represents that protestors enjoyed great support from immediate family members and protest allies, 0.4 represents that protesters gained support from few family members, and 0 means that protesters fought without additional support. As for the support of opinion leaders, 1 represents the involvement of opinion leaders and 0 means the absence of opinion leaders. As for the involvement of new media, 1 represents protests being reported by more than three national websites, causing linked interactions nationwide; 0.4 represents protests being reported by one or two national websites; and 0 represents the absence of national reports. It should be noted that this variable measures new media reports that were produced in tandem with the unfolding protests. Due to a lack of data availability, we did not measure protest size. Yet, this limitation is offset by the following elements: (1) mobilization networks can be seen as proxies for protest size and (2) the positive relationship between media reports and protest size (McCarthy et al. 2008) implies that the effect of the latter is partially controlled by the inclusion of the former. Protest tactics include disruptive tactics, violence, and performance (Amenta and Caren 2004; Giugni 1998; Cai 2010; Huang 2011). Disruptive tactic refers to actions threatening public order/safety. Here, value 1 represents the occurrence of serious injuries and casualties; 0.8 represents using illegal home-made weaponry, which would jeopardize public order; 0.6 represents actions slightly upsetting public order; and 0 means actions with no disruption. Violence measures the violent nature of protest actions. 
In this variable, value 1 indicates extreme measures such as self-immolation; 0.8 means violent behaviour without casualties; 0.6 means the threat of violent acts; and 0 means the absence of violence. As for the performative tactic, 1 indicates that protestors actively publicized protests through dramatic acts and performance; 0.6 indicates protests being publicized by third parties through dramatic narratives or performance; and 0 indicates the absence of drama or performance. Protest frames reflect protestors' discursive strategies to put forward their demands. Because frames might be developed by protestors alone or through discursive interactions between protestors, media, and the public, this study only measures whether a particular frame is used in relation to protest demands. As long as a frame is deployed, its value is 1, otherwise 0. Based on the examination of all selected cases, we have come up with four frames, namely "frame of the weak" (WEAK), "socialist frame" (SOCIALISM), "collectivist frame" (COLLECTIVISM), and "frame of rule violation" (RU_VIOLATION). WEAK emphasizes protestors' status of being weak in conflicts (Dong 2008); it highlights the image of a vulnerable group under powerful oppression. SOCIALISM derives from the relationship between CCP and the mass; protestors borrow symbols and values of socialist ideology with Chinese characteristics and bind together individual protests with the mission of socialist justice to gain legitimacy from the "holy" state as well as to discredit local governments. COLLECTIVISM derives from the way protestors understand the relationship between personal and collective interests and that between private and public interests; not only does COLLECTIVISM include using collectivist discourse to demonstrate the legitimacy of demands but it also includes protestors criticizing local governments for violating and twisting collectivist principles. RU_VIOLATION emphasizes specific laws and regulations in the realm of demolition, for instance, whether demolition planners applied to the court for forced demolition. What factors can shape the success of protests? This article first investigates the relationship between single explanatory conditions and protest success. Our results show (see Table 1) that consistencies of central government intervention, supportive reports by central state-sponsored media, and favourable institutional framework as necessary conditions are all below 0.9; therefore, they, on their own, cannot be regarded as necessary conditions of protest success. Consistencies of central government intervention and supportive reports by central state-sponsored media as sufficient conditions score 0.79 and 0.78, respectively, which are slightly lower than the 0.8 standard score and can be regarded as nearly sufficient conditions. Coverage of central government intervention and supportive reports by central state-sponsored media are 0.58 and 0.70, respectively. Comparison of these two values shows that the latter has a greater explanatory power for protest success than the former. The sufficient consistency of favourable institutional framework is 0.6, which suggests that it cannot be regarded as a sufficient condition for protest success. In sum, political opportunities have significant effects on protest success, but political opportunities alone cannot adequately explain the variation in protest success. 
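The single-condition screening summarized in Table 1 applies the thresholds introduced earlier (0.9 for necessity, 0.8 for sufficiency) to each condition in turn. A self-contained, hedged sketch of that step on hypothetical data:

```python
# Screen a single explanatory condition against the necessity (0.9)
# and sufficiency (0.8) consistency thresholds. Data are hypothetical.

def consistency(x, y):
    # Consistency(X <= Y): degree to which X is a subset of Y
    return sum(min(a, b) for a, b in zip(x, y)) / sum(x)

def screen(condition, outcome, suf_cut=0.8, nec_cut=0.9):
    suf = consistency(condition, outcome)   # X <= Y, sufficiency
    nec = consistency(outcome, condition)   # Y <= X, necessity
    return {
        "sufficiency_consistency": round(suf, 2),
        "necessity_consistency": round(nec, 2),
        "near_sufficient": suf >= suf_cut,
        "necessary": nec >= nec_cut,
    }

cgov    = [1.0, 0.6, 0.0, 0.8, 0.4]   # hypothetical fuzzy-set scores
success = [1.0, 0.6, 0.0, 1.0, 0.0]
print(screen(cgov, success))
```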
Table 1. Analysis of the necessity and sufficiency of single factors (columns: explanatory conditions, necessary consistency, necessary coverage, sufficient consistency, sufficient coverage; rows: central government intervention, state-sponsored media support, favourable institutional framework, mobilization networks, new media involvement, opinion leaders' involvement, disruption strategy, violence strategy, performance strategy, RU_VIOLATION).

As for mobilization theory, mobilization networks and support from internet opinion leaders are neither necessary nor sufficient conditions for protest success. The inter-linked reporting by new media can be regarded as a necessary condition for the success of socially influential protests. Almost all successful protests have been reported by new media. Further analysis shows that new media reports are not a necessary condition for the failure of protests, which means that new media reports are not a "trivial" necessary condition. However, new media reports are not a sufficient condition for protest success. As for protest tactics, whether disruptive, violent, or performative, none can be regarded as a sufficient condition for protest success. Similarly, any single protest frame alone cannot be regarded as a sufficient condition for protest success. Next, we assessed the explanatory power of each theory by treating the corresponding variables as a group of explanatory conditions. We constructed truth tables and proceeded to simplify the truth tables through Boolean minimization (Table 2). The analysis of the three variables of political opportunities reveals two causal paths to protest success: (1) the co-presence of all three conditions, which explains the majority of our cases, and (2) the co-presence of central government intervention and supportive reports from central state-sponsored media. These two causal paths can be further simplified to "central government intervention * supportive reports from central state-sponsored media". The sufficient consistency and coverage of this causal path are 0.87 and 0.57, respectively. The coverage indicates that among the successful protests, about 57 % of them can be explained by this path. This result shows that the theory of political opportunity structures can to a certain degree explain the success of anti-demolition protests.

Table 2. Opportunities, resources, strategies, frames, and protest success: results of fsQCA (columns: causal path, raw coverage, unique coverage; causal paths listed: CGOV * CMEDIA; protest tactics; F1: weak * SOCIALISM * COLLECTIVISM; F2: SOCIALISM * COLLECTIVISM * ru_violation; F3: WEAK * socialism * collectivism * RU_VIOLATION; [solution]). Note: IS represents being insufficient to explain protest success, in which case consistency has no meaning, shown as "–".

The analysis of the three variables for resource mobilization shows that no combinations can explain protest success. Similarly, no combinations of the three variables related to protest tactics can be seen as sufficient conditions for protest success. Combinations of the four protest frames can to some degree explain protest success. fsQCA reveals three causal paths: (F1) "using SOCIALISM and COLLECTIVISM while not using WEAK", (F2) "using SOCIALISM and COLLECTIVISM while not using RU_VIOLATION", and (F3) "using WEAK and RU_VIOLATION while not using SOCIALISM and COLLECTIVISM". Comparison of these paths shows that SOCIALISM and COLLECTIVISM tend to appear together but do not show up with either WEAK or RU_VIOLATION at the same time.
Comparison of the raw and unique coverages6 of the three paths shows that path F1 has the largest explanatory power whereas path F3 has the smallest explanatory power. This indicates that SOCIALISM and COLLECTIVISM play a significant role in anti-demolition protests. Although the three causal paths involving protest frames satisfy the sufficiency criterion, they only explain approximately 36 % of the successful cases (raw coverage is 0.36). Comparison shows that the explanatory power of frames is weaker than that of political opportunity structures (raw coverage is 0.57). One explanation might be that framing strategies influence protest success indirectly through changing the probability of a central government response to protests. This indirect working mechanism might have reduced the explanatory power of protest frames. In sum, resource mobilization and protest tactics cannot adequately explain protest success; protest frames have a moderate explanatory power for protest success, while political opportunities have fairly strong explanatory power. Although existing studies have pointed out that political opportunities play a significant role in the success of protests in China (e.g. Cai 2010), they mainly emphasize the importance of direct intervention from higher-level governments and have not differentiated modes of government intervention nor elaborated how different modes of intervention shape protest success. This article not only lends support to the importance of central government intervention but also shows that the explanatory power of supportive reports from central state-sponsored media is greater than that of the central government's direct intervention. In particular, central intervention has the strongest effect on protest success when it occurs through multiple institutionalized channels and is publicly endorsed by state-sponsored media. We call this "multi-channelled forceful intervention". Assessment of the robustness of the causal paths to successful protests Given that the results of fuzzy-set qualitative comparative analysis are sensitive to the assignment of fuzzy-set scores (Skaaning 2011), it is necessary to test the robustness of the analytical results. Some may argue that a "tragic victory" should be regarded as failure because protestors have paid a huge price and should thus be coded as 0.4 instead of 0.6. We recoded "tragic victory" as 0.4 and re-ran the above analysis, and the results were basically the same. In our single-factor analysis, except for "new media involvement", no other factor on its own can be regarded as a necessary condition for protest success; and no factor can be regarded as a sufficient condition for protest success, with the sufficiency consistencies of "CGOV" and "CMEDIA" dropping to 0.72 and 0.71, respectively. This shows that it is harder to explain protest success when the standard of protest success is raised. We then reanalysed the explanatory power of each theory for protest success. The results show that resource mobilization and protest tactics cannot adequately explain success, whereas protest frames and political opportunities can to some extent explain success. As for protest frames, we found the same causal paths as in the previous analysis, but the raw coverage became 0.397. However, when raising the standard of protest success, the co-presence of "CGOV", "CMEDIA", and "OBOPP" is required to achieve protest success.
This causal path explains approximately 46.5 % of the successful cases, which is larger than the share explained by protest frames. We also investigated how the assignment of fuzzy-set scores of CGOV and CMEDIA may influence the robustness of the findings. We recoded "central government intervention with a neutral stance" to 0.7 and re-ran the analysis, and the result remained basically the same. Similarly, we reached the same conclusion even when we recoded the values of 0.4 and 0.6 in CMEDIA as 0.3 and 0.7, respectively. Political opportunities as a variable Because the central government does not intervene in all protests, studies of protest outcomes must answer the question of "under what circumstances will the central government intervene". However, this question has not been adequately explored by existing studies (Cai 2010, p. 5). Cai (2010) points out that the central government's intervention to some degree depends on the forcefulness of the protest, which is determined by protestors' resources and strategies. The present study further contends that framing strategies have a significant influence on the occurrence of the central government's multi-channelled forceful intervention. This section uses fsQCA to investigate the effects of resource mobilization, protest tactics, and protest frames on the occurrence of multi-channelled intervention. Our results (Table 3) show that neither the three variables in relation to resource mobilization nor the three protest tactics can sufficiently explain the central government's multi-channelled intervention. However, different combinations of protest frames can adequately explain the occurrence of the central government's multi-channelled forceful intervention. The result of fsQCA reveals three causal paths. Causal path F1 is "using WEAK, SOCIALISM and COLLECTIVISM while not using RU_VIOLATION", which explains about 8 % of the cases. Causal path F2 is "not using WEAK but using COLLECTIVISM and RU_VIOLATION", which explains about 30 % of the cases. These two paths together explain about 38 % of the cases involving the central government's multi-channelled intervention. This result confirms, in a preliminary way, our argument that protest frames are a key to understanding government intervention.

Table 3. The conditions for "multi-channelled forceful intervention" (protest frames: F1: WEAK * SOCIALISM * COLLECTIVISM * ru_violation; F2: weak * SOCIALISM * RU_VIOLATION; institutional environment * frames: IF1: OBOPP * SOCIALISM * COLLECTIVISM * ru_violation; IF2: OBOPP * weak * SOCIALISM * RU_VIOLATION; IF3: OBOPP * weak * COLLECTIVISM * RU_VIOLATION; IF4: weak * SOCIALISM * COLLECTIVISM * RU_VIOLATION). Note: IS represents being unable to form sufficient conditions, in which case consistency has no meaning, shown as "–".

Previously, we argued that the numerous readjustments in laws and regulations during the past 10 years have expanded political opportunities for anti-demolition protesters. Descriptive analysis shows that about 60.1 % of the successful protests occurred within a favourable institutional framework. An institutional framework not only facilitates or constrains protestors' mobilization efforts and choice of protest tactics and frames but also shapes the central government's responding strategies; therefore, we included "favourable institutional framework" (OBOPP) and the four protest frames as explanatory conditions and re-ran the analysis.
The result reveals four causal paths, including (IF1) "presence of OBOPP, using SOCIALISM and COLLECTIVISM, not using RU_VIOLATION", (IF2) "presence of OBOPP, using SOCIALISM and RU_VIOLATION, but not using WEAK", (IF3) "presence of OBOPP, using COLLECTIVISM and RU_VIOLATION, but not using WEAK", and (IF4) "using SOCIALISM, COLLECTIVISM and RU_VIOLATION, but not using WEAK". The overall coverage of this set of causal paths is 0.507, which indicates that more than half of the multi-channelled forceful intervention cases can be explained by frames and institutional framework. Further examination shows that OBOPP is an ingredient of three paths, and the unique coverage of the causal path without OBOPP is very small. This means that multi-channelled forceful intervention from the central government mainly occurred in an institutional environment that favoured protestors. As for protest frames, some protests simultaneously use the discourses of COLLECTIVISM and SOCIALISM, which we call the "traditional cultural frame" (IF1), while other protestors use the policy-based frame of RU_VIOLATION together with culture-based frames, which we call "mixed frames" (IF2-IF4). The paths consisting of "mixed frames" explain 34.9 % of cases with multi-channelled intervention. It should be mentioned that although 65 % of the cases used the "WEAK" frame, the three "mixed frames" paths do not include "WEAK", which indicates that the frame of "WEAK" has limited effect on protest success. To elaborate the conjectural influence of institutional framework and protest frames on protest success, we will analyse an anti-demolition case in detail. In this case, a teacher who was to be relocated as a result of residential demolition was suspended from work without pay in 2010. From January 25 to 27, 2011, this case was reported by Eastern Morning Post, Xinhua Daily Telegraph, and People's Court News. Among these reports, Xinhua Daily Telegraph pointed out that demolition in the "Zhulian"7 style is not only an invasion of citizens' civil rights but also a violation of laws (Shan 2011). On February 1, 2011, People's Court News pointed out that this event was a covert forced demolition, which violated the "Regulations on the Expropriation and Compensation of Houses on State-owned Land (draft)", a bill that was passed on January 19, 2011; it also contended that such a covert forced demolition had caused tensions between cadres and those affected by the demolition, which had further created grievances among the masses (Wang 2011). At the same time, a new district management committee began investigating this event and later advised that the victim's salary should be reinstated, an apology be issued, and self-criticism be conducted about the demolition. In relation to this case, the Central Discipline Inspection Commission and the Ministry of Supervision made an announcement in March 2011. The announcement demanded the strengthening of the supervision of demolition policies, as well as the curbing and correcting of demolition projects that were in violation of rules and regulations. This case shows that, following the promulgation of new laws and regulations, mass media are more likely to report anti-demolition protests from the perspective of the rule of law and its violation. It also confirms the co-presence of socialist discourse, such as that on cadre-mass relations, and legal discourse (Lee 2000; Tong 2006). Similarly, a case in Jiangxi province clearly demonstrates the effects of protest frames.
During an investigation by China Youth Daily, one staff member confidently bragged that "this kind of news regarding demolition incidents would never be publicly reported" (Tu 2008). Yet, when Legal Daily synthesized anti-demolition protestors' discourse and raised questions about whether the local government should use political resources to service demolition for business development, whether threatening civil servants related to the demolished house units (see "Zhulian") has any legal base, and the relationship between housing demolition regulations and property law, the local government swiftly responded to such questions (Chen and Li 2008). Understanding factors that shape central government intervention Why is using a particular set of protest frames within a particular institutional environment more likely to gain multi-channelled forceful intervention from the central government? This sub-section, drawing from existing studies, will discuss the possible driving forces of central government's responses to anti-demolition protests from a macro perspective. Since 2003, "stability maintenance" has become a core issue in the governance of Chinese society, and this was strengthened between 2005 and 2008 to form an authoritarian regime where "protest bargaining" became an important way to absorb popular protests in a non-zero-sum manner (Lee and Zhang 2013). To a certain degree, maintaining stability has become the foundation by which the central government chooses to intervene in popular protests. We suggest that the drivers of the central government's decision to intervene in anti-demolition protests might lie in its pursuit of multiple and sometimes conflicting policy aims in relation to land use. On the one hand, it aims to improve the productivity of land use and promote economic growth without violating the principle of land being owned by the state; on the other hand, it is necessary to protect arable land and safeguard food security (Lin and Ho 2005). However, in the process of land and urban development, land transfer fees have become an important source of revenue for local governments. Driven by economic interests, it is not uncommon that local governments violate laws and regulations, which has not only caused anti-demolition protests and aggravated social conflicts but also influenced the policy aim of protecting arable land. The pursuit of arable land protection, social stability maintenance, and disciplining local governments provides drivers that encourage the central government to intervene. A study has pointed out that political legitimacy is an important driver of central government intervention (Cai 2010). In the past 10 years, the state has endeavoured to regain political legitimacy, and its focus has shifted from economic performance and nationalism to political ideology and institution building (Holbig and Gilley 2010), elevating the significance of ideologies such as harmonious society, traditional culture, the building of institutions to support governance, and democracy with Chinese characteristics (e.g. rule by law). Against this background, the protest frames of socialism, collectivism, and rule violation are congruent with the political ideologies, traditional culture, and institution building which undergird the rebuilding of political legitimacy; this congruency has not only provided protestors the justification for mobilization and organization (Tong 2006) but also enhanced the legitimacy of their demands. 
The central government's inaction, in the face of legitimate demands, may negatively influence its political legitimacy. In line with this, the framing strategies described previously encourage intervention from the central government. In addition, political legitimacy building is also a driver for changing rules and regulations (Gilley 2008). Specifically, not only are readjustments of demolition policies a response from the central government to past protests (Cai 2010) but they can also be regarded as an effort by the central government to regain political legitimacy and maintain social stability. Situated in this political context, using the frame of rule violation is conducive to protest success. Lastly, using multiple protest frames that are compatible and congruent with the state's legitimacy building efforts can appeal to different central government departments and thus increase the effectiveness of framing strategies. In sum, the central government's "multiple-channelled intervention" is key to the success of protests in China. Stability maintenance and the multiplicity of policy aims are the foundation for central intervention. The state's recent efforts to regain political legitimacy mean that protest frames play a crucial role in promoting central government intervention. As demonstrated previously, protestors are more likely to gain "multiple-channelled forceful intervention" from the central government when they utilize multiple "mixed frames" congruent with the state's legitimacy building discourse. This article takes anti-demolition protests as a case to explore what factors shape the success of socially influential protests. It compared 40 cases of anti-demolition protests that occurred in 2003–2012 through the method of fuzzy-set qualitative comparative analysis and found that the co-presence of central government direct intervention and supportive reports from central state-sponsored media is a sufficient condition for protest success. We call this mode of intervention "multi-channelled forceful intervention". Although framing strategies have explanatory power for protest success, they are likely to indirectly influence protest success through increasing the central government's "multi-channelled forceful intervention". It found that the frames of SOCIALISM, COLLECTIVISM, and RU_VIOLATION are pertinent to protest success. In sum, political opportunity structures and framing theories have strong explanatory power for protest success in China. Although the institutional environment is neither a sufficient nor a necessary condition for protest success, it does play an important part in shaping the success of protests: (1) the most successful protests occurred within an institutional framework favourable to protestors; (2) analysis shows that the co-presence of a favourable institutional framework and a "multi-channelled intervention" from the central government is required for protest success once it is defined by a stricter criterion; and (3) when institutional environment is favourable to protestors, framing strategies are more likely to increase the chance of central government intervention. These findings suggest that an institutional framework favourable to protesters should be regarded as a political opportunity structure. 
Meanwhile, it should be acknowledged that protestors may perceive and make use of political opportunities differently, and future studies need to investigate how "objective" political opportunities, protestors' perception of opportunities, and their strategies to make use of opportunities jointly shape protest results. The importance of political opportunities, and central government interventions in particular, for protest success lends support to existing studies of protests (e.g. Cai 2010). Meanwhile, this study further elaborates the significance of the central government's "multi-channelled intervention". Specifically, direct intervention from the central government is merely one element of the causal path to protest success, and it is the co-presence of the central government's direct intervention and the support from central state-sponsored media that forms a sufficient condition for protest success. Comparing the explanatory power of the central government's direct intervention and that of the open support from state-sponsored media shows that the two are roughly the same, with the latter's sufficiency coverage slightly larger than that of the former. This finding is understandable. One potential explanation is that even if the central government directly intervenes in a protest, in the absence of open media reports, it is difficult for protestors and the public to know the actual attitudes of the central government towards the protests and therefore difficult to effectively make use of the political opportunities afforded by higher levels of government. In contrast, when the central government openly supports a protest, protestors are more likely to fully take advantage of the political opportunities afforded by the higher-level governments and more likely to gain support from the public, so that protestors have more power to bargain with the local governments and eventually achieve success. In addition, the co-presence of the central government's direct intervention and supportive reports from state-sponsored media also implies the multiplicity of potential allies from within the polity, which is a key factor in successful protests. In line with the above discussion, it is possible to develop a typology of modes of government intervention based on the intervention channels (e.g. single channel vs multiple channels) and on whether the intervention is public; the "multi-channelled intervention" studied in this paper is a particular mode of intervention that occurs simultaneously through multiple channels and is known to the public because of media coverage. Yet, it is worth mentioning that the above theoretical explanation needs further investigation. Our study shows that there is no robust relationship between the occurrence of the central government's "multi-channelled intervention" and resource mobilization, or between "multi-channelled intervention" and protest tactics. In contrast, meaningful relationships between protest frames and the occurrence of "multi-channelled intervention" are identified. Among the frames, COLLECTIVISM, SOCIALISM, and RU_VIOLATION play significant roles in anti-demolition protests, while the WEAK frame is not beneficial to protest success.
We argue that, on the one hand, the authoritarian regime of stability maintenance and the multiplicity of policy aims in relation to land use are key to understanding the central government's multi-channelled forceful intervention in anti-demolition protests; on the other hand, the central government's endeavour to regain political legitimacy to a certain degree affords political opportunities for successful protests. Protest frames congruent with the state's legitimacy building discourse not only legitimize protestors' claims but also increase the cost of central government inaction. Strategic use of such frames can thus elicit supportive intervention from the central government. Our findings suggest that framing can not only mobilize potential participants by activating the public's resonance with the cause at stake (Amenta and Caren 2004) but also influence protest results by directly appealing to higher-level governments, and the central government in particular. The tremendous significance of ideology and legitimacy for authoritarian states makes the latter working mechanism of framing strategies especially important. Of course, why governments are more likely to respond to particular frames needs further study. Some may argue that because the selected cases span 10 years, media reports about protests occurring at an early stage can influence the frames used in protests during later stages and thus indirectly shape the likelihood of success. If this argument holds, it implies systematic differences in frame prevalence between early and later stages. To test this inference, we used 2008 to demarcate our cases into two period groups and investigated whether there was a significant difference in the usage of protest frames. The analysis shows no systematic differences; hence, the previously mentioned argument is not supported. Moreover, is it possible that the occurrence and results of protests at the early stage influence the dynamics and results of protests at the later stage? We think that such feedback effects do exist, the most important being that the central government may adjust the rules and policies regarding demolition as a result of early anti-demolition protests; further, such adjustments may provide opportunities for protests at later stages. It is worth mentioning that such a feedback effect does not invalidate the conclusions of this study. This article has some limitations. First, using media reports as data sources might lead to selection bias. It has been demonstrated that media coverage of popular protests is shaped by factors such as event type, issue involved, news agency, event size, and status of event sponsors (Earl et al. 2004; McCarthy et al. 2008). Given that the selected cases in this study are by and large socially influential ones, the findings of the present study are mainly generalizable to high-profile and influential protests, and whether they can be generalized to more localized protests will require further investigation. Second, due to limited data availability, this article has not examined the roles of institutional protest tactics (e.g. litigation and petition writing) or news reports from local media. Finally, this study aims to explore what factors are conducive to successful protests by systematically comparing cases, which means that it has not scrutinized, through in-depth analysis of individual cases, the dynamic mechanisms through which these factors lead to the success of protests.
Future studies can use in-depth interviews to explore how strategic interactions between protestors and governments shape protest outcomes. Specifically, probing how protestors and local governments perceive signals from the central government and act accordingly can further substantiate the thesis of "multi-channelled forceful intervention" developed in the present study.

Notes
1. For discussion on the conceptual difference between protest outcome and protest consequence, see Amenta and Caren 2004, Cress and Snow 2000, and Giugni 1998.
2. http://www.ftchinese.com/story/001048280/ accessed on April 27, 2014.
3. If the consistency index is significantly lower than 0.8, then the coverage index has no practical meaning and there is no need to calculate it.
4. In a six-value coding scheme, fuzzy-set scores can take the values 1, 0.8, 0.6, 0.4, 0.3, and 0. Among these values, 1 represents the presence of a condition, 0 represents the absence of a condition, and the other values lie in between.
5. "OBOPP" means objective opportunity.
6. Roughly speaking, raw coverage measures the percentage of cases a particular path can explain. Yet some cases can be explained by multiple causal paths; thus, raw coverage cannot effectively reflect the explanatory power of one causal path once the other causal paths are taken into account. Unlike raw coverage, unique coverage measures the percentage of cases that can only be explained by a particular causal path.
7. ZhuLian refers to the way property developers and local governments threaten to terminate the employment of public officials who are themselves anti-demolition protestors or who are related to such protestors: these officials are required not only to sign a number of unreasonable relocation agreements but also to mobilize their relatives to sign, or else they are suspended without pay or transferred.

Authors' contributions
RH is primarily responsible for the manuscript; he participated in formulating the theoretical framework, data analysis, and manuscript writing and revision. YG participated in formulating the theoretical framework and research design. WZ collected and coded the dataset, described the cases, and participated in formulating the theoretical framework. All authors read and approved the final manuscript.

Open access
This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Author affiliations
Department of Sociology, Fudan University, 220 Handan Road, Shanghai, 200433, China
School of Journalism, Fudan University, Shanghai, China

References
Amenta, E., and N. Caren. 2004. The legislative, organizational, and beneficiary consequences of state-oriented challengers. In The Blackwell companion to social movements, ed. D.A. Snow, S.A. Soule, and H. Kriesi, 461–488. Oxford: Blackwell Pub.
Amenta, E., and J.D. Poulsen. 1994. Where to begin: A survey of five approaches to selecting independent variables for qualitative comparative analysis. Sociological Methods and Research 23(1): 22–53.
Amenta, E., N. Caren, and S.J. Olasky. 2005. Age for leisure? Political mediation and the impact of the pension movement on US old-age policy. American Sociological Review 70(3): 516–538.
Burstein, P. 1999.
Social movements and public policy. In How social movements matter, ed. M. Giugni, D. McAdam, and C. Tilly, 3–21. Minneapolis: University of Minnesota Press.Google Scholar Burstein, P., and A. Linton. 2002. The impact of political parties, interest groups, and social movement organizations on public policy. Social Forces 81(2): 380–408.View ArticleGoogle Scholar Cai, Y. 2008. Power structure and regime resilience: contentious politics in China. British Journal of Political Sciences 38(3): 411–432.Google Scholar Cai. 2010. Collective resistance in China. Stanford: Stanford University Press.Google Scholar Caramani, D. 2009. Introduction to the comparative method with Boolean algebra. LA: Sage.View ArticleGoogle Scholar Chen, Peng. 2010. The rights struggle of contemporary Chinese owners. Sociological Studies 1: 1–30.Google Scholar Chen, Xiuqi, and Li Qing. 2008. Many legal questions about 'ZhuLian' demolition in FengCheng of JiangXi Province. Legal Daily. (January 15, 2008).Google Scholar Cheng, Xiuying. 2012. From political petition to legal logic: A discourse analysis of the politics of Chinese workers' protests. Open Times 1: 73–89.Google Scholar Cloward, A., and F. Piven. 1984. Disruption and organisation: A rejoinder [to William A. Gamson and Emilie Schmeidler]. Theory and Society 13(4): 587–599.View ArticleGoogle Scholar Cress, M., and A. Snow. 2000. The outcomes of homeless mobilization: The influence of organization, disruption, political mediation, and framing. American Journal of Sociology 105(4): 1063–1104.View ArticleGoogle Scholar Dong, Haijun. 2008. The weak identity as a weapon: The subaltern politics of the peasant resistance for rights. Chinese Journal of Sociology 4: 34–78.Google Scholar Earl, J., A. Martin, D. McCarthy, and A. Soule. 2004. The use of newspaper data in the study of collective action. Annual Review of Sociology 30: 65–80.View ArticleGoogle Scholar Gilley, B. 2008. Legitimacy and institutional change: The case of China. Comparative Political Studies 41(30): 259–284.Google Scholar Giugni, M. 1998. Was it worth the effort? The outcomes and consequences of social movements. Annual Review of Sociology 24(1): 371–393.View ArticleGoogle Scholar Giugni, M. 2007. Useless protest? A time-series analysis of the policy outcomes of ecology, antinuclear, and peace movements in the United States, 1977-1995. Mobilization: An International Quarterly 12(1): 53–77.Google Scholar Goldstone, A. 1980. The weakness of organization: A new look at Gamson's 'the strategy of social protest'. American Journal of Sociology 85(5): 1017–1042.View ArticleGoogle Scholar He, X. 2014. Maintaining stability by law: Protest-supported housing demolition litigation and social change in China. Law and Social Inquiry 39(4): 849–873.View ArticleGoogle Scholar Holbig, H., and B. Gilley. 2010. Reclaiming legitimacy in China. Politics and Policy 38(3): 395–422.View ArticleGoogle Scholar Hsing, Y. 2010. The great urban transformation. Oxford: Oxford University Press.View ArticleGoogle Scholar Huang, Zhenhui. 2011. Performative protest: Landscape, challenge and the mechanism of occurrence. Open Time 2: 71–84.Google Scholar Kitschelt, P. 1986. Political opportunity structures and political protest: Anti-nuclear movements in four democracies. British Journal of Political Science 16(1): 57–85.View ArticleGoogle Scholar Kuckartz, Udo. 2014. Qualitative text analysis. LA: Sage.Google Scholar Lee, C.K. 2000. The 'revenge of history': Collective memories and labor protests in North-Eastern China. 
Ethnography 1(2): 217–237.View ArticleGoogle Scholar Lee, C.K., and Y. Zhang. 2013. The power of instability: Unravelling the micro-foundations of bargained authoritarianism in China. American Journal of Sociology 118(6): 1475–1508.View ArticleGoogle Scholar Li, Liangrong. 2003. Discussion on the dual-track system of Chinese news media. Modern Communication 3: 1–4.Google Scholar Li, Guowu, and Yuan Fan. 2013. Land development within the growth of regional economy. Journal of Central University of Finance & Economics 5: 65–70.Google Scholar Li, L., and L. Liu. 2009. 30 years' reform of China's mass media. Asia Europe Journal 7(3-4): 405–415.View ArticleGoogle Scholar Lin, G.C.S., and S.P.S. Ho. 2005. The state, land system, and land development processes in contemporary China. Annals of the Association of American Geographers 95(2): 411–436.View ArticleGoogle Scholar Lü, Dewen. 2012. Media mobilisation, demolition-resistant families and contentious politics: Reanalysis of the event of Yihuang. Chinese Journal of Sociology 3: 129–170.Google Scholar McCammon, H.J. 2001. Stirring up suffrage sentiment: The formation of the state woman suffrage organizations, 1866-1914. Social Forces 80(2): 449–480.View ArticleGoogle Scholar McCammon, H.J., C.S. Muse, H.D. Newman, and T.M. Terrell. 2007. Movement framing and discursive opportunity structures. American Sociological Review 72(5): 725–749.View ArticleGoogle Scholar McCann, M. 2004. Law and social movements. In The Blackwell companion to law and society, ed. A. Sarat, 506–522. Oxford: Blackwell Pub.View ArticleGoogle Scholar McCarthy, J., L. Titarenko, C. McPhail, P. Rafail, and B. Augustyn. 2008. Assessing stability in the patterns of selection bias in newspaper coverage of protest during the transition from communism in Belarus. Mobilization: An International Quarterly 13(2): 127–146.Google Scholar Meyer, D.S., and D.C. Minkoff. 2004. Conceptualizing political opportunity. Social Forces 82(4): 1457–1492.View ArticleGoogle Scholar O'Brien, K., and L. Li. 2006. Rightful resistance in rural China. Cambridge: Cambridge University Press.View ArticleGoogle Scholar Olzak, S., and E. Ryo. 2007. Organizational diversity, vitality and outcomes in the civil rights movement. Social Forces 85(4): 1561–1591.View ArticleGoogle Scholar Piven, F.F., and R.A. Cloward. 1979. Poor people's movements. New York: Vintage Books.Google Scholar Ragin, C. 2008. Redesigning social inquiry. Chicago: University of Chicago Press.View ArticleGoogle Scholar Rihoux, B., and B. Lobe. 2009. The case for qualitative comparative analysis (QCA). In The SAGE handbook of case-based methods, ed. David Byrne and Charles C. Ragin, 222–242. LA: Sage.View ArticleGoogle Scholar Rootes, C. 2006. Explaining the outcomes of campaigns against waste incinerators in England. In Community and ecology, ed. A. McCright and T.N. Clark, 179–198. Bingley: Emerald Group Publishing Limited.View ArticleGoogle Scholar Skaaning, S.E. 2011. Assessing the robustness of crisp-set and fuzzy-set QCA results. Sociological Methods and Research 40(2): 391–408.View ArticleGoogle Scholar Shan, Shibing. 2011. Implicating officials in the process of demolition is anti-humanist and unethical. XinHua Daily Telegraph. (January 6, 2011).Google Scholar Shin, H.B. 2013. The right to the city and critical reflections on China's property rights activism. Antipode 45(2): 1167–1189.Google Scholar Stockmann, D. 2010. Who believes propaganda? Media effects during the anti-Japanese protests in Beijing. 
The China Quarterly 202: 269–289.View ArticleGoogle Scholar Sun, Y., and N. Zhao. 2007. Multifaceted state and fragmented society: dynamics of environmental movement in China. In Discontented miracle: Growth, conflict, and institutional adaptations in China, ed. D. Yang, 111–159. Singapore: World Scientific Publisher.View ArticleGoogle Scholar Tangbiao, Xiao, and Weina Kong. 2011. The consequences of contemporary Chinese mass incidents. Comparative Economic and Social Systems 2: 190–198.Google Scholar Tarrow, S. 2011. Power in movement. Cambridge: Cambridge University Press.View ArticleGoogle Scholar Tong, Xin. 2006. The continued socialist cultural tradition: An analysis of collective action of workers in a state-owned enterprise. Sociological Studies 1: 59–76.Google Scholar Tu, Chaohua. 2008. Investigating demolition 'ZhuLian' in FengCheng of JiangXi Province. China Youth. (October 1, 2008).Google Scholar Wang, Weijian. 2011. Demolition while doing 'ZhuLian', whose fault it is?. People's Court News. (January 27, 2011).Google Scholar Weinstein, L., and X. Ren. 2009. The changing right to the city: Urban renewal and housing rights in globalizing Shanghai and Mumbai. City and Community 8(4): 407–432.View ArticleGoogle Scholar Xia, Ying. 2014. From the marginal to the mainstream: Collective action frames and cultural context. Chinese Journal of Sociology 1: 52–74.Google Scholar Xie, Yue. 2010. The politics of protest. ShangHai: ShangHai Education Press.Google Scholar Ying, Xing. 2007. The grassroots mobilization and the mechanism of interest expression of the peasants group. Sociological Studies 1: 1–23.Google Scholar Yu, Zhiyuan. 2012. The factors shaping the outcomes of collective actions. Sociological Studies 3: 90–112.Google Scholar Zhang, Lei. 2005. Beijing house owners' rights protection movement: reasons of breakout and mobilization mechanism. Sociological Studies 6: 1–39.Google Scholar Zhao, Dingxin. 2012. Lectures on social and political movements (second edition). Beijing: Social Sciences Academic Press.Google Scholar Zhao, Ling. 2003. The 10-year tragicomedy of demolition. Southern Weekend. (September 4, 2003).Google Scholar Zhou, Feizhou. 2007. The role of government and farmers in land development and transfer. Sociological Studies 1: 1–34.Google Scholar Zhou, M., and J. Logan. 1996. Market transition and the commodification of housing in urban China. International Journal of Urban and Regional Research 20(30): 400–421.View ArticleGoogle Scholar Zhu, Xiaoyang. 2011. A little village's tale: Topography and homeland 2003-2009. Beijing: Peking University Press.Google Scholar Zhu, Tao. 2009. Iconic events of forceful demolition. Finance and Economics 12. (December 21, 2009). http://misc.caijing.com.cn/chargeFullNews.jsp?id=110341593&time=2009-12-21&cl=106. Accessed 27 Apr 2014.
Repeated roots of $\det(A-xI)$

The solutions of the equation $\det(A-xI)=0$ give rise to the eigenvalues of the matrix $A$. This confuses me because the equation can give repeated roots. Suppose there is a root $x=\lambda$ of multiplicity $n$, and the corresponding eigenvectors are $\{v_1,v_2,...,v_n\}$. Then any linear combination of these vectors is an eigenvector, because $A(v_1+v_2)=Av_1+Av_2=\lambda v_1+\lambda v_2=\lambda(v_1+v_2)$. What I find puzzling is that those $n$ vectors don't have to be linearly independent. For example, the matrix $\begin{bmatrix}1&0\\-4&1\end{bmatrix}$ has a repeated eigenvalue $\lambda=1$ of multiplicity 2, but the corresponding vector space of eigenvectors is only one-dimensional, spanned by $(0,1)^T$. This leads me to the following questions.
1. If a non-singular matrix $A$ satisfies $\operatorname{rank}(A-\lambda I)=\operatorname{rank}(A)-n$, does the equation $\det(A-xI)=0$ always have a repeated root of multiplicity greater than or equal to $n$ at $x=\lambda$?
2. Under what conditions are those $\{v_1,v_2,...,v_n\}$ above linearly independent?

Tags: linear-algebra matrices eigenvalues-eigenvectors matrix-rank. Asked by Jethro.

This is not a full answer; it's just a collection of thoughts related to your predicament. It is, however, a bit long for a comment. What you have here is an example of an eigenvalue whose geometric multiplicity (the dimension of the corresponding eigenspace) is strictly smaller than its algebraic multiplicity (the degree of the root of the characteristic polynomial). One concept invented to resolve this is generalized eigenvectors, which are vectors $v$ where $(A-\lambda I)v$ isn't necessarily $0$, but $(A-\lambda I)^nv=0$ for some natural number $n$. You can always find enough generalized eigenvectors for a given eigenvalue to make up for the discrepancy you noted here. In your case, any vector in the plane is a generalized eigenvector corresponding to the eigenvalue $1$, which matches the degree of the root $\lambda=1$ in your characteristic polynomial. If every eigenvalue does have the "correct" eigenspace dimension, then the matrix and the linear transformation it represents are diagonalizable. If there is some eigenspace which is "too small", then it is not diagonalizable. The resolution here, related to the concept of generalized eigenvectors above, is the so-called Jordan normal form, which says that when transforming $A$ to some specific basis of generalized eigenvectors, the resulting matrix is almost diagonal: it has the eigenvalues on the diagonal, $1$'s in some places along the superdiagonal, and $0$ otherwise. – Arthur

The best way to think about this might be the eigenspaces, not eigenvectors. If you have a nondegenerate eigenvalue (i.e. $n=1$), the space of eigenvectors is $1$-dimensional and thus you can determine it by giving a single nonzero vector $v_1$, which automatically yields a basis $\{v_1\}$. If you have $n > 1$ though, what you mean by $\{v_1, ..., v_n\}$ is some basis of the eigenspace, and that's just where the linear independence comes from. As a basis, the $v_i$ are in no way uniquely determined, though. I think that's where your confusion comes from. – Gnampfissimo

Comment: The way I read the question, this isn't about the non-uniqueness (up to scaling) of an eigenbasis, but about how there are "missing" dimensions of the eigenspace compared to the multiplicity of roots in the characteristic polynomial.
– Arthur Nov 27 '18 at 7:36
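A quick numerical check can make the gap between algebraic and geometric multiplicity concrete. The following is a minimal sketch, assuming NumPy is available; it looks at the matrix from the question, confirms that its only eigenvalue is 1 with algebraic multiplicity 2 while rank(A − I) = 1 (so the eigenspace is one-dimensional), and verifies that (A − I)² = 0, so every vector in the plane is a generalized eigenvector.

import numpy as np

A = np.array([[1.0, 0.0],
              [-4.0, 1.0]])

# det(A - xI) = (1 - x)^2, so x = 1 is a root of multiplicity 2.
eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)                       # both eigenvalues equal 1 -> algebraic multiplicity 2

# Geometric multiplicity = dim ker(A - I) = 2 - rank(A - I).
rank_shifted = np.linalg.matrix_rank(A - np.eye(2))
print(2 - rank_shifted)                  # 1 -> the eigenspace is spanned by (0, 1)^T alone

# (A - I)^2 = 0, so every vector of the plane is a generalized eigenvector for lambda = 1.
print(np.allclose((A - np.eye(2)) @ (A - np.eye(2)), np.zeros((2, 2))))   # True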
Mon, 03 Jun 2019 03:51:44 GMT 6.E: Applications of Integration (Exercises) [ "article:topic", "showtoc:yes" ] Homework Exercises Exercises: Calculus (OpenStax) 6.2: Determining Volumes by Slicing Disk and Washer Method 6.3: Volumes of Revolution: Cylindrical Shells 6.4: Arc Length of a Curve and Surface Area 6.5: Physical Applications 6.6: Moments and Centers of Mass 6.7: Integrals, Exponential Functions, and Logarithms 6.8: Exponential Growth and Decay 6.9: Calculus of the Hyperbolic Functions Chapter Review Exercises These are homework exercises to accompany OpenStax's "Calculus" Textmap. 6.1: Areas between Curves For exercises 1 - 2, determine the area of the region between the two curves in the given figure by integrating over the \(x\)-axis. 1) \(y=x^2−3\) and \(y=1\) \(\dfrac{3}{2} \, \text{units}^2\) 2) \(y=x^2\) and \(y=3x+4\) For exercises 3 - 4, split the region between the two curves into two smaller regions, then determine the area by integrating over the \(x\)-axis. Note that you will have two integrals to solve. 3) \(y=x^3\) and \( y=x^2+x\) \(\dfrac{13}{12}\, \text{units}^2\) 4) \(y=\cos θ\) and \( y=0.5\), for \( 0≤θ≤π\) For exercises 5-6, determine the area of the region between the two curves by integrating over the \(y\)-axis. 5) \(x=y^2\) and \(x=9\) \(36 \, \text{units}^2\) 6) \(y=x\) and \( x=y^2\) For exercises 7 - 13, graph the equations and shade the area of the region between the curves. Determine its area by integrating over the \(x\)-axis. 7) \(y=x^2\) and \(y=−x^2+8x\) 243 square units 8) \(y=\dfrac{1}{x}, \quad y=\dfrac{1}{x^2}\), and \(x=3\) 9) \(y=\cos x\) and \(y=\cos^2x\) on \(x \in [−π,π]\) 4 square units 10) \(y=e^x,\quad y=e^{2x−1}\), and \(x=0\) 11) \(y=e^x, \quad y=e^{−x}, \quad x=−1\) and \(x=1\) \(\dfrac{2(e−1)^2}{e}\, \text{units}^2\) 12) \( y=e, \quad y=e^x,\) and \(y=e^{−x}\) 13) \(y=|x|\) and \(y=x^2\) \(\dfrac{1}{3}\, \text{units}^2\) For exercises 14 - 19, graph the equations and shade the area of the region between the curves. If necessary, break the region into sub-regions to determine its entire area. 14) \(y=\sin(πx),\quad y=2x,\) and \(x>0\) 15) \(y=12−x,\quad y=\sqrt{x},\) and \(y=1\) \(\dfrac{34}{4}\, \text{units}^2\) 16) \(y=\sin x\) and \(y=\cos x\) over \(x \in [−π,π]\) 17) \(y=x^3\) and \(y=x^2−2x\) over \(x \in [−1,1]\) 18) \(y=x^2+9\) and \( y=10+2x\) over \(x \in [−1,3]\) 19) \(y=x^3+3x\) and \(y=4x\) For exercises 20 -25, graph the equations and shade the area of the region between the curves. Determine its area by integrating over the \(y\)-axis. 20) \(x=y^3\) and \( x = 3y−2\) 21) \(x=y\) and \( x=y^3−y\) 22) \(x=−3+y^2\) and \( x=y−y^2\) 23) \(y^2=x\) and \(x=y+2\) 24) \(x=|y|\) and \(2x=−y^2+2\) 25) \(x=\sin y,\quad x=\cos(2y),\quad y=π/2\), and \( y=−π/2\) \(\dfrac{3\sqrt{3}}{2}\, \text{units}^2\) For exercises 26 - 37, graph the equations and shade the area of the region between the curves. Determine its area by integrating over the \(x\)-axis or \(y\)-axis, whichever seems more convenient. 26) \(x=y^4\) and \(x=y^5\) 27) \(y=xe^x,\quad y=e^x,\quad x=0\), and \(x=1\). 
\(e^{−2}\, \text{units}^2\) 28) \(y=x^6\) and \(y=x^4\) 29) \(x=y^3+2y^2+1\) and \(x=−y^2+1\) 30) \( y=|x|\) and \( y=x^2−1\) 31) \(y=4−3x\) and \(y=\dfrac{1}{x}\) \(\left(\dfrac{4}{3}−\ln(3)\right)\, \text{units}^2\) 32) \(y=\sin x,\quad x=−π/6,\quad x=π/6,\) and \(y=\cos^3 x\) 33) \(y=x^2−3x+2\) and \( y=x^3−2x^2−x+2\) \(\dfrac{1}{2}\) square units 34) \(y=2\cos^3(3x),\quad y=−1,\quad x=\dfrac{π}{4},\) and \( x=−\dfrac{π}{4}\) 35) \(y+y^3=x\) and \(2y=x\) 36) \( y=\sqrt{1−x^2}\) and \(y=x^2−1\) 37) \(y=\cos^{−1}x,\quad y=\sin^{−1}x,\quad x=−1,\) and \( x=1\) \(−2(\sqrt{2}−π)\) square units For exercises 38 - 47, find the exact area of the region bounded by the given equations if possible. If you are unable to determine the intersection points analytically, use a calculator to approximate the intersection points with three decimal places and determine the approximate area of the region. 38) [T] \(x=e^y\) and \(y=x−2\) 39) [T] \(y=x^2\) and \(y=\sqrt{1−x^2}\) \(1.067\) square units 40) [T] \(y=3x^2+8x+9\) and \(3y=x+24\) 41) [T] \(x=\sqrt{4−y^2}\) and \( y^2=1+x^2\) 42) [T] \(x^2=y^3\) and \(x=3y\) 43) [T] \(y=\sin^3x+2,\quad y=\tan x,\quad x=−1.5,\) and \(x=1.5\) 44) [T] \(y=\sqrt{1−x^2}\) and \(y^2=x^2\) 45) [T] \(y=\sqrt{1−x^2}\) and \(y=x^2+2x+1\) \(\dfrac{3π−4}{12}\) square units 46) [T] \(x=4−y^2\) and \( x=1+3y+y^2\) 47) [T] \(y=\cos x,\quad y=e^x,\quad x=−π,\quad\) and\(\quad x=0\) 48) The largest triangle with a base on the \(x\)-axis that fits inside the upper half of the unit circle \(y^2+x^2=1\) is given by \( y=1+x\) and \( y=1−x\). See the following figure. What is the area inside the semicircle but outside the triangle? 49) A factory selling cell phones has a marginal cost function \(C(x)=0.01x^2−3x+229\), where \(x\) represents the number of cell phones, and a marginal revenue function given by \(R(x)=429−2x.\) Find the area between the graphs of these curves and \(x=0.\) What does this area represent? $33,333.33 total profit for 200 cell phones sold 50) An amusement park has a marginal cost function \(C(x)=1000e−x+5\), where \(x\) represents the number of tickets sold, and a marginal revenue function given by \(R(x)=60−0.1x\). Find the total profit generated when selling \(550\) tickets. Use a calculator to determine intersection points, if necessary, to two decimal places. 51) The tortoise versus the hare: The speed of the hare is given by the sinusoidal function \(H(t)=1−\cos((πt)/2)\) whereas the speed of the tortoise is \(T(t)=(1/2)\tan^{−1}(t/4)\), where \(t\) is time measured in hours and the speed is measured in miles per hour. Find the area between the curves from time \(t=0\) to the first time after one hour when the tortoise and hare are traveling at the same speed. What does it represent? Use a calculator to determine the intersection points, if necessary, accurate to three decimal places. \(3.263\) mi represents how far ahead the hare is from the tortoise 52) The tortoise versus the hare: The speed of the hare is given by the sinusoidal function \(H(t)=(1/2)−(1/2)\cos(2πt)\) whereas the speed of the tortoise is \(T(t)=\sqrt{t}\), where \(t\) is time measured in hours and speed is measured in kilometers per hour. If the race is over in 1 hour, who won the race and by how much? Use a calculator to determine the intersection points, if necessary, accurate to three decimal places. For exercises 53 - 55, find the area between the curves by integrating with respect to \(x\) and then with respect to \(y\). Is one method easier than the other? 
Do you obtain the same answer? 53) \(y=x^2+2x+1\) and \(y=−x^2−3x+4\) \(\dfrac{343}{24}\) square units 54) \(y=x^4\) and \(x=y^5\) 55) \(x=y^2−2\) and \(x=2y\) \(4\sqrt{3}\) square units For exercises 56 - 57, solve using calculus, then check your answer with geometry. 56) Determine the equations for the sides of the square that touches the unit circle on all four sides, as seen in the following figure. Find the area between the perimeter of this square and the unit circle. Is there another way to solve this without using calculus? 57) Find the area between the perimeter of the unit circle and the triangle created from \(y=2x+1,\,y=1−2x\) and \(y=−\dfrac{3}{5}\), as seen in the following figure. Is there a way to solve this without using calculus? \( \left(π−\dfrac{32}{25}\right)\) square units 1) Derive the formula for the volume of a sphere using the slicing method. 2) Use the slicing method to derive the formula for the volume of a cone. 3) Use the slicing method to derive the formula for the volume of a tetrahedron with side length a. 4) Use the disk method to derive the formula for the volume of a trapezoidal cylinder. 5) Explain when you would use the disk method versus the washer method. When are they interchangeable? For exercises 6 - 10, draw a typical slice and find the volume using the slicing method for the given volume. 6) A pyramid with height 6 units and square base of side 2 units, as pictured here. Here the cross-sections are squares taken perpendicular to the \(y\)-axis. We use the vertical cross-section of the pyramid through its center to obtain an equation relating \(x\) and \(y\). Here this would be the equation, \( y = 6 - 6x \). Since we need the dimensions of the square at each \(y\)-level, we solve this equation for \(x\) to get, \(x = 1 - \tfrac{y}{6}\). This is half the distance across the square cross-section at the \(y\)-level, so the side length of the square cross-section is, \(s = 2\left(1 - \tfrac{y}{6}\right).\) Thus, we have the area of a cross-section is, \(A(y) = \left[2\left(1 - \tfrac{y}{6}\right)\right]^2 = 4\left(1 - \tfrac{y}{6}\right)^2.\) \(\begin{align*} \text{Then},\quad V &= \int_0^6 4\left(1 - \tfrac{y}{6}\right)^2 \, dy \\[5pt] &= -24 \int_1^0 u^2 \, du, \quad \text{where} \, u = 1 - \tfrac{y}{6}, \, \text{so} \, du = -\tfrac{1}{6}\,dy, \quad \implies \quad -6\,du = dy \\[5pt] &= 24 \int_0^1 u^2 \, du = 24\dfrac{u^3}{3}\bigg|_0^1 \\[5pt] &= 8u^3\bigg|_0^1 \\[5pt] &= 8\left( 1^3 - 0^3 \right) \quad= \quad 8\, \text{units}^3 \end{align*}\) 7) A pyramid with height 4 units and a rectangular base with length 2 units and width 3 units, as pictured here. 8) A tetrahedron with a base side of 4 units,as seen here. \(V = \frac{32}{3\sqrt{2}} = \frac{16\sqrt{2}}{3}\) units3 9) A pyramid with height 5 units, and an isosceles triangular base with lengths of 6 units and 8 units, as seen here. 10) A cone of radius \( r\) and height \( h\) has a smaller cone of radius \( r/2\) and height \( h/2\) removed from the top, as seen here. The resulting solid is called a frustum. \(V = \frac{7\pi}{12} hr^2\) units3 For exercises 11 - 16, draw an outline of the solid and find the volume using the slicing method. 11) The base is a circle of radius \( a\). The slices perpendicular to the base are squares. 12) The base is a triangle with vertices \( (0,0),(1,0),\) and \( (0,1)\). Slices perpendicular to the \(xy\)-plane are semicircles. 
\(\displaystyle V = \int_0^1 \frac{\pi(1-x)^2}{8}\, dx \quad = \quad \frac{π}{24}\) units3 13) The base is the region under the parabola \( y=1−x^2\) in the first quadrant. Slices perpendicular to the \(xy\)-plane are squares. 14) The base is the region under the parabola \( y=1−x^2\) and above the \(x\)-axis. Slices perpendicular to the \(y\)-axis are squares. \(\displaystyle V = \int_0^1 4(1 - y)\,dy \quad = \quad 2\) units3 15) The base is the region enclosed by \( y=x^2)\) and \( y=9.\) Slices perpendicular to the \(x\)-axis are right isosceles triangles. 16) The base is the area between \( y=x\) and \( y=x^2\). Slices perpendicular to the \(x\)-axis are semicircles. \(\displaystyle V = \int_0^1 \frac{\pi}{8}\left( x - x^2 \right)^2 \, dx \quad=\quad \frac{π}{240}\) units3 For exercises 17 - 24, draw the region bounded by the curves. Then, use the disk or washer method to find the volume when the region is rotated around the \(x\)-axis. 17) \( x+y=8,\quad x=0\), and \( y=0\) 18) \( y=2x^2,\quad x=0,\quad x=4,\) and \( y=0\) \(\displaystyle V = \int_0^4 4\pi x^4\, dx \quad=\quad \frac{4096π}{5}\) units3 19) \( y=e^x+1,\quad x=0,\quad x=1,\) and \( y=0\) 20) \( y=x^4,\quad x=0\), and \( y=1\) \(\displaystyle V = \int_0^1 \pi\left( 1^2 - \left( x^4\right)^2\right)\, dx = \int_0^1 \pi\left( 1 - x^8\right)\, dx \quad = \quad \frac{8π}{9}\) units3 21) \( y=\sqrt{x},\quad x=0,\quad x=4,\) and \( y=0\) 22) \( y=\sin x,\quad y=\cos x,\) and \( x=0\) \(\displaystyle V = \int_0^{\pi/4} \pi \left( \cos^2 x - \sin^2 x\right) \, dx = \int_0^{\pi/4} \pi \cos 2x \, dx \quad=\quad \frac{π}{2}\) units3 23) \( y=\dfrac{1}{x},\quad x=2\), and \( y=3\) 24) \( x^2−y^2=9\) and \( x+y=9,\quad y=0\) and \( x=0\) \(V = 207π\) units3 For exercises 25 - 32, draw the region bounded by the curves. Then, find the volume when the region is rotated around the \(y\)-axis. 25) \( y=4−\dfrac{1}{2}x,\quad x=0,\) and \( y=0\) \(V = \frac{4π}{5}\) units3 27) \( y=3x^2,\quad x=0,\) and \( y=3\) 28) \( y=\sqrt{4−x^2},\quad y=0,\) and \( x=0\) \(V = \frac{16π}{3}\) units3 29) \( y=\dfrac{1}{\sqrt{x+1}},\quad x=0\), and \( x=3\) 30) \( x=\sec(y)\) and \( y=\dfrac{π}{4},\quad y=0\) and \( x=0\) \(V = π\) units3 31) \( y=\dfrac{1}{x+1},\quad x=0\), and \( x=2\) 32) \( y=4−x,\quad y=x,\) and \( x=0\) For exercises 33 - 40, draw the region bounded by the curves. Then, find the volume when the region is rotated around the \(x\)-axis. 33) \( y=x+2,\quad y=x+6,\quad x=0\), and \( x=5\) 34) \( y=x^2\) and \( y=x+2\) 35) \( x^2=y^3\) and \( x^3=y^2\) 36) \( y=4−x^2\) and \( y=2−x\) \(V = \frac{108π}{5}\) units3 37) [T] \( y=\cos x,\quad y=e^{−x},\quad x=0\), and \( x=1.2927\) 38) \( y=\sqrt{x}\) and \( y=x^2\) \(V = \frac{3π}{10}\) units3 39) \( y=\sin x,\quad y=5\sin x,\quad x=0\) and \( x=π\) 40) \( y=\sqrt{1+x^2}\) and \( y=\sqrt{4−x^2}\) \(V = 2\sqrt{6}π\) units3 For exercises 41 - 45, draw the region bounded by the curves. Then, use the washer method to find the volume when the region is revolved around the \(y\)-axis. 41) \( y=\sqrt{x},\quad x=4\), and \( y=0\) 42) \( y=x+2,\quad y=2x−1\), and \( x=0\) \(V = 9π\) units3 43) \( y=\dfrac{3}{x}\) and \( y=x^3\) 44) \( x=e^{2y},\quad x=y^2,\quad y=0\), and \( y=\ln(2)\) \(V = \dfrac{π}{20}(75−4\ln^5(2))\) units3 45) \( x=\sqrt{9−y^2},\quad x=e^{−y},\quad y=0\), and \( y=3\) 46) Yogurt containers can be shaped like frustums. Rotate the line \( y=\left(\frac{1}{m}\right)x\) around the \(y\)-axis to find the volume between \( y=a\) and \( y=b\). 
\(V = \dfrac{m^2π}{3}(b^3−a^3)\) units3 47) Rotate the ellipse \( \dfrac{x^2}{a^2}+\dfrac{y^2}{b^2}=1\) around the \(x\)-axis to approximate the volume of a football, as seen here. 48) Rotate the ellipse \( \dfrac{x^2}{a^2}+\dfrac{y^2}{b^2}=1\) around the \(y\)-axis to approximate the volume of a football. \(V = \frac{4a^2bπ}{3}\) units3 49) A better approximation of the volume of a football is given by the solid that comes from rotating \( y=\sin x) around the \(x\)-axis from \( x=0\) to \( x=π\). What is the volume of this football approximation, as seen here? 50) What is the volume of the Bundt cake that comes from rotating \( y=\sin x\) around the \(y\)-axis from \( x=0\) to \( x=π\)? \(V = 2π^2\) units3 For exercises 51 - 56, find the volume of the solid described. 51) The base is the region between \( y=x\) and \( y=x^2\). Slices perpendicular to the \(x\)-axis are semicircles. 52) The base is the region enclosed by the generic ellipse \( \dfrac{x^2}{a^2}+\dfrac{y^2}{b^2}=1.\) Slices perpendicular to the \(x\)-axis are semicircles. \(V = \frac{2ab^2π}{3}\) units3 53) Bore a hole of radius a down the axis of \(a\) right cone and through the base of radius \(b\), as seen here. 54) Find the volume common to two spheres of radius \(r\) with centers that are \(2h\) apart, as shown here. \(V = \frac{π}{12}(r+h)^2(6r−h)\) units3 55) Find the volume of a spherical cap of height \(h\) and radius \(r\) where \(h<r\), as seen here. 56) Find the volume of a sphere of radius \(R\) with a cap of height \(h\) removed from the top, as seen here. \(V = \dfrac{π}{3}(h+R)(h−2R)^2\) units3 For exercises 1 - 6, find the volume generated when the region between the two curves is rotated around the given axis. Use both the shell method and the washer method. Use technology to graph the functions and draw a typical slice by hand. 1) [T] Over the curve of \( y=3x,\) \(x=0,\) and \( y=3\) rotated around the \(y\)-axis. 2) [T] Under the curve of \( y=3x,\) \(x=0\), and \( x=3\) rotated around the \(y\)-axis. \(V = 54π\) units3 3) [T] Over the curve of \( y=3x,\) \(x=0\), and \( y=3\) rotated around the \(x\)-axis. 4) [T] Under the curve of \( y=3x,\) \(x=0,\) and \( x=3\) rotated around the \(x\)-axis. 5) [T] Under the curve of \( y=2x^3,x=0,\) and \( x=2\) rotated around the \(y\)-axis. 6) [T] Under the curve of \( y=2x^3,x=0,\) and \( x=2\) rotated around the \(x\)-axis. For exercises 7 - 16, use shells to find the volumes of the given solids. Note that the rotated regions lie between the curve and the \(x\)-axis and are rotated around the \(y\)-axis. 7) \( y=1−x^2,\) \(x=0,\) and \( x=1\) 8) \( y=5x^3,\) \(x=0\), and \( x=1\) 9) \( y=\dfrac{1}{x},\) \(x=1,\) and \( x=100\) 10) \( y=\sqrt{1−x^2},\) \(x=0\), and \( x=1\) \(V= \frac{2π}{3}\) units3 11) \( y=\dfrac{1}{1+x^2},\) \(x=0\),and \( x=3\) 12) \( y=\sin x^2,x=0\), and \( x=\sqrt{π}\) \(V= 2π\) units3 13) \( y=\dfrac{1}{\sqrt{1−x^2}},\) \(x=0\), and \( x=\frac{1}{2}\) 14) \( y=\sqrt{x},\) \(x=0\), and \( x=1\) 15) \( y=(1+x^2)^3,\) \(x=0\), and \( x=1\) 16) \( y=5x^3−2x^4,\) \(x=0\), and \( x=2\) \(V= \frac{64π}{3}\) units3 For exercises 17 - 26, use shells to find the volume generated by rotating the regions between the given curve and \( y=0\) around the \(x\)-axis. 
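As a worked sketch of the shell set-up used in this group, consider exercise 20 below (\( y=\ln(x),\) \(x=1\), and \( x=e\)), whose answer is printed with it. The region between the curve and \( y=0\) is rotated around the \(x\)-axis, so a horizontal shell at height \( y\) has radius \( y\) and length \( e−e^y\), giving
\(\displaystyle V=\int_0^1 2πy\,(e−e^y)\,dy = 2π\left[\frac{ey^2}{2}−(y−1)e^y\right]_0^1 = 2π\left(\frac{e}{2}−1\right) = π(e−2)\) units3,
in agreement with the answer listed for that exercise.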
18) \( y=x^2,\) \(x=0\), and \( x=2\) 19) \( y=e^x,\) \(x=0\),and \( x=1\) 20) \( y=\ln(x),\) \(x=1\), and \( x=e\) \(V= π(e−2)\) units3 21) \( x=\dfrac{1}{1+y^2},\) \(y=1\), and \( y=4\) 22) \( x=\dfrac{1+y^2}{y},\) \(y=0\), and \( y=2\) 23) \( x=\cos y,\) \(y=0\), and \( y=π\) 24) \( x=y^3−4y^2,\) \(x=−1\), and \( x=2\) 25) \( x=ye^y,\) \(x=−1\), and \( x=2\) 26) \( x=e^y\cos y,\) \(x=0\), and \( x=π\) \(V = e^ππ^2\) units3 For exercises 27 - 36, find the volume generated when the region between the curves is rotated around the given axis. 27) \( y=3−x,\) \(y=0,\) \(x=0\), and \( x=2\) rotated around the \(y\)-axis. 28) \( y=x^3,\) \(y=0\),and \( y=8\) rotated around the \(y\)-axis. \( V=\frac{64π}{5}\) units3 29) \( y=x^2,\) \(y=x,\) rotated around the \(y\)-axis. 30) \( y=\sqrt{x},\) \(x=0\), and \( x=1\) rotated around the line \( x=2.\) \(V=\frac{28π}{15}\) units3 31) \( y=\dfrac{1}{4−x},\) \(x=1,\) and \( x=2\) rotated around the line \( x=4\). 32) \( y=\sqrt{x}\) and \( y=x^2\) rotated around the \(y\)-axis. \(V=\frac{3π}{10}\) units3 33) \( y=\sqrt{x}\) and \( y=x^2\) rotated around the line \( x=2\). 34) \( x=y^3,\) \(y=\dfrac{1}{x},\) \(x=1\), and \( y=2\) rotated around the \(x\)-axis. \( \frac{52π}{5}\) units3 35) \( x=y^2\) and \( y=x\) rotated around the line \( y=2\). 36) [T] Left of \( x=\sin(πy)\), right of \( y=x\), around the \(y\)-axis. \(V \approx 0.9876\) units3 For exercises 37 - 44, use technology to graph the region. Determine which method you think would be easiest to use to calculate the volume generated when the function is rotated around the specified axis. Then, use your chosen method to find the volume. 37) [T] \( y=x^2\) and \( y=4x\) rotated around the \(y\)-axis. 38) [T] \( y=\cos(πx),y=\sin(πx),x=\frac{1}{4}\), and \( x=\frac{5}{4}\) rotated around the \(y\)-axis. \(V = 3\sqrt{2}\) units3 39) [T] \( y=x^2−2x,x=2,\) and \( x=4\) rotated around the \(y\)-axis. 40) [T] \( y=x^2−2x,x=2,\) and \( x=4\) rotated around the \(x\)-axis. \(V= \frac{496π}{15}\) units3 41) [T] \( y=3x^3−2,y=x\), and \( x=2\) rotated around the \(x\)-axis. 42) [T] \( y=3x^3−2,y=x\), and \( x=2\) rotated around the \(y\)-axis. \( V = \frac{398π}{15}\) units3 43) [T] \( x=\sin(πy^2)\) and \( x=\sqrt{2}y\) rotated around the \(x\)-axis. 44) [T] \( x=y^2,x=y^2−2y+1\), and \( x=2\) rotated around the \(y\)-axis. \( V =15.9074\) units3 For exercises 45 - 51, use the method of shells to approximate the volumes of some common objects, which are pictured in accompanying figures. 45) Use the method of shells to find the volume of a sphere of radius \( r\). 46) Use the method of shells to find the volume of a cone with radius \( r\) and height \( h\). \(V = \frac{1}{3}πr^2h\) units3 47) Use the method of shells to find the volume of an ellipse \( (x^2/a^2)+(y^2/b^2)=1\) rotated around the \(x\)-axis. 48) Use the method of shells to find the volume of a cylinder with radius \( r\) and height \( h\). \(V= πr^2h\) units3 49) Use the method of shells to find the volume of the donut created when the circle \( x^2+y^2=4\) is rotated around the line \( x=4\). 50) Consider the region enclosed by the graphs of \( y=f(x),y=1+f(x),x=0,y=0,\) and \( x=a>0\). What is the volume of the solid generated when this region is rotated around the \(y\)-axis? Assume that the function is defined over the interval \( [0,a]\). \( V=πa^2\) units3 51) Consider the function \( y=f(x)\), which decreases from \( f(0)=b\) to \( f(1)=0\). 
Set up the integrals for determining the volume, using both the shell method and the disk method, of the solid generated when this region, with \( x=0\) and \( y=0\), is rotated around the \(y\)-axis. Prove that both methods approximate the same volume. Which method is easier to apply? (Hint: Since \( f(x)\) is one-to-one, there exists an inverse \( f^{−1}(y)\).) For the following exercises, find the length of the functions over the given interval. 1) \(\displaystyle y=5x\) from \(\displaystyle x=0\) to \(\displaystyle x=2\) Solution: \(\displaystyle 2\sqrt{26}\) 2) \(\displaystyle y=−\frac{1}{2}x+25\) from \(\displaystyle x=1\) to \(\displaystyle x=4\) 3) \(\displaystyle x=4y\) from \(\displaystyle y=−1\) to \(\displaystyle y=1\) 4) Pick an arbitrary linear function \(\displaystyle x=g(y)\) over any interval of your choice \(\displaystyle (y_1,y_2).\) Determine the length of the function and then prove the length is correct by using geometry. 5) Find the surface area of the volume generated when the curve \(\displaystyle y=\sqrt{x}\) revolves around the \(\displaystyle x-axis\) from \(\displaystyle (1,1)\) to \(\displaystyle (4,2)\), as seen here. Solution: \(\displaystyle \frac{π}{6}(17\sqrt{17}−5\sqrt{5})\) 6) Find the surface area of the volume generated when the curve \(\displaystyle y=x^2\) revolves around the \(\displaystyle y-axis\) from \(\displaystyle (1,1)\) to \(\displaystyle (3,9)\). For the following exercises, find the lengths of the functions of \(\displaystyle x\) over the given interval. If you cannot evaluate the integral exactly, use technology to approximate it. 7) \(\displaystyle y=x^{3/2}\) from \(\displaystyle (0,0)\) to \(\displaystyle (1,1)\) Solution: \(\displaystyle \frac{13\sqrt{13}−8}{27}\) 9) \(\displaystyle y=\frac{1}{3}(x^2+2)^{3/2}\) from \(\displaystyle x=0\) to \(\displaystyle x=1\) Solution: \(\displaystyle \frac{4}{3}\) 10) \(\displaystyle y=\frac{1}{3}(x^2−2)^{3/2}\) from \(\displaystyle x=2\) to \(\displaystyle x=4\) 11) [T] \(\displaystyle y=e^x\) on \(\displaystyle x=0\) to \(\displaystyle x=1\) Solution: \(\displaystyle 2.0035\) 12) \(\displaystyle y=\frac{x^3}{3}+\frac{1}{4x}\) from \(\displaystyle x=1\) to \(\displaystyle x=3\) 13) \(\displaystyle y=\frac{x^4}{4}+\frac{1}{8x^2}\) from \(\displaystyle x=1\) to \(\displaystyle x=2\) Solution: \(\displaystyle \frac{123}{32}\) 14) \(\displaystyle y=\frac{2x^{3/2}}{3}−\frac{x^{1/2}}{2}\) from \(\displaystyle x=1\) to \(\displaystyle x=4\) 15) \(\displaystyle y=\frac{1}{27}(9x^2+6)^{3/2}\) from \(\displaystyle x=0\) to \(\displaystyle x=2\) Solution: \(\displaystyle 10\) 16) [T] \(\displaystyle y=sinx\) on \(\displaystyle x=0\) to \(\displaystyle x=π\) For the following exercises, find the lengths of the functions of \(\displaystyle y\) over the given interval. If you cannot evaluate the integral exactly, use technology to approximate it. 
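As a worked illustration of the set-up for arc length with respect to \(\displaystyle y\), take exercise 17 below. Writing the curve as a function of \(\displaystyle y\): from \(\displaystyle y=\frac{5−3x}{4}\) we get \(\displaystyle x=\frac{5−4y}{3}\), so \(\displaystyle \frac{dx}{dy}=−\frac{4}{3}\) and
\(\displaystyle L=\int_0^4\sqrt{1+\left(\frac{dx}{dy}\right)^2}\,dy=\int_0^4\sqrt{1+\frac{16}{9}}\,dy=\frac{5}{3}(4)=\frac{20}{3},\)
which matches the solution printed for exercise 17.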
17) \(\displaystyle y=\frac{5−3x}{4}\) from \(\displaystyle y=0\) to \(\displaystyle y=4\)
Solution: \(\displaystyle \frac{20}{3}\)
18) \(\displaystyle x=\frac{1}{2}(e^y+e^{−y})\) from \(\displaystyle y=−1\) to \(\displaystyle y=1\)
19) \(\displaystyle x=5y^{3/2}\) from \(\displaystyle y=0\) to \(\displaystyle y=1\)
Solution: \(\displaystyle \frac{1}{675}(229\sqrt{229}−8)\)
20) [T] \(\displaystyle x=y^2\) from \(\displaystyle y=0\) to \(\displaystyle y=1\)
21) \(\displaystyle x=\sqrt{y}\) from \(\displaystyle y=0\) to \(\displaystyle y=1\)
Solution: \(\displaystyle \frac{1}{8}(4\sqrt{5}+ln(9+4\sqrt{5}))\)
22) \(\displaystyle x=\frac{2}{3}(y^2+1)^{3/2}\) from \(\displaystyle y=1\) to \(\displaystyle y=3\)
23) [T] \(\displaystyle x=tany\) from \(\displaystyle y=0\) to \(\displaystyle y=\frac{3}{4}\)
Solution: \(\displaystyle 1.201\)
24) [T] \(\displaystyle x=cos^2y\) from \(\displaystyle y=−\frac{π}{2}\) to \(\displaystyle y=\frac{π}{2}\)
25) [T] \(\displaystyle x=4^y\) from \(\displaystyle y=0\) to \(\displaystyle y=2\)
Solution: \(\displaystyle 15.2341\)
26) [T] \(\displaystyle x=ln(y)\) on \(\displaystyle y=\frac{1}{e}\) to \(\displaystyle y=e\)
For the following exercises, find the surface area of the volume generated when the following curves revolve around the \(\displaystyle x-axis\). If you cannot evaluate the integral exactly, use your calculator to approximate it.
27) \(\displaystyle y=\sqrt{x}\) from \(\displaystyle x=2\) to \(\displaystyle x=6\)
Solution: \(\displaystyle \frac{49π}{3}\)
28) \(\displaystyle y=x^3\) from \(\displaystyle x=0\) to \(\displaystyle x=1\)
29) \(\displaystyle y=7x\) from \(\displaystyle x=−1\) to \(\displaystyle x=1\)
Solution: \(\displaystyle 70π\sqrt{2}\)
30) [T] \(\displaystyle y=\frac{1}{x^2}\) from \(\displaystyle x=1\) to \(\displaystyle x=3\)
31) \(\displaystyle y=\sqrt{4−x^2}\) from \(\displaystyle x=0\) to \(\displaystyle x=2\)
Solution: \(\displaystyle 8π\)
32) \(\displaystyle y=\sqrt{4−x^2}\) from \(\displaystyle x=−1\) to \(\displaystyle x=1\)
33) \(\displaystyle y=5x\) from \(\displaystyle x=1\) to \(\displaystyle x=5\)
Solution: \(\displaystyle 120π\sqrt{26}\)
34) [T] \(\displaystyle y=tanx\) from \(\displaystyle x=−\frac{π}{4}\) to \(\displaystyle x=\frac{π}{4}\)
For the following exercises, find the surface area of the volume generated when the following curves revolve around the \(\displaystyle y-axis\). If you cannot evaluate the integral exactly, use your calculator to approximate it.
Solution: \(\displaystyle \frac{π}{6}(17\sqrt{17}−1)\)
36) \(\displaystyle y=\frac{1}{2}x^2+\frac{1}{2}\) from \(\displaystyle x=0\) to \(\displaystyle x=1\)
37) \(\displaystyle y=x+1\) from \(\displaystyle x=0\) to \(\displaystyle x=3\)
Solution: \(\displaystyle 9\sqrt{2}π\)
38) [T] \(\displaystyle y=\frac{1}{x}\) from \(\displaystyle x=\frac{1}{2}\) to \(\displaystyle x=1\)
39) \(\displaystyle y=\sqrt[3]{x}\) from \(\displaystyle x=1\) to \(\displaystyle x=27\)
Solution: \(\displaystyle \frac{10\sqrt{10}π}{27}(73\sqrt{73}−1)\)
40) [T] \(\displaystyle y=3x^4\) from \(\displaystyle x=0\) to \(\displaystyle x=1\)
41) [T] \(\displaystyle y=\frac{1}{\sqrt{x}}\) from \(\displaystyle x=1\) to \(\displaystyle x=3\)
Solution: \(\displaystyle 25.645\)
42) [T] \(\displaystyle y=cosx\) from \(\displaystyle x=0\) to \(\displaystyle x=\frac{π}{2}\)
43) The base of a lamp is constructed by revolving a quarter circle \(\displaystyle y=\sqrt{2x−x^2}\) around the \(\displaystyle y-axis\) from \(\displaystyle x=1\) to \(\displaystyle x=2\), as seen here.
Create an integral for the surface area of this curve and compute it. 44) A light bulb is a sphere with radius \(\displaystyle 1/2\) in. with the bottom sliced off to fit exactly onto a cylinder of radius \(\displaystyle 1/4\) in. and length \(\displaystyle 1/3\) in., as seen here. The sphere is cut off at the bottom to fit exactly onto the cylinder, so the radius of the cut is \(\displaystyle 1/4\) in. Find the surface area (not including the top or bottom of the cylinder). 45) [T] A lampshade is constructed by rotating \(\displaystyle y=1/x\) around the \(\displaystyle x-axis\) from \(\displaystyle y=1\) to \(\displaystyle y=2\), as seen here. Determine how much material you would need to construct this lampshade—that is, the surface area—accurate to four decimal places. 46) [T] An anchor drags behind a boat according to the function \(\displaystyle y=24e^{−x/2}−24\), where \(\displaystyle y\) represents the depth beneath the boat and \(\displaystyle x\) is the horizontal distance of the anchor from the back of the boat. If the anchor is \(\displaystyle 23\) ft below the boat, how much rope do you have to pull to reach the anchor? Round your answer to three decimal places. 47) [T] You are building a bridge that will span \(\displaystyle 10\) ft. You intend to add decorative rope in the shape of \(\displaystyle y=5|sin((xπ)/5)|\), where \(\displaystyle x\) is the distance in feet from one end of the bridge. Find out how much rope you need to buy, rounded to the nearest foot. Solution: \(\displaystyle 23\) ft For the following exercises, find the exact arc length for the following problems over the given interval. 48) \(\displaystyle y=ln(sinx)\) from \(\displaystyle x=π/4\) to \(\displaystyle x=(3π)/4\). (Hint: Recall trigonometric identities.) 49) Draw graphs of \(\displaystyle y=x^2, y=x^6\), and \(\displaystyle y=x^{10}\). For \(\displaystyle y=x^n\), as \(\displaystyle n\) increases, formulate a prediction on the arc length from \(\displaystyle (0,0)\) to \(\displaystyle (1,1)\). Now, compute the lengths of these three functions and determine whether your prediction is correct. Solution: \(\displaystyle 2\) 50) Compare the lengths of the parabola \(\displaystyle x=y^2\) and the line \(\displaystyle x=by\) from \(\displaystyle (0,0)\) to \(\displaystyle (b^2,b)\) as \(\displaystyle b\) increases. What do you notice? 51) Solve for the length of \(\displaystyle x=y^2\) from \(\displaystyle (0,0)\) to \(\displaystyle (1,1)\). Show that \(\displaystyle x=(1/2)y^2\) from \(\displaystyle (0,0)\) to \(\displaystyle (2,2)\) is twice as long. Graph both functions and explain why this is so. Solution: Answers may vary 52) [T] Which is longer between \(\displaystyle (1,1)\) and \(\displaystyle (2,1/2)\): the hyperbola \(\displaystyle y=1/x\) or the graph of \(\displaystyle x+2y=3\)? 53) Explain why the surface area is infinite when \(\displaystyle y=1/x\) is rotated around the \(\displaystyle x-axis\) for \(\displaystyle 1≤x<∞,\) but the volume is finite. Solution: For more information, look up Gabriel's Horn. For the following exercises, find the work done. 1) Find the work done when a constant force \(\displaystyle F=12\)lb moves a chair from \(\displaystyle x=0.9\) to \(\displaystyle x=1.1\) ft. 2) How much work is done when a person lifts a \(\displaystyle 50\) lb box of comics onto a truck that is \(\displaystyle 3\) ft off the ground? Solution: \(\displaystyle 150\) ft-lb 3) What is the work done lifting a \(\displaystyle 20\) kg child from the floor to a height of \(\displaystyle 2\) m? 
(Note that \(\displaystyle 1\) kg equates to \(\displaystyle 9.8\)N) 4) Find the work done when you push a box along the floor \(\displaystyle 2\) m, when you apply a constant force of \(\displaystyle F=100N\). Solution: \(\displaystyle 200J\) 5) Compute the work done for a force \(\displaystyle F=12/x^2\)N from \(\displaystyle x=1\) to \(\displaystyle x=2\) m. 6) What is the work done moving a particle from \(\displaystyle x=0\) to \(\displaystyle x=1\) m if the force acting on it is \(\displaystyle F=3x^2\)N? Solution: \(\displaystyle 1\) J For the following exercises, find the mass of the one-dimensional object. 7) A wire that is \(\displaystyle 2\)ft long (starting at \(\displaystyle x=0\)) and has a density function of \(\displaystyle ρ(x)=x^2+2x\) lb/ft 8) A car antenna that is \(\displaystyle 3\) ft long (starting at \(\displaystyle x=0)\) and has a density function of \(\displaystyle ρ(x)=3x+2\) lb/ft 9) A metal rod that is \(\displaystyle 8\)in. long (starting at \(\displaystyle x=0\)) and has a density function of \(\displaystyle ρ(x)=e^{1/2x}\) lb/in. 10) A pencil that is \(\displaystyle 4\)in. long (starting at \(\displaystyle x=2\)) and has a density function of \(\displaystyle ρ(x)=5/x\) oz/in. Solution: \(\displaystyle ln(243)\) 11) A ruler that is \(\displaystyle 12\)in. long (starting at \(\displaystyle x=5\)) and has a density function of \(\displaystyle ρ(x)=ln(x)+(1/2)x^2\) oz/in. For exercises 12 - 16, find the mass of the two-dimensional object that is centered at the origin. 12) An oversized hockey puck of radius \(\displaystyle 2\)in. with density function \(\displaystyle ρ(x)=x^3−2x+5\) Solution: \(\displaystyle \frac{332π}{15}\) 13) A frisbee of radius \(\displaystyle 6\)in. with density function \(\displaystyle ρ(x)=e^{−x}\) 14) A plate of radius \(\displaystyle 10\)in. with density function \(\displaystyle ρ(x)=1+cos(πx)\) Solution: \(\displaystyle 100π\) 15) A jar lid of radius \(\displaystyle 3\)in. with density function \(\displaystyle ρ(x)=ln(x+1)\) 16) A disk of radius \(\displaystyle 5\)cm with density function \(\displaystyle ρ(x)=\sqrt{3x}\) Solution: \(\displaystyle 20π\sqrt{15}\) 17) A \(\displaystyle 12\)-in. spring is stretched to \(\displaystyle 15\) in. by a force of \(\displaystyle 75\)lb. What is the spring constant? 18) A spring has a natural length of \(\displaystyle 10\)cm. It takes \(\displaystyle 2\) J to stretch the spring to \(\displaystyle 15\) cm. How much work would it take to stretch the spring from \(\displaystyle 15\) cm to \(\displaystyle 20\) cm? Solution: \(\displaystyle 6\)J 19) A \(\displaystyle 1\)-m spring requires \(\displaystyle 10\) J to stretch the spring to \(\displaystyle 1.1\) m. How much work would it take to stretch the spring from \(\displaystyle 1\) m to \(\displaystyle 1.2\)m? 20) A spring requires \(\displaystyle 5\)J to stretch the spring from \(\displaystyle 8\) cm to \(\displaystyle 12\) cm, and an additional \(\displaystyle 4\) J to stretch the spring from \(\displaystyle 12\) cm to \(\displaystyle 14\) cm. What is the natural length of the spring? Solution: \(\displaystyle 5\) cm 21) A shock absorber is compressed 1 in. by a weight of 1 t. What is the spring constant? 22) A force of \(\displaystyle F=20x−x^3\)N stretches a nonlinear spring by \(\displaystyle x\) meters. What work is required to stretch the spring from \(\displaystyle x=0\) to \(\displaystyle x=2\) m? 
Solution: \(\displaystyle 36\) J 23) Find the work done by winding up a hanging cable of length \(\displaystyle 100\)ft and weight-density \(\displaystyle 5\)lb/ft. 24) For the cable in the preceding exercise, how much work is done to lift the cable \(\displaystyle 50\)ft? Solution: \(\displaystyle 18,750\) ft-lb 25) For the cable in the preceding exercise, how much additional work is done by hanging a \(\displaystyle 200\)lb weight at the end of the cable? 26) [T] A pyramid of height \(\displaystyle 500\)ft has a square base \(\displaystyle 800\) ft by \(\displaystyle 800\) ft. Find the area \(\displaystyle A\) at height \(\displaystyle h\). If the rock used to build the pyramid weighs approximately \(\displaystyle w=100lb/ft^3\), how much work did it take to lift all the rock? Solution: \(\displaystyle \frac{32}{3}×10^9ft-lb\) 27) [T] For the pyramid in the preceding exercise, assume there were \(\displaystyle 1000\) workers each working \(\displaystyle 10\) hours a day, \(\displaystyle 5\) days a week, \(\displaystyle 50\) weeks a year. If the workers, on average, lifted 10 100 lb rocks \(\displaystyle 2\)ft/hr, how long did it take to build the pyramid? 28) [T] The force of gravity on a mass \(\displaystyle m\) is \(\displaystyle F=−((GMm)/x^2)\) newtons. For a rocket of mass \(\displaystyle m=1000kg\), compute the work to lift the rocket from \(\displaystyle x=6400\) to \(\displaystyle x=6500\) km. (Note: \(\displaystyle G=6×10^{−17}N m^2/kg^2\) and \(\displaystyle M=6×10^{24}kg\).) Solution: \(\displaystyle 8.65×10^5J\) 29) [T] For the rocket in the preceding exercise, find the work to lift the rocket from \(\displaystyle x=6400\) to \(\displaystyle x=∞\). 30) [T] A rectangular dam is \(\displaystyle 40\) ft high and \(\displaystyle 60\) ft wide. Compute the total force \(\displaystyle F\) on the dam when a. the surface of the water is at the top of the dam and b. the surface of the water is halfway down the dam. Solution: \(\displaystyle a. 3,000,000\)lb, \(\displaystyle b. 749,000\)lb 31) [T] Find the work required to pump all the water out of a cylinder that has a circular base of radius \(\displaystyle 5\)ft and height \(\displaystyle 200\) ft. Use the fact that the density of water is \(\displaystyle 62\)lb/ft3. 32) [T] Find the work required to pump all the water out of the cylinder in the preceding exercise if the cylinder is only half full. Solution: \(\displaystyle 23.25π\) million ft-lb 33) [T] How much work is required to pump out a swimming pool if the area of the base is \(\displaystyle 800\)ft2, the water is \(\displaystyle 4\) ft deep, and the top is \(\displaystyle 1\) ft above the water level? Assume that the density of water is \(\displaystyle 62\)lb/ft3. 34) A cylinder of depth \(\displaystyle H\) and cross-sectional area \(\displaystyle A\) stands full of water at density \(\displaystyle ρ\). Compute the work to pump all the water to the top. Solution: \(\displaystyle \frac{AρH^2}{2}\) 35) For the cylinder in the preceding exercise, compute the work to pump all the water to the top if the cylinder is only half full. 36) A cone-shaped tank has a cross-sectional area that increases with its depth: \(\displaystyle A=(πr^2h^2)/H^3\). Show that the work to empty it is half the work for a cylinder with the same height and base. For the following exercises, calculate the center of mass for the collection of masses given. 
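As a brief worked illustration of the formula used in this group, the center of mass of point masses is \(\displaystyle \bar{x}=\frac{\sum m_ix_i}{\sum m_i}\) (and likewise for \(\displaystyle \bar{y}\) in the plane). For example, for the three unit masses of exercise 4 below,
\(\displaystyle \bar{x}=\frac{1+0+1}{3}=\frac{2}{3},\qquad \bar{y}=\frac{0+1+1}{3}=\frac{2}{3},\)
in agreement with the solution \(\displaystyle (\frac{2}{3},\frac{2}{3})\) printed there.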
1) \(\displaystyle m_1=2\) at \(\displaystyle x_1=1\) and \(\displaystyle m_2=4\) at \(\displaystyle x_2=2\) 2) \(\displaystyle m_1=1\) at \(\displaystyle x_1=−1\) and \(\displaystyle m_2=3\) at \(\displaystyle x_2=2\) 3) \(\displaystyle m=3\) at \(\displaystyle x=0,1,2,6\) 4) Unit masses at \(\displaystyle (x,y)=(1,0),(0,1),(1,1)\) Solution: \(\displaystyle (\frac{2}{3},\frac{2}{3})\) 5) \(\displaystyle m_1=1\) at \(\displaystyle (1,0)\) and \(\displaystyle m_2=4\) at \(\displaystyle (0,1)\) For the following exercises, compute the center of mass x–. 7) \(\displaystyle ρ=1\) for \(\displaystyle x∈(−1,3)\) 8) \(\displaystyle ρ=x^2\) for \(\displaystyle x∈(0,L)\) Solution: \(\displaystyle \frac{3L}{4}\) 9) \(\displaystyle ρ=1\) for \(\displaystyle x∈(0,1)\) and \(\displaystyle ρ=2\) for \(\displaystyle x∈(1,2)\) 10) \(\displaystyle ρ=sinx\) for \(\displaystyle x∈(0,π)\) Solution: \(\displaystyle \frac{π}{2}\) 11) \(\displaystyle ρ=cosx\) for \(\displaystyle x∈(0,\frac{π}{2})\) 12) \(\displaystyle ρ=e^x\) for \(\displaystyle x∈(0,2)\) Solution: \(\displaystyle \frac{e^2+1}{e^2−1}\) 13) \(\displaystyle ρ=x^3+xe^{−x}\) for \(\displaystyle x∈(0,1)\) 14) \(\displaystyle ρ=xsinx\) for \(\displaystyle x∈(0,π)\) Solution: \(\displaystyle \frac{π^2−4}{π}\) 15) \(\displaystyle ρ=\sqrt{x}\) for \(\displaystyle x∈(1,4)\) 16) \(\displaystyle ρ=lnx\) for \(\displaystyle x∈(1,e)\) Solution: \(\displaystyle \frac{1}{4}(1+e^2)\) For the following exercises, compute the center of mass \(\displaystyle (\bar{x},\bar{y})\). Use symmetry to help locate the center of mass whenever possible. 17) \(\displaystyle ρ=7\) in the square \(\displaystyle 0≤x≤1, 0≤y≤1\) 18) \(\displaystyle ρ=3\) in the triangle with vertices \(\displaystyle (0,0), (a,0)\), and \(\displaystyle (0,b)\) Solution: \(\displaystyle (\frac{a}{3},\frac{b}{3})\) 19) \(\displaystyle ρ=2\) for the region bounded by \(\displaystyle y=cos(x), y=−cos(x), x=−\frac{π}{2}\), and \(\displaystyle x=\frac{π}{2}\) For the following exercises, use a calculator to draw the region, then compute the center of mass \(\displaystyle (\bar{x},\bar{y})\). Use symmetry to help locate the center of mass whenever possible. 20) [T] The region bounded by \(\displaystyle y=cos(2x), x=−\frac{π}{4}\), and \(\displaystyle x=\frac{π}{4}\) Solution: \(\displaystyle (0,\frac{π}{8})\) 21) [T] The region between \(\displaystyle y=2x^2, y=0, x=0,\) and \(\displaystyle x=1\) 22) [T] The region between \(\displaystyle y=\frac{5}{4}x^2\) and \(\displaystyle y=5\) Solution: \(\displaystyle (0,3)\) 23) [T] Region between \(\displaystyle y=\sqrt{x}, y=ln(x), x=1,\) and \(\displaystyle x=4\) 24) [T] The region bounded by \(\displaystyle y=0, \frac{x^2}{4}+\frac{y^2}{9}=1\) Solution: \(\displaystyle (0,\frac{4}{π})\) 25) [T] The region bounded by \(\displaystyle y=0, x=0,\) and \(\displaystyle \frac{x^2}{4}+\frac{y^2}{9}=1\) 26) [T] The region bounded by \(\displaystyle y=x^2\) and \(\displaystyle y=x^4\) in the first quadrant For the following exercises, use the theorem of Pappus to determine the volume of the shape. 27) Rotating \(\displaystyle y=mx\) around the \(\displaystyle x\)-axis between \(\displaystyle x=0\) and \(\displaystyle x=1\) 28) Rotating \(\displaystyle y=mx\) around the \(\displaystyle y\)-axis between \(\displaystyle x=0\) and \(\displaystyle x=1\) Solution: \(\displaystyle \frac{mπ}{3}\) 29) A general cone created by rotating a triangle with vertices \(\displaystyle (0,0), (a,0),\) and \(\displaystyle (0,b)\) around the \(\displaystyle y\)-axis. 
Does your answer agree with the volume of a cone? 30) A general cylinder created by rotating a rectangle with vertices \(\displaystyle (0,0), (a,0),(0,b),\) and \(\displaystyle (a,b)\) around the \(\displaystyle y\) -axis. Does your answer agree with the volume of a cylinder? Solution: \(\displaystyle πa^2b\) 31) A sphere created by rotating a semicircle with radius \(\displaystyle a\) around the \(\displaystyle y\)-axis. Does your answer agree with the volume of a sphere? For the following exercises, use a calculator to draw the region enclosed by the curve. Find the area \(\displaystyle M\) and the centroid \(\displaystyle (\bar{x},\bar{y})\) for the given shapes. Use symmetry to help locate the center of mass whenever possible. 32) [T] Quarter-circle: \(\displaystyle y=\sqrt{1−x^2}, y=0\), and \(\displaystyle x=0\) Solution: \(\displaystyle (\frac{4}{3π},\frac{4}{3π})\) 33) [T] Triangle: \(\displaystyle y=x, y=2−x\), and \(\displaystyle y=0\) 34) [T] Lens: \(\displaystyle y=x^2\) and \(\displaystyle y=x\) 35) [T] Ring: \(\displaystyle y^2+x^2=1\) and \(\displaystyle y^2+x^2=4\) 36) [T] Half-ring: \(\displaystyle y^2+x^2=1, y^2+x^2=4,\) and \(\displaystyle y=0\) Solution: \(\displaystyle (0,\frac{28}{9π})\) 37) Find the generalized center of mass in the sliver between \(\displaystyle y=x^a\) and \(\displaystyle y=x^b\) with \(\displaystyle a>b\). Then, use the Pappus theorem to find the volume of the solid generated when revolving around the y-axis. 38) Find the generalized center of mass between \(\displaystyle y=a^2−x^2, x=0\), and \(\displaystyle y=0\). Then, use the Pappus theorem to find the volume of the solid generated when revolving around the y-axis. Solution: Center of mass: \(\displaystyle (\frac{a}{6},\frac{4a^2}{5}),\) volume: \(\displaystyle \frac{2πa^4}{9}\) 39) Find the generalized center of mass between \(\displaystyle y=bsin(ax), x=0,\) and \(\displaystyle x=\frac{π}{a}.\) Then, use the Pappus theorem to find the volume of the solid generated when revolving around the y-axis. 40) Use the theorem of Pappus to find the volume of a torus (pictured here). Assume that a disk of radius \(\displaystyle a\) is positioned with the left end of the circle at \(\displaystyle x=b, b>0,\) and is rotated around the y-axis. Solution: Find the center of mass \(\displaystyle (\bar{x},\bar{y})\) for a thin wire along the semicircle \(\displaystyle y=\sqrt{1−x^2}\) with unit mass. (Hint: Use the theorem of Pappus.) For the following exercises, find the derivative \(\displaystyle \frac{dy}{dx}\). 1) \(\displaystyle y=ln(2x)\) Solution: \(\displaystyle \frac{1}{x}\) 2) \(\displaystyle y=ln(2x+1)\) 3) \(\displaystyle y=\frac{1}{lnx}\) Solution: \(\displaystyle −\frac{1}{x(lnx)^2}\) For the following exercises, find the indefinite integral. 4) \(\displaystyle ∫\frac{dt}{3t}\) 5) \(\displaystyle ∫\frac{dx}{1+x}\) Solution: \(\displaystyle ln(x+1)+C\) For the following exercises, find the derivative \(\displaystyle dy/dx\). (You can use a calculator to plot the function and the derivative to confirm that it is correct.) 
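Before the derivative exercises, here is a brief symbolic check, using SymPy, of two of the center-of-mass answers quoted above: the rod with \(\displaystyle ρ=x^2\) on \(\displaystyle (0,L)\) (exercise 8) and the quarter-circle centroid (exercise 32 just above). This is an illustrative sketch, not part of the original exercise set.

```python
# Center-of-mass checks: xbar = ∫ x ρ dx / ∫ ρ dx in one dimension, and the
# standard centroid integrals for a plane region of uniform density.
import sympy as sp

x, L = sp.symbols('x L', positive=True)

# Exercise 8: rod with density x^2 on (0, L)
rod_xbar = sp.integrate(x * x**2, (x, 0, L)) / sp.integrate(x**2, (x, 0, L))
print(sp.simplify(rod_xbar))                 # 3*L/4

# Exercise 32: quarter circle under y = sqrt(1 - x^2), 0 <= x <= 1
y = sp.sqrt(1 - x**2)
A = sp.integrate(y, (x, 0, 1))               # area = pi/4
xbar = sp.integrate(x * y, (x, 0, 1)) / A
ybar = sp.integrate(y**2 / 2, (x, 0, 1)) / A
print(sp.simplify(xbar), sp.simplify(ybar))  # 4/(3*pi), 4/(3*pi)
```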
6) [T] \(\displaystyle y=\frac{ln(x)}{x}\) 7) [T] \(\displaystyle y=xln(x)\) Solution:\(\displaystyle ln(x)+1\) 8) [T] \(\displaystyle y=log_{10}x\) 9) [T] \(\displaystyle y=ln(sinx)\) Solution: \(\displaystyle cot(x)\) 10) [T] \(\displaystyle y=ln(lnx)\) 11) [T] \(\displaystyle y=7ln(4x)\) 12) [T] \(\displaystyle y=ln((4x)^7)\) 13) [T] \(\displaystyle y=ln(tanx)\) Solution: \(\displaystyle csc(x)secx\) 14) [T] \(\displaystyle y=ln(tan(3x))\) 15) [T] \(\displaystyle y=ln(cos^2x)\) Solution: \(\displaystyle −2tanx\) For the following exercises, find the definite or indefinite integral. 16) \(\displaystyle ∫^1_0\frac{dx}{3+x}\) 17) \(\displaystyle ∫^1_0\frac{dt}{3+2t}\) Solution: \(\displaystyle \frac{1}{2}ln(\frac{5}{3})\) 18) \(\displaystyle ∫^2_0\frac{xdx}{x^2+1}\) 19) \(\displaystyle ∫^2_0\frac{x^3dx}{x^2+1}\) Solution: \(\displaystyle 2−\frac{1}{2}ln(5)\) 20) \(\displaystyle ∫^e_2\frac{dx}{xlnx}\) 21) \(\displaystyle ∫^e_2\frac{dx}{(xln(x))^2}\) Solution: \(\displaystyle \frac{1}{ln(2)}−1\) 22) \(\displaystyle ∫\frac{cosxdx}{sinx}\) 23) \(\displaystyle ∫^{π/4}_0tanxdx\) Solution: \(\displaystyle \frac{1}{2}ln(2)\) 24) \(\displaystyle ∫cot(3x)dx\) 25) \(\displaystyle ∫\frac{(lnx)^2dx}{x}\) Solution: \(\displaystyle \frac{1}{3}(lnx)^3\) For the following exercises, compute \(\displaystyle dy/dx\) by differentiating \(\displaystyle lny\). 26) \(\displaystyle y=\sqrt{x^2+1}\) 27) \(\displaystyle y=\sqrt{x^2+1}\sqrt{x2^−1}\) Solution: \(\displaystyle \frac{2x^3}{\sqrt{x^2+1}\sqrt{x^2−1}}\) 28) \(\displaystyle y=e^{sinx}\) 29) \(\displaystyle y=x^{−1/x}\) Solution: \(\displaystyle x^{−2−(1/x)}(lnx−1)\) 30) \(\displaystyle y=e^{(ex)}\) 31) \(\displaystyle y=x^e\) Solution: \(\displaystyle ex^{e−1}\) 32) \(\displaystyle y=x^{(ex)}\) 33) \(\displaystyle y=\sqrt{x}\sqrt[3]{x}\sqrt[6]{x}\) 34) \(\displaystyle y=x^{−1/lnx}\) 35) \(\displaystyle y=e^{−lnx}\) Solution: \(\displaystyle −\frac{1}{x^2}\) For the following exercises, evaluate by any method. 36) \(\displaystyle ∫^{10}_5\frac{dt}{t}−∫^{10x}_{5x}\frac{dt}{t}\) 37) \(\displaystyle ∫^{e^π}_1\frac{dx}{x}+∫^{−1}_{−2}\frac{dx}{x}\) Solution: \(\displaystyle π−ln(2)\) 38) \(\displaystyle \frac{d}{dx}∫^1_x\frac{dt}{t}\) 39) \(\displaystyle \frac{d}{dx}∫^{x^2}_x\frac{dt}{t}\) 40) \(\displaystyle \frac{d}{dx}ln(secx+tanx)\) For the following exercises, use the function \(\displaystyle lnx\). If you are unable to find intersection points analytically, use a calculator. 41) Find the area of the region enclosed by \(\displaystyle x=1\) and \(\displaystyle y=5\) above \(\displaystyle y=lnx\). Solution: \(\displaystyle e^5−6units^2\) 42) [T] Find the arc length of \(\displaystyle lnx\) from \(\displaystyle x=1\) to \(\displaystyle x=2\). 43) Find the area between \(\displaystyle lnx\) and the x-axis from \(\displaystyle x=1\) to \(\displaystyle x=2\). Solution: \(\displaystyle ln(4)−1units^2\) 44) Find the volume of the shape created when rotating this curve from \(\displaystyle x=1\) to \(\displaystyle x=2\) around the x-axis, as pictured here. 45) [T] Find the surface area of the shape created when rotating the curve in the previous exercise from \(\displaystyle x=1\) to \(\displaystyle x=2\) around the x-axis. If you are unable to find intersection points analytically in the following exercises, use a calculator. 
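As an aside before the next group of exercises, here is one quick way to see the answer quoted for exercise 29 above (\(\displaystyle y=x^{−1/x}\)) by differentiating \(\displaystyle lny\): taking logs gives \(\displaystyle ln(y)=−\frac{ln(x)}{x}\), so \(\displaystyle \frac{y'}{y}=−\frac{\frac{1}{x}\cdot x−ln(x)}{x^2}=\frac{ln(x)−1}{x^2}\), and therefore \(\displaystyle y'=x^{−1/x}\frac{ln(x)−1}{x^2}=x^{−2−(1/x)}(lnx−1)\), matching the stated solution.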
46) Find the area of the hyperbolic quarter-circle enclosed by \(\displaystyle x=2\) and \(\displaystyle y=2\) above \(\displaystyle y=1/x.\) 47) [T] Find the arc length of \(\displaystyle y=1/x\) from \(\displaystyle x=1\) to \(\displaystyle x=4\). 48) Find the area under \(\displaystyle y=1/x\) and above the x-axis from \(\displaystyle x=1\) to \(\displaystyle x=4\). For the following exercises, verify the derivatives and antiderivatives. 49) \(\displaystyle \frac{d}{dx}ln(x+\sqrt{x^2+1})=\frac{1}{\sqrt{1+x^2}}\) 50) \(\displaystyle \frac{d}{dx}ln(\frac{x−a}{x+a})=\frac{2a}{(x^2−a^2)}\) 51) \(\displaystyle \frac{d}{dx}ln(\frac{1+\sqrt{1−x^2}}{x})=−\frac{1}{x\sqrt{1−x^2}}\) 52) \(\displaystyle \frac{d}{dx}ln(x+\sqrt{x^2−a^2})=\frac{1}{\sqrt{x^2−a^2}}\) 53) \(\displaystyle ∫\frac{dx}{xln(x)ln(lnx)}=ln(ln(lnx))+C\) True or False? If true, prove it. If false, find the true answer. 1) The doubling time for \(\displaystyle y=e^{ct}\) is \(\displaystyle (ln(2))/(ln(c))\). 2) If you invest \(\displaystyle $500\), an annual rate of interest of \(\displaystyle 3%\) yields more money in the first year than a \(\displaystyle 2.5%\) continuous rate of interest. Solution: True 3) If you leave a \(\displaystyle 100°C\) pot of tea at room temperature \(\displaystyle (25°C)\) and an identical pot in the refrigerator \(\displaystyle (5°C)\), with \(\displaystyle k=0.02\), the tea in the refrigerator reaches a drinkable temperature \(\displaystyle (70°C)\) more than \(\displaystyle 5\) minutes before the tea at room temperature. 4) If given a half-life of t years, the constant \(\displaystyle k\) for \(\displaystyle y=e^{kt}\) is calculated by \(\displaystyle k=ln(1/2)/t\). Solution: False; \(\displaystyle k=\frac{ln(2)}{t}\) For the following exercises, use \(\displaystyle y=y_0e^{kt}.\) 5) If a culture of bacteria doubles in \(\displaystyle 3\) hours, how many hours does it take to multiply by \(\displaystyle 10\)? 6) If bacteria increase by a factor of \(\displaystyle 10\) in \(\displaystyle 10\) hours, how many hours does it take to increase by \(\displaystyle 100\)? Solution: \(\displaystyle 20\) hours 7) How old is a skull that contains one-fifth as much radiocarbon as a modern skull? Note that the half-life of radiocarbon is \(\displaystyle 5730\) years. 8) If a relic contains \(\displaystyle 90%\) as much radiocarbon as new material, can it have come from the time of Christ (approximately \(\displaystyle 2000\) years ago)? Note that the half-life of radiocarbon is \(\displaystyle 5730\) years. Solution: No. The relic is approximately \(\displaystyle 871\) years old. 9) The population of Cairo grew from \(\displaystyle 5\) million to \(\displaystyle 10\) million in \(\displaystyle 20\) years. Use an exponential model to find when the population was \(\displaystyle 8\) million. 10) The populations of New York and Los Angeles are growing at \(\displaystyle 1%\) and \(\displaystyle 1.4%\) a year, respectively. Starting from \(\displaystyle 8\) million (New York) and \(\displaystyle 6\) million (Los Angeles), when are the populations equal? Solution: \(\displaystyle 71.92\) years 11) Suppose the value of \(\displaystyle $1\) in Japanese yen decreases at \(\displaystyle 2%\) per year. Starting from \(\displaystyle $1=¥250\), when will \(\displaystyle $1=¥1\)? 12) The effect of advertising decays exponentially. If \(\displaystyle 40%\) of the population remembers a new product after \(\displaystyle 3\) days, how long will \(\displaystyle 20%\)remember it? 
Solution: \(\displaystyle 5\) days \(\displaystyle 6\) hours \(\displaystyle 27\) minutes 13) If \(\displaystyle y=1000\) at \(\displaystyle t=3\) and \(\displaystyle y=3000\) at \(\displaystyle t=4\), what was \(\displaystyle y_0\) at \(\displaystyle t=0\)? 14) If \(\displaystyle y=100\) at \(\displaystyle t=4\) and \(\displaystyle y=10\) at \(\displaystyle t=8\), when does \(\displaystyle y=1\)? 15) If a bank offers annual interest of \(\displaystyle 7.5%\) or continuous interest of \(\displaystyle 7.25%,\) which has a better annual yield? 16) What continuous interest rate has the same yield as an annual rate of \(\displaystyle 9%\)? Solution: \(\displaystyle 8.618%\) 17) If you deposit \(\displaystyle $5000\) at \(\displaystyle 8%\) annual interest, how many years can you withdraw \(\displaystyle $500\) (starting after the first year) without running out of money? 18) You are trying to save \(\displaystyle $50,000\) in \(\displaystyle 20\) years for college tuition for your child. If interest is a continuous \(\displaystyle 10%,\) how much do you need to invest initially? Solution: $6766.76 19) You are cooling a turkey that was taken out of the oven with an internal temperature of \(\displaystyle 165°F\). After \(\displaystyle 10\) minutes of resting the turkey in a \(\displaystyle 70°F\) apartment, the temperature has reached \(\displaystyle 155°F\). What is the temperature of the turkey \(\displaystyle 20\) minutes after taking it out of the oven? 20) You are trying to thaw some vegetables that are at a temperature of \(\displaystyle 1°F\). To thaw vegetables safely, you must put them in the refrigerator, which has an ambient temperature of \(\displaystyle 44°F\). You check on your vegetables \(\displaystyle 2\) hours after putting them in the refrigerator to find that they are now \(\displaystyle 12°F\). Plot the resulting temperature curve and use it to determine when the vegetables reach \(\displaystyle 33°\). Solution: \(\displaystyle 9\) hours \(\displaystyle 13\) minutes 21) You are an archaeologist and are given a bone that is claimed to be from a Tyrannosaurus Rex. You know these dinosaurs lived during the Cretaceous Era (\(\displaystyle 146\) million years to \(\displaystyle 65\) million years ago), and you find by radiocarbon dating that there is \(\displaystyle 0.000001%\) the amount of radiocarbon. Is this bone from the Cretaceous? 22) The spent fuel of a nuclear reactor contains plutonium-239, which has a half-life of \(\displaystyle 24,000\) years. If \(\displaystyle 1\) barrel containing \(\displaystyle 10kg\) of plutonium-239 is sealed, how many years must pass until only \(\displaystyle 10g\) of plutonium-239 is left? Solution: \(\displaystyle 239,179\) years For the next set of exercises, use the following table, which features the world population by decade. [Table omitted in this copy: world population (millions) by years since 1950. Source: http://www.factmonster.com/ipka/A0762181.html.] 23) [T] The best-fit exponential curve to the data of the form \(\displaystyle P(t)=ae^{bt}\) is given by \(\displaystyle P(t)=2686e^{0.01604t}\). Use a graphing calculator to graph the data and the exponential curve together. 24) [T] Find and graph the derivative \(\displaystyle y′\) of your equation. Where is it increasing and what is the meaning of this increase? Solution: \(\displaystyle P'(t)=43e^{0.01604t}\). The population is always increasing. 25) [T] Find and graph the second derivative of your equation. Where is it increasing and what is the meaning of this increase?
26) [T] Find the predicted date when the population reaches \(\displaystyle 10\) billion. Using your previous answers about the first and second derivatives, explain why exponential growth is unsuccessful in predicting the future. Solution: The population reaches \(\displaystyle 10\) billion people in \(\displaystyle 2027\). For the next set of exercises, use the following table, which shows the population of San Francisco during the 19th century. [Table omitted in this copy: population of San Francisco (thousands) by years since 1850; only the row "20, 149.5" survives. Source: http://www.sfgenealogy.com/sf/history/hgpop.htm.] 27) [T] The best-fit exponential curve to the data of the form \(\displaystyle P(t)=ae^{bt}\) is given by \(\displaystyle P(t)=35.26e^{0.06407t}\). Use a graphing calculator to graph the data and the exponential curve together. 28) [T] Find and graph the derivative \(\displaystyle y′\) of your equation. Where is it increasing? What is the meaning of this increase? Is there a value where the increase is maximal? Solution: \(\displaystyle P'(t)=2.259e^{0.06407t}\). The population is always increasing. 29) [T] Find and graph the second derivative of your equation. Where is it increasing? What is the meaning of this increase? 1) [T] Find expressions for \(\displaystyle coshx+sinhx\) and \(\displaystyle coshx−sinhx.\) Use a calculator to graph these functions and ensure your expression is correct. Solution: \(\displaystyle e^x\) and \(\displaystyle e^{−x}\) 2) From the definitions of \(\displaystyle cosh(x)\) and \(\displaystyle sinh(x)\), find their antiderivatives. 3) Show that \(\displaystyle cosh(x)\) and \(\displaystyle sinh(x)\) satisfy \(\displaystyle y''=y\). 4) Use the quotient rule to verify that \(\displaystyle tanh'(x)=sech^2(x).\) 5) Derive \(\displaystyle cosh^2(x)+sinh^2(x)=cosh(2x)\) from the definition. 6) Take the derivative of the previous expression to find an expression for \(\displaystyle sinh(2x)\). 7) Prove \(\displaystyle sinh(x+y)=sinh(x)cosh(y)+cosh(x)sinh(y)\) by changing the expression to exponentials. 8) Take the derivative of the previous expression to find an expression for \(\displaystyle cosh(x+y).\) For the following exercises, find the derivatives of the given functions and graph along with the function to ensure your answer is correct. 9) [T] \(\displaystyle cosh(3x+1)\) Solution: \(\displaystyle 3sinh(3x+1)\) 10) [T] \(\displaystyle sinh(x^2)\) 11) [T] \(\displaystyle \frac{1}{cosh(x)}\) Solution: \(\displaystyle −tanh(x)sech(x)\) 12) [T] \(\displaystyle sinh(ln(x))\) 13) [T] \(\displaystyle cosh^2(x)+sinh^2(x)\) Solution: \(\displaystyle 4cosh(x)sinh(x)\) 14) [T] \(\displaystyle cosh^2(x)−sinh^2(x)\) 15) [T] \(\displaystyle tanh(\sqrt{x^2+1})\) Solution: \(\displaystyle \frac{xsech^2(\sqrt{x^2+1})}{\sqrt{x^2+1}}\) 16) [T] \(\displaystyle \frac{1+tanh(x)}{1−tanh(x)}\) 17) [T] \(\displaystyle sinh^6(x)\) Solution: \(\displaystyle 6sinh^5(x)cosh(x)\) 18) [T] \(\displaystyle ln(sech(x)+tanh(x))\) For the following exercises, find the antiderivatives for the given functions.
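Before the antiderivative exercises, here is a short symbolic spot-check, using SymPy, of two of the derivative answers quoted just above (exercises 11 and 15). It is only an illustrative sketch; each printed difference should simplify to zero.

```python
# Verify two hyperbolic derivatives against the stated answers.
import sympy as sp

x = sp.symbols('x', real=True)

ans11 = -sp.tanh(x) * sp.sech(x)                  # stated answer to exercise 11
diff11 = sp.diff(1 / sp.cosh(x), x) - ans11
print(sp.simplify(diff11.rewrite(sp.exp)))        # 0

u = sp.sqrt(x**2 + 1)
ans15 = x * sp.sech(u)**2 / u                     # stated answer to exercise 15
diff15 = sp.diff(sp.tanh(u), x) - ans15
print(sp.simplify(diff15.rewrite(sp.exp)))        # 0
```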
19) \(\displaystyle cosh(2x+1)\) Solution: \(\displaystyle \frac{1}{2}sinh(2x+1)+C\) 20) \(\displaystyle tanh(3x+2)\) 21) \(\displaystyle xcosh(x^2)\) Solution: \(\displaystyle \frac{1}{2}sinh(x^2)+C\) 22) \(\displaystyle 3x^3tanh(x^4)\) 23) \(\displaystyle cosh^2(x)sinh(x)\) Solution: \(\displaystyle \frac{1}{3}cosh^3(x)+C\) 24) \(\displaystyle tanh^2(x)sech^2(x)\) 25) \(\displaystyle \frac{sinh(x)}{1+cosh(x)}\) Solution: \(\displaystyle ln(1+cosh(x))+C\) 26) \(\displaystyle coth(x)\) 27) \(\displaystyle cosh(x)+sinh(x)\) Solution: \(\displaystyle cosh(x)+sinh(x)+C\) 28) \(\displaystyle (cosh(x)+sinh(x))^n\) For the following exercises, find the derivatives for the functions. 29) \(\displaystyle tanh^{−1}(4x)\) Solution: \(\displaystyle \frac{4}{1−16x^2}\) 30) \(\displaystyle sinh^{−1}(x^2)\) 31) \(\displaystyle sinh^{−1}(cosh(x))\) Solution: \(\displaystyle \frac{sinh(x)}{\sqrt{cosh^2(x)+1}}\) 32) \(\displaystyle cosh^{−1}(x^3)\) 33) \(\displaystyle tanh^{−1}(cos(x))\) Solution: \(\displaystyle −csc(x)\) 34) \(\displaystyle e^{sinh^{−1}(x)}\) 35) \(\displaystyle ln(tanh^{−1}(x))\) Solution: \(\displaystyle −\frac{1}{(x^2−1)tanh^{−1}(x)}\) For the following exercises, find the antiderivatives for the functions. 36) \(\displaystyle ∫\frac{dx}{4−x^2}\) 37) \(\displaystyle ∫\frac{dx}{a^2−x^2}\) Solution: \(\displaystyle \frac{1}{a}tanh^{−1}(\frac{x}{a})+C\) 38) \(\displaystyle ∫\frac{dx}{\sqrt{x^2+1}}\) 39) \(\displaystyle ∫\frac{xdx}{\sqrt{x^2+1}}\) Solution: \(\displaystyle \sqrt{x^2+1}+C\) 40) \(\displaystyle ∫−\frac{dx}{x\sqrt{1−x^2}}\) 41) \(\displaystyle ∫\frac{e^x}{\sqrt{e^{2x}−1}}\) Solution: \(\displaystyle cosh^{−1}(e^x)+C\) 42) \(\displaystyle ∫−\frac{2x}{x^4−1}\) For the following exercises, use the fact that a falling body with friction equal to velocity squared obeys the equation \(\displaystyle dv/dt=g−v^2\). 43) Show that \(\displaystyle v(t)=\sqrt{g}tanh(\sqrt{g}t)\) satisfies this equation. 44) Derive the previous expression for \(\displaystyle v(t)\) by integrating \(\displaystyle \frac{dv}{g−v^2}=dt\). 45) [T] Estimate how far a body has fallen in \(\displaystyle 12\) seconds by finding the area underneath the curve of \(\displaystyle v(t)\). Solution: \(\displaystyle 37.30\) For the following exercises, use this scenario: A cable hanging under its own weight has a slope \(\displaystyle S=dy/dx\) that satisfies \(\displaystyle dS/dx=c\sqrt{1+S^2}\). The constant \(\displaystyle c\) is the ratio of cable density to tension. 46) Show that \(\displaystyle S=sinh(cx)\) satisfies this equation. 47) Integrate \(\displaystyle dy/dx=sinh(cx)\) to find the cable height \(\displaystyle y(x)\) if \(\displaystyle y(0)=1/c\). Solution: \(\displaystyle y=\frac{1}{c}cosh(cx)\) 48) Sketch the cable and determine how far down it sags at \(\displaystyle x=0\). For the following exercises, solve each problem. 49) [T] A chain hangs from two posts \(\displaystyle 2\) m apart to form a catenary described by the equation \(\displaystyle y=2cosh(x/2)−1\). Find the slope of the catenary at the left fence post. Solution: \(\displaystyle −0.521095\) 50) [T] A chain hangs from two posts four meters apart to form a catenary described by the equation \(\displaystyle y=4cosh(x/4)−3.\) Find the total length of the catenary (arc length). 51) [T] A high-voltage power line is a catenary described by \(\displaystyle y=10cosh(x/10)\). Find the ratio of the area under the catenary to its arc length. What do you notice?
52) A telephone line is a catenary described by \(\displaystyle y=acosh(x/a).\) Find the ratio of the area under the catenary to its arc length. Does this confirm your answer for the previous question? 53) Prove the formula for the derivative of \(\displaystyle y=sinh^{−1}(x)\) by differentiating \(\displaystyle x=sinh(y).\) (Hint: Use hyperbolic trigonometric identities.) 54) Prove the formula for the derivative of \(\displaystyle y=cosh^{−1}(x)\) by differentiating \(\displaystyle x=cosh(y).\) 55) Prove the formula for the derivative of \(\displaystyle y=sech^{−1}(x)\) by differentiating \(\displaystyle x=sech(y).\) 56) Prove that \(\displaystyle (cosh(x)+sinh(x))^n=cosh(nx)+sinh(nx).\) 57) Prove the expression for \(\displaystyle sinh^{−1}(x).\) Multiply \(\displaystyle x=sinh(y)=(1/2)(e^y−e^{−y})\) by \(\displaystyle 2e^y\) and solve for \(\displaystyle y\). Does your expression match the textbook? 58) Prove the expression for \(\displaystyle cosh^{−1}(x).\) Multiply \(\displaystyle x=cosh(y)=(1/2)(e^y+e^{−y})\) by \(\displaystyle 2e^y\) and solve for \(\displaystyle y\). Does your expression match the textbook? True or False? Justify your answer with a proof or a counterexample. 1) The amount of work to pump the water out of a half-full cylinder is half the amount of work to pump the water out of the full cylinder. Solution: False 2) If the force is constant, the amount of work to move an object from \(\displaystyle x=a\) to \(\displaystyle x=b\) is \(\displaystyle F(b−a)\). 3) The disk method can be used in any situation in which the washer method is successful at finding the volume of a solid of revolution. 4) If the half-life of \(\displaystyle seaborgium-266\) is \(\displaystyle 360\) ms, then \(\displaystyle k=(ln(2))/360.\) For the following exercises, use the requested method to determine the volume of the solid. 5) The volume that has a base of the ellipse \(\displaystyle x^2/4+y^2/9=1\) and cross-sections of an equilateral triangle perpendicular to the \(\displaystyle y-axis\). Use the method of slicing. Solution: \(\displaystyle 32\sqrt{3}\) 6) \(\displaystyle y=x^2−x\), from \(\displaystyle x=1\) to \(\displaystyle x=4\), rotated around the y-axis using the washer method 7) \(\displaystyle x=y^2\) and \(\displaystyle x=3y\) rotated around the y-axis using the washer method Solution: \(\displaystyle \frac{162π}{5}\) 8) \(\displaystyle x=2y^2−y^3, x=0\), and \(\displaystyle y=0\) rotated around the x-axis using cylindrical shells For the following exercises, find a. the area of the region, b. the volume of the solid when rotated around the x-axis, and c. the volume of the solid when rotated around the y-axis. Use whichever method seems most appropriate to you. 9) \(\displaystyle y=x^3,x=0,y=0\), and \(\displaystyle x=2\) Solution: \(\displaystyle a. 4, b. \frac{128π}{7}, c. \frac{64π}{5}\) 10) \(\displaystyle y=x^2−x\) and \(\displaystyle x=0\) 11) [T] \(\displaystyle y=ln(x)+2\) and \(\displaystyle y=x\) Solution: \(\displaystyle a. 1.949, b. 21.952, c. 17.099\) 12) \(\displaystyle y=x^2\) and \(\displaystyle y=\sqrt{x}\) 13) \(\displaystyle y=5+x, y=x^2, x=0\), and \(\displaystyle x=1\) Solution: \(\displaystyle a. \frac{31}{6},b. \frac{452π}{15}, c. \frac{31π}{6}\) 14) Below \(\displaystyle x^2+y^2=1\) and above \(\displaystyle y=1−x\) 15) Find the mass of \(\displaystyle ρ=e^{−x}\) on a disk centered at the origin with radius \(\displaystyle 4\).
Solution: \(\displaystyle 245.282\) 16) Find the center of mass for \(\displaystyle ρ=tan^2x\) on \(\displaystyle x∈(−\frac{π}{4},\frac{π}{4})\). 17) Find the mass and the center of mass of \(\displaystyle ρ=1\) on the region bounded by \(\displaystyle y=x^5\) and \(\displaystyle y=\sqrt{x}\). Solution: Mass: \(\displaystyle \frac{1}{2},\) center of mass: \(\displaystyle (\frac{18}{35},\frac{9}{11})\) For the following exercises, find the requested arc lengths. 18) The length of \(\displaystyle x\) for \(\displaystyle y=cosh(x)\) from \(\displaystyle x=0\) to \(\displaystyle x=2\). 19) The length of \(\displaystyle y\) for \(\displaystyle x=3−\sqrt{y}\) from \(\displaystyle y=0\) to \(\displaystyle y=4\) Solution: \(\displaystyle \sqrt{17}+\frac{1}{8}ln(33+8\sqrt{17})\) For the following exercises, find the surface area and volume when the given curves are revolved around the specified axis. 20) The shape created by revolving the region between \(\displaystyle y=4+x, y=3−x, x=0,\) and \(\displaystyle x=2\) rotated around the y-axis. 21) The loudspeaker created by revolving \(\displaystyle y=1/x\) from \(\displaystyle x=1\) to \(\displaystyle x=4\) around the x-axis. Solution: Volume: \(\displaystyle \frac{3π}{4},\) surface area: \(\displaystyle π(\sqrt{2}−sinh^{−1}(1)+sinh^{−1}(16)−\frac{\sqrt{257}}{16})\) For the following exercises, consider the Karun-3 dam in Iran. Its shape can be approximated as an isosceles triangle with height \(\displaystyle 205\) m and width \(\displaystyle 388\) m. Assume the current depth of the water is \(\displaystyle 180\) m. The density of water is \(\displaystyle 1000\) kg/m3. 22) Find the total force on the wall of the dam. 23) You are a crime scene investigator attempting to determine the time of death of a victim. It is noon and \(\displaystyle 45°F\) outside and the temperature of the body is \(\displaystyle 78°F\). You know the cooling constant is \(\displaystyle k=0.00824°F/min\). When did the victim die, assuming that a human's temperature is \(\displaystyle 98°F\)? Solution: 11:02 a.m. For the following exercise, consider the stock market crash in 1929 in the United States. The table lists the Dow Jones industrial average per year leading up to the crash. [Table omitted in this copy: Dow Jones industrial average value ($) by year after 1920. Source: http://stockcharts.com/freecharts/hi...a19201940.html] 24) [T] The best-fit exponential curve to these data is given by \(\displaystyle y=40.71+1.224^x\). Why do you think the gains of the market were unsustainable? Use first and second derivatives to help justify your answer. What would this model predict the Dow Jones industrial average to be in 2014? For the following exercises, consider the catenoid, the only solid of revolution that has a minimal surface, or zero mean curvature. A catenoid in nature can be found when stretching soap between two rings. 25) Find the volume of the catenoid \(\displaystyle y=cosh(x)\) from \(\displaystyle x=−1\) to \(\displaystyle x=1\) that is created by rotating this curve around the x-axis, as shown here. Solution: \(\displaystyle π(1+sinh(1)cosh(1))\) 26) Find the surface area of the catenoid \(\displaystyle y=cosh(x)\) from \(\displaystyle x=−1\) to \(\displaystyle x=1\) that is created by rotating this curve around the x-axis. Gilbert Strang (MIT) and Edwin "Jed" Herman (Harvey Mudd) with many contributing authors. This content by OpenStax is licensed with a CC-BY-SA-NC 4.0 license. Download for free at http://cnx.org.
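As a quick numerical companion to the crime-scene cooling problem above (exercise 23), the sketch below solves Newton's law of cooling for the elapsed time since death; the variable names are illustrative and the numbers are taken straight from the exercise.

```python
# Newton's law of cooling: T(t) = T_ambient + (T_alive - T_ambient) * exp(-k t).
# Solve 78 = 45 + (98 - 45) * exp(-k t) for t, the minutes elapsed since death.
from math import log

T_ambient, T_now, T_alive = 45.0, 78.0, 98.0   # °F, from exercise 23
k = 0.00824                                     # cooling constant (per minute), from exercise 23

t = log((T_alive - T_ambient) / (T_now - T_ambient)) / k
print(round(t, 1), "minutes before noon")       # ≈ 57.5 minutes, i.e. about 11:02 a.m.
```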
Jonathan Bootle Sumcheck Arguments and their Applications 📺 Abstract Jonathan Bootle Alessandro Chiesa Katerina Sotiraki We introduce a class of interactive protocols, which we call *sumcheck arguments*, that establishes a novel connection between the sumcheck protocol (Lund et al. JACM 1992) and folding techniques for Pedersen commitments (Bootle et al. EUROCRYPT 2016). Informally, we consider a general notion of bilinear commitment over modules, and show that the sumcheck protocol applied to a certain polynomial associated with the commitment scheme yields a succinct argument of knowledge for openings of the commitment. Building on this, we additionally obtain succinct arguments for the NP-complete language R1CS over certain rings. Sumcheck arguments enable us to recover as a special case numerous prior works in disparate cryptographic settings (such as discrete logarithms, pairings, RSA groups, lattices), providing one abstract framework to understand them all. Further, we answer open questions raised in prior works, such as obtaining a lattice-based succinct argument from the SIS assumption for satisfiability problems over rings. A non-PCP Approach to Succinct Quantum-Safe Zero-Knowledge 📺 Abstract Jonathan Bootle Vadim Lyubashevsky Ngoc Khanh Nguyen Gregor Seiler Today's most compact zero-knowledge arguments are based on the hardness of the discrete logarithm problem and related classical assumptions. If one is interested in quantum-safe solutions, then all of the known techniques stem from the PCP-based framework of Kilian (STOC 92) which can be instantiated based on the hardness of any collision-resistant hash function. Both approaches produce asymptotically logarithmic sized arguments but, by exploiting extra algebraic structure, the discrete logarithm arguments are a few orders of magnitude more compact in practice than the generic constructions.\\ In this work, we present the first (poly)-logarithmic \emph{post-quantum} zero-knowledge arguments that deviate from the PCP approach. At the core of succinct zero-knowledge proofs are succinct commitment schemes (in which the commitment and the opening proof are sub-linear in the message size), and we propose two such constructions based on the hardness of the (Ring)-Short Integer Solution (Ring-SIS) problem, each having certain trade-offs. For commitments to $N$ secret values, the communication complexity of our first scheme is $\tilde{O}(N^{1/c})$ for any positive integer $c$, and $O(\log^2 N)$ for the second. %Both of our protocols have somewhat large \emph{slack}, which in lattice constructions is the ratio of the norm of the extracted secrets to the norm of the secrets that the honest prover uses in the proof. The lower this factor, the smaller we can choose the practical parameters. For a fixed value of this factor, our $\tilde{O}(N^{1/c})$-argument actually achieves lower communication complexity. Both of these are a significant theoretical improvement over the previously best lattice construction by Bootle et al. (CRYPTO 2018) which gave $O(\sqrt{N})$-sized proofs. Linear-Time Arguments with Sublinear Verification from Tensor Codes 📺 Abstract Jonathan Bootle Alessandro Chiesa Jens Groth Minimizing the computational cost of the prover is a central goal in the area of succinct arguments. In particular, it remains a challenging open problem to construct a succinct argument where the prover runs in linear time and the verifier runs in polylogarithmic time. 
We make progress towards this goal by presenting a new linear-time probabilistic proof. For any fixed ε > 0, we construct an interactive oracle proof (IOP) that, when used for the satisfiability of an N-gate arithmetic circuit, has a prover that uses O(N) field operations and a verifier that uses O(N^ε) field operations. The sublinear verifier time is achieved in the holographic setting for every circuit (the verifier has oracle access to a linear-size encoding of the circuit that is computable in linear time). When combined with a linear-time collision-resistant hash function, our IOP immediately leads to an argument system where the prover performs O(N) field operations and hash computations, and the verifier performs O(N^ε) field operations and hash computations (given a short digest of the N-gate circuit). Foundations of Fully Dynamic Group Signatures Abstract Jonathan Bootle Andrea Cerulli Pyrros Chaidos Essam Ghadafi Jens Groth Group signatures allow members of a group to anonymously sign on behalf of the group. Membership is administered by a designated group manager. The group manager can also reveal the identity of a signer if and when needed to enforce accountability and deter abuse. For group signatures to be applicable in practice, they need to support fully dynamic groups, i.e., users may join and leave at any time. Existing security definitions for fully dynamic group signatures are informal, have shortcomings, and are mutually incompatible. We fill the gap by providing a formal rigorous security model for fully dynamic group signatures. Our model is general and is not tailored toward a specific design paradigm and can therefore, as we show, be used to argue about the security of different existing constructions following different design paradigms. Our definitions are stringent and when possible incorporate protection against maliciously chosen keys. We consider both the case where the group management and tracing signatures are administered by the same authority, i.e., a single group manager, and also the case where those roles are administered by two separate authorities, i.e., a group manager and an opening authority. We also show that a specialization of our model captures existing models for static and partially dynamic schemes. In the process, we identify a subtle gap in the security achieved by group signatures using revocation lists. We show that in such schemes new members achieve a slightly weaker notion of traceability. The flexibility of our security model allows us to capture such relaxation of traceability. Algebraic Techniques for Short(er) Exact Lattice-Based Zero-Knowledge Proofs Abstract Jonathan Bootle Vadim Lyubashevsky Gregor Seiler A key component of many lattice-based protocols is a zero-knowledge proof of knowledge of a vector $$\vec {s}$$ with small coefficients satisfying $$A\vec {s}=\vec {u}\bmod \,q$$. While there exist fairly efficient proofs for a relaxed version of this equation which prove the knowledge of $$\vec {s}'$$ and c satisfying $$A\vec {s}'=\vec {u}c$$ where $$\Vert \vec {s}'\Vert \gg \Vert \vec {s}\Vert $$ and c is some small element in the ring over which the proof is performed, the proofs for the exact version of the equation are considerably less practical. The best such proof technique is an adaptation of Stern's protocol (Crypto '93), for proving knowledge of nearby codewords, to larger moduli.
The scheme is a $$\varSigma $$ -protocol, each of whose iterations has soundness error $$2{/}3$$ , and thus requires over 200 repetitions to obtain soundness error of $$2^{-128}$$ , which is the main culprit behind the large size of the proofs produced. In this paper, we propose the first lattice-based proof system that significantly outperforms Stern-type proofs for proving knowledge of a short $$\vec {s}$$ satisfying $$A\vec {s}=\vec {u}\bmod \,q$$ . Unlike Stern's proof, which is combinatorial in nature, our proof is more algebraic and uses various relaxed zero-knowledge proofs as sub-routines. The main savings in our proof system comes from the fact that each round has soundness error of $$1{/}n$$ , where n is the number of columns of A. For typical applications, n is a few thousand, and therefore our proof needs to be repeated around 10 times to achieve a soundness error of $$2^{-128}$$ . For concrete parameters, it produces proofs that are around an order of magnitude smaller than those produced using Stern's approach. Sub-linear Lattice-Based Zero-Knowledge Arguments for Arithmetic Circuits 📺 Abstract Carsten Baum Jonathan Bootle Andrea Cerulli Rafael del Pino Jens Groth Vadim Lyubashevsky We propose the first zero-knowledge argument with sub-linear communication complexity for arithmetic circuit satisfiability over a prime $${p}$$ whose security is based on the hardness of the short integer solution (SIS) problem. For a circuit with $${N}$$ gates, the communication complexity of our protocol is $$O\left( \sqrt{{N}{\lambda }\log ^3{{N}}}\right) $$ , where $${\lambda }$$ is the security parameter. A key component of our construction is a surprisingly simple zero-knowledge proof for pre-images of linear relations whose amortized communication complexity depends only logarithmically on the number of relations being proved. This latter protocol is a substantial improvement, both theoretically and in practice, over the previous results in this line of research of Damgård et al. (CRYPTO 2012), Baum et al. (CRYPTO 2016), Cramer et al. (EUROCRYPT 2017) and del Pino and Lyubashevsky (CRYPTO 2017), and we believe it to be of independent interest. Efficient Batch Zero-Knowledge Arguments for Low Degree Polynomials Abstract Jonathan Bootle Jens Groth Bootle et al. (EUROCRYPT 2016) construct an extremely efficient zero-knowledge argument for arithmetic circuit satisfiability in the discrete logarithm setting. However, the argument does not treat relations involving commitments, and furthermore, for simple polynomial relations, the complex machinery employed is unnecessary.In this work, we give a framework for expressing simple relations between commitments and field elements, and present a zero-knowledge argument which, by contrast with Bootle et al., is constant-round and uses fewer group operations, in the case where the polynomials in the relation have low degree. Our method also directly yields a batch protocol, which allows many copies of the same relation to be proved and verified in a single argument more efficiently with only a square-root communication overhead in the number of copies.We instantiate our protocol with concrete polynomial relations to construct zero-knowledge arguments for membership proofs, polynomial evaluation proofs, and range proofs. Our work can be seen as a unified explanation of the underlying ideas of these protocols. In the instantiations of membership proofs and polynomial evaluation proofs, we also achieve better efficiency than the state of the art. 
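To make the repetition counts quoted in the exact lattice-based proof abstract above concrete, here is a small back-of-the-envelope sketch: pushing a per-round soundness error of eps down to $2^{-128}$ takes about $128/\log_2(1/\text{eps})$ repetitions. The choice n = 4096 below merely stands in for "a few thousand" columns and is not taken from the paper.

```python
# Repetitions needed to amplify a per-round soundness error eps down to 2^-128.
from math import ceil, log2

def repetitions(eps, target_bits=128):
    return ceil(target_bits / log2(1 / eps))

print(repetitions(2 / 3))      # 219  -> "over 200 repetitions" for Stern-type proofs
print(repetitions(1 / 4096))   # 11   -> "around 10" when each round has error 1/n, n a few thousand
```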
LWE Without Modular Reduction and Improved Side-Channel Attacks Against BLISS Abstract Jonathan Bootle Claire Delaplace Thomas Espitau Pierre-Alain Fouque Mehdi Tibouchi This paper is devoted to analyzing the variant of Regev's learning with errors (LWE) problem in which modular reduction is omitted: namely, the problem (ILWE) of recovering a vector $$\mathbf {s}\in \mathbb {Z}^n$$ given polynomially many samples of the form $$(\mathbf {a},\langle \mathbf {a},\mathbf {s}\rangle + e)\in \mathbb {Z}^{n+1}$$ where $$\mathbf { a}$$ and e follow fixed distributions. Unsurprisingly, this problem is much easier than LWE: under mild conditions on the distributions, we show that the problem can be solved efficiently as long as the variance of e is not superpolynomially larger than that of $$\mathbf { a}$$. We also provide almost tight bounds on the number of samples needed to recover $$\mathbf {s}$$.Our interest in studying this problem stems from the side-channel attack against the BLISS lattice-based signature scheme described by Espitau et al. at CCS 2017. The attack targets a quadratic function of the secret that leaks in the rejection sampling step of BLISS. The same part of the algorithm also suffers from a linear leakage, but the authors claimed that this leakage could not be exploited due to signature compression: the linear system arising from it turns out to be noisy, and hence key recovery amounts to solving a high-dimensional problem analogous to LWE, which seemed infeasible. However, this noisy linear algebra problem does not involve any modular reduction: it is essentially an instance of ILWE, and can therefore be solved efficiently using our techniques. This allows us to obtain an improved side-channel attack on BLISS, which applies to 100% of secret keys (as opposed to $${\approx }7\%$$ in the CCS paper), and is also considerably faster. Arya: Nearly Linear-Time Zero-Knowledge Proofs for Correct Program Execution Abstract Jonathan Bootle Andrea Cerulli Jens Groth Sune Jakobsen Mary Maller There have been tremendous advances in reducing interaction, communication and verification time in zero-knowledge proofs but it remains an important challenge to make the prover efficient. We construct the first zero-knowledge proof of knowledge for the correct execution of a program on public and private inputs where the prover computation is nearly linear time. This saves a polylogarithmic factor in asymptotic performance compared to current state of the art proof systems.We use the TinyRAM model to capture general purpose processor computation. An instance consists of a TinyRAM program and public inputs. The witness consists of additional private inputs to the program. The prover can use our proof system to convince the verifier that the program terminates with the intended answer within given time and memory bounds. Our proof system has perfect completeness, statistical special honest verifier zero-knowledge, and computational knowledge soundness assuming linear-time computable collision-resistant hash functions exist. The main advantage of our new proof system is asymptotically efficient prover computation. The prover's running time is only a superconstant factor larger than the program's running time in an apples-to-apples comparison where the prover uses the same TinyRAM model. 
Our proof system is also efficient on the other performance parameters; the verifier's running time and the communication are sublinear in the execution time of the program and we only use a log-logarithmic number of rounds. Linear-Time Zero-Knowledge Proofs for Arithmetic Circuit Satisfiability Jonathan Bootle Andrea Cerulli Essam Ghadafi Jens Groth Mohammad Hajiabadi Sune K. Jakobsen Efficient Zero-Knowledge Arguments for Arithmetic Circuits in the Discrete Log Setting Jonathan Bootle Andrea Cerulli Pyrros Chaidos Jens Groth Christophe Petit Carsten Baum (1) Andrea Cerulli (5) Pyrros Chaidos (2) Alessandro Chiesa (2) Rafael del Pino (1) Claire Delaplace (1) Thomas Espitau (1) Essam Ghadafi (2) Jens Groth (7) Mohammad Hajiabadi (1) Sune K. Jakobsen (1) Sune Jakobsen (1) Vadim Lyubashevsky (3) Mary Maller (1) Ngoc Khanh Nguyen (1) Christophe Petit (1) Gregor Seiler (2) Katerina Sotiraki (1) Mehdi Tibouchi (1)
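For readers who want to see the sumcheck protocol that the first abstract above builds on, here is a minimal, self-contained sketch of the classical protocol of Lund et al. over a prime field. Everything in it (the field modulus, the example polynomial, and the function names) is invented for illustration; it is not the commitment-based sumcheck argument construction from the paper.

```python
# Classical sumcheck: the prover convinces the verifier of the value of
# H = sum over {0,1}^n of g(x_1,...,x_n), for a low-degree polynomial g over a prime field.
import random

P = 2**61 - 1              # illustrative prime field modulus
N_VARS, DEG = 3, 2         # number of variables and per-variable degree bound of g

def g(x):                  # example polynomial: g(x1,x2,x3) = x1*x2 + 2*x3 + x1*x3^2
    x1, x2, x3 = x
    return (x1 * x2 + 2 * x3 + x1 * x3 * x3) % P

def true_sum():
    total = 0
    for m in range(2 ** N_VARS):
        total = (total + g([(m >> i) & 1 for i in range(N_VARS)])) % P
    return total

def round_polynomial(prefix, j):
    """Prover's message s_j: evaluations of X -> sum over boolean suffixes of
    g(prefix, X, suffix) at DEG+1 points, which determine s_j uniquely."""
    evals = []
    n_suffix = N_VARS - j - 1
    for t in range(DEG + 1):
        acc = 0
        for m in range(2 ** n_suffix):
            suffix = [(m >> i) & 1 for i in range(n_suffix)]
            acc = (acc + g(prefix + [t] + suffix)) % P
        evals.append((t, acc))
    return evals

def lagrange_eval(evals, x):
    """Evaluate the unique degree-<=DEG polynomial through `evals` at x, mod P."""
    total = 0
    for i, (xi, yi) in enumerate(evals):
        num, den = 1, 1
        for k, (xj, _) in enumerate(evals):
            if i != k:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

def run_sumcheck():
    claim = true_sum()                       # honest prover's claimed sum
    prefix = []
    for j in range(N_VARS):
        s_j = round_polynomial(prefix, j)
        # Verifier: s_j(0) + s_j(1) must equal the running claim.
        assert (lagrange_eval(s_j, 0) + lagrange_eval(s_j, 1)) % P == claim
        r = random.randrange(P)              # verifier's random challenge
        claim = lagrange_eval(s_j, r)
        prefix.append(r)
    assert claim == g(prefix)                # final check: a single evaluation of g
    return True

print("sumcheck accepted:", run_sumcheck())
```

A cheating prover who starts from a wrong sum must misstate some s_j, and the random challenge exposes that lie except with probability roughly DEG/P per round, which is the usual soundness intuition behind the protocol.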
Scattering Amplitudes and Wilson Loops in Twistor Space pp 95-118 | Cite as Tree-level superamplitudes and the integrands of loop corrections are invariant under the Yangian of the superconformal symmetry \({\mathcal {Y}}({\mathfrak {psu}}(2,2|4)\), which is represented in momentum twistor space by the generators [1, 2]. Momentum Twistor Tree-level Superamplitudes Super Gauge Transformations Dual Conformal Symmetry Remainder Function Appendix: Component Expansions The superconnection \({\mathbb {A}}\) on chiral superspace is determined by the superspace constraints and supergauge condition \(\theta ^{\alpha a}{\mathcal {A}}_{\alpha a }=0\). Up to order fourth order in the fermions, the constraints are solved by the following expansion $$\begin{aligned} {\mathcal {A}}&= A + \mathrm{i}|\theta ^a\rangle [\bar{\psi }_a| + \frac{\mathrm{i}}{2}|\theta ^a\rangle \langle \theta ^b|D\phi _{ab} -\frac{1}{3!}\varepsilon _{abcd}|\theta ^a\rangle \langle \theta ^b|\,D\langle \theta ^c\psi ^d\rangle \nonumber \\&\quad +\frac{\mathrm{i}}{4!}\varepsilon _{abcd}|\theta ^a\rangle \langle \theta ^b|\,D\langle \theta ^c|G|\theta ^d\rangle + \cdots \\ |{\mathcal {A}}_a\rangle&= \frac{\mathrm{i}}{2}\phi _{ab}|\theta ^b\rangle -\frac{1}{3}\varepsilon _{abcd}|\theta ^b\rangle \langle \theta ^c\psi ^d\rangle +\frac{\mathrm{i}}{8}\varepsilon _{abcd}|\theta ^b\rangle \langle \theta ^c|G|\theta ^d\rangle +\cdots \ .\nonumber \end{aligned}$$ The corresponding supercurvatures have component expansions up to second order in the fermions as follows $$\begin{aligned} {\mathcal {F}}_{\dot{\alpha }\dot{\beta }}^+(x,\theta )&= F_{\dot{\alpha }\dot{\beta }}^{+} +\mathrm{i}\langle \theta ^a|D_{(\dot{\alpha }}\bar{\psi }_{\dot{\beta })a} +\frac{\mathrm{i}}{2}\langle \theta ^a|D_{(\dot{\alpha }}\langle \theta ^b|D_{\dot{\beta })}\phi _{ab} -\frac{\mathrm{i}}{2}\langle \theta ^a\theta ^b\rangle \left\{ \bar{\psi }_{a(\dot{\alpha }},\bar{\psi }_{\dot{\beta })b}\right\} +\cdots \nonumber \\ {\mathcal {F}}_{\alpha \beta }^-(x,\theta )&= F_{\alpha \beta }^{-} + \mathrm{i}\theta ^a_{(\alpha }D_{\beta )\dot{\beta }}\bar{\psi }_a^{\dot{\beta }} +\mathrm{i}\theta ^{\ a}_{(\alpha }\theta ^{\ b}_{\beta )} \left( \Box \phi _{ab} - \left\{ \bar{\psi }^{\dot{\alpha }}_{\ a},\bar{\psi }_{\dot{\alpha }b}\right\} \right) +\frac{1}{4}\theta ^{\gamma b}\theta ^a_{(\alpha }F^-_{\beta )\gamma }\phi _{ab}+\cdots \nonumber \\ {\mathcal {F}}_{\dot{\alpha }a}(x,\theta )&= \bar{\psi }_{\dot{\alpha }a}+\theta ^{\alpha b}D_{\alpha \dot{\alpha }}\phi _{ab} + \frac{\mathrm{i}\varepsilon _{abcd}}{3!}\theta ^{\alpha b}D_{\alpha \dot{\alpha }}\langle \theta ^c\psi ^d\rangle +\cdots \nonumber \\ {\mathcal {W}}_{ab}(x,\theta )&= \phi _{ab}+\mathrm{i}\varepsilon _{abcd}\langle \theta ^c\psi ^d\rangle +\frac{1}{2}\varepsilon _{abcd}\langle \theta ^c|G|\theta ^d\rangle + \frac{1}{4}\left[ \phi _{ac},\phi _{bd}\right] \langle \theta ^c\theta ^d\rangle + \cdots \, . \end{aligned}$$ See Refs. [32, 40] for further information. J.M. Drummond, J.M. Henn, J. Plefka, Yangian symmetry of scattering amplitudes in N \(=\) 4 super Yang-Mills theory. JHEP 0905, 046 (2009), [arXiv:0902.2987]Google Scholar J. Drummond, L. Ferro, Yangians, Grassmannians and T-duality. JHEP 1007, 027 (2010), [arXiv:1001.3348]Google Scholar R. Akhoury, Mass divergences of wide angle scattering amplitudes. Phys. Rev. D19, 1250 (1979)ADSGoogle Scholar A.H. Mueller, On the asymptotic behavior of the sudakov form-factor. Phys. Rev. D20, 2037 (1979)MathSciNetADSGoogle Scholar J.C. 
Collins, Algorithm to compute corrections to the Sudakov form-factor. Phys. Rev. D22, 1478 (1980)ADSGoogle Scholar A. Sen, Asymptotic behavior of the Sudakov form-factor in QCD. Phys. Rev. D24, 3281 (1981)ADSGoogle Scholar G.F. Sterman, Summation of large corrections to short distance Hadronic cross-sections. Nucl. Phys. B281, 310 (1987)ADSCrossRefGoogle Scholar J. Botts, G.F. Sterman, Hard elastic scattering in QCD: leading behavior. Nucl. Phys. B325, 62 (1989)ADSCrossRefGoogle Scholar S. Catani, L. Trentadue, Resummation of the QCD perturbative series for hard processes. Nucl. Phys. B327, 323 (1989)ADSCrossRefGoogle Scholar G. Korchemsky, Sudakov form-factor in QCD. Phys. Lett. B220, 629 (1989)MathSciNetADSCrossRefGoogle Scholar G. Korchemsky, Double logarithmic asymptotics in QCD. Phys. Lett. B217, 330–334 (1989)MathSciNetADSCrossRefGoogle Scholar L. Magnea, G.F. Sterman, Analytic continuation of the Sudakov form-factor in QCD. Phys. Rev. D42, 4222–4227 (1990)ADSGoogle Scholar G. Korchemsky, G. Marchesini, Resummation of large infrared corrections using Wilson loops. Phys. Lett. B313, 433–440 (1993)ADSCrossRefGoogle Scholar S. Catani, The singular behavior of QCD amplitudes at two loop order. Phys. Lett. B427, 161–171 (1998), [hep-ph/9802439]Google Scholar G.F. Sterman, M.E. Tejeda-Yeomans, Multiloop amplitudes and resummation. Phys. Lett. B552, 48–56 (2003), [hep-ph/0210130]Google Scholar G. Korchemsky, J. Drummond, E. Sokatchev, Conformal properties of four-gluon planar amplitudes and Wilson loops. Nucl. Phys. B795, 385–408 (2008), [arXiv:0707.0243]Google Scholar J. Drummond, J. Henn, G. Korchemsky, E. Sokatchev, Conformal Ward identities for Wilson loops and a test of the duality with gluon amplitudes. Nucl. Phys. B826, 337–364 (2010), [arXiv:0712.1223]Google Scholar I. Korchemskaya, G. Korchemsky, On lightlike Wilson loops. Phys. Lett. B287, 169–175 (1992)ADSCrossRefGoogle Scholar A. Bassetto, I. Korchemskaya, G. Korchemsky, G. Nardelli, Gauge invariance and anomalous dimensions of a light cone Wilson loop in lightlike axial gauge. Nucl. Phys. B408, 62–90 (1993), [hep-ph/9303314]Google Scholar S. Ivanov, G. Korchemsky, A. Radyushkin, Infrared asymptotics of perturbative QCD: contour gauges. Yad. Fiz. 44, 230–240 (1986)Google Scholar G. Korchemsky, G. Marchesini, Structure function for large X and renormalization of Wilson loop. Nucl. Phys. B406, 225–258 (1993), [hep-ph/9210281]Google Scholar Z. Bern, L.J. Dixon, V.A. Smirnov, Iteration of planar amplitudes in maximally supersymmetric Yang-Mills theory at three loops and beyond. Phys. Rev. D72, 085001 (2005), [hep-th/0505205]Google Scholar L.F. Alday, R. Roiban, Scattering amplitudes, Wilson loops and the String/Gauge theory correspondence. Phys. Rep. 468, 153–211 (2008), [arXiv:0807.1889]Google Scholar G. Korchemsky, E. Sokatchev, Symmetries and analytic properties of scattering amplitudes in N \(=\) 4 SYM theory. Nucl. Phys. B832, 1–51 (2010), [arXiv:0906.1737]Google Scholar A. Sever, P. Vieira, Symmetries of the N = 4 SYM S-matrix, arXiv:0908.2437Google Scholar S. Caron-Huot, S. He, Jumpstarting the all-loop S-matrix of planar N \(=\) 4 super Yang-Mills. JHEP 1207, 174 (2012), [arXiv:1112.1060]Google Scholar M. Bullimore, D. Skinner, Holomorphic Linking, Loop Equations and Scattering Amplitudes in Twistor Space, arXiv:1101.1329Google Scholar D. Gaiotto, J. Maldacena, A. Sever, P. Vieira, Pulling the straps of polygons. JHEP 1112, 011 (2011), [arXiv:1102.0062]Google Scholar S. 
Caron-Huot, Superconformal symmetry and two-loop amplitudes in planar N \(=\) 4 super Yang-Mills, arXiv:1105.5606 (Temporary entry)Google Scholar A. Brandhuber, P. Heslop, G. Travaglini, MHV amplitudes in N \(=\) 4 super Yang-Mills and Wilson loops. Nucl. Phys. B794, 231–243 (2008), [arXiv:0707.1153]Google Scholar L. Mason, D. Skinner, The complete planar S-matrix of N \(= \)4 SYM as a Wilson loop in Twistor space. JHEP 1012, 018 (2010), [arXiv:1009.2225]Google Scholar R. Boels, L. Mason, D. Skinner, Supersymmetric gauge theories in Twistor space. JHEP 0702, 014 (2007), [hep-th/0604040]Google Scholar N. Woodhouse, Real methods in Twistor theory. Class. Quant. Grav. 2, 257–291 (1985)MathSciNetADSCrossRefzbMATHGoogle Scholar L.F. Alday, D. Gaiotto, J. Maldacena, A. Sever, P. Vieira, An operator product expansion for polygonal null Wilson loops. JHEP 1104, 088 (2011), [arXiv:1006.2788]Google Scholar N. Beisert, B. Eden, M. Staudacher, Transcendentality and crossing. J. Stat. Mech. 0701, P01021 (2007), [hep-th/0610251]Google Scholar L.F. Alday, J. Maldacena, Minimal Surfaces in AdS and the Eight-Gluon Scattering Amplitude at Strong Coupling, arXiv:0903.4707Google Scholar L.F. Alday, D. Gaiotto, J. Maldacena, Thermodynamic bubble Ansatz. JHEP 1109, 032 (2011), [arXiv:0911.4708]Google Scholar L.F. Alday, J. Maldacena, A. Sever, P. Vieira, Y-system for scattering amplitudes. J. Phys. A A43, 485401 (2010), [arXiv:1002.2459]Google Scholar A. Belitsky, G. Korchemsky, E. Sokatchev, Are scattering amplitudes dual to super Wilson loops? Nucl. Phys. B855, 333–360 (2012), [arXiv:1103.3008]Google Scholar Bullimore M.R. (2014) Anomalies. In: Scattering Amplitudes and Wilson Loops in Twistor Space. Springer Theses (Recognizing Outstanding Ph.D. Research). Springer, Cham. https://doi.org/10.1007/978-3-319-00909-4_6
Question 772 interest tax shield, capital structure, leverage A firm issues debt and uses the funds to buy back equity. Assume that there are no costs of financial distress or transactions costs. Which of the following statements about interest tax shields is NOT correct? (a) Higher debt leads to higher interest expense. (b) Higher interest expense leads to lower profit before tax, following on from above. (c) Lower profit before tax leads to lower tax payments, following on from above. (d) Lower tax payments lead to higher cash flow from assets, following on from above. (e) Lower profit after tax leads to a lower share price, following on from above. ##\text{CFFA}_\text{U}## $48.5m Cash flow from assets excluding interest tax shields (unlevered) ##\text{CFFA}_\text{L}## $50m Cash flow from assets including interest tax shields (levered) ##\text{WACC}_\text{BeforeTax}## 10% pa Weighted average cost of capital before tax ##\text{WACC}_\text{AfterTax}## 9.7% pa Weighted average cost of capital after tax ##r_\text{EL}## 11.25% pa Cost of levered equity (e) $431.111m Question 237 WACC, Miller and Modigliani, interest tax shield Which of the following discount rates should be the highest for a levered company? Ignore the costs of financial distress. (a) Cost of debt (##r_\text{D}##). (b) Unlevered cost of equity (##r_\text{E, U}##). (c) Levered cost of equity (##r_\text{E, L}##). (d) Levered before-tax WACC (##r_\text{V, LxITS}##). (e) Levered after-tax WACC (##r_\text{V, LwITS}##). Question 104 CAPM, payout policy, capital structure, Miller and Modigliani, risk Assume that there exists a perfect world with no transaction costs, no asymmetric information, no taxes, no agency costs, equal borrowing rates for corporations and individual investors, the ability to short the risk free asset, semi-strong form efficient markets, the CAPM holds, investors are rational and risk-averse and there are no other market frictions. For a firm operating in this perfect world, which statement(s) are correct? (i) When a firm changes its capital structure and/or payout policy, share holders' wealth is unaffected. (ii) When the idiosyncratic risk of a firm's assets increases, share holders do not expect higher returns. (iii) When the systematic risk of a firm's assets increases, share holders do not expect higher returns. (d) Only (i) and (ii) are true. (e) All statements (i), (ii) and (iii) are true. Three years ago Frederika bought a house for $400,000. Frederika's residential property has an expected total return of 7% pa. She rents her house out for $2,500 per month, paid in advance. Every 12 months she plans to increase the rental payments. What is the expected annual capital yield of the property? (b) -0.2724% (d) 2% Question 416 real estate, market efficiency, income and capital returns, DDM, CAPM A residential real estate investor believes that house prices will grow at a rate of 5% pa and that rents will grow by 2% pa forever. All rates are given as nominal effective annual returns. Assume that: His forecast is true. Real estate is and always will be fairly priced and the capital asset pricing model (CAPM) is true. Ignore all costs such as taxes, agent fees, maintenance and so on. All rental income cash flow is paid out to the owner, so there is no re-investment and therefore no additions or improvements made to the property. The non-monetary benefits of owning real estate and renting remain constant. Which one of the following statements is NOT correct? 
Over time: (a) The rental yield will fall and approach zero. (b) The total return will fall and approach the capital return (5% pa). (c) One or all of the following must fall: the systematic risk of real estate, the risk free rate or the market risk premium. (d) If the country's nominal wealth growth rate is 4% pa and the nominal real estate growth rate is 5% pa then real estate will approach 100% of the country's wealth over time. (e) If the country's nominal gross domestic production (GDP) growth rate is 4% pa and the nominal real estate rent growth rate is 2% pa then real estate rent will approach 100% of the country's GDP over time. Question 695 utility, risk aversion, utility function Mr Blue, Miss Red and Mrs Green are people with different utility functions. Which of the statements about the 3 utility functions is NOT correct? (a) All rational people prefer more wealth to less. (b) Mr Blue is risk averse. (c) Miss Red is risk neutral. (d) Mrs Green is risk averse. (e) Mrs Green may enjoy gambling. (a) Mr Blue prefers more wealth to less. Mrs Green enjoys losing wealth. (b) Miss Red has the same utility no matter how much wealth she has. (c) Mr Blue is risk averse. (d) Miss Red is risk averse. (e) Mrs Green is risk loving. (a) Mr Blue and Miss Red prefer more wealth to less. (b) Mrs Green enjoys losing wealth. (d) Miss Red is risk neutral. (e) Mrs Green is risk averse. (a) Neither Miss Red nor Mrs Green would appear rational to an economist. (b) Mrs Green is satiated when she has $25 of wealth. That is her bliss point. (c) Mrs Green is risk loving when she has between zero and $25 of wealth, same as Mr Blue. (d) Mrs Green is risk averse when she has between $25 and $50 of wealth. (e) Mrs Green is risk loving when she has between $50 and $100 of wealth, same as Mr Blue. Mr Blue, Miss Red and Mrs Green are people with different utility functions. Note that a fair gamble is a bet that has an expected value of zero, such as paying $0.50 to win $1 in a coin flip with heads or nothing if it lands tails. Fairly priced insurance is when the expected present value of the insurance premiums is equal to the expected loss from the disaster that the insurance protects against, such as the cost of rebuilding a home after a catastrophic fire. (a) Mr Blue, Miss Red and Mrs Green all prefer more wealth to less. This is rational from an economist's point of view. (b) Mr Blue is risk averse. He will not enjoy a fair gamble and would like to buy fairly priced insurance. (c) Miss Red is risk-neutral. She will not enjoy a fair gamble but wouldn't oppose it either. Similarly with fairly priced insurance. (d) Mrs Green is risk-loving. She would enjoy a fair gamble and would dislike fairly priced insurance. (e) Mr Blue would like to buy insurance, but only if it is fairly or under priced. Question 775 utility, utility function Below is a graph of 3 peoples' utility functions, Mr Blue (U=W^(1/2) ), Miss Red (U=W/10) and Mrs Green (U=W^2/1000). Assume that each of them currently have $50 of wealth. Which of the following statements about them is NOT correct? (a) Mr Blue would prefer to invest his wealth in a well diversified portfolio of stocks rather than a single stock, assuming that all stocks had the same total risk and return. (b) Mrs Green would prefer to invest her wealth in a single stock rather than a well diversified portfolio of stocks, assuming that all stocks had the same total risk and return. (c) The popularity of insurance only makes sense if people are similar to Mr Blue. 
(d) CAPM theory only makes sense if people are similar to Miss Red. (e) The popularity of casino gambling and lottery tickets only make sense if people are similar to Mrs Green. Question 779 mean and median returns, return distribution, arithmetic and geometric averages, continuously compounding rate Fred owns some BHP shares. He has calculated BHP's monthly returns for each month in the past 30 years using this formula: ###r_\text{t monthly}=\ln⁡ \left( \dfrac{P_t}{P_{t-1}} \right)### He then took the arithmetic average and found it to be 0.8% per month using this formula: ###\bar{r}_\text{monthly}= \dfrac{ \displaystyle\sum\limits_{t=1}^T{\left( r_\text{t monthly} \right)} }{T} =0.008=0.8\% \text{ per month}### He also found the standard deviation of these monthly returns which was 15% per month: ###\sigma_\text{monthly} = \dfrac{ \displaystyle\sum\limits_{t=1}^T{\left( \left( r_\text{t monthly} - \bar{r}_\text{monthly} \right)^2 \right)} }{T} =0.15=15\%\text{ per month}### Assume that the past historical average return is the true population average of future expected returns and the stock's returns calculated above ##(r_\text{t monthly})## are normally distributed. Which of the below statements about Fred's BHP shares is NOT correct? (a) The returns ##r_\text{t monthly}## are continuously compounded monthly returns. (b) The mean, median and mode of the continuously compounded annual return is expected to equal 0.8% per month. (c) The annualised average continuously compounded return is ##\bar{r}_\text{cc annual}=12×0.008=0.096=9.6\%\text{ pa}##. The annualised standard deviation is ##\sigma_\text{annual} = \sqrt{12}×0.15=0.519615242 = 52\%\text{ pa}##. (d) If the current price of the BHP shares is $20, they're expected to have a median value of $52.2339 ##(= 20×e^{0.008×12×10})## in 10 years. (e) If the current price of the BHP shares is $20, they're expected to have a mean value of $13.5411 ##(= 20×e^{(0.008 - 0.15^2/2)×12×10})## in 10 years. Question 780 mispriced asset, NPV, DDM, market efficiency, no explanation A company advertises an investment costing $1,000 which they say is under priced. They say that it has an expected total return of 15% pa, but a required return of only 10% pa. Of the 15% pa total expected return, the dividend yield is expected to be 4% pa and the capital yield 11% pa. Assume that the company's statements are correct. What is the NPV of buying the investment if the 15% total return lasts for the next 100 years (t=0 to 100), then reverts to 10% after that time? Also, what is the NPV of the investment if the 15% return lasts forever? In both cases, assume that the required return of 10% remains constant, the dividends can only be re-invested at 10% pa and all returns are given as effective annual rates. The answer choices below are given in the same order (15% for 100 years, and 15% forever): (a) $7,359.46, Infinite (b) $4,887.57, $10,000 (c) $1,786.35, -$5,000 (Note the negative sign) (d) $1,471.89, $830.28 (e) $830.28, $3,000 Here is a table of stock prices and returns. Which of the statements below the table is NOT correct? Price and Return Population Statistics Time Prices LGDR GDR NDR 1 99 -0.010050 0.990000 -0.010000 2 180.40 0.600057 1.822222 0.822222 Arithmetic average 0.0399 1.1457 0.1457 Arithmetic standard deviation 0.4384 0.5011 0.5011 (a) The geometric average of the gross discrete returns (GAGDR) equals 1.04075 which is 104.075%. (b) ##\exp \left( \text{GAGDR} \right) = \text{AALGDR}##. 
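For reference, the standard log-normal price projections implied by normally distributed continuously compounded returns can be computed as below. This is only an illustrative sketch using the monthly mean and standard deviation quoted above; it is not an answer key.

```python
import math

# Log-normal price projection, assuming monthly continuously compounded
# returns are iid normal with mean 0.8% and standard deviation 15% per month,
# and a current price of $20, projected over 10 years (120 months).
p0, mu, sigma, months = 20.0, 0.008, 0.15, 12 * 10

median_price = p0 * math.exp(mu * months)                            # median of P_T
mean_price   = p0 * math.exp(mu * months + 0.5 * sigma**2 * months)  # mean of P_T
print(round(median_price, 4), round(mean_price, 4))
# The mean exceeds the median because the future price distribution is right-skewed.
```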
The natural exponent of the GAGDR equals the arithmetic average of the logarithms of the gross discrete returns (AALGDR). (c) ##\text{GAGDR}^T = \left( P_T/P_0 \right)##. The GAGDR raised to the power of the number of time period that it's measured over equals the ratio of the first and last prices. (d) ##\text{GAGDR} = \exp \left( \text{AALGDR} \right)##. This is always true, regardless of the distribution of the prices or returns. The logarithm of the geometric average of the gross discrete returns (LGAGDR) is always equal to the arithmetic average of the logarithm of the gross discrete returns (AALGDR). (e) ##\text{AAGDR} \approx \exp \left( \text{AALGDR} + \text{SDLGDR}^2/2 \right)##. This is only approximately true and depends on the distribution of the prices and returns. It's asymptotically true as the time period that the returns are measured over reaches infinity. 1 50 -0.6931 0.5 -0.5 2 100 0.6931 2 1 Arithmetic average 0 1.25 0.25 Arithmetic standard deviation -0.6931 0.75 0.75 (a) The geometric average of the gross discrete returns (GAGDR) equals 1 which is 100%. (b) ##\text{GAGDR} = \exp \left( \text{AALGDR} \right)##. The GAGDR is equal to the natural exponent of the arithmetic average of the logarithms of the gross discrete returns (AALGDR). (c) ##\text{GAGDR} = \left( P_T/P_0 \right)^{1/T}##. The GAGDR equals the ratio of the last and first prices raised to the power of the inverse of the number of time periods between them. (d) ##\text{LGAGDR} = \text{AALGDR}##. This is always true, regardless of the distribution of the prices or returns and the number of return observations. The logarithm of the geometric average of the gross discrete returns (LGAGDR) is always equal to the arithmetic average of the logarithms of the gross discrete returns (AALGDR). (e) ##\text{LAAGDR} = \text{AALGDR} + \text{SDLGDR}^2/2##. This is always true, regardless of the distribution of the prices or returns and the number of return observations. The logarithm of the arithmetic average of the gross discrete returns (LAAGDR) equals the arithmetic average of the logarithms of the gross discrete returns (AALGDR) plus half the variance of the LGDR's. Question 724 return distribution, mean and median returns If a stock's future expected continuously compounded annual returns are normally distributed, what will be bigger, the stock's or continuously compounded annual return? Or would you expect them to be ? If a stock's future expected effective annual returns are log-normally distributed, what will be bigger, the stock's or effective annual return? Or would you expect them to be ? If a stock's expected future prices are log-normally distributed, what will be bigger, the stock's or future price? Or would you expect them to be ? Question 742 price gains and returns over time, no explanation For an asset's price to quintuple every 5 years, what must be its effective annual capital return? Note that a stock's price quintuples when it increases from say $1 to $5. (a) 20% pa. (b) 37.973% pa. (c) 43.0969% pa. (d) 56.9031% pa. (e) 80% pa. How many years will it take for an asset's price to triple (increase from say $1 to $3) if it grows by 5% pa? (a) 17.7643 years (b) 22.5171 years (c) 29.2722 years (d) 40 years (e) 60 years If someone says "my shares rose by 10% last year", what do you assume that they mean? (a) The historical nominal effective annual total return was 10%. (b) The historical real effective annual total return was 10%. (c) The expected nominal effective annual total return was 10%. 
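The relationships between the arithmetic average, geometric average and log-return average can be checked numerically. The sketch below assumes the second price series above started at $100 (implied by the listed returns):

```python
import math

# Price series assumed to be $100, then $50, then $100 (consistent with the
# listed LGDR, GDR and NDR values in the second table above).
prices = [100, 50, 100]
gdr  = [prices[t] / prices[t - 1] for t in range(1, len(prices))]  # gross discrete returns
lgdr = [math.log(x) for x in gdr]                                  # log (continuously compounded) returns

aagdr  = sum(gdr) / len(gdr)               # arithmetic average GDR  = 1.25
aalgdr = sum(lgdr) / len(lgdr)             # arithmetic average LGDR = 0.0
gagdr  = math.prod(gdr) ** (1 / len(gdr))  # geometric average GDR   = 1.0

print(aagdr, aalgdr, gagdr)
print(math.isclose(gagdr, math.exp(aalgdr)))                          # GAGDR = exp(AALGDR)
print(math.isclose(gagdr, (prices[-1] / prices[0]) ** (1 / len(gdr))))  # GAGDR = (P_T/P_0)^(1/T)
```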
(d) The historical nominal effective annual capital return was 10%. (e) The expected real effective annual dividend return was 10%. Itau Unibanco is a major listed bank in Brazil with a market capitalisation of equity equal to BRL 85.744 billion, EPS of BRL 3.96 and 2.97 billion shares on issue. Banco Bradesco is another major bank with total earnings of BRL 8.77 billion and 2.52 billion shares on issue. Estimate Banco Bradesco's current share price using a price-earnings multiples approach assuming that Itau Unibanco is a comparable firm. Note that BRL is the Brazilian Real, their currency. Figures sourced from Google Finance on the market close of the BVMF on 24/7/15. (a) BRL 28.87 (b) BRL 25.372 (c) BRL 22.1 (d) BRL 21.653 (e) BRL 21.528 Question 755 bond pricing, capital raising, no explanation A firm wishes to raise $50 million now. They will issue 7% pa semi-annual coupon bonds that will mature in 6 years and have a face value of $100 each. Bond yields are 5% pa, given as an APR compounding every 6 months, and the yield curve is flat. How many bonds should the firm issue? (a) 374,885 (b) 453,483 (c) 500,000 (d) 527,541 (e) 666,873 A firm wishes to raise $50 million now. They will issue 5% pa semi-annual coupon bonds that will mature in 10 years and have a face value of $100 each. Bond yields are 5% pa, given as an APR compounding every 6 months, and the yield curve is flat. Question 762 equivalent annual cash flow, no explanation Radio-Rentals.com offers the Apple iphone 5S smart phone for rent at $12.95 per week paid in advance on a 2 year contract. After renting the phone, you must return it to Radio-Rentals. Kogan.com offers the Apple iphone 5S smart phone for sale at $699. You estimate that the phone will last for 3 years before it will break and be worthless. Currently, the effective annual interest rate is 11.351%, the effective monthly interest rate 0.9% and the effective weekly interest rate is 0.207%. Assume that there are exactly 52 weeks per year and 12 months per year. Find the equivalent annual cost of renting the phone and also buying the phone. The answers below are listed in the same order. (a) $362.33, $287.79 (b) $362.33, $506.29 (c) $516.29, $506.29 (d) $673.40, $233.00 (e) $711.67, $287.79 An investor bought a 5 year government bond with a 2% pa coupon rate at par. Coupons are paid semi-annually. The face value is $100. Calculate the bond's new price 8 months later after yields have increased to 3% pa. Note that both yields are given as APR's compounding semi-annually. Assume that the yield curve was flat before the change in yields, and remained flat afterwards as well. (a) $95.345378 (c) $96.138082 (e) $96.775559 Question 728 inflation, real and nominal returns and cash flows, income and capital returns, no explanation Which of the following statements about gold is NOT correct? Assume that the gold price increases by inflation. Gold: (a) Pays no income cash flow. (b) Has a real total return of zero. (c) Has a real capital return equal to the inflation rate. (d) Has a real income return of zero. (e) Has a nominal income return of zero. Question 729 book and market values, balance sheet, no explanation If a firm makes a profit and pays no dividends, which of the following accounts will increase? (a) Asset revaluation reserve. (b) Foreign currency translation reserve. (c) Retained earnings, also known as retained profits. (d) Contributed equity, also known as paid up capital. (e) General reserve. 
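A short sketch of the bond-issue arithmetic in the first capital-raising question above (semi-annual coupons discounted at the semi-annual effective yield); the rounding convention for the final bond count is an assumption:

```python
# Price one 7% pa semi-annual coupon bond maturing in 6 years, yield 5% pa
# as an APR compounding semi-annually, then work out how many are needed
# to raise $50m.
face, coupon_rate, yrs = 100.0, 0.07, 6
r = 0.05 / 2                     # effective 6-month rate
n = yrs * 2                      # number of 6-month periods
coupon = face * coupon_rate / 2  # semi-annual coupon

price = coupon * (1 - (1 + r) ** -n) / r + face * (1 + r) ** -n
bonds_needed = 50e6 / price
print(round(price, 4), round(bonds_needed))  # about $110.26 per bond, about 453,483 bonds
```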
Question 768 accounting terminology, book and market values, no explanation Accountants and finance professionals have lots of names for the same things which can be quite confusing. Which of the following groups of items are NOT synonyms? (a) Revenue, sales, turn over. (b) Paid up capital, contributed equity. (c) Shares, stock, equity. (d) Net income, earnings, net profit after tax, the bottom line. (e) Market capitalisation of equity, book value of equity. Question 769 short selling, idiom, no explanation "Buy low, sell high" is a well-known saying. It suggests that investors should buy low then sell high, in that order. How would you re-phrase that saying to describe short selling? (a) Buy high, then sell low. (b) Buy low, then sell high. (c) Sell high, then buy low. (d) Sell low, then buy high. (e) Sell high, then buy high. Question 776 market efficiency, systematic and idiosyncratic risk, beta, income and capital returns Which of the following statements about returns is NOT correct? A stock's: (a) Expected total return will equal its required total return if the stock is fairly priced. (b) Expected total return will be less than its required total return if the stock is over-priced. (c) Required total return should be higher than the risk free government bond yield if the stock has a positive beta. (d) Required total return depends on its total variance, required capital return depends on systematic variance and the dividend yield depends on idiosyncratic variance. (e) Expected capital return equals the expected total return less whatever the management decide that they will pay as an income return (which is the dividend yield). Question 777 CAPM, beta The market's expected total return is 10% pa and the risk free rate is 5% pa, both given as effective annual rates. A stock has a beta of 0.5. In the last 5 minutes, the federal government unexpectedly raised taxes. Over this time the share market fell by 3%. The risk free rate was unchanged. What do you think was the stock's historical return over the last 5 minutes, given as an effective 5 minute rate? (a) -1% (b) -1.5% (c) -2.5% (d) -3% (e) -7.5% Question 778 CML, systematic and idiosyncratic risk, portfolio risk, CAPM, no explanation The capital market line (CML) is shown in the graph below. The total standard deviation is denoted by σ and the expected return is μ. Assume that markets are efficient so all assets are fairly priced. Which of the below statements is NOT correct? (a) The risk free security has zero systematic variance and zero idiosyncratic variance. (b) The market portfolio has zero idiosyncratic variance. (c) Any portfolio comprised of the market portfolio and risk free security has zero idiosyncratic variance. (d) Portfolios that plot on the CML must only be comprised of the risk free security and the market portfolio. (e) Portfolios that plot on the CML have some idiosyncratic variance and some systematic variance. The 'time value of money' is most closely related to which of the following concepts? (a) Competition: Firms in competitive markets earn zero economic profit. (b) Opportunity cost: The cost of the next best alternative foregone should be subtracted. (d) Diversification: Risks can often be reduced by pooling them together. (e) Sunk costs: Costs that cannot be recouped should be ignored. 
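For the 5-minute CAPM question above, a rough sketch (treating the realised market move as the only systematic shock, the 5-minute risk-free return as approximately zero, and ignoring idiosyncratic noise) is:

```python
# Over a 5-minute window the risk free return is negligible, so the stock's
# systematic move is roughly beta times the market's move.
beta = 0.5
market_return_5min = -0.03
risk_free_5min = 0.0   # assumption: effectively zero over 5 minutes

stock_return_5min = risk_free_5min + beta * (market_return_5min - risk_free_5min)
print(stock_return_5min)  # -0.015, i.e. about -1.5%
```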
Question 657 systematic and idiosyncratic risk, CAPM, no explanation A stock's required total return will decrease when its: Question 658 CFFA, income statement, balance sheet, no explanation To value a business's assets, the free cash flow of the firm (FCFF, also called CFFA) needs to be calculated. This requires figures from the firm's income statement and balance sheet. For what figures is the income statement needed? Note that the income statement is sometimes also called the profit and loss, P&L, or statement of financial performance. Question 661 systematic and idiosyncratic risk, CAPM A stock's total standard deviation of returns is 20% pa. The market portfolio's total standard deviation of returns is 15% pa. The beta of the stock is 0.8. What is the stock's diversifiable standard deviation? (c) 8% pa (d) 5% pa (e) 4% pa Which of the following interest rate labels does NOT make sense? (a) Annualised percentage rate compounding per month. (b) Effective monthly rate compounding per year. (c) Annualised percentage rate compounding per year. (d) Effective annual rate compounding per year. (e) Annualised percentage rate compounding semi-annually. Question 663 leverage, accounting ratio, no explanation A firm has a debt-to-assets ratio of 20%. What is its debt-to-equity ratio? Question 665 stock split A company conducts a 10 for 3 stock split. What is the percentage increase in the stock price and the number of shares outstanding? The answers are given in the same order. (a) -76.92%, 333.33% (b) -70%, 333.33% (c) -70%, 233.33% (d) -57.14%, 233.33% (e) 233.33%, -70% Question 666 rights issue, capital raising A company conducts a 2 for 3 rights issue at a subscription price of $8 when the pre-announcement stock price was $9. Assume that all investors use their rights to buy those extra shares. What is the percentage increase in the stock price and the number of shares outstanding? The answers are given in the same order. (a) -60%, 150% (b) -42.22%, 150% (c) -40%, 66.67% (d) -22.22%, 66.67% (e) -4.44%, 66.67% Question 668 buy and hold, market efficiency, idiom A quote from the famous investor Warren Buffet: "Much success can be attributed to inactivity. Most investors cannot resist the temptation to constantly buy and sell." Buffet is referring to the buy-and-hold strategy which is to buy and never sell shares. Which of the following is a disadvantage of a buy-and-hold strategy? Assume that share markets are semi-strong form efficient. Which of the following is NOT an advantage of the strict buy-and-hold strategy? A disadvantage of the buy-and-hold strategy is that it reduces: (a) Capital gains tax. (b) Explicit transaction costs such as brokerage fees. (c) Implicit transaction costs such as bid-ask spreads. (d) Portfolio rebalancing to maintain maximum diversification. (e) Time wasted on researching whether it's better to buy or sell. Question 669 beta, CAPM, risk Which of the following is NOT a valid method for estimating the beta of a company's stock? Assume that markets are efficient, a long history of past data is available, the stock possesses idiosyncratic and market risk. The variances and standard deviations below denote total risks. (a) ##Β_E=\dfrac{cov(r_E,r_M )}{var(r_M)}## (b) ##Β_E=\dfrac{correl(r_E,r_M ).sd(r_E)}{sd(r_M)} ## (c) ##Β_E=\dfrac{sd(r_E)}{sd(r_M)}##, since ##var(r_E)=β_E^2.var(r_M)## (d) ##Β_E=\dfrac{r_E-r_f}{r_M-r_f }##, since ##r_E=r_f+Β_E. 
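The split and rights-issue mechanics above can be reproduced with simple share-count and value-conservation arithmetic, assuming no taxes, signalling effects or transaction costs:

```python
# 10 for 3 stock split: every 3 old shares become 10 new shares, total value unchanged.
old_n, new_n = 3, 10
price_change_split  = old_n / new_n - 1   # -70%
shares_change_split = new_n / old_n - 1   # +233.33%

# 2 for 3 rights issue at an $8 subscription price, cum-rights price $9:
# the ex-rights price is the value-weighted average of old shares and new subscriptions.
old_price, sub_price = 9.0, 8.0
ex_rights_price = (3 * old_price + 2 * sub_price) / (3 + 2)
price_change_rights  = ex_rights_price / old_price - 1  # about -4.44%
shares_change_rights = (3 + 2) / 3 - 1                  # +66.67%

print(round(price_change_split, 4), round(shares_change_split, 4))
print(round(price_change_rights, 4), round(shares_change_rights, 4))
```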
(r_M-r_f )## (e) ##Β_E= \left(B_V - \dfrac{D}{V}.Β_D \right).\dfrac{V}{E}##, since ##B_V=\dfrac{E}{V}.Β_E+\dfrac{D}{V}.Β_D ## Question 667 forward foreign exchange rate, foreign exchange rate, cross currency interest rate parity, no explanation The Australian cash rate is expected to be 2% pa over the next one year, while the US cash rate is expected to be 0% pa, both given as nominal effective annual rates. The current exchange rate is 0.73 USD per AUD. What is the implied 1 year USD per AUD forward foreign exchange rate? (b) 0.73 USD per AUD (d) 0.9804 USD per AUD (e) 1.02 USD per AUD A stock has a beta of 1.5. The market's expected total return is 10% pa and the risk free rate is 5% pa, both given as effective annual rates. What do you think will be the stock's expected return over the next year, given as an effective annual rate? (a) 5% pa (b) 7.5% pa (d) 12.5% pa Question 673 CAPM, beta, expected and historical returns In the last 5 minutes, bad economic news was released showing a higher chance of recession. Over this time the share market fell by 1%. The risk free rate was unchanged. (a) -12.5% (b) -4% (e) 12.5% Over the last year, bad economic news was released showing a higher chance of recession. Over this time the share market fell by 1%. The risk free rate was unchanged. What do you think was the stock's historical return over the last year, given as an effective annual rate? (a) -12.5% pa (b) -4% pa (c) -1.5% pa (d) -1% pa (e) 12.5% pa Question 86 CAPM Treasury bonds currently have a return of 5% pa. A stock has a beta of 0.5 and the market return is 10% pa. What is the expected return of the stock? A fairly priced stock has a beta that is the same as the market portfolio's beta. Treasury bonds yield 5% pa and the market portfolio's expected return is 10% pa. What is the expected return of the stock? (b) 7.5% Question 230 bond pricing, capital raising A firm wishes to raise $10 million now. They will issue 6% pa semi-annual coupon bonds that will mature in 8 years and have a face value of $1,000 each. Bond yields are 10% pa, given as an APR compounding every 6 months, and the yield curve is flat. (a) 9,022.2 bonds (b) 10,000.0 bonds (c) 11,484.5 bonds (d) 12,712.9 bonds (e) 12,767.4 bonds Question 205 depreciation tax shield, CFFA There are a number of ways that assets can be depreciated. Generally the government's tax office stipulates a certain method. But if it didn't, what would be the ideal way to depreciate an asset from the perspective of a businesses owner? (a) 'Straight line' or 'prime cost' depreciation, which allocates equal depreciation expenses over each year of the asset's life. (b) 'Diminishing value' or 'reducing balance' depreciation, which allocates more depreciation expense at the start of the asset's life and less towards the end. (c) No depreciation at all, so the asset is always kept on the books as being the same value that it was bought for. The asset will cause no depreciation expense in any year. (d) Allocating all of the depreciation expense to the final year of the asset's life. (e) Allocating all of the depreciation expense to the first year of the asset's life. Accountants would call this 'expensing' the asset, rather than 'capitalising' it and depreciating it slowly. A new company's Firm Free Cash Flow (FFCF, same as CFFA) is forecast in the graph below. 
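For the forward exchange rate question above, cross-currency interest rate parity can be sketched as follows (assuming the quoted rates are the relevant one-year effective rates and that parity holds exactly):

```python
# The exchange rate is quoted as USD per AUD, so the US rate applies to the
# quote (numerator) currency and the Australian rate to the base currency.
spot_usd_per_aud = 0.73
r_usd, r_aud = 0.00, 0.02   # expected 1-year nominal effective annual rates

forward_usd_per_aud = spot_usd_per_aud * (1 + r_usd) / (1 + r_aud)
print(round(forward_usd_per_aud, 6))  # about 0.715686 USD per AUD
```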
To value the firm's assets, the terminal value needs to be calculated using the perpetuity with growth formula: ###V_{\text{terminal, }t-1} = \dfrac{FFCF_{\text{terminal, }t}}{r-g}### Which point corresponds to the best time to calculate the terminal value? (a) Point A. (b) Point B. (c) Point C. (d) Any of the points. (e) None of the points. An old company's Firm Free Cash Flow (FFCF, same as CFFA) is forecast in the graph below. Question 303 WACC, CAPM, CFFA There are many different ways to value a firm's assets. Which of the following will NOT give the correct market value of a levered firm's assets ##(V_L)##? Assume that: The firm is financed by listed common stock and vanilla annual fixed coupon bonds, which are both traded in a liquid market. The bonds' yield is equal to the coupon rate, so the bonds are issued at par. The yield curve is flat and yields are not expected to change. When bonds mature they will be rolled over by issuing the same number of new bonds with the same expected yield and coupon rate, and so on forever. Tax rates on the dividends and capital gains received by investors are equal, and capital gains tax is paid every year, even on unrealised gains regardless of when the asset is sold. There is no re-investment of the firm's cash back into the business. All of the firm's excess cash flow is paid out as dividends so real growth is zero. The firm operates in a mature industry with zero real growth. All cash flows and rates in the below equations are real (not nominal) and are expected to be stable forever. Therefore the perpetuity equation with no growth is suitable for valuation. (a) ##V_L = n_\text{shares}.P_\text{share} + n_\text{bonds}.P_\text{bond}## (b) ##V_L = n_\text{shares}.\dfrac{\text{Dividend per share}}{r_f + \beta_{EL}(r_m - r_f)} + n_\text{bonds}.\dfrac{\text{Coupon per bond}}{r_f + \beta_D(r_m - r_f)}## (c) ##V_L = \dfrac{\text{CFFA}_{L}}{r_\text{WACC before tax}}## (d) ##V_L = \dfrac{\text{CFFA}_{U}}{r_\text{WACC after tax}}## (e) ##V_L = \dfrac{\text{CFFA}_{L}}{r_\text{WACC after tax}}## ###r_\text{WACC before tax} = r_D.\frac{D}{V_L} + r_{EL}.\frac{E_L}{V_L} = \text{Weighted average cost of capital before tax}### ###r_\text{WACC after tax} = r_D.(1-t_c).\frac{D}{V_L} + r_{EL}.\frac{E_L}{V_L} = \text{Weighted average cost of capital after tax}### ###NI_L=(Rev-COGS-FC-Depr-\mathbf{IntExp}).(1-t_c) = \text{Net Income Levered}### ###CFFA_L=NI_L+Depr-CapEx - \varDelta NWC+\mathbf{IntExp} = \text{Cash Flow From Assets Levered}### ###NI_U=(Rev-COGS-FC-Depr).(1-t_c) = \text{Net Income Unlevered}### ###CFFA_U=NI_U+Depr-CapEx - \varDelta NWC= \text{Cash Flow From Assets Unlevered}### Over the next year, the management of an unlevered company plans to: Achieve firm free cash flow (FFCF or CFFA) of $1m. Pay dividends of $1.8m Complete a $1.3m share buy-back. Spend $0.8m on new buildings without buying or selling any other fixed assets. This capital expenditure is included in the CFFA figure quoted above. All amounts are received and paid at the end of the year so you can ignore the time value of money. The firm has sufficient retained profits to pay the dividend and complete the buy back. The firm plans to run a very tight ship, with no excess cash above operating requirements currently or over the next year. How much new equity financing will the company need? In other words, what is the value of new shares that will need to be issued? (b) $1.3m (c) $0.8m (d) $0.3m (e) No new shares need to be issued, the firm will be sufficiently financed. 
Make $5m in sales, $1.9m in net income and $2m in equity free cash flow (EFCF). Pay dividends of $1m. The firm has sufficient retained profits to legally pay the dividend and complete the buy back. (a) $2m (b) $1m The hardest and most important aspect of business project valuation is the estimation of the: (a) Diversifiable standard deviation of asset returns. (b) Systematic standard deviation of asset returns. (c) Proportion of debt and equity used to fund the assets. (d) Cash flows from assets. (e) Risk free rate. Question 366 opportunity cost, NPV, CFFA, needs refinement Your friend is trying to find the net present value of a project. The project is expected to last for just one year with: a negative cash flow of -$1 million initially (t=0), and a positive cash flow of $1.1 million in one year (t=1). The project has a total required return of 10% pa due to its moderate level of undiversifiable risk. Your friend is aware of the importance of opportunity costs and the time value of money, but he is unsure of how to find the NPV of the project. He knows that the opportunity cost of investing the $1m in the project is the expected gain from investing the money in shares instead. Like the project, shares also have an expected return of 10% since they have moderate undiversifiable risk. This opportunity cost is $0.1m ##(=1m \times 10\%)## which occurs in one year (t=1). He knows that the time value of money should be accounted for, and this can be done by finding the present value of the cash flows in one year. Your friend has listed a few different ways to find the NPV which are written down below. (I) ##-1m + \dfrac{1.1m}{(1+0.1)^1} ## (II) ##-1m + \dfrac{1.1m}{(1+0.1)^1} - \dfrac{1m}{(1+0.1)^1} \times 0.1 ## (III) ##-1m + \dfrac{1.1m}{(1+0.1)^1} - \dfrac{1.1m}{(1+0.1)^1} \times 0.1 ## (IV) ##-1m + 1.1m - \dfrac{1.1m}{(1+0.1)^1} \times 0.1 ## (V) ##-1m + 1.1m - 1.1m \times 0.1 ## Which of the above calculations give the correct NPV? Select the most correct answer. (e) II and V only. A fairly priced stock has an expected return of 15% pa. Treasury bonds yield 5% pa and the market portfolio's expected return is 10% pa. What is the beta of the stock? A stock has an arithmetic average continuously compounded return (AALGDR) of 10% pa, a standard deviation of continuously compounded returns (SDLGDR) of 80% pa and current stock price of $1. Assume that stock prices are log-normally distributed. In one year, what do you expect the mean and median prices to be? The answer options are given in the same order. (a) $1.521962, $1.105171 (b) $1.42, $1.1 (c) $1.105171, $1.521962 (d) $1.105171, $0.802519 (e) $1.1, $1.42 In 5 years, what do you expect the mean and median prices to be? The answer options are given in the same order. (b) $1.648721, $8.16617 (c) $1.648721, $1.61051 (d) $5.773534, $1.61051 (e) $8.16617, $1.648721 Fred owns some Commonwealth Bank (CBA) shares. 
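To make the opportunity-cost discussion above concrete, the sketch below shows that discounting at the 10% required return already embeds the opportunity cost of capital, and that also subtracting a separate opportunity-cost cash flow (as in method II) double counts it:

```python
# Project: -$1m now, +$1.1m in one year, required return 10% pa.
c0, c1, r = -1_000_000, 1_100_000, 0.10

npv_discounted = c0 + c1 / (1 + r)                                   # method (I): 0.0
npv_double_counted = c0 + c1 / (1 + r) - (1_000_000 * r) / (1 + r)   # method (II)
print(npv_discounted, round(npv_double_counted, 2))
# The project earns exactly its required return, so plain discounting gives NPV = 0;
# subtracting an extra opportunity-cost term on top of that double counts the cost of capital.
```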
He has calculated CBA's monthly returns for each month in the past 20 years using this formula: He then took the arithmetic average and found it to be 1% per month using this formula: ###\bar{r}_\text{monthly}= \dfrac{ \displaystyle\sum\limits_{t=1}^T{\left( r_\text{t monthly} \right)} }{T} =0.01=1\% \text{ per month}### He also found the standard deviation of these monthly returns which was 5% per month: ###\sigma_\text{monthly} = \dfrac{ \displaystyle\sum\limits_{t=1}^T{\left( \left( r_\text{t monthly} - \bar{r}_\text{monthly} \right)^2 \right)} }{T} =0.05=5\%\text{ per month}### Which of the below statements about Fred's CBA shares is NOT correct? Assume that the past historical average return is the true population average of future expected returns. (a) The returns ##r_\text{t monthly}## are continuously compounded monthly returns and are likely to be normally distributed, so the mean, median and mode of these continuously compounded returns are all equal to 1% per month. (b) The annualised average continuously compounded return is ##\bar{r}_\text{cc annual}=12×0.01=0.1=12\%\text{ pa}##. The annualised standard deviation is ##\sigma_\text{annual} = \sqrt{12}×0.05=0.173205081=17.32\%\text{ pa}##. (c) Over the next 10 years the mean, median and mode continuously compounded 10 year returns will all equal ##0.01 \times 12 \times 10 = 120\%##. (d) Over the next 10 years the expected mean gross discrete 10 year return is ##e^{(0.01 - 0.05^2/2)×12×10} = 2.857651118## and the median gross discrete 10 year return is ##e^{(0.01 + 0.05^2/2)×12×10}=3.857425531##. (e) If the current price of the CBA shares is $75, then in 10 years they're expected to have a mean value of ##75×e^{(0.01 + 0.05^2/2)×12×10} = 289.3069148## and a median value of ##75×e^{0.01×12×10}=249.0087692##. Question 572 bond pricing, zero coupon bond, term structure of interest rates, expectations hypothesis, forward interest rate, yield curve In the below term structure of interest rates equation, all rates are effective annual yields and the numbers in subscript represent the years that the yields are measured over: ###(1+r_{0-3})^3 = (1+r_{0-1})(1+r_{1-2})(1+r_{2-3}) ### (a) ##r_{0-3}## is the three year spot rate, given as an effective annual rate. (b) ##r_{0-1}## is the one year forward rate, given as an effective annual rate. (c) ##r_{1-2}## is the one year forward rate one year ahead, given as an effective annual rate. (d) ##r_{2-3}## is the one year forward rate two years ahead, given as an effective annual rate. (e) ##r_{1-3} = \left((1+r_{1-2})(1+r_{2-3})\right)^{1/2}-1## is the two year forward rate, one year ahead, given as an effective annual rate. Question 573 bond pricing, zero coupon bond, term structure of interest rates, expectations hypothesis, liquidity premium theory, forward interest rate, yield curve (a) If the expectations hypothesis is true, then the forward rates are the expected future spot rates. (b) If the liquidity premium theory is true, then the forward rates are lower than the expected future spot rates due to the liquidity premium. (c) The yield curve is normal when: ##r_{0-1} < r_{1-2} < r_{2-3}##. (d) The yield curve is flat when: ##r_{0-1} = r_{1-2} = r_{2-3}##. (e) The yield curve is inverse when: ##r_{0-1} > r_{1-2} > r_{2-3}##. Question 693 boot strapping zero coupon yield, forward interest rate, term structure of interest rates Information about three risk free Government bonds is given in the table below. 
Federal Treasury Bond Data Maturity Yield to maturity Coupon rate Face value Price (years) (pa, compounding semi-annually) (pa, paid semi-annually) ($) ($) 0.5 3% 4% 100 100.4926 1 4% 4% 100 100.0000 1.5 5% 4% 100 98.5720 Based on the above government bonds' yields to maturity, which of the below statements about the spot zero rates and forward zero rates is NOT correct? (a) The 0.5 year zero coupon spot yield per annum compounding semi-annually is 3% pa. (b) The 1 year zero coupon spot yield per annum compounding semi-annually is 4.0101% pa. (c) The 1.5 year zero coupon spot yield per annum compounding semi-annually is 5.0272% pa. (d) The 0.5 to 1 year zero coupon forward rate per annum compounding semi-annually is 5.0251% pa. (e) The 1 to 1.5 year zero coupon forward rate per annum compounding semi-annually is 6.0769% pa. Question 717 return distribution The below three graphs show probability density functions (PDF) of three different random variables Red, Green and Blue. Let ##P_1## be the unknown price of a stock in one year. ##P_1## is a random variable. Let ##P_0 = 1##, so the share price now is $1. This one dollar is a constant, it is not a variable. Which of the below statements is NOT correct? Financial practitioners commonly assume that the shape of the PDF represented in the colour: (a) Green is a stock's continuously compounded rate of return, also called the log gross discrete return. ##r_\text{cc} = \ln(P_1/P_0)## (b) Blue is a stock's future price, ##P_1## (c) Blue is a stock's gross discrete return. ##\text{GDR} = P_1/P_0## (d) Blue is a stock's effective rate of return. ##r_\text{eff} = (P_1-P_0)/P_0## (e) Red is a stock's net discrete return. ##\text{NDR} = \text{GDR}-1## If a variable, say X, is normally distributed with mean ##\mu## and variance ##\sigma^2## then mathematicians write ##X \sim \mathcal{N}(\mu, \sigma^2)##. If a variable, say Y, is log-normally distributed and the underlying normal distribution has mean ##\mu## and variance ##\sigma^2## then mathematicians write ## Y \sim \mathbf{ln} \mathcal{N}(\mu, \sigma^2)##. The below three graphs show probability density functions (PDF) of three different random variables Red, Green and Blue. Select the most correct statement: (a)##Red \sim \mathcal{N}(\mu, \sigma^2)##, ##Green \sim \mathcal{N}(\mu, \sigma^2)## and ##Blue \sim \mathcal{N}(\mu, \sigma^2)## (b) ##Red \sim \mathcal{N}(\mu, \sigma^2)##, ##Green \sim \mathcal{N}(\mu, \sigma^2)## and ##Blue \sim \mathbf{ln} \mathcal{N}(\mu, \sigma^2)## (c) ##Red \sim \mathcal{N}(\mu, \sigma^2)##, ##Green \sim \mathbf{ln} \mathcal{N}(\mu, \sigma^2)## and ##Blue \sim \mathbf{ln} \mathcal{N}(\mu, \sigma^2)## (d) ##Red \sim \mathbf{ln} \mathcal{N}(\mu, \sigma^2)##, ##Green \sim \mathcal{N}(\mu, \sigma^2)## and ##Blue \sim \mathbf{ln} \mathcal{N}(\mu, \sigma^2)## (e) ##Red \sim \mathbf{ln} \mathcal{N}(\mu, \sigma^2)##, ##Green \sim \mathbf{ln} \mathcal{N}(\mu, \sigma^2)## and ##Blue \sim \mathbf{ln} \mathcal{N}(\mu, \sigma^2)## (a)##-1 < \text{Red} < \infty## if Red is log-normally distributed. (b) ##-2 < \text{Green} < 2## if Green is normally distributed. (c) ##0 < \text{Blue} < \infty## if Blue is log-normally distributed. (d) If the Green distribution is normal, then the mode = median = mean. (e) If the Red and Blue distributions are log-normal, then the mode < median < mean. Question 714 return distribution, no explanation Which of the following quantities is commonly assumed to be normally distributed? (a) Prices, ##P_1##. 
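The spot and forward zero rates can be bootstrapped from the bond table above. The sketch below is illustrative; it assumes the quoted prices are per $100 face value and that all rates compound semi-annually:

```python
# Bootstrap discount factors one bond at a time, then convert them into
# spot (zero) rates and implied forward rates, all pa compounding semi-annually.
prices = {0.5: 100.4926, 1.0: 100.0000, 1.5: 98.5720}
coupon, face = 2.0, 100.0   # 4% pa coupon on $100 face, paid semi-annually

df = {}
df[0.5] = prices[0.5] / (coupon + face)
df[1.0] = (prices[1.0] - coupon * df[0.5]) / (coupon + face)
df[1.5] = (prices[1.5] - coupon * (df[0.5] + df[1.0])) / (coupon + face)

def zero_rate(t):
    # convert a discount factor into a yield pa compounding semi-annually
    return ((1 / df[t]) ** (1 / (2 * t)) - 1) * 2

def forward_rate(t1, t2):
    # implied rate between t1 and t2, pa compounding semi-annually
    return ((df[t1] / df[t2]) ** (1 / (2 * (t2 - t1))) - 1) * 2

for t in (0.5, 1.0, 1.5):
    print(t, "year spot:", round(zero_rate(t) * 100, 4), "% pa")
print("0.5 to 1 year forward:", round(forward_rate(0.5, 1.0) * 100, 4), "% pa")
print("1 to 1.5 year forward:", round(forward_rate(1.0, 1.5) * 100, 4), "% pa")
```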
(b) Gross discrete returns per annum, ##r_{\text{gdr 0 }\rightarrow \text{ 1}} = \dfrac{P_1}{P_0} ##. (c) Effective annual returns per annum also known as net discrete returns, ##r_{\text{eff 0 }\rightarrow \text{ 1}} = \dfrac{P_1 - P_0}{P_0} = \dfrac{P_1}{P_0}-1##. (d) Continuously compounded returns per annum, ##r_{\text{cc 0 }\rightarrow \text{ 1}} = \ln \left( \dfrac{P_1}{P_0} \right)##. (e) Annualised percentage rates compounding per month, ##r_{\text{apr comp monthly 0 }\rightarrow \text{ 1 mth}} = \left( \dfrac{P_1 - P_0}{P_0} \right) \times 12##. Question 82 portfolio return deviation Correlation Dollars What is the expected return of the above portfolio? deviation Covariance ##(\sigma_{A,B})## Beta Dollars What is the standard deviation (not variance) of the above portfolio? Note that the stocks' covariance is given, not correlation. Question 294 short selling, portfolio weights Which of the following statements about short-selling is NOT true? (a) Short sellers benefit from price falls. (b) To short sell, you must borrow the asset from person A and sell it to person B, then later on buy an identical asset from person C and return it to person A. (c) Short selling only works for assets that are 'fungible' which means that there are many that are identical and substitutable, such as shares and bonds and unlike real estate. (d) An investor who short-sells an asset has a negative weight in that asset. (e) An investor who short-sells an asset is said to be 'long' that asset. Question 282 expected and historical returns, income and capital returns You're the boss of an investment bank's equities research team. Your five analysts are each trying to find the expected total return over the next year of shares in a mining company. The mining firm: Is regarded as a mature company since it's quite stable in size and was floated around 30 years ago. It is not a high-growth company; Share price is very sensitive to changes in the price of the market portfolio, economic growth, the exchange rate and commodities prices. Due to this, its standard deviation of total returns is much higher than that of the market index; Experienced tough times in the last 10 years due to unexpected falls in commodity prices. Shares are traded in an active liquid market. Your team of analysts present their findings, and everyone has different views. While there's no definitive true answer, who's calculation of the expected total return is the most plausible? The analysts' source data is correct and true, but their inferences might be wrong; All returns and yields are given as effective annual nominal rates. (a) Alice says 5% pa since she calculated that this was the average total yield on government bonds over the last 10 years. She says that this is also the expected total yield implied by current prices on one year government bonds. (b) Bob says 4% pa since he calculated that this was the average total return on the mining stock over the last 10 years. (c) Cate says 3% pa since she calculated that this was the average growth rate of the share price over the last 10 years. (d) Dave says 6% pa since he calculated that this was the average growth rate of the share market price index (not the accumulation index) over the last 10 years. (e) Eve says 15% pa since she calculated that this was the discount rate implied by the dividend discount model using the current share price, forecast dividend in one year and a 3% growth rate in dividends thereafter, which is the expected long term inflation rate. 
Question 558 portfolio weights, portfolio return, short selling (a) 200%, -100% (b) 200%, 100% (c) -100%, 200% (d) 100%, 200% (e) -100%, 100% Question 307 risk, variance Let the variance of returns for a share per month be ##\sigma_\text{monthly}^2##. What is the formula for the variance of the share's returns per year ##(\sigma_\text{yearly}^2)##? (a) ##\sigma_\text{yearly}^2 = \sigma_\text{monthly}^2## (b) ##\sigma_\text{yearly}^2 = \sigma_\text{monthly}^2 \times 12## (c) ##\sigma_\text{yearly}^2 = \sigma_\text{monthly}^2 \times 12^2## (d) ##\sigma_\text{yearly}^2 = \sigma_\text{monthly}^2 \times \sqrt{12}## (e) ##\sigma_\text{yearly}^2 = \sigma_\text{monthly}^2 \times {12}^{1/3}## Question 694 utility Which of the below statements about utility is NOT generally accepted by economists? Most people are thought to: (a) Prefer more to less, meaning that they will always like to have more wealth. (b) Be risk averse, meaning that they dislike risk as measured by the standard deviation of wealth. (c) Have a positively sloped utility function, so the gradient or first derivative of the utility function is always positive. (d) Have a utility function that increases at a diminishing rate. So the shape is concave down like a frown, or the second derivative is always negative. (e) Have a utility function that rises until it reaches the satiation point, then it falls because too much wealth causes problems. The Australian Federal Government lends money to domestic students to pay for their university education. This is known as the Higher Education Contribution Scheme (HECS). The nominal interest rate on the HECS loan is set equal to the consumer price index (CPI) inflation rate. The interest is capitalised every year, which means that the interest is added to the principal. The interest and principal does not need to be repaid by students until they finish study and begin working. Which of the following statements about HECS loans is NOT correct? (a) The real interest rate is zero. (b) The nominal amount owing on the loan will increase at the inflation rate over time, if there are no extra payments or borrowings. (c) The real amount owing on the loan in today's money will remain the same over time, if there are no extra payments or borrowings. (d) The interest rate on the HECS loan advantages rich domestic students because they are able to pay off their debt sooner and avoid paying as much HECS interest. (e) The interest rate on the HECS loan advantages all domestic students because the alternative of borrowing from the bank to pay for education would attract a higher interest rate. An investor bought a bond for $100 (at t=0) and one year later it paid its annual coupon of $1 (at t=1). Just after the coupon was paid, the bond price was $100.50 (at t=1). Inflation over the past year (from t=0 to t=1) was 3% pa, given as an effective annual rate. Which of the following statements is NOT correct? The bond investment produced a: (a) Nominal income return of 1% pa ##\left( =\dfrac{1}{100} \right)##. (b) Nominal capital return of 0.5% pa ##\left( =\dfrac{100.5-100}{100} \right)##. (c) Nominal total return of 1.5% pa ##\left( =\dfrac{100.5-100+1}{100} \right)##. (d) Real income return of 0.9708738% pa##\left( =\dfrac{ 1/(1+0.03)^1}{100} \right)##. (e) Real capital return of 0.4854369% pa ##\left( =\dfrac{ (100.5-100)/(1+0.03)^1}{100} \right) ##. 
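The variance-scaling rule referred to above follows from returns being independent across months, so variances (not standard deviations) add over time. A tiny sketch, using an assumed 5% monthly standard deviation as an example:

```python
# With iid monthly returns, annual variance = 12 x monthly variance,
# so annual standard deviation = sqrt(12) x monthly standard deviation.
sigma_monthly = 0.05   # assumed example value
var_annual = (sigma_monthly ** 2) * 12
sigma_annual = var_annual ** 0.5
print(var_annual, round(sigma_annual, 6))  # 0.03 and about 0.173205
```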
Question 458 capital budgeting, no explanation Which of the following is NOT a valid method to estimate future revenues or costs in a pro-forma income statement when trying to value a company? (a) Extrapolation of past trends to estimate future revenues or costs. (b) Using a constant or trending 'percent of sales' method to forecast future costs. (c) Use futures (derivative) prices, if available, to forecast prices which helps calculate revenues. (d) Use forecast GDP growth rates published by the statistics bureau to estimate future revenue growth. (e) Assume that markets are efficient and use the random walk hypothesis to substitute a random value for revenue. Which firms tend to have low forward-looking price-earnings (PE) ratios? Only consider firms with positive PE ratios. (a) Highly liquid publically listed firms. (b) Firms in a declining industry with very low or negative earnings growth. (c) Firms expected to have temporarily low earnings over the next year, but with higher earnings later. (d) Firms whose returns have a very low level of systematic risk. (e) Firms whose assets include a very large proportion of cash. Question 456 inflation, effective rate In the 'Austin Powers' series of movies, the character Dr. Evil threatens to destroy the world unless the United Nations pays him a ransom (video 1, video 2). Dr. Evil makes the threat on two separate occasions: In 1969 he demands a ransom of $1 million (=10^6), and again; In 1997 he demands a ransom of $100 billion (=10^11). If Dr. Evil's demands are equivalent in real terms, in other words $1 million will buy the same basket of goods in 1969 as $100 billion would in 1997, what was the implied inflation rate over the 28 years from 1969 to 1997? The answer choices below are given as effective annual rates: (b) 1.5086% pa (c) 5.0859% pa (d) 50.8591% pa (e) 150.8591% pa Question 455 income and capital returns, payout policy, DDM, market efficiency A fairly priced unlevered firm plans to pay a dividend of $1 next year (t=1) which is expected to grow by 3% pa every year after that. The firm's required return on equity is 8% pa. The firm is thinking about reducing its future dividend payments by 10% so that it can use the extra cash to invest in more projects which are expected to return 8% pa, and have the same risk as the existing projects. Therefore, next year's dividend will be $0.90. No new equity or debt will be issued to fund the new projects, they'll all be funded by the cut in dividends. What will be the stock's new annual capital return (proportional increase in price per year) if the change in payout policy goes ahead? Assume that payout policy is irrelevant to firm value (so there's no signalling effects) and that all rates are effective annual rates. (a) 2.7% pa. (b) 3.0% pa. (c) 3.5% pa. (d) 3.3% pa. (e) 3.8% pa. Question 452 limited liability, expected and historical returns What is the lowest and highest expected share price and expected return from owning shares in a company over a finite period of time? Let the current share price be ##p_0##, the expected future share price be ##p_1##, the expected future dividend be ##d_1## and the expected return be ##r##. Define the expected return as: ##r=\dfrac{p_1-p_0+d_1}{p_0} ## The answer choices are stated using inequalities. As an example, the first answer choice "(a) ##0≤p<∞## and ##0≤r< 1##", states that the share price must be larger than or equal to zero and less than positive infinity, and that the return must be larger than or equal to zero and less than one. 
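The implied inflation rate in the ransom question above is just the constant effective annual growth rate linking the two nominal amounts over 28 years:

```python
# Effective annual inflation rate that grows $1 million (1969) into $100 billion (1997).
ransom_1969, ransom_1997, years = 1e6, 1e11, 1997 - 1969
implied_inflation = (ransom_1997 / ransom_1969) ** (1 / years) - 1
print(round(implied_inflation * 100, 4), "% pa")  # about 50.86% pa
```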
(a) ##0≤p<∞## and ##0≤r< 1## (b) ##0≤p<∞## and ##-1≤r< ∞## (c) ##0≤p<∞## and ##0≤r< ∞## (d) ##0≤p<∞## and ##-∞≤r< ∞## (e) ##-∞<p<∞## and ##-∞<r< ∞## Question 415 income and capital returns, real estate, no explanation You just bought a residential apartment as an investment property for $500,000. You intend to rent it out to tenants. They are ready to move in, they would just like to know how much the monthly rental payments will be, then they will sign a twelve-month lease. You require a total return of 8% pa and a rental yield of 5% pa. What would the monthly paid-in-advance rental payments have to be this year to receive that 5% annual rental yield? Also, if monthly rental payments can be increased each year when a new lease agreement is signed, by how much must you increase rents per year to realise the 8% pa total return on the property? Ignore all taxes and the costs of renting such as maintenance costs, real estate agent fees, utilities and so on. Assume that there will be no periods of vacancy and that tenants will promptly pay the rental prices you charge. Note that the first rental payment will be received at t=0. The first lease agreement specifies the first 12 equal payments from t=0 to 11. The next lease agreement can have a rental increase, so the next twelve equal payments from t=12 to 23 can be higher than previously, and so on forever. (a) $1,997.78 per month in the first year, with 3% annual increases. (b) $2,083.33 per month in the first year, with 3% annual increases. (c) $2,157.60 per month in the first year, with 3% annual increases. (d) $2,171.49 per month in the first year, with 3% annual increases. (e) $3,333.33 per month in the first year, with 0% annual increases. The perpetuity with growth formula is: ###P_0= \dfrac{C_1}{r-g}### Which of the following is NOT equal to the total required return (r)? (a) ## \dfrac{C_1}{P_0} +g ## (b) ## \dfrac{C_1}{P_0} + \dfrac{P_1}{P_0} -1 ## (c) ## \dfrac{C_1+P_1 - 1}{P_0} ## (d) ## \dfrac{C_1}{P_0} + \dfrac{P_1-P_0}{P_0} ## (e) ## \dfrac{C_1+P_0.g}{P_0} ## Which of the following investable assets is the LEAST suitable for valuation using PE multiples techniques? (a) Common equity in a small private company. (b) Common equity in a listed public company. (c) Commercial real estate. (d) Ten year commercial real estate lease. (e) Residential real estate. Question 398 financial distress, capital raising, leverage, capital structure, NPV A levered firm has zero-coupon bonds which mature in one year and have a combined face value of $9.9m. Investors are risk-neutral and therefore all debt and equity holders demand the same required return of 10% pa. In one year the firm's assets will be worth: $13.2m with probability 0.5 in the good state of the world, or $6.6m with probability 0.5 in the bad state of the world. A new project presents itself which requires an investment of $2m and will provide a certain cash flow of $3.3m in one year. The firm doesn't have any excess cash to make the initial $2m investment, but the funds can be raised from shareholders through a fairly priced rights issue. Ignore all transaction costs. Should shareholders vote to proceed with the project and equity raising? What will be the gain in shareholder wealth if they decide to proceed? (a) Yes, $3m (b) Yes, $1.5m (c) Yes, $1m (d) No, -$0.5m (e) No, -$1.5m Which firms tend to have high forward-looking price-earnings (PE) ratios? (a) Illiquid small private companies. (b) Exchange-listed companies operating in stagnant industries with negative growth prospects. 
(c) Exchange-listed companies expected to have temporarily high earnings over the next year, but with lower earnings later. (d) Exchange-listed companies operating in high-risk industries with very high required returns on equity. (e) Exchange-listed companies whose assets include a very large proportion of cash. Estimate the Chinese bank ICBC's share price using a backward-looking price earnings (PE) multiples approach with the following assumptions and figures only. Note that the renminbi (RMB) is the Chinese currency, also known as the yuan (CNY). The 4 major Chinese banks ICBC, China Construction Bank (CCB), Bank of China (BOC) and Agricultural Bank of China (ABC) are comparable companies; ICBC 's historical earnings per share (EPS) is RMB 0.74; CCB's backward-looking PE ratio is 4.59; BOC 's backward-looking PE ratio is 4.78; ABC's backward-looking PE ratio is also 4.78; Note: Figures sourced from Google Finance on 25 March 2014. Share prices are from the Shanghai stock exchange. (a) RMB 6.4595 (b) RMB 6.3739 (c) RMB 6.3311 (d) RMB 3.4903 (e) RMB 3.3966 Question 707 continuously compounding rate, continuously compounding rate conversion Convert a 10% effective annual rate ##(r_\text{eff annual})## into a continuously compounded annual rate ##(r_\text{cc annual})##. The equivalent continuously compounded annual rate is: (a) 230.258509% pa Convert a 10% continuously compounded annual rate ##(r_\text{cc annual})## into an effective annual rate ##(r_\text{eff annual})##. The equivalent effective annual rate is: A continuously compounded monthly return of 1% ##(r_\text{cc monthly})## is equivalent to a continuously compounded annual return ##(r_\text{cc annual})## of: A continuously compounded semi-annual return of 5% ##(r_\text{cc 6mth})## is equivalent to a continuously compounded annual return ##(r_\text{cc annual})## of: Question 286 bill pricing A 30-day Bank Accepted Bill has a face value of $1,000,000. The interest rate is 2.5% pa and there are 365 days in the year. What is its price now? On 27/09/13, three month Swiss government bills traded at a yield of -0.2%, given as a simple annual yield. That is, interest rates were negative. If the face value of one of these 90 day bills is CHF1,000,000 (CHF represents Swiss Francs, the Swiss currency), what is the price of one of these bills? (a) CHF 999,507.09 (b) CHF 1,000,000.00 (c) CHF 1,000,493.39 (d) CHF 1,002,004.01 (e) CHF 1,002,498.39 Question 513 stock split, reverse stock split, stock dividend, bonus issue, rights issue (a) A 3 for 2 stock split means that for every 2 existing shares, all shareholders will receive 1 extra share. (b) A 3 for 10 bonus issue means that for every 10 existing shares, all shareholders will receive 3 extra shares. (c) A 20% stock dividend means that for every 10 existing shares, all shareholders will receive 2 extra shares. (d) A 1 for 10 reverse stock split means that for every 10 existing shares, all shareholders will lose 9 shares, so they will only be left with 1 share. (e) A 3 for 10 rights issue at a subscription price of $8 means that for every 10 existing shares, all shareholders can sell 3 of their shares back to the company at a price of $8 each, so shareholders receive money. Question 566 capital structure, capital raising, rights issue, on market repurchase, dividend, stock split, bonus issue A company's share price fell by 20% and its number of shares rose by 25%. Assume that there are no taxes, no signalling effects and no transaction costs. 
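The continuously compounded rate conversions and the simple-interest bill pricing used in the questions above can be sketched as follows (the helper function name is my own):

```python
import math

# Converting between effective and continuously compounded rates.
r_cc_from_eff = math.log(1 + 0.10)        # 10% effective annual -> ~9.531% pa cc
r_eff_from_cc = math.exp(0.10) - 1        # 10% pa cc -> ~10.517% effective annual
r_cc_annual_from_monthly = 0.01 * 12      # cc rates add over time: 1% per month -> 12% pa

# Discount securities (bank bills) are priced with simple interest over the days to maturity.
def bill_price(face, simple_annual_yield, days, days_in_year=365):
    return face / (1 + simple_annual_yield * days / days_in_year)

print(round(r_cc_from_eff, 6), round(r_eff_from_cc, 6), r_cc_annual_from_monthly)
print(round(bill_price(1_000_000, 0.025, 30), 2))    # 30-day bill at 2.5% pa
print(round(bill_price(1_000_000, -0.002, 90), 2))   # 90-day Swiss bill at a negative yield
```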
Which one of the following corporate events may have happened? (a) $1 cash dividend when the pre-announcement stock price was $5. (b) On-market buy-back of 20% of the company's outstanding stock. (c) 5 for 4 stock split. (d) 1 for 5 bonus issue. (e) 1 for 4 rights issue at a subscription price of $3 when the pre-announcement stock price was $5. Question 567 stock split, capital structure A company conducts a 4 for 3 stock split. What is the percentage change in the stock price and the number of shares outstanding? The answers are given in the same order. (b) -25%, 33.33% (c) -25%, 25% (d) -20%, 25% (e) 33.33%, -25% Which of the following investable assets are NOT suitable for valuation using PE multiples techniques? (a) Common equity in a listed public mining extraction company that will cease operations, wind up, and return all capital to shareholders in one year when its sole gold mine becomes depleted. (b) Common equity in a small private owner-operated mining services company whose main asset is its sole tanker truck which delivers fuel to mines. The firm's shares are 100% owned by Bob, the driver of the tanker truck. (c) Common equity in a large listed public company in the banking industry. (d) Residential apartment real estate. (e) Commercial warehouse real estate. Which firms tend to have low forward-looking price-earnings (PE) ratios? Only consider firms with positive earnings, disregard firms with negative earnings and therefore negative PE ratios. (b) High growth technology firms. (d) Firms with a very low level of systematic risk. (a) Common equity in small private companies. (b) Common equity in large private companies. (c) Common equity in listed public companies. (d) Fixed term bonds of listed public companies. (e) Real estate. Question 346 NPV, annuity due Your poor friend asks to borrow some money from you. He would like $1,000 now (t=0) and every year for the next 5 years, so there will be 6 payments of $1,000 from t=0 to t=5 inclusive. In return he will pay you $10,000 in seven years from now (t=7). What is the net present value (NPV) of lending to your friend? Assume that your friend will definitely pay you back so the loan is risk-free, and that the yield on risk-free government debt is 10% pa, given as an effective annual rate. Question 345 capital budgeting, break even, NPV Project life 10 yrs Initial investment in factory $10m Depreciation of factory per year $1m Expected scrap value of factory at end of project $0 Cost of capital per annum 10% The firm's current liabilities are forecast to stay at $0.5m. The firm's current assets (mostly inventory) is currently $1m, but is forecast to grow by $0.1m at the end of each year due to the project. At the end of the project, the current assets accumulated due to the project can be sold for the same price that they were bought. A marketing survey was used to forecast sales. It cost $1.4m which was just paid. The cost has been capitalised by the accountants and is tax-deductible over the life of the project, regardless of whether the project goes ahead or not. This amortisation expense is not included in the depreciation expense listed in the table above. Find the break even unit production (Q) per year to achieve a zero Net Income (NI) and Net Present Value (NPV), respectively. The answers below are listed in the same order. (a) 750,000 and 750,000. (b) 750,000 and 951,682. (c) 750,000 and 987,396. (d) 750,000 and 1,009,805. (e) 750,000 and 1,024,807. 
Question 329 DDM, expected and historical returns ### P_0= \frac{d_1}{r-g} ### The pronumeral ##g## is supposed to be the: (a) Expected future growth rate of the dividend. (b) Actual historical growth rate of the dividend. (c) Expected future growth rate of the total required return on equity r. (d) Actual historical growth rate of the market share price. (e) Actual historical growth rate of the return on equity (ROE) defined as (Net Income / Owners Equity). ### p_0= \frac{c_1}{r-g} ### Which expression is equal to the expected dividend return? (a) ##(c_1/p_0 ) -1## (b) ##(p_1/p_0) -1## (c) ##(c_5/c_4) -1## (d) ##c_3/p_2## (e) ##(p_1-p_0)/p_0## Is it possible for all countries' exchange rates to appreciate by 5% in the same year? or ? Question 308 risk, standard deviation, variance, no explanation A stock's standard deviation of returns is expected to be: 0.09 per month for the first 5 months; 0.14 per month for the next 7 months. What is the expected standard deviation of the stock per year ##(\sigma_\text{annual})##? Assume that returns are independently and identically distributed (iid) and therefore have zero auto-correlation. (a) ##\sigma_\text{annual} = 0.09 \times 5 + 0.14 \times 7## (b) ##\sigma_\text{annual} = (0.09 \times 5 + 0.14 \times 7)^{1/2}## (c) ##\sigma_\text{annual} = (0.09^2 \times 5 + 0.14^2 \times 7)^{1/2}## (d) ##\sigma_\text{annual} = (1+0.09)^5\times (1+0.14)^7 - 1## (e) ##\sigma_\text{annual} = \left( \dfrac{0.09^2 \times 5 + 0.14^2 \times 7}{12} \right)^{1/2}## Question 304 option Which one of the following is NOT usually considered an 'investable' asset for long-term wealth creation? (a) Shares (ordinary and preference stocks). (b) Real estate (land and buildings). (c) Options (calls and puts). (d) Human capital (education) (e) Debt (bond and loan assets) Question 297 implicit interest rate in wholesale credit You just bought $100,000 worth of inventory from a wholesale supplier. You are given the option of paying within 5 days and receiving a 2% discount, or paying the full price within 60 days. You actually don't have the cash to pay within 5 days, but you could borrow it from the bank (as an overdraft) at 10% pa, given as an effective annual rate. In 60 days you will have enough money to pay the full cost without having to borrow from the bank. What is the implicit interest rate charged by the wholesale supplier, given as an effective annual rate? Also, should you borrow from the bank in 5 days to pay the supplier and receive the discount? Or just pay the full price on the last possible date? Assume that there are 365 days per year. (a) 0.143476 is the implicit rate. The buying firm should borrow from the bank and pay at day 5. (b) 0.130771 is the implicit rate. The buying firm should NOT borrow from the bank and just pay at day 60. (c) 0.125473 is the implicit rate. The buying firm should borrow from the bank and pay at day 5. (d) 0.120131 is the implicit rate. The buying firm should NOT borrow from the bank and just pay at day 60. (e) 0.020408 is the implicit rate. The buying firm should borrow from the bank and pay at day 5. Question 292 standard deviation, risk Find the sample standard deviation of returns using the data in the table: Year Return pa The returns above and standard deviations below are given in decimal form. Question 270 real estate, DDM, effective rate conversion You own an apartment which you rent out as an investment property. What is the price of the apartment using discounted cash flow (DCF, same as NPV) valuation? 
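The implicit interest rate in the wholesale credit question above comes from treating the forgone 2% discount as the cost of borrowing for the 55 days between the two payment dates, annualised as an effective annual rate:

```python
# Paying $98 per $100 at day 5 versus $100 at day 60 is like borrowing for
# 55 days at a periodic cost of 2/98, then annualising that rate.
discount, days_early, days_full, days_in_year = 0.02, 5, 60, 365
period_days = days_full - days_early

implicit_rate = (1 / (1 - discount)) ** (days_in_year / period_days) - 1
bank_rate = 0.10
print(round(implicit_rate, 6))      # about 0.143476
print(implicit_rate > bank_rate)    # True: the supplier's implicit credit is dearer than
                                    # the bank overdraft, so borrowing from the bank and
                                    # taking the discount looks cheaper
```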
You just signed a contract to rent the apartment out to a tenant for the next 12 months at $2,000 per month, payable in advance (at the start of the month, t=0). The tenant is just about to pay you the first $2,000 payment. The contract states that monthly rental payments are fixed for 12 months. After the contract ends, you plan to sign another contract but with rental payment increases of 3%. You intend to do this every year. So rental payments will increase at the start of the 13th month (t=12) to be $2,060 (=2,000(1+0.03)), and then they will be constant for the next 12 months. Rental payments will increase again at the start of the 25th month (t=24) to be $2,121.80 (=2,000(1+0.03)^2), and then they will be constant for the next 12 months until the next year, and so on. The required return of the apartment is 8.732% pa, given as an effective annual rate. Ignore all taxes, maintenance, real estate agent, council and strata fees, periods of vacancy and other costs. Assume that the apartment will last forever and so will the rental payments.

You're trying to save enough money for a deposit to buy a house. You want to buy a house worth $400,000 and the bank requires a 20% deposit ($80,000) before it will give you a loan for the other $320,000 that you need. You currently have no savings, but you just started working and can save $2,000 per month, with the first payment in one month from now. Bank interest rates on savings accounts are 4.8% pa with interest paid monthly and interest rates are not expected to change. How long will it take to save the $80,000 deposit? Round your answer up to the nearest month.
(a) 27 months (t=27 months).
(b) 38 months (t=38 months).
(c) 40 months (t=40 months).
(d) 43 months (t=43 months).
(e) 79 months (t=79 months).
Malaria Journal

A validated agent-based model to study the spatial and temporal heterogeneities of malaria incidence in the rainforest environment

Francesco Pizzitutti1, William Pan2, Alisson Barbieri3, J Jaime Miranda4, Beth Feingold5, Gilvan R. Guedes6, Javiera Alarcon-Valenzuela1 & Carlos F. Mena1

Malaria Journal volume 14, Article number: 514 (2015)

The Amazon environment has been exposed in recent decades to radical changes that have been accompanied by a remarkable rise of both Plasmodium falciparum and Plasmodium vivax malaria. The malaria transmission process is highly influenced by factors such as spatial and temporal heterogeneities of the environment and individual-based characteristics of mosquito and human populations. All these determinant factors can be simulated effectively through agent-based models. This paper presents a validated agent-based model of local-scale malaria transmission. The model reproduces the environment of a typical riverine village in the northern Peruvian Amazon, where malaria transmission is highly seasonal and apparently associated with the flooding of large areas caused by the neighbouring river. Agents representing humans, mosquitoes and the two species of Plasmodium (P. falciparum and P. vivax) are simulated in a spatially explicit representation of the environment around the village. The model environment includes climate, house positions and elevation. A representation of changes in the extension of mosquito breeding areas caused by river flooding is also included in the simulation environment. A calibration process was carried out to reproduce the variations of monthly malaria incidence over a period of 3 years. The calibrated model is also able to reproduce the spatial heterogeneities of local-scale malaria transmission. A "what if" eradication strategy scenario is proposed: if the mosquito breeding sites are eliminated through larval habitat management in a buffer area extending at least 200 m around the village, malaria transmission is eradicated from the village. The use of agent-based models can reproduce effectively the spatiotemporal variations of malaria transmission in a low-endemicity environment dominated by river floodings like the Amazon.

The Amazon behaves as a complex system [1, 2], where socioeconomic, ecological and biophysical interactions occur between the human and the natural environments. In recent decades, part of the Amazon environment has been exposed to radical changes that have generated severe alterations of the ecological equilibrium of native ecosystems [3]. In the northern Peruvian Amazon, the driving force of this change has been the influx of new immigrants that caused the expansion of the rural frontier [4]. In this context, activities such as swidden-fallow agriculture have transformed the primary forest cover into a mix of cleared areas and secondary forest regrowth spots [5, 6]. This land cover modification process changed the species composition of mosquito malaria vectors. The extension of riverine or floodable areas only partially covered by secondary regrowth generated new breeding areas for a highly competent malaria vector, Anopheles darlingi [7, 8]. The reappearance of A. darlingi, which had been considered eradicated in the decades before 1990, was accompanied, during the period from 1990 to 2000, by a remarkable rise of both Plasmodium falciparum and Plasmodium vivax malaria in the northern Peruvian Amazon [9].
Malaria is a vector-borne parasitic disease caused by Plasmodium protozoa that are transmitted to humans through the bites of infectious mosquitoes of the genus Anopheles. Malaria is widespread around the world, with nearly one half of the world population at risk and an estimated 198 million cases and 584,000 deaths in 2013 [10]. Many elements interacting at several spatial and time scales contribute to the malaria infection dynamics. Environmental factors, such as climate, presence and characteristics of water bodies, weather, economic activities and social conditions, interact with individual-based characteristics and behaviours of mosquitoes and humans to give rise to complex spatial and temporal patterns. Barbieri et al. [11] describe how the complex interactions between economic land uses (garimpo, or small-scale gold mining, urban activities and agricultural activities) and population characteristics may determine distinct risk profiles of malaria prevalence in the Brazilian Amazon. Furthermore, every individual of the involved populations has its own life trajectory, which is partly determined by environmental factors and which in turn alters the environment and contributes to determine the life trajectories of other individuals. The complex network of interactions involved in malaria transmission generates feedbacks, non-linear behaviours and heterogeneities that are not easy to handle within ordinary mathematical models based on differential equations. Standard mathematical models of malaria transmission that evolved from the original formulation of Ross and MacDonald [12, 13] are based on a compartmental structure [14]. In such models, individuals are categorized into homogeneous groups and differential equations control the transitions from one compartment to another. Although some compartmental models have tried to handle spatial heterogeneities abstractly [15], for the sake of simplicity compartmental models are usually based on the assumption of homogeneity inside the same compartment. Crucial factors in determining malaria transmission, such as the relative positions of houses and water bodies, environmental heterogeneities or individual-based behaviours, are difficult to represent in an epidemiological compartment model. The natural approach to handle the individual-based properties and spatial heterogeneities characteristic of vector-borne infection transmission is to make use of spatially explicit agent-based models (ABM). In ABMs the individuals are represented explicitly in a simulation code as agents interacting among themselves and with a representation of the natural environment [16]. In the field of malaria transmission modelling, spatially explicit ABMs have been used to simulate the effect of management scenarios of aquatic mosquito habitats [17], to study the coupling of hydrologic dynamics with mosquito density [18], to study the spatial heterogeneities of mosquito distribution around water pools in a Sahelian village [19], to assess the risk of malaria re-emergence in southern France [20] and to perform other similar analyses [21, 22]. The ABM presented in this paper integrates geographical analysis with entomology, demography and epidemiology in a simulation tool able to reproduce the local-scale malaria incidence dynamics in a small riverine village of nearly 1400 inhabitants located in the northern Peruvian Amazon.
This study is a first attempt to represent, through a validated mechanism-based simulation, the local-scale malaria infection dynamics mediated by A. darlingi in a region dominated by the flooding of a river, as is typical in the Amazon and in other tropical regions. The ABM is data-driven and validated: environmental data freely available from the Internet are combined with field epidemiological observations to design a simulation tool able to reproduce the spreading of malaria disseminated by mosquitoes throughout the human population.

Purpose of the model
The first objective of the presented ABM is to represent the relationships, feedbacks, and behaviours involved in malaria transmission observed in a typical Amazon environment. This objective consists in demonstrating that the mechanisms and factors that drive the dynamics of the malaria epidemics in the study area, during the study period, are included in the model in such a way that the main emergent patterns of the real-world system are reproduced. One of the primary mechanism-based explanations of the present model consists in demonstrating that the observed temporal changes in malaria incidence can be reproduced through a correct representation of the flooding generated by river level changes. Accordingly, one of the most important components of the model is the calibration and validation process through which the simulation outputs are validated against real-world epidemiological data. The second objective of the model is to study the dependence of malaria risk on spatial heterogeneities, which are linked to the heterogeneous distribution of water bodies around the houses where people live. A third objective of the model is to study the effect of possible management strategies for the aquatic mosquito breeding sites around the village. An additional objective of this ABM is to build a baseline for further data collection and model developments, which in the future will better account for the complexity of the relationships between environmental, entomological and epidemiological factors in determining the malaria transmission dynamics at a local scale.

Short model description
The ABM presented here is designed to reproduce the malaria transmission dynamics in a small, typical Amazon riverine village named Padre Cocha, located in the Department of Loreto in the northern Peruvian Amazon (see Fig. 1). As shown in Fig. 2, the Nanay River, which flows nearly 500 m from the village, shows remarkable seasonal level variations (an average yearly change of 10 m), as is usual in the Amazon. These changes produce floods that seasonally cover nearly half of the territory around Padre Cocha. Malaria transmission in Padre Cocha is strongly influenced by the seasonal river floodings because the flooded areas are ideal aquatic habitats for mosquitoes. The variations in the extension of the flooded areas produce changes in the mosquito density and consequently in the malaria incidence. The model includes a number of elements that are considered important [17, 18, 21, 23] in determining the characteristics of malaria transmission, such as: environmental features; biology, ecology and ethology of the mosquito vector; human host behaviour; asymptomaticity; and Plasmodium response to anti-malarial treatment. Mosquito and human agents interact among themselves and with a spatially explicit representation of the area around the study village.
The agent representation includes features describing the malaria infection state and the relevant aspects of the life history of every individual mosquito and human. The environment representation includes climate (rain and temperature), changes of the water cover, the distribution of households in the village and a digital elevation model. Due to the nocturnal biting behaviour of A. darlingi, humans are represented as agents located inside their respective houses. When an infectious mosquito bites a susceptible human, the human becomes infected and starts to progress through the different malaria infection stages, passing from a prepatent stage to the infectious and symptomatic stages and finally to the recovered stage. Furthermore, if the mosquito is susceptible and the human is infectious, the mosquito starts to develop the Plasmodium infection. A mosquito can become infectious if it lives long enough to complete the Plasmodium maturation inside its body (sporogonic cycle). A mosquito never recovers from the infection. The mosquitoes are designed to reproduce the characteristics of A. darlingi, the main malaria vector in the study area. The two species of malaria Plasmodium, P. falciparum and P. vivax, observed in the study area during the study period are included in the model.

Fig. 1 Location of the village of Padre Cocha near Iquitos. Iquitos is the capital of the Loreto department in the northern Peruvian Amazon. In the small inset, the location of the Department of Loreto (red) is shown with respect to the rest of Peru (dark grey) and to the rest of the Amazon (green)

Fig. 2 The simulation area. Water covers corresponding to low and high Nanay river levels around Padre Cocha village are represented in different colors. The Nanay River low-level water cover corresponds to the digitalization of a satellite image taken on 12 September 2013, while the high-level water cover corresponds to an image taken on 24 March 2013. The georeferenced household positions of the year 1998 are from the paper of Bautista et al. [27]. The entire area shown in the figure is included in the simulation area

Process overview and scheduling
The time step of the model is 1 h. Every 12 h, mosquito agents emerge from the simulation pixels covered by water where other mosquitoes laid eggs and where the water cover has endured for a period of time longer than the mosquito aquatic development time. During their life in the simulation, the mosquito agents try to have blood meals and subsequently to lay eggs. To achieve these goals the mosquitoes, only during night time, move through the simulation area, changing their position from one pixel of the mosquito grid cover to an adjacent one. Human agents are assigned a fixed position corresponding to the house they belong to. The Plasmodium agents are represented as living inside the body of human and mosquito agents, passing through several stages of development that correspond to different stages of the infection of both vectors and hosts. The water cover of the simulation area changes according to the Nanay River level registered on the date corresponding to the simulation time: higher river levels inundate more extended areas.
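As an illustration only, the scheduling just described could be organised along the following lines in plain Java; the class and method names are hypothetical placeholders and do not correspond to the actual MASON implementation of the model.

```java
// Hypothetical sketch of the scheduling described above: a 1-hour time step, adult
// emergence checked every 12 h, nocturnal mosquito movement between 7 pm and 5 am,
// and a daily water-cover update driven by the recorded Nanay river level.
// The called methods are placeholders, not the model's actual code.
public class SchedulerSketch {

    static boolean isNight(int hourOfDay) {
        return hourOfDay >= 19 || hourOfDay < 5;
    }

    static void step(long hoursSinceStart) {
        int hourOfDay = (int) (hoursSinceStart % 24);
        if (hoursSinceStart % 12 == 0) {
            emergeAdultsFromFloodedPixels();      // every 12 h
        }
        if (hoursSinceStart % 24 == 0) {
            updateWaterCoverFromRiverLevel();     // once per simulated day
        }
        if (isNight(hourOfDay)) {
            moveMosquitoesAndAttemptBites();      // nocturnal activity only
        }
        updateInfectionStages();                  // humans, mosquitoes and Plasmodium agents
    }

    // Placeholder hooks standing in for the corresponding model submodules.
    static void emergeAdultsFromFloodedPixels() {}
    static void updateWaterCoverFromRiverLevel() {}
    static void moveMosquitoesAndAttemptBites() {}
    static void updateInfectionStages() {}

    public static void main(String[] args) {
        for (long h = 0; h < 24 * 365; h++) {     // one simulated year of hourly steps
            step(h);
        }
    }
}
```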
Model calibration and validation
The process of validation of a model can be defined as the process of verifying that the model produces outputs in line with the design objectives and with empirical observations [24, 25]. Through validation against empirical data it is possible to verify whether the stylized representation of the real world implemented in the model is accurate enough to generate emergent behaviours that are similar to those of the real-world system. The ABM outputs have been validated against the observed monthly series of malaria incidence in Padre Cocha during the period from the beginning of 1996 to the end of 1998. To validate the ABM a simple strategy was followed. The parameters of the model for which empirical data are not known were selected as elements of what is called the calibration vector [26]. A first reasonable starting value of the calibration vector was selected and 24 versions of the model were built, assigning slightly different values to the calibration vector. The simulated monthly malaria incidence, obtained as the average over eight runs of every model version, was then compared with the observed historical data of monthly malaria incidence. The simulated yearly incidences, separately for P. vivax and P. falciparum, were also compared with the corresponding observed yearly incidences. The version of the model producing simulated outputs closest to the observed data was used as the starting point to build 24 additional versions of the model. Then 24 more versions were produced starting from the model that gave the best output, and so on iteratively. To complete the calibration process, 680 different values of the calibration vector were tested, for a total of 5440 independent simulation runs. The simulation outputs presented in the Results section are obtained as averages over 48 repeated runs of the calibrated model. This number of repetitions was chosen to be high enough to keep the statistical error on the main observables below 5 %. Every production run starts with a 365-day equilibration period during which all the environmental parameters are constant and equal to the values of the first simulation day. Once the model was validated, the simulation outputs were compared with the observed spatial clusters of high malaria risk detected in the same study period between 1996 and 1998 by Bautista et al. in a retrospective surveillance study [27]. The spatial cluster analysis was carried out using the purely spatial scan statistic developed by Kulldorff [28] and implemented in the program SaTScan [29]. Kulldorff's method uses circular windows of variable size. The circular windows are centered on the position of every house in the simulation area and the radius of the windows is changed continuously from 0 up to a maximum size that includes 10 % of the population at risk. The SaTScan program compares the malaria counts within the scan windows with the expected number of cases resulting from Monte Carlo simulations. The window that corresponds to the maximum likelihood is then selected and the number of cases inside this window is compared with the null hypothesis of no significant clustering. The analysis is carried out assuming that the malaria counts in each household follow a Poisson distribution. When the null hypothesis is true, the expected number of cases is proportional to the number of people living in the house.
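Purely as an illustration, the iterative calibration search described above can be sketched as follows; CalibrationSketch, perturb() and runModel() are hypothetical stand-ins for the model's actual code, while the counts of 24 candidate versions per iteration and eight runs per candidate follow the text.

```java
// Hedged sketch of the iterative calibration search described in the text: at each
// iteration, 24 slightly perturbed copies of the current best calibration vector are
// scored against the observed monthly incidence (each averaged over 8 runs), and the
// best-scoring copy seeds the next iteration. All names here are illustrative.
public class CalibrationSketch {
    static final int CANDIDATES_PER_ITERATION = 24;
    static final int RUNS_PER_CANDIDATE = 8;

    // Placeholder: run the ABM once with the given parameters and random seed and
    // return the simulated monthly malaria incidence (36 months in the study period).
    static double[] runModel(double[] params, long seed) {
        return new double[36];
    }

    // Goodness of fit: sum of squared differences between the run-averaged simulated
    // monthly incidence and the observed monthly incidence.
    static double score(double[] params, double[] observed) {
        double[] mean = new double[observed.length];
        for (int r = 0; r < RUNS_PER_CANDIDATE; r++) {
            double[] sim = runModel(params, r);
            for (int m = 0; m < mean.length; m++) mean[m] += sim[m] / RUNS_PER_CANDIDATE;
        }
        double sse = 0;
        for (int m = 0; m < mean.length; m++) sse += (mean[m] - observed[m]) * (mean[m] - observed[m]);
        return sse;
    }

    // Small random tweak of one free parameter (illustrative +/- 5 % perturbation).
    static double[] perturb(double[] base, java.util.Random rng) {
        double[] copy = base.clone();
        int i = rng.nextInt(copy.length);
        copy[i] *= 1.0 + 0.1 * (rng.nextDouble() - 0.5);
        return copy;
    }

    static double[] calibrate(double[] start, double[] observed, int iterations) {
        java.util.Random rng = new java.util.Random(42);
        double[] best = start;
        double bestScore = score(best, observed);
        for (int it = 0; it < iterations; it++) {
            for (int c = 0; c < CANDIDATES_PER_ITERATION; c++) {
                double[] candidate = perturb(best, rng);
                double s = score(candidate, observed);
                if (s < bestScore) {
                    bestScore = s;
                    best = candidate;
                }
            }
        }
        return best;
    }

    public static void main(String[] args) {
        double[] observedMonthlyIncidence = new double[36];   // placeholder data
        double[] startVector = {0.5, 0.5, 0.5};               // placeholder free parameters
        System.out.println(java.util.Arrays.toString(calibrate(startVector, observedMonthlyIncidence, 10)));
    }
}
```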
Software and hardware environment
The model has been implemented in the MASON [30] environment. MASON is a free, Java-based, discrete-event, multi-agent simulation library core used to reduce the repetitive code-writing effort necessary to develop an ABM. Every simulation run required approximately 5 CPU hours on one core of an Intel Core i7-800 processor.

Environmental module
The environment in an ABM is the space where the agents live. The interactions between the agents and the environment are crucial in determining the type and the characteristics of the emergent properties of the ABM. The detailed list of the environmental module parameters is shown in Table 1.

Table 1 Environmental module parameters

The study area is limited by a rectangular bounding box of 2370 × 2520 m, centered over Padre Cocha [3°41′54.17″S, 73°16′43.24″W]. Padre Cocha is located 5.5 km from the city of Iquitos, the capital of the Loreto department in Peru. The land cover around Padre Cocha is typical of a peri-urban area in the Peruvian Amazon: much of the cover consists of cleared areas that mix with secondary forest growth. Part of the land around Padre Cocha is covered throughout the year by permanent stretches of water that include a big lagoon (cocha in the Quechua language) situated in the south-east area of the village. Approximately 500 m from the village flows the Nanay River, a tributary of the Amazon River. Mosquitoes find breeding sites in every permanent or seasonal stretch of water around the village, including the cocha [31]. Padre Cocha has a typical equatorial climate: the rainfall level is high all year round and there is no sharp distinction between dry and rainy seasons. The average annual rainfall is 2616 mm, with the highest monthly rainfall intensity in April (310 mm) and December (282 mm) and the lowest rainfall intensity in August (165 mm). The average annual temperature is 26.7 °C with a limited variability range, as usual in the Amazon (21–33 °C). The average humidity is 90 % (59–100 %). The average wind speed is 1 m/s with peaks of 4 m/s. The Nanay river level rises from November to February, while it falls from May to July. The period from March to May is the high-level period of the Nanay River, whereas the period from August to October is the low-level period. The meteorological data included in the implementation of the ABM are the daily rainfall and the daily minimum and maximum temperature. All the daily meteorological data used in this paper were obtained from the records of the meteorological station number 843770 (SPQT) located close to the Iquitos airport, approximately 9500 m from Padre Cocha.

The simulation grid covers
The study area is covered by two distinct square grids of different cell size. The first grid cover is called the mosquito grid cover and is used to specify the positions of mosquito agents. When mosquito agents are looking for blood meals or aquatic habitats, during night hours, they move every simulation time step from one mosquito grid pixel to another adjacent one. The mosquito grid cell size is calibrated to obtain flight ranges that are consistent with the majority of field observations, which report flight ranges for A. darlingi of around 800–1500 m [32–35]. Periodic boundary conditions are imposed on the mosquito grid cover so that pairs of opposing sides of the bounding box area are topologically equivalent. The use of periodic boundary conditions is equivalent to having periodic copies of the simulation village all around the central simulation area.
These copies are separated in space by the size of the simulation domain (2370 × 2520 m), reproducing with good approximation the characteristics of the area around Padre Cocha, where several populated areas, including the outermost fringes of the city of Iquitos, are scattered around the village at distances ranging from 2200 to 2600 m in all directions. The remaining elements of the study area are projected onto the second grid cover, which is called the geographical grid cover. The spatial resolution of this grid is selected according to the spatial resolution of the GIS imagery employed to build the river flooding model subcomponent.

Hydrological submodule
The hydrological submodule implements an algorithm that reproduces in a simple way the floodings generated by the seasonal rise and fall of the Nanay River level in the area around Padre Cocha. The algorithm of the hydrological submodule converts a Nanay river level into a corresponding water cover. The input data of the hydrological submodule are: the monthly series of the Nanay River levels during the study period [27], two high-resolution satellite images of the study area and an elevation model of the study area [36]. The two high-resolution satellite images were obtained from Google Earth and correspond respectively to dates when the level of the Nanay River was near its maximum and near its minimum. Through a GIS analysis of the two high-resolution images, a digitalized representation of the water cover in the periods of high and low Nanay River level was extracted. The two covers are shown in Fig. 2. The mechanism underlying the hydrological submodule is simple: every 24 h during the simulation, the recorded value of the Nanay River level is read from the corresponding time series. Then the pixels that are covered by water in the high Nanay River water cover image and that have an elevation below the recorded Nanay River level are marked as covered by water.

Mosquito breeding sites
During the simulation, every geographical grid cover pixel covered by water is considered suitable as a breeding site, with the exception of the central part of rivers, where the water flow would flush out eggs and larvae. Mosquito agents are generated from aquatic breeding sites where other mosquito agents have previously laid eggs. The number of emergent mosquitoes per oviposition is determined by the model parameter "Number of mosquito agents generated per oviposition". This number depends on several factors, such as the mosquito species and the biophysical characteristics of the aquatic habitats. A recent study [37] conducted on A. darlingi laboratory colonies reported a ratio of 6.6 emergent adults per oviposition. Although laboratory conditions can be very different from the natural environment, this value is adopted for the simulations. To include the carrying capacity of the aquatic habitats in the model [23], a maximum number of emergent adults per unit of breeding site area per half day is set. Several studies [38–40] reported that the mosquito density and the biting rate of A. darlingi fall almost to zero during the periods corresponding to the low Nanay river level. This may be due, among other things, to permanent water covers being less suitable as aquatic habitats than flooded areas. To reproduce this empirical observation in the model, all the permanent water bodies generate a number of adult mosquito agents that is decreased with respect to the number produced in the flooded areas by a factor whose value is specified by the state variable "Reduction factor of the number of mosquitoes generated per unit of breeding area per unit of time in permanent water covers".
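A compact Java sketch of the two rules just described, the daily water-cover update and the per-pixel adult emergence, is given below. The array and parameter names are illustrative; the carrying-capacity cap and the permanent-water reduction factor are calibrated model parameters and are therefore passed in as arguments, and treating permanent pools as wet all year is an assumption consistent with the low-river water cover.

```java
// Hedged sketch of the hydrological update and of adult emergence from breeding pixels.
// A pixel floods when it lies inside the high-river water mask and its elevation is below
// the Nanay level recorded for the current simulation day; permanent pools stay wet all year.
// Emergence follows the text: 6.6 adults per matured oviposition, capped by the habitat
// carrying capacity and scaled down in permanent water bodies.
public class BreedingSiteSketch {

    static final double ADULTS_PER_OVIPOSITION = 6.6;

    static boolean[][] updateWaterCover(boolean[][] highWaterMask, boolean[][] permanentWater,
                                        double[][] elevation, double riverLevel) {
        int rows = elevation.length, cols = elevation[0].length;
        boolean[][] water = new boolean[rows][cols];
        for (int r = 0; r < rows; r++) {
            for (int c = 0; c < cols; c++) {
                boolean flooded = highWaterMask[r][c] && elevation[r][c] < riverLevel;
                water[r][c] = flooded || permanentWater[r][c];
            }
        }
        return water;
    }

    static int emergingAdults(int maturedOvipositions, boolean permanentWater,
                              double permanentWaterReduction, int maxAdultsPerHalfDay) {
        double adults = maturedOvipositions * ADULTS_PER_OVIPOSITION;
        if (permanentWater) {
            adults *= permanentWaterReduction;        // permanent pools are less productive
        }
        return (int) Math.min(Math.round(adults), maxAdultsPerHalfDay);
    }

    public static void main(String[] args) {
        boolean[][] mask = {{true, true}, {false, true}};
        boolean[][] perm = {{false, false}, {false, true}};
        double[][] elev = {{90.0, 95.0}, {92.0, 88.0}};
        boolean[][] water = updateWaterCover(mask, perm, elev, 93.0);
        System.out.println(water[0][0] + " " + water[0][1] + " " + water[1][1]);  // true false true
        // Two matured ovipositions in a flooded pixel vs. a permanent pool
        // (illustrative reduction factor of 0.1 and cap of 50 adults per half day):
        System.out.println(emergingAdults(2, false, 0.1, 50));   // 13
        System.out.println(emergingAdults(2, true, 0.1, 50));    // 1
    }
}
```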
Entomological module
A. darlingi shows highly anthropophilic and endophagous behaviour [8]. In the Loreto region, A. darlingi has been observed to be the almost exclusive malaria vector [40], especially in rural areas [38] and where the natural environment is highly altered by human activities [41]. For this reason the entomological ABM module of this study has been designed and parameterized using what is currently known about A. darlingi ecology, ethology and biology. Although other species of the genus Anopheles observed in Loreto are competent malaria vectors, the minor relative importance of these species during the study period in determining the process of malaria diffusion, and their similarities with A. darlingi, ensure that representing all the mosquito agents in the ABM as A. darlingi is a reasonably good representation of the individual-based malaria vector dynamics in the Amazon. The values of all the parameters of the entomological module are shown in Table 2.

Table 2 Entomological module parameters

As noted, only the adult stage of mosquitoes is represented explicitly in the model and all the mosquito agents are females. Figure 3 shows the flow chart of the mosquito agent life cycle as implemented in the model. The first action that a mosquito agent accomplishes is mating. Mating behaviour among mosquitoes can differ from species to species [42]. Although little is known specifically about the mating behaviour of A. darlingi, a study of releases and recaptures of A. darlingi mosquitoes conducted in Rondônia, Brazil [43] showed that mating in this species is not preceded by swarm formation and possibly occurs in the proximity of the blood-seeking sites. Therefore a mosquito agent emerging from the aquatic breeding habitat is assigned to the blood-seeking movement mode (see below). The probability distribution shown in Table 3 is used to decide when the agent will mate.

Fig. 3 Flow chart of the mosquito agent time step

Table 3 Probability distribution of mating times after the emergence of mosquito agents from the aquatic breeding site, from: [43]

Blood-seeking and breeding habitat-seeking movements
After mating, the mosquito agent immediately starts to search for a human agent to have a blood meal (blood-seeking movement mode). If the mosquito agent succeeds in having a blood meal, the agent enters the breeding habitat-seeking movement mode to find an aquatic habitat to lay eggs. It is known [44] that the Culicidae family can show long-range responses to nonspecific olfactory substances such as the carbon dioxide produced by animal respiration. Moreover, mosquitoes have been shown to be responsive to long-range stimuli of host-specific olfactory substances. Different compounds from the human body have been identified as specifically attractive for mosquitoes. Furthermore, mosquitoes have vision of poor resolution but high sensitivity, even at night, that enables them to visually differentiate open spaces from closed dark spaces such as forests, even from distances of 15–20 m.
It is still unclear whether mosquitoes can show cognitive abilities such as learning the locations of visited hosts, resting sites and breeding sites [45]. For this reason, in this study any mosquito agent memory of specific site locations is ignored. To represent the sensorial responses that guide the mosquitoes toward their human hosts and toward the breeding site targets, the blood-seeking and the breeding habitat-seeking movements are implemented in the simulation as weighted random walks [46]. In both movement modes the flight direction is completely random while the mosquito agent is far from the movement targets. On the other hand, when the mosquito agent is located in a grid cell adjacent to the targets, the movement is weighted to pull the mosquito agent toward the targets. A nearest-neighbour grid cell of the mosquito agent location is weighted with a probability proportional to a weight w if it contains breeding sites or human households, and (1 − w) in all the other cases (w ∈ [0.5, 1]). The influence of wind on the mosquito host-seeking and breeding site-seeking movements is not included in the model. Although it has been observed for some mosquito species that a wind of 0.83 m/s may reduce host-seeking flights [47], in a study of human-baited landing collection of A. darlingi mosquitoes conducted in Belize in a human-altered rainforest environment [48], it was observed that even an average wind of 8.9 m/s does not significantly alter the number of female mosquitoes collected either outdoors or indoors during night hours. In the northern Peruvian Amazon, A. darlingi shows nocturnal biting behaviour with two weakly pronounced biting peaks around dusk and dawn [38]. Therefore, a non-blood-engorged mosquito agent that is located in a pixel populated by human agents waits until the night-time hours, from 7 pm to 5 am, to try to take a blood meal. If the mosquito is not located in a pixel populated by human agents, it enters the host-seeking mode and moves until it finds a pixel populated by human hosts. Once the appropriate time for biting has come, the mosquito agent randomly chooses one household inside its current location pixel. Then the mosquito chooses at random a human agent associated with the selected household. The host-seeking process is repeated until the mosquito succeeds in having a blood meal. During the daytime hours, the mosquito agent rests and remains in the cell where it was located at 5 am. After the blood meal the mosquito agent always rests [49].

Gonotrophic cycle and oviposition
Once the blood-feeding process is complete, the ingested blood triggers the process of internal egg development in the female mosquito [50]. The gonotrophic cycle is defined as the time elapsed from one blood-feeding to the next and is composed of the time needed for egg maturation and the time from the oviposition to the next blood meal. The time required for egg development can be considered constant in the temperature range of the Amazon and is about 48 h [51]. The times from the oviposition to the blood-feeding and vice versa are not constant and depend on environmental variables, because they are also determined by the time taken by the host-seeking and by the oviposition site-seeking processes. After the bite, the mosquito agent switches to the breeding habitat-seeking movement mode to find an aquatic habitat to lay eggs. The agent moves until it finds a breeding site. If, when the mosquito agent finds a breeding site, the egg maturation process is not yet complete, the agent waits in the same location until oviposition.
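A minimal Java sketch of one step of the weighted random walk used for the blood-seeking and habitat-seeking flights described above is given here; the neighbourhood target test is a placeholder, and the roulette-wheel draw is only one possible way of implementing the weighting by w and (1 − w).

```java
import java.util.Random;

// Hedged sketch of the weighted random walk used for blood-seeking and habitat-seeking
// flight: each of the eight neighbouring cells gets weight w if it contains the current
// target (households or breeding sites) and (1 - w) otherwise, and the next cell is drawn
// in proportion to those weights. With no targets nearby the walk is a plain random walk.
public class WeightedWalkSketch {
    interface TargetTest { boolean isTarget(int x, int y); }

    static int[] nextCell(int x, int y, double w, TargetTest target, Random rng) {
        int[][] neighbours = new int[8][2];
        double[] weights = new double[8];
        double total = 0;
        int k = 0;
        for (int dx = -1; dx <= 1; dx++) {
            for (int dy = -1; dy <= 1; dy++) {
                if (dx == 0 && dy == 0) continue;
                neighbours[k] = new int[]{x + dx, y + dy};
                weights[k] = target.isTarget(x + dx, y + dy) ? w : (1 - w);
                total += weights[k];
                k++;
            }
        }
        double draw = rng.nextDouble() * total;   // roulette-wheel selection
        for (int i = 0; i < 8; i++) {
            draw -= weights[i];
            if (draw <= 0) return neighbours[i];
        }
        return neighbours[7];
    }

    public static void main(String[] args) {
        // With no targets in the neighbourhood all weights are equal: an unbiased step.
        int[] step = nextCell(10, 10, 0.9, (x, y) -> false, new Random(1));
        System.out.println(step[0] + "," + step[1]);
    }
}
```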
Aquatic development stage
The length of the mosquito aquatic development stage is in general a temperature-dependent process. Nevertheless, here this time is considered constant within a temperature range typical of the Amazon [52]. This development period is composed of the time needed for the eggs to hatch into larvae (48 h) plus the time needed for the development from larvae to pupae and from pupae to the adult stage (13 days). As a consequence, if a pixel of the geographical grid cover where a mosquito agent has laid eggs maintains its water cover, at the end of the 15th day it will produce 6.6 adult mosquito agents per oviposition.

Daily survival probability and death
The daily survival probability p_s of adult mosquitoes depends on many factors, including the temperature. The classical relationship linking p_s with the temperature reported by Craig [53] is described by the following equation:

$$p_{s} = e^{-\frac{1}{-4.4 + 1.3 \cdot T - 0.03 \cdot T^{2}}}$$

where T is the daily average temperature. As observed by Monteiro de Barros et al. [51], the A. darlingi daily survival probability can decrease during rainy days. For this reason the model includes a parameter called "survival probability factor during a rainy day" that sets the reduction of the survival probability during rainy days. A day is defined as a "rainy day" if the rainfall registered on that day is above a threshold specified by the model parameter "minimum rain of a rainy day".

Plasmodium module
Malaria is caused by a parasitic protozoan of the genus Plasmodium. The Plasmodium always has two hosts in its development cycle: a vector and a vertebrate host. In the case of Plasmodium species that cause malaria, the vector is a mosquito of the genus Anopheles and the vertebrate host can be a human as well as other mammal species such as some monkeys. In the study area and in the Loreto Department, two different Plasmodium species are observed [9]: P. falciparum and P. vivax. Each species of Plasmodium gives rise to a different transmission pattern and different infection dynamics [54]. The values of the parameters of the Plasmodium module are shown in Table 4.

Table 4 Plasmodium module parameters

Mosquito-related Plasmodium parameters: the sporogonic cycle
The extrinsic incubation period is the only epidemiological feature in the model that originates from the interaction between mosquitoes and Plasmodium, and it is determined by the length of the sporogonic cycle. The sporogonic cycle, or duration of the sporogony (DS), is the time required from the moment of the mosquito infection to complete the sporozoite maturation in the salivary glands of the vector. This period depends on the Plasmodium species and also on the temperature:

$$DS = \frac{DD}{T_{avg} - T_{min}}$$

where T_avg is the daily average temperature, T_min is the minimum temperature required for the development of the sporozoites and DD is the parasite-specific number of degree days needed to complete the DS. T_min and DD depend on the Plasmodium type [55].
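The two temperature relationships above translate directly into code; the sketch below is only an illustration, and the degree-day constants used in main() are the commonly quoted literature values, not necessarily the ones adopted in Table 4.

```java
// Sketch of the two temperature-driven relationships quoted in the text: the Craig daily
// survival probability for adult mosquitoes (reduced on rainy days by a calibrated factor)
// and the duration of sporogony DS = DD / (Tavg - Tmin).
public class TemperatureRules {

    // Daily survival probability, p_s = exp(-1 / (-4.4 + 1.3*T - 0.03*T^2))
    static double dailySurvival(double meanTempC, boolean rainyDay, double rainyDayFactor) {
        double ps = Math.exp(-1.0 / (-4.4 + 1.3 * meanTempC - 0.03 * meanTempC * meanTempC));
        return rainyDay ? ps * rainyDayFactor : ps;
    }

    // Duration of sporogony in days; no development below the temperature threshold.
    static double sporogonyDays(double meanTempC, double degreeDays, double minTempC) {
        if (meanTempC <= minTempC) return Double.POSITIVE_INFINITY;
        return degreeDays / (meanTempC - minTempC);
    }

    public static void main(String[] args) {
        double t = 26.7;                                   // average temperature in Padre Cocha
        System.out.printf("p_s at %.1f C: %.3f%n", t, dailySurvival(t, false, 1.0));
        // Illustrative parasite constants (P. falciparum ~111 degree-days above 16 C,
        // P. vivax ~105 degree-days above 14.5 C); the model's own values are in Table 4.
        System.out.printf("DS falciparum: %.1f days%n", sporogonyDays(t, 111, 16.0));
        System.out.printf("DS vivax: %.1f days%n", sporogonyDays(t, 105, 14.5));
    }
}
```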
Human-related Plasmodium parameters
The human-related Plasmodium parameters describe the progression of the malaria infection in human individuals. The human agent infection dynamics is more complex than in the case of the mosquito agent because human agents receive anti-malarial treatments and can recover from malaria. The development stages of the Plasmodium agent when interacting with human agents are shown in Fig. 4.

Fig. 4 Development stages of the Plasmodium agent in the mosquito agent vector and in the human agent host

When a human is infected, the Plasmodium agent starts its development process. The first development step, after the end of the intrinsic incubation period, is the transition from asymptomatic to symptomatic. In the real world this transition corresponds to the moment when the infected individual goes to the village health post to receive treatment against malaria and the new malaria case is registered. Therefore, in the model, when a human agent enters the symptomatic stage the number of simulated malaria cases is updated: when the intrinsic incubation ends, the human agent is tagged as symptomatic. After the infection, when the "gametocytaemia time" has passed, the Plasmodium agent starts the gametocytaemia. Corresponding with the gametocytaemia onset, the human agent becomes infectious and can transmit the infection to a mosquito agent. The parameter "human infectious period" sets the length of parasitaemia after the beginning of treatment.

Values of the human-related Plasmodium parameters
The World Health Organization reports that the intrinsic incubation period can be 7 days or longer [10]. The estimation for this parameter was based on previous publications, which found incubation periods between 9 and 14 days for P. falciparum and 12–17 days for P. vivax [56, 57]. The infectious period for a human being corresponds to the period during which the infecting gametocytes, the sexual-stage parasites, are present in peripheral blood and can reach a mosquito through a bite. Following the Plasmodium cycle, the appearance of gametocytes depends on the first wave of asexual parasites. The time necessary for gametocytaemia onset varies considerably among Plasmodium species. In the case of P. falciparum the release of mature gametocytes takes 8–10 days, after which the gametocytes become infectious to mosquitoes in 2–3 days [58]. For that reason, in this model a value of 12 ± 2 days is used for the parameter "gametocytaemia starting time" in the case of P. falciparum. The production of P. vivax gametocytes starts with the first generation of merozoites and therefore gametocytaemia may occur within 1–3 days after the first asexual parasites are observed in the blood [59, 60]; this is equivalent to the prepatent period plus 1–3 days, which gives the 11 ± 2 days used in this model. Considering treatment, P. vivax gametocytes are susceptible to most commonly used anti-malarial treatments, with an approximate circulation time of 1 day after treatment, and they disappear with asexual parasite clearance [59, 61]. This is reflected in the model parameter "human infectious period if treated", which is equal to 24 h for P. vivax. On the other hand, P. falciparum gametocytes have been found to circulate for up to 55 days after treatment unless primaquine is used, which reduces this time to approximately 6 days [62]. The average value used in this model for the P. falciparum parameter "human infectious period if treated" is 300 h, based on the treatment regime used in Padre Cocha during the study period, which included primaquine and sulfadoxine/pyrimethamine that eliminate parasitaemia in 7–14 days [31]. It must be considered that, due to hypnozoite reservoirs in the liver, P. vivax infections present themselves in cycles and recurrent infections are common [56]. The risk of recurrence and the median survival time to recurrence were estimated to be 0.3 and 203 days, respectively [63]. Even though the reported risk of recurrence tends to be higher in recent publications, at the time of the Padre Cocha study primaquine was being used and treatment resistance was low; therefore, the risk of recurrence was lower than in current times.
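As an illustration of the timeline just described, the sketch below encodes the human-side stages in hours. It assumes, as a simplification, that treatment begins at the symptomatic transition, and the durations used in main() are only example values; the model's own values, which differ by Plasmodium species, are listed in Table 4.

```java
// Hedged sketch of the human-side infection timeline: after the intrinsic incubation the
// agent becomes symptomatic (and the simulated case count would be incremented), from the
// gametocytaemia onset it is infectious to biting mosquitoes, and once the post-treatment
// infectious period has elapsed it recovers.
public class HumanInfectionSketch {
    enum Stage { SUSCEPTIBLE, PREPATENT, SYMPTOMATIC, RECOVERED }

    static Stage stageAt(double hoursSinceInfection,
                         double incubationHours,
                         double infectiousPeriodHours) {
        if (hoursSinceInfection < incubationHours) return Stage.PREPATENT;
        if (hoursSinceInfection < incubationHours + infectiousPeriodHours) return Stage.SYMPTOMATIC;
        return Stage.RECOVERED;
    }

    static boolean infectiousToMosquitoes(double hoursSinceInfection,
                                          double gametocytaemiaOnsetHours,
                                          double incubationHours,
                                          double infectiousPeriodHours) {
        return hoursSinceInfection >= gametocytaemiaOnsetHours
            && stageAt(hoursSinceInfection, incubationHours, infectiousPeriodHours) != Stage.RECOVERED;
    }

    public static void main(String[] args) {
        // Example with P. vivax-like values: 14-day incubation, 11-day gametocytaemia onset,
        // 24 h infectious period after treatment (illustrative values only, see Table 4).
        double h = 13 * 24;
        System.out.println(stageAt(h, 14 * 24, 24));                          // PREPATENT
        System.out.println(infectiousToMosquitoes(h, 11 * 24, 14 * 24, 24));  // true
    }
}
```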
Transmission efficiency from humans to mosquitoes
The transmission efficiency of Plasmodium spp. from humans to A. darlingi mosquitoes has been determined in only a few studies, considering different factors such as asymptomaticity, parasite density and gametocytaemia. Bharti et al. [64] found a value of around 35 % for the transmission efficiency from human patients infected by P. vivax to A. darlingi. In the study presented by Alves et al. [65], although biased by the low number of observed transmissions, symptomless patients with very low parasite densities, approaching zero parasites per microlitre, were found to still infect 1.2 % of the A. darlingi mosquito population. The same study found that 22 % of symptomatic patients were able to successfully infect A. darlingi mosquitoes. In this model a value of 0.4 is used for the transmission efficiency from symptomatic acute patients to mosquitoes for both P. falciparum and P. vivax. Given that asymptomatic patients are not as efficient transmitters and that their values of transmission efficiency are significantly lower, in this model the transmission efficiency from asymptomatic human agents to mosquitoes is set to 0.1 for both P. falciparum and P. vivax.

Human module
Humans are represented in the model as individual agents. The A. darlingi mosquito has nocturnal biting habits; therefore, the representation of daily human activities is not included in the model and the human agents are located in their houses and do not move during the simulation. During the study period the human population of Padre Cocha oscillated between 1396 and 1405 inhabitants [31]. The human agents are assigned randomly to the houses of the village. The number of human agents assigned to every household is determined by a Gaussian distribution. The minimum number of human agents in a house is set equal to 1. The positions of the houses in Padre Cocha during the study period are obtained from the study of Bautista et al. [27]. The values of the parameters of the human module are shown in Table 5.

Table 5 Human module parameters

The human agent behaviour is designed to reproduce the changes in adherence to methods of protection against mosquito bites. In what follows, any element that hinders mosquitoes from biting humans is considered a protection method against mosquito bites (bed nets, repellents, protective clothing, burning coils, fumigants). Several studies in Africa, in the Amazon and in other malaria-endemic areas [66, 67] demonstrated that behavioural aspects of human individuals are important in determining the malaria infection dynamics. Adherence to the use of bed nets in the Brazilian Amazon [68] was found to be influenced by seasonal factors, including the absence or presence of mosquitoes. These studies seem to indicate that local populations adopt protection methods against mosquito bites basically only if the perception of malaria risk or the annoyance caused by mosquito bites is high enough to compensate for the discomfort and the economic costs associated with the use of bed nets and other protection methods.
These facts are stylized in the ABM by associating with each human agent a Boolean state variable called "protection against malaria" that specifies whether the human agent is protecting itself from mosquito bites or not. When the malaria incidence in the village is high, human agents tend to protect themselves from mosquito bites by changing their protection status. When the malaria incidence is low, the human agents gradually switch to a no-protection state. A portion of human agents never changes the protection state and is always considered protected. The way human agents change their protection state is represented as a flow chart in Fig. 5.

Fig. 5 Flow chart of the human agent time step

If the malaria incidence is above the value specified by the variable "high incidence threshold", every simulation week the human agent changes the value of its "protection against malaria" state variable to true with a probability specified by the parameter "probability of adoption of protection methods". Total adherence to the protection methods against malaria is then maintained by the human agent for a period that is a fraction of the time specified by the variable "protection period duration". This initial fraction is given by the following relationship:

$$\text{period of total protection} = \text{protection period duration} \cdot \text{protection period fraction}$$

After the period of total protection, the level of protection against mosquito bites is linearly decreased and the period of partial protection starts. This period of partial protection lasts until the end of the protection period. At the beginning of the partial protection period, when Pf is equal to 0, every bite attempt fails; at the end of the protection period, when Pf is 1, every bite attempt succeeds. The gradual change of the variable Pf is introduced in the model to represent the decline in adherence to protection methods as well as the reduction over time of the effectiveness of the prevention methods themselves (creation of holes in bed nets, decrease in the insecticide content of impregnated bed nets, etc.). The last possible action included in the human agent behaviour is the following: when the simulated malaria incidence falls below the value specified by the variable "low incidence threshold", the human agent switches to a state of no protection against mosquito bites.

Immunity and asymptomaticity
The model does not explicitly consider the dynamics of acquired immunity against malaria. Only a fixed portion of human agents is considered immune, asymptomatic and constantly infectious. The study of Roper et al. in Padre Cocha during 1998 indicated that only 2 % of the asymptomatic population was positive for malaria [31]; nevertheless, only microscopy techniques were used to determine this result. There is a discrepancy between microscopy and PCR diagnosis, with poor performance of microscopy at low parasite densities [69]. In general, the prevalence of asymptomatic infections in the Amazon region, including Peru, Brazil and Venezuela, is relatively low, with values found to be less than 10 % [70] and around 5–8 % in Zungarococha and Manacamiri, located in the peri-Iquitos area (unpublished data presented in [70]). Since, as shown by Alves et al.
[71], the PCR technique is 6–7 times more efficient than microscopy for detecting plasmodial infections, the value of the fraction of asymptomatic human agents used in the model is six times higher than that found by Roper et al. All the asymptomatic human agents are considered non-transmission-blocking. As mentioned by Roper et al. [31], no superinfection was detected in Padre Cocha during the study period. For this reason no superinfection is considered in the model: when a human agent is infected by one species of Plasmodium, the agent is considered not susceptible to infection by the other Plasmodium species of the model.

Cyclic peaks of malaria incidence
The model output resulting from the calibration process is presented in this section. In Fig. 6 the curve of simulated monthly total malaria incidence is compared with the corresponding observed monthly malaria incidence. It can be noted that the model reproduces the observed pattern of temporal variation of the malaria incidence to a large extent. The sum of the P. vivax and P. falciparum incidences gives rise to a simulated curve that, like the observed one, is almost zero during the months of August and September, during the Nanay river low-level season. The same simulated curve shows pronounced peaks corresponding to the Nanay river high-level seasons, from March to June.

Fig. 6 Monthly malaria incidence as calculated by the ABM (red line: total malaria incidence, light blue: P. vivax malaria incidence, dark blue: P. falciparum malaria incidence). The simulated malaria incidence curves are compared with the observed monthly malaria incidence (black line) and the Nanay river level (blue dotted line) in Padre Cocha, Loreto, Peru, during the period from the beginning of the year 1996 to the end of the year 1998. The observed incidence curve is obtained from the study of Bautista et al. [27]

Unfortunately, it was not possible to access the disaggregated counts of the observed monthly malaria incidence for P. vivax and P. falciparum, and the monthly incidence curves cannot be validated separately for P. vivax and P. falciparum malaria. The only available disaggregated data [27] are the cumulative yearly malaria incidences separated for P. vivax and P. falciparum, which are shown in Table 6 together with the simulated yearly incidences. Also in the case of the yearly malaria incidence, the outputs of the model are close to the empirical observations.

Table 6 Observed and simulated yearly malaria incidence separated for Plasmodium vivax and Plasmodium falciparum

To show how sensitive the model is to the interactions between human individuals and the environment where they live, a modified version of the model was produced in which the level of adherence to methods of prevention against mosquito bites is kept constant. In this new model parametrization the seasonal changes in adherence to mosquito bite prevention methods are not included and the fraction of protected human agents is kept equal to 0.46. As shown in Fig. 7, after the sharp increase in malaria incidence that takes place in August and September, the curves of simulated malaria incidence calculated without taking human behaviour into account show a delay of nearly 2 months with respect to the empirical curve before the descending phase. The same delay is not shown by the curves in Fig. 6, where the human agent behaviour is included.
The simulated curves with no human behaviour included also show pronounced minima in October and November, but the absolute value of these minima is far from the value that the observed malaria curves take in the same months.

Fig. 7 Monthly malaria incidence as calculated by the ABM in the case when the human agents do not change their adherence to protection methods against mosquito bites. The simulated curve (red) is compared with the observed monthly malaria incidence (black) in Padre Cocha, Loreto, Peru, during the period from the beginning of the year 1996 to the end of the year 1998

As can be seen in Fig. 8, the spatial cluster analysis showed significant clusters of both P. vivax and P. falciparum. The relative risk of these clusters is defined as the number of cases observed during the simulation in a window centered over a household, divided by the number of expected cases in the same window. The range of variation of the relative risk is quite high, fluctuating from a maximum of 5.6 to a minimum of 0.01. For both P. vivax and P. falciparum, clusters of high relative risk are observed for the households located closer to the permanent and seasonal mosquito breeding sites. In the central part of the village, far from water bodies, there are clusters of low risk. The ABM simulation also shows that isolated houses, like those located in the northeastern and northwestern parts of the village, could be subject to higher risk of malaria. In the case of P. falciparum it is possible to observe the presence of high relative risk clusters close to the floodable areas on the southeastern side of the village that are not observed for P. vivax. The significant spatial fluctuations caused by the heterogeneous spatial distribution of households and larval habitats evidenced by the simulation are in good agreement with the results obtained from the same spatial analysis carried out on experimental data by Bautista et al. [27], whose study also identified the center and the borders of the village as areas of low and high relative risk, respectively.

Fig. 8 Clusters of relative risk of households in Padre Cocha. Red circles correspond to high relative risk; blue circles correspond to low relative risk

A closer view of the relative positions of households, malaria cases and breeding sites is shown in Fig. 9. The productive breeding sites are located close to inhabited places, predominantly along the inundated river bank close to the southeastern side of the village. A comparison between Figs. 2 and 9 shows that during the river low-level season only few mosquito agents laid eggs along the river bank, and the most productive breeding sites during this season are the permanent stretches of water scattered around the village. The number of ovipositions in these permanent pools is higher in proximity to the village. Furthermore, in agreement with the spatial analysis presented above, a great number of malaria cases among human agents are concentrated in households located close to the floodable areas in the southeastern part of the village and close to the permanent pools far from the river.

Fig. 9 Oviposition sites and malaria cases during the simulation period. The households are shown as graduated red circles whose areas are proportional to the number of simulated malaria cases.
The geographical grid cover pixels used as breeding sites by the mosquito agents are shown in graduated colors to highlight the number of ovipositions.

Larval source management
Breeding site management, or larval source management (LSM), is an approach aimed at reducing the number of adult mosquito individuals emerging from breeding sites. LSM can be implemented following various strategies, for example by manipulating the habitats to eliminate standing water. A second strategy is habitat alteration, adding chemicals to the water in order to kill the larvae or prevent their development [10]. Obviously, when the land water cover changes remarkably as a consequence of massive flooding originating from changes of a river level, as in the Amazon, adding chemicals to flooded areas would be ineffective. On the other hand, many strategies of habitat manipulation are aimed at reducing the extension of aquatic habitats. As noted in the introduction to this paper, the malaria outbreaks observed in the Iquitos region from the year 1996 were correlated with anthropogenic activities that altered the mosquito habitats, leading to a change in the mosquito species composition. Certain types of environmental modification amplify the presence of A. darlingi: areas with a low level of shading resulting from deforestation, areas with large water bodies and areas close to human populations [7, 41]. Moreover, Turell et al. [38] found that in 1999 the density of A. darlingi in Puerto Almendra, a village 18 km from Padre Cocha located in a riverine environment very similar to Padre Cocha, was very high when measured inside the village, while the same density fell almost to zero when measured 300 m from the village in an unaltered forested site. These facts may suggest that if the unaltered total cover of the original forested habitat were restored in critical areas around a village as a kind of LSM habitat manipulation, the resulting shading effect would decrease the presence of A. darlingi and the malaria transmission. Another habitat alteration strategy could be the construction of barriers to prevent the water of the river from inundating the areas close to the village. In any case the result of the LSM would be the elimination of mosquito breeding sites around the village. The calibrated version of the ABM was used to simulate the effect of the elimination of mosquito breeding sites around the village of Padre Cocha. To carry out this test, several versions of the model were built, creating a sequence of "what if" scenarios. In these test simulations a buffer area is created around every house of the simulation. The buffer area is delimited by two radii, rb and Rb. The first radius is fixed in every scenario simulation and is equal to 20 m. The second buffer radius is greater than rb and changes in every test scenario. All the geographical grid cover pixels that are more than rb and less than Rb away from a house belong to the buffer area, and the pixels belonging to the buffer area are considered not suitable as breeding sites. Six independent simulations were then carried out, varying the value of Rb from 50 to 300 m, in order to observe the effects of larval habitat elimination on the malaria incidence. Figure 10 shows a plot summarizing the results of the "what if" scenario simulations. The total annual malaria incidence, calculated as an average over the three simulation years (1996, 1997, 1998), decreases considerably with increasing Rb. For Rb values above 200 m the malaria incidence falls almost to zero, while for Rb = 150 m the incidence shows a decrease of 87 %. This sharp effect is due to the fact that the mosquito agents have to increase the length of the blood-seeking and breeding site-seeking times, because with the increase of Rb the breeding sites move progressively further from the houses. This generates an increase of the gonotrophic cycle, leading to a point where the average length of two gonotrophic cycles is greater than the average length of a mosquito agent's life and the malaria transmission is blocked.

Fig. 10 Simulated average yearly malaria incidence as a function of the larval habitat control buffer radius Rb. All the breeding sites inside the area delimited by the buffer radius are removed from the simulation. Red line: total yearly average malaria incidence, dark blue line: P. vivax yearly average malaria incidence, light blue line: P. falciparum yearly average malaria incidence
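A small Java sketch of how such a buffer mask could be built is shown below; distances are computed per pixel to the nearest house, the radii follow the scenario description above (rb = 20 m fixed, Rb varied per scenario), and the coordinates and pixel size used in main() are illustrative.

```java
// Hedged sketch of the "what if" buffer construction: a geographical grid pixel is excluded
// as a breeding site when its distance to the nearest house lies between rb (20 m, fixed)
// and Rb (varied from 50 to 300 m across scenarios).
public class BufferScenarioSketch {

    static boolean[][] breedingSiteMask(boolean[][] waterCover, double[][] housesXY,
                                        double pixelSizeMetres, double rb, double Rb) {
        int rows = waterCover.length, cols = waterCover[0].length;
        boolean[][] suitable = new boolean[rows][cols];
        for (int r = 0; r < rows; r++) {
            for (int c = 0; c < cols; c++) {
                if (!waterCover[r][c]) continue;           // only wet pixels can be breeding sites
                double x = c * pixelSizeMetres, y = r * pixelSizeMetres;
                double dMin = Double.POSITIVE_INFINITY;
                for (double[] house : housesXY) {
                    double dx = x - house[0], dy = y - house[1];
                    dMin = Math.min(dMin, Math.sqrt(dx * dx + dy * dy));
                }
                boolean insideBuffer = dMin > rb && dMin < Rb;
                suitable[r][c] = !insideBuffer;            // pixels in the buffer are not suitable
            }
        }
        return suitable;
    }

    public static void main(String[] args) {
        boolean[][] water = {{true, true, true}};
        double[][] houses = {{0.0, 0.0}};                   // one house at the origin
        boolean[][] ok = breedingSiteMask(water, houses, 100.0, 20.0, 150.0);
        System.out.println(ok[0][0] + " " + ok[0][1] + " " + ok[0][2]);  // true false true
    }
}
```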
For Rb values above 200 m the malaria incidence falls almost to zero, while for Rb = 150 m the incidence shows a decrease of 87 %. This sharp effect is due to the fact that the mosquito agents have to increase the length of the blood-seeking and breeding site-seeking times, because with the increase of Rb the breeding sites move progressively further from the houses. This lengthens the gonotrophic cycle up to a point where the average length of two gonotrophic cycles is greater than the average length of a mosquito agent's life, and the malaria transmission is blocked.
Simulated average yearly malaria incidence as a function of the larval habitat control buffer radius Rb. All the breeding sites inside the area delimited by the buffer radius are removed from the simulation. Red line: total yearly average malaria incidence; dark blue line: P. vivax yearly average malaria incidence; light blue line: P. falciparum yearly average malaria incidence
The ABM presented in this paper reproduces the observed patterns of local scale malaria transmission dynamics in a small riverine village in the Northern Peruvian Amazon. As usual in agent-based modelling, a bottom-up strategy was followed: individual scale elements (human and mosquito agents) were assembled together with a macro scale environment description to create a model where macroscopic patterns emerge as a collective behaviour. In this regard the model offers an explicit representation of the interactions between the two populations of humans and mosquitoes in the context of a changing environment. The model was calibrated against empirical data of malaria incidence, and the resulting calibrated model generated outputs that are in agreement with observed data. Obviously the model is not able to reproduce perfectly the malaria incidence time series and the observed spatial heterogeneities. Many factors not represented explicitly in the model contribute to the malaria transmission dynamics in the real world system. For example, the exact relationships among mosquito biology, ecology and ethology and environmental and climatic factors are far from being understood and studied in full detail. Consequently those interactions are represented only approximately in the model. For example, the aquatic development stage of the mosquito life cycle is not represented in full detail as it is in other individual-based models of malaria transmission [18, 23, 72]. A more detailed representation of the A. darlingi subadult development stage would imply the use of an extended set of unknown parameters describing the aquatic habitat in terms of mosquito egg, larva and pupa predation, death rates, biomasses, number of eggs per oviposition and habitat carrying capacity. In the presented model mosquito breeding sites are only characterized as areas where the mosquito agents lay eggs and from where adult mosquitoes emerge in numbers proportional to the ovipositions. Other aspects of the mosquito life cycle are not included: mosquito agents are strictly anthropophilic and the possibility of blood meals from animals is not included. Also some aspects describing the malaria transmission process, like the transmission efficiency from mosquito to human, are not included in the model because determining a value for this parameter is really problematic.
For that reason, in the model the malaria transmission efficiency from mosquitoes to symptomatic humans is considered equal to 1, although in the real world this efficiency could be significantly less than 1 [73]. Similarly, immunity acquisition and dynamics in humans are not represented in the model. With respect to the complex behaviours and features of humans as individuals and the socioeconomic features of the village under study, only a limited and approximate representation is included in the model. Patterns of individual human movements are not represented in the model, although work-related and regional human movements are identified as important driving forces in determining the malaria transmission process [74]. More specifically, Roper et al. [31] showed that some of the daily human activities in Padre Cocha after 6:00 pm and before 6:00 am are associated with an increased malaria risk. The model calibration has possibly included these increased malaria risks implicitly by tuning the overall degree of protection against mosquito bites of the human agent population. The validated version of the model was used to build several "what if" scenarios to understand the effect of larval source management. This result can be considered an example of how an ABM of local malaria transmission could be used as a tool to study malaria transmission and to build a malaria control planning system [75]. A number of scenarios were considered to calculate the malaria incidence as a function of the extension of the area around the study village where the mosquito aquatic habitats were eliminated. The simulated scenarios showed that, by eliminating mosquito larvae in a buffer area extending more than 200 m around the village, the malaria transmission is completely eliminated. A similar "what if" scenario study was carried out by Gu and Novak [17] for A. gambiae in an ABM of a hypothetical village where human habitations and breeding sites were created at random over a square 40 × 40 grid. In that work the elimination of aquatic habitats generated a slower decrease in malaria incidence with respect to what is presented here, giving a reduction of 94 % in malaria incidence when the breeding sites were eliminated within a range of 300 m from the houses. The same model gives a reduction of 86.8 % with a buffer radius of 200 m. Although the flight ranges of the mosquito agents in the model presented here are similar to those in the model of Gu and Novak, the different behaviour of the malaria reduction with the increase of the buffer radius can be explained in terms of the different gonotrophic cycle lengths used in the two models and also because in the model presented here the aquatic habitats are not uniformly distributed across the simulation landscape but are concentrated in the floodable areas on the southeastern side of the village, very close to the houses. A possible future development of this study will be the design and inclusion into the model of more policy-oriented scenarios in order to create a simulation tool that could be integrated into a malaria control scenario planning system for low incidence malaria areas. Such a simulation tool could be used to explore the effect of several local scale strategies to control or eliminate malaria transmission, like intra-household prevention, environmental management and the effects of anti-malaria treatments.
Image of the Nanay River high level in the area around Padre Cocha: date of the image: 24 of March 2013 from Google Earth, 3°41′56.48″S, 73°16′43.04″W Image of the Nanay River low level in the area around Padre Cocha: date of the image: 12 of September 2013 from Google Earth, 3°41′56.48″S, 73°16′43.04″W. ABM: agent-based model DS: duration of Sporogony LSM: Malanson GP, Zeng Y, Walsh SJ. Landscape frontiers, geography frontiers: lessons to be learned. Prof Geogr. 2006;58:383–96. Malanson GP, Zeng Y, Walsh SJ. Complexity at advancing ecotones and frontiers. Environ Plan A. 2006;38:619–32. Pan W, Carr D, Barbieri A, Bilsborrow R, Suchindran C. Forest clearing in the Ecuadorian Amazon: a study of patterns over space and time. Popul Res Policy Rev. 2007;26:635–59. Perz SG, Aramburú C, Bremner J. Population, land use and deforestation in the Pan Amazon Basin: a comparison of Brazil, Bolivia, Colombia, Ecuador, Perú and Venezuela. Environ Dev Sustain. 2005;7:23–49. Coomes OT, Grimard F, Burt GJ. Tropical forests and shifting cultivation: secondary forest fallow dynamics among traditional farmers of the Peruvian Amazon. Ecol Econ. 2000;32:109–24. Arce-Nazario JA. Human landscapes have complex trajectories: reconstructing Peruvian Amazon landscape history from 1948 to 2005. Landsc Ecol. 2007;22:89–101. Vittor A, Pan W, Gilman RH, Tielsch J, Glass G, Shields T, et al. Linking deforestation to malaria in the Amazon: characterization of the breeding habitat of the principal malaria vector, Anopheles darlingi. Am J Trop Med Hyg. 2009;81:5–12. Hiwat H, Bretas G. Ecology of Anopheles darlingi Root with respect to vector importance: a review. Parasit Vectors. 2011;4:177. Griffing SM, Gamboa D, Udhayakumar V. The history of 20th century malaria control in Peru. Malar J. 2013;12:303. World Health Organization. World malaria report, 2013. Geneva. 2013. Barbieri AF, Sawyer DO, Soares-Filho BS. Population and land use effects on malaria prevalence in the southern Brazilian Amazon. Hum Ecol. 2005;33:847–74. Smith DL, McKenzie FE. Statics and dynamics of malaria infection in Anopheles mosquitoes. Malar J. 2004;3:13. Smith DL, Battle KE, Hay SI, Barker CM, Scott TW, McKenzie FE. Ross, macdonald, and a theory for the dynamics and control of mosquito-transmitted pathogens. PLoS Pathog. 2012;8:e1002588. Mandal S, Sarkar RR, Sinha S. Mathematical models of malaria-a review. Malar J. 2011;10:202. Gao D, Ruan S. A multipatch malaria model with logistic growth populations. SIAM J Appl Math. 2012;72:819–41. Brown DG, Riolo R, Robinson DT, North M, Rand W. Spatial process and data models: toward integration of agent-based models and GIS. J Geogr Syst. 2005;7:25–47. Gu W, Novak RJ. Agent-based modelling of mosquito foraging behaviour for malaria control. Trans R Soc Trop Med Hyg. 2009;103:1–14. Bomblies A, Duchemin J-B, Eltahir EAB. Hydrology of malaria: model development and application to a Sahelian village. Water Resour Res. 2008;44:W12445. Bomblies A. Agent-based modeling of malaria vectors: the importance of spatial simulation. Parasit Vectors. 2014;7:1–10. Linard C, Ponçon N, Fontenille D, Lambin EF. A multi-agent simulation to assess the risk of malaria re-emergence in southern France. Ecol Modell. 2009;220:160–74. Niaz Arifin SM, Madey GR, Collins FH. Examining the impact of larval source management and insecticide-treated nets using a spatial agent-based model of Anopheles gambiae and a landscape generator tool. Malar J. 2013;12:290. Yamana TK, Bomblies A, Laminou IM, Duchemin J-B, Eltahir EAB. 
Linking environmental variability to village-scale malaria transmission using a simple immunity model. Parasit Vectors. 2013;6:226. Depinay J-MO, Mbogo CM, Killeen G, Knols B, Beier J, Carlson J, et al. A simulation model of African Anopheles ecology and population dynamics for the analysis of malaria transmission. Malar J. 2004;3:29. Crooks A, Castle C, Batty M. Key challenges in agent-based modelling for geo-spatial simulation. Comput Environ Urban Syst. 2008;32:417–30. Ligtenberg A, van Lammeren RJA, Bregt AK, Beulens AJM. Validation of an agent-based model for spatial planning: a role-playing approach. Comput Environ Urban Syst. 2010;34:424–34. Pizzitutti F, Mena C, Walsh S. Modelling tourism in the Galapagos islands: an agent-based model approach. J Artif Soc Soc Simul. 2014; 17. Bautista CT, Chan AST, Ryan JR, Calampa C, Roper MH, Hightower AW, et al. Epidemiology and spatial analysis of malaria in the Northern Peruvian Amazon. Am J Trop Med Hyg. 2006;75:1216–22. Kulldorff M. A spatial scan statistic. Commun Stat Theory Methods. 1997;26:1481–96. Kulldorff M. SaTScan TM. 2014. Luke S, Cioffi-Revilla C, Panait L, Sullivan K, Balan G. MASON: a multiagent simulation environment. Simulation. 2005;81:517–27. Roper MH, Carrion Torres RS, Cava Goicochea CG, Andersen EM, Aramburú Guarda JS, Calampa C, et al. The epidemiology of malaria in an epidemic area of the Peruvian Amazon. Am J Trop Med Hyg. 2000;62:247–56. Deane L, Causey O, Deane M. Notas sobre a distribuição e a biologia dos anofelinos das regiões nordestina e amazônica do Brasil. Rev Fund SESP. 1948;1:827–35. de Barros F, Honório N. Man biting rate seasonal variation of malaria vectors in Roraima, Brazil. Mem Inst Oswaldo Cruz. 2007;102:299–302. Charlwood J, Alecrim W. Capture-recapture studies with the South American malaria vector Anopheles darlingi, Root. Ann Trop Med Parasitol. 1989;83:569–76. Achee N, Grieco J, Andre R, Rejmankova E, Roberts D. A mark-release-recapture study using a novel portable hut design to define the flight behavior of Anopheles darlingi in Belize, Central America. J Am Mosq Control Assoc. 2005;21:366–79. Hole-filled SRTM for the globe Version 4. Villarreal-treviño C, Vásquez GM, López-sifuentes VM, Escobedo-vargas K, Huayanay-repetto A, Linton Y, et al. Establishment of a free-mating, long-standing and highly productive laboratory colony of Anopheles darlingi from the Peruvian Amazon. Malar J. 2015;14:227. Turell MJ, Sardelis MR, Jones JW, Watts DM, Fernandez R, Carbajal F, et al. Seasonal distribution, biology, and human attraction patterns of mosquitoes (Diptera: Culicidae) in a rural village and adjacent forested site near Iquitos, Peru. J Med Entomol. 2008;45:1165–72. León WC, Valle JT, Naupay RO, Tineo E V, Rosas AA, Palomino MS: Comportamiento estacional del Anopheles (Nyssorhynchus) darlingi Root 1926 en localidades de Loreto Y Madre de Dios, Peru 1999–2000. 2003; 20:22–27. Reinbold-Wasson DD, Sardelis MR, Jones JW, Watts DM, Fernandez R, Carbajal F, et al. Determinants of Anopheles seasonal distribution patterns across a forest to periurban gradient near Iquitos, Peru. Am J Trop Med Hyg. 2012;86:459–63. Vittor AY, Gilman RH, Tielsch J, Glass G, Shields T, Lozano WS, et al. The effect of deforestation on the human-biting rate of Anopheles darlingi, the primary vector of Falciparum malaria in the Peruvian Amazon. Am J Trop Med Hyg. 2006;74:3–11. Yuval B. Mating systems of blood-feeding flies. Annu Rev Entomol. 2006;51:413–40. Lounibos LP, Lima DC, Lourenço-de-Oliveira R. 
Prompt mating of released Anopheles darlingi in western Amazonian Brazil. J Am Mosq Control Assoc. 1998;14:210–3. Gibson G, Torr SJ. Visual and olfactory responses of haematophagous Diptera to host stimuli. Med Vet Entomol. 1999;13:2–23. Alonso WJ, Schuck-Paim C. The "ghosts" that pester studies on learning in mosquitoes: guidelines to chase them off. Med Vet Entomol. 2006;20:157–65. Thomas CJ, Cross DE, Bøgh C. Landscape movements of Anopheles gambiae malaria vector mosquitoes in rural Gambia. PLoS One. 2013;8:e68679. Service M. Effects of wind on the behaviour and distribution of mosquitoes and blackflies. Int J Biometeorol. 1980;24:347–53. Achee NL, Grieco JP, Rejmankova E, Andre RG, Vanzie E, Polanco J, et al. Biting patterns and seasonal densities of Anopheles mosquitoes in the Cayo District, Belize, Central America with emphasis on Anopheles darlingi Biting patterns and seasonal densities of Anopheles mosquitoes in the Cayo District, Belize, Central Ameri. J Vector Ecol. 2006;31:45–57. Rozendaal J. Biting and resting behavior of Anopheles darlingi in the Suriname rainforest. J Am Mosq Control Assoc. 1989;5:351–8. Dantas C, Tadei WP, Abdalla FC, Filemon P, De Oliveira CD, Pimenta P, et al. Multiple blood meals in Anopheles darlingi (Diptera: Culicidae). J Vector Ecol. 2012;37:351–8. de Barros FSM, Honório NA, Arruda ME. Survivorship of Anopheles darlingi (Diptera: Culicidae) in relation with malaria incidence in the Brazilian Amazon. PLoS One. 2011;6:e22388. Bergo ES, Buralli GM, Santos JLF, Gurgel SM. Avaliação Do Desenvolvimento Larval De Anopheles Darlingi Criado Em Laboratório Sob Diferentes Dietas. Rev Saude Publica. 1990;24:95–100. Craig MH, Snow RW, le Sueur D. A climate-based distribution model of malaria transmission in sub-Saharan Africa. Parasitol Today. 1999;15:105–11. Ekpenyong E, Eyo J. Plasmodium infection in man: a review. Anim Res Int. 2006;3:573–80. Detinova TS. Age grouping methods in Diptera of medical importance. Geneva. 1962. Price R, Tjitra E, Guerra C, Yeung S, White NJ, Anstey NN. Vivax malaria: neglected and not benign. Am J Trop Med Hyg. 2007;77:79–87. Warrell DA, Gilles HM. Essential malariology. CRC Press. 2002. Hayward RE, Tiwari B, Piper KP, Baruch DI, Day KP. Virulence and transmission success of the malarial parasite Plasmodium falciparum. Proc Natl Acad Sci USA. 1999;96:4563–8. Bousema T, Drakeley C. Epidemiology and infectivity of Plasmodium falciparum and Plasmodium vivax gametocytes in relation to malaria control and elimination. Clin Microbiol Rev. 2011;24:377–410. McKenzie FE, Jeffery GM, Collins WE. Gametocytemia and fever in human malaria infections. J Parasitol. 2007;93:627–33. Douglas NM, Simpson JA, Phyo AP, Siswantoro H, Hasugian AR, Kenangalem E, et al. Gametocyte dynamics and the role of drugs in reducing the transmission potential of plasmodium vivax. J Infect Dis. 2013;208:801–12. Bousema T, Okell L, Shekalaghe S, Griffin JT, Omar S, Sawa P, et al. Revisiting the circulation time of Plasmodium falciparum gametocytes: molecular detection methods to estimate the duration of gametocyte carriage and the effect of gametocytocidal drugs. Malar J. 2010;9:136. Van den Eede P, Soto-Calle VE, Delgado C, Gamboa D, Grande T, Rodriguez H, et al. Plasmodium vivax sub-patent infections after radical treatment are common in Peruvian patients: results of a 1-year prospective cohort study. PLoS One. 2011;6:e16257. Bharti A, Chuquiyauri R, Brouwer K. 
Experimental infection of the neotropical malaria vector Anopheles darlingi by human patient-derived Plasmodium vivax in the Peruvian Amazon. Am J Trop Med Hyg. 2006;75:610–6. Alves F, Gil L, Marrelli M. Asymptomatic carriers of Plasmodium spp. as infection source for malaria vector mosquitoes in the Brazilian Amazon. J Med Entomol. 2005;42:777–9. Heggenhougen HK, Hackethal V, Vivek P. The behavioural and social aspects of malaria and its control. Geneva. 2003. Kroeger A, Mancheno M, Alarcon J, Pesse K. Insecticide-impregnated bed nets for malaria control : concerning acceptability and effectiveness. Am Soc Trop Med Hyg. 1995; 53. Santos JB. Baixa aderência e alto custo como fatores de insucesso do uso de mosquiteiros impregnados com inseticida no controle da malária na Amazônia Brasileira Low adherence and high cost as failure factors of impregnated bed nets with insecticide for malaria cont. Rev Soc Bras Med Trop. 1999;32:333–41. Coleman R, Sattabongkot J, Promstaporm S, Maneechai N, Tippayachai B, Kengluecha A, et al. Comparison of PCR and microscopy for the detection of asymptomatic malaria in a Plasmodium falciparum/vivax endemic area in Thailand. Malar J. 2006;5:121. Roshanravan B, Kari E, Gilman R, Cabrera L, Lee E, Metcalfe J, et al. Endemic malaria in the Peruvian Amazon region of Iquitos. Am J Trop Med Hyg. 2003;69:45–52. Alves F, Durlacher R, Menezes M. High prevalence of asymptomatic Plasmodium vivax and Plasmodium falciparum infections in native Amazonian populations. Am J Trop Med Hyg. 2002;66:641–8. Niaz Arifin SM, Arifin R, Pitts D, Rahman M, Nowreen S, Madey G, et al. Landscape epidemiology modeling using an agent-based model and a geographic information system. Land. 2015;4:378–412. Burkot T, Graves P, Cattan J. The efficiency of sporozoite transmission in the human malarias, Plasmodium falciparum and P. vivax. Bull World Health Organ. 1987;65:375–80. Cohen JM, Smith DL, Cotter C, Ward A, Yamey G, Sabot OJ, et al. Malaria resurgence: a systematic review and assessment of its causes. Malar J. 2012;11:122. WHO Malaria Policy Advisory Committee and Secretariat. Malaria Policy Advisory Committee to the WHO: conclusions and recommendations of March 2013 meeting. Malar J. 2013;12:213. FP developed and implemented the model, performed the literature review, designed the experiments, conducted the simulations, data analysis, and drafted the manuscript. JAV validated the plasmodium module, JM, BF and GRG interpreted the results. CFM, WP and AB supervised the study. All authors read and approved the final manuscript. This study is an outcome of the LUCIA project funded by the IAI—The Interamerican Institute for Global Change Research, CRNIII 3036. Special thanks to Patricia Martinez for geographical information analysis. Universidad San Francisco de Quito, Diego de Robles, s/n, Cumbayá, Ecuador Francesco Pizzitutti , Javiera Alarcon-Valenzuela & Carlos F. Mena Duke University, 310 Trent Drive, Room 227, Box 90519, Durham, NC, 27708, USA William Pan Instituto de Geociências-IGC Belo Horizonte, Universidade Federal de Minas Gerais, Belo Horozonte, Brazil Alisson Barbieri Oswaldo Cruz Foundation (FIOCRUZ), Universidad Peruana Cayetano Heredia, Lima, Peru J Jaime Miranda Department of Environmental Health Sciences, School of Public Health, University at Albany, State University of New York, 1 University Place GEC, 145 Rensselaer, New York, NY, 12144, USA Beth Feingold College of Economics Departamento de Demografia/FACE/UFMG, Office 3093, Av. 
Antônio Carlos, 6627-Pampulha, Belo Horizonte, Minas Gerais, 31270-901, Brazil Gilvan R. Guedes
Correspondence to Francesco Pizzitutti.
Pizzitutti, F., Pan, W., Barbieri, A. et al. A validated agent-based model to study the spatial and temporal heterogeneities of malaria incidence in the rainforest environment. Malar J 14, 514 (2015) doi:10.1186/s12936-015-1030-7
Low endemicity, Anopheles darlingi, Plasmodium vivax, Plasmodium falciparum
Efficient payload communications for IoT-enabled ViSAR vehicles using discrete cosine transform-based quasi-sparse bit injection
Mohammad R. Khosravi1 & Sadegh Samadi1
High-performance remote sensing payload communication is a vital problem in air-borne and space-borne surveillance systems. Among different remote sensing imaging systems, video synthetic aperture radar (ViSAR) is a new technology that produces a large amount of principal and managerial data, which should be compressed, aggregated, and communicated from a radar platform (or a network of radars) to a ground station through wireless links. In this paper, a new data aggregation technique is proposed towards efficient payload transmission in a network of aerial ViSAR vehicles. Our proposed method is a combination of a recent interpolation-based data hiding (IBDH) technique and a visual data transformation process using the discrete cosine transform (DCT), which is able to outperform the reference method in terms of data aggregation ability.
Video synthetic aperture radar (ViSAR) is a new imaging mode of SAR used to generate video sequences [1, 2]. ViSAR has recently been used for aerial remote sensing imaging with air-borne radar platforms. Unlike conventional SAR sensors that capture still images, the communication data rate needed for ViSAR sensors is much higher, so currently implemented systems mostly do not send their acquired data through wireless communication links. In fact, they have to store the data in memory and, after landing, the data is transferred physically to remote sensing surveillance centers to be analyzed. This shortfall has two causes. First, the frame formation process (like SAR image formation) is a relatively complicated and time-consuming procedure; thus, when the imaging system in ViSAR mode has to generate many frames, for example 16–24 frames per second, this becomes a big challenge. Researchers working on ViSAR imaging techniques therefore focus substantially on reducing computational complexity while improving the frame acquisition quality. In addition, using powerful computers, high-performance hardware implementation and the benefits of parallel programming can speed up the formation process. The second issue is the large data size of the video frames (including processed frames from raw data and other related data for control and managerial information), which should be compressed and aggregated to be transferable through a low-bandwidth wireless link. Otherwise, we would have to use ViSAR technology only for non-real-time applications, whereas the main idea behind ViSAR is to apply it for real-time monitoring and surveillance in remote sensing, smart cities, and civil applications at all times and in all weather (for instance, natural hazards and traffic control even in a dark environment without any light source). Here, we do not work on efficient image/frame formation, because processing the raw radar sensing data is a problem for signal processing experts. Instead, we try to aggregate relevant managerial and control data and embed this data into the video frames, considering specific features of SAR videos. This can reduce the data size significantly and is indeed a process towards data compression.
Therefore, in order for ViSAR data to be communicated between two aerial radar platforms, or between an air-borne imaging radar and a ground control station (which can be regarded as a ViSAR sensor network), we should use such compression or aggregation techniques to reduce the remote sensing data size. In detail, remote sensing data always includes some payload information about geographic systems, control data, and so on, in addition to the main images and videos. Because of the low bandwidth of radar communication systems, there is no solution except to apply lossy/lossless data aggregation techniques to integrate the payloads and radar data (raw data or processed videos). On the other hand, sending compressed raw data is difficult and not sufficiently effective for real-time systems, so our preference is to convert raw data into formed video frames and then to compress and transmit the frames along with some payloads aggregated in them. As a consequence, the main aim of this research is ViSAR payload communication through data hiding-based aggregation. For integrating general bit-stream data with video frames, a recently proposed watermarking scheme is selected as the main reference method to embed the bit stream into ViSAR frames. Although the selected reference method is really powerful for quasi-sparse image data like ViSAR frames, we wish to improve its embedding capacity while preserving the final imaging quality as much as possible. The reference method can be followed in [3] and is based on interpolation-based data hiding (IBDH) towards watermarking, using an interpolator and error histogram computation [4, 5]. The core interpolator in our research is similar to [3], but other interpolators can also be used. For example, in [6], the authors have provided a novel efficient optimization algorithm for tree-based classification that can be adopted as a fast interpolator; this method is the first efficient algorithm for optimizing classification trees which can help our problem. See more about spatial interpolation of ViSAR frames and its pre-processing in [7, 8]. In addition, many similar works about interpolation-based data embedding and histogram processing exist [9,10,11,12,13,14,15,16,17]; readers can follow them. Also, more information about interpolators can be found in [4, 18,19,20]. Our focus in this research is on histogram transformation using a decomposition transform; however, an additional histogram processing step like [5] is utilized. We combine the reference method with the DCT transform to change the error histogram in order to find more suitable places in the frames to add hidden bits. The proposed technique can be used for lossless payload aggregation in IoT-enabled ViSAR sensor networks (Fig. 1), since radar networks are nowadays a hot topic of research [21,22,23]. In addition, our findings may be useful in other visual data and sensory systems [24,25,26]. This paper is organized as follows. Section 2 presents the proposed approach, Section 3 contains all simulation results, and Section 4 is the conclusions.
A typical ViSAR sensor network with air-borne radar platforms towards an Internet of ViSAR vehicles
In order to extend the reference IBDH algorithm [3], we use DCT as a decomposition transform to change the error image histogram compared to the basic algorithm. In fact, we want to create a quasi-sparse frame [27] with fewer zero pixels (fully black pixels) and many more non-zero pixels whose gray levels are very near to zero.
One of the most popular ways to modify interpolation-based data hiding techniques is to use a better interpolator or to apply histogram modification through histogram shifting and histogram adjustment. As the IBDH method in [3] is one of the most recent IBDH techniques and uses a novel interpolator alongside a histogram modification process, we wish to combine this method with another process based on the discrete cosine transform (DCT) to improve its aggregation performance. In this regard, we use DCT with different patch sizes to make a combinational approach entitled interpolation-based data hiding using discrete cosine transform (IBDH-DCT). Our experiments show that medium-sized patches are more effective. If a transform is able to create a quasi-sparse image with fewer zero pixels, it is probably able to improve IBDH in ViSAR frames. As we know, the mentioned transform is generally invertible, but in using it to make transformed frames we have to scale and quantize the coefficients matrix, so after re-scaling a loss may be seen because of the quantization. However, this loss does not affect the watermark/embedded data, but the final data hiding approach might be non-reversible. In the next sub-sections, basic concepts around DCT are reviewed first, and then the proposed method is presented.
2D DCT for frame transformation
DCT is one of the most important decomposition transforms for signal and image processing. For example, JPEG compression works based on a core DCT. This transform uses cosine basis functions, which can be orthonormal. An important property of DCT is its real coefficients, compared to the discrete Fourier transform (DFT) or fast Fourier transform (FFT). Another property of DCT is its lower computational complexity, which makes it appropriate for real-time multimedia coding. Furthermore, with respect to energy compaction for high-performance image coding (i.e., maximum information at the lowest file size), DCT is a powerful transform like the Karhunen-Loeve transform (KLT), but with a lower complexity. Equation (1) shows the 2D DCT for two-dimensional data like gray-scale frames, and Eq. (2) denotes the inverse DCT (IDCT). The DCT coefficients X(k, l) are real, and the converted version of an image/patch of size N-by-N is again N-by-N (below, x(m, n) denotes the image pixels and the size of the source image is N × N, i.e., 0 ≤ m, n ≤ N − 1). The basis functions are shown in Fig. 2 for N = 8, where N is the patch size; for an N-by-N patch there are N² basis functions.
Sixty-four virtually colored DCT basis functions for 8 × 8 patch size, where M = N = 8

$$ X(k,l)=\alpha(k)\,\alpha(l)\sum_{m=0}^{N-1}\sum_{n=0}^{N-1}x(m,n)\,\cos\!\left(\frac{k}{N}\left(m+\frac{1}{2}\right)\pi\right)\cos\!\left(\frac{l}{N}\left(n+\frac{1}{2}\right)\pi\right),\qquad \alpha(s)=\begin{cases}\sqrt{\dfrac{1}{N}} & s=0\\[4pt]\sqrt{\dfrac{2}{N}} & \text{otherwise}\end{cases} \tag{1} $$

$$ x(m,n)=\sum_{k=0}^{N-1}\sum_{l=0}^{N-1}\alpha(k)\,\alpha(l)\,X(k,l)\,\cos\!\left(\frac{k}{N}\left(m+\frac{1}{2}\right)\pi\right)\cos\!\left(\frac{l}{N}\left(n+\frac{1}{2}\right)\pi\right) \tag{2} $$
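As a concrete check of Eqs. (1) and (2), the sketch below is a minimal reimplementation for illustration only (the paper's experiments were run in Matlab; the 8 × 8 test patch is arbitrary). It builds the transform directly from the basis functions and verifies that the inverse recovers the patch, which illustrates the invertibility discussed above.

```python
import numpy as np

def alpha(s, N):
    return np.sqrt(1.0 / N) if s == 0 else np.sqrt(2.0 / N)

def dct2(x):
    """Forward 2D DCT of an N x N patch, as in Eq. (1)."""
    N = x.shape[0]
    m = np.arange(N)
    X = np.zeros((N, N))
    for k in range(N):
        for l in range(N):
            basis = np.outer(np.cos(k * (m + 0.5) * np.pi / N),
                             np.cos(l * (m + 0.5) * np.pi / N))
            X[k, l] = alpha(k, N) * alpha(l, N) * np.sum(basis * x)
    return X

def idct2(X):
    """Inverse 2D DCT, as in Eq. (2)."""
    N = X.shape[0]
    x = np.zeros((N, N))
    for m in range(N):
        for n in range(N):
            total = 0.0
            for k in range(N):
                for l in range(N):
                    total += (alpha(k, N) * alpha(l, N) * X[k, l]
                              * np.cos(k * (m + 0.5) * np.pi / N)
                              * np.cos(l * (n + 0.5) * np.pi / N))
            x[m, n] = total
    return x

patch = np.random.randint(0, 256, (8, 8)).astype(float)   # arbitrary 8 x 8 test patch
coeffs = dct2(patch)
print("max |patch - idct2(dct2(patch))| =", np.abs(idct2(coeffs) - patch).max())
```

For larger frames, library routines such as SciPy's DCT-II (applied along each axis with norm='ortho') compute the same transform far more efficiently than this direct double-sum form.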
DCT is also computable in matrix form, where x is the image matrix, C is the DCT matrix and the transformed image matrix is X, as in Eq. (3):

$$ \underline{X}=\underline{C}\,\underline{x}\,\underline{C}^{t},\qquad C(i,j)=\begin{cases}\dfrac{1}{\sqrt{N}} & i=0\\[4pt]\sqrt{\dfrac{2}{N}}\cos\!\left(\dfrac{i}{N}\left(j+\dfrac{1}{2}\right)\pi\right) & i>0\end{cases}\qquad\text{and similarly}\quad \underline{x}=\underline{C}^{t}\,\underline{X}\,\underline{C} \tag{3} $$

The basis functions of the DCT are generally defined as in Eq. (4):

$$ F(k,l,m,n,M,N)=\alpha(k)\,\alpha(l)\,\cos\!\left(\frac{k}{N}\left(m+\frac{1}{2}\right)\pi\right)\cos\!\left(\frac{l}{M}\left(n+\frac{1}{2}\right)\pi\right);\qquad\begin{array}{c}0\le k,m\le N-1\\ 0\le l,n\le M-1\end{array} \tag{4} $$

Figure 3 shows virtually texturized results for different patch sizes in a sample ViSAR frame. It is obvious that each patch size differs in its ability to create a quasi-sparse representation.
DCT decomposed frames with different patch sizes. This figure shows virtually texturized results for 2-by-2 to 256-by-256 patch sizes in a sample frame (256 × 256)
Quasi-sparse bit injection using IBDH and DCT
The reference method of IBDH has been discussed in [3]. This method is applied to ordinary ViSAR frames, and the only histogram processing is performed using modification or shifting techniques like [5], which help IBDH only slightly in finding more suitable places for injecting payload bits. Our experiments show that a transform that fundamentally changes the histogram of the ViSAR frames towards a quasi-sparse condition is more effective than the usual histogram processing techniques, which do not make the frames quasi-sparse. However, we can use both histogram modification and histogram transformation concurrently. To do so, we use a basic theory like IBDH in [3], a histogram modification technique as per [5], and a DCT-based decomposition process towards histogram transformation. Our proposed method is given in Algorithms 1 and 2 for the sender side and receiver side, respectively. All DCT patches are treated as a single image because the size of the original frame and its transformed version (towards quasi-sparsity) should be the same. Therefore, a plotted histogram corresponds to a transformed image, not a specific patch.
Algorithm 1: The embedding process in IBDH-DCT at the sender side.
Input: An original host frame and hidden data.
1) Compute the DCT coefficients of the original host frame.
2) Scale the DCT coefficients matrix into the interval [0,255].
3) Quantize the scaled DCT coefficients matrix as a digital image and consider it as a new host frame with a quasi-sparse spatial distribution.
4) Down-sample the quasi-sparse host frame (standard down-sampling is used).
5) Calculate a reconstructed version (up-scaled interpolated frame) of the quasi-sparse host frame using the interpolation technique.
6) Calculate an error image by subtracting the interpolated version from the original quasi-sparse host frame, considering histogram modification.
7) Calculate the four key parameters of the reference IBDH technique based on the histogram of the error image.
8) Inject the bits of hidden data into the quasi-sparse host frame according to the key parameters of the prior step and create a watermarked frame.
9) Transfer the watermarked frame to the receiver along with all key parameters computed at the sender side.
Output: The watermarked frame and key parameters related to the error image.
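Steps 1–3 of Algorithm 1 (and the corresponding steps 5–6 of Algorithm 2) are the part that differs from the reference IBDH method, so a minimal sketch of that frame-transformation stage is given below. It is an illustration only, not the authors' implementation: the block size, the use of SciPy's orthonormal DCT-II and the global min–max scaling rule are assumptions, and the down-sampling, interpolation and bit-injection steps would follow the reference IBDH procedure [3].

```python
import numpy as np
from scipy.fftpack import dct, idct

def blockwise(frame, patch, func):
    """Apply func to every non-overlapping patch x patch block of the frame."""
    out = np.empty(frame.shape, dtype=float)
    for i in range(0, frame.shape[0], patch):
        for j in range(0, frame.shape[1], patch):
            out[i:i + patch, j:j + patch] = func(frame[i:i + patch, j:j + patch])
    return out

def dct2(block):
    return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def idct2(block):
    return idct(idct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def to_quasi_sparse(frame, patch=32):
    """Algorithm 1, steps 1-3: block DCT, scale to [0, 255], quantize."""
    coeffs = blockwise(frame.astype(float), patch, dct2)
    lo, hi = coeffs.min(), coeffs.max()
    host = np.round((coeffs - lo) / (hi - lo) * 255.0).astype(np.uint8)
    return host, (lo, hi)          # host is the quasi-sparse host frame

def from_quasi_sparse(host, scale, patch=32):
    """Algorithm 2, steps 5-6: rescale and apply the inverse DCT."""
    lo, hi = scale
    coeffs = host.astype(float) / 255.0 * (hi - lo) + lo
    return blockwise(coeffs, patch, idct2)

frame = np.random.randint(0, 256, (256, 256)).astype(np.uint8)  # stand-in for a ViSAR frame
host, scale = to_quasi_sparse(frame, patch=32)
restored = from_quasi_sparse(host, scale, patch=32)
# The quantization in step 3 makes the round trip near-lossless rather than exact;
# for low-energy ViSAR frames the residual error stays small.
print("max round-trip error (gray levels):", np.abs(restored - frame.astype(float)).max())
```

Note that in this sketch the scaling pair (lo, hi) also has to be shared with the receiver; this is an assumption of the illustration rather than part of the original algorithm description.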
Algorithm 2: The extraction process in IBDH-DCT at the receiver side.
Input: The received watermarked frame and the key parameters of Algorithm 1.
1) Extract the hidden bits and the error image through the inverse function of IBDH theory (see the main source for IBDH details).
2) Down-sample the watermarked frame (standard down-sampling is used, so the result is exactly equal to the down-sampled version of the original frame in Algorithm 1).
3) Reconstruct the down-sampled frame of the prior step with the interpolator to generate the interpolated frame.
4) Restore the quasi-sparse host frame by adding the error image and the interpolated frame.
5) Rescale the quasi-sparse host frame to generate approximate DCT coefficients.
6) Compute an approximate version of the original host frame from the rescaled coefficients through the inverse DCT.
Output: The original host frame and the injected bits.
Algorithm 1 includes all steps of the data embedding process at the sender, and Algorithm 2 contains the steps of the reverse process at the receiver side, which is named extraction. The proposed method is, however, not fully reversible in terms of host image reversibility, because the frame transformation process is lossy; nevertheless, this transformation process is near-lossless, with a loss that can be ignored. Since we use a real decomposition transform, the process remains near-lossless (for example, in the case of FFT with a complex basis, a huge loss would occur). Therefore, the whole process can be near-lossless. On the other hand, because there is full reversibility for the hidden data, we can compute quality metrics on the transformed samples. As the dataset, ViSAR frames of size 256 × 256 are selected; see a sample input frame in Fig. 6 (part: main frame). For simulating the proposed algorithm, we used Matlab R2013a on a device with a 2.53 GHz CPU (Intel CFI i3 350M Core i5) and 4.00 GB RAM. It is clear that the DCT-decomposed version of an image under a 1-by-1 patch is equal to the image itself; thus, this specific patch means "no decomposition exists." The ViSAR frames are very low energy, with a histogram density near zero. Therefore, these frames behave differently from ordinary images. They may follow a Markov random field (MRF) neighborhood system, with some textural features. We compare the proposed method with the reference method in [3]; all results are given in Table 1 and Fig. 4 for aggregation and quality performance, and in Table 2 and Fig. 5 for execution times (towards complexity). The quality assessment metrics are PSNR, SSIM, and EPI [2, 3, 6, 7], where the first two are for similarity evaluation and the third shows the ability of each method in preserving edges. Capacity is the main factor we aim to increase. All running times are presented in Table 2 to give more insight into the computational complexity of both methods. Equations (5), (6), and (7) describe our quality metrics, and Eq. (8) gives the average capacity index (ACI), which is directly computed based on the aggregation performance (embedding capacity). In these equations, x and y are the host frame and watermarked frame. ACI is a metric for video communications. A similar metric in such a situation is bits per pixel (BPP), which is usually computed for still images, not videos.
BPP, of course, can be computed for a single frame, but its result is not as reliable as the ACI values, so we would have to introduce an average over BPPs for video sequences. ACI is explicitly computed for video data as per Eq. (8). ACI is more reliable than BPP because it has an intrinsic scaling factor through two parameters (α and β) to be set. In addition, the relationship between BPP and ACI is somewhat similar to the case of MSE versus PSNR, where we know that MSE cannot provide anything new compared to PSNR (because each is an interpretation of the other). Currently, we have set α = √5/2 and β = 1000.

$$ \mathrm{PSNR}=20\log\frac{255}{\sqrt{\dfrac{1}{256^{2}}\sum_{i=1}^{256}\sum_{j=1}^{256}\left(x_{ij}-y_{ij}\right)^{2}}} \tag{5} $$

$$ \mathrm{SSIM}=\frac{2u_{x}u_{y}}{u_{x}^{2}+u_{y}^{2}}\times\frac{2\sigma_{x}\sigma_{y}}{\sigma_{x}^{2}+\sigma_{y}^{2}}\times\frac{\sigma_{xy}}{\sigma_{x}\sigma_{y}} \tag{6} $$

$$ \mathrm{EPI}=\frac{\sum_{i}\sum_{j}\left|y_{i-1,j-1}-y_{i+1,j+1}\right|}{\sum_{i}\sum_{j}\left|x_{i-1,j-1}-x_{i+1,j+1}\right|} \tag{7} $$

$$ \mathrm{ACI}=\alpha^{\left(\dfrac{\sum_{\text{all frames}}\mathrm{Capacity\ (bit)}\,/\,\mathrm{Number\ of\ frames}}{\beta}\right)} \tag{8} $$

Table 1 Quality and aggregation performance of the reference method [3] and the proposed IBDH-DCT approaches (best results are shown in italics)
An average of different measures for the reference method (IBDH) and the proposed IBDH-DCT. Only the best patches (32-by-32, 64-by-64, and 128-by-128) are shown (in terms of both quality metrics and complexity)
Table 2 Complexity analysis through execution times (best results are shown in italics)
Complexity for all different patches. 1-by-1 is the reference method (IBDH), and all the other approaches are related to the proposed IBDH-DCT
The simulation results clearly show that the DCT-based approach can be effective for the sample frames compared to the reference method. It is noticeable that all combinational forms based on DCT decomposition are more complex than the reference method, because two image transformation steps (direct + inverse) must be performed and, in addition, more time is needed to find suitable places for injecting bits because their histograms are complicated. However, this additional execution time of the proposed method is the cost of better aggregation performance. Another cost is a small loss for the host frame only in the combinational approaches, which would be acceptable, and could be optimized, in most real-world applications. Table 1 shows that some smaller patches cannot outperform the reference method; however, the 32-by-32, 64-by-64, 128-by-128, and 256-by-256 patches record the best performance in terms of the similarity measures, the edge handling indicator, and the aggregation capacity (italicized values). According to Table 2, among the winning proposed approaches, the 32-by-32, 64-by-64, and 128-by-128 patches record the minimum execution time. Figures 6, 7, and 8 illustrate decomposed frames from a sample frame and their corresponding histograms. Figure 6 includes small-sized patches (2-by-2 and 4-by-4), Fig. 7 is for medium-sized patches (8-by-8, 16-by-16 and 32-by-32), and Fig. 8 is for large-sized patches (64-by-64, 128-by-128 and 256-by-256).
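As a worked illustration of Eqs. (5)–(8), the following sketch computes the four measures for a host/watermarked frame pair. It is indicative only: the SSIM form used here is the simplified global expression of Eq. (6) (without the stabilising constants of the standard SSIM definition), and the frame data and capacity values are synthetic.

```python
import numpy as np

def psnr(x, y):                                   # Eq. (5)
    mse = np.mean((x - y) ** 2)
    return 20 * np.log10(255.0 / np.sqrt(mse))

def ssim_global(x, y):                            # Eq. (6), simplified global form
    ux, uy = x.mean(), y.mean()
    sx, sy = x.std(), y.std()
    sxy = ((x - ux) * (y - uy)).mean()
    return (2 * ux * uy / (ux**2 + uy**2)) * (2 * sx * sy / (sx**2 + sy**2)) * (sxy / (sx * sy))

def epi(x, y):                                    # Eq. (7), diagonal edge differences
    dx = np.abs(x[2:, 2:] - x[:-2, :-2])          # host frame edges
    dy = np.abs(y[2:, 2:] - y[:-2, :-2])          # watermarked frame edges
    return dy.sum() / dx.sum()

def aci(capacities_bits, alpha=np.sqrt(5) / 2, beta=1000.0):   # Eq. (8)
    return alpha ** (np.mean(capacities_bits) / beta)

x = np.random.randint(0, 256, (256, 256)).astype(float)        # synthetic host frame
y = np.clip(x + np.random.normal(0, 2, x.shape), 0, 255)       # synthetic watermarked frame
print("PSNR:", round(psnr(x, y), 2), "SSIM:", round(ssim_global(x, y), 4),
      "EPI:", round(epi(x, y), 4), "ACI:", round(aci([5000, 5200, 4800]), 4))
```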
Main frame alongside two DCT-decomposed frames using small-sized patches
Three DCT-decomposed frames using medium-sized patches
Three DCT-decomposed frames using large-sized patches
In this research, a new data aggregation method based on the discrete cosine transform and quasi-sparse bit injection for IoT-enabled ViSAR sensor networks was proposed towards enhancing the embedding capacity (or aggregation performance). This method could outperform a recent data hiding approach, which was used as the reference method in our work. We used four different metrics to evaluate the efficiency of the proposed method in terms of general frame quality (similarity and edge handling) and aggregation performance, and all of them confirmed its suitability. One of the findings of our research is the importance of checking different patch sizes; in our experiments, average-sized and above-average patches were the best selections. Moreover, a study on complexity using execution times was performed, which can help us find the best DCT patches. As a next research idea, we can work on more suitable decomposition transforms to create a quasi-sparse space in order to improve the aggregation performance once again in SAR/ViSAR systems. In addition, finding a high-performance, fully lossless decomposition transform can make the aggregation mechanism reversible, which may be important in some specific applications. There are many decomposition techniques, like KLT, that can be used for this application, but the main focus of our research was on how to combine a state-of-the-art data hiding method with a powerful decomposition technique towards quasi-sparse bit injection. Of course, investigation of the application of other transforms (instead of DCT) can be done as future work. Specifically, KLT is not suitable for real-time processing because of its inherently high computational complexity compared to DCT. FFT is a complex transform and is therefore not suitable for this frame transformation towards quasi-sparsity. A promising candidate could thus be the wavelet transform. In the current version, only the process of extracting the injected bits is fully reversible (lossless).
All the data and computer programs are available.
ViSAR: Video synthetic aperture radar
IoT: Internet of Things
IBDH: Interpolation-based data hiding
DCT: Discrete cosine transform
IBDH-DCT: Interpolation-based data hiding using discrete cosine transform
IDCT: Inverse DCT
KLT: Karhunen Loeve transform
ACI: Average capacity index
PSNR: Peak signal to noise ratio
SSIM: Structural similarity
EPI: Edge preservation index
MSE: Mean squared error
BPP: Bit per pixel
B. Bahri-Aliabadi, M.R. Khosravi, S. Samadi, Frame Rate Computing in Video SAR Using Geometrical Analysis, The 24th Int'l Conference on Parallel and Distributed Processing Techniques and Applications (PDPTA'18), pp. 165-167, 2018; Las Vegas. USA. M.R. Khosravi, S. Samadi, R. Mohseni, Spatial Interpolators for Intra-Frame Resampling of SAR Videos: A Comparative Study Using Real-Time HD (Medical and Radar Data, Current Signal Transduction Therapy, 2019) M.R. Khosravi, M. Yazdi, A lossless data hiding scheme for medical images using a hybrid solution based on IBRW error histogram computation and quartered interpolation with greedy weights. Neural Computing and Applications 30, 2017–2028 (2018) L. Zhang, X. Wu, An edge-guided image interpolation algorithm via directional filtering and data fusion. IEEE Transactions on Image Processing 15(8), 2226–2238 (2006) M. Arabzadeh, H. Danyali, M. S.
Helfroush, Reversible Watermarking Based on Interpolation Error Histogram Shifting, International Symposium on Telecommunications (IST'2010), pp. 840-845, 2010. M.A. Carreira-Perpinán et al., Alternating optimization of decision trees, with application to learning sparse oblique trees, 32nd Conference on Neural Information Processing Systems (Montr´eal, Canada, 2018) M.R. Khosravi, H. Rostami, S. Samadi, "Enhancing the Binary Watermark-Based Data Hiding Scheme Using an Interpolation-Based Approach for Optical Remote Sensing Images". International Journal of Agricultural and Environmental Information Systems 9(2), 53–71 (2018). https://doi.org/10.4018/IJAEIS.2018040104. M.R. Khosravi et al., A Tutorial and Performance Analysis on ENVI Tools for SAR Image Despeckling. Current Signal Transduction Therapy (2019) L. Luo, Z. Chen, M. Chen, X. Zeng, Z. Xiong, Reversible Image Watermarking Using Interpolation Technique. IEEE Transactions on Information Forensics and Security 5(1), 187–193 (2010) C.-C. Lin, W.-L. Tai, C.-C. Chang, Multilevel reversible data hiding based on histogram modification of difference images. Pattern Recognition 41, 3582–3591 (2008) S. Zhang, T. Gao, L. Yang, A reversible data hiding scheme based on histogram modification in integer DWT domain for BTC compressed images. International Journal of Network Security 18(4), 718–727 (2016) J. Tian, Reversible data embedding using a difference expansion. IEEE Transactions on Circuits and Systems for Video Technology 13(8), 890–896 (2003) A. Malik, G. Sikka, H. Verma, An image interpolation based reversible data hiding scheme using pixel value adjusting feature. Multimedia Tools and Applications (2016) T.-C. Lu, C.-C. Chang, Y.-H. Huang, High capacity reversible hiding scheme based on interpolation, difference expansion, and histogram shifting. Multimedia Tools and Applications 72, 417–435 (2014) X. Zhang, Z. Sun, Z. Tang, C. Yu, X Wan, High capacity data hiding based on interpolated image. Multimedia Tools and Applications 76(7), 9195–9218 (2017) A. Shaik, T. V., High capacity reversible data hiding using 2D parabolic interpolation, Multimedia Tools and Applications, vol. 78, no. 8, pp. 9717–9735, 2019. M.A. Wahed, H. Nyeem, High capacity reversible data hiding with interpolation and adaptive embedding, PLoS ONE 14(3): e0212093 (2019). https://doi.org/10.1371/journal.pone.0212093 R.C. Gonzalez, R.E. Woods, Digital Image Processing, third edn. (Prentice Hall, NJ, 2008) L. Zhang, X. Wu, Color Demosaicking Via Directional Linear Minimum Mean Square-Error Estimation. IEEE Transactions on Image Processing 14(12), 2167–2178 (2005) P. Getreuer, Zhang-Wu (Directional LMMSE Image Demosaicking, Image Processing On Line (IPOL), 2011) V. Karimi, R. Mohseni, Intelligent target spectrum estimation based on OFDM signals for cognitive radar applications. Journal of Intelligent & Fuzzy Systems 36, 2557–2569 (2019) V. Karimi, OFDM waveform design based on mutual information for cognitive radar applications. The Journal of Supercomputing (2019) S. Kafshgari, High-Performance GLR Detector for Moving Target Detection in OFDM Radar-Based Vehicular Networks. Wireless Personal Communications 108, 751–768 (2019) M. Yazdi, An Efficient Training Procedure for Viola-Jones Face Detector, International Conference on Computational Science and Computational Intelligence (ICCSCI) (Las Vegas, USA, 2017) M. Yazdi, Robust cascaded skin detector based on AdaBoost. Multimedia Tools and Applications 78(2), 2599–2620 (2019) M. 
Singhal, Optimization of hierarchical regression model with application to optimizing multi-response regression k-ary trees, Association for the Advancement of Artificial Intelligence (AAAI) (Honolulu, Hawaii, USA, 2019) M. R. Khosravi, S. Samadi, Modified Data Aggregation for Aerial ViSAR Sensor Networks in Transform Domain, 25th Int'l Conf. Par. and Dist. Proc. Tech. and Appl. (PDPTA'19), pp. 87-90, 2019.
We would like to thank Sandia National Laboratory for ViSAR data used as dataset in this research.
Department of Electrical and Electronics Engineering, Shiraz University of Technology, Shiraz, Iran: Mohammad R. Khosravi & Sadegh Samadi
MK participated in mathematical design of the proposed method and its computer implementation. SS coordinated industrial application and raw data preparation and helped out for study. MK and SS have completed the first draft of this paper. All authors have read and approved the final manuscript.
Correspondence to Mohammad R. Khosravi.
Khosravi, M.R., Samadi, S. Efficient payload communications for IoT-enabled ViSAR vehicles using discrete cosine transform-based quasi-sparse bit injection. J Wireless Com Network 2019, 262 (2019). https://doi.org/10.1186/s13638-019-1572-4
Interpolation-based data hiding (IBDH), Video synthetic aperture radar (ViSAR), Discrete cosine transform (DCT), ViSAR sensor networks, Multi-modal Sensor Data Fusion in Internet of Things
Mental Well-being Among Workers: A Cross-national Analysis of Job Insecurity Impact on the Workforce
Concetta Russo ORCID: orcid.org/0000-0001-5822-64731 & Marco Terraneo1
Social Indicators Research volume 152, pages 421–442 (2020)
Drawing on 2011 and 2016 European Quality of Life Survey data from eight European countries, this paper considers the importance of subjective indicators of employment conditions in impacting mental well-being. Among employment conditions, job insecurity has been discussed as having a negative impact on mental well-being by enhancing the worker's sense of unpredictability. The idea of losing one's job brings with it the fear of an uncertain or unclear future and the sense of lack of agency—i.e. feeling powerless with respect to the risk of becoming unemployed. Thus, we investigate two dimensions of job insecurity, namely 'cognitive job insecurity' and 'labour market insecurity'. Our dependent variable is mental health well-being, measured using the 5-item World Health Organization Well-Being Index (WHO-5), which is a self-reported health scale validated by several studies and internationally adopted for measuring psychological well-being. We apply a fixed-effects model and use a set of individual control variables to obtain parameter estimates. Moreover, to control for country-level heterogeneity, two macro-level variables are considered: the type of welfare regime and employment protection. The novelty of this research lies in disentangling the concept of precariousness from the dichotomy of open-ended/non-open-ended contract and in including in the analysis subjective categories such as self-perceived job insecurity. The findings of our study suggest that self-perceived job insecurity is negatively related to mental well-being for both permanent and temporary workers, making this stressor an important feature in predicting the emergence of psychological distress (i.e. feelings of anxiety or depression) among the workforce.
Gradually over the last 20 years and at an even faster pace after the 2008 crisis which affected Western economies, the labour market has shrunk, and employment patterns have critically changed. Standard employment (represented by the open-ended contract) has gradually decreased, making way for more unstable forms of workforce contracts. Terms such as insecurity, precariousness and vulnerability entered the employment-related lexicon and came to assume relevance in the scientific debate among sociologists (Anderson and Pontusson 2007; Juliá et al. 2017), psychologists (Dooley et al. 1987; Vander Elst et al. 2014) and economists (Böckerman 2004; Origo and Pagani 2009; Böckerman et al. 2011). In The Corrosion of Character, Richard Sennett (1998) discusses the profound effect that a fixed-term contract can have on an individual's consciousness and, more generally, on society. As case studies, Sennett uses a group of former IBM workers, an advertising agent who decided to leave her job because it was causing her frustration and a group of people working as bakers after losing the jobs they were trained for. He argues that under the regime of flexibility, which is a common work condition in contemporary times all over the world, the production of subjectivities dramatically interplays with new forms of anxiety.
Those new forms of anxiety could be considered to be deeply linked with the sense of lack of agency that people face when they struggle with uncertainty and they lose a sense of liability for their actions (Sennett 1998). Because working is not only a matter of 'breadwinning' but is also one of the main features placing an individual in the social structure, scholars have started to investigate to what extent and in which peculiar ways the increase in temporary and precarious forms of employment might affect workers' physical and mental health (Benach and Muntaner 2007; Vives et al. 2013). A rich volume of scientific literature has focused on the relationship between 'working conditions' (specifically, the material conditions under which the job is performed) and workers' physical and mental well-being to the point that factors related to working conditions are recognised, by both scholars and international bodies such as the World Health Organization, as social determinants of health and health inequalities (Benavides et al. 2000; Theorell 2000). Far less substantial attention has been paid to 'employment conditions' (which include precarious jobs, informal jobs and other forms of job-related insecurity), even though these are a defining feature of welfare states (Esping-Andersen 1990; Huber and Stephens 2001). Self-perceived job insecurity has been investigated by numerous economic studies (Gottschalk and Moffitt 1999; Böckerman 2004; Böckerman et al. 2011). With regard to our interest in disentangling the research on the effects of precariousness on mental well-being from investigating the type of contract per se, it is particularly relevant that Origo and Pagani (2009), by analysing the microdata from 2001 Special Eurobarometer 56.1, found that self-perceived job insecurity had a deeper impact on job satisfaction than the type of contract, to the point that the combination 'temporary but secure job' emerged as preferable to the combination 'permanent but insecure job' (Origo and Pagani 2009). Drawing on 2011 and 2016 European Quality of Life Survey data from eight European countries, this paper considers the importance of subjective indicators of employment conditions in impacting mental well-being according to Thomas' theorem 'if men define situations as real, they are real in their consequences' (Thomas and Thomas 1928: 572). The novelty of this approach lies in disentangling the quantitative analysis of the relationship between employment conditions and workers' well-being from only considering objective categories such as the duration of the contract and employees' union participation, and interrogating more subjective categories—such as self-perceived job insecurity—which so far have been explored by qualitative studies (Sennett 1998; Spyridakis 2016) and have been almost completely overlooked by scholars of the sociology of health. In designing our study, we referred to two main theoretical models: the effort-reward imbalance model (Siegrist 1996; Siegrist and Li 2017) and the locus of control model (Rotter 1954). Both perspectives have been used to analyse stressful psychosocial work-environments. The effort-reward imbalance model engages with stressful aspects of the employment contract, stating that reciprocity results from the adequacy between the effort a certain job requires and the wage, job security and the social recognition it offers. 
Thus, a certain level of insecurity about the future of one's job fails to provide employees with the status-related reward they need not to experience an imbalance, and it could give rise to a stressful situation (Siegrist 1996; Siegrist and Li 2017). The locus of control model is concerned with the life features that are out of our control, since the feeling of not being in control of our decisions has been proven to negatively influence mental well-being (Rotter 1954; Sennett 1998; Kirkcaldy et al. 2002). For the purposes of this study, by potentially threatening the workers' internal locus of control and thus causing a sense of impotence with respect to their work life (Argentero and Vidotto 1994), job insecurity could negatively affect workers' mental well-being. Finally, the paper considers in its analysis cross-national differences in welfare state regimes. To achieve this purpose, following the Ferrera model, we chose eight countries, two for each type of welfare state regime, specifically: the United Kingdom and Ireland for the liberal regime, Germany and France for the Bismarckian regime, Denmark and Sweden for the social-democratic regime and, finally, Italy and Spain for the Southern regime (Ferrera 1996). Given the limited number of countries available, the model applies a fixed-effects approach, which is a valuable alternative to the application of conventional multilevel models in country-comparative analysis. Some pivotal quantitative studies on the relation between mental and physical well-being and employment conditions have been led by the GREDS-EMCONET research group which, to fill the existing gap in the scientific literature, analysed the way in which the increasing precariousness of work can be considered to be a social determinant in the production of health inequalities. The group also started to investigate the pathways and mechanisms which explain the higher morbidity of precarious workers (Artazcoz et al. 2005; Vives et al. 2013). To pursue this aim, the GREDS-EMCONET research group built a multidimensional measurement instrument, called the Employment Precariousness Scale (EPRES), which comprises six dimensions: employment instability, low wages, erosion of workers' rights, disempowerment, vulnerability towards undesirable treatment, and incapacity to exercise workplace rights (Vives et al. 2013). Even though the EPRES scale was explicitly designed to tackle the limitations of the rigid dichotomy between permanent and temporary workers, and Benach and Muntaner (2007) previously underlined the importance of the role played by job insecurity as a chronic stressor for workers, the scale tends to focus almost exclusively on objective indicators regulated by the relations between the employer and the employees, and overlooks more subjective ones such as self-perceived job insecurity.

Job Insecurity: A Threefold Concept

The concept of job insecurity—defined as the subjectively perceived and undesired possibility of losing one's employment in the near future (Vander Elst et al. 2014)—began to gain scholars' attention in the late eighties as a 'chronic stressor' often related to poorer physical and mental health outcomes (Dooley et al. 1987; Ferrie et al. 1998, 2005).
In the literature, the notion of job insecurity is usually considered to be a threefold conceptual framework based on 'cognitive job insecurity' (self-perceived probability of losing one's current job), 'affective job insecurity' (personal fear of losing the current job) and 'labour market insecurity' (self-perceived probability of finding equally remunerated employment in the case of a job loss)Footnote 1 (Anderson and Pontusson 2007; Reyneri 2013; Lübke and Erlinghagen 2014). All three dimensions of the concept should be considered subjective; nevertheless, job insecurity is fed by social factors, such as labour market performance (Chung and Van Oorschot 2011) and, in particular, the unemployment rate (Reyneri 2013). Indeed, a persistently high national unemployment rate has been proven to have a deep impact on cognitive job insecurity, whereas an abrupt rise in the unemployment rate tends to negatively impact labour market insecurity (Anderson and Pontusson 2007). Lübke and Erlinghagen (2014) stated that job insecurity characteristics vary significantly across Europe due to differences in social security across welfare state regimes (i.e. active and passive labour market policies) (Esping-Andersen 1990) and to the degree and speed of socio-economic change (Evers et al. 1987). Indeed, political science scholars, such as Anderson and Pontusson, stemming from the hypothesis that job insecurity generates demand for social protection, have investigated the relationship between OECD countries' welfare state arrangements and workers' job insecurity, finding that employment protection and active labour market programmes moderate the impact of the national unemployment rate on cognitive job insecurity and on labour market insecurity respectively (Anderson and Pontusson 2007). Perceived job insecurity causes health impairments along with a large range of personal and family problems. It is also associated with reduced well-being (Lübke and Erlinghagen 2014: 320); in other words, being insecure about one's job does not just threaten a worker's material life conditions but also their health, social relationships and thus their quality of life as a whole (Spyridakis 2016).

Mental Health Well-being

The concept of well-being has increasingly attracted the attention of the social sciences over the past two decades. On one hand, this is because Western biomedicine has shifted its aim from making individuals 'not ill' to keeping them 'healthy' in a wider sense, but mostly this is because psychology redefined its object of study as 'mental health' instead of 'mental illness' (Mathews and Izquierdo 2009). On the other hand, this is also because of the growing interest from economists, who started to use subjective well-being measures as indicators of how well a society is doing (Diener and Tov 2012). Mental well-being could be defined as 'the presence of positive emotions and moods (e.g. contentment, happiness), the absence of negative emotions (e.g. depression, anxiety), satisfaction with life, fulfilment and positive functioning' (Bosmans et al. 2016: 251). This definition concerns both the hedonic aspect, by considering feelings and emotions such as happiness or anxiety, and the eudemonic aspect, related to the experience of a sense of meaning and purpose (Diener and Tov 2012). There is a general consensus among social scientists that mental well-being depends on both external conditions and personal resources (Thompson and Marks 2008).
Scholars have posited a large variety of theories which present the impact of external conditions such as economic status (Diener and Seligman 2004; Howell and Howell 2008), degree of freedom (Robeyns 2017) and social inequality (Wilkinson and Pickett 2009). Some attention has also been paid to the link between both working and employment conditions and well-being. The Jahoda theory of 'latent deprivation' (1982), for instance, points out how, other than gaining an income, there are important needs that are fulfilled by working, such as acquiring a social network, being able to structure oneself during the daytime and, most importantly, developing as an individual. Nevertheless, unemployment has been widely discussed as a factor which causes mental health impairment (Murphy and Athanasou 1999; Paul and Moser 2009). Meanwhile, just recently, the relation between precarious employment and mental well-being has been investigated by sociologists (Bosmans et al. 2016; Juliá et al. 2017). Among employment conditions, job insecurity has been discussed as having a negative impact on well-being by enhancing the worker's sense of unpredictability because the idea of losing one's job brings with it the fear of an uncertain or unclear future (De Witte 1999) and the already mentioned sense of lack of agency—i.e. feeling powerless with respect to the risk of becoming unemployed (Sennett 1998). Lack of agency has also been described by some psychologists as a feeling of uncontrollability: 'due to the insecurity about job loss in the future, employees lack control to deal with the insecure situation, which in turn may result in poor well-being' (Vander Elst et al. 2014: 366). Drawing on the last two waves of the European Quality of Life Survey (EQLS) data, the following analysis focuses on whether and how both self-perceived job insecurity and labour market insecurity affect workers' mental health well-being. The EQLS is a well-established tool for monitoring and analysing the quality of life in European Union countries. It currently consists of four waves, conducted respectively in 2003, 2007, 2011 and 2016. It includes both subjective measures (such as health self-assessment) and objective measures (such as type of contract) and investigates attitudes and preferences in social living, such as resources and experiences. For the purpose of this article, we chose to analyse the third and fourth waves—specifically the one carried out between 2011 and 2012 and the one carried out in 2016—because some of the variables we focus on (i.e. how likely or unlikely is it that the interviewee will find a job with a similar salary) were only integrated into the survey in 2011. To increase the sample size, we pooled the two waves and conducted the analysis on this pooled sample in order to make the results more reliable. Moreover, we applied the WCalib weight (Eurofound 2017),Footnote 2 which is useful when calculating confidence intervals or significance at country level for analysis within the EU and with a view to comparing European countries. We selected eight countries, two for each of the four different welfare regimes considered; respectively, the United Kingdom and Ireland for the liberal regime, Germany and France for the Bismarckian one, Denmark and Sweden for the social-democratic one and Spain and Italy for the Southern one (Ferrera 1996). After deleting observations for which there are missing cases for the variables of interest, the final data set contained 10,230 cases. 
(The sizes of the samples used in the analysis, by country, are shown in Appendix Table 3.) This paper investigates two dimensions of job insecurity: specifically, cognitive job insecurity and labour market insecurity. The first dimension is measured through the question: 'How likely or unlikely do you think it is that you might lose your job in the next six months?' The second dimension is measured by the question: 'If you were to lose or had to quit your job, how likely or unlikely is it that you will find a job with a similar salary?' For both questions, respondents were given five possible answers (coded respectively from 1 to 5): 'very likely', 'rather likely', 'neither likely nor unlikely', 'rather unlikely' and 'very unlikely'. Starting from these two dimensions, we constructed an insecurity typology. First, we computed two dichotomous variables: a) the risk of losing one's job, where 0 is a low-risk condition (codes 3 to 5 of the original variable) and 1 is a high-risk condition (codes 1 and 2); and b) the risk of not finding a similar job, where 0 is a low-risk condition (codes 1 and 2 of the original variable) and 1 is a high-risk condition (codes 3 to 5). Second, by crossing these new variables, we obtained a typology with four types of job insecurity: (i) not at all insecure; (ii) insecure, risks losing job; (iii) insecure, risks not finding a similar job; and (iv) totally insecure. This typology, called the 'insecurity index', is the independent variable used in regression models.Footnote 3 Our dependent variable is mental health well-being, measured using the 5-item World Health Organization Well-Being Index (WHO-5), which is a self-reported health scale validated by several studies and internationally adopted for measuring psychological well-being (Topp et al. 2015). The scale covers, according to ICD-10 (International Statistical Classification of Diseases and Related Health Problems, 10th Revision), the three main areas of depression: mood, interests and energy (World Health Organization 1993), and it consists of five items (specifically, five statements) which the interviewees use to evaluate how they have been feeling in the past 2 weeks according to a range that spans from 5 (all the time) to 0 (at no time).Footnote 4 The WHO-5 only contains positively phrased items, a characteristic which has been proven to decrease the ceiling effect (Bech et al. 2003; Topp et al. 2015).Footnote 5 The items are: (1) I have felt cheerful and in good spirits; (2) I have felt calm and relaxed; (3) I have felt active and vigorous; (4) I woke up feeling fresh and rested; and (5) my daily life has been filled with things that interest me. The WHO-5 index rates respondents on a scale from 0 to 100, where people with a score of 50 or lower are considered at risk of depression (Topp et al. 2015). For our multivariate models, we used a set of control variables to obtain parameter estimates. First, we included a measure of perceived health. Respondents' self-assessed health was rated originally according to a 5-point scale from very good to very bad, and participants were subsequently dichotomised as healthy individuals if they declared a 'very good', 'good' or 'fair' health status, or unhealthy if they declared a 'bad' or 'very bad' health status. Second, we included three variables as predisposing factors: age in years, gender and household structure (single; couple; couple with children; single with children; other).
The third type of confounder was enabling factors, which included four variables. Specifically, these were: educational attainment, rated in three categories, 'lower secondary or below', 'upper secondary or post-secondary', and 'tertiary'; a measure of subjective financial circumstances (i.e. household difficulty in making ends meet), originally rated according to six categories from 'very easily' to 'with great difficulty' and subsequently categorised into two levels of 'easily' and 'with difficulty'; the respondents' current occupation aggregated into nine categories (manager, professional, technician, clerical support, service, sales, craft, elementary, other); and the respondents' employment contract which was rated originally in seven categories and subsequently dichotomised into 'unlimited permanent contract' and 'fixed-term or temporary'. Finally, to control for country-level heterogeneity, two macro-level variables were considered: the type of welfare regime and employment protection. First, as noted above, our study categorised the eight countries into four welfare regimes based upon Ferrera's (1996) classification. Second, we used the EPL GAP (Employment Protection Legislation, the last available data are for 2013), which is an OECD indicator measuring the difference between employment protection legislation of open-ended contract and temporary or fixed-term jobs, which is designed to keep track of disparities in protection (in terms of access to fringe benefits, protection in cases of termination of the contract, salary and prospects of upward mobility) across contract types (OECD 2014). In particular, we used a measure of the strictness of employment protection for regular contracts related to individual or collective dismissal. In theory, EPL ranges from 0 (no protection) to 5 (maximum protection), but in the analysed countries, it varies from 1.10 to 2.68. Therefore, we distinguished three levels of job protection: low (values from 1.10 to 1.80), medium (from 1.81 to 2.40) and high (from 2.41 to 2.68). Descriptive statistics for the dependent variable series and predictor variables are shown in Appendix Table 4. To analyse the data, given its hierarchical nature with individuals nested in countries, the obvious choice would be to use multilevel regression models. However, multilevel models are associated with some problems when the estimated models have a small number (i.e. N < 30) of macro-level units (Bryan 2013). Specifically, first, a small sample size at level two leads to biased estimates of second-level standard errors (Cora et al. 2005). Second, because of the low number of degrees of freedom at the country level, only a small number of macro-indicators can be controlled for. Therefore, country-level estimators of these models are affected by omitted variable bias (Möhring 2012). We used the fixed-effects approach (Allison 2009) as an alternative to the application of multilevel methods for country comparisons when the number of second-level units is small. Compared with a multilevel model, in a fixed-effects estimation, a country-specific error term is explicitly estimated, and it belongs to the fixed part of the equation. 
Formally:

$$\begin{aligned} y_{ij} & = \gamma_{00} + \beta_{1} x_{1ij} + \cdots + \beta_{k} x_{kij} + \delta_{1} x_{1ij} u_{j1} + \cdots + \delta_{N-1} x_{1ij} u_{jN-1} \\ & \quad + \alpha_{1} u_{j1} + \cdots + \alpha_{N-1} u_{jN-1} + e_{ij} \end{aligned}$$

with yij being the individual-level dependent variable of observation i in country j; γ00 is the intercept over all countries (note that the country-specific intercept γ0j equals γ00 + uj); xkij is the independent individual-level variable number k; βk is the coefficient on the individual-level variable number k; uj is the error term for each country j; and eij is the error term for observation i within country j. Four models have been estimated. Model 1 (M1) was calculated to test how much variance is explained by the second level. To do this, M1 only includes N − 1 dummy variables for countries. The coefficient of determination (R2) indicates the percentage of variance that is due to country-level variation. Model 2 (M2) added the independent variable (insecurity index) and micro-level predictors (individual variables). Model 3 (M3) tested whether the effects of insecurity vary across countries (i.e. what is called the 'slope effect' in multilevel models). Interaction terms of insecurity and country dummies were added to M3. Finally, Model 4 (M4) added the cross-level interaction effect (i.e. interactions between micro and macro variables). The fixed-effects estimation technique does not include the main effect of macro variables because the country dummies use all the variance at the country level; thus, no variance remains to be explained by additional country-level variables. In this respect, the use of macro-cross-level interaction terms allows for the estimation of a moderator effect of macro variables on individual characteristics. In Fig. 1, we report the standardised rate of the WHO-5 Well-Being Index being less than 50, which is indicative of reduced well-being (Topp et al. 2015), adjusted for age by country. We found significant differences across countries.

Fig. 1 Individuals' WHO-5 Well-Being Index and 95% confidence intervals adjusted for age by country. Note: the WHO-5 Well-Being Index variable was dichotomised (reduced well-being = score ≤ 50); age is a continuous variable between 18 and 85 years.

The United Kingdom had the worst well-being condition (i.e. an adjusted rate of 30.4, confidence interval (CI) = 28.4–32.4), followed by Italy (26.3; CI 24.6–28.1) and France (25.3; CI 23.5–27.2). In contrast, the proportion of people with a WHO-5 Well-Being Index lower than 50 was 13.1 (CI 11.4–15.0) in Denmark and 20.3 (CI 18.1–22.6) in Sweden. In general, we can state that individuals who live in northern countries (plus Ireland) claim to have the highest well-being. Moreover, significant differences in perceived job insecurity among countries were found (Fig. 2): in this case, Scandinavian countries (Sweden 1.26, CI 1.23–1.29; Denmark 1.39, CI 1.42–1.46) also showed the lowest level of insecurity, whereas Southern countries (Spain 1.72, CI 1.68–1.76; Italy 1.66, CI 1.63–1.68) and Ireland (1.62, CI 1.58–1.66) displayed the highest level of job insecurity.

Fig. 2 Perceived job insecurity and 95% confidence intervals by country. Note: the job insecurity variable was codified here into three modalities: 1 = secure (no risk of losing job and not finding a similar job); 2 = partially insecure (risk of losing job or not finding a similar job); 3 = insecure (risk of losing job and not finding a similar job).
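For concreteness, the following minimal sketch (in Python/pandas) illustrates how the insecurity typology and the WHO-5 indicators described above could be derived from EQLS-style microdata. The column names (lose_job, similar_job, who5_1 … who5_5) are illustrative placeholders rather than the actual EQLS variable labels, survey weighting is omitted, and the conventional rescaling of the 0–25 raw WHO-5 score to 0–100 (multiplication by four) is assumed.

```python
import pandas as pd

def build_indicators(df: pd.DataFrame) -> pd.DataFrame:
    """Derive the insecurity typology and the WHO-5 indicators from EQLS-style items.

    Assumed input columns (illustrative names, coded as described in the text):
      lose_job       : likelihood of losing one's job in the next six months (1-5)
      similar_job    : likelihood of finding a job with a similar salary (1-5)
      who5_1..who5_5 : WHO-5 items, each scored from 0 (at no time) to 5 (all the time)
    """
    out = df.copy()

    # (a) High risk of losing one's job = codes 1-2 ('very likely', 'rather likely')
    out["risk_losing"] = (out["lose_job"] <= 2).astype(int)
    # (b) High risk of not finding a similar job = codes 3-5 of the second item
    out["risk_not_finding"] = (out["similar_job"] >= 3).astype(int)

    # Cross the two dummies into the four-type insecurity index
    typology = {
        (0, 0): "not at all insecure",
        (1, 0): "insecure, risks losing job",
        (0, 1): "insecure, risks not finding a similar job",
        (1, 1): "totally insecure",
    }
    out["insecurity_index"] = [
        typology[pair] for pair in zip(out["risk_losing"], out["risk_not_finding"])
    ]

    # WHO-5: raw sum (0-25) rescaled to 0-100; scores of 50 or lower flag reduced well-being
    items = [f"who5_{i}" for i in range(1, 6)]
    out["who5"] = out[items].sum(axis=1) * 4
    out["reduced_wellbeing"] = (out["who5"] <= 50).astype(int)
    return out
```

The same derived columns are reused in the model sketches further below.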
Within this framework, the next step was to estimate whether and to what extent subjective job insecurity affects people's mental well-being. The estimated multivariate M1 (data not shown) includes N − 1 dummy variables to represent individual countries. The variance explained at the country level, as indicated by the R2 value, was very low: 2.4%. In M2, which also includes individual-level variables, the explained variance increased appreciably: 12.8%. We tested whether the micro-level variables introduced in M2 significantly improved the fit of the model compared with M1. For this purpose, the Bayesian information criterion (BIC) was used. According to this test, M2 improved the prediction relative to M1 (see Table 2). The effect of the variable of interest—specifically, our measure of job insecurity—is shown in Table 1. We found that the higher the degree of insecurity, the worse the well-being of people. Even after controlling for confounders, individuals coded as totally insecure displayed a level of mental well-being that was about seven points lower than the level of well-being of those who were secure. Job insecurity, as we have defined it, therefore had a significant effect on people's well-being.

Table 1 Fixed-effects estimation of Model 2—the impact of job insecurity on the WHO-5 Well-Being Index in eight European countries

Self-assessed health has a strong association with well-being. Individuals with bad health conditions scored about 18 points less than those in good health. Also, predisposing factors are related to mental well-being, but the direction and magnitude of the association differ based on the variables considered. In particular, age was not associated with the WHO-5 index. On the other hand, only couples without children showed a statistically significant increase in well-being compared with single people, whereas singles with children revealed a lower level, though in both cases the differences were very small (less than two points). Moreover, females had lower well-being, by around three points, than males. A final set of variables included in the model is related to the enabling factors. Neither education, occupation, nor type of contract was associated with mental well-being. In contrast, people who claimed to make ends meet with difficulty saw their well-being index reduced by nine points. Next, we evaluated whether the effect of job insecurity varies across countries. While in M2 the impact was assumed to be the same for each country, this is not necessarily true and therefore should be tested. For this purpose, we estimated a new model, M3 (data not shown), which included interaction effects of the country dummies and the measure of individual job insecurity. These interaction effects (i.e. the so-called 'slope effect' in multilevel models) allowed for the assessment of differences in the impact of insecurity across countries. First, we compared M3 with M2 through the BIC statistics. Differences in the BIC statistics (Table 2) showed evidence that M2 (i.e. the model without interaction effects between the insecurity index and country dummy variables) was preferred over M3 (i.e. the model with interaction effects).
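A minimal sketch of this stepwise comparison is given below, assuming a pooled data frame df that contains the variables derived in the earlier sketch plus the controls, with illustrative column names (country, bad_health, ends_meet, etc.). It uses plain OLS with country dummies (the least-squares dummy variable form of the fixed-effects model); the WCalib survey weights applied in the paper are omitted for brevity.

```python
import statsmodels.formula.api as smf

controls = ("C(bad_health) + age + C(gender) + C(household) + C(education)"
            " + C(ends_meet) + C(occupation) + C(contract)")

# M1: country dummies only (share of variance attributable to the country level)
m1 = smf.ols("who5 ~ C(country)", data=df).fit()

# M2: adds the insecurity index and the individual-level controls
m2 = smf.ols(f"who5 ~ C(country) + C(insecurity_index) + {controls}", data=df).fit()

# M3: adds country x insecurity interactions (the 'slope effect')
m3 = smf.ols(f"who5 ~ C(country) * C(insecurity_index) + {controls}", data=df).fit()

for name, model in [("M1", m1), ("M2", m2), ("M3", m3)]:
    print(f"{name}: R2 = {model.rsquared:.3f}  BIC = {model.bic:.1f}")
# The specification with the lowest BIC is preferred; in the analysis above this is M2.
```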
From a substantive point of view, this means that the impact of job insecurity on the outcomes we considered was similar in each of the countries analysed in this study. In other words, the more insecure people were, the more their well-being was reduced to a similar extent in all countries considered. Table 2 BIC statistics and likelihood-ratio test to compare models To visualise this result, Fig. 3 illustrates the predictive margins of job insecurity and country interaction as indicated by M3. As we can see, the effect of job insecurity differed slightly among countries where the outcome was studied. In general, we observed that more insecure individuals had a lower score on the WHO-5 Well-Being Index than those with a higher degree of security. However, in a picture of otherwise substantial homogeneity, we observed some variations in the relationship between job insecurity and well-being. On one side, we noted that in Germany, Denmark, Spain and Sweden, people who considered their job to be insecure did not show mental well-being that was statistically lower than people who evaluated their job as secure; on the other side, in France, Italy, Ireland and the United Kingdom, a clear gradient was found. People with higher insecurity showed significant differences (although with some disparities among countries) in their levels of well-being compared to more secure individuals. Specifically, in France secure people had a higher level of well-being than insecure individuals, who showed, independent of their category, the same grade of well-being. This pattern was found also in the UK, whereas in Italy the significant difference in the WHO-5 Well-Being Index was only between secure and totally insecure people. However, interaction effects contributed very slightly to explain the differences in the level of mental well-being. The increase of explained variance in passing from M2 to M3 was very modest, from 12.8 to 13.0%. Fixed-effects estimation of Model 3, well-being in European countries. Predictive margins and 95% confidence intervals of interaction between job insecurity and country. Note. Country: DE, Germany; DK, Denmark; ES, Spain; FR, France; IE, Ireland; IT, Italy; SE, Sweden; UK, United Kingdom. Job insecurity: SEC, Not at all insecure; LOO, Insecure, risks losing job; FIN, Insecure, risks not finding a similar job; INS, totally insecure These small disparities could reflect social, economic and/or public policy differences—what we call context—across the countries. In this perspective, we tested the moderator effect of context (i.e. the welfare regime and employment protection) on the relationship between job insecurity and well-being. Thus, specific welfare state regimes could have different capacities to reduce dependency on the market, guaranteeing the right to revenue and social protection, no matter the participation in the (labour) market (Esping-Andersen 1990). If so, one could expect that job insecurity would have a lower impact on well-being due to welfare state arrangements, considering that well-being changed considerably across the countries and that Northern countries with a social-democratic or Scandinavian welfare regime, such as Sweden and Denmark, were shown to perform better on the WHO-5 well-being index on a cross-national basis. On the other hand, it was found that countries with high insider protection (i.e. greater EPL) were not associated with a lower level of job insecurity on a cross-national basis (Anderson and Pontusson 2007). 
To investigate this dilemma, two further models were developed: M4_WEL includes the interaction between job insecurity and the welfare regime, while M4_EPL includes the interaction between job insecurity and employment protection. In this case too, we initially compared the BIC statistics of M4_WEL and M4_EPL with those from M2. The differences detected in the BIC statistics show that the goodness-of-fit of both M4 models was lower than that of M2 (see Table 2). It can be stated that there was no evidence that different welfare regimes and employment protection legislation were able to mitigate to different extents the effect of job insecurity on well-being. However, looking at Fig. 4, which displays the predictive margins of the interaction between job insecurity and the type of welfare regime (Panel a) and between job insecurity and the level of work protection (Panel b), we see a small dissimilarity in the capacity of welfare models and EPL to reduce the negative impact of insecurity.

Fig. 4 Fixed-effects estimation of Model 4_WEL (Panel a) and Model 4_EPL (Panel b), well-being in European countries. Predictive margins and 95% confidence intervals of the interaction between job insecurity and (a) welfare regime and (b) employment protection legislation. Note: types of welfare regime: SC Scandinavian; BI Bismarckian; SE Southern; and PS post-socialist. Level of employment protection: low (values from 1.10 to 1.80), medium (from 1.81 to 2.40) and high (from 2.41 to 2.68). Employment protection: LP, low protection; MP, medium protection; HP, high protection.

In particular, the magnitude of inequalities seems to be lower for the model that considered countries belonging to the Scandinavian regime, whereas the liberal regime did not have a significant moderating effect on job insecurity with respect to well-being (with the exception of people insecure about the probability of finding a similar job if they lose their current position). Finally, the Bismarckian and Southern models are in an intermediate position: they appear to have alleviated the effects of insecurity for people at risk of losing their job and of not finding a similar job, but not for those who were totally insecure. Regarding employment protection, we can state that countries with high protection showed a (slightly) greater level of well-being, in particular for people at risk of not finding a similar job, than countries with low or medium protection. Moreover, between countries with medium and low EPL, no substantive difference was found. To examine how the baseline results change when different specification and estimation scenarios are applied, we conducted three types of robustness checks. The first related to the value of the WHO-5 index threshold used as an indicator of reduced well-being. Topp et al. (2015) report numerous studies in which a WHO-5 cut-off score of < 50 is used to indicate a clinically relevant condition. Therefore, we examined how sensitive the analysis is to using a dichotomous variable for the WHO-5 instead of a continuous one. The results (not presented here) were very similar to the baseline specification in Table 1, in which a continuous dependent variable was used. Considering 'not at all insecure' as the reference category, the odds ratio of the self-perceived probability of losing one's current job was 1.70 (CI 1.38–2.10), the odds ratio of the self-perceived probability of finding equally remunerated employment in the case of a job loss was 1.39 (CI 1.23–1.56) and the odds ratio of totally insecure was 2.00 (CI 1.59–2.51).
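This dichotomised robustness check can be sketched along the same lines, replacing the linear model with a logistic regression on the reduced well-being flag (WHO-5 of 50 or lower, as derived earlier) and exponentiating the coefficients to obtain odds ratios. Variable names and the reference level again follow the illustrative convention used above and are not the paper's own specification.

```python
import numpy as np
import statsmodels.formula.api as smf

controls = ("C(bad_health) + age + C(gender) + C(household) + C(education)"
            " + C(ends_meet) + C(occupation) + C(contract)")

# Logistic specification: reduced well-being regressed on the insecurity index
# (reference level 'not at all insecure'), country dummies and the M2 controls.
logit = smf.logit(
    "reduced_wellbeing ~ C(country)"
    " + C(insecurity_index, Treatment('not at all insecure'))"
    f" + {controls}",
    data=df,
).fit()

# Odds ratios and 95% confidence intervals for the three insecure categories
params = logit.params.filter(like="insecurity_index")
conf = logit.conf_int().filter(like="insecurity_index", axis=0)
print(np.exp(params))   # point estimates, roughly 1.4-2.0 in the analysis above
print(np.exp(conf))
```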
These findings confirm the relevant impact of perceived insecurity on individuals' mental health. The second type of sensitivity analysis assessed whether the effect of the insecurity index differed between the third (2011) and the fourth (2016) wave, instead of using pooled data. First, we employed two linear regressions, one for each wave. Second, we combined the estimation results, both parameter estimates and associated (co)variance matrices, into one parameter vector and a simultaneous (co)variance matrix of the robust type (suest command in Stata). Third, we computed a Wald test on the difference between the estimated parameters (data not shown) to evaluate the hypothesis that results differ by wave. The results show that there were no significant differences between waves, with the sole exception of the parameters associated with three countries: Spain, France and Ireland, for which mental health increased considerably with respect to Germany (the reference category). However, it is notable that the impact of the insecurity index categories remained stable over time (2011 versus 2016). Finally, we conducted a heterogeneity analysis to assess whether and to what extent the effect of the main explanatory variables (age, education, gender, employment contract, occupation and ability to make ends meet) varied within a country (see Fig. 5 in Appendix).

Fig. 5 Fixed-effects estimation of Model 3, well-being in European countries. Predictive margins and 95% confidence intervals of the interaction between education, gender, type of contract, occupation, financial circumstances and country. Note: Education: LOW = Lower secondary or below; UPP = Upper secondary or post-secondary; TER = Tertiary. Gender: MAL = Male; FEM = Female. Contract: UNL = Unlimited; FIX = Fixed-term. Occupation: MAN = Manager; PRO = Professional; TEC = Technician; CLE = Clerical support; SER = Service; SAL = Sales; CRA = Craft; ELE = Elementary; OTH = Other. Making ends meet: FAI = Fairly; DIF = With difficulty.

We replicated the original Model 3, adding interaction variables between the explanatory variables and countries. We did not observe statistically significant differences in the estimated parameters among the countries, with only a few exceptions. For example, we found significant differences in mental health well-being between males (higher) and females (lower) in the UK, whereas in other countries no difference was observed. On the other hand, as concerns financial circumstances (the make-ends-meet variable), in Denmark and Ireland the difference between the mental health well-being of people who experience economic difficulties and that of people who do not was larger than in other countries. In general, these results suggest that the impact of the explanatory variables on mental health well-being can vary between countries. This means that there is a growing need to conduct country-specific analyses. According to the recently published International Labour Organization's World Employment and Social Outlook 2015 report, entitled 'The Changing Nature of Jobs', which covers 180 countries and about 84% of the global workforce, only 42% of employed people can count on a permanent contract (International Labour Organization 2015).
This suggests that 'nonstandard employment', a definition used to group under a common label non-open-ended contracts, seasonal and casual work, temporary work and informal work, is becoming standard after all—or at the very least it concerns more than half of the global workforce. Scholars have thus started to research the pathways and mechanisms linking the precariousness of the contemporary labour market to the health and well-being of the growing flexible workforce (Artazcoz et al. 2005; Vives et al. 2013; Bosmans et al. 2016; Juliá et al. 2017). These studies have so far shown that, although a higher gradient of poor mental health cannot be found among all groups of precarious employees, significant differences have been found among specific categories of the precarious workforce: for instance, among lower occupational social classes (Juliá et al. 2017); non-manual female workers and manual male workers (Artazcoz et al. 2005); among workers with lower educational attainment, those who had been previously unemployed, and immigrant workers (Vives et al. 2013); and among workers who generally lack coping skills (Bosmans et al. 2016). Our study, which aims to disentangle the concept of precariousness from the dichotomy of the open-ended/non-open-ended contract and to include in its analysis subjective categories such as self-perceived job insecurity and labour market insecurity, has found that the higher the level of self-perceived insecurity, the lower the level of well-being, with the latter being measured by the validated WHO-5 Well-Being Index. Our findings suggest that the type of contract, the level of education, and the occupational category do not directly impact mental well-being, not because they are not relevant but because these characteristics already contribute to shaping workers' perceived level of job insecurity. The latter is defined by Sverke and Hellgren (2002: 39) in their integrated model 'as a subjectively experienced multidimensional phenomenon which may arise as a function of the interaction between the objective situation (…) and subjective characteristics'. Therefore, the level of job insecurity reported by workers is related to a set of plausible objective variables which include certain characteristics of the job currently held and which are typically associated with fragile employment. Despite this, there remains significant variation in job insecurity not explained by characteristics of the present job. This unexplained variation could reflect the fact that individuals hold private information relating to their chances of becoming unemployed in the future (Green et al. 2001). As De Cuyper and De Witte state, 'traditional psychological explanations for the consequences of temporary employment cannot account (…) for the absence of a clear-cut contract-based differences' (2006: 396). Job insecurity has been described as 'an internal event reflecting a transformation of beliefs about what is happening in the organisation and its environment' (Jacobson 1991: 15) which comes along with a general sense of powerlessness when faced with these seemingly uncontrollable events (De Witte 1999; Vander Elst et al. 2014). A lack of control, along with low social participation, has been proven to have a powerful influence on health because both factors enhance one's feeling of being left out of the community and of losing/not deserving one's status quo.
Facing a lack of control in the work environment increases the risk of both physical and mental health issues (Marmot 2006) since the worker's internal locus of control is threatened (Rotter 1954). Stemming from these premises, it seems understandable that a subjective variable such as self-perceived job insecurity could have more influence on worsening a worker's psychological well-being than an objective variable such as the type of contract or occupational category. Two other variables that play a role in moderating/worsening the influence of job insecurity on psychological well-being are the gender of the respondent and their ability to make ends meet. Discussing the latter variable, if we consider the ability to make ends meet as a proxy variable for the economic level of a household, our findings are coherent with the literature on the vastly researched link between economic poverty and mental health issues. Haushofer and Fehr (2014) collected 18 studies investigating the pathways through which the level of income affects psychological well-being and found in all of them that lower levels of income were associated with lower levels of psychological well-being, both across countries and within countries. Another debated topic is the gender gap: our analysis finds that female workers are more likely than their male counterparts to experience distress resulting from their precariousness, which, according to the existing literature, has been found to be true among different social groups, such as immigrant workers, previously unemployed people (Vives et al. 2013) and non-manual workers (Artazcoz et al. 2005). This result could be linked to work-related gender inequality, which concerns not only the wage gap, gender stratification and labour force participation (Cotter et al. 2004) but also gender inequalities in accessing health resources, which are considered a cause of the gender disparity in depression (Pacheco et al. 2019). Finally, one of the most interesting results concerns the impact of job insecurity on well-being across countries and, most importantly, across the four different welfare regimes taken into consideration. According to our findings, and contrary to expectations linked to the fact that welfare institutions seem to have an impact on well-being (as shown in Table 2), the EPL gap does not significantly affect the relation between our two main variables. Lübke and Erlinghagen (2014) explain the low impact of EPL on self-perceived job insecurity with the so-called 'security paradox' from the work of Evers et al. (1987), which states that 'people have the tendency to get used to a certain level of security so that different levels of objective (in)security may lead to the same levels of subjective insecurity' (Lübke and Erlinghagen 2014: 321). In other words, because workers tend to calibrate their expectations to their given situation in terms of EPL and, more generally, to the welfare regime to which they are accustomed, a sudden disinvestment in social protection programmes is more likely to affect workers than the regulation per se, although—as the quoted scholars admit—an elaborated theory about this correlation has not yet been formulated (Lübke and Erlinghagen 2014).
It would be interesting for further studies to explore whether the countries where job insecurity is determined to have a deeper impact on well-being (France, Italy, Ireland and the UK) have recently experienced important changes in terms of their EPL and more generally in the organisation of their welfare regimes. The findings of our study suggest that self-perceived job insecurity is negatively associated with mental well-being for both permanent and temporary workers, making this stressor an important feature in predicting the emergence of psychological distress (i.e. feelings of anxiety or depression) among the workforce. There are four main drawbacks to this study. First, the sample size of the countries under study was small in some cases (i.e. Ireland, Spain, Denmark), and this can lead to biased estimates. Second, data are repeated in cross-sectional surveys, and the study did not use a longitudinal panel; therefore, we cannot establish how the respondents' work-life changes impacted on their well-being over time. Third, the data set we used for the study does not contain a second validated scale for health and well-being such as, for instance, the SF-36. Having a second scale would make possible a comparison of results and an examination of the concept of well-being from a different perspective. Finally, the data set contains only qualitative survey questions (or verbal expectations) about job insecurity which, as Binelli (2019) has argued, by restricting the range of possible answers of the respondents, could potentially represent a source of bias. Despite the aforementioned limitations, this study furthers the debate on job insecurity and its influence on psychological well-being, and it suggests some moderation measures for policymakers and employers. For instance, resources should be dedicated to employees' well-being at work, not only because it has been proven that doing so improves the level of production (Gavin and Mason 2004; Schütte et al. 2014) but also because in the mental health field early intervention is often more effective and less invasive (Jorm and Griffiths 2006). Finally, a wider measure could include some psychological well-being-related policies among employment protection programmes, with an aim to moderate the impact of job loss and precariousness on workers' mental health. The concept of 'labour market insecurity' could be considered quite similar to what some scholars have defined as 'subjective employability', which Silla et al. described as 'employees' perception of the available alternatives in labour market' (2009: 741). The WCalib weight generated is recommended for analysis of within-the-EU data (Eurofound 2017). It is used for improving the calculation of confidence intervals or significance at country level. As a robustness check, we replicated the models by coding differently the variables of risk of losing job and risk of not finding a similar job. Specifically, we examined how sensitive the analysis is to setting the answer 'neither likely nor unlikely' as high risk (code 1) instead of low risk (code 0) for both variables. Results (not shown here) are very similar to the baseline specification in models presented later. The WHO-5 has a coefficient of homogeneity of 0.63 and a Cronbach coefficient alpha of 0.88 (Zierau et al. 2002; Bech et al. 2003). 
A Danish general population study compared psychometrically the mental health subscale from the SF-36 questionnaire with the WHO-5 and found that the latter has a significant lower ceiling effect than the first (Bech et al. 2003). Allison, P. D. (2009). Fixed effects regression models. Los Angeles: SAGE. Anderson, C. J., & Pontusson, J. (2007). Workers, worries and welfare states: Social protection and job insecurity in 15 OECD countries. European Journal of Political Research, 46(2), 211–235. Argentero, P., & Vidotto, G. (1994). LOC-L: Una scala di locus of control lavorativo: Manuale. Mediatest. Artazcoz, L., Benach, J., Borrell, C., & Cortès, I. (2005). Social inequalities in the impact of flexible employment on different domains of psychosocial health. Journal of Epidemiology and Community Health, 59(9), 761–767. Bech, P., Olsen, L. R., Kjoller, M., & Rasmussen, N. K. (2003). Measuring well-being rather than the absence of distress symptoms: A comparison of the SF-36 Mental Health subscale and the WHO-Five Well-Being Scale. International Journal Of Methods In Psychiatric Research, 12(2), 85–91. Benach, J., & Muntaner, C. (2007). Precarious employment and health: Developing a research agenda. Journal of Epidemiology and Community Health, 61(4), 276–277. Benavides, F. G., Benach, J., Diez-Roux, A. V., & Roman, C. (2000). How do types of employment relate to health indicators? Findings from the Second European Survey on Working Conditions. Journal of Epidemiology and Community Health, 54(7), 494–501. Binelli, C. (2019). Employment and earnings expectations of jobless young skilled: Evidence from Italy. Social Indicators Research, pp. 1–31. Böckerman, P. (2004). Perception of job instability in Europe. Social Indicators Research, 67(3), 283–314. Böckerman, P., Ilmakunnas, P., & Johansson, E. (2011). Job security and employee well-being: Evidence from matched survey and register data. Labor Economics, 18, 547–554. Bosmans, K., Hardonk, S., De Cuyper, N., & Vanroelen, C. (2016). Explaining the relation between precarious employment and mental well-being: A qualitative study among temporary agency workers. Work, 53(2), 249–264. Bryan, M. L., & Jenkins S. P. (2013). Regression analysis of country effects using multilevel data: A cautionary tale. In ISER 2013e2014 (working paper series). Chung, H., & Van Oorschot, W. (2011). Institutions versus market forces: Explaining the employment insecurity of European individuals during (the beginning of) the financial crisis. Journal of European Social Policy, 21(4), 287–301. Cora, J., Maas, M., & Hox, J. (2005). Sufficient sample sizes for multilevel modeling. Methodology, 1(3), 86–92. Cotter, D. A., Hermsen, J. M., & Vanneman, R. (2004). Gender inequality at work. New York: Russell Sage Foundation. De Cuyper, N., & De Witte, H. (2006). The impact of job insecurity and contract type on attitudes, well-being and behavioural reports: A psychological contract perspective. Journal of Occupational and Organizational Psychology, 79(3), 395–409. De Witte, H. (1999). Job insecurity and psychological well-being: Review of the literature and exploration of some unresolved issues. European Journal of Work and Organizational Psychology, 8(2), 155–177. Diener, E., & Seligman, M. E. P. (2004). Beyond money: Toward an economy of well-being. Psychological Science in the Public Interest, 5, 1–31. Diener, E., & Tov, W. (2012). National accounts of well-being. In K. C. Land, A. C. Michalos, & M. J. 
Sirgy (Eds.), Handbook of social indicators and quality of life research (pp. 137–156). New York: Springer. Dooley, D., Rook, K., & Catalano, R. (1987). Job and non-job stressors and their moderators. Journal of Occupational Psychology, 60(2), 115–132. Esping-Andersen, G. (1990). The three political economies of the welfare state. International Journal of Sociology, 20(3), 92–123. Eurofound. (2017). European Quality of Life Survey 2016: Quality of life, quality of public services, and quality of society. Luxemburg: Publication Office of the European Union. Evers, A., Nowotny, H., & Wintersberger, H. (1987). The changing face of welfare (Vol. 27). Aldershot: Gower Publishing Company. Ferrera, M. (1996). The 'Southern model' of welfare in social Europe. Journal of European Social Policy, 6(1), 17–37. Ferrie, J. E., Shipley, M. J., Marmot, M. G., Stansfeld, S., & Smith, G. D. (1998). The health effects of major organizational change and job insecurity. Social Science and Medicine, 46(2), 243–254. Ferrie, J. E., Shipley, M. J., Newman, K., Stansfeld, S. A., & Marmot, M. (2005). Self-reported job insecurity and health in the Whitehall II study: Potential explanations of the relationship. Social Science and Medicine, 60(7), 1593–1602. Gavin, J. H., & Mason, R. O. (2004). The virtuous organization: The value of happiness in the workplace. Organizational Dynamics, 33(4), 379–392. Gottschalk, P., & Moffitt, R. (1999). Changes in job instability and insecurity using monthly survey data. Journal of Labor Economics, 17(S4), S91–S126. Green, F., Dickerson, A., Carruth, A., & Campbell, D. (2001). An analysis of subjective views of job insecurity (No. 01, 08). Department of Economics Discussion Paper. University of Kent. Haushofer, J., & Fehr, E. (2014). On the psychology of poverty. Science, 344(6186), 862–867. Howell, R. T., & Howell, C. J. (2008). The relation of economic status to subjective well-being in developing countries: A meta-analysis. Psychological Bulletin, 134(4), 536–560. Huber, E., & Stephens, J. D. (2001). Welfare state and production regimes in the era of retrenchment. In P. Pierson (Ed.), The new politics of the welfare state (pp. 44–107). Oxford: Oxford University Press. International Labour Organization. (2015). World employment and social outlook 2015: The changing nature of jobs. International Labour Office. Jacobson, D. (1991). Toward a theoretical distinction between the stress components of the job insecurity and job loss experiences. Research in the Sociology of Organizations, 9(1), 1–19. Jahoda, M. (1982). Employment and unemployment: A social-psychological analysis (Vol. 1). Cambridge: CUP Archive. Jorm, A. F., & Griffiths, K. M. (2006). Population promotion of informal self-help strategies for early intervention against depression and anxiety. Psychological Medicine, 36(1), 3–6. Juliá, M., Vanroelen, C., Bosmans, K., Van Aerden, K., & Benach, J. (2017). Precarious employment and quality of employment in relation to health and well-being in Europe. International Journal of Health Services, 47(3), 389–409. Kirkcaldy, B. D., Shephard, R. J., & Furnham, A. F. (2002). The influence of type A behaviour and locus of control upon job satisfaction and occupational health. Personality and Individual Differences, 33(8), 1361–1371. Lübke, C., & Erlinghagen, M. (2014). Self-perceived job insecurity across Europe over time: Does changing context matter? Journal of European Social Policy, 24(4), 319–336. Marmot, M. G. (2006). Status syndrome: A challenge to medicine. 
JAMA, 295(11), 1304–1307. Mathews, G., & Izquierdo, C. (2009). Anthropology, happiness and well-being. In G. Mathews & C. Izquierdo (Eds.), Pursuits of happiness: Well-being in anthropological perspective (pp. 1–19). New York: Berghahn Books. Möhring, K. (2012). The fixed effect as an alternative to multilevel analysis for cross-national analyses. GK Soclife (working papers Series). Murphy, G. C., & Athanasou, J. A. (1999). The effect of unemployment on mental health. Journal of Occupational and Organizational Psychology, 72, 83–99. OECD. (2014). Non-regular employment, job security and the labour market divide. In OECD employment outlook 2014. Paris: OECD Publishing. https://doi.org/10.1787/empl_outlook-2014-7-en. Origo, F., & Pagani, L. (2009). Flexicurity and job satisfaction in Europe: The importance of perceived and actual job stability for well-being at work. Labour Economics, 16(5), 547–555. Pacheco, J. P. G., Silveira, J. B., Ferreira, R. P. C., Lo, K., Schineider, J. R., Giacomin, H. T. A., et al. (2019). Gender inequality and depression among medical students: A global meta-regression analysis. Journal of Psychiatric Research, 111, 36–43. Paul, K. I., & Moser, K. (2009). Unemployment impairs mental health: Meta-analyses. Journal of Vocational Behavior, 74(3), 264–282. Reyneri, E. (2013). Benessere e qualità dell'occupazione. In L. Bordogna, R. Pedersini, & G. Provasi (Eds.), Lavoro, mercato, istituzioni: Scritti in onore di Primo Cella. Milano: Franco Angeli. Robeyns, I. (2017). Well-being, freedom and social justice: The capability approach re-examined. Cambridge, UK: Open Book Publishers. Rotter, J. B. (1954). Social learning and clinical psychology. New York: Prentice-Hall. Schütte, S., Chastang, J. F., Malard, L., Parent-Thirion, A., Vermeylen, G., & Niedhammer, I. (2014). Psychosocial working conditions and psychological well-being among employees in 34 European countries. International Archives of Occupational and Environmental Health, 87(8), 897–907. Sennett, R. (1998). The corrosion of character: The personal consequences of work in the new capitalism. London: WW Norton & Company. Siegrist, J. (1996). Adverse health effects of high-effort/low-reward conditions. Journal of Occupational Health Psychology, 1(1), 27–41. Siegrist, J., & Li, J. (2017). Work stress and altered biomarkers: A synthesis of findings based on the effort–reward imbalance model. International Journal of Environmental Research and Public Health, 14(11), 1373. Silla, I., De Cuyper, N., Gracia, F. J., Peiró, J. M., & De Witte, H. (2009). Job insecurity and well-being: Moderation by employability. Journal of Happiness Studies, 10(6), 739–751. Spyridakis, M. (2016). The liminal worker: An ethnography of work, unemployment and precariousness in contemporary Greece. UK: Routledge. Sverke, M., & Hellgren, J. (2002). The nature of job insecurity: Understanding employment uncertainty on the brink of a new millennium. Applied Psychology, 51(1), 23–42. Theorell, T. (2000). Working conditions and health. Social Epidemiology, 2, 95–118. Thomas, W. I., & Thomas, D. S. (1928). The child in America: Behavior problems and programs. New York: Knopf. Thompson, S., & Marks, N. (2008). Measuring well-being in policy: Issues and applications. Report commissioned by the Foresight Project on Mental Capital and Well-being, Government Office for Science. London: Government Office for Science. Topp, C. W., Østergaard, S. D., Søndergaard, S., & Bech, P. (2015). The WHO-5 Well-Being Index: A systematic review of the literature. 
Psychotherapy and Psychosomatics, 84(3), 167–176. Vander Elst, T., De Witte, H., & De Cuyper, N. (2014). The Job Insecurity Scale: A psychometric evaluation across five European countries. European Journal of Work and Organizational Psychology, 23(3), 364–380. Vives, A., Amable, M., Ferrer, M., Moncada, S., Llorens, C., Muntaner, C., & Benach, J. (2013). Employment precariousness and poor mental health: Evidence from Spain on a new social determinant of health. Journal of Environmental and Public Health, 2013. Wilkinson, R., & Pickett, K. (2009). The spirit level: Why more equal societies almost always do better. London: Bloomsbury Press. World Health Organization. (1993). International classification of diseases, 10th ed. (ICD-10). Geneva: World Health Organization. Zierau, F., Bille, A., Rutz, W., & Bech, P. (2002). The Gotland Male Depression Scale: A validity study in patients with alcohol abuse disorder. Nordic Journal of Psychiatry, 56, 265–271.

Acknowledgements: Open access funding provided by Università degli Studi di Milano - Bicocca within the CRUI-CARE Agreement. Department of Sociology and Social Research, University of Milan-Bicocca, Via Bicocca degli Arcimboldi, 8, 20126, Milan, MI, Italy. Concetta Russo & Marco Terraneo. Correspondence to Concetta Russo.

See Tables 3 and 4; Fig. 5. Table 3 Samples used in analysis and missing cases by country. Table 4 Descriptive statistics of variables employed in the multivariate models (N = 10,173).

Russo, C., Terraneo, M. Mental Well-being Among Workers: A Cross-national Analysis of Job Insecurity Impact on the Workforce. Soc Indic Res 152, 421–442 (2020). https://doi.org/10.1007/s11205-020-02441-5. Accepted: 12 July 2020. Issue Date: November 2020.
Advanced dynamic nonlinear schemes for geotechnical earthquake engineering applications: a review of critical aspects
State-of-the-Art Review. Gaetano Elia (ORCID: orcid.org/0000-0001-7587-2004) & Mohamed Rouainia, Geotechnical and Geological Engineering, volume 40, pages 3379–3392 (2022)

Nonlinear time domain numerical approaches together with elasto-plastic effective stress soil constitutive models are nowadays available to geotechnical researchers and practitioners interested in geotechnical earthquake engineering. The use of such advanced two- and three-dimensional schemes allows the analysis and design of complex geotechnical structures within a performance-based framework, considering the build-up of excess pore water pressure during the earthquake, the dynamic interaction between the soil deposit and the surface buildings and infrastructures above it, and the effects of multi-directional seismic loading. Within this context, the paper focuses on the dynamic finite element (FE) method and presents a review of the key ingredients governing its predictive capabilities. These include i) the description of the fully coupled solid–fluid interaction formulation and time integration, ii) the calibration of Rayleigh damping and the soil constitutive model parameters, iii) prescription of the boundary conditions for the generated mesh and iv) input motion selection/scaling strategies. For each of the above points, the paper summarises the current knowledge and best practice, with the aim of providing protocols for a confident application of nonlinear FE schemes to evaluate the performance of critical geotechnical infrastructures. Useful hints to promote familiarity with advanced nonlinear soil dynamic analysis among geotechnical practitioners and to indicate areas for further improvement are also highlighted.

The design of geotechnical structures subjected to earthquake loading can be conducted using a variety of analytical and numerical approaches, which are characterised by different levels of complexity. Simplified dynamic analyses are based on the pseudo-static method, where the seismic action can be obtained from preliminary free-field ground response simulations. To predict site effects, seismic input motions are propagated through the deposit, approximating the dynamic characteristics of the soil by means of equivalent linear or nonlinear models (e.g. Elia 2015). In the equivalent linear visco-elastic method (Schnabel et al. 1972; Idriss et al. 1973), the shear modulus (G) and damping ratio (D) are iteratively updated in order to approximate the soil nonlinear behaviour through a sequence of linear analyses. For each iteration, the exact solution proposed by Roesset (1977) is used within a total stress approach. Nonlinear time-domain schemes are also widely adopted in the literature for one-dimensional ground response analyses. Some of them employ the hyperbolic model, in its original or extended pressure-dependent formulation (e.g. Hashash et al. 2011), but still adopt a total stress approach. In others, nonlinearity is simulated by incremental plasticity models, which allow the prediction of the permanent deformation of a soil deposit, the generation of excess pore pressure, the decay of the soil stiffness and the corresponding hysteretic dissipation induced by the earthquake action (e.g. Elgamal et al. 2006).
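To make the equivalent linear procedure described above concrete, the following sketch (Python) shows the basic strain-compatibility iteration on G and D. The modulus reduction and damping curves and the linear ground response solver are placeholders supplied by the user — e.g. laboratory-derived curves and a frequency-domain solution of the Roesset (1977) type — so the function illustrates the iteration logic only and is not the implementation of any specific code; the 0.65 effective strain ratio is a typical assumption rather than a prescribed value.

```python
import numpy as np

def equivalent_linear(G0, gamma_pts, G_ratio_pts, D_pts,
                      run_linear_analysis, strain_ratio=0.65,
                      tol=0.05, max_iter=15):
    """Iterate the secant shear modulus G and damping ratio D until the
    strain-compatible values stabilise (equivalent linear method).

    gamma_pts, G_ratio_pts, D_pts : arrays defining the G/G0 and D curves
                                    versus shear strain (increasing strain values)
    run_linear_analysis(G, D)     : placeholder returning the peak shear strain
                                    from a linear visco-elastic response analysis
    """
    G = G0
    D = np.interp(gamma_pts[0], gamma_pts, D_pts)   # small-strain damping
    for _ in range(max_iter):
        # Effective strain taken as a fraction of the peak strain of the layer
        gamma_eff = strain_ratio * run_linear_analysis(G, D)
        G_new = G0 * np.interp(gamma_eff, gamma_pts, G_ratio_pts)
        D_new = np.interp(gamma_eff, gamma_pts, D_pts)
        if abs(G_new - G) / G0 < tol and abs(D_new - D) <= tol * max(D, 1e-6):
            return G_new, D_new
        G, D = G_new, D_new
    return G, D   # last strain-compatible estimate if convergence is not reached
```

In a SHAKE-type analysis this loop would be applied layer by layer, with the linear frequency-domain solution recomputed at every iteration.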
In addition to the classical two-dimensional (2D) and three-dimensional Finite Element Method (FEM) (e.g. Zienkiewicz et al. 1977; Potts and Zdravković 1999) and Finite Difference Method (FDM) (e.g. Smith 1985; Lemos 2012), numerical approaches also include the Boundary Element Method (e.g. Banerjee and Butterfield 1981; Venturini 1983; Shinokawa and Mitsui 1993) and the Discrete Element Method (e.g. Cundall and Strack 1979; O'Sullivan 2017). Through a 2D or 3D numerical discretisation of the boundary value problem (BVP), the analysis can account for i) the interaction during earthquakes between buried complex morphologies and surface or subsurface buildings and infrastructures (i.e. dynamic soil-structure interaction) and ii) the effects of multi-directional seismic loading conditions. These benefits come at the expense of an increased computational cost, especially when 3D geometries are considered, and require a thorough understanding of the mathematical formulation implemented in the software. In addition, the soil mechanical behaviour can be described using advanced elasto-plastic soil constitutive laws, which are able to describe the solid–fluid interaction within a coupled effective stress formulation (Biot 1941; Zienkiewicz et al. 1999). Advanced constitutive models are, indeed, increasingly available in commercial FE and FD codes (e.g. Mazzoni et al. 2006; Itasca Consulting Group Inc 2016; Plaxis 2019). Nevertheless, their calibration is not straightforward and requires special in-situ and laboratory data (e.g. geophysical tests, resonant column/torsional shear, cyclic triaxial, bender element tests), often not included in a standard geotechnical characterisation. Therefore, advanced dynamic analyses are difficult for non-expert users to adopt, as both the model calibration procedures and the software usage protocols are often unclear or poorly documented, leading to uncertain numerical results and, as such, obscuring the possible benefits associated with their use in the geotechnical design process (e.g. Kwok et al. 2007; Amorosi et al. 2010). Nonetheless, the study of well-investigated and monitored case histories, where an accurate characterisation of the soil dynamic properties is available, can effectively be used to inform the calibration of the adopted constitutive model and the validation of the numerical results. This paper focuses on the application of the FE method in the analysis and design of geotechnical BVPs subjected to seismic loading, covering a review of the key ingredients governing its predictive capabilities. In sequential order, the work describes and discusses i) the fully coupled solid–fluid interaction formulation and the time integration of the dynamic equations of motion, ii) the setting of the Rayleigh viscous damping, iii) the calibration of the soil constitutive model parameters and the initialisation of their state variables, iv) the definition of the dynamic boundary conditions along the sides of the numerical model and v) the input motion selection/scaling strategies, as all of these can play a crucial role in the numerical results. Useful hints for each of the above points, based on the most recent research, are provided to promote the adoption of dynamic FE schemes with more confidence.
Fully coupled solid–fluid interaction formulation
Considering the soil as a porous material with a single fluid filling the pores, the mechanical behaviour can be described by the effective stress carried by the solid skeleton, defined as (Biot 1941): $$\boldsymbol{\sigma}^{\prime}=\boldsymbol{\sigma}-\alpha\,\mathbf{m}\,p$$ where σ is the total stress vector, p the pore pressure and \(\mathbf{m}^{T}\) = [1 1 1 0 0 0]. In Eq. (1), α is a factor which becomes close to unity when the bulk modulus of the grains is much higher than that of the whole material (i.e. α ≈ 1 for soils). In special cases, geotechnical calculation models can be restricted to either fully drained or undrained conditions; however, the real soil behaviour is time dependent, with pore pressures related to the soil permeability, rate of loading and hydraulic boundary conditions. Hence, the numerical analysis of time dependent problems (such as those related to soil dynamics) requires the simultaneous calculation of the soil deformation and groundwater flow, by coupling the seepage equations with the equilibrium and constitutive equations. The fully coupled dynamic formulation for solid–fluid interactions stems from Biot's general 3D theory of consolidation (Biot 1941), combining force equilibrium and flow continuity equations. Assuming that the relative acceleration of the fluid phase with respect to the solid skeleton and its convective part are negligible, the resulting system of ordinary equations of motion can be written as follows (Chan 1988; Zienkiewicz et al. 1999): $$\left\{\begin{array}{l}\mathbf{M}\ddot{\mathbf{u}}+\mathbf{C}\dot{\mathbf{u}}+\mathbf{K}\mathbf{u}-\mathbf{Q}\mathbf{p}=\mathbf{f}^{s}\\ \mathbf{Q}^{T}\dot{\mathbf{u}}+\mathbf{S}\dot{\mathbf{p}}+\mathbf{H}\mathbf{p}=\mathbf{f}^{p}\end{array}\right.$$ where u is the solid phase displacement vector and p is the pore fluid pressure vector, M is the mass matrix, K is the stiffness matrix, C is the viscous damping matrix, Q is the coupling matrix between the motion and flow equations, H is the permeability matrix, S is the fluid compressibility matrix, f^p is the force vector for the fluid phase and f^s is the force vector for the solid phase. The system (2) of mixed equations of displacements and pore pressures, valid for fully saturated soils, represents the so-called "u-p formulation" for geotechnical dynamic analysis. System (2) reduces to the general consolidation theory if the inertia of the soil is neglected (i.e. \(\ddot{\mathbf{u}}=0\)), to the seepage equation if also \(\dot{\mathbf{u}}=\mathbf{u}=0\) and \(\dot{\mathbf{p}}=0\), and to the static conditions when \(\ddot{\mathbf{u}}=\dot{\mathbf{u}}=0\) and \(\dot{\mathbf{p}}=0\). Zienkiewicz et al. (1999) showed that the u-p approximation is suitable for the analysis of dynamic problems when the frequency of the input is within the typical range of frequencies characterising real earthquakes (i.e. 0 ÷ 20 Hz). If shocks (i.e. explosions) or very high frequencies are involved, the exact Biot solution accounting for all the acceleration terms should be adopted. Frequency-dependent viscous damping is included in the equation of motion relative to the solid phase (Eq. (2)) via the Rayleigh matrix as follows (e.g.
Clough and Penzien 2003): $$\mathbf{C}={\mathrm{a}}_{0}\mathbf{M}+{\mathrm{a}}_{1}\mathbf{K}$$ where the coefficients a0 and a1 can be calculated by selecting one target value of the damping ratio D and two angular frequencies, ωm and ωn, as follows: $$\left\{\begin{array}{c}{\mathrm{a}}_{0}\\ {\mathrm{a}}_{1}\end{array}\right\}=\frac{2D}{{\omega }_{m}+{\omega }_{n}}\left\{\begin{array}{c}{\omega }_{m}{\omega }_{n}\\ 1\end{array}\right\}$$ The viscous damping will be equal to or lower than D for frequencies inside the interval [ωm, ωn] and higher outside. Typically, the two frequencies ωm and ωn are expressed in Hertz and called fm and fn, respectively. Park and Hashash (2004) proposed a method to extend the classical two frequencies formulation of Eq. (4) to incorporate any number of frequencies. A frequency-independent viscous damping matrix for one-dimensional site response analysis is suggested by Phillips and Hashash (2009) and, more recently, by Nghiem and Chang (2019). Depending on how the formulation is implemented in the FE software, the Rayleigh matrix defined by Eq. (3) can be calculated using the initial or the current (tangent) stiffness of the system (e.g. Mazzoni et al. 2006). Hall (2006) and Charney (2008) discussed the potential problems associated with the introduction of some viscous dissipation in dynamic analysis of structures, suggesting to limit and bound the damping forces deriving from the Rayleigh scheme. The following section describes some recent advances for the appropriate use of Rayleigh damping in geotechnical earthquake engineering applications. Calibration of Rayleigh damping The Rayleigh damping is defined by selecting the coefficients a0 and a1, which depend on both D and the adopted frequency interval [ωm, ωn] according to Eq. (4). Woodward and Griffiths (1996) have clearly shown how the amount of viscous damping introduced in the FE dynamic analysis of an earth dam is difficult to quantify a priori but, at the same time, it can significantly control the results of the simulations in terms of permanent displacements induced by the seismic action. Different possible calibration procedures have been proposed in the literature to identify the two controlling frequencies fm and fn corresponding to ωm and ωn. A well-established rule (Hudson et al. 1994) suggests to select fm as the first natural frequency of the deposit f1, while fn is assumed equal to n times fm, where n is the closest odd integer larger than the ratio fp/f1 between the predominant frequency of the input earthquake motion (fp) and the fundamental frequency of the soil deposit (f1). This choice was based on the elastic analysis of the vibration modes of a shear beam. Kwok et al. (2007) proposed to select, as a first approximation, the first mode of the site and five times this frequency for fm and fn, respectively. Hence, their suggestion to use n equal to 5, as shown in Fig. 1. Example of Rayleigh damping calibration according to the procedure of Kwok et al. (2007) More recently, Amorosi et al. (2010) discussed a new calibration of the Rayleigh coefficients based on the linear equivalent amplification function and the frequency content of the input motion. In their work, it is suggested to select the first target frequency (fm) equal to the first site frequency significantly excited by the seismic motion, and to set the second frequency (fn) equal to the value where the amplification function gets lower than one. 
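Once fm and fn have been chosen with any of these criteria, the coefficients follow directly from Eq. (4). The snippet below is a minimal sketch of that step; the target damping and frequencies are placeholder values, not recommendations for a specific site.

```python
import numpy as np

def rayleigh_coefficients(damping_ratio, f_m, f_n):
    """Mass- and stiffness-proportional coefficients a0, a1 from Eq. (4).
    damping_ratio: target D (fraction of critical); f_m, f_n: control frequencies in Hz."""
    w_m, w_n = 2.0 * np.pi * f_m, 2.0 * np.pi * f_n
    a0 = 2.0 * damping_ratio * w_m * w_n / (w_m + w_n)   # multiplies M
    a1 = 2.0 * damping_ratio / (w_m + w_n)                # multiplies K
    return a0, a1

# Placeholder example: D = 2%, first site frequency 1.2 Hz, second frequency 5 times the first
a0, a1 = rayleigh_coefficients(0.02, 1.2, 5 * 1.2)
print(f"a0 = {a0:.4f} 1/s, a1 = {a1:.6f} s")

# Resulting damping at any frequency f: D(f) = a0/(2*w) + a1*w/2, with w = 2*pi*f
for f in (0.6, 1.2, 3.0, 6.0, 12.0):
    w = 2.0 * np.pi * f
    print(f"f = {f:5.1f} Hz  ->  D = {a0 / (2 * w) + a1 * w / 2:.4f}")
```

At the two control frequencies the computed value returns the target damping, while inside the interval it drops slightly below it and outside it grows, consistent with Eq. (4).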
Adopting this rule, the overestimation of the peak ground acceleration with respect to the corresponding visco-elastic solution can be considerably reduced. Even if the frequency interval [ωm, ωn] can be appropriately defined, the real amount of viscous dissipation introduced in numerical simulations is not really known a priori. It could, nevertheless, be evaluated a posteriori from the examination of the free vibrations of the system after the end of the earthquake, as shown by Amorosi and Elia (2008) and Elia et al. (2011) for the dynamic analysis of an earth dam in Southern Italy. In this case study, a kinematic hardening constitutive model (Kavvadas and Amorosi 2000) was used to simulate the cyclic behaviour of the dam soils. To compensate for the model underestimation of hysteretic dissipation in the small-strain range with respect to the experimental data, a 2% Rayleigh damping was added at the beginning of the FE dynamic simulation. This was calibrated using the initial stiffness of the system and the procedure proposed by Kwok et al. (2007). Then, the evolution of the horizontal displacement recorded at the dam crest in the post-seismic stage of the analysis, shown in Fig. 2, was investigated. It should be noted that the damping ratio was evaluated through the logarithmic decrement method (Clough and Penzien 2003), the same method adopted for the determination of damping during resonant column tests, considering the reduction in amplitude of the residual oscillations of the system composed of the dam and its foundation layer. As the amplitude of the observed free vibrations was very small, the computed value of the damping ratio could not be associated with the material (hysteretic) dissipation provided by the constitutive model, but it was representative of the viscous dissipation, introduced through the Rayleigh matrix, acting at the first natural frequency of the system. The damping calculated a posteriori (i.e. 1.7%, Fig. 2) was found to be consistent with the target value of 2% defined a priori, thus showing that the amount of viscous dissipation introduced in the FE dynamic simulations can be controlled with considerable confidence. Example of viscous damping calculation from the horizontal displacement time history recorded in a node of the FE mesh (modified after Elia et al. 2011)
Time stepping scheme
The algebraic counterparts of Eq. (2) are obtained by applying a time integration scheme. Assuming that the values of displacements, pore pressures and their time derivatives have been obtained at time \(t_{n}\), the integration consists of updating the same quantities at the next time step \(t_{n+1}\). The integration of the dynamic equations of motion can be performed using time stepping schemes characterised by different accuracy, stability, algorithmic damping and run-time (Bathe and Wilson 1976). A review of time integration schemes for dynamic geotechnical analysis is presented by Kontoe (2006).
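To fix ideas before the Generalised Newmark formulas presented next, the sketch below applies the classical Newmark method (average acceleration, β = 0.25, γ = 0.5) to a single-degree-of-freedom oscillator. It is a toy illustration with made-up mass, stiffness and load values, not the coupled scheme of Eq. (2).

```python
import numpy as np

def newmark_sdof(m, c, k, load, dt, beta=0.25, gamma=0.5):
    """Incremental Newmark time stepping for m*a + c*v + k*u = p(t), single DOF,
    zero initial conditions (linear system)."""
    n = len(load)
    u, v, a = np.zeros(n), np.zeros(n), np.zeros(n)
    a[0] = (load[0] - c * v[0] - k * u[0]) / m
    k_eff = k + gamma * c / (beta * dt) + m / (beta * dt**2)
    for i in range(n - 1):
        # Effective incremental load built from the known state at step i
        dp = (load[i + 1] - load[i]
              + m * (v[i] / (beta * dt) + a[i] / (2 * beta))
              + c * (gamma * v[i] / beta + dt * a[i] * (gamma / (2 * beta) - 1)))
        du = dp / k_eff
        dv = gamma * du / (beta * dt) - gamma * v[i] / beta + dt * a[i] * (1 - gamma / (2 * beta))
        da = du / (beta * dt**2) - v[i] / (beta * dt) - a[i] / (2 * beta)
        u[i + 1], v[i + 1], a[i + 1] = u[i] + du, v[i] + dv, a[i] + da
    return u, v, a

# Toy input: 1 Hz oscillator, 5% damping, short sine pulse
m, k = 1.0, (2 * np.pi * 1.0) ** 2
c = 2 * 0.05 * np.sqrt(k * m)
dt = 0.01
t = np.arange(0.0, 5.0, dt)
p = np.where(t < 1.0, np.sin(2 * np.pi * 2.0 * t), 0.0)
u, _, _ = newmark_sdof(m, c, k, p, dt)
print(f"peak displacement = {u.max():.4f}")
```

With γ = 0.5 the scheme introduces no algorithmic damping; increasing γ above 0.5 (with a compatible β) adds the high-frequency numerical dissipation discussed in the following paragraphs.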
Adopting, for example, the Generalised Newmark scheme (Katona and Zienkiewicz 1985), the solid phase can be solved as follows: $$\begin{array}{l}{\ddot{\mathbf{u}}}_{n+1}={\ddot{\mathbf{u}}}_{n}+\Delta {\ddot{\mathbf{u}}}_{n}\\ {\dot{\mathbf{u}}}_{n+1}={\dot{\mathbf{u}}}_{n}+\left[{\ddot{\mathbf{u}}}_{n}+{\beta }_{1}\Delta {\ddot{\mathbf{u}}}_{n}\right]\Delta t\\ {\mathbf{u}}_{n+1}={\mathbf{u}}_{n}+{\dot{\mathbf{u}}}_{n}\Delta t+0.5\left[{\ddot{\mathbf{u}}}_{n}+{\beta }_{2}\Delta {\ddot{\mathbf{u}}}_{n}\right]{\Delta t}^{2}\end{array}$$ Similarly, for the fluid phase: $$\begin{array}{l}{\dot{\mathbf{p}}}_{n+1}={\dot{\mathbf{p}}}_{n}+\Delta {\dot{\mathbf{p}}}_{n}\\ {\mathbf{p}}_{n+1}={\mathbf{p}}_{n}+\left[{\dot{\mathbf{p}}}_{n}+{{\beta }_{1}}^{*}\Delta {\dot{\mathbf{p}}}_{n}\right]\Delta t\end{array}$$ where the coefficients: $$\begin{array}{l}{\beta }_{1}\ge 0.5\\ {\beta }_{2}\ge 0.5{\left(0.5+{\beta }_{1}\right)}^{2}\\ {{\beta }_{1}}^{*}\ge 0.5\end{array}$$ are typically chosen for unconditional stability of the recurrence scheme (Zienkiewicz et al. 1999). The substitution of the above approximations into Eq. (2) leads to a system of coupled nonlinear equations which are solved iteratively by the FE code using the Newton–Raphson procedure. If the coefficients of Eq. (7) are all set equal to 0.5, the time stepping scheme reduces to the trapezoidal scheme: the algorithm remains implicit, but it does not provide numerical (or algorithmic) damping during the integration of the governing equations. In this case, numerical oscillations may occur during the analysis if no other sources of dissipation (e.g. viscous or hysteretic) are introduced in the simulation (Zienkiewicz et al. 1999). Therefore, coefficient values larger than 0.5 are typically adopted, leading to a numerical dissipation which is more pronounced at high frequencies (Amorosi et al. 2010).
Soil constitutive modelling
The stiffness matrix of the system K in Eq. (2) depends on the constitutive model adopted to describe the soil mechanical behaviour in the numerical simulations. Many nonlinear effective stress constitutive formulations have been proposed over the years, all stemming from the general theory of elasto-plasticity. A review of advanced soil constitutive models is outside the scope of the paper. In all cases, the implementation of these models into numerical software usually involves two issues: 1) the calibration of several parameters (sometimes even more than ten) against appropriate laboratory and in-situ data; 2) the initialisation of the model state variables, accounting for the complex and, at times, unknown past stress history of the material.
Calibration of the model parameters
The higher the complexity of the soil mechanical behaviour to be modelled (depending on the physical phenomena that need to be captured), the larger the number of parameters introduced in the constitutive formulation. This is not necessarily an issue, provided that a clear procedure for the calibration of these coefficients can be followed. Most of the parameters have a clear physical meaning, such as the elastic moduli, the critical stress ratio or the slopes of the normal compression and swelling lines in the compression plane for the elasto-plastic models derived from the Critical State Soil Mechanics theory (Roscoe and Burland 1968). Others, instead, can be obtained from an optimisation procedure through the comparison with laboratory and in-situ measurements.
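As a sketch of such an optimisation step, the snippet below fits the two parameters of a simple hyperbolic modulus-reduction law to a measured G/G0 curve by nonlinear least squares (using SciPy's curve_fit, assumed available). The data points and the fitted law are placeholders; an actual calibration would target the response curves of the specific constitutive model and the full set of laboratory results.

```python
import numpy as np
from scipy.optimize import curve_fit

def g_over_g0(gamma, gamma_ref, a):
    """Generalised hyperbolic modulus-reduction law (illustrative shape)."""
    return 1.0 / (1.0 + (gamma / gamma_ref) ** a)

# Placeholder "laboratory" points (shear strain, G/G0) as could be read from RC/TS tests
gamma_lab = np.array([1e-6, 1e-5, 1e-4, 3e-4, 1e-3, 3e-3, 1e-2])
g_ratio_lab = np.array([1.00, 0.98, 0.85, 0.70, 0.48, 0.30, 0.15])

(gamma_ref, a), _ = curve_fit(g_over_g0, gamma_lab, g_ratio_lab, p0=[1e-3, 1.0])
print(f"fitted reference strain = {gamma_ref:.2e}, curvature exponent = {a:.2f}")

rms = np.sqrt(np.mean((g_over_g0(gamma_lab, gamma_ref, a) - g_ratio_lab) ** 2))
print(f"RMS misfit on G/G0 = {rms:.3f}")
```

The same idea extends to several parameters fitted simultaneously against modulus decay, damping and pore pressure curves, at the cost of a less transparent link between each coefficient and a single test result.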
For dynamic BVPs, cyclic triaxial and simple shear, resonant column/torsional shear (RC/TS) and bender element tests could be adopted for the characterisation of the soil response in terms of small-strain stiffness, normalized shear modulus degradation curve, excess pore water pressure and damping curves with cyclic shear strain. This can be carried out in conjunction with geophysical, cross-hole and down-hole measurements to define the small-strain stiffness profile of the soil deposit in the FE model. In order to calibrate the model parameters against RC/TS results, single element simulations of undrained cyclic simple shear (CSS) tests can be performed, imposing different shear strain amplitudes and assessing the secant shear modulus (Gsec) and the damping ratio for each amplitude after a number of cycles sufficient to reach steady-state condition (e.g. Elia et al. 2011; Seidalinov and Taiebat 2014; Elia and Rouainia 2016). Specifically, Gsec is equal to the ratio between the cyclic shear stress (τc) and the cyclic shear strain (γc), while the damping ratio can be evaluated as: $$D=\frac{{W}_{D}}{{4\pi W}_{S}}=\frac{{A}_{loop}}{{2\pi G}_{sec} {\gamma }_{\mathrm{c}}^{2}}$$ where WD is the energy dissipated during the cycle (i.e. the area of the hysteresis loop Aloop) and WS is the maximum strain energy stored during the loading phase (approximated to the area ½τcγc). Figure 3a shows the typical stress–strain curves obtained with a kinematic hardening soil model imposing increasing cyclic shear strains during CSS single element simulations. Referring, in particular, to the CSS test carried out with a γc equal to 1%, the secant shear modulus is represented by the slope of the line connecting the tips of the last cycle, while the areas needed for the calculation of the damping ratio according to Eq. (8) are highlighted with shadings. Once the normalised shear modulus and the hysteretic dissipation are calculated for different imposed shear strain amplitudes, the G/G0 and D curves predicted by the model can be plotted point by point, as shown in Fig. 3b. Results of undrained CSS single element simulations with a kinematic hardening soil model: a) stress−strain response; b) corresponding normalised modulus decay and damping curves An example of a real calibration is presented in Figs. 4 and 5, where the comparison between the predictions of a kinematic hardening soil model (i.e. the RMW model proposed by Rouainia and Muir Wood (2000)) and the laboratory data from undrained triaxial compression tests and combined resonant column/torsional shear tests performed on natural Avezzano Clay samples is shown, respectively (Cabangon et al. 2019). It should be noted that only 2% Rayleigh damping has been added to avoid the propagation of spurious high frequencies and to compensate for the model underestimation of damping in the small-strain range (Fig. 5). At the same time, the hysteretic dissipation provided by the kinematic hardening formulation at large strains overestimates the experimental data, as typically observed when this class of constitutive models is adopted (e.g. Amorosi and Elia 2008; Elia et al. 2011; Elia and Rouainia 2016). To address this point, some adjustments in the plastic modulus law have been proposed (e.g. Seidalinov and Taiebat 2014), but very recently Elia et al. 
(2021) have demonstrated that the largest loops obtained from the back-analysis of ground response FE results, which can potentially provide a high source of dissipation, are statistically irrelevant, as their frequency of occurrence during the earthquake time history is very low. Comparison between the predictions of a kinematic hardening soil model (RMW) and the laboratory data on Avezzano Clay in terms of a) stress path; b) stress–strain response; c) pore pressure–strain response (Cabangon et al. 2019) Comparison between the predictions of a kinematic hardening soil model (RMW) and the normalised modulus decay and damping curves for Avezzano Clay (Cabangon et al. 2019) Moreover, the irregular nature of the earthquake motion implies that the stress–strain cycles induced in the FE soil deposit by the seismic event are typically non-symmetric. Figure 6 shows the predictions obtained at the local level (i.e. at a Gauss point) in terms of shear strain time history and corresponding stress–strain curve during nonlinear FE dynamic simulations of the free-field response of a soil deposit using the RMW model (Guzel 2018). The maximum value of the shear strain induced by the earthquake applied at bedrock (Fig. 6a) is reported in Fig. 6b and in Fig. 6c as the distance between points A and B. At the beginning of the analysis, when the earthquake has not attained its maximum acceleration values yet, the cycles are centred on the origin (see the red part of the curves). This is followed by larger loops associated to the higher accelerations (highlighted in green) and, then, the shear strains oscillate around a final non zero value at the end of the motion (blue part of the curves). Differently from visco-elastic dynamic results, a continuous change of shear stiffness and hysteretic damping can be observed throughout the earthquake excitation and most of the loops induced by the seismic action are not centred around the origin, due to plastic strain accumulation (Fig. 6c). Therefore, the maximum value attained by the shear strain at each depth may not be representative of the actual strain level induced by the earthquake in the soil deposit, as ratcheting effects should be taken into account (Guzel 2018). Example of a) input motion applied at bedrock and corresponding b) shear strain time history; c) stress–strain response of an advanced soil constitutive model obtained at Gauss point level from FE dynamic results mesh (modified after Guzel 2018) From the experimental point of view, further laboratory testing is needed to better understand the asymmetrical cyclic behaviour of soils at large strains (i.e. for γ higher than 1.0%) and subjected to multi-directional loading conditions (e.g. Yang et al. 2018; Sun and Biscontin 2019). This could guide further developments of advanced soil constitutive models. Initialisation of the model state variables When dynamic analyses of geotechnical BVPs are performed, the application of the seismic loading is usually preceded by the simulation of the geostatic initial stress condition. Nevertheless, the at-rest coefficient of earth pressure predicted by an advanced elasto-plastic model is not known in advance. Therefore, consistently with the model parameters calibration described before, a simplified, though realistic simulation of the geological and past stress history of the soil deposit and geotechnical structure under investigation should be conducted prior to any dynamic analysis. This is the case, for example, presented by Elia et al. 
(2011) in the evaluation of the seismic performance of the Marana Capacciotti earth dam. Before the application of the seismic loading, the initial values of the internal variables of the adopted constitutive model (i.e. initial stress state and hardening variables) have been determined from a sequence of static FE analyses reproducing the geological history of the deposit, the dam construction phases and the subsequent first reservoir impounding. At the end of the static simulations, the stress state and the values of the internal variables have been checked for consistency with those assumed in the model calibration phase. The values of the overconsolidation ratio, R, along the dam axis were in good agreement with those assumed during the model calibration (Fig. 7a). The computed values of the initial shear modulus, G0, were also found to agree well with the bender element results (Fig. 7b), thus confirming the consistency between laboratory measurements, model state variables values assumed in the calibration stage and those obtained at the end of the static FE analyses. Profiles with depth of a) overconsolidation ratio along three different verticals; b) initial shear modulus along the dam axis obtained from the initialisation of the FE model and compared with experimental data (modified after Elia et al. 2011) Finite element modelling The accuracy of the FE solution of the system of Eq. (2) is greatly controlled by the adopted time and spatial discretisation (i.e. the value of Δt used in the time stepping scheme and the dimension and type of elements, respectively). In addition, the definition of appropriate boundary conditions, including the seismic input motion applied at bedrock, is required. Model discretisation and boundary conditions When performing numerical simulations of BVPs, infinite continuous soil deposits are reduced to finite discretised domains with the adoption of some constraints along their boundaries, which allow to artificially simulate the far-field condition. Users are, therefore, required to define the appropriate extension of the numerical model, the geometrical dimensions of the finite elements and the boundary conditions to be applied. While all these aspects are well understood in the context of static simulations, the literature concerning numerical analyses in dynamic conditions is less exhaustive. The characteristic dimension of the finite elements adopted for dynamic analyses must satisfy the condition that the spacing between two consecutive nodes, Δlnode, has to be smaller than approximately one-tenth to one-eighth of the wavelength associated with the maximum frequency component, fmax, of the input wave (Kuhlemeyer and Lysmer 1973): $${\Delta l}_{node}\le {\lambda }_{\mathrm{min}}/\left(8\div 10\right)={V}_{S,\mathrm{min}}/\left(8\div 10\right){f}_{\mathrm{max}}$$ where VS,min is the lowest shear wave velocity in the soil deposit. Equation (9) has been proposed for the propagation of a single, harmonic, one-dimensional wave into an elastic material. A finer discretisation should be adopted when the mechanical behaviour is expected to be nonlinear, due to plasticity. To avoid spurious wave reflections at the vertical boundaries of symmetric numerical models, "tied-nodes" can be employed in geotechnical dynamic simulations. Their effectiveness in absorbing the energy induced by the seismic action has been demonstrated by Zienkiewicz et al. (1999). 
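The element-size criterion of Eq. (9) translates directly into a pre-processing check. The sketch below computes the maximum node spacing for a given profile; the shear wave velocity and the cut-off frequency are placeholder values, not recommendations.

```python
def max_node_spacing(vs_min, f_max, fraction=8):
    """Upper bound on node spacing from Eq. (9): one-eighth to one-tenth of the
    shortest wavelength, lambda_min = vs_min / f_max."""
    return vs_min / (fraction * f_max)

# Placeholder case: softest layer Vs = 150 m/s, input motion filtered at 15 Hz
for fraction in (8, 10):
    h = max_node_spacing(vs_min=150.0, f_max=15.0, fraction=fraction)
    print(f"lambda_min/{fraction}: maximum node spacing = {h:.2f} m")
```

In practice a finer discretisation than this elastic bound is often adopted where strong nonlinearity (and hence stiffness degradation and shorter wavelengths) is expected.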
For non-symmetric 2D and 3D problems, the hypothesis of tied displacements along the lateral boundaries needs to be abandoned. Possible alternatives are represented by viscous (Lysmer and Kuhlemeyer 1969) and free-field boundaries or by the Domain Reduction Method (DRM) (Bielak et al. 2003; Yoshimura et al. 2003; Kontoe et al. 2008, 2009). Each technique used to define the boundary conditions has its own advantages and limitations (e.g. Roesset and Ettouney 1977), but not all of them are commonly implemented in FE/FD codes (as the DRM). In 2D space, the standard viscous boundary is mechanically equivalent to a system of dashpots oriented in normal and tangential directions of the mesh boundary, such that: $$\left\{\begin{array}{c}{\sigma }_{xx}={\rho V}_{P}{v}_{x}\\ {\sigma }_{xy}=\tau ={\rho V}_{S}{v}_{y}\end{array}\right.$$ where vx, vy, σxx and τ are the normal and tangential node velocities and stresses, respectively, ρ is the mass density of the soil and VP and VS are the P- and S-wave velocities, respectively. This option is generally used for problems where the dynamic source is applied inside the mesh (e.g. vibrations induced by a generator or by pile driving). When adopting viscous boundaries, questions concerning the appropriate lateral extension of the mesh arise, as they may not be able to totally absorb the incoming seismic wave, especially for low angles of incidence (Lysmer and Kuhlemeyer 1969). For instance, Amorosi et al. (2010) have shown that the use of viscous boundaries in 2D ground response analyses requires a mesh characterised by a width to height ratio between five and eight to avoid spurious reflections in the centre of the FE model. As an alternative to employing dampers, the user can set a high value of Rayleigh damping (e.g. 25 ÷ 30%) to the finite elements closer to the vertical mesh boundaries. Again, Elia et al. (2011) have demonstrated the efficacy of this solution in imposing the far-field response when the model boundaries are sufficiently distant from the centre of the mesh. A free-field boundary consists of a one-dimensional soil column (in 2D problems) coupled to the main domain by viscous dampers. The free-field motion computed for the soil column is transferred to the numerical model by applying along the boundary the equivalent normal and tangential stresses. These are obtained from Eq. (10) using the relative particle velocities between the main grid and the column element at the same depth. This option is more suitable for seismic analysis and has been validated for 2D BVPs (e.g. Cundall et al. 1980). The effectiveness of viscous and free-field boundary conditions in the dynamic analysis of 3D cases needs, instead, to be investigated further. The dynamic input can be applied directly as a prescribed displacement or velocity time history at the base of the numerical model if a rigid bedrock condition is assumed. In this case, the bottom side of the mesh will be a non-absorbing boundary, i.e. it will reflect back into the model any outgoing wave. In contrast to a rigid base hypothesis, compliant base boundaries (Joyner and Chen 1975) are nowadays available in some commercial software to simulate a deformable (i.e. dissipating) bedrock, when the relative stiffness between the bedrock and the soil deposit (i.e. the impedance) is not infinite. With this option, the velocity record of input motion is transformed into a shear stress time history and applied to a non-reflecting (i.e. 
viscous) boundary as follows: $$\tau ={\rho }_{\mathrm{bedrock}}{V}_{\mathrm{S},\mathrm{bedrock}}\frac{1}{2}{v}_{\mathrm{outcrop}}$$ where ρbedrock and VS,bedrock are the mass density and the shear wave velocity of the bedrock, respectively, and the term ½voutcrop is one half of the velocity time history recorded at the outcrop of the bedrock. In this way, half of the input is absorbed by the viscous dashpots located along the base and half is transferred to the main domain. Equation (11) does not depend on the bedrock depth or on the thickness of the bedrock implemented in the numerical model, but it is based on the assumption of an elastic behaviour of the bedrock. The compliant base hypothesis makes the accelerograms at the bottom of the numerical model an output of the simulation, rather than an input datum. This avoids the need for a specific deconvolution of the input motion from the outcrop to the base of the FE model. This boundary condition can successfully be employed to simulate complex geometries of deformable bedrock, as shown, among others, by Falcone et al. (2018) for 3D ground response analyses aimed at microzonation studies.
Some insights on the input motion selection and scaling strategies
The correct definition of the design seismic actions, based on seismic hazard studies, is essential to fully implement the performance-based design approach proposed by code prescriptions (e.g. Eurocode 8). The majority of the research on the appropriate methods to select and scale ground motions for linear and nonlinear time history analyses has been devoted, in recent years, to structural dynamics problems more than to ground response (e.g. NIST 2011). In the earthquake geotechnical engineering field, the seismic input signals are typically scaled to the PGA (Peak Ground Acceleration) of the specific site, representing the maximum acceleration expected at the bedrock outcropping surface. Nevertheless, different ground motion modification methods might be used to adapt the selected acceleration time histories, in order to minimise the bias and reduce the number of simulations needed to obtain statistically stable and robust results. In fact, the input motion could be linearly scaled by using a suitable scale factor without altering the shape of the response spectrum. The spectral shape of the input motion can also be modified by adding wavelets to match a target spectrum. Whereas the adoption of such scaling/matching techniques is common in the structural engineering literature (Shome et al. 1998; Hancock et al. 2006; Haselton 2009; Galasso 2010; Michaud and Léger 2014), their application to geotechnical earthquake engineering problems is still limited (Amirzehni et al. 2015; Guzel et al. 2017; Guzel 2018; Elia et al. 2019). As an example, Elia et al. (2019) investigated the effect of different earthquake scaling/matching strategies on the nonlinear dynamic response of an anchored diaphragm wall supporting a deep excavation. Five scaling/matching methods were chosen to linearly or spectrally modify seven input motions, selected on the basis of the seismic hazard analysis of the site, specifically: the PGA, Sa(T1), ASCE and MSE scaling techniques and the Spectral Matching (SM) method. Two seismic intensity levels were considered. The results of the nonlinear dynamic simulations are illustrated in Fig. 8 in terms of the mean and standard deviation of the maximum horizontal displacement experienced by the wall during each set of input motions.
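As a minimal illustration of the simplest of these strategies, the snippet below linearly scales an accelerogram to a target PGA; the record and the target value are placeholders. Spectral matching, by contrast, modifies the record in the time-frequency domain (e.g. with wavelet adjustments) and is not reproduced here.

```python
import numpy as np

def scale_to_pga(acceleration, target_pga):
    """Linear scaling of an acceleration time history to a target PGA.
    The waveform shape (and hence the shape of its response spectrum) is unchanged."""
    factor = target_pga / np.max(np.abs(acceleration))
    return factor * acceleration, factor

# Placeholder record: synthetic noise standing in for a natural accelerogram (values in g)
rng = np.random.default_rng(0)
record = 0.08 * rng.standard_normal(2000)

scaled, factor = scale_to_pga(record, target_pga=0.25)
print(f"scale factor = {factor:.2f}, scaled PGA = {np.max(np.abs(scaled)):.3f} g")
```

Because only the amplitude changes, the frequency content of the record is untouched, which is precisely why PGA scaling alone can leave a large record-to-record variability in the nonlinear response, as discussed below.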
The statistical analysis indicated that, for low intensity input motions, the Sa(T1) scaling and spectral matching methods are characterised by the lowest mean and standard deviation of the output, while higher variability is obtained using the MSE, ASCE and PGA scaling techniques. When higher seismic intensity levels are considered, the best performance is provided by the MSE scaling and SM techniques. Simply scaling the ground motion records at the same PGA does not return results that are statistically matched with those obtained with more advanced scaling strategies. The PGA is not a good indicator of the frequency content of the ground motion and PGA scaling should be avoided in nonlinear dynamic analyses (e.g. Amirzehni et al. 2015; Elia et al. 2019). The Sa(T1) scaling technique does return good statistical results when a moderate elongation of the fundamental period of the system is induced by soil nonlinearity. The variability of the dynamic response is, instead, significantly reduced when the selection strategy seeks the compatibility with the target response spectrum over a range of periods (e.g. MSE scaling), especially for high seismic intensity levels. Although the full spectral matching strategy implies a fictitious modification of the frequency content of natural records (and, therefore, it is not always accepted in the national and international code provisions), statistically robust output can be obtained by using spectrally matched accelerograms (e.g. Amirzehni et al. 2015; Elia et al. 2019). Maximum horizontal displacement of the wall obtained with different scaling/matching techniques for a) low intensity; b) high intensity input motions (modified after Elia et al. 2019) Important advances have been made during recent years in the field of geotechnical earthquake engineering and soil dynamics modelling, although some challenges remain to be addressed. The paper has shown, with particular reference to advanced numerical methods for the integration of nonlinear constitutive equations in seismic design of geotechnical structures, that: the effect of the Rayleigh damping on FE dynamic results can be fully regulated by an appropriate selection of the controlling frequencies and by the evaluation of its real amount in the post-seismic stage of the simulations; the user of a FE code should be able to fully control and distinguish between the different sources of dissipation which can be introduced in the simulation (i.e. 
viscous, numerical and hysteretic damping); the main dissipation of the energy introduced by the seismic action should be guaranteed by the material (hysteretic) damping provided by the soil constitutive model employed in the numerical simulations, thus relegating the Rayleigh and numerical damping to a small contribution; the adoption of effective stress based soil constitutive laws, able to capture the material cyclic/dynamic behaviour observed in-situ and in the laboratory, should be promoted in the engineering practice by a clear calibration and initialisation strategy of their parameters and state variables; the effectiveness of the lateral and bottom boundary conditions should always be tested in detail, especially when 3D dynamic analyses are performed, through a direct comparison with the free-field solution; the influence of the scaling/matching techniques of the input motions on the results of nonlinear dynamic analyses of geotechnical BVPs should be investigated further in order to reduce the number of simulations needed to obtain statistically stable and robust outputs. In conclusion, the paper has presented a review of the key ingredients governing the predictive capabilities of nonlinear FE schemes implementing sophisticated elasto-plastic soil models. It is hoped that this will provide a useful framework for their implementation by non-expert users, as an on-going transfer of research knowledge into practice represents a significant requisite to support performance-based earthquake engineering and quantitative seismic risk assessments. The datasets generated during and/or analysed during the current study are available in previously published papers by the authors. Amirzehni E, Taiebat M, Finn WDL, Devall RH (2015) Ground motion scaling/matching for nonlinear dynamic analysis of basement walls. 11th Canadian Conference on Earthquake Engineering. Victoria, BC, Canada Amorosi A, Elia G (2008) Analisi dinamica accoppiata della diga Marana Capacciotti. Rivista Italiana Di Geotecnica 4:78–96 (in Italian) Amorosi A, Boldini D, Elia G (2010) Parametric study on seismic ground response by finite element modelling. Comput Geotech 37(4):515–528 Banerjee PK, Butterfield R (1981) Boundary Element Methods in Engineering Science. McGraw-Hill Bathe KJ, Wilson EL (1976) Numerical Methods in Finite Element Analysis. Prentice Hall, Upper Saddle River Bielak J, Loukakis K, Hisada Y, Yoshimura C (2003) Domain Reduction Method for three-dimensional earthquake modeling in localized regions, Part I: Theory. Bull Seism Soc Am 93(2):817–824 Biot MA (1941) General theory of three-dimensional consolidation. J Appl Phys 12:155–164 Cabangon LT, Elia G, Rouainia M (2019) Modelling the transverse behaviour of circular tunnels in structured clayey soils during earthquakes. Acta Geotech 14:163–178 Chan AHC (1988) A unified finite element solution to static and dynamic problems of geomechanics. Ph.D. Thesis. Swansea University, UK Charney FA (2008) Unintended consequences of modeling damping in structures. J Struct Eng 134(4):581–592 Clough R, Penzien J (2003) Dynamics of Structures. Computers and Structures Inc, Berkeley Cundall PA, Strack ODL (1979) A discrete numerical model for granular assemblies. Géotechnique 29(1):47–65 Cundall PA, Hansteen H, Lacasse S, Selnes PB (1980). NESSI - Soil Structure Interaction Program for Dynamic and Static Problems, Norwegian Geotechnical Institute, Report 51508–9 Elgamal A, Yang Z, Lu J (2006) Cyclic1D: A Computer Program for Seismic Ground Response. Report No. 
SSRP-06/05, Department of Structural Engineering, University of California, San Diego, La Jolla, CA Elia G, Amorosi A, Chan AHC, Kavvadas M (2011) Fully coupled dynamic analysis of an earth dam. Géotechnique 61(7):549–563 Elia G (2015) Site response for seismic hazard assessment. In: Encyclopedia of Earthquake Engineering - Springer (doi:https://doi.org/10.1007/978-3-642-36197-5_241-1) Elia G, Rouainia M (2016) Investigating the cyclic behaviour of clays using a kinematic hardening soil model. Soil Dyn Earthq Eng 88:399–411 Elia G, di Lernia A, Rouainia M (2019) Ground motion scaling for the assessment of the seismic response of a diaphragm wall. 7th International Conference on Earthquake Geotechnical Engineering (VII ICEGE), Roma, Italy Elia G, Rouainia M, di Lernia A, D'Oria AF (2021) Assessment of damping predicted by kinematic hardening soil models during strong motions. Géotech Lett 11(1):48–55 Falcone G, Boldini D, Amorosi A (2018) Site response analysis of an urban area: A multi-dimensional and non-linear approach. Soil Dyn Earthq Eng 109:33–45 Galasso C (2010) Consolidating record selection for earthquake resistant structural design. PhD Thesis. Università degli Studi di Napoli "Federico II", Italy Guzel Y, Elia G, Rouainia M (2017) The effect of input motion selection strategies on nonlinear ground response predictions. COMPDYN 2017, 6th ECCOMAS Thematic Conference on Computational Methods in Structural Dynamics and Earthquake Engineering, Rhodes Island, Greece Guzel Y (2018) Influence of input motion selection and soil variability on nonlinear ground response analyses. Ph.D. Thesis. Newcastle University, UK Hall JF (2006) Problems encountered from the use (or misuse) of Rayleigh damping. Earthq Eng Struct Dyn 35(5):525–545 Hancock J, Watson-Lamprey J, Abrahamson NA, Bommer JJ, Markatis A, Mccoyh E, Mendis R (2006) An improved method of matching response spectra of recorded earthquake ground motion using wavelets. J Earthq Eng 10(sup001):67–89 Haselton CB (2009) Evaluation of ground motion selection and modification methods: predicting median interstory drift response of buildings. PEER Report, Berkeley Hashash YMA, Groholski DR, Phillips CA, Park D, Musgrove M (2011) DEEPSOIL 5.0, User Manual and Tutorial Hudson M, Idriss IM, Beikae M (1994) QUAD4M: A Computer Program to Evaluate the Seismic Response of Soil Structures using Finite Element Procedures and Incorporating a Compliant Base. University of California, Davis, Center for Geotechnical Modeling Idriss IM, Lysmer J, Hwang R, Seed HB (1973) QUAD-4: a computer program for evaluating the seismic response of soil structures by variable damping finite element procedures. Report no EERC 73–16, Earthquake Engineering Research Center, University of California, Berkeley Itasca Consulting Group Inc. (2016) FLAC – Fast Lagrangian Analysis of Continua, Ver. 8.0. Minneapolis: Itasca Joyner WB, Chen ATF (1975) Calculation of nonlinear ground response in earthquakes. Bull Seismol Soc Am 65:1315–1336 Katona MC, Zienkiewicz OC (1985) A unified set of single step algorithms. III. The beta-m method, a generalization of the Newmark scheme. Int J Numer Methods Eng 21(7):1345–1359 Kavvadas M, Amorosi A (2000) A constitutive model for structured soils. Géotechnique 50(3):263–273 Kontoe S (2006) Development of time integration schemes and advanced boundary conditions for dynamic geotechnical analysis. Ph.D. Thesis. 
Imperial College London, UK Kontoe S, Zdravkovic L, Potts DM (2008) The domain reduction method for dynamic coupled consolidation problems in geotechnical engineering. Int J Numer Anal Methods Geomech 32(6):659–680 Kontoe S, Zdravkovic L, Potts DM (2009) An assessment of the domain reduction method as an advanced boundary condition and some pitfalls in the use of conventional absorbing boundaries. Int J Numer Anal Methods Geomech 33(3):309–330 Kuhlemeyer RL, Lysmer J (1973) Finite element method accuracy for wave propagation problems. J Soil Mech Found Div ASCE 99(5):421–427 Kwok AOL, Stewart JP, Hashash YMA, Matasovic N, Pyke R, Wang Z, Yang Z (2007) Use of exact solutions of wave propagation problems to guide implementation of nonlinear seismic ground response analysis procedures. J Geotech Geoenviron Eng ASCE 133(11):1385–1398 Lemos JV (2012) Explicit codes in geomechanics – FLAC, UDEC and PFC. Chapter 16 of Innovative Numerical Modelling in Geomechanics, CRC Press, Taylor & Francis, London, pp 299–315 Lysmer J, Kuhlemeyer RL (1969) Finite dynamic model for infinite media. ASCE EM 90:859–887 Mazzoni S, McKenna F, Fenves GL (2006) OpenSees command language manual. University of California, Pacific Earthquake Engineering Center, Berkeley (CA) Michaud D, Léger P (2014) Ground motions selection and scaling for nonlinear dynamic analysis of structures located in Eastern North America. Can J Civ Eng 41(3):232–44. NRC Research Press Nghiem HM, Chang N-Y (2019) A new viscous damping formulation for 1D linear site response analysis. Soil Dyn Earthq Eng 127:105860 NIST (2011) Selecting and scaling earthquake ground motions for performing response-history analyses. NIST GCR 11–917–15, National Institute of Standards and Technology, Gaithersburg, MD O'Sullivan C (2017) Particulate Discrete Element Modelling: A Geomechanics Perspective. 1st Edition, CRC Press Park D, Hashash YMA (2004) Soil damping formulation in nonlinear time domain site response analysis. J Earthq Eng 8(2):249–274 Phillips C, Hashash YMA (2009) Damping formulation for nonlinear 1D site response analyses. Soil Dyn Earthq Eng 29:1143–1158 Plaxis Connect Edition V20 (2019) User manuals. Plaxis bv, P.O. Box 572, 2600 AN Delft, Netherlands Potts DM, Zdravković L (1999) Finite Element Analysis in Geotechnical Engineering: Theory, Thomas Telford Roesset JM (1977) Soil amplification of earthquakes. In: Desai C (ed) Numerical Methods in Geotechnical Engineering. McGraw-Hill, New York, pp 639–682 Roesset JM, Ettouney MM (1977) Transmitting boundaries: A comparison. Int J Numer Anal Methods Geomech 1(2):151–176 Roscoe KH, Burland JB (1968) On the generalized stress–strain behaviour of ''wet" clay. Engineering plasticity. Cambridge University Press, Cambridge, pp 535–609 Rouainia M, Muir Wood D (2000) A kinematic hardening constitutive model for natural clays with loss of structure. Géotechnique 50(2):153–164 Schnabel PB, Lysmer J, Seed HB (1972) SHAKE: a computer program for earthquake response analysis of horizontally layered sites. Report no EERC72–12, Earthquake Engineering Research Center, University of California, Berkeley Seidalinov G, Taiebat M (2014) Bounding surface SANICLAY plasticity model for cyclic clay behavior. Int J Numer Anal Methods Geomech 38:702–724 Shinokawa T, Mitsui Y (1993) Application of boundary element method to geotechnical analysis. Comput Struct 47(2):179–187 Shome N, Cornell CA, Bazzurro P, Carballo JE (1998) Earthquakes, records, and nonlinear responses. 
Earthq Spectra 14(3):469–500 Smith GD (1985) Numerical solution of partial differential equations: finite difference methods. Oxford University Press Sun M, Biscontin G (2019) The Development of Shear Strain in Undrained Multi-Directional Simple Shear Tests. 7th International Conference on Earthquake Geotechnical Engineering (VII ICEGE), Roma, Italy Venturini WS (1983) Boundary Element Method in Geomechanics. Springer Woodward PK, Griffiths DV (1996) Influence of viscous damping in the dynamic analysis of an earth dam using simple constitutive models. Comput Geotech 19(3):245–263 Yang M, Seidalinov G, Taiebat M (2018) Multidirectional cyclic shearing of clays and sands: Evaluation of two bounding surface plasticity models. Soil Dyn Earthq Eng 124:230–258 Yoshimura C, Bielak J, Hisada Y, Fernández A (2003) Domain Reduction Method for three-dimensional earthquake modeling in localized regions, Part II: Verification and applications. Bull Seism Soc Am 93(2):825–840 Zienkiewicz OC, Chan AHC, Pastor M, Schrefler BA, Shiomi T (1999) Computational Geomechanics (with special reference to earthquake engineering). Wiley & Sons, Chichester Zienkiewicz OC, Taylor RL, Nithiarasu P, Zhu JZ (1977) The Finite Element Method. McGraw-Hill Open access funding provided by Politecnico di Bari within the CRUI-CARE Agreement. The authors declare that no funds, grants, or other support were received during the preparation of this manuscript. Department of Civil, Environmental, Land, Building Engineering and Chemistry (DICATECh), Technical University of Bari (Politecnico Di Bari), via Orabona 4, 70125, Bari, Italy Gaetano Elia School of Engineering, Newcastle University, Newcastle Upon Tyne, NE1 7RU, UK Mohamed Rouainia All authors contributed to the study conception and design. Material preparation, data collection and analysis were performed by both authors. The first draft of the manuscript was written by Gaetano Elia and all authors commented on previous versions of the manuscript. All authors read and approved the final manuscript. Correspondence to Gaetano Elia. The authors have no relevant financial or non-financial interests to disclose. Elia, G., Rouainia, M. Advanced dynamic nonlinear schemes for geotechnical earthquake engineering applications: a review of critical aspects. Geotech Geol Eng 40, 3379–3392 (2022). https://doi.org/10.1007/s10706-022-02109-6 Issue Date: July 2022 Soil constitutive laws Nonlinear dynamic analysis Fully coupled formulation
Space Exploration Stack Exchange
Is it better to develop more powerful rockets instead of seeking and developing new technologies? [closed]
I have a question about the space industry and era. I'm aiming to be a rocket scientist, but I think it will be very hard to reach distant planets with the current rocket technology, which relies on Newton's laws and conservation of momentum. Filling 80%-90% of a rocket's weight with liquid hydrogen and liquid oxygen and leaving only a small percentage for payload seems to me a hard and inefficient way to reach much more distant planets or stars. If you think that's not right, please explain it to me; I would be very happy. Because of this, I am thinking of involving Einstein's unified field theory and general relativity, and of combining these theories with quantum field theories, as a more efficient way to manage time and gravity across the whole universe. That way, there would be a technology that doesn't obey the third dimension's rules (Newton's way); we could manage the fourth dimension (time, Einstein's way). I want to devote my life to developing such a technology and a new vision for these problems. Do you think this is good and possible?
spacecraft physics technology
ICCQBE
There are middle ways -- technologies like ion engines, lightsails, mass drivers and various forms of nuclear rocket potentially offer travel around the solar system more quickly and conveniently than chemical fuelled rockets, without requiring completely new physics. – Steve Linton
Conservation of momentum is a fundamental part of both quantum mechanics and relativity. – WaterMolecule
There's no reason to wait for new understandings of physics to come before we try exploring the solar system. Travel to the outer planets is within our physical understanding, if not our engineering. Have you heard of the [Alcubierre drive](https://en.wikipedia.org/wiki/Alcubierre_drive)? Something like that, if possible, would be what we want for travel between star systems. – Snoopy
Welcome to Space! Your question appears to be primarily opinion-based, and there may be no "correct" answer. I'm glad to hear of your enthusiasm; studying aeronautical engineering may help you to invent the next space technology! – DrSheldon
@uhoh eh sure, why not.
Refined from my general comment: We don't need new physics knowledge to explore our solar system, whether the inner or outer planets. There's been considerable theoretical work on engines suitable for everything from Earth to orbit to Pluto and beyond. What you're referring to with most of a spacecraft's mass being propellant is the mass ratio: MR = (1 + lambda) / (epsilon + lambda), where epsilon is the mass of the structure divided by the mass of the structure with propellant, and lambda is the mass of the payload divided by the mass of the propellant and structure. The larger the mass ratio, the more payload you get for how much mass you're sending. So, how do we maximize the mass ratio? We choose engines with a high efficiency, or Isp (specific impulse). Basically, how effective an engine is at using its propellant. An engine such as a Hall effect thruster may have an Isp in the thousands of seconds, while an engine such as the Saturn V's F-1 has an Isp of about 304 seconds.
The tradeoff there is that a Hall effect thruster has a pitiful thrust-to-weight ratio, while the F-1 has a high TWR. When it comes to interplanetary travel, spacecraft have multiple factors to consider: launch vehicle, type of propellant, type of engine, destination, cost, availability of solar or nuclear power, and much more. In general, though, unmanned craft can afford long transit times with very efficient engines, while manned spacecraft should have higher thrust to shorten transits. Now, when it comes to interstellar travel, there are a number of interesting options. One is to take a large array of lasers, propel a lightsail to a significant fraction of the speed of light, and send it to another star system - with the proviso that unless it has onboard (and very powerful) engines, it will be unable to reverse acceleration in the target system and be captured by the star's gravity. Antimatter, if we could make it in sufficient quantities, would also enable interstellar travel in something less than a human lifetime. But if you want to go much, much faster, you would need something like the Alcubierre drive to make the trip, and the physics there is very theoretical indeed - we don't know how to make the exotic matter required, and that's one of the least of its issues. I don't see humans traveling to another solar system in this century as being probable, though certainly there will be humans among the outer planets once we build spacecraft such as argon-fueled nuclear electric ships. You may find this table of engines useful and informative. The site itself is a gold mine of space-related information, so I hope it proves intriguing and helpful. – Snoopy
I had a hunch you had more to say, but... wow!
@uhoh Given the subjective manner of his question, I figured I should try to answer it in a way that would give him a basis for more objective future questions.
The problem is, what if you don't just want to explore the solar system, but want to explore and travel around it in reasonable times, say under 30 Ms (~1 Earth year) to get to every destination including Pluto? That would still seem to require some considerable new improvements in propulsion technology, even if not interstellar-grade. E.g. just estimating roughly by distance, with Pluto at ~6500 Gm and Mars at ~225 Gm, that roughly entails a 1 megasecond transit to Mars as the ideal, maybe 2 if we let things be a bit less linear. Today's tech takes around 20. – The_Sympathizer
Thus necessitating the development of technologies to reach speeds of around 150-300 km/s or so, which is still very much beyond the reach of chemical rockets.
ADD: Oops, just screwed that up - $t_c$ should be $1\ \mathrm{Ms}$, not 2... I used the total time and that $t_c$ is cruise time! Thus 300 km/s, 30 kN ... OUCH!
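To put some rough numbers behind the answer's point about Isp and propellant mass, here is a small sketch based on the ideal (Tsiolkovsky) rocket equation; note that the initial-to-final mass ratio it uses coincides with the MR = (1 + lambda)/(epsilon + lambda) definition quoted above. The delta-v and the engine list are illustrative assumptions, not mission data.

```python
import math

G0 = 9.80665  # m/s^2, standard gravity

def propellant_fraction(delta_v, isp):
    """Fraction of initial mass that must be propellant for a given delta-v
    (ideal rocket equation, single stage, no losses beyond those folded into delta_v)."""
    mass_ratio = math.exp(delta_v / (isp * G0))   # m_initial / m_final
    return 1.0 - 1.0 / mass_ratio

# Illustrative delta-v of 9.5 km/s (roughly Earth surface to low orbit, losses included).
# The Hall-thruster row only illustrates propellant mass: its thrust is far too low to launch.
for name, isp in [("F-1 class (kerolox)", 304), ("hydrolox upper stage", 450), ("Hall thruster", 2000)]:
    frac = propellant_fraction(9_500.0, isp)
    print(f"{name:22s} Isp = {isp:5d} s -> propellant fraction = {frac:.2f}")
```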
Limit of convergent series
In general, there is no process that gives you the limit of any convergent sequence. That does not mean, however, that limits cannot be found. For example, take the. Approximating the Sum of a Convergent Series - Academics. How to Determine Convergence of Infinite Series. If the limit of the terms of a series is 0, that does not imply that the series converges. We must do further checks. 2. Divergent Series: why 1+2+3..., but such arguments can be dismissed on the grounds that even for convergent series rearranging. The limit as x → 1− of 1−x^m. This limit is certainly zero since the numerator is constant and the. Is the series $\sum_{n=1}^{\infty} \frac{n!}{n^n}$ absolutely convergent, conditionally convergent, or divergent? The sum of an infinite series - mathcentre.ac.uk. My Sequences & Series course: https://www.kristakingmath.com/sequences-and-series-course Learn how to find the limit of a convergent sequence. Since we've. Cauchy criteria - Encyclopedia of Mathematics. The sum of an infinite series, mc-TY-convergence-2009-1: In this unit we see how finite and infinite series are obtained from. a real limit. If the series is X. 5. Taylor and Laurent series. Complex sequences and series. Sum of infinite divergent series. for the limit of the partial sums. As such. series that diverge to $\infty$ yields an absolutely convergent series. Calculating the sum of series online - OnSolver.com. Uniform convergence and its consequences. the derivative of the original limit? For the geometric series above,. be a convergent series of positive constants. Sum of series online. A necessary condition for the convergence of a numerical series is that the limit of the common term of the series is equal to zero. A series is convergent if the sequence of its partial sums $\{S_1, S_2, S_3, \dots\}$ tends to a limit; that means that the partial sums become closer and closer to a given number when the number of their terms increases. Section 3 Sequences and Limits - School of. Convergence and the Limit of Complex Sequences. Series. If the limit of a sequence is 0, does the series converge? We explain Limits of Convergent Series with video tutorials and quizzes, using our Many Ways(TM) approach from multiple teachers. This lesson teaches students to find. The Common Series Tests. Divergence Test: if the limit of a[n] is not zero, or does not exist, the series diverges. Return to the Series, Convergence, and Series Tests starting page. Series and Convergence. We say that a series converges if such a limit exists and is finite, and diverges otherwise. Here are three examples, the first series. Chapter 1. Series and sequences. This series converges to the limit 2. The main problem with conditionally convergent series is that if the terms. ON THE INTRODUCTION OF CONVERGENCE FACTORS INTO SUMMABLE SERIES AND SUMMABLE INTEGRALS. In the simple case of a convergent series,. If the limit $\lim_n S_n$. What is the necessary condition for convergence of. A series is said to converge if there is some limit L such that for. then every convergent sequence (or series). Convergence tests for infinite series. When this limit is strictly less than 1, the series converges absolutely. Another important test is the Ratio test. Section 3 Sequences and Limits. Definition: A sequence of real numbers is an infinite ordered list $a_1, a_2, a_3, \dots$. convergent limit 2, (iii) 3+2,3.
Absolutely convergent series in the canonical inductive limits Double Sequences and Double Series Eissa D. Habil. • Uniform convergence and double limits. • Subsequences of double sequences and their convergence. Does it converge or diverge? If it converges, find its Infinite Sequence and Series | Udemy Example. The decimal representation of a real number is a convergent infinite series. For example, Repeating decimals represent rational numbers.The meanings of the terms "convergence" and "the limit of a sequence". 3. The notion of recursive sequences. CHAPTER 12 INFINITE SEQUENCES AND SERIES.So the limit as n goes to infinity of 10/(n+1) can be found by dividing by n, and taking the limit of the result. And the limit of n as it goes to infinity of (10/n)/(1+1/n)=0/(1+0)=0. Since this is less than one, the Ratio Test says that the series (-10)n/n! is absolutely convergent, and therefore convergent.Let us mention that every sequence of real numbers which is summable is also convergent. convergence and the limit of complex. 2 (9) If for every n holds 0.If the limit of a sequence is 0, does the series converge? Sign up with Facebook or Sign up manually. Already have an account?.Define convergence. convergence. Also called convergent evolution. maths the property or manner of approaching a finite limit, esp of an infinite series:.Convergence Tests for Infinite Series. we review some of the most common tests for the convergence of an infinite series. The series $\sum\limits_{k=1. Math 115 HW #4 Solutions - Colorado State University Dvd alexandre pires mais alem download avi Fleet woven 1000 series ii review Past cast members of dancing with the stars The man your man could smell like actor 2009 oscar nominees best actress
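The ratio-test example above is easy to check numerically. Below is a minimal sketch (my own illustration, not code from any of the quoted sources): it sums the first terms of $\sum_{n\ge 0} (-10)^n/n!$ and compares the partial sums with $e^{-10}$, the value this particular series converges to.

```python
import math

# Numerically illustrate the ratio-test example quoted above:
# the series sum_{n>=0} (-10)^n / n! converges absolutely,
# since its term ratio |a_{n+1}/a_n| = 10/(n+1) tends to 0 < 1.
def partial_sum(num_terms):
    return sum((-10.0) ** n / math.factorial(n) for n in range(num_terms))

for num_terms in (10, 20, 40, 60):
    print(num_terms, partial_sum(num_terms))

# The partial sums settle near exp(-10) ~= 4.54e-5, the limit of the series.
print("exp(-10) =", math.exp(-10))
```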
CommonCrawl
Language Modeling with Reduced Densities Today I'd like to share with you a new paper on the arXiv—my latest project in collaboration with mathematician Yiannis Vlassopoulos (Tunnel). To whet your appetite, let me first set the stage. A few months ago I made a 10-minute introductory video to my PhD thesis, which was an investigation into mathematical structure that is both algebraic and statistical. In the video, I noted that natural language is an example of where such mathematical structure can be found. Language is algebraic, since words can be concatenated to form longer expressions. Language is also statistical, since some expressions occur more frequently than others. As a simple example, take the words "orange" and "fruit." We can stick them together to get a new phrase, "orange fruit." Or we could put "orange" together with "idea" to get "orange idea." That might sound silly to us, since the phrase "orange idea" occurs less frequently in English than "orange fruit." But that's the point. These frequencies contribute something to the meanings of these expressions. So what is this kind of mathematical structure? As I mention in the video, it's helpful to have a set of tools to start exploring it, and basic ideas from quantum physics are one source of inspiration. I won't get into this now—you can watch the video or read the thesis! But I do want to emphasize the following: In certain contexts, these tools provide a way to see that statistics can serve as a proxy for meaning. I didn't explain how in the video. I left it as a cliffhanger. But I'll tell you the rest of the story now. As with many good things in life, it begins with the Yoneda lemma—or rather, the Yoneda perspective. The Yoneda Perspective Let's think back to language for a moment. To understand the meaning of the word "orange," for instance, it's helpful to know something about how that word fits into language. We noted that "orange idea" is not a meaningful expression in English, although "orange fruit" is. So the meaning of "orange" is somehow built into the network of ways that it fits into other expressions. I have the following picture in mind, where an arrow indicates when one expression is contained in another. This is very much like the Yoneda lemma in category theory, which informally says that a mathematical object is completely determined by the network of relationships that object has with other objects in its environment. I like to call this maxim the "Yoneda Perspective" and have blogged about it before. And it's the same perspective we're taking above. Want to distinguish between meanings of words? Then look at the network of ways they relate to other words. But actually... this isn't the whole story. It's merely the algebraic part. There's also statistics! In other words, it's not enough to know that "orange" can go with "fruit" or with "chrysanthemum" or with "safety vest." There's additional information to be aware of, namely the statistics of these expressions. So intuitively, we'd like to decorate, or enrich, the network with something like conditional probabilities. And here comes the best part. ‍This is more than just intuition. ‍This idea can be made precise! And that's what our new paper is about. We use ideas from linear algebra and category theory to anchor this intuition on firmer ground. As a bonus, we're also able to model a preliminary form of hierarchy in language. There are a few other aspects to the paper, but these are the ideas I'd like to emphasize here. 
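To make the idea of a probability-decorated network concrete, here is a small sketch. It is my own toy illustration, not code from the paper: given made-up phrase counts, it lists the phrases containing a chosen word together with the empirical conditional probability of each one, which is exactly the kind of label we want to attach to the arrows in the picture described above.

```python
from collections import Counter

# Toy corpus of phrases with raw counts (made-up numbers, purely for illustration).
corpus = Counter({
    "orange fruit": 7,
    "small orange": 4,
    "ripe small orange": 2,
    "orange idea": 0,          # essentially never occurs
    "orange safety vest": 1,
})

def containment_network(word, corpus):
    """Phrases containing `word`, labeled with empirical conditional probabilities."""
    total = sum(c for phrase, c in corpus.items() if word in phrase.split())
    return {phrase: c / total
            for phrase, c in corpus.items()
            if word in phrase.split() and c > 0}

print(containment_network("orange", corpus))
# e.g. {'orange fruit': 0.5, 'small orange': ~0.29, 'ripe small orange': ~0.14, ...}
```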
Let's look at some of the details.
Diving Deeper
We begin by modeling expressions in language as sequences of symbols from some finite set $X$. For simplicity, we start with the set $X^N$ of all sequences of a fixed length $N$ and consider any joint probability distribution $\pi\colon X^N\to\mathbb{R}$ on that set. Using these ingredients, we then describe a recipe for assigning to any sequence $s$ of length at most $N$ a special kind of linear operator $\rho_s$. We describe a way of assigning to any word or expression $s$ in a language a particular linear operator $\rho_s$. This operator is jam-packed with information. We obtain the operator $\rho_s$ by passing from classical probability theory to quantum probability theory in a way discussed previously on this blog and in Chapter 3 of my thesis. Briefly, we use the joint distribution $\pi$ to define a rank-1 density operator $\rho$; that is, a self-adjoint, positive semidefinite operator with trace 1. Then we compose $\rho$ with a standard basis vector representation ("one-hot encoding") of an expression $s$, and finally we apply the quantum analogue of marginalization, namely the partial trace. The process of marginalizing is a bit like reducing your attention to a smaller subsystem, and so the resulting operator $\rho_s$ is called a reduced density operator. (We have to normalize to ensure it has unit trace, but that's a small detail.) The matrix representation for $\rho_s$ is quite illuminating. We show that its diagonal entries are conditional probabilities that $s$ will be followed by other expressions in the language! To illustrate, we include several easy, hands-on examples* as well as pictures. For instance, here's a tensor network diagram to illustrate the passage from $\pi$ to $\rho$ to $\rho_s$ just described. Now, I claimed that $\rho_s$ is jam-packed with information. Indeed, it knows a lot about the "network of ways that $s$ fits into other expressions in the language." From the perspective of the Yoneda lemma, the "meaning" of the word $s$ is neatly packaged into the linear operator $\rho_s$. The precise math statement behind this is as follows. We show that the operator $\rho_s$ decomposes as a weighted sum of operators, one operator $\rho_{s'}$ for every expression $s'$ that contains $s$, where the weights are conditional probabilities $\pi(s'|s)$. As an example, think back to the network of the word "orange" illustrated above. Now imagine replacing every expression (like "ripe small orange") with a matrix. Then the matrix for "orange" is equal to the sum of all the other matrices, after multiplying each matrix by a particular probability. So these operators $\rho_s$ capture something of the environment—and hence the meanings—of the words $s$ that they represent! In some cases, we prefer to work with an "unnormalized" version, $\hat\rho_s$, whose trace may be less than one. Doing so allows us to see that our passage $s\mapsto\hat\rho_s$ fits neatly into the world of category theory. In particular, we show that the assignment $s\mapsto\hat\rho_s$ is a functor between categories enriched** over probabilities. That's right. We define a category for language and a category for the linear operators that represent language. We incorporate statistics by enriching** these categories over probabilities (explained below), and we show that the assignment $s\mapsto\hat\rho_s$ respects this categorical structure. This might sound grandiose, but it's quite simple.
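Before the categorical packaging, here is a minimal NumPy sketch of the operator recipe just described. It is my own simplification, not the paper's code: I take a four-word alphabet, sequences of length N = 2, a made-up joint distribution, and an expression s consisting of a single first word, so that the reduced density acts on the space of possible continuations. The diagonal of the normalized operator then carries the conditional probabilities of what can follow s, and a quick eigenvalue and trace check confirms it really is a density operator.

```python
import numpy as np

# Toy alphabet and a made-up joint distribution pi on sequences of length N = 2.
alphabet = ["orange", "fruit", "idea", "small"]
idx = {w: i for i, w in enumerate(alphabet)}

pi = np.zeros((4, 4))
pi[idx["orange"], idx["fruit"]] = 0.40   # "orange fruit"
pi[idx["small"],  idx["orange"]] = 0.30  # "small orange"
pi[idx["orange"], idx["idea"]]   = 0.05  # "orange idea" (rare)
pi[idx["fruit"],  idx["idea"]]   = 0.25
assert np.isclose(pi.sum(), 1.0)

# psi has sqrt(pi) amplitudes; rho = |psi><psi| is a rank-1 density operator.
psi = np.sqrt(pi)                        # axis 0: first slot, axis 1: second slot

def reduced_density(first_word):
    """Condition on the first slot being `first_word`; keep the continuation slot."""
    psi_s = psi[idx[first_word], :]      # amplitudes of the possible continuations
    rho_hat = np.outer(psi_s, psi_s)     # unnormalized reduced density operator
    return rho_hat / np.trace(rho_hat)   # normalize to unit trace

rho_orange = reduced_density("orange")
print(dict(zip(alphabet, np.diag(rho_orange))))
# Diagonal = pi(next word | "orange"): fruit ~ 0.889, idea ~ 0.111, others 0.
print(np.all(np.linalg.eigvalsh(rho_orange) >= -1e-12),
      np.isclose(np.trace(rho_orange), 1.0))   # positive semidefinite, trace 1
```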
Our categories are preordered sets; that is, sets equipped with a relation $\leq$ that is reflexive and transitive but not necessarily antisymmetric. And our "enrichment" is just a fancy way of labeling, or decorating, the $\leq$ with probabilities. The assignment $s\mapsto\hat\rho_s$ is simply a function that respects these labels. In more detail, we show the following three things:
Language forms a category enriched over probabilities. We define a category whose objects are valid expressions in language—that is, valid sequences from the alphabet $X$—and we write $s\leq s'$ if the sequence $s'$ contains $s$ as a subsequence. One checks that $\leq$ is a preorder, and so the $\leq$ are the morphisms of the category. For instance, we have that orange $\leq$ small orange $\leq$ ripe small orange, while orange $\not\leq$ cold iced tea. But we're not merely interested in whether or not "orange" is contained in "small orange fruit," we're interested in the probability! So we then replace the hom-set $\text{hom}(s,s')$ with a hom-probability, which we choose to be the conditional probability $\pi(s'|s)$. This gives us a category enriched over the unit interval $[0,1]$, viewed as a monoidal category. See Proposition 5.1.
Operators associated to expressions also form a category enriched over probabilities. Because our operators $\hat\rho_s$ are positive semidefinite, they form a preordered set—and hence a category—under the Loewner order. As an immediate consequence, we can model a simple kind of hierarchy in language. For instance, the operators associated to orange and small orange are comparable under the Loewner order, as one finds that $\hat\rho_{\text{orange}}\geq\hat\rho_{\text{small orange}}$. This models the intuitive idea that orange is a more abstract concept than something that's both small and orange. But again, we're really interested in the probabilities associated to this, so we proceed to enrich this category over $[0,1]$ in a similar way. See Proposition 5.2.
The assignment $s\mapsto \hat\rho_s$ is an enriched functor between enriched categories. Finally, we show that the function $s\mapsto \hat\rho_s$ is compatible with probabilities in a sensible way—in other words, we show that the main construction of the paper is actually a functor between categories enriched over $[0,1]$. See Theorem 5.1.
In the end, we accomplished what we set out to do. Motivated by the Yoneda perspective, we use probabilities to "decorate" the network of ways a word fits into other expressions in language. We show that this algebraic and statistical information is packaged neatly into a linear operator associated to that word. And we conclude that the recipe to do so has a nice, categorical explanation. In closing, notice that the simple form of hierarchy we're able to model is totally dependent on the sequential structure of language. So while we can represent the intuitive idea that "orange" is a more abstract notion than "small orange," we don't have a way to conclude that "color" is more abstract than "orange," simply because the word "color" doesn't contain "orange" as a subsequence. So there's more work to be done. What's more, in practice, the linear maps $\hat\rho_s$ will operate on ultra-large-dimensional spaces, and so storing and manipulating them on a computer won't be feasible for real-world datasets. But the ideas we present in the paper are naturally adaptable to tensor network techniques, which are well-suited for dealing with operators on large-dimensional spaces.
In fact, tensor networks also complement our algebraic and statistical perspective of language in other nice ways, and I'm hopeful they'll be useful in exploring more complex concept hierarchies later on. To be continued! *Heads up: In this blog post, all of my examples involve the word "orange." We switch it up in the paper. There, most of our examples involve the word "dog." **I haven't properly blogged about enriched category theory on Math3ma before, but I might get around to it if there's enough interest.
CommonCrawl
Machine Learning Seminar Series, 2016-2018 The Machine Learning Seminar Series features weekly talks on recent research in machine learning. The seminar is typically held Fridays 11-12am at TTIC. Date: December 14, 2018 11am-noon Venue: Room 526, 6045 S Kenwood Ave, Chicago, IL 60637 Abstract: Bayesian optimization (BO) aims at efficiently optimizing expensive black-box functions, such as hyperparameter tuning problems in machine learning. Scaling up BO to many variables relies on structural assumptions about the underlying black-box, to alleviate the curse of dimensionality. In this talk, we review several options to this end, with emphasis on the low effective dimensionality hypothesis. Starting with a sampled random embedding, we discuss several practical issues related to selecting a suitable search space. We also present alternative sampling methods for the embedding, as well as techniques to identify the low effective subspace. The performance and robustness gains of the proposed enhancements are illustrated on numerical examples. Date: November 30, 2018 11am-noon Venue: Room 526, 6045 S Kenwood Ave, Chicago, IL 60637 Abstract: We suggest a general oracle-based framework that captures different parallel stochastic optimization settings described by a dependency graph, and derive generic lower bounds in terms of this graph. We then use the framework and derive lower bounds for several specific parallel optimization settings, including delayed updates and parallel processing with intermittent communication. We highlight gaps between lower and upper bounds on the oracle complexity, and cases where the "natural" algorithms are not known to be optimal. Date: October 26, 2018 11am-noon Abstract: Neural networks perform extremely well at object recognition tasks, sometimes even meeting or exceeding human performance. However, neural networks have recently been shown to be susceptible to "adversarial attacks," in which small (or even imperceptible) changes to an image completely alter its class label according to a neural net. This talk explores several new angles on the concept of adversarial attacks. First, I explore the idea of "poisoning" attacks, which manipulate a network at train time rather than test time. Then, I address a fundamental question about adversarial attacks: Are adversarial attacks inevitable? I'll try to answer this question from both a theoretical and an experimental perspective. Date: October 19, 2018 11am-noon Abstract: We bring the tools from Blackwell's seminal result on comparing two stochastic experiments from 1953, to shine a new light on a modern application of great interest: Generative Adversarial Networks (GAN). Binary hypothesis testing is at the center of training GANs, where a trained neural network (called a critic) determines whether a given sample is from the real data or the generated (fake) data. By jointly training the generator and the critic, the hope is that eventually the trained generator will generate realistic samples. One of the major challenges in GANs is known as "mode collapse": the lack of diversity in the samples generated by generators trained in this way.
We propose a new training framework, where the critic is fed with multiple samples jointly (which we call packing), as opposed to each sample separately as done in standard GAN training. With this simple but fundamental departure from existing GANs, experimental results show that the diversity of the generated samples improves significantly. We analyze this practical gain by first providing a formal mathematical definition of mode collapse and making a fundamental connection between the idea of packing and the intensity of mode collapse. Precisely, we show that the packed critic naturally penalizes mode collapse, thus encouraging generators with less mode collapse. The analyses critically rely on operational interpretation of hypothesis testing and corresponding data processing inequalities, which lead to sharp analyses with simple proofs. For this talk, I will assume no prior background on GANs. Date: May 18, 2018 11am-noon Abstract: Computing optimal transport distances between distributions is a fundamental problem that is becoming increasingly prominent in statistics, image processing, and machine learning. Key to the use of optimal transport in practice is the existence of fast, stable algorithms for computing these distances. In this work, we exhibit a simple approximation algorithm for this problem that runs in near-linear time. Our work is based on new analysis of the celebrated Sinkhorn algorithm for matrix scaling, as well as new results about the behavior of linear programs under entropic penalization. Joint work with Jason Altschuler and Philippe Rigollet. Date: April 27, 2018 11am-noon Abstract: The problem of handling adaptivity in data analysis, intentional or not, permeates a variety of fields, including test-set overfitting in ML challenges and the accumulation of invalid scientific discoveries. We propose a mechanism for answering an arbitrarily long sequence of potentially adaptive statistical queries, by charging a price for each query and using the proceeds to collect additional samples. Crucially, we guarantee statistical validity without any assumptions on how the queries are generated. We also ensure with high probability that the cost for M non-adaptive queries is O(\log M), while the cost to a potentially adaptive user who makes M queries that do not depend on any others is O(\sqrt{M}). Abstract: The fast growing size of modern data sets and the nature of data acquisition motivate new algorithms and systems that are able to learn from distributed data efficiently. In this talk, I will present two recent works that explore the theoretical foundations of distributed machine learning. First, I will present novel algorithms to learn sparse linear predictors from distributed data. To match the centralized optimal predictor, the number of rounds required to communicate only scales logarithmically with the number of machines. Second, I will discuss how to parallelize stochastic gradient descent (SGD) with a more sophisticated minibatch prox scheme, which improves over minibatch SGD by allowing larger minibatch size without slowing down the convergence. This is joint work with Mladen Kolar, Nathan Srebro, Weiran Wang and Tong Zhang. Abstract: Density ratio estimation has become a versatile tool in the machine learning community recently. However, due to its unbounded nature, density ratio estimation is vulnerable to corrupted data points, which often pushes the estimated ratio toward infinity.
In this paper, we present a robust estimator which automatically identifies and trims outliers. The proposed estimator has a convex formulation, and the global optimum can be obtained via subgradient descent. We analyze the parameter estimation error of this estimator under high-dimensional settings. Experiments are conducted to verify the effectiveness of the estimator. Title: Oracle-efficient Online Learning and Applications to Auction Design Date: October 5, 2017 11am-noon Speaker: Nika Haghtalab, Carnegie Mellon University Abstract: We consider the fundamental problem of learning from expert advice, a.k.a. online no-regret learning, where we have access to an offline optimization oracle that can be used to compute, in constant time, the best performing expert at any point in time. We consider the design of no-regret algorithms that are computationally efficient using such an oracle. We present structural properties under which we show oracle-efficient no-regret algorithms exist, even when the set of experts is exponentially large in a natural representation of the problem. Our algorithm is a generalization of the Follow-The-Perturbed-Leader algorithm of Kalai and Vempala that at every step follows the best-performing expert subject to some perturbations. Our design uses a shared source of randomness across all experts that can be efficiently implemented by using an oracle on a random modification of the history of the play at every time step. Our second main contribution is showing that the structural properties required for our oracle-efficient online algorithm are present in a large class of problems. As examples, we discuss applications of our oracle-efficient learning results to the adaptive optimization of large classes of auctions, including (1) VCG auctions with bidder-specific reserves in single-parameter settings, (2) envy-free item pricing in multi-item auctions, and (3) Myerson-like auctions for single-item settings. This talk is based on joint work with Miro Dudik, Haipeng Luo, Rob Schapire, Vasilis Syrgkanis, and Jenn Wortman Vaughan. Date: September 29, 2017 11am-noon Abstract: In this talk we will consider linear structural equation models (SEMs) with non-Gaussian errors. In particular, these models assume that each observed variable is a linear function of the other variables, plus some error term. The first half of the talk will consider SEMs in which the error terms may be dependent; these models correspond to mixed graphs. We propose empirical likelihood procedures for inference and estimation, and suggest several modifications–including a profile likelihood–in order to improve tractability and performance. We show that when the error distributions are non-Gaussian, the use of EL may increase statistical efficiency and improve assessment of significance. In the second half of the talk, we will consider SEMs which correspond to directed acyclic graphs (DAGs). It has been previously shown for DAGs, when the error terms in a SEM are non-Gaussian, the exact causal structure–not simply a larger equivalence class–can be identified. We show that for suitably sparse graphs, when the error terms follow a log-concave distribution (but are non-Gaussian), the graph can also be identified in the high dimensional setting where the number of variables may exceed the number of observations. Title: Compressed and penalized linear regression Date: June 2, 2017 11am-noon Speaker: Daniel J.
McDonald, Assistant Professor of Statistics, Indiana University, Bloomington, IN Abstract: Modern applications require methods that are computationally feasible on large datasets but also preserve statistical efficiency. Frequently, these two concerns are seen as contradictory: approximation methods that enable computation are assumed to degrade statistical performance relative to exact methods. In applied mathematics, where much of the current theoretical work on approximation resides, the inputs are considered to be observed exactly. The prevailing philosophy is that while the exact problem is, regrettably, unsolvable, any approximation should be as small as possible. However, from a statistical perspective, an approximate or regularized solution may be preferable to the exact one. Regularization formalizes a trade-off between fidelity to the data and adherence to prior knowledge about the data-generating process such as smoothness or sparsity. The resulting estimator tends to be more useful, interpretable, and suitable as an input to other methods. In this work, we propose new methodology for estimation and prediction under a linear model borrowing insights from the approximation literature. We explore these procedures from a statistical perspective and find that in many cases they improve both computational and statistical performance. Date: May 12, 2017 10-11am Abstract: In probabilistic topic models, the quantity of interest—a low-rank matrix consisting of topic vectors—is hidden in the text corpus matrix, masked by noise, and Singular Value Decomposition (SVD) is a potentially useful tool for learning such a low-rank matrix. However, the connection between this low-rank matrix and the singular vectors of the text corpus matrix is usually complicated and hard to spell out, so how to use SVD for learning topic models faces challenges. We overcome the challenge by revealing a surprising insight: there is a low-dimensional simplex structure which can be viewed as a bridge between the low-rank matrix of interest and the SVD of the text corpus matrix, and which allows us to conveniently reconstruct the former using the latter. Such an insight motivates a new SVD-based approach to learning topic models. For asymptotic analysis, we show that under the popular probabilistic topic model (Hofmann, 1999), the convergence rate of the l1-error of our method matches that of the minimax lower bound, up to a multi-logarithmic term. In showing these results, we have derived new element-wise bounds on the singular vectors and several large-deviation bounds for weakly dependent multinomial data. Our results on the convergence rate and asymptotic minimaxity are new. We have applied our method to two data sets, Associated Press (AP) and Statistics Literature Abstract (SLA), with encouraging results. In particular, there is a clear simplex structure associated with the SVD of the data matrices, which largely validates our discovery. Date: March 31, 2017 10-11am Venue: Room 526, TTIC. 6045 S Kenwood Ave, Chicago, IL 60637 Abstract: Detecting causal associations in time series datasets is a key challenge for novel insights into complex dynamical systems such as the Earth system or the human brain. Interactions in high-dimensional dynamical systems involve time-delays, nonlinearity, and strong autocorrelations, which present major challenges for causal discovery techniques such as Granger causality, leading to low detection power.
Through large-scale numerical experiments and theoretical analyses we further identify strong detection biases of common causal discovery methods: The detection power for individual links may depend not only on their causal strength, but also on autocorrelation and other dependencies. Here we introduce a reliable method for large-scale linear and nonlinear causal discovery that provides more detection power than current methods and largely overcomes detection biases, allowing associations in large-scale analyses to be ranked more accurately by their causal strength. The method is demonstrated on a global surface pressure dataset representing atmospheric dynamics. Date: March 17, 2017 10-11am Abstract: We propose combinatorial inference to explore the topological structures of graphical models. Combinatorial inference can conduct hypothesis tests on many graph properties including connectivity, hub detection, perfect matching, etc. In particular, our methods can be applied to any graph property which is invariant under the addition of edges. On the other hand, we also propose a shortest self-returning path lemma to prove the general optimality of our testing procedures for various graph properties. The combinatorial inference is also generalized to time-varying graphical models and we can infer the dynamic topological structures for graphs. Our methods are applied to neuroscience by discovering hub voxels contributing to visual memories. Abstract: Stochastic gradient descent procedures have gained popularity for iterative parameter estimation from large datasets because they are simpler and faster than classical optimization procedures. However, their statistical properties are not well understood in theory. And in practice, avoiding numerical instability requires careful tuning of key parameters. In this talk, we will focus on implicit stochastic gradient descent procedures, which involve parameter updates that are implicitly defined. Intuitively, implicit updates shrink standard stochastic gradient descent updates, and the amount of shrinkage depends on the observed Fisher information matrix, which does not need to be explicitly computed. Thus, implicit procedures increase stability without increasing the computational burden. Our theoretical analysis provides a full characterization of the asymptotic behavior of both standard and implicit stochastic gradient descent-based estimators, including finite-sample error bounds. Importantly, analytical expressions for the variances of these stochastic gradient-based estimators reveal their exact loss of efficiency, which enables principled statistical inference on large datasets. Part of ongoing work focuses on a crucial aspect of such inference with stochastic approximation procedures, which is to know when the procedure has reached the asymptotic regime. Date: November 18, 2016 10-11am Abstract: We study the sample and time complexity of canonical correlation analysis (CCA) in the population setting. With mild assumptions on the data distribution, we show that in order to achieve $\epsilon$-suboptimality in a properly defined measure of alignment between the estimated canonical directions and the population solution, we can solve the empirical objective exactly with $O\!\left(\frac{1}{\epsilon^{?} \Delta^2 \gamma^2}\right)$ samples, where $\Delta$ is the singular value gap of the whitened cross-covariance matrix and $\frac{1}{\gamma}$ is an upper bound of the condition numbers of the auto-covariance matrices.
Moreover, we can achieve the same learning accuracy by drawing the same level of samples and solving the empirical objective approximately with a stochastic optimization algorithm; this algorithm is based on the shift-and-invert power iterations and only needs to process the dataset for $O(\log(1/\epsilon))$ passes. Finally, we show that, given an estimate of the canonical correlation, the streaming version of shift-and-invert power iterations achieves the same learning accuracy with the same level of sample complexity. This is joint work with Jialei Wang, Dan Garber, and Nathan Srebro. Date: November 11, 2016 10-11am Abstract: Computational protein design algorithms search for modifications in protein systems that will optimize a desired function. Usually, this function involves chemical binding of a protein to another protein or to a drug. Binding is exquisitely sensitive to the 3-D geometry adopted by the system, with low-energy geometries preferred, so to optimize binding we must be able to optimize energy with respect to molecular geometry. Thus, our optimization—both over chemical modifications and over 3-D geometries—must take as input an "energy function," which maps atomic coordinates to energy. In this work, we present methods to represent the energy function that are amenable to more efficient and more accurate protein design calculations than were previously possible. We pose the calculation of this representation as a machine learning problem. Two relatively simple models are found to be helpful: a continuous polynomial representation and a discrete expansion in local terms. These models open the possibility of modeling binding much more realistically in protein design calculations, while performing optimization efficiently. Date: November 4, 2016 10-11am Abstract: As the size and complexity of neural datasets continue to grow, there is an increasing need for scalable and black-box approaches to extract knowledge from these data. In the first half of this talk, I will discuss my recent work in developing methods to uncover the structure and function of neural circuits in large-scale neural recordings. My aim is to provide an introduction to modern neural datasets for an ML audience, and discuss some of the challenges that we now face in developing learning algorithms for these data. In the second half of the talk, I will discuss my recent work in global black-box optimization (at UAI 2016). The idea behind our approach is to learn a convex relaxation of a (possibly) non-convex function by estimating its convex envelope (i.e., tight convex lower bound). We show that our approach can lead to provable convergence to the global optimum for classes of smooth functions. This is work with Mohammad Gheshlaghi Azar (Google DeepMind), Konrad Kording (Northwestern), and Bobby Kasthuri (University of Chicago, Argonne National Laboratory). Date: October 28, 2016 10-11am Abstract: Extracting knowledge and providing insights into the complex mechanisms underlying noisy high-dimensional data sets is of utmost importance in many scientific domains. Networks are an example of simple, yet powerful tools for capturing relationships among entities over time. For example, in social media, networks represent connections between different individuals and the type of interaction that two individuals have. In systems biology, networks can represent the complex regulatory circuitry that controls cell behavior.
Unfortunately, the relationships between entities are not always observable and need to be inferred from nodal measurements. I will present a line of work that deals with estimation and inference in high-dimensional semi-parametric elliptical copula models. I will explain why these models are useful in exploring complex systems, how to efficiently estimate parameters of the model, and also provide theoretical guarantees that justify usage of the models in scenarios where more rigid Gaussian graphical models are commonly used. Joint work with Rina Foygel Barber, Junwei Lu, and Han Liu. Date: October 21, 2016 10-11am Abstract: In this work, we show that there are no spurious local minima in the non-convex factorized parametrization of low-rank matrix recovery from incoherent linear measurements. With noisy measurements we show all local minima are very close to a global optimum. Together with a curvature bound at saddle points, this yields a polynomial time global convergence guarantee for stochastic gradient descent from random initialization. (Joint work with Behnam Neyshabur, Nathan Srebro) Abstract: We provide tight upper and lower bounds on the complexity of minimizing the average of m convex functions using gradient and prox oracles of the component functions. We show a significant gap between the complexity of deterministic vs randomized optimization. For smooth functions, we show that accelerated gradient descent (AGD) and an accelerated variant of SVRG are optimal in the deterministic and randomized settings respectively, and that a gradient oracle is sufficient for the optimal rate. For non-smooth functions, having access to prox oracles reduces the complexity and we present optimal methods based on smoothing AGD that improve over methods using just gradient accesses. Date: October 7, 2016 10-11am Abstract: As machine learning increasingly replaces human judgment in decisions protected by anti-discrimination law, the problem of algorithmically measuring and ensuring fairness in machine learning is pressing. What does it mean for a predictor to not discriminate with respect to a protected group (e.g., according to race, gender, etc.)? We propose a notion of non-discrimination that can be measured statistically, used algorithmically, and avoids many of the pitfalls of previous definitions. We further study what type of discrimination and non-discrimination can be identified with oblivious tests, which treat the predictor as an opaque black-box, and what different oblivious tests tell us about possible discrimination. (Joint work with Moritz Hardt and Eric Price) Title: END-TO-END TRAINING APPROACHES FOR DISCRIMINATIVE SEGMENTAL MODELS Date: September 30, 2016 10-11am Speaker: Hao Tang, TTIC Abstract: Recent work on discriminative segmental models has shown that they can achieve competitive speech recognition performance, using features based on deep neural frame classifiers. However, segmental models can be more challenging to train than standard frame-based approaches. While some segmental models have been successfully trained end-to-end, there is a lack of understanding of their training under different settings and with different losses. We investigate a model class based on recent successful approaches, consisting of a linear model that combines segmental features based on an LSTM frame classifier.
Similarly to hybrid HMM-neural network models, segmental models of this class can be trained in two stages (frame classifier training followed by linear segmental model weight training), end-to-end (joint training of both frame classifier and linear weights), or with end-to-end fine-tuning after two-stage training. In this work we present several findings. First, models in this class can be trained with various kinds of losses. We present segmental models trained end-to-end with hinge loss, log loss, latent hinge loss, and marginal log loss. We compare several losses for the case where training alignments are available as well as where they are not. We find that in general, marginal log loss provides the most consistent strong performance without requiring ground-truth alignments. We also find that training with dropout is very important in obtaining good performance with end-to-end training. Finally, the best results are often obtained by a combination of two-stage training and fine-tuning.
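The May 18, 2018 abstract above centers on the Sinkhorn algorithm for entropically regularized optimal transport. The following is a minimal, generic sketch of those matrix-scaling iterations on a toy problem (my own illustration, not the speakers' near-linear-time variant or its analysis).

```python
import numpy as np

def sinkhorn(cost, r, c, reg=0.1, num_iters=500):
    """Entropically regularized OT between histograms r and c via Sinkhorn scaling."""
    K = np.exp(-cost / reg)               # Gibbs kernel
    u = np.ones_like(r)
    v = np.ones_like(c)
    for _ in range(num_iters):            # alternate matrix-scaling updates
        u = r / (K @ v)
        v = c / (K.T @ u)
    plan = np.diag(u) @ K @ np.diag(v)    # approximate transport plan
    return plan, np.sum(plan * cost)      # plan and its transport cost

# Tiny example: two histograms on 3 points with squared-distance costs.
x = np.array([0.0, 1.0, 2.0])
cost = (x[:, None] - x[None, :]) ** 2
r = np.array([0.5, 0.3, 0.2])
c = np.array([0.2, 0.3, 0.5])
plan, approx_cost = sinkhorn(cost, r, c)
print(plan.round(3))
print("approximate transport cost:", approx_cost)
```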
CommonCrawl
Influence of Plastic Deformation of Copper on the Behavior of Electromagnetic Shielding
Benhorma Mohammed Elhadi* | Hadjadj Abdechafik | Gaoui Bachir | Benhorma Hadj Aissa
Laboratory of Mechanic, Laghouat University, Laghouat 0300, Algeria
Laboratory of LACOSERE, Laghouat 0300, Algeria
Corresponding Author Email: [email protected]
https://doi.org/10.18280/acsm.430301
Nowadays, one of the main drivers for improving material performance is the need to reduce the electromagnetic interference (EMI) generated by high-frequency electronic circuits, which seriously degrades equipment performance. For this purpose, copper is one of the most widely used materials for shielding applications. Plastic deformation at the macroscopic scale generates, at the microscopic scale, many moving dislocations (internal stresses) that affect the efficiency of the electromagnetic shielding. However, how plastic deformation affects the electrical properties that govern the shielding behavior is still poorly known; it deserves further study and constitutes a promising research field. The main objective of this work is to study the effect of plastic deformation of copper on its electromagnetic shielding efficiency in the 0-1 GHz frequency range. A series of electromagnetic shielding experiments was carried out by means of a dual transverse electromagnetic cell (DTEM) on copper samples with a purity of 99 %: i) without plastic deformation, and ii) deformed at rates of 2 % and 3 %. The results obtained clearly show the variation of the electromagnetic shielding efficiency as a function of the copper plastic deformation rate.
plastic deformation, electric field, dislocation, shielding, TEM cell, electrical conductivity
Electromagnetic interference (EMI) is becoming a serious problem due to the multiplication of domestic and military electrical appliances, as well as scientific equipment, emitting electromagnetic radiation. This radiation can easily interfere with electrical and electronic devices and generate harmful effects [1-3]. For this reason, research focuses on electromagnetic compatibility and on high-efficiency materials whose purpose is to reduce EMI. Over the years, shielding technology has been perfected to improve various aspects, such as flexibility, lightness and shielding efficiency. Materials must be designed and developed to inhibit undesirable radiation [4]. The interactions between electromagnetic waves and materials can be divided into three mechanisms: reflection, absorption and multiple reflections within the shield. The first mechanism, reflection, depends on the permittivity and conductivity of the material: reflection increases with both of them. The second mechanism of protection against electromagnetic interference is absorption, which requires the existence of mobile charge carriers (electrons or holes) that interact with the electromagnetic radiation. The third mechanism of attenuation of electromagnetic interference is multiple reflections; it depends on the physical properties and the geometric shape of the shielding material. When the material is thick enough, this third mechanism can be neglected [4-6].
Electromagnetic shielding is the main effective solution to reduce interference and interactions between devices or between subcomponents of the devices themselves [6-7]. In general, materials of good electrical conductivity, such as copper and aluminum, have good shielding performance. Copper is one of the most widely used metals in this field and is extremely versatile and advantageous for a wide variety of applications because of its mechanical and electrical properties. Copper is a malleable and ductile material; in addition, copper and copper alloy pieces show a highly ductile behavior [8-11]. The plastic deformation of a material can be defined as a permanent deformation or change in shape of a solid body without fracture. It occurs when a sufficient stretching force is applied to the material, leading to a significant elongation. In this case, two distinguishable deformations can be observed: first an elastic deformation, followed by a plastic deformation. The elastic deformation is reversible: when the stress is released before the elastic limit, the material returns to its original shape. For the plastic deformation, a permanent deformation of the material remains even after the release of the stretching force [12-14]. The study of plastic deformation stems from the need to control the shaping and use of materials. It has long been empirical, and it is only in recent decades that the concepts necessary to understand the physical phenomena that occur during plastic flow have been developed [15-17]. For crystalline solids, the basic mechanisms are fairly well known, but the effect of plastic deformation on shielding efficiency (electrical parameters) is poorly known and is currently a promising area of research [18-20]. Several researchers [13, 15-16] obtained preliminary results on the effect of dislocations that interact during plastic deformation on electrical properties. In the present work, we study the effect of plastic deformation on the shielding immunity of copper against electromagnetic interference (EMI). The aim of this work is to perform a preliminary analysis of the variation of the electrical conductivity with the internal stresses generated by the plastic deformation of high-purity metallic copper (99 %), making it possible to assess the shielding efficiency of Cu as a function of the mechanical deformation rate (plastic domain) in the 0-1 GHz frequency range. 2. Shielding Theory The rapid increase in electromagnetic applications has also increased the need for EMI protection materials. The attenuation of electromagnetic radiation is one of the main indicators for measuring the effectiveness of EMI shielding; this indicator is the ratio of the intensity of an electromagnetic signal before and after the shield. In other words, the propagation of electromagnetic waves from one region to another can be efficiently controlled by electromagnetic wave insulating materials. These materials are selected according to the desired applications [21]. The shielding effectiveness (SE) is a property of a material indicating its ability to block undesirable electromagnetic interference [1, 3, 7]. In this work, the S-parameters obtained from the vector network analyzer were used to calculate the total EMI shielding effectiveness as follows: $\mathrm{SE}=10 \log \frac{1}{\left|\mathrm{S}_{12}\right|^{2}}=10 \log \frac{1}{\left|S_{21}\right|^{2}}$ (1) where $S_{ij}$ represents the power transmitted from port $i$ to port $j$.
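As a small illustration of Eq. (1), the sketch below converts measured transmission coefficients into shielding effectiveness in dB. The S21 values are made up for the example; they are not measurements from this article.

```python
import numpy as np

# Eq. (1): SE(dB) = 10*log10(1/|S21|^2), with |S21| the measured transmission
# coefficient on a linear scale. The values below are illustrative only.
def shielding_effectiveness_db(s21_linear):
    s21_linear = np.asarray(s21_linear, dtype=float)
    return 10.0 * np.log10(1.0 / np.abs(s21_linear) ** 2)

s21 = [0.10, 0.03, 0.01]                 # transmitted amplitude through the shield
print(shielding_effectiveness_db(s21))   # -> [20., ~30.5, 40.] dB
```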
Also, the shielding effectiveness (SE, dB) of the shielding material (see Figure 1) can be expressed as [4, 6]: $SE=SE_R+SE_A+SE_B$ (2) where SEA and SER are the absorption and reflection loss respectively; SEB is the multiple reflection loss. They can be further expressed as: $S E_{R}=20 \log \left(Z_{0} / 4 Z_{s}\right)$ (3) $S E_{A}=20 \log \left(e^{-2 d / \delta}\right)$ (4) $S E_{B}=20 \log \left(1-e^{-2 d / \delta}\right)$ (5) $\delta=(\pi f \sigma \mu)^{-1 / 2}$ (6) where f is the frequency (Hz) of the electromagnetic waves, σ is the electrical conductivity, μ is the magnetic permeability, d is the thickness (mm) of the shielding material, Z0 is the intrinsic impedance of the incident wave, ZS is the impedance of the shield and $\delta$ is the skin depth. Figure 1. Different mechanisms of shielding 3. Materials Deformation When a sufficient load is applied to a metallic material, it deforms and changes shape. This is the result of a reversible elastic deformation followed by an irreversible plastic one that persists even after unloading. Plastic deformation of a crystalline material at the macroscopic scale is explained at the microscopic scale by the generation and mobility of many dislocation lines, known in mechanics as internal stresses. Plastic deformation occurs when the loading is large enough to deform the material permanently; in other words, plastic deformation is a permanent deformation or change in shape, without fracture, under the effect of stress. Plastic deformation introduces dislocations inside materials, leaving them thermodynamically out of equilibrium [22-23]. $\sigma=E \cdot \varepsilon$ (7) where σ is the stress, E the Young's modulus and $\varepsilon$ the strain. Figure 2. Material deformation curve 4.1 Samples preparation The copper sample for the tensile test has a total size of 150 × 70 × 4 mm. The effective area of the sample in which the TEM test is performed is 70 × 70 mm² with a thickness of 1 mm (Figure 3). To meet this size requirement, the samples intended for the plastic deformation by tensile test were prepared from larger copper sheets according to the following steps. The test pieces were cut from larger copper sheets with a thickness of 4 mm, giving samples of 150 × 70 × 4 mm. To meet the shielding measurement requirements, the central area of the sample (i.e. 70 × 70 mm²) was milled with a very small step of 0.05 mm at a rotational speed of 500 rpm until a thickness of 1 mm was reached. To achieve a shiny surface finish, delicate polishing was performed with fine-grained abrasive paper; this also eliminated the surface microcracks generated by the milling operation. Figure 3. Deformation of Cu by unidirectional traction 4.2 Plastic deformation With the aim of achieving a more or less homogeneous plastic deformation in the useful zones of the specimens, and taking into account the ductility of copper, a deformation rate of 10 μm/s was imposed, avoiding microcrack formation while introducing dislocations (internal stresses). The deformation rate was 3 % for the first group of samples, 2 % for the second, and the third group was left undeformed.
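For orientation, the sketch below evaluates the skin depth and the reflection and absorption contributions of Eqs. (3)-(6) for copper-like values. The numbers are illustrative assumptions, not data from this article; the skin depth uses $\delta = 1/\sqrt{\pi f \mu \sigma}$, the absorption term is written in the usual $8.686\,d/\delta$ form, and the shield impedance is approximated by the resistive part $\sqrt{\pi f \mu / \sigma}$.

```python
import numpy as np

# Illustrative evaluation of the shielding terms for a copper-like conductor.
mu0 = 4e-7 * np.pi          # vacuum permeability (H/m), assumed for copper (mu_r ~ 1)
Z0 = 376.73                 # intrinsic impedance of free space (ohm)

def skin_depth(f, sigma, mu=mu0):
    return 1.0 / np.sqrt(np.pi * f * mu * sigma)          # Eq. (6), corrected form

def absorption_loss_db(f, sigma, d, mu=mu0):
    return 8.686 * d / skin_depth(f, sigma, mu)           # usual absorption-loss form

def reflection_loss_db(f, sigma, mu=mu0):
    Zs = np.sqrt(np.pi * f * mu / sigma)                  # approx. shield impedance
    return 20.0 * np.log10(Z0 / (4.0 * Zs))               # Eq. (3)

f = 380e6       # Hz, near the resonance reported for the undeformed sample
sigma = 5.8e7   # S/m, nominal conductivity of pure copper (assumed value)
d = 1e-3        # m, sample thickness
print("skin depth (m):", skin_depth(f, sigma))
print("SE_A (dB):", absorption_loss_db(f, sigma, d))
print("SE_R (dB):", reflection_loss_db(f, sigma))
```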
Note that the tensile test specimens were deformed up to necking, so that the deformation was obtained in the homogeneous plastic regime; this homogeneity of the plastic deformation is a prerequisite for a satisfactory measurement of the shielding effectiveness in the double TEM cell (DTEM). Samples were deformed at 2 % and 3 %, and other samples were left without plastic deformation, in order to compare their shielding efficiency as a function of the deformation rate. Cells for measuring the shielding efficiency 4.3 Definition of the TEM cell The TEM cell is a closed chamber based on the principle of a tri-plate transmission line. The TEM cell consists of two ground planes with a central conductor interposed between them, usually referred to by the English term "septum". In the TEM cell, the electromagnetic wave propagates and passes through the sample (TEM mode) provided that the wavelength is greater than the transverse dimension of the cell [24]. The use of the TEM cell is limited by the cutoff frequency of this mode, which is fixed by the dimensions of the cell; these are chosen so as to allow only the generation of a TEM mode. 4.4 DTEM cell The DTEM cell (dual TEM cell) is composed of two TEM cells coupled through an opening in a common wall (Figure 4); one of these cells is used for the emission of the electromagnetic field and the other for the reception. The theory of the small aperture is often used to explain the coupling between the two cells [24]. In general, the coupling opening has a circular or square shape. Figure 4. Dual TEM cell When the transmitting cell is excited, part of the electromagnetic field is transmitted to the receiving cell through the opening. The transmitted field results in the appearance of voltages at the connectors of the receiving cell. When an element under test is placed on the opening, the coupling between the two cells is reduced. This reduction gives a direct measurement of the shielding efficiency. The DTEM cell allows the separation of the electric field and the magnetic field; this is explained by the theory of the small opening [25-26]. If the opening is electrically small, then its scattering effect is substantially equivalent to that produced by a set of equivalent electric and magnetic dipoles. These dipole moments can be used to predict the scattered fields, which in the case of the double TEM cell gives a description of the aperture coupling. The dipole moments depend on the incidence of the fields and on the shape of the opening (aperture polarizability). 5.1 X-ray diffraction results In this part of the work, X-ray diffraction analysis of the Cu samples was carried out. The samples were tested under X-rays before and after the plastic deformation; this operation aims to control the state of the microstructure of the material. The analysis of the diffractograms shows that the reflection peaks of the same families of reflecting planes are superposable, proving that the Cu samples did not undergo any phase change, nor any other change of the microstructure, following the plastic deformation imposed on them. Moreover, according to the theory of dislocation mobility, the samples underwent an introduction of dislocations after plastic deformation. Figure 5. X-ray results of Cu before stress Figure 6. XRD diffractograms of Cu samples after deformation Figure 7 shows the measurement setup for the shielding effectiveness of the samples, which consists of a vector network analyzer (VNA), an amplifier, a frequency generator, and a DTEM cell.
This setup measures the direct and reflected signal amplitudes as a function of frequency. Figure 7. The shielding efficiency measuring device 5.2 Shielding effectiveness results Figure 8 presents the shielding effectiveness against the electric field of the three prepared Cu samples with a high purity of 99 %. These results were obtained with the DTEM chamber in the 0-1 GHz frequency range. The first Cu sample (undeformed) exhibits a single peak at 380 MHz. The SE of the second sample, with 2 % deformation, is higher than that of the first (undeformed) for frequencies under 600 MHz, and the resonance peak is shifted up towards 1 GHz. This is due to the dislocations generated by the mechanical traction of the Cu sample. The third sample, with 3 % deformation, exhibited a good SE compared to the first two samples over the whole frequency range used in this study. This better performance can be explained by a larger increase in the dislocation density. Dislocations occur at the angstrom scale and are generated by the linear stretching of the metal; as demonstrated in this article, they positively affect the SE behavior of Cu. It can be concluded that dislocations play a key role in trapping the electric waves inside the material, thereby reducing outside reflections. These promising results open new perspectives for shielding applications in the development of equipment subjected to electromagnetic fields, helping to avoid equipment failure. Figure 8. Comparison of the experimental shielding effectiveness results of Cu samples with 0, 2 and 3 % deformation respectively This work deals with the effect of plastic deformation on the electromagnetic shielding efficiency (SE) of copper. The use of the DTEM cell for the SE measurement revealed an evolution of SE proportional to the plastic deformation rate. It was found that the higher the deformation rate, the higher the dislocation density. The dislocations advantageously trap a maximum of electric and/or electromagnetic waves and dissipate their energy inside the material instead of letting it radiate outside, which makes it possible to increase the electromagnetic shielding efficiency of copper. Such behavior is very similar to the role played by carbon fibers in composite materials. According to equations (5) and (6), this efficiency also varies proportionally to the electrical conductivity of copper, which leads to the conclusion that the conductivity is itself proportional to the plastic deformation rate applied to the material. Consequently, the experimental results obtained in this article show that the electromagnetic shielding quality of copper can be improved by controlled plastic deformation, which can be regarded as noticeable progress. This work was supported in part by the Polytechnic Military School of Bordj El Bahri, Algiers. The authors would like to thank Pr. Chabira Fouad Salem of the mechanics laboratory (LME), at the Mechanics Department of Laghouat University, for his assistance, his valuable advice and for providing the material and equipment. [1] Danlée, Y., Bailly, C., Huynen, I. (2014). Thin and flexible multilayer polymer composite structures for effective control of microwave electromagnetic absorption. Composites Science and Technology, 100: 182-188. https://doi.org/10.1016/j.compscitech.2014.06.010 [2] Gaoui, B., Hadjadj, A., Kious, M. (2017).
Enhancement of the shielding effectiveness of multilayer materials by gradient thickness in the stacked layers. Journal of Materials Science: Materials in Electronics, 28(15): 11292-11299. https://doi.org/10.1007/s10854-017-6920-8 [3] Danlée, Y., Huynen, I., Bailly, C. (2012). Thin smart multilayer microwave absorber based on hybrid structure of polymer and carbon nanotubes. Applied Physics Letters, 100(21): 213105. https://doi.org/10.1063/1.4717993 [4] Thomassin, J.-M., Vuluga, D., Alexandre, M., Jérôme, C., Molenberg, I., Huynen, I., Detrembleur, C. (2012). A convenient route for the dispersion of carbon nanotubes in polymers: Application to the preparation of electromagnetic interference (EMI) absorbers. Polymer, 53(1): 169-174. https://doi.org/10.1016/j.polymer.2011.11.026 [5] Tahar, M., Hadjadj, A., Kious, M., Arun Prakash, V.A., Gaoui, B. (2018). Effect of magnetic iron (III) oxide particle addition with MWCNTs in kenaf fibre-reinforced epoxy composite shielding material in 'E', 'F', 'I' and 'J' band microwave frequencies. Materials Research Express, 6(4). https://doi.org/10.1088/2053-1591/aaf9de [6] Gaoui, B., Hadjadj, A., Kious, M. (2016). Comparison electromagnetic shielding effectiveness between single layer and multilayer shields. In Power Engineering Conference (UPEC), 2016 51st International Universities. IEEE, pp. 1-5. https://doi.org/10.1109/UPEC.2016.8114106 [7] Gaoui, B., Hadjadj, A., Kious, M. (2017). Novel multilayer arrangement of conductive layers traps the electromagnetic interferences by multiple internal reflections at high frequency in the far field. Journal of Materials Science: Materials in Electronics, 28(4): 3924-3930. https://doi.org/10.1007/s10854-016-6006-z [8] He, Y., Ao, J., Yang, J., Tang, X. (2014). The shielding-effectiveness based magnetic field shielding theory and its application in mobile payment systems. In Vehicular Technology Conference (VTC Fall), 2014 IEEE 80th. IEEE, pp. 1-5. https://doi.org/10.1109/VTCFall.2014.6966203 [9] Lou, C.-W., Lin, T.A., Chen, A.-P., Lin, J.-H. (2016). Stainless steel/polyester woven fabrics and copper/polyester woven fabrics: Manufacturing techniques and electromagnetic shielding effectiveness. Journal of Industrial Textiles, 46(1): 214-236. https://doi.org/10.1177/1528083715580518 [10] Tong, X.C. (2016). Advanced materials and design for electromagnetic interference shielding. CRC Press. [11] Papanikolaou, S., Cui, Y., Ghoniem, N. (2017). Avalanches and plastic flow in crystal plasticity: An overview. Modelling and Simulation in Materials Science and Engineering, 26(1): 013001. https://doi.org/10.1088/1361-651X/aa97ad [12] Borodin, E.N., Mayer, A.E. (2015). Structural model of mechanical twinning and its application for modeling of the severe plastic deformation of copper rods in Taylor impact tests. International Journal of Plasticity, 74: 141-157. https://doi.org/10.1016/j.ijplas.2015.06.006 [13] Hubert, O., Lazreg, S. (2012). Multidomain modeling of the influence of plastic deformation on the magnetic behavior. IEEE Transactions on Magnetics, 48(4): 1277-1280. https://doi.org/10.1109/TMAG.2011.2172935 [14] Li, J., Li, F., Zhao, C., Chen, H., Ma, X., Li, J. (2016). Experimental study on pure copper subjected to different severe plastic deformation modes. Materials Science and Engineering: A, 656: 142-150. https://doi.org/10.1016/j.msea.2016.01.018 [15] Rodrigues-Jr, D., Silveira, J., Gerhardt, G., Missell, F., Landgraf, F., Machado, R., de Campos, M. (2012).
Effect of plastic deformation on the excess loss of electrical steel. IEEE Transactions on Magnetics, 48(4): 1425-1428. https://doi.org/10.1109/TMAG.2011.2174214 [16] Lazreg, S., Hubert, O. (2012). Influence of plasticity on magnetic and magnetostrictive behaviors of dual-phase steel. IEEE Transactions on Magnetics, 48(4): 1273-1276. https://doi.org/10.1109/TMAG.2011.2172936 [17] Daniel, L. (2018). An analytical model for the magnetostriction strain of ferromagnetic materials subjected to multiaxial stress. The European Physical Journal Applied Physics, 83(3): 30904. https://doi.org/10.1109/TMAG.2013.2239264 [18] Daniel L., Hubert, O., Buiron, N., Billardon, R. (2008). Reversible magneto-elastic behavior: A multiscale approach. Journal of the Mechanics and Physics of Solids, 56(3): 1018-1042. https://doi.org/10.1016/j.jmps.2007.06.003 [19] Hahner, P., Bay, K., Zaiser, M. (1998). Fractal dislocation patterning during plastic deformation. Physical Review Letters, 81(12): 2470-2473. https://doi.org/10.1103/PhysRevLett.81.2470 [20] Jakobsen, B., Poulsen, H.F., Lienert, U., Almer, J., Shastri, S.D., Sørensen, H.O., Gundlach, C., Pantleon, W. (2006). Formation and subdivision of deformation structures during plastic deformation. Science, 312(5775): 889-892. https://doi.org/10.1126/science.1124141 [21] Saini, P., Arora, M. (2012). Microwave absorption and emi shielding behavior of nanocomposites based on intrinsically conducting polymers, graphene and carbon nanotubes. in New Polymers for Special Applications. InTech. https://doi.org/10.5772/48779 [22] Dewers, T.A., Issen, K.A., Holcomb, D.J., Olsson, W.A., Ingraham, M.D. (2017). Strain localization and elastic-plastic coupling during deformation of porous sandstone. International Journal of Rock Mechanics and Mining Sciences, 98: 167-180. https://doi.org/10.1016/j.ijrmms.2017.06.005 [23] Gupta, Y., Mandal, A. (2017). Elastic-plastic deformation of molybdenum single crystals shocked to 12.5 gpa: Crystal anisotropy effects. in APS Shock Compression of Condensed Matter Meeting Abstracts. https://doi.org/10.1063/1.5048131 [24] Voicu, V., P˘atru, I., Nicolae, P.M., Dina, L.A. (2017). Analyzing the attenuation of electromagnetic shielding materials for frequencies under 1 ghz. in Advanced Topics in Electrical Engineering (ATEE), 2017 10th International Symposium on. IEEE, 2017, pp. 336-339. https://doi.org/10.1109/ATEE.2017.7905057 [25] Pan, S.M., Kim, J., Kim, S., Park, J., Oh, H., Fan, J. (2010). An equivalent three-dipole model for IC radiated emissions based on TEM cell measurements. In: 2010 IEEE International Symposium on Electromagnetic Compatibility. IEEE, pp. 652-656. https://doi.org/10.1109/isemc.2010.5711354 [26] Tumayan, R., Bunlon, X., Reineix, A., Andrieu, G., Guiffaut, C. (2014). A method using an open TEM cell to extract the complex permittivity of an unknown material. 2014 International Symposium on Electromagnetic Compatibility. IEEE. https://doi.org/10.1109/emceurope.2014.6931067
Differentially Quantized Gradient Descent
Chung-Yi Lin, Victoria Kostina, Babak Hassibi

Consider the following distributed optimization scenario. A worker has access to training data that it uses to compute the gradients while a server decides when to stop iterative computation based on its target accuracy or delay constraints. The only information that the server knows about the problem instance is what it receives from the worker via a rate-limited noiseless communication channel. We introduce the technique we call differential quantization (DQ) that compensates past quantization errors to make the descent trajectory of a quantized algorithm follow that of its unquantized counterpart. Assuming that the objective function is smooth and strongly convex, we prove that differentially quantized gradient descent (DQ-GD) attains a linear convergence rate of $\max\{\sigma_{\text{GD}}, \rho_{n}2^{-R}\}$, where $\sigma_{\text{GD}}$ is the convergence rate of unquantized gradient descent (GD), $\rho_{n}$ is the covering efficiency of the quantizer, and $R$ is the bitrate per problem dimension $n$. Thus at any $R\geq\log_{2}\rho_{n}/\sigma_{\text{GD}}$, the convergence rate of DQ-GD is the same as that of unquantized GD, i.e., there is no loss due to quantization. We show a converse demonstrating that no GD-like quantized algorithm can converge faster than $\max\{\sigma_{\text{GD}}, 2^{-R}\}$. Since quantizers exist with $\rho_{n}\rightarrow 1$ as $n\rightarrow\infty$ (Rogers, 1963), this means that DQ-GD is asymptotically optimal. In contrast, naively quantized GD where the worker directly quantizes the gradient attains only $\sigma_{\text{GD}}+\rho_{n}2^{-R}$. The technique of differential quantization continues to apply to gradient methods with momentum such as Nesterov's accelerated gradient descent, and Polyak's heavy ball method. For these algorithms as well, if the rate is above a certain threshold, there is no loss in convergence rate obtained by the differentially quantized algorithm compared to its unquantized counterpart. Experimental results on both simulated and real-world least-squares problems validate our theoretical analysis.

2021 IEEE International Symposium on Information Theory (ISIT). https://doi.org/10.1109/isit45174.2021.9518254
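The following is a minimal numerical sketch of the error-compensation idea described in the abstract: instead of sending a quantized raw gradient, the worker adds the previous quantization error back before quantizing, so that the accumulated error stays bounded and the quantized trajectory tracks the unquantized one. The toy least-squares objective, the scalar uniform quantizer, the step size, and the particular error-feedback form used here are illustrative assumptions and are not taken from the paper.

import numpy as np

# Toy strongly convex least-squares problem: f(x) = 0.5 * ||A x - b||^2
rng = np.random.default_rng(0)
n = 5                                     # problem dimension
A = rng.normal(size=(20, n))
b = rng.normal(size=20)
x_star, *_ = np.linalg.lstsq(A, b, rcond=None)

def grad(x):
    return A.T @ (A @ x - b)

def quantize(v, step=0.5):
    # Illustrative scalar uniform quantizer (rounding to a grid); the paper
    # analyzes general covering-efficient vector quantizers instead.
    return step * np.round(v / step)

eta = 1.0 / np.linalg.norm(A.T @ A, 2)    # step size 1/L
x_naive = np.zeros(n)                     # naively quantized GD iterate
x_dq = np.zeros(n)                        # differentially quantized GD iterate
u = np.zeros(n)                           # accumulated quantization error (worker state)

for t in range(200):
    # Naive scheme: the worker sends a quantized gradient directly.
    x_naive = x_naive - eta * quantize(grad(x_naive))

    # DQ-style scheme: the worker adds the previous quantization error back
    # before quantizing, keeping the error from accumulating over iterations.
    g = grad(x_dq) + u
    q = quantize(g)
    u = g - q                             # error carried over to the next iteration
    x_dq = x_dq - eta * q

print("naive quantized GD error:", np.linalg.norm(x_naive - x_star))
print("error-compensated error :", np.linalg.norm(x_dq - x_star))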
8 (eight) is the natural number following 7 and preceding 9.

Divisors: 1, 2, 4, 8. Roman numeral: VIII (Ⅷ, ⅷ). Greek numeral: η (or Η). Greek prefix: octa-/oct-. Latin prefix: octo-/oct-. Persian, Arabic and Kurdish: ٨. Devanāgarī: ८. Chinese numeral: 八, 捌. Ge'ez numeral: ፰. Armenian: Ը ը.

In mathematics

8 is:
- a composite number, its proper divisors being 1, 2, and 4. It is twice 4 or four times 2.
- a power of two, being 2³ (two cubed), and is the first number of the form p³, p being an integer greater than 1.
- the first number which is neither prime nor semiprime.
- the base of the octal number system, which is mostly used with computers. In octal, one digit represents three bits. In modern computers, a byte is a grouping of eight bits, also called an octet.
- a Fibonacci number, being 3 plus 5. The next Fibonacci number is 13. 8 is the only positive Fibonacci number, aside from 1, that is a perfect cube.[1]
- the only nonzero perfect power that is one less than another perfect power, by Mihăilescu's Theorem.
- the order of the smallest non-abelian group all of whose subgroups are normal.
- the dimension of the octonions and is the highest possible dimension of a normed division algebra.
- the first number to be the aliquot sum of two numbers other than itself: the discrete biprime 10, and the square number 49.

A number is divisible by 8 if its last three digits, when written in decimal, are also divisible by 8, or its last three digits are 0 when written in binary.

There are a total of eight convex deltahedra. A polygon with eight sides is an octagon. Figurate numbers representing octagons (including eight) are called octagonal numbers. A polyhedron with eight faces is an octahedron. A cuboctahedron has as faces six equal squares and eight equal regular triangles. A cube has eight vertices. Sphenic numbers always have exactly eight divisors.

The number 8 is involved with a number of interesting mathematical phenomena related to the notion of Bott periodicity. For example, if O(∞) is the direct limit of the inclusions of real orthogonal groups O(1) ↪ O(2) ↪ … ↪ O(k) ↪ …, then π_{k+8}(O(∞)) ≅ π_k(O(∞)). Clifford algebras also display a periodicity of 8. For example, the algebra Cl(p + 8, q) is isomorphic to the algebra of 16 by 16 matrices with entries in Cl(p, q). We also see a period of 8 in the K-theory of spheres and in the representation theory of the rotation groups, the latter giving rise to the 8 by 8 spinorial chessboard. All of these properties are closely related to the properties of the octonions. The spin group Spin(8) is the unique such group that exhibits the phenomenon of triality.
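The divisibility rule stated above can be checked numerically; the small helper below is only an illustrative snippet, not part of the article.

def divisible_by_8(n):
    # The rule: a decimal number is divisible by 8 exactly when the number
    # formed by its last three decimal digits is divisible by 8.
    return (abs(n) % 1000) % 8 == 0

# A few spot checks against direct division.
for n in (8, 109816, 216302, 7, 1000, 123456):
    assert divisible_by_8(n) == (n % 8 == 0)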
The lowest-dimensional even unimodular lattice is the 8-dimensional E8 lattice. Even positive definite unimodular lattices exist only in dimensions divisible by 8.

A figure 8 is the common name of a geometric shape, often used in the context of sports, such as skating. Figure-eight turns of a rope or cable around a cleat, pin, or bitt are used to belay something.

List of basic calculations

Multiplication (8 × x), for x = 1 … 15: 8, 16, 24, 32, 40, 48, 56, 64, 72, 80, 88, 96, 104, 112, 120
Division (8 ÷ x), for x = 1 … 15: 8, 4, 2.666…, 2, 1.6, 1.333…, 1.142857…, 1, 0.888…, 0.8, 0.7272…, 0.666…, 0.615384…, 0.571428…, 0.5333…
Division (x ÷ 8), for x = 1 … 15: 0.125, 0.25, 0.375, 0.5, 0.625, 0.75, 0.875, 1, 1.125, 1.25, 1.375, 1.5, 1.625, 1.75, 1.875
Exponentiation (8^x), for x = 1 … 13: 8, 64, 512, 4096, 32768, 262144, 2097152, 16777216, 134217728, 1073741824, 8589934592, 68719476736, 549755813888
Exponentiation (x^8), for x = 1 … 10: 1, 256, 6561, 65536, 390625, 1679616, 5764801, 16777216, 43046721, 100000000

Etymology

English eight, from Old English eahta, æhta, Proto-Germanic *ahto, is a direct continuation of Proto-Indo-European *oḱtṓ(w)-, and as such cognate with Greek ὀκτώ and Latin octo-, both of which stems are reflected by the English prefix oct(o)-, as in the ordinal adjective octaval or octavary; the distributive adjective is octonary. The adjective octuple (Latin octu-plus) may also be used as a noun, meaning "a set of eight items"; the diminutive octuplet is mostly used to refer to eight siblings delivered in one birth. The Semitic numeral is based on a root *θmn-, whence Akkadian smn-, Arabic ṯmn-, Hebrew šmn- etc. The Chinese numeral, written 八 (Mandarin: bā; Cantonese: baat), is from Old Chinese *priāt-, ultimately from Sino-Tibetan b-r-gyat or b-g-ryat, which also yielded Tibetan brgyat.

It has been argued that, as the cardinal number 7 is the highest number of items that can universally be cognitively processed as a single set, the etymology of the numeral eight might be the first to be considered composite, either as "twice four" or as "two short of ten", or similar. The Turkic words for "eight" are from a Proto-Turkic stem *sekiz, which has been suggested as originating as a negation of eki "two", as in "without two fingers" (i.e., "two short of ten; two fingers are not being held up");[2] this same principle is found in Finnic *kakte-ksa, which conveys a meaning of "two before (ten)". The Proto-Indo-European reconstruction *oḱtṓ(w)- itself has been argued as representing an old dual, which would correspond to an original meaning of "twice four". Proponents of this "quaternary hypothesis" adduce the numeral 9, which might be built on the stem new-, meaning "new" (indicating the beginning of a "new set of numerals" after having counted to eight).[3]

Glyph

Evolution of the numeral 8 from the Indians to the Europeans

The modern 8 glyph, like all modern Arabic numerals (other than zero), originates with the Brahmi numerals. The Brahmi numeral for eight by the 1st century was written in one stroke as a curve └┐ looking like an uppercase H with the bottom half of the left line and the upper half of the right line removed.
However, the eight glyph used in India in the early centuries of the Common Era developed considerable variation, and in some cases took the shape of a single wedge, which was adopted into the Perso-Arabic tradition as ٨ (and also gave rise to the later Devanagari numeral ८); the alternative curved glyph also existed as a variant in the Perso-Arabic tradition, where it came to look similar to our glyph 5. The numerals as used in Al-Andalus by the 10th century were a distinctive western variant of the glyphs used in the Arabic-speaking world, known as ghubār numerals (ghubār translating to "sand table"). In these numerals, the line of the 5-like glyph used in Indian manuscripts for eight came to be formed in ghubār as a closed loop, which was the 8-shape that became adopted into European use in the 10th century.[4]

Just as in most modern typefaces, in typefaces with text figures the 8 character usually has an ascender. The infinity symbol ∞, described as a "sideways figure eight", is unrelated to the 8 glyph in origin; it is first used (in the mathematical meaning "infinity") in the 17th century, and it may be derived from the Roman numeral for "one thousand" CIƆ, or alternatively from the final Greek letter, ω.

The numeral eight in Greek numerals, developed in Classical Greece by the 5th century BC, was written as Η, the eighth letter of the Greek alphabet. The Chinese numeral eight is written in two strokes, 八; the glyph is also the 12th Kangxi radical.

In science

Physics
- In nuclear physics, the second magic number.
- In particle physics, the eightfold way is used to classify sub-atomic particles.
- In statistical mechanics, the eight-vertex model has 8 possible configurations of arrows at each vertex.

Astronomy
- Messier object M8, a magnitude 5.0 nebula in the constellation of Sagittarius.
- The New General Catalogue object NGC 8, a double star in the constellation Pegasus.
- Since the demotion of Pluto to a dwarf planet on August 24, 2006, eight of the bodies orbiting the Sun in our Solar System are considered to be planets.

Chemistry
- The atomic number of oxygen.
- The number of allotropes of carbon.
- The most stable allotrope of a sulfur molecule is made of eight sulfur atoms arranged in a rhombic form.
- The maximum number of electrons that can occupy a valence shell.
- The red pigment lycopene consists of eight isoprene units.

Geology
- A disphenoid crystal is bounded by eight scalene triangles arranged in pairs.
- A ditetragonal prism in the tetragonal crystal system has eight similar faces whose alternate interfacial angles only are equal.

Biology
- All spiders, and more generally all arachnids, have eight legs.
- Orb-weaver spiders of the cosmopolitan family Areneidae have eight similar eyes.
- The octopus and its cephalopod relatives in genus Argonauta have eight arms (tentacles).
- Compound coelenterates of the subclass or order Alcyonaria have polyps with eight-branched tentacles and eight septa.
- Sea anemones of genus Edwardsia have eight mesenteries.
- Animals of phylum Ctenophora swim by means of eight meridional bands of transverse ciliated plates, each plate representing a row of large modified cilia.
- The eight-spotted forester (genus Alypia, family Zygaenidae) is a diurnal moth having black wings with brilliant white spots.
- The ascus in fungi of the class Ascomycetes, following nuclear fusion, bears within it typically eight ascospores.
Herbs of genus Coreopsis (tickseed) have showy flower heads with involucral bracts in two distinct series of eight each. Timothy Leary identified a hierarchy of eight levels of consciousness. In human adult dentition there are eight teeth in each quadrant. The eighth tooth is the so-called wisdom tooth. There are eight cervical nerves on each side in man and most mammals. In Psychology[edit] There are eight Jungian cognitive functions, according to the MBTI models by John Beebe and Linda Berens. In technology[edit] NATO signal flag for 8 A byte is eight bits. Many (mostly historic) computer architectures are eight-bit, among them the Nintendo Entertainment System. Standard-8 and Super-8 are 8 mm film formats. Video8, Hi8 and Digital8 are related 8 mm video formats. On most phones, the 8 key is associated with the letters T, U, and V, but on the BlackBerry it is the key for B, N, and X. An eight may refer to an eight-cylinder engine or automobile. A V8 engine is an internal combustion engine with eight cylinders configured in two banks (rows) of four forming a "V" when seen from the end. A figure-eight knot (so named for its configuration) is a kind of stopper knot. The number eight written in parentheses is the code for the musical note in Windows Live Messenger. In a seven-segment display, when an 8 is illuminated, all the display bulbs are on. In measurement[edit] The SI prefix for 10008 is yotta (Y), and for its reciprocal, yocto (y). In liquid measurement (United States customary units), there are eight fluid ounces in a cup, eight pints in a gallon and eight tablespoonfuls in a gill. There are eight furlongs in a mile. The clove, an old English unit of weight, was equal to eight pounds when measuring cheese. An eight may be an article of clothing of the eighth size. Force eight is the first wind strength attributed to a gale on the Beaufort scale when announced on a Shipping Forecast. In culture[edit] Various types of buildings are usually eight-sided (octagonal), such as single-roomed gazebos and multi-roomed pagodas (descended from stupas; see religion section below). Eight caulicoles rise out of the leafage in a Corinthian capital, ending in leaves that support the volutes. In religion, folk belief and divination[edit] Also known as Ashtha, Aṣṭa, or Ashta in Sanskrit, it is the number of wealth and abundance. The Goddess of wealth and prosperity Lakshmi has eight forms which is known as Ashta Lakshmi and worshipped as: "Maha-lakshmi,Dhana-lakshmi,Dhanya-lakshmi,Gaja-lakshmi, Santana-lakshmi,Veera-lakshmi,Vijaya-lakshmi and Vidhya-lakshmi" There are eight nidhi, or seats of wealth according to Hinduism. There are eight Guardians of the directions known as Astha-dikpalas. There are eight Hindu monasteries established by saint Madhvacharya in Udupi, India popularly known as the Ashta Mathas of Udupi. In Buddhism, the 8-spoked Dharmacakra represents the Noble Eightfold Path The Dharmacakra, a Buddhist symbol, has eight spokes. The Buddha's principal teaching—the Four Noble Truths—ramifies as the Noble Eightfold Path and the Buddha emphasizes the importance of the eight attainments or jhanas. In Mahayana Buddhism, the branches of the Eightfold Path are embodied by the Eight Great Bodhisattvas: (Manjusri, Vajrapani, Avalokiteśvara, Maitreya, Ksitigarbha, Nivaranavishkambhi, Akasagarbha, and Samantabhadra). 
These are later (controversially) associated with the Eight Consciousnesses according to the Yogacara school of thought: consciousness in the five senses, thought-consciousness, self-consciousness, and unconsciousness-"consciousness" or "store-house consciousness" (alaya-vijñana). The "irreversible" state of enlightenment, at which point a Bodhisattva goes on "autopilot", is the Eight Ground or bhūmi. In general, "eight" seems to be an auspicious number for Buddhists, e.g., the "eight auspicious symbols" (the jewel-encrusted parasol; the goldfish (always shown as a pair, e.g., the glyph of Pisces); the self-replenishing amphora; the white kamala lotus-flower; the white conch; the eternal (Celtic-style, infinitely looping) knot; the banner of imperial victory; the eight-spoked wheel that guides the ship of state, or that symbolizes the Buddha's teaching). Similarly, Buddha's birthday falls on the 8th day of the 4th month of the Chinese calendar. The religious rite of brit milah (commonly known as circumcision) is held on a baby boy's eighth day of life. Hanukkah is an eight-day Jewish holiday that starts on the 25th day of Kislev. Shemini Atzeret (Hebrew: "Eighth Day of Assembly") is a one-day Jewish holiday immediately following the seven-day holiday of Sukkot. The spiritual Eighth Day, because the number 7 refers to the days of the week (which repeat themselves). The number of Beatitudes. 1 Peter 3:20 states that there were eight people on Noah's Ark. The Antichrist is the eighth king in the Book of Revelation.[5] In Islam, eight is the number of angels carrying the throne of Allah in heaven. The number of gates of heaven. Taoism[edit] Ba Gua Ba Xian Ba Duan Jin Other[edit] The Eight Immortals are Chinese demigods. In Wicca, there are eight Sabbats, festivals, seasons, or spokes in the Wheel of the Year. In Taoism and Chinese cosmology, the eight trigrams of the Bagua. "Bagua" literally means "eight symbols". In Ancient Egyptian mythology, the Ogdoad represents the eight primordial deities of creation. In Scientology there are eight dynamics of existence. As a lucky number[edit] The number eight is considered to be a lucky number in Chinese and other Asian cultures.[6] Eight (八; accounting 捌; pinyin bā) is considered a lucky number in Chinese culture because it sounds like the word meaning to generate wealth (發(T) 发(S); Pinyin: fā). Property with the number 8 may be valued greatly by Chinese. For example, a Hong Kong number plate with the number 8 was sold for $640,000.[7] The opening ceremony of the Summer Olympics in Beijing started at 8 seconds and 8 minutes past 8 pm (local time) on 8 August 2008.[8] Eight (八, hachi, ya) is also considered a lucky number in Japan, but the reason is different from that in Chinese culture. Eight gives an idea of growing prosperous, because the letter (八) broadens gradually. The Japanese thought of eight (や, ya) as a holy number in the ancient times. The reason is less well-understood, but it is thought that it is related to the fact they used eight to express large numbers vaguely such as manyfold (やえはたえ, Yae Hatae) (literally, eightfold and twentyfold), many clouds (やくも, Yakumo) (literally, eight clouds), millions and millions of Gods (やおよろずのかみ, Yaoyorozu no Kami) (literally, eight millions of Gods), etc. 
It is also guessed that the ancient Japanese gave importance to pairs, so some researchers guess twice as four (よ, yo), which is also guessed to be a holy number in those times because it indicates the world (north, south, east, and west) might be considered a very holy number. In numerology, 8 is the number of building, and in some theories, also the number of destruction. In astrology[edit] In astrology, Scorpio is the 8th astrological sign of the Zodiac. In the Middle Ages, 8 was the number of "unmoving" stars in the sky, and symbolized the perfection of incoming planetary energy. In music and dance[edit] A note played for one-eighth the duration of a whole note is called an eighth note, or quaver. An octave, the interval between two musical notes with the same letter name (where one has double the frequency of the other), is so called because there are eight notes between the two on a standard major or minor diatonic scale, including the notes themselves and without chromatic deviation. The ecclesiastical modes are ascending diatonic musical scales of eight notes or tones comprising an octave. There are eight notes in the octatonic scale. There are eight musicians in a double quartet or an octet. Both terms may also refer to a musical composition for eight voices or instruments. Caledonians is a square dance for eight, resembling the quadrille. Albums with the number eight in their title include 8 by the Swedish band Arvingarna, 8 by the American rock band Incubus, The Meaning of 8 by Minnesota indie rock band Cloud Cult and 8ight by Anglo-American singer-songwriter Beatie Wolfe. Dream Theater's eighth album Octavarium contains many different references to the number 8, including the number of songs and various aspects of the music and cover artwork. "Eight maids a-milking" is the gift on the eighth day of Christmas in the carol "The Twelve Days of Christmas". The 8-track cartridge is a musical recording format. "#8" is the stage name of Slipknot vocalist Corey Taylor. "Too Many Eights" is a song by Athens, Georgia's Supercluster. Eight Seconds, a Canadian musical group popular in the 1980s with their most notable song "Kiss You (When It's Dangerous)". "Eight Days a Week" is a #1 single for the music group The Beatles. Figure 8 is the fifth studio album by singer-songwriter Elliott Smith, released in the year 2000. Ming Hao from the k-pop group Seventeen goes by the name "The8". "8 (circle)" is the eighth song on the album 22, A Million by the American band Bon Iver. "8" is the eighth song on the album When We All Fall Asleep, Where Do We Go? by Billie Eilish. In film and television[edit] 8 Guys is a 2003 short film written and directed by Dane Cook. 8 Man (or Eightman): 1963 Japanese manga and anime superhero. 8 Mile is a 2002 film directed by Curtis Hanson. 8mm is a 1999 film directed by Joel Schumacher. 8 Women (Original French title: 8 femmes) is a 2002 film directed by François Ozon. Eight Below is a 2006 film directed by Frank Marshall. Eight Legged Freaks is a 2002 film directed by Ellory Elkayem. Eight Men Out is a 1988 film directed by John Sayles. Jennifer Eight, also known as Jennifer 8, is a 1992 film written and directed by Bruce Robinson. Eight Is Enough is an American television comedy-drama series. In Stargate SG-1 and Stargate Atlantis, dialing an 8-chevron address will open a wormhole to another galaxy. The Hateful Eight is a 2015 American western mystery film written and directed by Quentin Tarantino. Kate Plus 8 is an American reality television show. 
The Weather Channel's segment Local on the 8s features daily and weekly forecasts for regions and cities all over the United States. SBS 8 News is a South Korean primetime news program broadcast on SBS. In sports and other games[edit] An 8-ball in billiards Eight-ball pocket billiards is played with a cue ball and 15 numbered balls, the black ball numbered 8 being the middle and most important one, as the winner is the player or side that legally pockets it after first pocketing its numerical group of 7 object balls (for other meanings see Eight ball (disambiguation)). Balklines divide a billiards table into eight outside compartments or divisions called balks. In balkline billiards the table also has eight anchor spaces. In chess, each side has eight pawns and the board is made of 64 squares arranged in an eight by eight lattice. The eight queens puzzle is a challenge to arrange eight queens on the board so that none can capture any of the others. In the game of eights or Crazy Eights, each successive player must play a card either of the same suit or of the same rank as that played by the preceding player, or may play an eight and call for any suit. The object is to get rid of all one's cards first. In association football, the number 8 has historically been the number of the Central Midfielder. in baseball: The center fielder is designated as number 8 for scorekeeping purposes. The College World Series, the final phase of the NCAA Division I tournament, features eight teams. In rugby union, the only position without a proper name is the Number 8, a forward position. In most rugby league competitions (though not the Super League, which uses static squad numbering), one of the two starting props wears the number 8. In rowing, an "eight" refers to a sweep-oar racing boat with a crew of eight rowers plus a coxswain. In the 2008 Games of the XXIX Olympiad, the official opening was on 08/08/08 at 8:08:08 p.m. local time in Beijing, China. In The Stanley Parable Demonstration, there is an eight button, that, when pressed, says the word eight. In the racing video game Mario Kart 8, there is an item called the Crazy Eight. In foods[edit] Nestlé sells a brand of chocolates filled with peppermint-flavoured cream called After Eight, referring to the time 8 p.m. There are eight vegetables in V8 juice. In cooking recipes, there are approximately 8 pinches to a teaspoon. In literature[edit] Eights may refer to octosyllabic, usually iambic, lines of verse. The drott-kvaett, an Old Icelandic verse, consisted of a stanza of eight regular lines. In Terry Pratchett's Discworld series, eight is a magical number and is considered taboo. Eight is not safe to be said by wizards on the Discworld and is the number of Bel-Shamharoth. Also, there are eight days in a Disc week and eight colours in a Disc spectrum, the eighth one being octarine. Lewis Carroll's poem The Hunting of the Snark has 8 "fits" (cantos), which is noted in the full name "The Hunting of the Snark – An Agony, in Eight Fits. Eight apparitions appear to Macbeth in Act 4 scene 1 of Shakespeare's Macbeth as representations of the eight descendants of Banquo. In slang[edit] An "eighth" is a common measurement of marijuana, meaning an eighth of an ounce. It is also a common unit of sale for psilocybin mushrooms. 
Also, an eighth of an ounce of cocaine is commonly referred to as an "8-ball."[9] The numeral "8" is sometimes used in informal writing and Internet slang to represent the syllable "ate", as in writing "H8" for "hate", or "congratul8ions" for "congratulations". Avril Lavigne's song "Sk8er Boi" uses this convention in the title. "Section 8" is common U.S. slang for "crazy", based on the U.S. military's Section 8 discharge for mentally unfit personnel. The Housing Choice Voucher Program, operated by the United States Department of Housing and Urban Development, is commonly referred to as the Section 8 program, as this was the original section of the Act which instituted the program. In Colombia and Venezuela, "volverse un ocho" (meaning to tie oneself in a figure 8) refers to getting into trouble or contradicting oneself. In China, "8" is used in chat speak as a term for parting. This is due to the closeness in pronunciation of "8" (bā) and the English word "bye".

See also: The Magical Number Seven, Plus or Minus Two

References
[1] Bryan Bunch, The Kingdom of Infinite Number. New York: W. H. Freeman & Company (2000): 88.
[2] Etymological Dictionary of Turkic Languages: Common Turkic and Interturkic stems starting with letters «L», «M», «N», «P», «S», Vostochnaja Literatura RAS, 2003, 241f. (altaica.ru).
[3] The hypothesis is discussed critically (and rejected as "without sufficient support") by Werner Winter, "Some thought about Indo-European numerals", in: Jadranka Gvozdanović (ed.), Indo-European Numerals, Walter de Gruyter, 1992, 14f.
[4] Georges Ifrah, The Universal History of Numbers: From Prehistory to the Invention of the Computer, transl. David Bellos et al. London: The Harvill Press (1998): 395, Fig. 24.68.
[5] Bruce B. Barton, Life Application New Testament Commentary. Tyndale House Publishers, Inc., 2001. ISBN 0-8423-7066-8, ISBN 978-0-8423-7066-0. p. 1257.
[6] Ang, Swee Hoon (1997). "Chinese consumers' perception of alpha-numeric brand names". Journal of Consumer Marketing, 14(3): 220-233. doi:10.1108/07363769710166800.
[7] Steven C. Bourassa; Vincent S. Peng (1999). "Hedonic Prices and House Numbers: The Influence of Feng Shui". International Real Estate Review, 2(1): 79-93.
[8] "Patriot games: China makes its point with greatest show", by Richard Williams, The Guardian, 9 August 2008.
[9] "Cocaine - Frequently Asked Questions". thegooddrugsguide.com.

Further reading: The Octonions, John C. Baez.
Improved graph-based SFA: information preservation complements the slowness principle
Alberto N. Escalante-B. (ORCID: orcid.org/0000-0002-8704-1432) & Laurenz Wiskott (ORCID: orcid.org/0000-0001-6237-740X)
Machine Learning, volume 109, pages 999-1037 (2020)

Slow feature analysis (SFA) is an unsupervised learning algorithm that extracts slowly varying features from a multi-dimensional time series. SFA has been extended to supervised learning (classification and regression) by an algorithm called graph-based SFA (GSFA). GSFA relies on a particular graph structure to extract features that preserve label similarities. Processing of high dimensional input data (e.g., images) is feasible via hierarchical GSFA (HGSFA), resulting in a multi-layer neural network. Although HGSFA has useful properties, in this work we identify a shortcoming, namely, that HGSFA networks prematurely discard quickly varying but useful features before they reach higher layers, resulting in suboptimal global slowness and an under-exploited feature space. To counteract this shortcoming, which we call unnecessary information loss, we propose an extension called hierarchical information-preserving GSFA (HiGSFA), where some features fulfill a slowness objective and other features fulfill an information preservation objective. The efficacy of the extension is verified in three experiments: (1) an unsupervised setup where the input data is the visual stimuli of a simulated rat, (2) the localization of faces in image patches, and (3) the estimation of human age from facial photographs of the MORPH-II database. Both HiGSFA and HGSFA can learn multiple labels and offer a rich feature space, feed-forward training, and linear complexity in the number of samples and dimensions. However, the proposed algorithm, HiGSFA, outperforms HGSFA in terms of feature slowness, estimation accuracy, and input reconstruction, giving rise to a promising hierarchical supervised-learning approach. Moreover, for age estimation, HiGSFA achieves a mean absolute error of 3.41 years, which is a competitive performance for this challenging problem.

Dimensionality reduction (DR) has many applications in computer vision and machine learning, where it is frequently used, for example, as a pre-processing step for solving classification and regression problems on high-dimensional data. DR can be done in either a supervised or an unsupervised fashion. An approach for unsupervised DR relies on the slowness principle, which requires the extraction of the slowest varying features (Hinton 1989). This principle forms the basis of the slow feature analysis (SFA) algorithm (Wiskott 1998; Wiskott and Sejnowski 2002), which extracts temporally stable features. Moreover, it has been shown that this principle might explain in part how the neurons in the brain self-organize to compute invariant representations. When the final goal is supervised learning, supervised DR is more appropriate. Its objective is to extract a low-dimensional representation of the input samples that contains the predictive information about the labels (e.g., Rish et al. 2008). An advantage is that dimensions irrelevant to the supervised problem can be discarded, resulting in a more compact representation and more accurate estimations. Although SFA is unsupervised it can be used for supervised DR, because the order of the samples can be regarded as a weak form of supervised information (consecutive samples are likely to have similar labels).
However, an extension to SFA called graph-based SFA (GSFA, Escalante-B. and Wiskott 2012, 2013) is preferable for supervised DR, because it uses the labels explicitly, yielding more accurate label estimations. The input data of GSFA is a graph structure where the vertices are the samples and label similarities can be encoded by the weights of the edges connecting pairs of samples. Technical applications of GSFA include the estimation of age and gender from synthetic face images (Escalante-B. and Wiskott 2010), traffic sign classification (Escalante-B. and Wiskott 2016), and face detection (Mohamed and Mahdi 2010). A frequent problem of DR algorithms is their high computational cost when the input data is high-dimensional. For example, when GSFA is applied to the data directly, what we call direct GSFA, it has cubic complexity w.r.t. the number of dimensions. However, great efficiency can be achieved by resorting to hierarchical GSFA (HGSFA), where the extraction of slow features is done progressively: in a first layer, slow features are extracted from lower dimensional splits of the input data by independent instances of GSFA, called GSFA nodes, reducing the dimensionality of each split. This procedure can be repeated in the next layers until the number of dimensions is small enough. Additional advantages of HGSFA over direct GSFA include lower memory complexity, a more complex global nonlinear feature space, and less overfitting. Although SFA is based on the promising slowness principle, HSFA has not achieved state-of-the-art performance on practical applications. The question of why this is the case has puzzled researchers (Goodfellow et al. 2016). This work contributes to the solution of this question. Through an assessment of HGSFA and HSFA networks, we show that HGSFA suffers from an important shortcoming: The GSFA nodes of the network may prematurely discard features that are not slow at a local scale but that would have been useful to improve global slowness (i.e., the slowness of the final output features) if combined in subsequent nodes with features computed by other nodes. This drawback, which we call unnecessary information loss, leads to an under-exploited feature space, i.e., the global feature space contains slower features than those actually extracted by HGSFA. As a countermeasure to unnecessary information loss, we propose to complement slowness with information preservation (i.e., maximization of mutual information between the input data and the output features). For simplicity and efficiency, we implement this idea as a minimization of the mean squared reconstruction error with PCA. The resulting network that considers both optimization goals is called hierarchical information-preserving GSFA (HiGSFA), and the algorithm used in each node of the network is called information-preserving GSFA (iGSFA). The feature vector computed by iGSFA comprises two parts: (1) a slow part, which is a linear mixture of the (nonlinear) features computed using GSFA, and (2) an input-reconstructive part computed using PCA. The iGSFA algorithm, which is the main building block of HiGSFA, reduces the redundancy between the slow and reconstructive parts by making both parts decorrelated: The reconstructive part does not directly approximate the input data but only a version of it where the slow part has been projected out, called residual data. Moreover, iGSFA also ensures that the scale of the slow components is compatible with the scale of the reconstructive components. 
This enables meaningful processing by PCA in subsequent layers (PCA is sensitive to input feature magnitudes). Different versions of SFA with and without information preservation are shown in Fig. 1. A fundamental motivation for investigating H(iG)SFA is that, in contrast to gradient-based training of neural networks, it has a stronger biological feasibility because each iGSFA node adapts only to its local input. Thus, the method allows massive parallelization and scalability. On the other hand, it does not minimize a global loss function explicitly, as gradient-based methods do, but our experimental results prove that one can achieve remarkable accuracy with the resulting networks. The experiments show the versatility and generality of the proposed extensions in three setups: (1) Unsupervised learning, where the input data is the view of a simulated rat moving inside a box. This is solved with HiSFA, a special case of HiGSFA. (2) Supervised learning, where four pose parameters of faces depicted in image patches must be estimated. This problem is closely connected to face detection and tracking. (3) A supervised problem requiring the estimation of age from facial photographs. All three experiments show the advantages of using information preservation: (a) slower features, (b) better generalization to unseen data, (c) much better input reconstruction (see Fig. 2), and (d) improved classification/regression accuracy (experiments 2 and 3). Furthermore, the computational and memory requirements of HiGSFA are moderate, having the same asymptotic order as those of HGSFA.

Fig. 1: Illustration of three basic extensions to SFA (graph-based, hierarchical, information-preserving), each of them represented by a different direction. The combined use of these extensions results in 8 different variants of SFA. This article proposes information preservation, which is used in iSFA, iGSFA, HiSFA, and HiGSFA, the last one being the most promising variant of SFA.

Fig. 2: (a) An image from a private database after pose normalization. (b) The same image fully pre-processed (i.e., after pose normalization and face sampling, 96\(\times \)96 pixels). Linear reconstructions on 75 features extracted with either (c) PCA, (d) HiGSFA or (e) HGSFA. (f) Average over all pre-processed images of the MORPH-II database. Notice that the reconstruction using HiGSFA features is most similar to that of PCA, whereas the reconstruction using HGSFA features is most similar to the average image.

HiGSFA is the main extension to SFA proposed in this work and belongs to supervised DR. Other algorithms for supervised DR include Fisher discriminant analysis (FDA, Fisher 1936), local FDA (LFDA, Sugiyama 2006), pairwise constraints-guided feature projection (PCGFP, Tang and Zhong 2007), semi-supervised dimensionality reduction (SSDR, Zhang et al. 2007), and semi-supervised LFDA (SELF, Sugiyama et al. 2010). Previous extensions to SFA include extended SFA (xSFA, Sprekeler et al. 2014), generalized SFA (genSFA, Sprekeler 2011) and graph-based SFA (GSFA, Escalante-B. and Wiskott 2010, 2012, 2013). With some adaptations, SFA has been shown to be successful for classification (e.g., Berkes 2005b; Koch et al. 2010; Franzius et al. 2011; Kuhnl et al. 2011; Zhang and Tao 2012), and regression (e.g., Franzius et al. 2011). HiGSFA extends hierarchical GSFA (HGSFA) by adding information preservation.
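Before the formal definitions in the following sections, a minimal numerical sketch of the two-part feature construction described above may help: a slow part is extracted first, the input is then projected onto the orthogonal complement of that part (the residual data), and a PCA-based reconstructive part is appended. The sketch uses plain linear SFA inside the node and leaves the slow features at unit variance; both are simplifying assumptions of this illustration, since the article uses GSFA with training graphs and specifies its own scaling of the slow part.

import numpy as np
from scipy.linalg import eigh

def linear_sfa(X, n_slow):
    # Plain linear SFA on a time series X (samples in rows).
    # Returns a projection matrix whose columns give the slowest features.
    Xc = X - X.mean(axis=0)
    Xdot = np.diff(Xc, axis=0)                 # discrete-time derivative
    A = Xdot.T @ Xdot / len(Xdot)              # covariance of the derivatives
    B = Xc.T @ Xc / len(Xc)                    # covariance of the data
    eigvals, W = eigh(A, B)                    # generalized eigenproblem, ascending
    return W[:, :n_slow]                       # slowest directions first

def isfa_node(X, n_slow, n_rec):
    # Sketch of a single iSFA node: slow part plus PCA on the residual data.
    Xc = X - X.mean(axis=0)
    W = linear_sfa(Xc, n_slow)
    Y_slow = Xc @ W                            # slow part (left at unit variance here,
                                               # an assumption; the article rescales it)
    # Residual data: project the slow part out of the input via least squares.
    M, *_ = np.linalg.lstsq(Y_slow, Xc, rcond=None)
    X_res = Xc - Y_slow @ M
    # Reconstructive part: principal components of the residual.
    U, S, Vt = np.linalg.svd(X_res, full_matrices=False)
    Y_rec = X_res @ Vt[:n_rec].T
    return np.hstack([Y_slow, Y_rec])          # slow and reconstructive parts, decorrelated

# Toy usage: a slow latent signal embedded in noisy high-dimensional data.
rng = np.random.default_rng(0)
t = np.linspace(0, 20, 500)
latent = np.sin(t)
X = np.outer(latent, rng.normal(size=20)) + 0.3 * rng.normal(size=(500, 20))
features = isfa_node(X, n_slow=2, n_rec=5)
print(features.shape)                          # (500, 7)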
Slow Feature Analysis (SFA) Slow features can be computed using a few methods, such as online learning rules (e.g., Földiák 1991; Mitchison 1991), slow feature analysis (SFA, Wiskott 1998; Wiskott and Sejnowski 2002), which is a closed-form algorithm specific for this task and has biologically feasible variants (Sprekeler et al. 2007), and an incremental-learning version (inc-SFA, Kompella et al. 2012). The SFA optimization problem is defined as follows (Wiskott 1998; Wiskott and Sejnowski 2002; Berkes and Wiskott 2005). Given an I-dimensional input signal \({\mathbf {x}}(t)=(x_1(t), \ldots , x_{I}(t))^T\), where \(t \in \mathbb {R}\), find J instantaneous scalar functions \(g_j: \mathbb {R}^I \rightarrow \mathbb {R}\), for \(1 \le j \le J\), within a function space \(\mathcal {F}\) such that the output signal components \(y_j(t) {\mathop {=}\limits ^{{\mathrm {def}}}}g_j({\mathbf {x}}(t))\) minimize the objective function \(\varDelta (y_j) {\mathop {=}\limits ^{{\mathrm {def}}}}\langle \dot{y}_j(t)^2 \rangle _t\) (delta value), under the following constraints: (1) \(\langle y_j(t) \rangle _t = 0\) (zero mean), (2) \(\langle y_j(t)^2 \rangle _t = 1\) (unit variance), and (3) \(\langle y_j(t) y_{j'}(t) \rangle _t = 0, \forall j'<j\) (decorrelation and order). The delta value \(\varDelta (y_j)\) is defined as the time average \((\langle \cdot \rangle _t)\) of the squared derivative of \(y_j\) and is therefore a measure of the slowness (or rather fastness) of the signal, whereas the constraints require that the output signals are normalized, not constant, and represent different aspects of the input signal. SFA computes an optimal solution to the problem above within a linear feature space (possibly after the data has been expanded nonlinearly). Typically, discrete time is used, and the derivative is approximated as \(\dot{y}_j(t) {\mathop {=}\limits ^{{\mathrm {def}}}}y_j(t+1) - y_j(t)\). Hierarchical SFA (HSFA) and terminology Hierarchical SFA (HSFA) was already introduced in the paper that first introduces the SFA algorithm (Wiskott 1998), where it is employed as a model of the visual system for learning invariant representations. Various publications have followed this biological interpretation. Franzius et al. (2007) have used HSFA to learn invariant features from the simulated view of a rat walking inside a box. In conjunction with a sparseness post-processing step, the extracted features are similar to the responses of place cells in the hippocampus. Other works have exploited the computational efficiency of HSFA compared to direct SFA. Franzius et al. (2011) have used HSFA for object recognition from images and to estimate pose parameters of single objects moving and rotating over a homogeneous background. Escalante-B. and Wiskott (2013) have used an 11-layer HGSFA network to accurately find the horizontal position of faces in photographs. In general, HSFA networks (composed of several SFA nodes) are directed and acyclic. They are usually structured in multiple layers of nodes, following a grid structure. Most HSFA networks in the literature have a similar architecture. Typical differences are how the data is split into smaller blocks and the particular pre-processing done by before the SFA nodes themselves. For simplicity, we refer to the input data as layer 0. Important parameters that define the structure of the network include: (a) The output dimensionality of the nodes. 
(b) The fan-in of the nodes, which is the number of nodes (or data elements) in a previous layer that feed into them. (c) The receptive field of the nodes, which refers to all the elements of the input data that (directly or indirectly) provide input to a particular node. (d) The stride of a layer, which tells how far apart the inputs to adjacent nodes in a layer are. If the stride is smaller than the fan-in along the same direction, then at least one node in the previous layer will feed two or more nodes in the current layer. This is called receptive field overlap.

Graph-based SFA (GSFA)

Graph-based SFA (Escalante-B. and Wiskott 2013) is an extension to SFA designed for supervised learning. GSFA extracts a compact set of features that is usually post-processed by a typical supervised algorithm to generate the final label or class estimate. In GSFA the training data takes the form of a training graph \(G=({\mathbf {V}}, {\mathbf {E}})\) (illustrated in Fig. 3a), which is a structure composed of a set \({\mathbf {V}}\) of vertices \({\mathbf {x}}(n)\), each vertex being a sample (i.e. an I-dimensional vector), and a set \({\mathbf {E}}\) of edges \(({\mathbf {x}}(n), {\mathbf {x}}(n'))\), which are pairs of samples, with \(1 \le n ,n' \le N\). The index n (or \(n'\)) replaces the time variable t of SFA. The edges are undirected and have symmetric weights \(\gamma _{n,n'} \,=\, \gamma _{n',n}\), which indicate the label similarity between the connected vertices; also, each vertex \({\mathbf {x}}(n)\) has an associated weight \(v_n > 0\), which can be used to reflect its frequency. This representation includes the standard time series of SFA as a special case (Fig. 3b).

Fig. 3: Two examples of training graphs: (a) A training graph with \(N=7\) vertices. (b) A linear graph suitable for GSFA that learns the same features as SFA on the time series \({\mathbf {x}}(1), \dots , {\mathbf {x}}(6)\). (Figure from Escalante-B. and Wiskott 2013)

The GSFA optimization problem (Escalante-B. and Wiskott 2013) can be stated as follows. For \(1 \le j \le J\), find output features \(y_j(n)=g_j({\mathbf {x}}(n))\) with \(g_j \in \mathcal {F}\), where \(1 \le n \le N\) and \(\mathcal {F}\) is the function space, such that the objective function (weighted delta value)
$$\begin{aligned} \varDelta _j {\mathop {=}\limits ^{{\mathrm {def}}}}\frac{1}{R} \sum _{n,n'} \gamma _{n,n'} (y_j(n')-y_j(n))^2 {\text { is minimal }} \end{aligned}$$
under the constraints
$$\begin{aligned} \frac{1}{Q} \sum _{n} v_n y_j(n)&= 0 \mathbf{ (weighted zero mean) }, \end{aligned}$$
$$\begin{aligned} \frac{1}{Q} \sum _{n} v_n (y_j(n))^2&= 1 \, \mathbf{ (weighted unit variance) } , {\text { and}} \end{aligned}$$
$$\begin{aligned} \frac{1}{Q} \sum _{n} v_n y_{j}(n) y_{j'}(n)&= 0 \, {\text { for }} j' < j \mathbf{ (weighted decorrelation) } \end{aligned}$$
with \(R {\mathop {=}\limits ^{{\mathrm {def}}}}\sum _{n,n'} \gamma _{n,n'}\) and \(Q {\mathop {=}\limits ^{{\mathrm {def}}}}\sum _{n} v_n \,\). The factors 1 / R and 1 / Q provide invariance to the scale of the edge and vertex weights. Typically, a linear function space is used, but the input samples are preprocessed by a nonlinear expansion function. One should choose the edge weights of the training graph properly, because they control what kind of features are extracted by GSFA. In general, to obtain features useful for classification, one should favor connections between samples from the same class (stronger edge weights for same-class samples).
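As an illustration of this graph-design principle, the sketch below builds such a clustered training graph for a toy two-class problem, connecting only same-class samples with unit edge weights, and evaluates the weighted delta value defined above for two candidate features. The synthetic data, the candidate features, and the unit weights are illustrative assumptions of this sketch.

import numpy as np

rng = np.random.default_rng(1)

# Toy two-class data: 2-D samples, labels 0 or 1, classes separated along the first axis.
N = 40
labels = np.repeat([0, 1], N // 2)
X = rng.normal(size=(N, 2)) + labels[:, None] * np.array([3.0, 0.0])

# Clustered training graph: unit edge weight between same-class pairs, no edge otherwise;
# unit vertex weights.
gamma = (labels[:, None] == labels[None, :]).astype(float)
np.fill_diagonal(gamma, 0.0)          # no self-loops
v = np.ones(N)

R = gamma.sum()
Q = v.sum()

def weighted_delta(y, gamma, R):
    # Weighted delta value of a feature y evaluated on all connected sample pairs.
    diff2 = (y[None, :] - y[:, None]) ** 2
    return (gamma * diff2).sum() / R

def normalize(y, v, Q):
    # Enforce weighted zero mean and weighted unit variance.
    y = y - (v * y).sum() / Q
    return y / np.sqrt((v * y ** 2).sum() / Q)

# Candidate features: the first input coordinate (class-discriminative here)
# versus the second one (uninformative noise).
y_good = normalize(X[:, 0], v, Q)
y_bad = normalize(X[:, 1], v, Q)
print("weighted delta of discriminative feature:", weighted_delta(y_good, gamma, R))
print("weighted delta of noise feature:         ", weighted_delta(y_bad, gamma, R))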
To obtain features useful for regression, one should favor connections between samples with similar labels. In Sects. 5.2.3 and 5.3.2, we combine various training graphs to learn feature representations that allow the solution of various classification and regression problems simultaneously (multi-label learning). The rest of the article is organized as follows. In the next section we analyze the advantages and limitations of hierarchical networks for slow feature extraction. The shortcomings we expose motivate the introduction of the iSFA (and iGSFA) algorithms in Sect. 4. In Sect. 5 these algorithms, or more precisely, hierarchical neural networks built with them, are evaluated experimentally using three different problems of supervised and unsupervised DR. We conclude with a discussion section. Assessment of hierarchical processing by HSFA and HGSFA networks In this section we analyze HSFA networks in terms of their advantages and limitations. This analysis is crucial, since it justifies the extensions with information preservation (HiSFA and HiGSFA) proposed in Sect. 4. Advantages of HSFA and HGSFA networks The central advantages of hierarchical processing in H(G)SFA—compared to direct (G)SFA—have been mentioned in the introduction and Sect. 2.2: (1) It reduces overfitting and can be seen as a regularization method, (2) the nonlinearity of the layers of the neural network accumulates in a compositional manner, so that even when using simple expansions the network as a whole may describe a highly nonlinear feature space, and (3) better computational efficiency than (G)SFA. Some remarks about these advantages are pertinent: Advantage (1) is explained by the fact that the input dimensionality to the nodes of the hierarchical network is much smaller than the original input dimensionality, whereas the number of samples remains unchanged. Thus, individual nodes are less susceptible to overfitting. Consequently, the gap in generalization performance between HSFA and direct SFA is larger when polynomial expansions are introduced (compared to their linear versions), because the dimensionality of the expanded data is much larger in direct SFA and this translates into stronger overfitting. On the other hand HSFA is not guaranteed to find the optimal (possibly overfitting) solution within the available function space whereas SFA is. Advantage (2) is valuable in practice, because most real-life problems are nonlinear. A complex feature space may be necessary to extract the slowest hidden parameters and solve the supervised problem with good accuracy. Advantage (3) is addressed more precisely in the following paragraphs by first recalling the computational complexity of SFA and GSFA. This complexity is then compared with the complexity of a particular HSFA network ("Appendix A"). We focus on the training complexity rather than on the complexity of feature extraction during testing, because the former can be considerable in practice, whereas the latter is relatively lightweight in both HSFA and direct SFA. Following standard notation of algorithm complexity, computational (time) complexity is denoted by T (e.g., \(T_{\text {SFA}}\)). Training linear SFA has a time (computational) complexity $$\begin{aligned} T_{\text {SFA}}(N, I) = \mathcal {O}(NI^2 + I^3) \, , \end{aligned}$$ where N is the number of samples and I is the input dimensionality (possibly after a nonlinear expansion). 
The same complexity holds for GSFA if one uses an efficient training graph (e.g., the serial and clustered graphs, see Sects. 5.2.3 and 5.3.2); otherwise (for arbitrary graphs) it can be as large as $$\begin{aligned} T_{\text {GSFA}}(N, I) = \mathcal {O}(N^2I^2 + I^3) \, . \end{aligned}$$ For large I (i.e., high-dimensional data) direct SFA and direct GSFA are therefore inefficient. In contrast, HSFA and HGSFA can be much more efficient. The exact complexity of HSFA and HGSFA depends on the structure and parameters of the hierarchical network. "Appendix A" proves that the training complexity is linear in I and N for certain networks.

Limitations of HSFA networks

Although HSFA and HGSFA networks show remarkable advantages, they also have some shortcomings in their current form. The analysis in this section focuses on HSFA, but it also applies to other networks in which the nodes have only one criterion for DR, namely slowness maximization, such as HGSFA. Besides the slowness maximization objective, no other restriction is imposed on the nodes; they might be linear or nonlinear, include additive noise, clipping, various passes of SFA, etc. It is shown here that relying only on slowness to determine which aspects of the data are preserved results in two shortcomings: unnecessary information loss and poor input reconstruction, explained below.

(1) Unnecessary information loss

This shortcoming occurs when the nodes of the network discard dimensions of the data that are not significantly slow locally (i.e., at the node level), but which would have been useful for slowness optimization by other nodes in subsequent layers if they had been preserved (and combined with other dimensions). The following minimal theoretical experiment shows that dimensions crucial to extract global slow features are not necessarily slow locally. Consider four zero-mean, unit-variance signals \(s_1(t)\), \(s_2(t)\), \(s_3(t)\), and n(t) that can only take binary values, either \(-1\) or \(+1\) (n stands for noise here, and t is time, or more precisely, the sample number). Assume these signals are ordered by slowness \((\varDelta _{s_1}< \varDelta _{s_2}< \varDelta _{s_3} < \varDelta _{n} = 2.0)\) and that they are statistically independent. The value 2.0 is precisely the expected \(\varDelta \) value of a random unit-variance i.i.d. noise feature. The same holds for GSFA if the graph is consistent and has no self-loops (Escalante-B. and Wiskott 2016). Let the 4-dimensional input to the network be \((x_1, x_2, x_3, x_4) {\mathop {=}\limits ^{{\mathrm {def}}}}(s_2, s_1 n, s_3, n)\) and assume the number of samples is large enough. The direct application of quadratic SFA (QSFA, i.e., a quadratic expansion followed by linear SFA) to this data would allow us to extract the slowest possible feature, namely, \(x_2 x_4 = (s_1 n) n = s_1\) (or equivalently \(-x_2 x_4\)). However, let us assume that a 2-layer quadratic HSFA (QHSFA) network with 3 nodes is used, where the output of the network is \(\text {QSFA}\big ( \text {QSFA}(s_2, s_1 n), \text {QSFA}(s_3, n) \big )\). Each quadratic QSFA node reduces the number of dimensions from 2 to 1. Since \(\varDelta _{s_2} < \varDelta _{s_1 n} = 2.0\), the first bottom node computes \(\text {QSFA}(s_2, s_1 n)=s_2\), and since \(\varDelta _{s_3} < \varDelta _{n} = 2.0\), the second bottom node computes \(\text {QSFA}(s_3, n)=s_3\). The top node would then extract \(\text {QSFA}(s_2, s_3) = s_2\). Therefore, the network would miss the slowest feature, \(s_1\), even though it actually belongs to the feature space spanned by the network.
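This effect can also be checked numerically. The following sketch is our own illustration; the concrete way of generating binary signals with the required \(\varDelta \) ordering (holding random signs for a few samples) is a hypothetical choice, and only the resulting \(\varDelta \) values matter:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

def delta(y):
    """Delta value of a signal, after normalizing it to zero mean and unit variance."""
    y = (y - y.mean()) / y.std()
    return np.mean(np.diff(y) ** 2)

def slow_binary(n_samples, hold):
    """Binary +/-1 signal that keeps each random sign for `hold` samples (slower for larger hold)."""
    raw = rng.choice([-1.0, 1.0], size=n_samples // hold + 1)
    return np.repeat(raw, hold)[:n_samples]

s1, s2, s3 = slow_binary(N, 8), slow_binary(N, 4), slow_binary(N, 2)
n = rng.choice([-1.0, 1.0], size=N)                    # i.i.d. noise, delta ~ 2.0
x1, x2, x3, x4 = s2, s1 * n, s3, n                     # the 4-dimensional network input

print([round(delta(x), 2) for x in (x1, x2, x3, x4)])  # x2 and x4 are ~2.0 (locally fast)
print(round(delta(x2 * x4), 2))                        # x2 * x4 equals s1, far slower
```

The first line of output shows that \(x_2 = s_1 n\) and \(x_4 = n\) are as fast as i.i.d. noise (\(\varDelta \approx 2.0\)), so a node that sees only one of them has no slowness-based reason to preserve it, whereas the second line shows that their product recovers \(s_1\) and is much slower.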
The problem can be expressed in information-theoretic terms: $$\begin{aligned} I(s_1 n, s_1)&= 0 \, \text {, and} \end{aligned}$$ $$\begin{aligned} I(n, s_1)&= 0 \, \text {, but} \end{aligned}$$ $$\begin{aligned} I((s_1 n, n), s_1)&= H(s_1) > 0 \, , \end{aligned}$$ where H is entropy, and I denotes mutual information. Equations (7)–(9) show that it is impossible to locally rule out that a feature contains information that might yield a slow feature (in this case \(s_1\)), unless one globally observes the whole data available to the network. The problem above could be solved if the network could preserve n and \(s_1 n\) by resorting to another criterion besides slowness. For example, if the signals n and \(s_1 n\) were 10n and \(10 s_1 n\) instead, one could distinguish and preserve them based on their larger variance.

Fig. 4 \(\varDelta \) values of the first 40 slow features of an HGSFA network trained for age estimation and averaged over all the nodes of the first layer (\(\varDelta _1=1.859\), \(\varDelta _2=1.981\), and \(\varDelta _3=1.995\), not shown). The training graph employed is a serial graph with 32 groups (see Sect. 5.3.2). Most \(\varDelta \) values are close to 2.0, indicating that at this early stage, where the nodes have small 6\(\times \)6-pixel receptive fields, the slowest extracted features are not substantially slow. Some features have \(\varDelta > 2.0\) and are thus slightly faster than noise.

Unnecessary information loss is not only a theoretical concern; it can affect applications in practice. Figure 4 shows the \(\varDelta \) values of the slowest features extracted by the first layer of an HGSFA network trained for age estimation from human face images. Most \(\varDelta \) values are approximately 2.0, and only a few of them are significantly smaller than 2.0. The value 2.0 is crucial; a feature with \(\varDelta = 2.0\) can be a transformation of relevant inputs, a transformation of irrelevant noise inherent to the inputs, or something between these cases. In fact, if two or more feasible features have the same \(\varDelta \) value, GSFA outputs an arbitrary rotation of them, even though one might prefer features that are transformations of the relevant input rather than noise components. Due to DR, only a fraction of the features may be preserved and the rest are discarded, even though some of them might still contain useful information. One might try to preserve many features to reduce information loss. However, this might be impractical, because it would increase the computational cost and contradict the goal of dimensionality reduction.

(2) Poor input reconstruction

Input reconstruction is the task of generating an (artificial) input having a given feature representation (or an approximation of it). Wilbert (2012) has studied this task for HSFA. In the image domain, reconstruction can be useful to extrapolate images by computing distortions to an input image that reflect modifications introduced to the output features. For example, assume SFA was trained to extract the body mass index (BMI) from facial images. Reconstruction would allow us to visualize how a particular person would look after losing or gaining a few kilograms. Experiments have shown that input reconstruction from top-level features extracted by HSFA is a challenging task (Wilbert 2012).
We have confirmed this observation through previous experiments using various nonlinear methods for input reconstruction, including local and global methods. We show here that poor input reconstruction may not be caused by the weakness of the reconstruction algorithms employed but rather by insufficient reconstructive information in the slow features: The extracted features ideally depend only on the hidden slow parameters and are invariant to any other factor. In the BMI estimation example, the extracted features would be strongly related to the BMI (and nonlinear transformations of it). Thus, the features would be mostly invariant to other factors, such as the identity of the person, his or her facial expression, the background, etc. Therefore, in theory only BMI information would be available for reconstruction. In practice, residual information about the input data can still be found in the extracted features. However, one cannot rely on this information because it is partial (making reconstructions not unique) and highly nonlinear (making it difficult to untangle). Even the features extracted by linear SFA typically result in inaccurate reconstructions. Since HSFA has many layers of SFA nodes, the problem is potentially aggravated. The connection between the problems of unnecessary information loss and poor input reconstruction is evident if one distinguishes between two types of information: (a) information about the full input data and (b) information about the global slow parameters. Losing (a) results in poor input reconstruction, whereas losing (b) results in unnecessary information loss. Of course, (a) contains (b). Therefore, both problems originate from losing different but related types of information. Hierarchical information-preserving GSFA (HiGSFA) In this section, we formally propose HiGSFA, an extension to HGSFA that counteracts the problems of unnecessary information loss and poor input reconstruction described above by extracting reconstructive features in addition to slow features. HiGSFA is a hierarchical implementation of information-preserving GSFA (iGSFA). To simplify the presentation we first focus on information-preserving SFA (iSFA) and explain later how to easily extend iSFA to iGSFA and HiGSFA. We write iSFA with lowercase 'i' to distinguish it from independent SFA (ISFA, Blaschke et al. 2007). iSFA combines two learning principles: the slowness principle and information preservation. Information preservation requires the maximization of the mutual information between the output features and the input data. However, for finite, discrete and typically unique data samples, it is difficult to measure and maximize mutual information unless one assumes a specific probability model. Therefore, we implement information preservation more practically as the minimization of a reconstruction error. A closely related concept is the explained variance, but this term is avoided here because it is typically restricted to linear transformations. Algorithm overview (iSFA) HiSFA improves feature extraction of HSFA networks at the node and global level. The SFA nodes of an HSFA network are replaced by iSFA nodes, leaving the network structure unchanged (although the network structure could be tuned to achieve better accuracy, if desired). The feature vectors computed by iSFA consist of two parts: (1) a slow part derived from SFA features, and (2) a reconstructive part derived from principal components (PCs). 
Generally speaking, the slow part captures the slow aspects of the data and is basically composed of standard SFA features, except for an additional linear mixing step to be explained in Sects. 4.2 and 4.4. The reconstructive part ignores the slowness criterion and describes the input linearly as closely as possible (disregarding the part already described by the slow part). In Sect. 5 we show that, although the reconstructive features are not particularly slow, they indeed contribute to global slowness maximization.

Fig. 5 Block diagram of the iSFA node showing the components used for training and feature extraction. Signal dimensionalities are given in parentheses. The components and signals are explained in the text.

Algorithm description (training phase of iSFA)

Figure 5 shows the components of iSFA, and Algorithm 1 summarizes the algorithm compactly. The iSFA algorithm (training phase) is defined as follows. Let \({\mathbf {X}} {\mathop {=}\limits ^{{\mathrm {def}}}}({\mathbf {x}}_1, \dots , {\mathbf {x}}_N)\) be the I-dimensional training data, D the output dimensionality, \({\mathbf {h}}(\cdot )\) the nonlinear expansion function, and \(\varDelta _T \approx 2.0\) (a \(\varDelta \)-threshold, in practice slightly smaller than 2.0). First, the average sample \(\bar{{\mathbf {x}}} {\mathop {=}\limits ^{{\mathrm {def}}}}\frac{1}{N}\sum _n {\mathbf {x}}_n\) is removed from the N training samples, resulting in the centered data \({\mathbf {X}}' {\mathop {=}\limits ^{{\mathrm {def}}}}\{ {\mathbf {x}}'_n \}\), with \({\mathbf {x}}'_n {\mathop {=}\limits ^{{\mathrm {def}}}}{\mathbf {x}}_n - \bar{{\mathbf {x}}}\), for \(1 \le n \le N\). Then, \({\mathbf {X}}'\) is expanded via \({\mathbf {h}}(\cdot )\), resulting in vectors \({\mathbf {z}}_n={\mathbf {h}}({\mathbf {x'}}_n)\) of dimensionality \(I'\). Afterwards, linear SFA is trained with the expanded data \({\mathbf {Z}} {\mathop {=}\limits ^{{\mathrm {def}}}}\{ {\mathbf {z}}_n \}\), resulting in a weight matrix \({\mathbf {W}}_{\text {SFA}}\) and an average expanded vector \(\bar{{\mathbf {z}}}\). The slow features extracted from \({\mathbf {Z}}\) are \({\mathbf {s}}_n = {\mathbf {W}}_{\text {SFA}}^T ({\mathbf {z}}_n - \bar{{\mathbf {z}}})\). The first J components of \({\mathbf {s}}_n\) with \(\varDelta < \varDelta _T\) and \(J \le \min (I', D)\) are denoted \({\mathbf {s}}'_n\). The remaining components of \({\mathbf {s}}_n\) are discarded. \({\mathbf {S}}' {\mathop {=}\limits ^{{\mathrm {def}}}}\{ {\mathbf {s}}'_n \}\) has J features, each of them having zero mean and unit variance. The next steps modify the amplitude of \({\mathbf {S}}'\): The centered data \({\mathbf {X}}'\) is approximated from \({\mathbf {S}}'\) linearly by using ordinary least squares regression to compute a matrix \({\mathbf {M}}\) and a vector \({\mathbf {d}}\), such that $$\begin{aligned} {\mathbf {A}} {\mathop {=}\limits ^{{\mathrm {def}}}}{\mathbf {M}}{\mathbf {S'}}+{\mathbf {d}}{\mathbf {1}}^T \approx {\mathbf {X'}} \, , \end{aligned}$$ where \({\mathbf {A}}\) is the approximation of the centered data given by the slow part (i.e., \({\mathbf {x}}'_n \approx {\mathbf {a}}_n {\mathop {=}\limits ^{{\mathrm {def}}}}{\mathbf {M}}{\mathbf {s}}'_n+{\mathbf {d}}\)) and \({\mathbf {1}}\) is a vector of 1s of length N. Since \({\mathbf {X}}'\) and \({\mathbf {S}}'\) are centered, \({\mathbf {d}}\) could be discarded because \({\mathbf {d}}={\mathbf {0}}\). However, when GSFA is used, the slow features have only weighted zero mean, and \({\mathbf {d}}\) might improve the approximation of \({\mathbf {X}}'\). Afterwards, the QR decomposition of \({\mathbf {M}}\), $$\begin{aligned} {\mathbf {M}} = {\mathbf {Q}} {\mathbf {R}} \, , \end{aligned}$$ is computed, where \({\mathbf {Q}}\) is orthonormal and \({\mathbf {R}}\) is upper triangular. Then, the (amplitude-corrected) slow feature part is computed as $$\begin{aligned} {\mathbf {y}}'_n= {\mathbf {R}} {\mathbf {s}}'_n \, . \end{aligned}$$ Section 4.4 justifies the mixing and scaling (12) of the slow features \({\mathbf {s}}'_n\). To obtain the reconstructive part, the residual data \({\mathbf {b}}_n {\mathop {=}\limits ^{{\mathrm {def}}}}{\mathbf {x}}'_n - {\mathbf {a}}_n\) is computed, i.e., the data that remains after the data linearly reconstructed from \({\mathbf {y}}'_n\) (or \({\mathbf {s}}'_n\)) is removed from the centered data. Afterwards, PCA is trained with \(\{ {\mathbf {b}}_n \}\), resulting in a weight matrix \({\mathbf {W}}_{\text {PCA}}\). There is no bias term, because \({\mathbf {b}}_n\) is centered. The reconstructive part \({\mathbf {c}}_n\) is then defined as the first \(D-J\) principal components of \({\mathbf {b}}_n\) and computed accordingly. Thereafter, the slow part \({\mathbf {y'_n}}\) (J features) and the reconstructive part \({\mathbf {c_n}}\) (\(D-J\) features) are concatenated, resulting in the D-dimensional output features \({\mathbf {y_n}} {\mathop {=}\limits ^{{\mathrm {def}}}}{\mathbf {y'_n}} | {\mathbf {c_n}}\), where | is vector concatenation. Finally, the algorithm returns \({\mathbf {Y}}=({\mathbf {y_1}}, \dots , {\mathbf {y_N}})\), \(\bar{{\mathbf {x}}}\), \({\mathbf {W}}_{\text {SFA}}\), \(\bar{{\mathbf {z}}}\), \({\mathbf {W}}_{\text {PCA}}\), J, \({\mathbf {Q}}\), \({\mathbf {R}}\), and \({\mathbf {d}}\). The output features \({\mathbf {Y}}\) are usually computed only during feature extraction. Still, we keep them here to simplify the understanding of the signals involved. We would like to remark that the algorithm proposed above takes care of the following important considerations: (a) From the total output dimensionality D, it decides how many features the slow and reconstructive part should contain. (b) It minimizes the redundancy between the slow and the reconstructive part, allowing the output features to be more compact and have higher information content. (c) It corrects the amplitudes of the slow features (SFA features have unit variance) to make them compatible with the PCs and allow their processing by PCA in subsequent nodes (PCA is a rotation and projection, so it preserves the amplitude of the original data in the retained directions).
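For concreteness, the training procedure just described can be sketched compactly in NumPy. This is a simplified, non-authoritative sketch (plain SFA instead of GSFA, no lower bound on the feature scales, and it assumes that the expanded data has full rank and that \(D > J\)); the publicly available implementation mentioned below is the reference:

```python
import numpy as np
from scipy.linalg import eigh

def train_isfa(X, h, D, delta_T=1.999):
    """Sketch of iSFA training. X: (N, I) data, one sample per row;
    h: nonlinear expansion (N, I) -> (N, I'); D: output dimensionality."""
    N = X.shape[0]
    x_mean = X.mean(axis=0)
    Xc = X - x_mean                               # centered data X'

    # linear SFA on the expanded data (generalized eigenproblem formulation)
    Z = h(Xc)
    z_mean = Z.mean(axis=0)
    Zc = Z - z_mean
    Zdot = np.diff(Zc, axis=0)                    # discrete-time derivative
    A = Zdot.T @ Zdot / (N - 1)                   # second moments of the derivative
    B = Zc.T @ Zc / N                             # covariance of the expanded data
    d_vals, W_sfa = eigh(A, B)                    # ascending delta values
    S = Zc @ W_sfa                                # slow features, unit variance

    # slow part: keep the J slowest features with delta below the threshold
    J = min(int(np.sum(d_vals < delta_T)), D)
    S1, W_sfa = S[:, :J], W_sfa[:, :J]

    # least-squares approximation of X' from the slow part (M, d), then QR scaling
    design = np.hstack([S1, np.ones((N, 1))])
    coeffs, *_ = np.linalg.lstsq(design, Xc, rcond=None)
    M, d = coeffs[:J].T, coeffs[J]                # X' ~ S1 @ M.T + d
    Q, R = np.linalg.qr(M)
    Y1 = S1 @ R.T                                 # amplitude-corrected slow part y'

    # reconstructive part: PCA on the residual data b_n
    Bres = Xc - (S1 @ M.T + d)
    _, _, Vt = np.linalg.svd(Bres, full_matrices=False)
    W_pca = Vt[:D - J].T                          # first D-J principal directions
    C = Bres @ W_pca

    Y = np.hstack([Y1, C])                        # D-dimensional output features
    return Y, x_mean, W_sfa, z_mean, W_pca, J, Q, R, d
```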
Feature extraction by iSFA

The feature extraction algorithm (described in Algorithm 2) is similar to the training algorithm, except that the parameters \(\bar{{\mathbf {x}}}\), \({\mathbf {W}}_\text {SFA}\), \(\bar{{\mathbf {z}}}\), \({\mathbf {W}}_\text {PCA}\), J, \({\mathbf {Q}}\), \({\mathbf {R}}\), and \({\mathbf {d}}\) have already been learned from the training data. Algorithm 2 processes a single input sample, but it can be easily and efficiently adapted to process multiple input samples by taking advantage of matrix operations.

Mixing and scaling of slow features: QR scaling

In the iSFA algorithm, the J-dimensional slow features \({\mathbf {s}}'_n\) are transformed into the scaled \({\mathbf {y}}'\) features.
Such a transformation is necessary to make the amplitude of the slow features compatible with the amplitude of the PCA features, so that PCA processing of the two sets of features together is possible and meaningful in the next layers. In general, a feature scaling method should ideally offer two key properties of PCA. (1) If one adds unit-variance noise to one of the output features (e.g., principal components), the reconstruction error also increases by one unit on average. (2) If one adds independent noise to two or more output features simultaneously, the reconstruction error increases additively. The feature scaling method (10)–(12) used by Algorithm 1 is called QR scaling, and it can be shown that it fulfills the two properties above. QR scaling ensures that the amplitude of the slow features is approximately equal to the reduction in the reconstruction error that each feature allows. In practice, a lower bound on the scales (not shown in the pseudo-code) ensures that all features have amplitudes \(> 0\) even if they do not contribute to reconstruction. In the iSFA algorithm, we approximate the input linearly as \(\tilde{{\mathbf {x}}} = \tilde{{\mathbf {a}}} + \tilde{{\mathbf {b}}} + \bar{{\mathbf {x}}}\), where \(\tilde{{\mathbf {a}}} = {\mathbf {Q}}{\mathbf {y'}}+{\mathbf {d}}\) and \(\tilde{{\mathbf {b}}} = {\mathbf {W}}_{\text {PCA}} {\mathbf {c}}\) (see Sect. 4.5 and recall that \({\mathbf {W}}_{\text {PCA}}\) is the Moore-Penrose pseudoinverse of \({\mathbf {W}}_{\text {PCA}}^T\)). Approximations are denoted here using tilded variables. Therefore, vector \({\mathbf {y}}={\mathbf {y'}}|{\mathbf {c}}\) fulfills the two key properties of reconstruction of PCA above because matrix \({\mathbf {Q}}\) and matrix \({\mathbf {W}}_{\text {PCA}}\) are orthogonal, and because the columns of the two matrices are mutually orthogonal. One small drawback is that (12) mixes the slow features, which is undesirable when one uses certain expansion functions. Therefore, as an alternative we propose a sensitivity-based scaling method in "Appendix B", which does not mix the slow features. Input reconstruction from iSFA features One interesting property of iSFA is that both the slow and the reconstructive features are nonlinear w.r.t. the input data. The slow features are nonlinear due to the expansion function. The residual data is nonlinear because it is computed using the (nonlinear) slow part and the centered data. The reconstructive features are computed using the residual data and are linear w.r.t. the residual data but nonlinear w.r.t. the input data. Even though the computed features are all nonlinear, iSFA allows a linear approximation of the input (linear input reconstruction). In contrast, standard SFA does not have a standard input reconstruction method, although various gradient-descent and vector-quantization methods have been tried (e.g., Wilbert 2012) with limited success. The reconstruction algorithm simply computes the input approximation described in Sect. 4.4, and further detailed in Algorithm 3. Moreover, the linear reconstruction algorithm has practical properties: It is simpler than the feature extraction algorithm, since the nonlinear expansion \({\mathbf {h}}\) and \({\mathbf {W}}_\text {SFA}\) are not used, and it has lower computational complexity, because it consists of only two matrix-vector multiplications and three vector additions, none of them using expanded \(I'\)-dimensional data. 
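As a minimal illustration (a sketch of what Algorithm 3 computes, using the quantities returned by the training sketch shown earlier), the linear reconstruction amounts to two matrix-vector products and a few additions:

```python
def linear_reconstruction(y, J, Q, d, W_pca, x_mean):
    """Linear input reconstruction from a D-dimensional iSFA feature vector y."""
    y_slow, c = y[:J], y[J:]          # slow part y' and reconstructive part c
    a_tilde = Q @ y_slow + d          # approximation contributed by the slow part
    b_tilde = W_pca @ c               # approximation contributed by the PCA part
    return a_tilde + b_tilde + x_mean
```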
Alternatively, nonlinear reconstruction from iSFA features is also possible, as described in the algorithm below. Assume \({\mathbf {y}}\) is the iSFA feature representation of a sample \({\mathbf {x}}\), which we denote as \({\mathbf {y}} = {\text {iSFA}}({\mathbf {x}})\). Since \({\mathbf {x}}\) is unknown, the reconstruction error cannot be computed directly. However, one can indirectly measure the accuracy of a particular reconstruction \(\tilde{{\mathbf {x}}}\) by means of the feature error, which is defined here as \(e_\text {feat} {\mathop {=}\limits ^{{\mathrm {def}}}}|| {\mathbf {y}} - \text {iSFA}(\tilde{{\mathbf {x}}}) ||\). The feature error can be minimized for \(\tilde{{\mathbf {x}}}\) using the function \(\text {iSFA}(\cdot )\) as a black box and gradient descent or other generic nonlinear minimization algorithms. Frequently, such algorithms require a first approximation, which can be very conveniently provided by the linear reconstruction algorithm. Although nonlinear reconstruction methods might result in higher reconstruction accuracy than linear methods in theory, they are typically more expensive computationally. Moreover, in the discussion we explain why minimizing \(e_\text {feat}\) typically increases the reconstruction error in practice.

Some remarks on iSFA and derivation of iGSFA and HiGSFA

Clearly, the computational complexity of iSFA is at least that of SFA, because iSFA consists of SFA and a few additional computations. However, none of the additional computations is done on the expanded \(I'\)-dimensional data but at most on I or D-dimensional data (e.g., PCA is applied to I-dimensional data, and the QR decomposition is applied to an \(I \times I\)-matrix, which have an \(\mathcal {O}(NI^2+I^3)\) and \(\mathcal {O}(I^3)\) complexity, respectively). Therefore, iSFA is slightly slower than SFA but has the same complexity order. Practical experiments (Sect. 5) confirm this observation. The presentation above focuses on iSFA. To obtain information-preserving GSFA (iGSFA) one only needs to substitute SFA by GSFA inside the iSFA algorithm and provide GSFA with the corresponding training graph during the training phase. Notice that GSFA features have weighted zero mean instead of the simple (unweighted) zero mean enforced by SFA. This difference has already been compensated by vector \({\mathbf {d}}\). HiGSFA is constructed simply by connecting iGSFA nodes in a hierarchy, just as HSFA is constructed by connecting SFA nodes. The iGSFA node has been implemented in Python (including iSFA) and is publicly available in Cuicuilco as well as in the MDP toolkit (Zito et al. 2009).

Experimental evaluation of HiGSFA

This section evaluates the proposed extensions, HiSFA and HiGSFA, on three problems: (1) The unsupervised extraction of temporally stable features from images simulating the view of a rat walking inside a box, which is addressed with HiSFA. (2) The extraction of four pose parameters of faces in image patches, which is a multi-label learning problem suitable for HiGSFA. (3) The estimation of age, race, and gender from human face photographs, which is also solved using HiGSFA. The chosen problems exemplify the general applicability of the proposed algorithms.

Experiment 1: Extraction of slow features with HiSFA from the view of a rat

The input data in this first experiment consists of the (visual) sensory input of a simulated rat. The images have been created using the RatLab toolkit (Schönfeld and Wiskott 2013).
RatLab simulates the random path of a rat as it explores its environment, rendering its view according to its current position and head-view angle. For simplicity, the rat is confined to a square area bounded by four walls having different textures. Franzius et al. (2007) have shown theoretically and in simulations that the slowest features one can extract from this data are trigonometric functions of the position of the rat and its head direction (i.e., the slow configuration/generative parameters).

Training and test images

For this experiment, first a large fixed sequence of 200,000 images was generated. Then, the training data of a single experiment run is selected randomly as a contiguous subsequence of size 40,000 from the full sequence. The test sequence is selected similarly but enforcing no overlap with the training sequence. All images are in color (RGB) and have a resolution of 320\(\times \)40 pixels, see Fig. 6.

Fig. 6 Example of four images generated using RatLab. Notice that the images are quite elongated. This default image size approximates the panoramic field of view of a rat.

Description of HiSFA and HSFA networks

To evaluate HiSFA we reproduced an HSFA network that has already been used (Schönfeld and Wiskott 2013). This network has three layers of quadratic SFA. Each layer consists of three components: (1) linear SFA, which reduces the data dimensionality to 32 features, (2) a quadratic expansion, and (3) linear SFA, which again reduces the expanded dimensionality to 32 features. The first two layers are convolutional (i.e., the nodes of these layers share the same weight matrices). For a fair comparison, we built an HiSFA network with exactly the same structure as the HSFA network described above. The HiSFA network has additional hyperparameters \(\varDelta _T\) (one for each node), but in order to control the size of the slow part more precisely, we removed this parameter and instead fixed the size of the slow part to 6 features in all nodes of the HiSFA network (i.e., \(J=6\)). For computational convenience we simplified the iSFA algorithm to enable the use of convolutional layers as in the HSFA network. Convolutional layers can be seen as a single iSFA node cloned at different spatial locations. Thus, the total input to a node consists not of a single input sequence (as assumed by the iSFA algorithm) but of several sequences, one at each location of the node. The iSFA node was modified as follows: (1) the decorrelation step (between the slow and reconstructive parts) is removed, and (2) all slow features are given the same variance as the median variance of the features in the corresponding reconstructive part (instead of QR scaling). To evaluate HSFA and HiSFA we compute the \(\varDelta \) values of the 3 slowest features extracted from test data, which are shown in Table 1. The experiments were repeated 5 times, each run using training and test sequences randomly computed using the procedure of Sect. 5.1.1.

Table 1 Delta values of the first three features extracted by HSFA and HiSFA from training data (above) and test data (below)

Table 1 shows that HiSFA extracts clearly slower features than HSFA for both training and test data in the main setup (40k training images). For instance, for test data \(\varDelta _1\), \(\varDelta _2\), and \(\varDelta _3\) are 28–52% smaller in HiSFA than in HSFA. This is remarkable given that HSFA and HiSFA span the same global feature space (same model capacity). In order to compare the robustness of HSFA and HiSFA w.r.t.
the number of images in the training sequence, we also evaluate the algorithms using shorter training sequences of 5k, 10k, and 20k images. As usual, the test sequences have 40k images. Table 1 shows that HiSFA computes slower features than HSFA given the same number of training images. This holds for both training and test data. In fact, when HiSFA is trained using only 5k images the slowest extracted features are already slower than those extracted by HSFA trained on 40k images (both for training and test data). In contrast, the performance of HSFA decreases significantly when trained on 10k and 5k images. Therefore, HiSFA is much more robust than HSFA w.r.t. the number of training images (higher sample efficiency).

Experiment 2: Estimation of face pose from image patches

The second experiment evaluates the accuracy of HiGSFA compared to HGSFA in a supervised learning scenario. We consider the problem of finding the pose of a face contained in an image patch. Face pose is described by four parameters: (1) the horizontal and (2) vertical position of the face center (denoted by x-pos and y-pos, respectively), (3) the size of the face relative to the image patch, and (4) the in-plane rotation angle of the eye-line. Therefore, we solve a regression problem on four real-valued labels. The resulting system can be easily applied to face tracking and to face detection with an additional face discrimination step.

Generation of the image patches

The complete dataset consists of 64,470 images that have been extracted from a few datasets to increase image variability: Caltech (Fink et al. 2003), CAS-PEAL (Gao et al. 2008), FaceTracer (Kumar et al. 2008), FRGC (Phillips et al. 2005), and LFW (Huang et al. 2007). In each run of the system two disjoint image subsets are randomly selected, one of 55,000 images, used for training, and another of 9000 images, used for testing. The images are processed in two steps. First they are normalized (i.e., centered, scaled, and rotated). Then, they are rescaled to 64\(\times \)64 pixels and are systematically randomized: in the resulting image patches the center of the face deviates horizontally from the image center by at most \(\pm 20\) pixels, vertically by at most \(\pm 10\) pixels, the angle of the eye-line deviates from the horizontal by at most \(\pm 22.5^{\circ }\), and the size of the largest and smallest faces differs by a factor of \(\sqrt{2}\) (a factor of 2 in their area). The concrete pose parameters are sampled from uniform distributions in the above-mentioned ranges. To increase sample efficiency, each image of the training set is used twice with different random distortions; thus, the effective training set has size 110,000. Examples of the final image patches are shown in Fig. 7.

Fig. 7 Examples of image patches after pose randomization illustrating the range of variations in pose. The eyes of the subjects have been pixelated for privacy and copyright reasons.

HiGSFA and HGSFA networks

For comparison purposes, we adopt, without modification, an HGSFA network that has been previously designed (and partially tuned) by Escalante-B. and Wiskott (2013) to estimate facial pose parameters.
Most SFA nodes of this network consist of the expansion function $$\begin{aligned} {\textsc {0.8Exp}}(x_1, x_2, \dots , x_I) {\mathop {=}\limits ^{{\mathrm {def}}}}(x_1, x_2, \dots , x_I, |x_1|^{0.8}, |x_2|^{0.8}, \dots , |x_I|^{0.8}) \, , \end{aligned}$$ followed by linear SFA, except for the SFA nodes of the first layer, which have an additional preprocessing step that uses PCA to reduce the number of dimensions from 16 to 13. In contrast to the RatLab networks, this network does not use weight sharing, increasing feature specificity at each node location. For a fair comparison, we construct an HiGSFA network having the same structure as the HGSFA network (e.g., same number of layers, nodes, expansion function, data dimensions, receptive fields). Similarly to experiment 1, we directly set the number of slow features preserved by the iGSFA nodes, which in this experiment varies depending on the layer from 7 to 20 features (these values have been roughly tuned using a run with a random seed not used for testing). The parameters used to construct the networks are shown in Table 2.

Table 2 Description of the HiGSFA and HGSFA networks used for pose estimation

Training graphs that encode pose parameters

In order to train HGSFA and HiGSFA one needs a training graph (i.e., a structure that contains the samples and the vertex and edge weights). A few efficient predefined graphs have already been proposed (Escalante-B. and Wiskott 2013), allowing training of GSFA with a complexity of \(\mathcal {O}(NI^2+I^3)\), which is of the same order as SFA, making this type of graph competitive in terms of speed. One example of a training graph for classification is the clustered graph (see Sect. 5.3.2), and one for regression is the serial training graph, described below.

Serial training graph (Escalante-B. and Wiskott 2013)

The features extracted using this graph typically approximate a monotonic function of the original label and its higher-frequency harmonics. To solve a regression problem and generate label estimates in the appropriate domain, a few slow features (e.g., extracted using HGSFA or HiGSFA) are post-processed by an explicit regression step. There are more training graphs suitable for regression (e.g., the mixed graph), but this one has consistently given good results in different experiments. Figure 8 illustrates a serial graph useful for learning x-pos. In general, a serial graph is constructed by ordering the samples by increasing label. Then, the samples are partitioned into L groups of size \(N_g = N/L\). A representative label \(\in \{ \ell _1, \ldots , \ell _{L} \}\) is assigned to each group, where \(\ell _1< \ell _2< \cdots < \ell _{L}\). Edges connect all pairs of samples from two consecutive groups with group labels \(\ell _{l}\) and \(\ell _{l+1}\). Thus, all connections are inter-group; no intra-group connections are present. Notice that since any two vertices of the same group are adjacent to exactly the same neighbors, they are likely to be mapped to similar outputs by GSFA. We use this procedure to construct four serial graphs \(\mathcal {G}_{x\text {-pos}}\), \(\mathcal {G}_{y\text {-pos}}\), \(\mathcal {G}_{\text {angle}}\), and \(\mathcal {G}_{\text {scale}}\) that encode the x-pos, y-pos, angle, and scale label, respectively. All of them have the same parameters: \(L=50\) groups, \(N=110{,}000\) samples, and \(N_g = 2200\) samples per group.
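A minimal construction sketch follows (our own illustration; the vertex and edge weights follow the description above and the Fig. 8 caption, and an actual GSFA implementation exploits the group structure instead of enumerating the \(N_g^2\) edges between each pair of consecutive groups):

```python
import numpy as np

def serial_graph(labels, L):
    """Vertex weights and edge weights of a serial training graph.

    labels: (N,) real-valued label of each sample; L: number of groups.
    Returns (v, edges): vertex weights and a dict mapping index pairs to edge weights.
    """
    N = len(labels)
    Ng = N // L                                    # group size (assumes L divides N)
    order = np.argsort(labels)                     # sort samples by increasing label
    groups = [order[l * Ng:(l + 1) * Ng] for l in range(L)]

    v = np.full(N, 2.0)                            # vertex weights ...
    v[groups[0]] = 1.0                             # ... first and last group get weight 1
    v[groups[-1]] = 1.0

    edges = {}
    for l in range(L - 1):                         # only inter-group connections
        for a in groups[l]:
            for b in groups[l + 1]:
                edges[(int(a), int(b))] = 1.0      # all edge weights are 1
    return v, edges
```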
Fig. 8 Illustration of a serial training graph used to learn the position of the face center (x-pos). The training images are first ordered by increasing x-pos and then grouped into \(L=50\) groups of \(N_g = {2200}\) images each. Each dot represents an image, edges represent connections, and ovals represent the groups. The images of the first and last group have weight 1 and the remaining images have weight 2 (image weights are represented by smaller/bigger dots). All edges have a weight of 1. The serial graphs for learning x-pos, y-pos, angle, and scale are constructed in the same way.

Combined graph to learn all pose parameters

One disadvantage of the current pre-defined graphs is that they allow learning only a single (real-valued or categorical) label. In order to learn various labels simultaneously, we resort to a method proposed earlier that allows the combination of graphs (Escalante-B. and Wiskott 2016). We use this method here for the first time on real data to create an efficient graph that encodes all four labels by combining \(\mathcal {G}_{x\text {-pos}}\), \(\mathcal {G}_{y\text {-pos}}\), \(\mathcal {G}_{\text {angle}}\), and \(\mathcal {G}_{\text {scale}}\). The combination preserves the original samples of the graphs, but the vertex and edge weights are added, which is denoted \(\mathcal {G}'_{\text {4L}} {\mathop {=}\limits ^{{\mathrm {def}}}}\mathcal {G}_{x\text {-pos}} + \mathcal {G}_{y\text {-pos}} + \mathcal {G}_{\text {angle}} + \mathcal {G}_{\text {scale}} \). The combination of graphs guarantees that the slowest optimal free responses of the combined graph span the slowest optimal free responses of the original graphs, as long as three conditions are fulfilled: (1) all graphs have the same samples and are consistent, (2) all graphs have the same (or proportional) node weights, and (3) optimal free responses that are slow in one graph (\(\varDelta < 2.0\)) should not be fast (\(\varDelta > 2.0\)) in any other graph. Since the labels (i.e., pose parameters) have been computed randomly and independently of each other, these conditions are fulfilled on average (e.g., for the \(\mathcal {G}_{x\text {-pos}}\) graph, any feature \({\mathbf {y}}\) that solely depends on y-pos, scale, or angle has \(\varDelta _{{\mathbf {y}}}^{\mathcal {G}_{x\text {-pos}}} \approx 2.0\)). However, the naive graph \(\mathcal {G}'_{\text {4L}}\) does not take into account that the 'angle' and 'scale' labels are more difficult to estimate than the x-pos and y-pos labels. Thus, the feature representation is dominated by features that encode the easier labels (and their harmonics), and only a few features encode the difficult labels, making their estimation even more difficult. To solve this problem, we determine weighting factors such that each label is represented at least once in the first five slow features. Features that are more difficult to extract have higher weights to avoid an implicit focus on easy features. The resulting graph and weighting factors are: \(\mathcal {G}_{\text {4L}} {\mathop {=}\limits ^{{\mathrm {def}}}}\mathcal {G}_{x\text {-pos}} + 1.25 \mathcal {G}_{y\text {-pos}} + 1.5 \mathcal {G}_{\text {angle}} + 1.75 \mathcal {G}_{\text {scale}}\).
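With the same representation as in the serial-graph sketch above, such a combination amounts to adding the (scaled) vertex and edge weights of graphs that share the same samples. This is only a sketch: whether a scalar factor should also rescale the vertex weights is a detail of the combination method (Escalante-B. and Wiskott 2016) that we gloss over here by scaling both.

```python
def combine_graphs(graphs, factors):
    """Weighted sum of training graphs defined over the same samples.

    graphs: list of (v, edges) pairs, as returned by serial_graph-like builders;
    factors: one scalar per graph (assumption: it scales vertex and edge weights).
    """
    v = sum(f * g_v for f, (g_v, _) in zip(factors, graphs))
    edges = {}
    for f, (_, g_edges) in zip(factors, graphs):
        for pair, w in g_edges.items():
            edges[pair] = edges.get(pair, 0.0) + f * w
    return v, edges

# e.g., G_4L = G_xpos + 1.25 * G_ypos + 1.5 * G_angle + 1.75 * G_scale
# v4, e4 = combine_graphs([g_xpos, g_ypos, g_angle, g_scale], [1.0, 1.25, 1.5, 1.75])
```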
Supervised post-processing

We extract 20 slow features from the training dataset to train four separate Gaussian classifiers (GC), one for each label. Ground-truth classes are generated by discretizing the labels into 50 values representing classes 1–50. After the GC has been trained with this data, the pose estimate (on the test patches) is computed using the soft-GC method (Escalante-B. and Wiskott 2013), which exploits the class membership probabilities of the corresponding classifier: Let \(\mathbf {P}(C_{{\ell }_l} | {\mathbf {y}} )\) be the estimated class probability that the input sample \({\mathbf {x}}\) with feature representation \({\mathbf {y}}= {\mathbf {g}}({\mathbf {x}})\) belongs to the group with average label \({\ell }_l\). Then, the estimated label is $$\begin{aligned} \tilde{\ell } \; {\mathop {=}\limits ^{{\mathrm {def}}}}\; \sum _{l=1}^{50} {\ell }_l \cdot \mathbf {P}(C_{{\ell }_l} | {\mathbf {y}}) \, . \end{aligned}$$ Equation (14) has been designed to minimize the root mean squared error (RMSE), and although it incurs an error due to the discretization of the labels, the soft nature of the estimation has provided good accuracy and a low percentage of misclassifications.
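As a small sketch (illustrative names, assuming the class-membership probabilities have already been computed by the classifier), the soft-GC estimate of Eq. (14) is simply the expectation of the group label under those probabilities:

```python
import numpy as np

def soft_gc_estimate(class_probs, class_labels):
    """class_probs: (N, L) posterior probabilities P(C_l | y) per sample;
    class_labels: (L,) representative (average) label of each group."""
    return class_probs @ class_labels

# one sample, three groups with average labels 10, 20, and 30
print(soft_gc_estimate(np.array([[0.1, 0.7, 0.2]]), np.array([10.0, 20.0, 30.0])))  # [21.]
```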
Results: Accuracy of HiGSFA for pose estimation

The HiGSFA and HGSFA networks were trained using the graph \(\mathcal {G}_{\text {4L}}\) described above. Table 3 shows the estimation error of each pose parameter. The results show that HiGSFA yields more accurate estimations than HGSFA for all pose parameters. We would like to remark that the HiGSFA network has the same structure as the HGSFA network, which has been tuned for the current problem. However, one may adapt the network structure specifically for HiGSFA to further exploit the advantages of this algorithm. Concretely, the improved robustness of HiGSFA makes it possible to handle more complex networks (e.g., by increasing the output dimensionality of the nodes or using more complex expansion functions). In the following experiment on age estimation, the network structure of HiGSFA and its hyperparameters are tuned, yielding higher accuracy.

Table 3 Estimation errors (RMSE) of HGSFA and HiGSFA for each pose parameter

Experiment 3: Age estimation from frontal face photographs

Systems for age estimation from photographs have many applications in areas such as human-computer interaction, group-targeted advertisement, and security. However, age estimation is a challenging task, because different persons experience facial aging differently depending on several intrinsic and extrinsic factors. The first system for age estimation based on SFA was a four-layer HSFA network that processes raw images without prior feature extraction (Escalante-B. and Wiskott 2010). The system was trained on synthetic input images created using special software for 3D-face modeling. However, the face model was probably too simple, which allowed linear SFA (in fact linear GSFA) to achieve good performance, and left open the question of whether SFA/GSFA could also be successful on real photographs. This subsection first describes the image pre-processing method. Then, a training graph used to learn age, race, and gender simultaneously is presented. Finally, an HiGSFA network is described and evaluated according to three criteria: feature slowness, age estimation error (compared with state-of-the-art algorithms), and linear reconstruction error.

Image database and pre-processing

The MORPH-II database (i.e., MORPH, Album 2, Ricanek Jr. and Tesafaye 2006) is a large database suitable for age estimation. It contains 55,134 images of about 13,000 different persons with ages ranging from 16 to 77 years. The images were taken under partially controlled conditions (e.g., frontal pose, good image quality and lighting) and include variations in head pose and expression. The database annotations include age, gender (M or F), and "race" ("black", "white", "Asian", "Hispanic", and "other", denoted by B, W, A, H, and O, respectively) as well as the coordinates of the eyes. The procedure used to assign the race label does not seem to be documented. Most of the images are of black (77%) or white (19%) persons, which probably makes it more difficult to generalize to other races, such as Asian. We follow the evaluation method proposed by Guo and Mu (2014), which has been used in many other works. In this method, the input images are partitioned into 3 disjoint sets: \(S_1\) and \(S_2\) of 10,530 images each, and \(S_3\) of 34,074 images. The racial and gender composition of \(S_1\) and \(S_2\) is the same: about 3 times more images of males than females and the same number of white and black people. Other races are omitted. More exactly, \(|MB|=|MW|={3980}\), \(|FB|=|FW|={1285}\). The remaining images constitute the set \(S_3\), which is composed as follows: \(|MB|=28{,}872\), \(|FB|={3187}\), \(|MW|=1\), \(|FW|=28\), \(|MA|=141\), \(|MH|={1667}\), \(|MO|=44\), \(|FA|=13\), \(|FH|=102\), and \(|FO|={19}\). The evaluation is done twice by using either \(S_1\) and \(S_1\text {-test} {\mathop {=}\limits ^{{\mathrm {def}}}}S_2 + S_3\) or \(S_2\) and \(S_2\text {-test} {\mathop {=}\limits ^{{\mathrm {def}}}}S_1 + S_3\) as training and test sets, respectively. We pre-process the input images in two steps: pose normalization and face sampling (Fig. 2). The pose-normalization step fixes the position of the eyes, ensuring that: (a) the eye line is horizontal, (b) the inter-eye distance is constant, and (c) the output resolution is 256\(\times \)260 pixels. After pose normalization, a face sampling step selects the head area only, enhances the contrast, and scales down the image to 96\(\times \)96 pixels. In addition to the \(S_1\), \(S_2\), and \(S_3\) datasets, three extended datasets (DR, S, and T) are defined in this work: A DR-dataset is used to train HiGSFA to perform dimensionality reduction, an S-dataset is used to train the supervised step on top of HiGSFA (a Gaussian classifier), and a T-dataset is used for testing. The DR and S-datasets are created using the same set of training images (either \(S_1\) or \(S_2\)), and the T-dataset using the corresponding test images, either \(S_1\text {-test}\) or \(S_2\text {-test}\). The images of the DR and S-datasets go through a random distortion step during face sampling, which includes a small random translation of at most \(\pm 1.4\) pixels, a rotation of at most \(\pm 2\) degrees, a rescaling of \(\pm 4\%\), and small fluctuations in the average color and contrast. The exact distortions are sampled uniformly from their respective ranges. Although these small distortions are frequently imperceptible, they teach HiGSFA to become invariant to small errors during image normalization and, given its feature specificity, they are necessary to improve generalization to test data. Other algorithms that use pre-computed features, such as BIF, or particular structures (e.g., convolutional layers, max pooling) are mostly invariant to such small transformations by construction (e.g., Guo and Mu 2014). Distortions also allow us to increase the number of training images. The images of the DR-dataset are used 22 times, each time using a different random distortion, and those of the S-dataset 3 times, resulting in 231,660 and 31,590 images, respectively. The images of the T-dataset are not distorted and are used only once.
A multi-label training graph for learning age, gender, and race

We create an efficient training graph by combining three pre-defined graphs: a serial graph for age estimation and two clustered graphs (one for gender and the other for race classification).

Clustered training graphs

The clustered graph generates features useful for classification that are equivalent to those of FDA (see Klampfl and Maass 2010, also compare Berkes 2005a and Berkes 2005b). This graph is illustrated in Fig. 9. The optimization problem associated with this graph explicitly demands that samples from the same class should be mapped to similar outputs. If C is the number of classes, \(C-1\) output (slow) features can be extracted and passed to a standard classifier, which computes the final class estimate.

Fig. 9 Illustration of a clustered training graph for gender classification with 7 images of females and 6 of males. Each vertex represents an image and edges represent transitions. In general, pairs of images \({\mathbf {x}}(n)\) and \({\mathbf {x}}(n')\), with \(n \ne n'\), that belong to the same class \(s \in \{\text {F}, \text {M} \}\) are connected with an edge weight \(\gamma _{n,n'} = 1/(N_s-1)\), where \(N_s\) is the number of images in class s. This results in 2 fully connected subgraphs. Images of different genders are not connected. The weight of all vertices is equal to one. For the actual experiments, we use \(N_F = 56{,}540\) and \(N_M = 175{,}120\).

The graph used for gender classification is a clustered graph that has only two classes (female/male) of \(N_F = 56{,}540\) and \(N_M= 175{,}120\) samples, respectively. The graph used for race classification is similar to the graph above: Only two classes are considered (B and W), and the number of samples per class is \(N_B = N_W = 115{,}830\).

Serial training graph for age estimation

Serial graphs have been described in Sect. 5.2.3. To extract age-related features, we create a serial graph with \(L=32\) groups, where each group has 7238 images.

Efficient graph for age, race, and gender estimation

We use again the method for multiple label learning (Escalante-B. and Wiskott 2016) to learn age, race, and gender labels, by constructing a graph \(\mathcal {G}_\text {3L}\) that combines a serial graph for age estimation, a clustered graph for gender, and a clustered graph for race. Whereas the vertex weights of the clustered graph are constant, the vertex weights of the serial graph are not (first and last groups have smaller vertex weights), but we believe this does not affect the accuracy of the combined graph significantly. For comparison purposes, we also create a serial graph \(\mathcal {G}_{\text {1L}}\) that only learns age. The graph combination method yields a compact feature representation. For example, one can combine a clustered graph for gender (M or F) estimation and another for race (B or W). The first 2 features learned from the resulting graph are then enough for gender and race classification. Alternatively, one could create a clustered graph with four classes (MB, MW, FB, FW), but to ensure good classification accuracy one must keep 3 features instead of 2. Such a representation would be impractical for larger numbers of classes. For example, if the number of classes were \(C_1=10\) and \(C_2=12\), one would need to extract \(C_1 C_2 - 1 = 119\) features, whereas with the proposed graph combination, one would only need to extract \((C_1-1) + (C_2 -1) = 20\) features.
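For completeness, the clustered graph can be sketched in the same style as the serial graph above (again our own minimal sketch; the weights follow the Fig. 9 caption, and a practical implementation would exploit the cluster structure instead of enumerating all within-class edges):

```python
import numpy as np

def clustered_graph(class_ids):
    """Vertex weights and edge weights of a clustered training graph.

    class_ids: (N,) class of each sample. Samples of the same class are fully
    connected with edge weight 1/(N_s - 1); there are no edges across classes.
    """
    v = np.ones(len(class_ids))                       # all vertex weights are 1
    edges = {}
    for s in np.unique(class_ids):
        members = np.flatnonzero(class_ids == s)
        w = 1.0 / (len(members) - 1)
        for i, a in enumerate(members):
            for b in members[i + 1:]:
                edges[(int(a), int(b))] = w
    return v, edges
```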
Supervised post-processing We use the first 20 or fewer features extracted from the S-dataset to train three separate Gaussian classifiers, one for each label. For race and gender only two classes are considered (B, W, M, F). For age, the images are ordered by increasing age and partitioned in 39 classes of the same size. This hyperparameter has been tuned independently of the number of groups in the age graph, which is 32. The classes have average ages of \(\{16.6, 17.6, 18.4, \dots , 52.8, 57.8\} \) years. To compute these average ages, as well as to order the samples by age in the serial graph, the exact birthday of the persons is used, representing age with a day resolution (e.g., an age may be expressed as 25.216 years). The final age estimation (on the T-dataset) is computed using the soft-GC method (14), except that 39 groups are used instead of 50. Moreover, to comply with the evaluation protocol, we use integer ground-truth labels and truncate the age estimates to integers. Evaluated algorithms We compare HiGSFA to other algorithms: HGSFA, PCA, and state-of-the-art age-estimation algorithms. The structure of the HiGSFA and HGSFA networks is described in Table 4. In both networks, the nodes are simply an instance of iGSFA or GSFA preceded by different linear or nonlinear expansion functions, except in the first layer, where PCA is applied to the pixel data to preserve 20 out of 36 principal components prior to the expansion. The method used to scale the slow features is the sensitivity method, described in "Appendix B". The hyperparameters have been hand-tuned to achieve best accuracy on age estimation using educated guesses, sets \(S_1\), \(S_2\) and \(S_3\) different to those used for the evaluation, and fewer image multiplicities to speed up the process. The proposed HGSFA/HiGSFA networks are different in several aspects from SFA networks used so far (e.g., Franzius et al. 2007). For example, to improve feature specificity at the lowest layers, no weight sharing is used. Moreover, the input to the nodes (fan-in) originates mostly from the output of 3 nodes in the preceding layer (3\(\times \)1 or 1\(\times \)3). Such small fan-ins reduce the computational cost because they minimize the input dimensionality. The resulting networks have 10 layers. Table 4 Description of the HiGSFA and HGSFA networks The employed expansion functions consist of different nonlinear functions on subsets of the input vectors and include: (1) The identity function \({\textsc {I}}({\mathbf {x}}) = {\mathbf {x}}\), (2) quadratic terms \({\textsc {QT}}({\mathbf {x}}) {\mathop {=}\limits ^{{\mathrm {def}}}}\{ x_i x_j \}_{i,j=1}^I\), (3) a normalized version of \({\textsc {QT}}\): \({\textsc {QN}}({\mathbf {x}}) {\mathop {=}\limits ^{{\mathrm {def}}}}\{ \frac{1}{1+||{\mathbf {x}}||^2} x_i x_j \}_{i,j=1}^I\), (4) the terms \({\textsc {0.8ET}}({\mathbf {x}}) {\mathop {=}\limits ^{{\mathrm {def}}}}\{ |x_i|^{0.8} \}_{i=1}^I\) of the \({\textsc {0.8Exp}}\) expansion, and (5) the function \({\textsc {max2}}({\mathbf {x}}) {\mathop {=}\limits ^{{\mathrm {def}}}}\{ \max (x_i,x_{i+1}) \}_{i=1}^{I-1}\). The \({\textsc {max2}}\) function is proposed here inspired by state-of-the-art CNNs for age estimation (Yi et al. 2015; Xing et al. 2017) that include max pooling or a variant of it. 
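One possible NumPy implementation of these expansion functions is sketched below (the helper names are ours; for \({\textsc {QT}}\) only the terms with \(i \le j\) are kept, which avoids duplicate products):

```python
import numpy as np

def identity(x):                       # I(x)
    return x

def qt(x):                             # QT(x): quadratic terms x_i * x_j, i <= j
    return np.array([x[i] * x[j] for i in range(len(x)) for j in range(i, len(x))])

def qn(x):                             # QN(x): normalized quadratic terms
    return qt(x) / (1.0 + x @ x)

def et08(x):                           # 0.8ET(x): the |x_i|**0.8 terms of 0.8Exp
    return np.abs(x) ** 0.8

def max2(x):                           # max2(x): maxima of adjacent components
    return np.maximum(x[:-1], x[1:])

# First-layer expansion of the HiGSFA network (as stated in the text below),
# applied to an 18-dimensional node input x:
def expansion_layer1(x):
    return np.concatenate([identity(x[:18]), et08(x[:15]), max2(x[:17]), qt(x[:10])])
```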
As a concrete example of the nonlinear expansions employed by the HiGSFA network, the expansion of the first layer is \({\textsc {I}}(x_1, \dots , x_{18}) \, | \, {\textsc {0.8ET}}(x_1, \dots , x_{15}) \,|\, {\textsc {max2}}(x_1, \dots , x_{17}) \,|\, {\textsc {QT}}(x_1, \dots , x_{10})\), where | indicates vector concatenation. The expansions used in the remaining layers can be found in the available source code. The parameter \(\varDelta _T\) of layers 3 to 10 is set to 1.96. \(\varDelta _T\) is not used in layers 1 and 2; instead, the number of slow features is fixed to 3 and 4, respectively. The number of features given to the supervised algorithm, shown in Table 5, has been tuned for each DR algorithm and supervised problem.

Table 5 Number of output features passed to the supervised step, a Gaussian classifier

Since the data dimensionality allows it, PCA is applied directly (we did not resort to hierarchical PCA) to provide more accurate principal components and smaller reconstruction errors. The results of HiGSFA, HGSFA, and PCA (as well as other algorithms, where appropriate) are presented from three angles: feature slowness, age estimation error, and reconstruction error. Individual scores are reported as \(a \pm b\), where a is the average over the test images (\(S_1\text {-test}\) and \(S_2\text {-test}\)), and b is the standard error of the mean (i.e., half the absolute difference).

Feature slowness

The weighted \(\varDelta \) values of GSFA (Eq. 1) are denoted here as \(\varDelta ^{\text {DR},\mathcal {G}_\text {3L}}_j\) and depend on the graph \(\mathcal {G}_\text {3L}\), which in turn depends on the training data and the labels. To measure slowness (or rather fastness) of test data (T), standard \(\varDelta \) values are computed using the images ordered by increasing age label, \(\varDelta ^{\text {T,lin}}_j {\mathop {=}\limits ^{{\mathrm {def}}}}\frac{1}{N-1}\sum _n (y_j(n+1)-y_j(n))^2\). The last expression is equivalent to a weighted \(\varDelta \) value using a linear graph (Fig. 3b). In all cases, the features are normalized to unit variance before computing their \(\varDelta \) values to allow for fair comparisons in spite of the feature scaling method. Table 6 shows \(\varDelta ^{\text {DR},\mathcal {G}_\text {3L}}_{1,2,3}\) (resp. \(\varDelta ^{\text {T,lin}}_{1,2,3}\)), that is, the \(\varDelta \) values of the three slowest features extracted from the DR-dataset (resp. T-dataset) using the graph \(\mathcal {G}_\text {3L}\) (resp. a linear graph). HiGSFA maximizes slowness better than HGSFA. The \(\varDelta ^\text {T,lin}\) values of the PCA features are larger, which is not surprising, because PCA does not optimize for slowness. Since \(\varDelta ^{\text {DR},\mathcal {G}_\text {3L}}\) and \(\varDelta ^\text {T,lin}\) are computed from different graphs, they should not be compared with each other. \(\varDelta ^\text {T,lin}\) considers transitions between images with the same or very similar ages but arbitrary race and gender. \(\varDelta ^{\text {DR},\mathcal {G}_\text {3L}}\) only considers transitions between images having at least one of (a) the same gender, (b) the same race, or (c) different but consecutive age groups.
Table 6 Average delta values of the first three features extracted by PCA, HGSFA, and HiGSFA on either training (DR) or test (T) data Age estimation error We treat age estimation as a regression problem with estimates expressed as an integer number of years, and use three metrics to measure age estimation accuracy: (1) the mean absolute error (MAE) (see Geng et al. 2007), which is the most frequent metric for age estimation, (2) the root mean squared error (RMSE), which is a common loss function for regression. Although it is sensitive to outliers and has been barely used in the literature on age estimation, some applications might benefit from its stronger penalization of large estimation errors. And (3) cumulative scores (CSs, see Geng et al. 2007), which indicate the fraction of the images that have an estimation error below a given threshold. For instance, \(\mathrm {CS}(5)\) is the fraction of estimates (e.g., expressed as a percentage) having an error of at most 5 years w.r.t. the real age. Table 7 Accuracy in years of state-of-the-art algorithms for age estimation on the MORPH-II database (test data) The accuracies are summarized in Table 7. The MAE of HGSFA is 3.921 years, which is better than that of BIF+3Step, BIF+KPLS, BIF+rKCCA, and a baseline CNN, similar to BIF+rKCCA+SVM, and worse than the MCNNs. The MAE of HiGSFA is 3.497 years, clearly better than HGSFA and also better than MCNNs. At first submission of this publication, HiGSFA achieved the best performance, but newer CNN-based methods have now improved the state-of-the-art performance. In particular, MRCNN yields an MAE of 3.48 years, and \(\text {Net}^\text {VGG}_\text {hybrid}\) an MAE of only 2.96 years. In contrast, PCA has the largest MAE, namely 6.804 years. MCNN denotes a multi-scale CNN (Yi et al. 2015) that has been trained on images decomposed as 23 48\(\times \)48-pixel image patches. Each patch has one out of four different scales and is centered on a particular facial landmark. A similar approach was followed by Liu et al. (2017) to propose the use of a multi-region CNN (MRCNN). Xing et al. (2017) evaluated several CNN architectures and loss functions and propose the use of special-purpose hybrid architecture (\(\text {Net}^\text {VGG}_\text {hybrid}\)) with five VGG-based branches. One branch estimates a distribution on demographic groups (black female, black male, white female, white male), and the remaining four branches estimate age, where each branch is specialized in a single demographic group. The demographic distribution is used to combine the outputs of the four branches to generate the final age estimate. In an effort to improve our performance, we also tried support vector regression (SVR) as supervised post-processing and computed an average age estimate using the original images and their mirrored version (HiGSFA-mirroring). This slightly improved the estimation to 3.412 years, becoming better than MRCNN. Mirroring is also done by MCNN and \(\text {Net}^\text {VGG}_\text {hybrid}\). Detailed cumulative scores for HiGSFA and HGSFA are provided in Table 8, facilitating future comparisons with other methods. The RMSE of HGSFA on test data is 5.148 years, whereas HiGSFA yields an RMSE of 4.583 years, and PCA an RMSE of 8.888 years. The RMSE of other approaches does not seem to be available. The poor accuracy of PCA for age estimation is not surprising, because principal components might lose wrinkles, skin imperfections, and other information that could reveal age. 
Another reason is that principal components are too unstructured to be properly untangled by the soft GC method, in contrast to slow features, which have a very specific and simple structure. Table 8 Percentual cumulative scores (the larger the better) for various maximum allowed errors ranging from 0 to 30 years The behavior of the estimation errors of HiGSFA is plotted in Fig. 10 as a function of the real age. On average, older persons are estimated much younger than they really are. This is in part due to the small number of older persons in the database, and because the oldest class used in the supervised step (soft-GC) has an average of about 58 years, making this the largest age that can be estimated by the system. The MAE is surprisingly low for persons below 45 years. The most accurate estimation is an MAE of only 2.253 years for 19-year-old persons. The average age estimates of HiGSFA are plotted as a function of the real age. The MAE is also computed as a function of the real age and plotted as \(\textit{age} \pm \text {MAE}(\textit{age})\). Reconstruction error A reconstruction error is a measure of how much information of the original input is contained in the output features. In order to compute it, we assume a linear global model for input reconstruction. Let \({\mathbf {X}}\) be the input data and \({\mathbf {Y}}\) the corresponding set of extracted features. A matrix \({\mathbf {D}}\) and a vector \({\mathbf {c}}\) are learned from the DR-dataset using linear regression (ordinary least squares) such that \(\hat{{\mathbf {X}}} {\mathop {=}\limits ^{{\mathrm {def}}}}{\mathbf {D}} {\mathbf {Y}} + {\mathbf {c}}{\mathbf {1}}^T\) approximates \({\mathbf {X}}\) as closely as possible, where \({\mathbf {1}}\) is a vector of N ones. Thus, \(\hat{{\mathbf {X}}}\) contains the reconstructed samples (i.e. \(\hat{{\mathbf {x}}}_n {\mathop {=}\limits ^{{\mathrm {def}}}}{\mathbf {D}} {\mathbf {y}}_n + {\mathbf {c}}\) is the reconstruction of the input \({\mathbf {x}}_n\) given its feature representation \({\mathbf {y}}_n\)). Figure 2 shows examples of face reconstructions using features extracted by different algorithms. The model is linear and global, which means that output features are mapped to the input domain linearly. For PCA this gives the same result as the usual multiplication with the transposed projection matrix plus image average. An alternative (local) approach for HiGSFA would be to use the linear reconstruction algorithm of each node to perform reconstruction from the top of the network to the bottom, one node at a time. However, such a local reconstruction approach is less accurate than the global one. The normalized reconstruction error, computed on the T-dataset, is then defined as $$\begin{aligned} e_\text {rec} {\mathop {=}\limits ^{{\mathrm {def}}}}\frac{\sum _{n=1}^{N} || ({\mathbf {x_n}} - \hat{{\mathbf {x}}_n}) || ^ 2 }{\sum _{n=1}^{N} || ({\mathbf {x_n}} - \bar{{\mathbf {x}}}) || ^ 2 } \, , \end{aligned}$$ which is the ratio between the energy of the reconstruction error and the variance of the test data except for a factor \(N/(N-1)\). Table 9 Reconstruction errors on test data using 75 features extracted by various algorithms Reconstruction errors of HGSFA, HiGSFA and PCA using 75 features are given in Table 9. The constant reconstruction \(\bar{{\mathbf {x}}}\) (chance level) is the baseline with an error of 1.0. As expected, HGSFA does slightly better than chance level, but worse than HiGSFA, which is closer to PCA. 
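The global linear reconstruction model described above can be fitted by ordinary least squares. The sketch below is our own minimal version; it assumes samples are stored as columns, consistent with the notation \(\hat{{\mathbf {X}}} = {\mathbf {D}} {\mathbf {Y}} + {\mathbf {c}}{\mathbf {1}}^T\), and the function names are placeholders.

```python
import numpy as np

def fit_global_linear_reconstruction(X, Y):
    """Learn D and c such that X_hat = D @ Y + c @ 1^T approximates X in the
    least-squares sense.  X: (I, N) inputs as columns, Y: (J, N) features as columns."""
    Y_aug = np.vstack([Y, np.ones((1, Y.shape[1]))])           # append a bias row
    W, *_ = np.linalg.lstsq(Y_aug.T, X.T, rcond=None)          # solve Y_aug^T W^T ~ X^T
    W = W.T
    return W[:, :-1], W[:, -1:]                                # D, c

def normalized_reconstruction_error(X, X_hat):
    """e_rec = sum_n ||x_n - x_hat_n||^2 / sum_n ||x_n - x_bar||^2 (chance level ~ 1.0)."""
    x_bar = X.mean(axis=1, keepdims=True)
    return np.sum((X - X_hat) ** 2) / np.sum((X - x_bar) ** 2)
```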
PCA yields the best possible features for the given linear global reconstruction method; its reconstruction error is lower than that of HiGSFA by 0.127. Of the 75 output features of HiGSFA, 8 are slow features (slow part) and the remaining 67 are reconstructive. If one uses 67 features instead of 75, PCA yields a reconstruction error of 0.211, which is still better because the PCA features are computed globally.

HiGSFA network with HGSFA hyperparameters. To verify that the better age-estimation performance of HiGSFA is not simply due to different hyperparameters, we evaluate an HiGSFA network that uses the hyperparameters of the HGSFA network (the only difference being the use of iGSFA nodes instead of GSFA nodes). The hyperparameter \(\varDelta _T\), not present in HGSFA, is set as in the tuned HiGSFA network. As expected, the change of hyperparameters affected the performance of HiGSFA: the MAE increased to 3.72 years, and the RMSE increased to 4.80 years. Although the suboptimal hyperparameters increased the estimation errors of HiGSFA, it was still clearly superior to HGSFA.

Sensitivity to the delta threshold \(\varDelta _T\). The influence of \(\varDelta _T\) on estimation accuracy and numerical stability is evaluated by testing different values of \(\varDelta _T\). For simplicity, the same \(\varDelta _T\) is used from layers 3 to 10 in this experiment (\(\varDelta _T\) is not used in layers 1 and 2, where the number of features in the slow part is constant and equal to 3 and 4 features, respectively). The performance of the algorithm as a function of \(\varDelta _T\) is shown in Table 10. The \(\varDelta _T\) yielding minimum MAE and used in the optimized architecture is 1.96. Table 10 Performance of HiGSFA on the MORPH-II database using different \(\varDelta _T\) (default value is \(\varDelta _T=1.96\)) The average number of slow features in the third layer changes moderately depending on the value of \(\varDelta _T\), ranging from 2.87 to 4.14 features, and the final metrics change only slightly. This shows that the parameter \(\varDelta _T\) is not critical and can be tuned easily.

Evaluation on the FG-NET database. The FG-NET database (Cootes 2004) is a small database with 1002 facial images taken under uncontrolled conditions (e.g., many are not frontal) and includes identity and gender annotations. Due to its small size, it is unsuitable for evaluating HiGSFA directly. However, FG-NET is used here to investigate the capability of HiGSFA to generalize to a different test database. The HiGSFA \((\mathcal {G}_\text {3L})\) network that has been trained with images of the MORPH-II database (either with the set \(S_1\) or \(S_2\)) is tested using images of the FG-NET database. For this experiment, images outside the original age range from 16 to 77 years are excluded. For age estimation, the MAE is 7.32 ± 0.08 years and the RMSE is 9.51 ± 0.13 years (using 4 features for the supervised step). For gender and race estimation, the classification rates (5 features) are 80.85% ± 0.95% and 89.24% ± 1.06%, respectively. The database does not include race annotations, but all inspected subjects appear to be closer to white than to black; thus, we assumed that all test persons are white. The most comparable cross-database experiment known to us is a system (Ni et al. 2011) trained on a large database of images from the internet and tested on FG-NET. By restricting the ages to the same 16–77 year range used above, their system achieves an MAE of approximately 8.29 years.
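For reference, the three accuracy measures used in the evaluation above (MAE, RMSE and cumulative scores) can be computed in a few lines. The sketch below is our own, not code from the authors; the function name and the convention of reporting CS as a percentage are assumptions.

```python
import numpy as np

def age_estimation_metrics(true_ages, estimated_ages, threshold=5):
    """Return MAE, RMSE (both in years) and CS(threshold) in percent."""
    errors = np.abs(np.asarray(estimated_ages, dtype=float) -
                    np.asarray(true_ages, dtype=float))
    mae = errors.mean()
    rmse = np.sqrt((errors ** 2).mean())
    cs = 100.0 * (errors <= threshold).mean()   # e.g. CS(5): errors of at most 5 years
    return mae, rmse, cs
```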
In this article, we propose the use of information preservation to enhance HSFA and HGSFA. The resulting algorithms are denoted by HiSFA and HiGSFA, respectively, and significantly improve global slowness, input reconstruction and in supervised learning settings also label estimation accuracy. We have analyzed the advantages and limitations of HSFA and HGSFA networks, particularly the phenomena of unnecessary information loss and poor input reconstruction. Unnecessary information loss occurs when a node in the network prematurely discards information that would have been useful for slowness maximization in another node higher up in the hierarchy. Poor input reconstruction refers to the difficulty of approximating an input accurately from its feature representation. We show unnecessary information loss is a consequence of optimizing slowness locally, yielding suboptimal global features. HiSFA and HiGSFA improve the extracted features to address these shortcomings. In the conclusions below we focus on HiGSFA for simplicity, although most of them also apply to HiSFA. The feature vectors computed by iGSFA nodes in an HiGSFA network have two parts: a slow and a reconstructive part. The features of the slow part follow a slowness optimization goal and are slow features (in the sense of SFA) transformed by a linear scaling. The features of the reconstructive part follow the principle of information preservation (i.e. maximization of mutual information between the output and input data), which we implement in practice as the minimization of a reconstruction error. A parameter \(\varDelta _T\) (\(\varDelta \)-threshold) balances the lengths of the slow and reconstructive parts, consisting of J and \(D-J\) features, respectively, where D is the output dimensionality and J is the number of slow features selected having \(\varDelta < \varDelta _T\). Parameter \(\varDelta _T\) thus controls the composition of the feature vector. A small \(\varDelta _T\) results in more reconstructive features and a large \(\varDelta _T\) results in more slow features. In particular, when \(\varDelta _T < 0\), iGSFA becomes equivalent to PCA, and when \(\varDelta _T \ge 4.0\), iGSFA becomes equivalent to GSFA except for a linear transformation (this assumes positive edge weights and a consistent graph). Theory justifies fixing \(\varDelta _T\) slightly smaller than 2.0 (Sect. 3.2), resulting in some features being similar to those of GSFA and other features being similar to those of PCA (on residual data). Experiment 1 on unsupervised feature extraction from the view of a simulated rat shows that HiSFA yields significantly slower features than HSFA, and it is especially resistant to overfitting, allowing the extraction of slower features using much fewer training samples. Due to its improved feature slowness, HiSFA may be useful to improve simulations for neuroscience based on HSFA and SFA. A method proposed by Escalante-B. and Wiskott (2016) for the combination of training graphs is used for the first time on real data to estimate either pose or subject (age/race/gender) attributes by combining pre-defined training graphs for the individual attributes into a single training graph, allowing efficient and accurate learning of multiple labels. In Experiment 2 on pose estimation, we experienced the problem that the HGSFA and HiGSFA networks concentrated too much on the easiest labels, yielding very few features that convey information about the difficult labels. 
This was easily solved by increasing the weight of the graphs corresponding to less represented labels. For this experiment HiGSFA yielded better label estimation accuracy than HGSFA even though it was constrained to hyperparameters previously tuned for HGSFA. Experiment 3 on age estimation (where the hyperparameters are unconstrained) shows that HiGSFA is superior to HGSFA in terms of feature slowness, input reconstruction and label estimation accuracy. Moreover, HiGSFA offers a competitive accuracy for age estimation, surpassing approaches based on bio-inspired features and some convolutional neural networks, such as a multi-scale CNNs (MCNN), which HiGSFA outperforms by 48.5 days (\(\approx \) 1.5 months) mean absolute error. However, HiGSFA is not as accurate as current state-of-the-art CNNs for age estimation (Xing et al. 2017), which have a problem-specific structure. The improvement of HiGSFA over HGSFA is large: 154 days (\(\approx \) 5 months). This is a significant technical and conceptual advance. The next sections provide additional insights—mostly conceptual—into the proposed approach, the obtained results, and future work. Remarks on the approach We have shown that a single GSFA node in a network cannot always identify from its input which aspects or features contribute to the computation of global slow features. Hence, some of these features may be incorrectly discarded if the data dimensionality is reduced. Therefore, we resort in HiGSFA to a reconstruction goal to compress the local input and increase the chances that the local output features include information relevant to extract global slow features. Besides the method used in HiGSFA to combine the slowness principle and information preservation, another method is to optimize a single objective function that integrates both criteria, favoring directions that are slow and have a large variance. However, we found in previous experiments that balancing these two criteria is difficult in practice. In addition, splitting the two types of features allows to keep SFA nonlinear and PCA linear. HiGSFA should not be seen merely as a more sample efficient version of HGSFA, because it also provides a better feature representation and yields slower features. Even assuming unlimited training data and computational resources, features extracted by HGSFA would not necessarily reach the slowness achieved by HiGSFA. This hypothetical scenario is free of overfitting, but information loss would only decrease partially in HGSFA, because the main cause of this problem is not overfitting but blind optimization of slowness locally. One can observe this phenomenon in Table 1 (40 k), where HSFA does not reach the feature slowness of HiSFA even though overfitting is minimal. Network hyperparameters and training times By selecting the training graph and network structure appropriately, the computational complexity of HiGSFA (and other hierarchical versions of SFA) is linear w.r.t. the number of samples and their dimensionality, resulting in feasible training times, see "Appendix A". Training a single HiGSFA network (231, 660 images of 96\(\times \)96 pixels) takes only 10 hours, whereas HiSFA takes about 6 hours (including the time needed for data loading, the supervised step, training, and testing) on a single computer (24 virtual cores Xeon-X7542 @ 2.67GHz and 128 GB of RAM) without GPU computing. However, the algorithm could also benefit from GPUs or distributed computing, because nodes within the same layer can be trained independently. 
For comparison, the system of Guo and Mu (2014) takes 24.5 hours (training and testing) for fewer images and lower resolution. Training times of the various CNNs were not included in the publications. HiGSFA is more accurate than HGSFA even when using output dimensionalities and network hyperparameters that have been tuned for the latter network. However, HiGSFA can yield even higher accuracies if one uses larger output dimensionalities. This can be explained by various factors: (a) In both networks, the input dimensionality of a single GSFA node is \(I'\) (the expanded dimension). In HiGSFA the input dimensionality of a single PCA node is I, where typically \(I \ll I'\). Hence, the reconstructive features of HiGSFA may overfit less than the slow features of HGSFA that they replace. (b) In HGSFA, the features of all the nodes are attracted to the same optimal free responses, whereas in HiGSFA the reconstructive part is attracted to the local principal components, which are different for each node. Thus, HGSFA layers might potentiate overfitting more than HiGSFA layers. (c) HGSFA networks may benefit less from additional features computed by the nodes, because additional quickly varying features are frequently noise-like and cause more overfitting in later nodes without improving slowness or reconstruction significantly. In order to train iSFA and iGSFA, one must choose the \(\varDelta _T\) hyperparameter. A too small value \(\varDelta _T \approx 0.0\) results in only PCA features, whereas a too large value \(\varDelta _T \approx 4.0\) results in only slow features. The recommendation is to start with a value slightly smaller than 2.0. One can then tune this hyperparameter to improve the target metric. The control experiment in Sect. 5.3.4 shows that this tuning is straightforward. Alternatively, one can directly specify the number of slow features \(J'\). Remarks on experiment 3 Age estimation from adult facial photographs appears to be an ideal problem to test the capabilities of HiGSFA. PCA is not very useful for this problem because PCs represent wrinkles, skin texture and other higher-frequency features poorly. Therefore, it is counter-intuitive that feature slowness can be improved by incorporating PCs in HiGSFA. Improvements on feature slowness on other supervised learning problems (such as digit and traffic sign recognition, and the estimation of gender, race, and face pose from images) are less conclusive, because for such problems a few PCs encode discriminative information relatively well. To estimate age, race, and gender simultaneously, we construct a graph \(\mathcal {G}_\text {3L}\) that combines three pre-defined graphs, encoding sensitivity to the particular labels, and favoring invariance to any other factor. Out of the 75 features extracted in the top-most node, 8 are slow and 67 are reconstructive. The best number of features passed to the supervised step ranges from 4 to 7 slow features, and the reconstructive part is not used. This shows that HiGSFA and HGSFA concentrate the label information in the first features. One can actually replace the iGSFA node on the top of the HiGSFA network by a regular GSFA node, so that all features are slow, without affecting the performance. The superiority in age estimation of HiGSFA over HGSFA is thus not due to the use of principal components in the final supervised step but to the higher quality of the slow features. 
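The role of \(\varDelta _T\) discussed above can be pictured with a small sketch of the output composition of a single iGSFA node: candidate slow features with Δ below the threshold form the slow part (up to the output dimensionality), and the rest of the output is filled with reconstructive, PCA-like features. This is a simplified illustration under our own assumptions, not the full iGSFA algorithm.

```python
import numpy as np

def split_output_dimensions(delta_values, output_dim, delta_threshold=1.96):
    """Given the Delta values of candidate slow features, return (J, D - J):
    the sizes of the slow and reconstructive parts of the node output."""
    deltas = np.sort(np.asarray(delta_values, dtype=float))
    J = int(np.sum(deltas[:output_dim] < delta_threshold))
    return J, output_dim - J

# Example: with a threshold slightly below 2.0, only a few features are typically
# slow enough, and the remaining output dimensions become reconstructive.
print(split_output_dimensions([0.3, 0.9, 1.5, 2.2, 2.8, 3.5], output_dim=5))  # (3, 2)
```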
The performance of HiGSFA for age estimation on the MORPH-II database is competitive with an MAE of 3.497 years (or 3.412 years for HiGSFA-mirroring). Previous state-of-the-art results include an MAE of 3.63 years using a multi-scale CNN (Yi et al. 2015) and 3.92 using BIF+rKCCA+SVM (Guo and Mu 2014). During the preparation of the manuscript, newer results have improved the state of the art to 3.48 years (Liu et al. 2017) and 2.96 years (Xing et al. 2017). The main goal of this research is not to provide best performance on any particular application. Our specific goal is to improve HSFA and HGSFA. In this sense, the central claim is that the new extensions are better regarding feature slowness, input reconstruction and (in the case of HiGSFA) label estimation accuracy. Input reconstruction from slow features Experiment 3 confirms that PCA is more accurate than HiGSFA at reconstruction using the same number of features (here 75), which was expected because PCA features are optimal when reconstruction is linear. From the 75 features extracted by HiGSFA, 67 are reconstructive (8 fewer than in PCA) and they are computed hierarchically (locally), in contrast to the PCA features, which are global. Thus, it is encouraging that the gap between PCA and HiGSFA at input reconstruction is moderate. In turn, HiGSFA is much more accurate than HGSFA, because reconstruction is the secondary goal of HiGSFA, whereas HGSFA does not pursue reconstruction. Since the HiGSFA network implements a nonlinear transformation, it is reasonable to employ nonlinear reconstruction algorithms. Nonlinear reconstruction can provide more accurate reconstructions in theory, but in our experience it is difficult to train such type of algorithms well enough to perform better on test data than the simpler global linear reconstruction algorithm. An algorithm for nonlinear reconstruction that minimizes the feature error \(e_\text {feat}\) has been described in Sect. 4.5. However, since the number of dimensions is reduced by the network, one can expect many samples with feature error \(e_\text {feat}=0\) (or \(e_\text {feat}\) minimal) that differ from any valid input sample fundamentally (i.e., do not belong to the face manifold). To correct this problem, one might need to consider the input distribution and limit the range of valid reconstructions. Another option is to resort to generative adversarial networks (Goodfellow et al. 2014; Radford et al. 2015; Denton et al. 2015), which have been used in the context of CNNs to generate random inputs with excellent image quality. For HiGSFA networks, such a promising approach might also be applicable to do nonlinear reconstruction, if properly adapted. Age and pose estimation accuracy may be improved by using more training samples and more complex hierarchical networks, for instance by increasing the overlap of the receptive fields and using more complex nonlinearities. For age estimation one could use true face-distortion methods instead of simple image transformations. One key factor for the performance of MCNN and MRCNN is the use of receptive fields centered at specific facial points (e.g., see entries 'MCNN no align' and 'MCNN' in Table 7). For \(\text {Net}^\text {VGG}_\text {hybrid}\) the use of a face normalization procedure and an image distortion method may also be relevant. These ideas could also be applied to HiGSFA, and might particularly boost generalization. 
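A nonlinear reconstruction in the spirit of the feature-error minimization mentioned above could be set up as a generic optimization problem. The sketch below uses SciPy and treats the trained network as a black-box feature extractor; it is only an illustration of the idea (all names are placeholders) and, as discussed above, it would need additional constraints to keep the reconstruction on the manifold of valid inputs.

```python
import numpy as np
from scipy.optimize import minimize

def nonlinear_reconstruction(extract_features, target_features, x0):
    """Search for an input x whose extracted features match target_features.

    extract_features: callable mapping an input vector to a feature vector
                      (e.g., the trained hierarchical network).
    target_features:  feature vector y to be reconstructed.
    x0:               starting point, e.g. the global linear reconstruction D y + c.
    """
    def feature_error(x):
        diff = extract_features(x) - target_features
        return float(np.dot(diff, diff))

    # Nelder-Mead is derivative-free but slow; for high-dimensional images a
    # gradient-based method or a learned inverse mapping would be needed.
    result = minimize(feature_error, x0, method="Nelder-Mead")
    return result.x
```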
Another direction of research is to investigate a possible connection between information preservation in HiGSFA and other methods used in CNNs, particularly the use of shortcut connections in ResNets (He et al. 2016) and Dense Networks (Huang et al. 2017). These connections provide a new information channel that allows a layer access to the unmodified input of a previous layer. Although these methods are different and have been motivated by other ideas, it appears they may also be justified by the information preservation principle. We believe it is possible to develop successful learning algorithms based on a few simple but strong learning principles and heuristics, and this is the approach that we try to pursue with HiGSFA. An algorithm that might be strong but cannot be understood and justified analytically would be of less interest to us. HiGSFA follows two of the suggestions by Krüger et al. (2013) based on findings from neuroscience regarding the primate visual system useful for successful computer vision, namely, hierarchical processing and information-channel separation. The slow and reconstructive parts of the extracted features can be seen as two information channels: The first one encodes information representing the slow parameters, and the second one encodes information representing the remaining aspects of the input. The slow part can be further decomposed. For example, in Experiment 3 one can observe one or more features for each label; the 3rd slowest feature is mostly related to race, the 4th one to gender, and all the remaining features to age. This work explores the potential of bottom-up training and shows that it can be surprisingly accurate, making this type of architectures attractive for further research by exploring new heuristics and extensions. The proposed algorithm is general purpose (e.g., it does not know anything about face geometry), but it still provides competitive accuracy, at least for the age/race/gender estimation problem. The results show the improved versatility and robustness of the algorithm and make it a good candidate for many other problems of computer vision on high-dimensional data, particularly those lying at the intersection of image analysis, nonlinear feature extraction, and supervised learning. The problem is still feasible when N is small by applying singular value decomposition methods. However, a small number of samples \(N<I\) usually results in pronounced overfitting. H(X) is the average amount of information given by instances of a random variable X. I(X, Y) is the average amount of information that a random variable X gives about another random variable Y (or vice-versa). In other words, it denotes how much information is duplicated in X and Y on average. If \(I(X,Y)=0\), the variables are independent. https://github.com/AlbertoEsc/cuicuilco. Berkes, P. (2005a). Handwritten digit recognition with nonlinear Fisher discriminant analysis. In ICANN, LNCS (Vol. 3697, pp. 285–287). Berlin: Springer. Berkes, P. (2005b). Pattern recognition with Slow Feature Analysis. Cognitive Sciences EPrint Archive (CogPrints). Retrieved February 24, 2012 from http://cogprints.org/4104/. Berkes, P., & Wiskott, L. (2005). Slow Feature Analysis yields a rich repertoire of complex cell properties. Journal of Vision, 5(6), 579–602. MATH Google Scholar Blaschke, T., Zito, T., & Wiskott, L. (2007). Independent Slow Feature Analysis and nonlinear blind source separation. Neural Computation, 19(4), 994–1021. MathSciNet MATH Google Scholar Cootes, T. (2004). 
Face and gesture recognition research network (FG-NET) aging database. Retrieved June 17, 2009 from http://www-prima.inrialpes.fr/FGnet/. Denton, E. L., Chintala, S., Szlam, A., & Fergus, R. (2015). Deep generative image models using a Laplacian pyramid of adversarial networks. Advances in Neural Information Processing Systems, 28, 1486–1494. Escalante-B, A. N., & Wiskott, L. (2010). Gender and age estimation from synthetic face images with Hierarchical Slow Feature Analysis. In International conference on information processing and management of uncertainty in knowledge-based systems (pp. 240–249). Germany: Dortmund. Escalante-B, A. N., & Wiskott, L. (2012). Slow Feature Analysis: Perspectives for technical applications of a versatile learning algorithm. Künstliche Intelligenz [Artificial Intelligence], 26(4), 341–348. Escalante-B, A. N., & Wiskott, L. (2013). How to solve classification and regression problems on high-dimensional data with a supervised extension of Slow Feature Analysis. Journal of Machine Learning Research, 14, 3683–3719. Escalante-B, A. N., & Wiskott, L. (2016). Theoretical analysis of the optimal free responses of graph-based SFA for the design of training graphs. Journal of Machine Learning Research, 17(157), 1–36. Fink, M., Fergus, R., & Angelova, A. (2003). Caltech 10,000 web faces. Retrieved September 6, 2010 from http://www.vision.caltech.edu/Image_Datasets/Caltech_10K_WebFaces/. Fisher, R. A. (1936). The use of multiple measurements in taxonomic problems. Annals of Eugenics, 7, 179–188. Földiák, P. (1991). Learning invariance from transformation sequences. Neural Computation, 3(2), 194–200. Franzius, M., Sprekeler, H., & Wiskott, L. (2007). Slowness and sparseness lead to place, head-direction, and spatial-view cells. PLoS Computational Biology, 3(8), 1605–1622. MathSciNet Google Scholar Franzius, M., Wilbert, N., & Wiskott, L. (2011). Invariant object recognition and pose estimation with slow feature analysis. Neural Computation, 23(9), 2289–2323. Gao, W., Cao, B., Shan, S., Chen, X., Zhou, D., Zhang, X., et al. (2008). The CAS-PEAL large-scale Chinese face database and baseline evaluations. IEEE Transactions on Systems, Man, and Cybernetics, Part A, 38(1), 149–161. Geng, X., Zhou, Z. H., & Smith-Miles, K. (2007). Automatic age estimation based on facial aging patterns. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(12), 2234–2240. Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., et al. (2014). Generative adversarial nets. Advances in Neural Information Processing Systems, 27, 2672–2680. Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep learning. Cambridge: MIT Press. Guo, G., & Mu, G. (2010). Human age estimation: What is the influence across race and gender? In IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) (pp 71–78). Guo, G., & Mu, G. (2011). Simultaneous dimensionality reduction and human age estimation via kernel partial least squares regression. In Proc. of IEEE Conference on Computer Vision and Pattern Recognition (pp 657–664). Guo, G., & Mu, G. (2014). A framework for joint estimation of age, gender and ethnicity on a large database. Image and Vision Computing, 32(10), 761–770. He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016 (pp 770–778). Hinton, G. E. (1989). Connectionist learning procedures. 
Artificial Intelligence, 40(1–3), 185–234. Huang, G., Liu, Z., van der Maaten, L., & Weinberger, K. Q. (2017). Densely connected convolutional networks. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Huang, G. B., Ramesh, M., Berg, T., & Learned-Miller, E. (2007). Labeled faces in the wild: A database for studying face recognition in unconstrained environments. Tech. Rep. 07-49, University of Massachusetts, Amherst Klampfl, S., & Maass, W. (2010). Replacing supervised classification learning by Slow Feature Analysis in spiking neural networks. In Proc. of NIPS 2009: Advances in Neural Information Processing Systems (Vol. 22, pp. 988–996), MIT Press. Koch, P., Konen, W., & Hein, K. (2010). Gesture recognition on few training data using slow feature analysis and parametric bootstrap. In International Joint Conference on Neural Networks (pp. 1–8). Kompella, V. R., Luciw, M. D., & Schmidhuber, J. (2012). Incremental slow feature analysis: Adaptive low-complexity slow feature updating from high-dimensional input streams. Neural Computation, 24(11), 2994–3024. Krüger, N., Janssen, P., Kalkan, S., Lappe, M., Leonardis, A., Piater, J., et al. (2013). Deep hierarchies in the primate visual cortex: What can we learn for computer vision? IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8), 1847–1871. Kuhnl, T., Kummert, F., & Fritsch, J. (2011). Monocular road segmentation using slow feature analysis. In Intelligent Vehicles Symposium (pp. 800–806), IEEE Kumar, N., Belhumeur, P. N., & Nayar, S. K. (2008). FaceTracer: A search engine for large collections of images with faces. In European Conference on Computer Vision (ECCV) (pp 340–353). Liu, H., Lu, J., Feng, J., & Zhou, J. (2017). Group-aware deep feature learning for facial age estimation. Pattern Recognition, 66, 82–94. Mitchison, G. (1991). Removing time variation with the anti-Hebbian differential synapse. Neural Computation, 3(3), 312–320. Mohamed, N. M., & Mahdi, H. (2010). A simple evaluation of face detection algorithms using unpublished static images. In 10th International Conference on Intelligent Systems Design and Applications (pp. 1–5). Ni, B., Song, Z., & Yan, S. (2011). Web image and video mining towards universal and robust age estimator. IEEE Transactions on Multimedia, 13(6), 1217–1229. Phillips, P. J., Flynn, P. J., Scruggs, T., Bowyer, K. W., Chang, J., Hoffman, K., Marques, J., Min, J., & Worek, W. (2005). Overview of the face recognition grand challenge. In Proc. of IEEE Conference on Computer Vision and Pattern Recognition, IEEE Computer Society (Vol. 1, pp. 947–954) Radford, A., Metz, L., & Chintala, S. (2015). Unsupervised representation learning with deep convolutional generative adversarial networks. E-print arXiv:1511.06434 Ricanek Jr, K., & Tesafaye, T. (2006). Morph: A longitudinal image database of normal adult age-progression. In Proceedings of the 7th International Conference on Automatic Face and Gesture Recognition, IEEE Computer Society, FGR '06 (pp. 341–345) Rish, I., Grabarnik, G., Cecchi, G., Pereira, F., & Gordon, G. J. (2008). Closed-form supervised dimensionality reduction with generalized linear models. In Proc. of the 25th ICML (pp. 832–839), ACM. Schönfeld, F., & Wiskott, L. (2013). Ratlab: An easy to use tool for place code simulations. Frontiers in Computational Neuroscience, 7, 104. https://doi.org/10.3389/fncom.2013.00104. Sprekeler, H. (2011). On the relation of slow feature analysis and Laplacian eigenmaps. Neural Computation, 23(12), 3287–3302. 
Sprekeler, H., Michaelis, C., & Wiskott, L. (2007). Slowness: An objective for spike-timing-dependent plasticity? PLoS Computational Biology, 3(6), 1136–1148. Sprekeler, H., Zito, T., & Wiskott, L. (2014). An extension of Slow Feature Analysis for nonlinear blind source separation. Journal of Machine Learning Research, 15, 921–947. Sugiyama, M. (2006). Local Fisher discriminant analysis for supervised dimensionality reduction. In Proc. of the 23rd ICML (pp. 905–912). Sugiyama, M., Idé, T., Nakajima, S., & Sese, J. (2010). Semi-supervised local Fisher discriminant analysis for dimensionality reduction. Machine Learning, 78(1–2), 35–61. Tang, W., & Zhong, S. (2007). Computational methods of feature selection, Chapman and Hall/CRC, chap pairwise constraints-guided dimensionality reduction Wilbert, N. (2012). Hierarchical slow feature analysis on visual stimuli and top-down reconstruction. PhD thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät I Wiskott, L. (1998). Learning invariance manifolds. In Proc. of 5th Joint Symposium on Neural Computation, San Diego, CA, USA, Univ. of California (Vol. 8, pp. 196–203). Wiskott, L., & Sejnowski, T. (2002). Slow feature analysis: Unsupervised learning of invariances. Neural Computation, 14(4), 715–770. Xia, T., Tao, D., Mei, T., & Zhang, Y. (2010). Multiview spectral embedding. Transactions on Systems, Man, and Cybernetics, Part B, 40(6), 1438–1446. Xing, J., Li, K., Hu, W., Yuan, C., & Ling, H. (2017). Diagnosing deep learning models for high accuracy age estimation from a single image. Pattern Recognition, 66, 106–116. Yi, D., Lei, Z., & Li, S. (2015). Age estimation by multi-scale convolutional network. In Computer Vision—ACCV 2014, Lecture Notes in Computer Science (Vol. 9005, pp. 144–158). Zhang, D., Zhou, Z. H., & Chen, S. (2007) Semi-supervised dimensionality reduction. In Proc. of the 7th SIAM International Conference on Data Mining Zhang, Z., & Tao, D. (2012). Slow feature analysis for human action recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 34(3), 436–450. Zito, T., Wilbert, N., Wiskott, L., & Berkes, P. (2009). Modular toolkit for data processing (MDP): A python data processing framework. Frontiers in Neuroinformatics, 2, 8. Institut für Neuroinformatik, Ruhr-Universität Bochum, Universitätsstraße 150, 44801, Bochum, Germany Alberto N. Escalante-B. & Laurenz Wiskott Alberto N. Escalante-B. Laurenz Wiskott Correspondence to Alberto N. Escalante-B.. Editor: Tijl De Bie. Complexity of a quadratic HSFA network Although existing systems based on HSFA have resorted to hierarchical processing for efficiency reasons, apparently its actual asymptotic complexity has not yet been formally established. In this section, we compute the computational complexity of a concrete quadratic HSFA (QHSFA) network illustrated in Fig. 11. This network operates on data with a 1D structure (i.e., vectors), where all nodes of the network perform quadratic SFA (QSFA, a quadratic expansion followed by linear SFA), and has two important parameters L and k. Parameter L indicates the number of layers and k is used for various purposes and is assumed to be fixed. In the first layer, the nodes have a fan-in and stride of k input values. In the remaining layers, the nodes have a fan-in and stride of two nodes. Each node of this network reduces the dimensionality of the data from k input components to k / 2 output components. We denote the total input dimensionality by I. 
Example of a 1D QHSFA network with binary fan-ins and no overlap. Each node performs quadratic SFA and reduces the dimensionality from k to k / 2 components. The use of such a small fan-in results in a network with a large number L of layers, although L only grows logarithmically with the input dimension I From the structure described above, it follows that the receptive fields of the nodes are non-overlapping, the network's output has k / 2 components, the number of layers L is related to I and k: $$\begin{aligned} I \; = \; k 2^{L-1}, \end{aligned}$$ and the total number of nodes in the network is $$\begin{aligned} M \; = \; 2^L-1 . \end{aligned}$$ Internally, all the QSFA nodes of the network have the same structure. The input data to a single node has k dimensions, which are increased by the quadratic expansion to \(k (k+3) /2\) components. Afterwards, linear SFA reduces the dimensionality to k / 2. Thus, the complexity of training a single (nonlinear) node is $$\begin{aligned} T_{\text {QSFA}}(N, k) \;\;\;&{\mathop {=}\limits ^{(5)}} \;\;\; \mathcal {O}(N (k (k+3) / 2) ^2 + (k (k+3) / 2)^3) \end{aligned}$$ $$\begin{aligned}&= \; \mathcal {O}(N k^4 + k^6) . \end{aligned}$$ On the other hand, the number of nodes is \(M {\mathop {=}\limits ^{((16,17))}} \mathcal {O}(I/k)\). Therefore, the complexity of training the whole network is $$\begin{aligned} T_{\text {QHSFA}}(N, I, k) {\mathop {=}\limits ^{(19)}} \mathcal {O}((N k^4 + k^6)I/k) = \mathcal {O}(I N k^3 + I k^5) \, . \end{aligned}$$ Thus, the complexity of the complete QHSFA network above is linear w.r.t. the input dimension I, whereas the complexity of direct QSFA (on I-dimensional data) is $$\begin{aligned} T_{\text {QSFA}}(N, I) {\mathop {=}\limits ^{(19)}} \mathcal {O}(N I^4 + I^6) , \end{aligned}$$ which is linear w.r.t. \(I^6\). This shows that the QHSFA network is computationally much more efficient than direct QSFA (given that \(k \ll I\)). Since each layer in the QHSFA network is quadratic, in general the output features of the l-th layer can be written as polynomials of degree \(2^l\) on the input values. In particular, the output features of the whole network are polynomials of degree \(2^L\). However, the actual feature space spanned by the network does not include all polynomials of this degree but only a subset of them due to the restricted connectivity of the network. In contrast, direct QSFA only contains quadratic polynomials (although all of them). One could try to train direct SFA on data expanded by a polynomial expansion of degree \(2^L\) (to encompass the feature space of QHSFA), but the complexity would be prohibitive due to the large expanded dimensionality \(\sum _{d=0}^{2^L} {{d+I-1}\atopwithdelims (){I-1}}\). Training SFA with such a high-dimensional data appears to be exponential in I. We are also interested in analyzing the memory complexity of the QHSFA network. Memory (space) complexity is denoted by S. Thus, the memory complexity of linear SFA is $$\begin{aligned} S_{\text {SFA}}(N, I) = \mathcal {O}(NI + I^2) \, , \end{aligned}$$ where the term NI is due to the input data, and \(I^2\) is due to the covariance matrices. If a quadratic expansion is added, SFA becomes QSFA and the memory complexity becomes $$\begin{aligned} S_{\text {QSFA}}(N, I) = \mathcal {O}(NI + I^4) \, . \end{aligned}$$ One can reduce these complexities by using HSFA/QHSFA and by training the nodes sequentially, one at a time, independently of whether an expansion has been applied or not. 
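The counting arguments above are easy to reproduce numerically. The helper below is our own sketch; it assumes the input dimensionality satisfies \(I = k\,2^{L-1}\) exactly and reports the number of layers and nodes of the 1D QHSFA network together with the expanded dimensionality handled by each quadratic node.

```python
import math

def qhsfa_structure(I, k):
    """Structure of the 1D QHSFA network: I = k * 2^(L-1), M = 2^L - 1.
    Each QSFA node expands k inputs to k*(k+3)/2 terms and outputs k/2 features."""
    L = int(math.log2(I // k)) + 1          # number of layers
    M = 2 ** L - 1                          # total number of nodes
    expanded_dim = k * (k + 3) // 2         # quadratic expansion of a k-dim input
    return L, M, expanded_dim

# Example: I = 1024 inputs with fan-in k = 16 gives a 7-layer network of 127 nodes,
# each node expanding 16 inputs to 152 terms before reducing them to 8 outputs.
print(qhsfa_structure(1024, 16))
```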
In particular, the memory complexity of the QHSFA network is only $$\begin{aligned} S_{\text {QHSFA}}(N, I, k) = \mathcal {O}(NI + k^4) \, . \end{aligned}$$ The excellent computational and memory complexity of the QHSFA network is not exclusive to this simple architecture. It is possible to design more sophisticated networks and preserve a similar computational complexity. For example, training the HiGSFA network proposed in Sect. 5, which has overlapping receptive fields, larger fan-ins, and a 2D structure, has complexity also linear in I and N if one adds more pairs of layers with a fan-in of \(1 \times 3\) and \(3 \times 1\) to match the size of the input data as needed (concrete analysis not provided). Sensitivity-based scaling As mentioned in Sect. 4.4, the QR scaling method (10)–(12) is useful to give the features in the slow part a meaningful scale, but it has the disadvantage that it mixes the slow features. This is irrelevant when one combines a polynomial expansion with SFA because the extracted features are invariant to invertible linear transformations of the input, including mixtures, i.e., \(\text {SFA}({\textsc {QExp}}({\mathbf {U}}{\mathbf {x}})) \equiv \text {SFA}(\text {QExp}({\mathbf {x}}))\), where \({\mathbf {U}}\) is any invertible matrix and \({\textsc {QExp}}\) is the quadratic expansion). In other words, polynomial SFA can extract the same features from \({\mathbf {s}}'\) or \({\mathbf {y}}'\). However, other expansions, including the \({\textsc {0.8Exp}}\) (13) expansion, do not have this property. When the \(0.8{\textsc {Exp}}\) expansion is combined with SFA, the resulting algorithm is only invariant to scalings of the input components but not to their mixing, i.e., \(\text {SFA}({\textsc {0.8Exp}}(\varvec{\Lambda }{\mathbf {x}})) \equiv \text {SFA}({\textsc {0.8Exp}}({\mathbf {x}}))\), where \(\varvec{\Lambda }\) is a diagonal matrix with diagonal elements \(\lambda _i \ne 0\), but \(\text {SFA}({\textsc {0.8Exp}}({\mathbf {U}} {\mathbf {x}})) \not \equiv \text {SFA}({\textsc {0.8Exp}}({\mathbf {x}}))\) in general. Thus, using QR scaling would change the features extracted in later nodes fundamentally. Furthermore, the \({\textsc {0.8Exp}}\) expansion has been motivated by a model where the input slow features are noisy harmonics of increasing frequency of a hidden parameter and the expansion should be applied to these features directly. Thus, mixing the slow features would break the assumed model and might compromise slowness extraction in the next layers in practice. Technically, feature mixing by QR scaling could be reverted in the next layer (e.g., by an additional application of linear SFA before the expansion), but such a step would add unnecessary complexity. Therefore, as an alternative to the QR scaling, we also propose a sensitivity based scaling, which scales the slow features without mixing them, as follows. $$\begin{aligned} {\mathbf {y'}}=\varvec{\Lambda } {\mathbf {s'}} \, , \end{aligned}$$ where \(\varvec{\Lambda }\) is a diagonal matrix with diagonal elements \(\lambda _j {\mathop {=}\limits ^{{\mathrm {def}}}}|| {\mathbf {M}}_j ||_2\) (the \(L_2\)-norm of the j-th column vector of \({\mathbf {M}}\)). Thus, $$\begin{aligned} {\mathbf {a}}(t) {\mathop {=}\limits ^{(10,25)}} {\mathbf {M}} \varvec{\Lambda }^{-1} {\mathbf {y}}'(t) + {\mathbf {d}} \, . \end{aligned}$$ Clearly, the transformation (25) does not mix the slow features, it only scales them. From the two key reconstruction properties of PCA mentioned in Sect. 
4.4 (adding noise of certain variance to either one or many features increases the variance of the reconstruction error by the same amount), the first one (noise on a single feature) is fulfilled, because the columns of \({\mathbf {M}} \varvec{\Lambda }^{-1}\) have unit norm since \(\varvec{\Lambda }^{-1} = \text {Diag}(1/\lambda _1, \dots , 1/\lambda _J)\). The second property is not fulfilled, because \({\mathbf {M}}\varvec{\Lambda }^{-1}\) is in general not orthogonal (in contrast to \({\mathbf {Q}}\)). At first glance, it seems like multi-view learning (data fusion, e.g., Xia et al. 2010) might be an alternative for the scaling methods used here. However, multi-view learning actually solves a different problem since we do not join two information channels but actually split them. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. Escalante-B., A.N., Wiskott, L. Improved graph-based SFA: information preservation complements the slowness principle. Mach Learn 109, 999–1037 (2020). https://doi.org/10.1007/s10994-019-05860-9 Received: 01 February 2016 Revised: 04 November 2019 Issue Date: May 2020 Supervised dimensionality reduction Similarity-based learning Information preservation Age estimation
Lower bound for Degenerate Codes?

According to Macchiavello, Palma and Zeilinger (2001, p. 82), a lower bound on the dimension of the encoding Hilbert space of a non-degenerate code is given by the quantum version of the Hamming bound: $$2^k \sum_{i=0}^t 3^i \binom{n}{i}\le 2^n$$ where we are looking at a $[n,k,2t+1]$ code. Does such a bound exist for a degenerate code? And why is it different (if it indeed is)?

This bound works by counting the number of orthogonal states that must be available. If you're encoding into $n$ qubits, you can't require more than $2^n$ orthogonal states, because that's all that's available. This is the right-hand side of the bound. If you wish to encode $k$ logical qubits in a distance $2t+1$ code, then each of the $2^k$ basis states of those logical qubits must encode to something different. Moreover, you need to be able to correct for up to $t$ errors of type $X$, $Y$ or $Z$. If we require each of these to map to a different orthogonal state, then there are $3n$ possible 1-qubit errors, $3^2\binom{n}{2}$ 2-qubit errors (choice of one of 3 Paulis for each error, and a pair of locations for them to happen at), and so on. So, this gives the stated bound, known as the Quantum Hamming bound (also Gilbert-Varshamov).

However, an essential feature of the derivation is the assumption that each error is mapped onto a different orthogonal state. The very definition of a degenerate code is that multiple errors can be mapped onto the same state. As a trivial example, consider the effect of a single-qubit $Z$ error on any one of the qubits of the GHZ state $$ \frac{1}{\sqrt{2}}(|0\rangle^{\otimes n}+|1\rangle^{\otimes n}). $$ No matter where that error happens, the resultant state is the same, but that's fine: I don't need to be able to identify which of the $n$ qubits the error happened on to fix it. Once I know the error has happened, I can apply a $Z$ gate on any of the qubits that I choose in order to fix it. (I don't claim that this example enables you to detect that error.)

So, the Quantum Hamming bound does not apply to degenerate codes. Indeed, there are known examples where the bound is beaten, e.g. D. P. DiVincenzo, P. W. Shor, and J. A. Smolin, Phys. Rev. A 57, 830 (1998), although there are surprisingly few. The only replacement that I know of is the Quantum Singleton Bound, $n-k\geq 4t$. The Quantum Hamming bound, in practice, appears to give very good estimates of what can be achieved, but is not absolute when it comes to degenerate codes.
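As a quick numerical illustration of the bound discussed above (a sketch added here, not part of the original question or answer), the check below evaluates \(2^k \sum_{i=0}^t 3^i \binom{n}{i} \le 2^n\); for instance, the [[5,1,3]] code saturates it, while degenerate codes such as the [[9,1,3]] Shor code are not constrained by it, as the answer explains.

```python
from math import comb

def satisfies_quantum_hamming_bound(n, k, d):
    """Check 2^k * sum_{i=0}^{t} 3^i * C(n, i) <= 2^n for an [[n, k, d]] code, t = (d-1)//2."""
    t = (d - 1) // 2
    lhs = 2 ** k * sum(3 ** i * comb(n, i) for i in range(t + 1))
    return lhs <= 2 ** n

print(satisfies_quantum_hamming_bound(5, 1, 3))   # True: 2 * (1 + 15) = 32 = 2^5 (saturated)
print(satisfies_quantum_hamming_bound(9, 1, 3))   # True for the (degenerate) Shor code as well
```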
Investigation on typical occupant behavior in air-conditioned office buildings for South China's Pearl River Delta

Manning He1, Huiwang Pen2, Meixiang Li2, Yu Huang (ORCID: orcid.org/0000-0001-5199-383X)2,3,4, Da Yan5, Siwei Lou2,3 & Liwei Wen2,3

Architectural Intelligence volume 1, Article number: 8 (2022)

The excessive simplification of occupant behavior is considered the most important factor affecting the uncertainty of building performance simulation, and thus the reliability and generalizability of simulation-based design and forecasting. In this paper, occupant behavior in air-conditioned office buildings of the Pearl River Delta (PRD) region was investigated and defined. A total of 873 questionnaires on occupant behavior in air-conditioned office buildings in the PRD region were collected to study the relationship between indoor environment quality and adaptive behaviors. Eight typical office occupancy schedules were defined via the K-means clustering method. A probabilistic prediction model of the cooling temperature set point was established using the Ordinal Logistic Regression method. Based on the different control modes of air conditioning, window, blind and lighting equipment, four typical behavior patterns were identified using the K-prototype clustering method, which could be combined into 20 typical occupant behavior styles for office buildings in the PRD region.

Due to the increasing demands on indoor environment quality and the associated energy consumption, research related to building energy has become a hot topic around the world. With the rapid development of computer science, building performance simulation is becoming a widely accepted method for building energy-related studies. However, there often exists a significant gap between simulated and actual energy consumption. A report of the International Energy Agency, Energy in Buildings and Communities Programme (IEA-EBC) Annex 53, stated that building energy consumption is mainly influenced by six factors, namely meteorological parameters, building envelope, system equipment, indoor design criteria, system operation and management, and occupant behavior (Yoshino et al., 2017). Among them, occupant behavior in buildings has been widely recognized as a major factor contributing to the gaps between measured and simulated energy consumption in buildings (Chen et al., 2017; Sun et al., 2014; Sun et al., 2016; Yan et al., 2015). Zhou et al. (2016) showed that the stochastic characteristics of air conditioning use patterns were the main factor behind the difference between predicted and actual energy consumption. However, only one type of occupant behavior (air conditioning use) was considered in that study, regardless of other office equipment (lighting, window, and blind). Sun and Hong (2017) simulated and analyzed five typical occupant behaviors and concluded that an individual behavior could cause a difference of up to 22.9% in energy consumption, while for integrated behaviors the difference could reach 41%. Eguaras-Martínez et al. (2014) showed that the difference in predicted energy consumption between including and excluding occupant behavior in building simulations could be up to 30%. The randomness of occupant behavior is often oversimplified or neglected by using full-time, full-space static assumptions or by applying default settings according to building type and climate zone.
From a simulation perspective, occupant behavior is a vital input to the simulation process, so it is essential to study it in depth in order to obtain accurate input values for the occupant behavior module. Various studies have indicated that occupants affect building energy consumption because they perform a series of adaptive behaviors to reach a new comfortable environmental state (Yamaguchi et al., 2013; Yan et al., 2017). Therefore, information about occupants' comfort preferences can be collected to improve the management of building energy consumption without sacrificing comfort or productivity (Pérez-Lombard et al., 2011; Chung, 2011). It is also crucial for the simulation of occupant behavior to study the motivation behind occupants' interactions with the building environment (Yan et al., 2017). Yan and Hong (2018) focused on the definition and simulation of occupant behavior in buildings and attempted a comprehensive description of its stochastic nature. Data from various locations and types of buildings worldwide were collected by scholars to build a library of stochastic occupant behavior models. A newly developed algorithm (the Yun algorithm) was described to simulate window-control behavior in dynamic building simulation software (Yun et al., 2009). Relationships between occupant behavior and environmental parameters have also been identified (Mahdavi et al., 2008). A probabilistic model to simulate and predict occupancy in a single-person office was proposed by examining the statistical properties of occupancy (Wang et al., 2005). Taking a university office with irregular occupancy as a case study, static and dynamic models of the individual occupant and of occupant behavior in relation to building control systems were validated (Zimmermann, 2007). A new, open-source modeling tool for stochastic simulation was developed to predict occupants' demand for building services (Rysanek & Choudhary, 2015). The coupling of dynamic building simulation with stochastic modeling of occupant behavior in offices was introduced for use in energy uncertainty analysis (Parys et al., 2011). A stochastic model of occupant behavior regarding ventilation was proposed to study time series of window opening angle (Fritsch, 1990). Statistical occupancy time-series data at a ten-minute resolution were generated to describe realistic occupancy in UK households, providing a stochastic simulation of active occupancy patterns (Richardson et al., 2008). A stochastic bottom-up model of lighting demand in detached houses, based on observed occupancy patterns and daylight availability, was presented and validated (Widén et al., 2009). Both presence models and action models are included in the model base. Presence models (often referred to as occupancy schedules) describe the presence, absence and movement of occupants in space, while action models describe various types of adaptive and non-adaptive behavior, such as switching air conditioning equipment, lighting, windows and blinds on or off. Typical cooling load curves were used by Chow et al. (2004) and Gang et al. (2015) to simulate the schedules of several building types (such as offices, residential buildings, and hotels), and the system performance was analyzed according to the predicted load.
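For illustration, the simplest of the stochastic presence models reviewed above can be reduced to a first-order Markov chain over the two states "absent" and "present". The sketch below is our own; the transition probabilities are placeholders, not values from any of the cited studies.

```python
import random

def simulate_occupancy(steps, p_arrive=0.1, p_leave=0.05, present=False, seed=0):
    """First-order Markov chain of presence (True) / absence (False) per time step.
    p_arrive: probability of arriving when absent; p_leave: probability of leaving."""
    rng = random.Random(seed)
    schedule = []
    for _ in range(steps):
        p = p_leave if present else p_arrive
        if rng.random() < p:
            present = not present
        schedule.append(present)
    return schedule

# Example: one day of presence states at a ten-minute resolution (6 steps per hour).
day = simulate_occupancy(steps=6 * 24)
```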
The occupancy schedule was established by mining the energy consumption data of office building equipment, comparing with the occupancy schedule of medium-sized office building proposed by the DOE prototype, the results showed that there was a 36.67–50.53% difference between the "prototype" and the actual specific office (Zhao et al., 2014). Pearl River Delta is an alluvial plain located in the subtropical climate zone of southern Guangdong, China, which covers an area of around 55,000 km2, and a population of about 57 million. Considered as one of the most prosperous bay areas in the world, Pearl River Delta contains a huge scale of developed urban agglomeration, which emphasizes the importance of urban energy design and management. Occupant behavior, which is considered as the most important factor that affects the energy performance of buildings, has been the research focus of scholars. At present, there is no first-hand data for occupant behaviors within buildings in this area in the ASHRAE database (ASHRAE, 2013). In this study, a questionnaire was proposed from the literature review, which contains questions on thermal comfort, occupant behavior and sharing authority within air-conditioned office buildings. From the analysis of the survey results, the preference of occupant adaptive behavior in air-conditioned indoor environment was investigated. A prediction model of cooling temperature set points were obtained. A series of occupant behavior styles for air-conditioned office buildings in the PRD region were summarized. 2 Methodology 2.1 Basic information of questionnaire The occupant behavior within the building is affected by various "driving forces", both internal (such as lifestyle, age, gender, attitude, preference, etc.) and external (such as temperature, humidity, wind speed, building property etc.) (Sun & Hong, 2017). In order to investigate the intention behind the behavior as well as the interaction between thermal comfort and occupant behavior in the office, a questionnaire survey was conducted in the PRD region from March to April 2019. The questions could be divided into thermal comfort, equipment control preference, office type as well as the personal basic information, as shown in Table 1. Yan et al. (2015) pointed out that the occupant behavior mainly includes the operation of air conditioning, lighting, window, blind and plug-in appliances. Based on the large-scale questionnaire survey, the typical pattern of various kinds of occupant behaviors was mined by the clustering method and then combined to form the occupant behavior style model. In this study, the occupant behavior on plug-in electric appliances was not considered, because in the office building, high power plug-in electric life appliances such as TV sets, refrigerators or washing machines were quite rare. Table 1 Questionnaire structure and content in this study The questionnaires were distributed both online and on-site. The e-questionnaire was released on March 11, 2019, and recalled on April 30, 2019. During the survey period, though not every day, the air temperature often exceeds 30 °C. In the downtown area the peak temperature was even as high as 32 °C. Due to the high relative humidity, natural ventilation is not applicable especially in offices with multiple occupants. In this case, it is believed that the questionnaire result could reflect the occupants' behavior toward air-conditioned indoor environment. 
When a submission was made, its location as well as its time could be identified. Only those submissions with all the questions answered, located within the 9 main cities of the PRD and submitted during office hours were considered valid. Questionnaires with contradictory or apparently random answers were also excluded. The online survey reached 4571 respondents, from which 667 valid questionnaires were collected. For the field survey, a total of 206 valid questionnaires were collected by May 15, 2019.
2.2 Occupancy schedule analysis
The occupancy schedule is a major component of the occupant behavior model, since occupants make adjustments according to comfort or habit only when they are present in the building (Sun et al., 2014; Wang et al., 2011). Currently, the static occupancy schedule is widely applied for building performance simulation. Many government institutions and academic organizations provide occupancy schedules for different building types in local design or evaluation standards. However, the movement and spatial changes of users within a building cannot be fully reflected by a single static schedule (Sun & Hong, 2017). In this study, occupancy schedules for different office scenarios were defined using a 3D scatter diagram method and the K-means clustering method. The random movement of occupants was simulated by the Markov chain method proposed by Wang et al. (2011), in which the movement probability of a person is related to time and the future state depends only on the current state. The random user movement was simulated in the DeST software using the Markov chain model, and the results were stored in an SQLite database. This method captures the changes in users' indoor presence and movement, reflecting their diversity and stochastic features.
2.3 Cooling temperature set point prediction
The indoor cooling temperature set point is the most critical and direct factor affecting the cooling load, and it reflects occupant thermal comfort requirements. In this study, the influencing factors of the cooling temperature set point were analyzed with the Ordinal Logistic Regression procedure of IBM SPSS Statistics 23 to establish a prediction model that estimates the probability of each cooling temperature set point from the office case information. The model fitting information test of ordinal logistic regression is based on the null hypothesis that all coefficients of the independent variables are 0. When P (Sig.) < 0.05, the null hypothesis is rejected at the 95% confidence level, indicating that the prediction model is statistically significant. When the goodness-of-fit test (χ2) of the model gives P > 0.05, the model is considered to fit the data well. The parallel lines test addresses one of the essential assumptions of ordinal (multi-category) logistic regression. Its null hypothesis is that the coefficients of the independent variables are equal across the underlying binary logistic regressions, and when P (Sig.) > 0.05 the null hypothesis can be considered valid.
$$Y=\mathrm{logit}\left(p_j\right)=\ln \left(\frac{p_j}{1-p_j}\right)=A_j+\beta_1 x_1+\beta_2 x_2+\cdots +\beta_n x_n$$
where Pj = P(y ≤ j|x) represents the cumulative probability of y taking the first j values. The cumulative probability of the dependent variable follows formula (2).
$$P_{j}=P\left(Y\le j\,|\,x\right)=\begin{cases}\dfrac{\exp\left(\alpha_j+\beta_n x_n\right)}{1+\exp\left(\alpha_j+\beta_n x_n\right)}, & 1\le j\le k-1\\ 1, & j=k\end{cases}$$
where j is a partition point of the dependent variable, αj is the constant term corresponding to the j-th cooling temperature set point, xn is the n-th influencing factor, and βn is the regression coefficient of the n-th influencing factor. The probability prediction formula for a single value of the dependent variable is given by (3).
$$p\left(Y=j\,|\,x\right)=P_j-P_{j-1},\qquad j=1,\cdots,k$$
where j is a partition point of the dependent variable and k is the number of cooling temperature set-point categories. In this study, the influencing factors of the cooling temperature set point are defined as IDV (X1, X2, ..., X14): X1-age, X2-the effect of IAQ on work efficiency, X3-the effect of temperature on work efficiency, X4-control authority, X5-activity intensity, X6-thermal sensation voting (TSV), X7-whether the air conditioning runs all the time in summer, X8-air velocity range, X9-status of the window when the air conditioning is on, X10-office nature, X11-office scale, X12-the effect of humidity on work efficiency, X13-air conditioning demand, X14-distance from the air outlet.
2.4 Equipment control models
Whether someone in the office controls equipment at a specific time is a random event. Occupants may behave differently when facing the same conditions, depending on the indoor environment, daily events (commuting, leaving temporarily) and the person. In this study, clustering is used to classify occupants, and the probability of occupant behavior corresponding to each type of user is then computed (Silva et al., 2009). The factors affecting equipment control are divided into environmental triggers and event triggers. The modes "open when feeling hot" (related to indoor temperature) and "open when feeling stuffy" (related to indoor humidity) are considered environmental triggers, described by (4).
$$P_{\mathrm{on}}=\begin{cases}1-\mathrm{e}^{-\left(\frac{t-u}{l}\right)^{k}\Delta\tau}, & t> u\\ 0, & t\le u\end{cases}$$
where Pon is the probability that the user will operate the equipment; t is the indoor temperature (°C); u is the threshold temperature (°C); l is the scale parameter, which makes the temperature term dimensionless; k is the shape parameter, which indicates the sensitivity to the environment; and Δτ is the time step of measurement and simulation, typically set to 10 minutes. The modes "turn on when at work", "close when leaving temporarily" and "close when off work" are considered event triggers, whose probability is described by (5).
$$P_{\mathrm{on}}=\begin{cases}P, & \tau=\tau_0\\ 0, & \tau\ne\tau_0\end{cases}$$
where Pon is the probability that the user will operate the equipment, τ is the current time step in the simulation, and τ0 is the time step at which the relevant event occurs.
3 Results and discussion
3.1 Impact of IEQ factors on occupant behavior
The IEQ factors that affect productivity were studied. According to the survey results, a seven-dimension radar chart is presented in Fig. 1. It is clear that humidity has a relatively low impact on work efficiency.
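Before moving further into the results, the trigger formulation of Section 2.4 can be made concrete with a minimal Python sketch that evaluates the environmental-trigger probability of formula (4) and the event-trigger probability of formula (5). The threshold, scale and shape values used here are illustrative assumptions, not parameters reported in this paper.

```python
import numpy as np

def p_on_environment(t, u, l, k, dt):
    """Environmental trigger, formula (4): probability of acting within one
    time step dt when the indoor temperature t exceeds the threshold u;
    l is the scale parameter and k the shape (sensitivity) parameter."""
    excess = np.clip(np.asarray(t, dtype=float) - u, 0.0, None)  # zero below threshold
    return 1.0 - np.exp(-((excess / l) ** k) * dt)

def p_on_event(tau, tau0, p):
    """Event trigger, formula (5): act with probability p only at the time
    step tau0 when the relevant event (arrival, temporary leave, ...) occurs."""
    return p if tau == tau0 else 0.0

# Illustrative values only: threshold 28 degC, scale 3, shape 2, 10-minute step
temps = np.array([26.0, 29.0, 31.0, 33.0])
print(p_on_environment(temps, u=28.0, l=3.0, k=2.0, dt=10 / 60))
print(p_on_event(tau=9, tau0=9, p=0.85))  # e.g. "turn on when arriving at work"
```

In a time-stepped simulation, the two trigger probabilities would be evaluated at every step and combined per equipment type, as discussed for the air-conditioning model in Section 3.4.1.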
Of the 873 subjects, 634 (72.3%) believed that temperature had a more considerable influence than humidity, which was also reflected in the different main adaptive behaviors under temperature and humidity discomfort. A share of 5.5% of subjects did not take any adaptive action to improve thermal comfort even under cold conditions, while 15.2% did not take any action in a highly humid environment. The evaluation of the influence of indoor environment quality on work efficiency The evaluation of the satisfaction level of indoor environment quality is shown in Fig. 2. It can be concluded that subjects generally believed that indoor temperature and indoor air quality influence their working efficiency, but the corresponding satisfaction levels were not high. Subjects thought that humidity has only an average impact on work efficiency. The evaluation of the satisfaction level of indoor environment quality Indoor thermal comfort within office buildings is an essential factor affecting health, work efficiency, and energy consumption. While analyzing subjects' votes on indoor temperature and humidity, it was found that 18.9% of subjects thought that the indoor temperature of the office was slightly too low in summer. In order to study the interaction between office staff's temperature and humidity perception, a 3-D cross-analysis of temperature and humidity perception votes was conducted, as shown in Fig. 3. When the thermal environment was relatively satisfactory, most humidity votes were in a neutral state. In general, when occupants felt hot in summer or cold in winter (discomfort conditions), humidity perception votes were widely dispersed. The results showed that, in the PRD region, temperature has a positive influence on humidity sensation. When subjects are in a comfortable thermal environment, the acceptance rate of humidity increases. The temperature and humidity of the indoor thermal environment are mutually coupled, and the influence of one factor on occupant comfort can be compensated by the corresponding change of the other factor. Distribution of indoor temperature and humidity Occupant behavior in office buildings is diverse even when facing the same thermal discomfort, which can be interpreted as individual adaptive preference. In the questions on temperature-related adaptive behavior, subjects were asked about their preferred actions during periods of temperature-related discomfort. The result is shown in Fig. 4. In a hot and humid area such as the PRD, air conditioning is the first choice of 71.0% of subjects when they feel hot. By contrast, in cold conditions, adding clothes is chosen by 50.0%–59.0% of subjects, and closing doors and windows by 41.0%. The probability of switching on air conditioning drops dramatically to under 20.0%, consistent with the fact that only 19.5% of subjects in the PRD region have heating systems. The tendency of temperature-related occupant behaviors to restore thermal comfort Occupant behavior under humidity-related discomfort is shown in Fig. 5. The probability of operating the air conditioning is 32.0%, while 27.2% of subjects choose to close doors or windows and 20.0% choose to turn on fans. In dry conditions, the probability of taking no action increases to 21.0%.
The tendency of humidity-related occupant behaviors to restore thermal comfort So far, most studies on occupant behavior only focus on one specific behavior, while in practice, multiple behaviors may be taken by occupants to adjust the indoor environment in many cases (e.g., opening office doors and windows to provide cross ventilation). The applicability of these multi-behavior models was not clear (Yan et al., 2015). In this study, relevant data were collected to study the probability of multiple actions triggered and how to restrict or negotiate each action under uncomfortable conditions. In order to study the probability between behaviors triggered by thermal discomfort, Fig. 6 was obtained after data analysis and statistics. As shown in the figure, the probability of single behavior is relatively small under extreme thermal discomfort. Under opening air conditioning behavior, a combination of turning on air conditioning and reducing clothing is 11.7%, While the combination of turning on air conditioning and opening fans is 5.16%. It indicates that while facing extreme thermal discomfort, multiple behaviors would be triggered in most cases. Adaptive behaviors under opening air conditioning to restore extreme heat discomfort In order to further study the balance among different occupant adaptive behaviors under the thermal discomfort status, the correlation among the possible adjust behaviors of the four devices was analyzed, and the P/Sig. value and the correlation coefficient (Phi) were obtained, as shown in Table 2. Table 2 The correlation between interaction behaviors to restore thermal discomfort The results show that air conditioning behavior has a significant effect on the adjustment actions of clothing, fan, door, and window during the hot period. When the air conditioning switching behavior occurs, the probability of reducing clothing increase, while the probability of opening fans and windows decrease. However, for any thermal discomfort situation, the action towards air conditioning has no significant effect on window related behavior. It is also found that the probability of air conditioning switching behavior is reduced when the fan switching behavior occurs. In extreme discomfort conditions, occupants would choose the air conditioning rather than the fan. In order to further study the occupant adaptive behavior under humidity related discomfort, the correlation among the behaviors toward the four devices was analyzed. The P/Sig. value and the correlation coefficient (Phi) were obtained, as shown in Table 3. Table 3 The correlation between interaction behaviors to restore discomfort of humidity The results show that air conditioning switching behavior has a significant effect on the behavior of opening windows under the extremely humid situation. When the air conditioning switching behavior occurs, the probability of opening the window is reduced. The air conditioning has no significant effect on the behavior of closing window action. It can reflect that the subjects prefer to pursue a higher indoor air quality, but have weak awareness of window behavior impacting on air-conditioning energy consumption. 3.2 Occupancy schedule It was found that the commuting time is affected by the size and nature of the office, which would, in turn, affects the occupancy schedule of office buildings. Based on survey data, it was observed that the difference between the large office occupancy schedule and the open office occupancy schedule is not significant. 
Therefore, in this study, multi-person offices and open-plan offices are collectively referred to as large offices. The size boundary between small and large offices was set at 50 m2 (Lv et al., 2019). The weekly working time data were organised into 3D scatter plots, as shown in Fig. 7. 3D scatter plot distribution of commuting time in this study Occupancy schedules on workdays for all modes, randomly simulated by the DeST software The K-means clustering method was used to cluster the occupancy schedules of offices with different sizes and functions, and eight typical office timetables were finally obtained, as shown in Table 4. Table 4 The occupancy schedule of different types of movement models obtained by clustering
3.3 Cooling temperature set point
The cooling temperature set point data are shown in Fig. 9. A Q–Q plot is used to verify the normal distribution of the cooling temperature set point, as shown in Fig. 10. Each point is approximately distributed near the standard line, which indicates that the cooling temperature set point could be considered to follow a normal distribution. It can be concluded that the most common set point was 26 °C in the office buildings of the PRD. Survey result of the cooling temperature setting Standard Q–Q plot of the cooling temperature set point Multivariate ordinal logistic regression in IBM SPSS Statistics 23 was applied to build the cooling temperature set point prediction model of the air-conditioning system. The test results are as follows: Model Fitting Information test P (Sig.) = 0.000 < 0.001, Goodness-of-fit test P = 1 > 0.05, Test of Parallel Lines P (Sig.) = 0.87 > 0.05, which indicates that the prediction model of the cooling temperature setting in this study is valid. The results show that the nature of the office occupation, office size, air conditioning demand and the influence of humidity on work efficiency have no significant effect on the cooling temperature set point. The probability of a high cooling temperature set point increases by a factor of 1.27 as occupant age increases. Indoor temperature and air quality have a significant influence on the cooling temperature set point. The probability of a low cooling temperature set point increases when people realize that temperature or air quality has a greater impact on work efficiency. Meanwhile, the larger the control authority, the lower and the more diverse the temperature set points. Frequent occupant activity increases the probability of a low temperature set point by 64.0% compared with occupants who rarely move. Leaving the other variables unchanged, increasing the thermal sensation vote (from −3 to 3) increases the probability of a low temperature set point by a factor of 5. The probability of a low temperature set point is also reduced by 68.0% when the air conditioning can run all day in summer. Compared with the moderate (somatosensory comfortable) air velocity category, the probability of a low temperature set point under high air velocity is reduced by 52.0%, while under low air velocity it increases by a factor of 2.23. When the AC system is on, opening a window increases the probability of a low temperature set point by a factor of 1.42. The logistic regression coefficients, P values, significance levels and standardized odds ratio (OR) estimates of the cooling temperature setting are reported in Table 5.
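Mechanically, predicting the set-point probabilities for an office case amounts to evaluating the cumulative-logit formulas (2) and (3) of Section 2.3 with the fitted coefficients (those reported in Table 5 below). The following sketch uses illustrative threshold and linear-predictor values only; it is not the authors' SPSS implementation.

```python
import numpy as np

def setpoint_probabilities(alphas, eta):
    """Cumulative-logit prediction per formulas (2)-(3).

    alphas : increasing thresholds A_1 < ... < A_{k-1}, one per cumulative
             split of the k ordered set-point categories.
    eta    : linear predictor sum_n(beta_n * x_n) for one office case.
    Returns the predicted probability of each of the k categories.
    """
    alphas = np.asarray(alphas, dtype=float)
    cum = 1.0 / (1.0 + np.exp(-(alphas + eta)))   # P(Y <= j) for j = 1..k-1
    cum = np.append(cum, 1.0)                     # P(Y <= k) = 1
    return np.diff(cum, prepend=0.0)              # p(Y = j) = P_j - P_{j-1}

# Illustrative values only: 5 ordered set-point categories, one office case
probs = setpoint_probabilities(alphas=[-2.0, -0.5, 1.0, 2.5], eta=0.3)
print(probs, probs.sum())  # category probabilities, summing to 1
```

As in the validation examples of Tables 6 and 7 below, the category with the highest predicted probability would be taken as the predicted set point for the case.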
Table 5 Cooling temperature set point prediction model related factors The logistic regression function of formula (2) can be expressed as formula (6), which serves as the prediction model of the cooling temperature for office buildings in the PRD region.
$$\mathrm{Log}\left(p_i\right)=A_i+\sum_{j=1,\ \eta=1,\ P=1}^{j=9,\ \eta=6,\ P=30}\beta_j X_j\left(\eta,P\right)$$
where Ai is the constant corresponding to the i-th cooling temperature set point; βj stands for the regression coefficient of the j-th influencing factor; Xj(η,P) is the j-th influencing factor, η is the option of the j-th influencing factor, and P is the number of occupants who select η. The office case information was substituted into the 12 model prediction equations obtained from formula (6), and the cumulative probability values were then obtained by substituting the model predictions into the inverse-logit expression of formula (2) above. The cumulative probability values were then substituted into formula (3) above to calculate the predicted probability of each single value of the dependent variable. The category with the highest predicted probability is taken as the category of the case. Taking a small office (1 person) as an example, the probability distribution of the cooling temperature set point is predicted as shown in Table 6. Table 6 Validation example of cooling temperature prediction model of small office An example of the cooling temperature prediction for a large office (10 persons) is also given below. The influencing factor information and the predicted probability distribution of cooling temperature set points are shown in Table 7. Table 7 Validation example of the cooling temperature prediction model of the large office building
3.4 Typical occupant behavior model
3.4.1 HVAC
K-prototype clustering analysis was conducted to analyze the factors affecting air conditioning behavior. According to the actual situation, the clustering categories were classified by the combination of opening and closing modes, as shown in Table 8. Table 8 The types and proportions of the occupant in the air conditioning behavior model In all models, air conditioning switching is carried out if colleagues or supervisors propose it, which highlights the collective interactivity in a large office. In this study, the air conditioning behavior pattern is regarded as a combination of environmental and event drivers, which are assumed independent of each other. For example, the air-conditioning action triggered by indoor temperature does not depend on the event of temporarily leaving the office. The probability of switching the AC on or off is calculated from independent events, as presented in formula (7). P(ζ) is the probability of turning on the air conditioning under driving event ζ and P(η) is the probability under driving event η; the overall probability of turning on the air conditioning in the office can then be calculated from the independent events (Feng et al., 2016).
$$P\left(\zeta\cup\eta\cup\lambda\right)=P(\zeta)+P(\eta)+P(\lambda)-P(\zeta)P(\eta)-P(\zeta)P(\lambda)-P(\eta)P(\lambda)+P(\zeta)P(\eta)P(\lambda)$$
The model obtained by clustering was cross-tabulated with the questionnaire data, and the probability of each behavior mode driven by the various factors in the model was obtained. The total probability was obtained by combining the independent events in the questionnaire that affect the various behavior modes, as shown in Table 9.
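As a cross-check of formula (7): for independent driving events, the inclusion-exclusion expansion collapses to one minus the product of the complement probabilities, which is simpler to compute for any number of drivers. A minimal sketch with made-up event probabilities:

```python
def p_any_trigger(probabilities):
    """Probability that at least one independent driving event triggers the
    action (formula (7)); for independent events this equals 1 - prod(1 - P_i)."""
    p_none = 1.0
    for p in probabilities:
        p_none *= (1.0 - p)
    return 1.0 - p_none

# Illustrative probabilities for three drivers (zeta, eta, lambda)
print(p_any_trigger([0.6, 0.3, 0.1]))  # 0.748, identical to expanding formula (7)
```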
Table 9 The probability of occupant in the air conditioning behavior model (on/off) From the questionnaire, only the probability of event-driven occupant behavior can be calculated. As the specific parameters of the environmental drivers still need to be measured, the probability of environment-driven occupant behavior has been converted into an event-driven probability calculation.
3.4.2 Lighting
It was found that 24.7% of the subjects have no energy-saving awareness, which leads to the lighting (in whole or in part) being left on beyond working hours. 6.9% of the subjects do not care about the on/off status of the lighting system, while 68.3% of the energy-conscious users would turn it off. 36.8% of the subjects, with better energy-saving awareness, will partially reduce or completely switch off the lighting fixtures when daylight is sufficient. Therefore, the K-prototype clustering analysis method was used to distinguish groups in the lighting behavior models, as shown in Table 10. Table 10 The types and proportions of the occupant in the lighting behavior model The lighting model obtained by clustering was cross-tabulated with the questionnaire data to obtain the probability of each control mode driven by the various factors, as shown in Table 11. Table 11 The probability of occupant in lighting behavior model (on/off)
3.4.3 Window
For window-related behavior, it was found that office windows in the PRD region remain constantly closed with a probability of 26.69%. The reason is that, in order to prevent objects from falling, opening a window in a high-rise building requires permission from supervisors, which greatly reduces the probability of window-related behavior. From the adaptive behavior under thermal discomfort, it can be concluded that air conditioning systems and windows have a coupled effect on indoor thermal comfort and energy use, which means that window-related behaviors may be affected by the different air conditioning modes. Therefore, window behavior was considered under the different air conditioning models, and the cluster categories were obtained, as shown in Table 12. Table 12 The types and proportions of the occupant in the window behavior model Table 13 The probability of occupant in the window behavior model (on/off)
3.4.4 Blind
Although blind behavior has not yet been added to the DeST occupant behavior module, the purpose of this study is to comprehensively define the user behavior styles of office buildings in the PRD region. Therefore, the classification of blind behavior is still defined, as shown in Table 14. Table 14 The types and proportions of the occupant in the blind behavior model Table 15 The probability of occupant in the blind behavior model (on/off)
3.4.5 Typical occupant behavior model
This section comprehensively defines how occupants of office buildings with various user styles adapt to uncomfortable environments. By combining the typical behaviors of each cluster, 20 typical office user behavior styles are obtained. These comprehensively define the behavior styles of users towards the various kinds of equipment in office buildings in the PRD region, as shown in Table 16. Table 16 User styles defined by the behavior model of office buildings in the Pearl River Delta
4 Conclusions and future work
A questionnaire survey investigating occupant behavior patterns in air-conditioned office buildings in the PRD region was conducted in 2019.
Based on the questionnaire survey and data analysis, the influencing factors of occupant behavior related to the office building was studied, a cooling temperature set point prediction model was quantified, a series of occupant behavior model for office buildings in PRD were defined. The results of the cooling temperature set point prediction model were analyzed, which shows that factors, such as age, activity intensity, and TSV, etc., have a significant influence on the cooling temperature set point. It also shows that the probability of a low cooling temperature set point in the large office is higher than that in the small office. However, this study has two limitations. First, the influencing factors of occupant's behavior of various electrical appliances (such as air conditioning, lighting, window, etc.) is studied by questionnaire, the event-driven behavior probability can only be established initially. The environmental driving factors lack the support of measured data, the quantitative relationship between environmental factors and behavior cannot be captured. Second, the samples of this study were mainly young people less than 35 years old, which may lead to a large deviation in the cooling temperature set point prediction model. It is suggested that samples over 35 years old should also be considered in the future. It should also be noted that, as previously mentioned, the occupants' behavior can be affected by various "driving forces", their behaviors cannot be the same in different climate regions. However, the research method described in this paper contains no climate sensitive factors, and can be applied to similar studies in other regions. In the future, the effort will be spent on investigation of the verification and improvement of the proposed occupant behavior model for practical use. A laboratory has been built for the on-site measurement of occupant behavior in the office building of the PRD. Over 20 occupants' daily behavior (including the adjustment of the air-conditioning system, lighting system, shading system and office equipment), as well as the indoor environment status (including temperature, RH, black globe temperature and air velocity), are measured and recorded. Besides the verification through laboratory tests, both survey and measurement studies are planned on occupant behaviors in the hotel, shopping centers, transport station and finally, residential buildings. The raw data generated from the survey during the current study are not openly available due to protection of subjects' personal information as well as their privacy. The raw data are available from the corresponding author upon reasonable request, via a Material Transfer Agreement. ASHRAE. (2013). ANSI/ASHRAE standard 62.1-2013: Ventilation for acceptable indoor air quality. American Society of Heating, Refrigerating and air-Conditioning Engineers. Chen, Y., Liang, X., Hong, T., & Luo, X. (2017). Simulation and visualization of energy-related occupant behavior in office buildings. Building Simulation, 10, 785–798. Chow, T. T., Chan, A. L. S., & Song, C. L. (2004). Building-mix optimization in district cooling system implementation. Applied Energy, 77(1), 1–13. Chung, W. (2011). Review of building energy-use performance benchmarking methodologies. Applied Energy, 88(5), 1470–1479. Eguaras-Martínez, M., Vidaurre-Arbizu, M., & Martín-Gómez, C. (2014). Simulation and evaluation of building information modeling in a real pilot site. Applied Energy, 114, 475–484. Feng, X., Yan, D., Wang, C., & Sun, H. 
(2016). A preliminary research on the derivation of typical occupant behavior based on large-scale questionnaire surveys. Energy and Buildings, 117, 332–340. Fritsch, R. (1990). A stochastic model of user behaviour regarding ventilation. Building and Environment, 25(2), 173–181. Gang, W., Wang, S., Gao, D., & Xiao, F. (2015). Performance assessment of district cooling systems for a new development district at planning stage. Applied Energy, 140, 33–43. Lv, Y. J., Peng, H. W., He, M. N., Huang, Y., & Wang, J. W. (2019). Definition of typical commercial building for South China's Pearl River Delta: Local data statistics and model development. Energy and Buildings, 190, 119–131. Mahdavi, A., Mohammadi, A., Kabir, E., & Lambeva, L. (2008). Occupants' operation of lighting and shading systems in office buildings. Journal of Building Performance Simulation, 1(1), 57–65. Parys, W., Saelens, D., & Hens, H. (2011). Coupling of dynamic building simulation with stochastic modelling of occupant behaviour in offices– A review-based integrated methodology. Journal of Building Performance Simulation, 4(4), 339–358. Pérez-Lombard, L., Ortiz, J., Coronel, J. F., & Maestre, I. R. (2011). A review of hvac systems requirements in building energy regulations. Energy and Buildings, 43(2–3), 255–268. Richardson, I., Thomson, M., & Infield, D. (2008). A high-resolution domestic building occupancy model for energy demand simulations. Energy and Buildings, 40(8), 1560–1566. Rysanek, A. M., & Choudhary, R. (2015). Delores – An open-source tool for stochastic prediction of occupant services demand. Journal of Building Performance Simulation, 8(2), 97–118. Silva, K. P., Carvalho, F., & Csernel, M. (2009). Clustering of symbolic data using the assignment-prototype algorithm. International Joint Conference on Neural Networks. IEEE Press. Sun, K., & Hong, T. (2017). A simulation approach to estimate energy savings potential of occupant behavior measures. Energy and Buildings, 136, 43–62. Sun, K., Hong, T. Z., Taylor-Lange, S. C., & Piette, M. A. (2016). A pattern-based automated approach to building energy model calibration. Applied Energy, 2016(165), 214–224. Sun, K., Yan, D., Hong, T., & Guo, S. (2014). Stochastic modeling of overtime occupancy and its application in building energy simulation and calibration. Building and Environment, 79, 1–12. Wang, C., Yan, D., & Jiang, Y. (2011). A novel approach for building occupancy simulation. Building Simulation, 4(2), 149–167. Wang, D., Federspiel, C. C., & Rubinstein, F. (2005). Modeling occupancy in single person offices. Energy and Buildings, 37(2), 121–126. Widén, J., Nilsson, A. M., & Wäckelgård, E. (2009). A combined markov-chain and bottom-up approach to modelling of domestic lighting demand. Energy and Buildings, 41(10), 1001–1012. Yamaguchi, Y., Shimoda, Y., & Kitano, T. (2013). Reduction potential of operational carbon dioxide emission of nakanoshima business/cultural area as a model for low-carbon districts in warm climates. Building and Environment, 59, 187–202. Yan, D., Hong, T. (2018). EBC annex 66 final report - definition and simulation of occupant behavior in buildings. IEA. Yan, D., Hong, T. Z., Dong, B., Mahdavi, A., D'Oca, S., Gaetani, L., & Feng, X. H. (2017). IEA EBC annex 66: Definition and simulation of occupant behavior in buildings. Energy and Buildings, 156, 258–270. Yan, D., O'Brien, W., Hong, T. Z., Feng, X. H., Gunay, H. B., Tahmasebi, F., & Mahdavi, A. (2015). 
Occupant behavior modeling for building performance simulation: Current state and future challenges. Energy and Buildings, 107, 264–278. Yoshino, H., Hong, T., & Nord, N. (2017). IEA EBC annex 53: Total energy use in buildings—Analysis and evaluation methods. Energy and Buildings, 152, 124–136. Yun, G. Y., Tuohy, P., & Steemers, K. (2009). Thermal performance of a naturally ventilated building using a combined algorithm of probabilistic occupant behaviour and deterministic heat and mass balance models. Energy and Buildings, 41(5), 489–499. Zhao, J., Lasternas, B., Lam, K. P., Yun, R., & Loftness, V. (2014). Occupant behavior and schedule modeling for building energy simulation through office appliance power consumption data mining. Energy and Buildings, 82, 341–355. Zhou, X., Yan, D., Feng, X., Deng, G., Jian, Y., & Jiang, Y. (2016). Influence of household air-conditioning use modes on the energy performance of residential district cooling systems. Building Simulation, 9(4), 429–441. Zimmermann, G. (2007). Modeling and simulation of individual user behavior for building performance predictions. Society for Computer Simulation International. All individuals that contributed to this work have been listed as authors. The work is financially supported by The Opening Fund of State Key Laboratory of Green Building in Western China (grant no. LSKF202203), the Science and Technology Program of Guangzhou, China (grant no. 202102010424). Guangdong Polytechnic of Water Resources and Electric Engineering, Guangzhou, China Manning He School of Civil Engineering, Guangzhou University, Guangzhou, China Huiwang Pen, Meixiang Li, Yu Huang, Siwei Lou & Liwei Wen Guangdong Provincial Key Laboratory of Building Energy Efficiency and Application Technologies, Guangzhou, China Yu Huang, Siwei Lou & Liwei Wen State Key Laboratory of Green Building in Western China, Xian University of Architecture & Technology, Xian, China Yu Huang School of Architecture, Tsinghua University, Beijing, China Huiwang Pen Meixiang Li Siwei Lou Liwei Wen Manning He: original draft preparation, data investigation and visualization. Huiwang Peng: data collection and validation. Meixiang Li: data collection and validation. Yu Huang: Supervision. project administration and funding acquisition, draft review. Da Yan: Methodology. Siwei Lou: software and draft review. Liwei Wen: data validation. The author(s) read and approved the final manuscript. Correspondence to Yu Huang. The authors have no competing interests to declare that are relevant to the content of this article. He, M., Pen, H., Li, M. et al. Investigation on typical occupant behavior in air-conditioned office buildings for South China's Pearl River Delta. ARIN 1, 8 (2022). https://doi.org/10.1007/s44223-022-00005-w DOI: https://doi.org/10.1007/s44223-022-00005-w Building performance simulation Occupant behavior model Pearl River Delta Indoor environment quality
From free text to clusters of content in health records: an unsupervised graph partitioning approach M. Tarik Altuncu1,4, Erik Mayer3,4, Sophia N. Yaliraki2,4 & Mauricio Barahona1,4 Applied Network Science volume 4, Article number: 2 (2019) Cite this article Electronic healthcare records contain large volumes of unstructured data in different forms. Free text constitutes a large portion of such data, yet this source of richly detailed information often remains under-used in practice because of a lack of suitable methodologies to extract interpretable content in a timely manner. Here we apply network-theoretical tools to the analysis of free text in Hospital Patient Incident reports in the English National Health Service, to find clusters of reports in an unsupervised manner and at different levels of resolution based directly on the free text descriptions contained within them. To do so, we combine recently developed deep neural network text-embedding methodologies based on paragraph vectors with multi-scale Markov Stability community detection applied to a similarity graph of documents obtained from sparsified text vector similarities. We showcase the approach with the analysis of incident reports submitted in Imperial College Healthcare NHS Trust, London. The multiscale community structure reveals levels of meaning with different resolution in the topics of the dataset, as shown by relevant descriptive terms extracted from the groups of records, as well as by comparing a posteriori against hand-coded categories assigned by healthcare personnel. Our content communities exhibit good correspondence with well-defined hand-coded categories, yet our results also provide further medical detail in certain areas as well as revealing complementary descriptors of incidents beyond the external classification. We also discuss how the method can be used to monitor reports over time and across different healthcare providers, and to detect emerging trends that fall outside of pre-existing categories. The vast amounts of data collected by healthcare providers in conjunction with modern data analytics techniques present a unique opportunity to improve health service provision and the quality and safety of medical care for patient benefit (Colijn et al. 2017). Much of the recent research in this area has been on personalised medicine and its aim to deliver better diagnostics aided by the integration of diverse datasets providing complementary information. Another large source of healthcare data is organisational. In the United Kingdom, the National Health Service (NHS) has a long history of documenting extensively the different aspects of healthcare provision. The NHS is currently in the process of increasing the availability of several databases, properly anonymised, with the aim of leveraging advanced analytics to identify areas of improvement in NHS services. One such database is the National Reporting and Learning System (NRLS), a central repository of patient safety incident reports from the NHS in England and Wales. Set up in 2003, the NRLS now contains more than 13 million detailed records. The incidents are reported using a set of standardised categories and contain a wealth of organisational and spatio-temporal information (structured data), as well as, crucially, a substantial component of free text (unstructured data) where incidents are described in the 'voice' of the person reporting. 
The incidents are wide ranging: from patient accidents to lost forms or referrals; from delays in admission and discharge to serious untoward incidents, such as retained foreign objects after operations. The review and analysis of such data provides critical insight into the complex functioning of different processes and procedures in healthcare towards service improvement for safer carer. Although statistical analyses are routinely performed on the structured component of the data (dates, locations, assigned categories, etc), the free text remains largely unused in systematic processes. Free text is usually read manually but this is time-consuming, meaning that it is often ignored in practice, unless a detailed review of a case is undertaken because of the severity of harm that resulted. There is a lack of methodologies that can summarise content and provide content-based groupings across the large volume of reports submitted nationally for organisational learning. Methods that could provide automatic categorisation of incidents from the free text would sidestep problems such as difficulties in assigning an incident category by virtue of a priori pre-defined lists in the reporting system or human error, as well as offering a unique insight into the root cause analysis of incidents that could improve the safety and quality of care and efficiency of healthcare services. Our goal in this work is to showcase an algorithmic methodology that detects content-based groups of records in a given dataset in an unsupervised manner, based only on the free and unstructured textual description of the incidents. To do so, we combine recently developed deep neural-network high-dimensional text-embedding algorithms with network-theoretical methods. In particular, we apply multiscale Markov Stability community detection to a sparsified geometric similarity graph of documents obtained from text vector similarities. Our method departs from traditional natural language processing tools, which have generally used bag-of-words (BoW) representation of documents and statistical methods based on Latent Dirichlet Allocation (LDA) to cluster documents (Blei et al. 2003). More recent approaches have used deep neural network based language models clustered with k-means, without a full multiscale graph analysis (Hashimoto et al. 2016). There have been some previous applications of network theory to text analysis. For example, Lanchichinetti and co-workers (Lancichinetti et al. 2015) used a probabilistic graph construction analysed with the InfoMap algorithm (Rosvall et al. 2009); however, their community detection was carried out at a single-scale and the representation of text as BoW arrays lacks the power of neural network text embeddings. The application of multiscale community detection allows us to find groups of records with consistent content at different levels of resolution; hence the content categories emerge from the textual data, rather than fitting with pre-designed classifications. The obtained results could thus help mitigate possible human error or effort in finding the right category in complex category classification trees. We showcase the methodology through the analysis of a dataset of patient incidents reported to the NRLS. First, we use the 13 million records collected by the NRLS since 2004 to train our text embedding (although a much smaller corpus can be used). 
We then analyse a subset of 3229 records reported from St Mary's Hospital, London (Imperial College Healthcare NHS Trust) over three months in 2014 to extract clusters of incidents at different levels of resolution in terms of content. Our method reveals multiple levels of intrinsic structure in the topics of the dataset, as shown by the extraction of relevant word descriptors from the grouped records and a high level of topic coherence. Originally, the records had been manually coded by the operator upon reporting with up to 170 features per case, including a two-level manual classification of the incidents. Therefore, we also carried out an a posteriori comparison against the hand-coded categories assigned by the reporter (healthcare personnel) at the time of the report submission. Our results show good overall correspondence with the hand-coded categories across resolutions and, specifically, at the medium level of granularity. Several of our clusters of content correspond strongly to well-defined categories, yet our results also reveal complementary categories of incidents not defined in the external classification. In addition, the tuning of the granularity afforded by the method can be used to provide a distinct level of resolution in certain areas corresponding to specialised or particular sub-themes. Multiscale graph partitioning for text analysis: description of the framework Our framework combines text-embedding, geometric graph construction and multi-resolution community detection to identify, rather than impose, content-based clusters from free, unstructured text in an unsupervised manner. Figure 1 shows a summary of our pipeline. First, we pre-process each document to transform text into consecutive word tokens, where words are in their most normalised forms, and some words are removed if they have no distinctive meaning when used out of context (Bird et al. 2009; Porter 1980). We then train a paragraph vector model using the Doc2vec framework (Le and Mikolov 2014) on the whole set (13 million) of preprocessed text records, although training on smaller sets (1 million) also produces good results (Table 1). This training step is only done once. This Doc2Vec model is subsequently used to infer high-dimensional vector descriptions for the text of each of the 3229 documents in our target analysis set. We then compute a matrix containing pairwise similarities between any pair of document vectors, as inferred with Doc2vec. This matrix can be thought of as a full, weighted graph with documents as nodes and edges weighted by their similarity. We sparsify this graph to the union of a minimum spanning tree and a k-Nearest Neighbors (MST-kNN) graph, a geometric construction that removes less important similarities but preserves global connectivity for the graph and, hence, for the dataset. The derived MST-kNN graph is analysed with Markov Stability (Delvenne et al. 2010; Lambiotte et al. 2014), a multi-resolution dynamics-based graph partitioning method that identifies relevant subgraphs (i.e., clusters of documents) at different levels of granularity. Markov Stability uses a diffusive process on the graph to reveal the multiscale organisation at different resolutions without the need for choosing a priori the number of clusters, scale or organisation. To analyse a posteriori the different partitions across levels of resolution, we use both visualisations and quantitative scores. 
The visualisations include word clouds to summarise the main content, graph layouts, as well as Sankey diagrams and contingency tables that capture the correspondences across levels of resolution and relationships to the hand-coded classifications. The partitions are also evaluated quantitatively to score: (i) their intrinsic topic coherence (using pairwise mutual information (Newman et al. 2009; Newman et al. 2010)), and (ii) their similarity to the operator hand-coded categories (using normalised mutual information (Strehl and Ghosh 2003)). We now expand on the steps of the computational framework. Pipeline for data analysis including the training of the text embedding model and the graph-based unsupervised clustering of documents at different levels of resolution to find topic clusters only from the free text descriptions of hospital incident reports from the NRLS database Table 1 Benchmarking of text corpora used for Doc2Vec training The full dataset includes more than 13 million confidential reports of patient safety incidents reported to the National Reporting and Learning System (NRLS) between 2004 and 2016 from NHS trusts and hospitals in England and Wales. Each record has more than 170 features, including organisational details (e.g., time, trust code and location), anonymised patient information, medication and medical devices, among other details. The records are manually classified by operators to a two-level system of categories of incident type. In particular, the top level contains 15 categories including general groups such as 'Patient accident', 'Medication', 'Clinical assessment', 'Documentation', 'Admissions/Transfer' or 'Infrastructure' alongside more specific groups such as 'Aggressive behaviour', 'Patient abuse', 'Self-harm' or 'Infection control'. In most records, there is also a detailed description of the incident in free text, although the quality of the text is highly variable. Our analysis set for clustering is the group of 3229 records reported during the first quarter of 2014 at St. Mary's Hospital in London (Imperial College Healthcare NHS Trust). Text preprocessing Text preprocessing is important to enhance the performance of text embedding. We applied standard preprocessing techniques in natural language processing to the raw text of all 13 million records in our corpus. We normalise words into a single form and remove words that do not carry significant meaning. Specifically, we divide our documents into iterative word tokens using the NLTK library (Bird et al. 2009) and remove punctuation and digit-only tokens. We then apply word stemming using the Porter algorithm (Porter 1980; Willett 2006). If the Porter method cannot find a stemmed version for a token, we apply the Snowball algorithm (Porter 2001). Finally, we remove any stop-words (repeat words with low content) using NLTK's stop-word list. Although some of the syntactic information is reduced due to text preprocessing, this process preserves and consolidates the semantic information of the vocabulary, which is of relevance to our study. Text embedding Computational methods for text analysis rely on a choice of a mathematical representation of the base units, such as character n-grams, words or documents of any length. An important consideration for our methodology is an attempt to avoid the use of labelled data at the core of many supervised or semi-supervised classification methods (Agirre et al. 2016; Cer et al. 2017). 
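To make the preprocessing sequence of the previous subsection concrete, the following is a simplified Python sketch using NLTK (the exact token filters and fallback rules used by the authors may differ; the sample sentence is invented).

```python
import string

from nltk.tokenize import word_tokenize
from nltk.stem import PorterStemmer
from nltk.stem.snowball import SnowballStemmer
from nltk.corpus import stopwords  # requires nltk.download('punkt') and nltk.download('stopwords')

porter = PorterStemmer()
snowball = SnowballStemmer("english")
stop_words = set(stopwords.words("english"))

def preprocess(text):
    """Tokenise, drop punctuation/digit-only tokens and stop-words, then stem."""
    tokens = word_tokenize(text.lower())
    kept = [t for t in tokens
            if t not in stop_words
            and not all(ch in string.punctuation for ch in t)
            and not t.isdigit()]
    stemmed = []
    for t in kept:
        s = porter.stem(t)
        stemmed.append(s if s else snowball.stem(t))  # Snowball only if Porter returns nothing
    return stemmed

print(preprocess("Patient fell while transferring from bed to chair at 3 am."))
```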
In this work, we use a representation of text documents in vector form following recent developments in the field. Classically, bag-of-words (BoW) methods were used to obtain representations of the documents in a corpus in terms of vectors of term frequencies weighted by inverse document frequency (TF-iDF). While such methods provide a statistical description of documents, they do not carry information about the order or proximity of words to each other since they regard word tokens in an independent manner with no semantic or syntactic relationships considered. Furthermore, BoW representations tend to be high-dimensional and sparse, due to large sizes of word dictionaries and low frequencies of many terms. Recently, deep neural network language models have successfully overcome certain limitations of BoW methods by incorporating word neighbourhoods in the mathematical description of each term. PV-DBOW (Paragraph Vector - Distributed Bag of Words), also known as Doc2Vec (Le and Mikolov 2014), is such a model which represents any length of word sequences (i.e. sentences, paragraphs, documents) as d-dimensional vectors, where d is a user-defined parameter (typically d=300). Training a Doc2Vec model starts with a random d-dimensional vector assignment for each document in the corpus. A stochastic gradient descent algorithm iterates over the corpus with the objective of predicting a randomly sampled set of words from each document by using only the document's d-dimensional vector (Le and Mikolov 2014). The objective function being optimised by PV-DBOW is similar to the skip-gram model in Refs. (Mikolov et al. 2013a, b). Doc2Vec has been shown (Dai et al. 2014) to capture both semantic and syntactic characterisations of the input text outperforming BoW models, such as LDA (Blei et al. 2003). Here, we use the Gensim Python library (Rehurek and Sojka 2010) to train the PV-DBOW model. The Doc2Vec training was repeated several times with a variety of training hyper-parameters to optimise the output based on our own numerical experiments and the general guidelines provided by Lau and Baldwin (2016). We trained Doc2Vec models using text corpora of different sizes and content with different sets of hyper-parameters, in order to characterise the usability and quality of models. Specifically, we checked the effect of corpus size on model quality by training Doc2Vec models on the full 13 million NRLS records and on subsets of 1 million and 2 million randomly sampled records. (We note that our target subset of 3229 records has been excluded from these samples.) Furthermore, we checked the importance of the specificity of the text corpus by obtaining a Doc2Vec model from a generic, non-specific set of 5 million articles from Wikipedia representing standard English usage across a variety of topics. Benchmarking of the Doc2Vec training. We benchmarked the Doc2Vec models by scoring how well the document vectors represent the semantic topic structure: (i) calculating centroids for the 15 externally hand-coded categories; (ii) selecting the 100 nearest reports for each centroid; (iii) counting the number of incident reports (out of 1500) correctly assigned to their centroid. The results in Table 1 show that training on the highly specific text in the NRLS records is an important ingredient in the successful vectorisation of the documents, as shown by the degraded performance for the Wikipedia model across a variety of training hyper-parameters. 
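A minimal gensim sketch of the PV-DBOW (Doc2Vec) training step is given below. The stand-in corpus and tags are purely illustrative, and the hyper-parameter values are in line with the optimised settings reported below, except for min_count, which is lowered here so the toy example runs on a tiny corpus.

```python
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

# Stand-in for the preprocessed incident reports (token lists per record)
preprocessed_records = [
    ["patient", "fall", "bed", "chair"],
    ["medic", "dose", "delay", "ward"],
    ["referr", "form", "lost", "admiss"],
]
corpus = [TaggedDocument(words=tokens, tags=[i])
          for i, tokens in enumerate(preprocessed_records)]

# PV-DBOW (dm=0); the full-corpus model uses min_count=5 on 13M records
model = Doc2Vec(corpus, dm=0, vector_size=300, window=15,
                min_count=1, negative=5, sample=0.001, epochs=10, workers=7)

# Infer a vector for a new (out-of-training) incident description
vector = model.infer_vector(["patient", "fall", "corridor"])
print(vector.shape)  # (300,)
```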
Our results also show that reducing the size of the corpus from 13 million to 1 million records did not affect the benchmarking dramatically. This robustness of the results to the size of the training corpus was confirmed further with the use of more detailed metrics, as discussed below in section "Robustness of the results and comparison with other methods". Based on our benchmarking, we use henceforth (unless otherwise noted) the optimised Doc2Vec model obtained from the 13+ million NRLS records with the following hyper-parameters: {training method = dbow, number of dimensions for feature vectors size = 300, number of epochs = 10, window size = 15, minimum count = 5, number of negative samples = 5, random down-sampling threshold for frequent words = 0.001 }. As an indication of computational cost, the training of the model on the 13 million records takes approximately 11 h (run in parallel with 7 threads) on shared servers. Graph construction Once the Doc2Vec model is trained, we use it to infer a vector for each of the N=3229 records in our analysis set. We then construct a normalised cosine similarity matrix between the vectors by: computing the matrix of cosine similarities between all pairs of records, Scos; transforming it into a distance matrix Dcos=1−Scos; applying element-wise max norm to obtain \(\hat {D}=\|D_{cos}\|_{max}\); and normalising the similarity matrix \(\hat {S} = 1-\hat {D}\) which has elements in the interval [0,1]. The similarity matrix can be thought of as the adjacency matrix of a fully connected weighted graph. However, such a graph contains many edges with small weights reflecting weak similarities—in high-dimensional noisy datasets even the least similar nodes present a substantial degree of similarity. Such weak similarities are in most cases redundant, as they can be explained through stronger pairwise similarities present in the graph. These weak, redundant edges obscure the graph structure, as shown by the diffuse, spherical visualisation of the full graph layout in Fig. 2a. Planar layouts using the ForceAtlas2 algorithm (Jacomy et al. 2014) of some of the similarity graphs generated from the dataset of 3229 records. Each node represents a record and is coloured according to its hand-coded, external category to aid visualisation of the structure. Note that the external categories are not used to produce our content-driven multi-resolution clustering in Fig. 3. a Layout for the full, weighted normalised similarity matrix \(\hat {S}\) without MST-kNN applied. b–e show the layouts of the graphs generated from the data with the MST-kNN algorithm with an increasing level of sparsity: k=17,13,5,1 respectively. The structure of the graph is sharpened for intermediate values of k, and we choose k=13 for our analysis here The top plot presents the results of the Markov Stability algorithm across Markov times, showing the number of clusters of the optimised partition (red), the variation of information VI(t) for the ensemble of optimised solutions at each time (blue) and the variation of Information VI(t,t′) between the optimised partitions across Markov time (background colourmap). Relevant partitions are indicated by dips of VI(t) and extended plateaux of VI(t,t′). We choose five levels with different resolutions (from 44 communities to 3) in our analysis. The Sankey diagram below illustrates how the communities of documents (indicated by numbers and colours) map across Markov time scales. 
The community structure across scales present a strong quasi-hierarchical character—a result of the analysis and the properties of the data, since it is not imposed a priori. The different partitions for the five chosen levels are shown on a graph layout for the document similarity graph created with the MST-kNN algorithm with k=13. The colours correspond to the communities found by MS indicating content clusters To reveal the graph structure, we obtain a MST-kNN graph from the normalised similarity matrix (Veenstra et al. 2017). This is a simple sparsification based on a geometric heuristic that preserves the global connectivity of the graph while retaining details about the local geometry of the dataset. The MST-kNN algorithm starts by computing the minimum spanning tree (MST) of the full matrix \(\hat {D}\), i.e., the tree with (N−1) edges connecting all nodes in the graph with minimal sum of edge weights (distances). The MST is computed using the Kruskal algorithm implemented in SciPy (Jones et al. 2001). To this MST, we add edges connecting each node to its k nearest nodes (kNN) if they are not already in the MST. Here k is a user-defined parameter. The binary adjacency matrix of the MST-kNN graphs, EMST-kNN, is Hadamard-multiplied with \(\hat {S}\) to give the adjacency matrix A of the weighted, undirected sparsified graph. The MST-kNN method avoids a direct thresholding of the weights in \(\hat {S}\), and obtains a graph description that preserves local geometric information together with a global subgraph (the MST) that captures properties of the full dataset. The network layout visualisations in Fig. 2b–e give an intuitive picture of the effect of the sparsification. The highly sparse graphs obtained when the number of neighbours k is very small are not robust. As k is increased, the local similarities between documents induce the formation of dense subgraphs (which appear closer in the graph visualisation layout). When the number of neighbours becomes too large, the local structure becomes diffuse and the subgraphs lose coherence, signalling the degradation of the local graph structure. Figure 2 shows that the MST-kNN graph with k=13 presents a reasonable balance between local and global structure. Relatively sparse graphs that preserve important edges and global connectivity of the dataset (guaranteed here by the MST) have computational advantages when using community detection algorithms. The MST-kNN construction has been reported to be robust to the selection of the parameter k due to the guaranteed connectivity provided by the MST (Veenstra et al. 2017). In the following, we fix k=13 for our analysis with the multi-scale graph partitioning framework, but we have scanned values of k∈ [ 1,50] in the graph construction from our data and have found that the construction is robust as long as k is not too small (i.e., k>13). The detailed comparisons are shown in section "Robustness of the results and comparison with other methods". The MST-kNN construction has the advantage of its simplicity and robustness, and the fact that it balances the local and global structure of the data. However, the area of network inference and graph construction from data, and graph sparsification is very active, and several alternative approaches exist based on different heuristics, e.g., Graphical Lasso (Friedman et al. 2008), Planar Maximally Filtered Graph (Tumminello et al. 
2005), spectral sparsification (Spielman and Srivastava 2011), or the Relaxed Minimum Spanning Tree (RMST) (Beguerisse-Diaz et al. 2013). We have experimented with some of those methods and obtained comparable results. A detailed comparison of sparsification methods as well as the choice of distance in defining the similarity matrix \(\hat S\) is left for future work. Multiscale graph partitioning The area of community detection encompasses a variety of graph partitioning approaches which aim to find 'good' partitions into subgraphs (or communities) according to different cost functions, without imposing the number of communities a priori (Schaub et al. 2017). The notion of community thus depends on the choice of cost function. Commonly, communities are taken to be subgraphs whose nodes are connected strongly within the community with relatively weak inter-community edges. Such structural notion is related to balanced cuts. Other cost functions are posed in terms of transitions inside and outside of the communities, usually as one-step processes (Rosvall et al. 2009). When transition paths of random walks of all lengths are considered, the concept of community becomes intrinsically multi-scale, i.e., different partitions can be found to be relevant at different time scales leading to a multi-level description dictated by the transition dynamics (Delvenne et al. 2010; Schaub et al. 2012a; Lambiotte et al. 2014). This leads to the framework of Markov Stability, a dynamics-based, multi-scale community detection methodology, which can be shown to recover seamlessly several well-known heuristics as particular cases (Delvenne et al. 2010; Delvenne et al. 2013; Lambiotte et al. 2008). Here, we apply Markov Stability to find partitions of the similarity graph A at different levels of resolution. The subgraphs detected correspond to clusters of documents with similar content. Markov Stability (MS)Footnote 1 is an unsupervised community detection method that finds robust and stable partitions under the evolution of a continuous-time diffusion process without a priori choice of the number or type of communities or their organisation (Delvenne et al. 2010; Schaub et al. 2012a; Lambiotte et al. 2014; Beguerisse-Díaz et al. 2014). In simple terms, it can be understood by analogy to a drop of ink diffusing on the graph under a diffusive Markov process. The ink diffuses homogeneously unless the graph has some intrinsic structural organisation, in which case the ink gets transiently contained, over particular time scales, within groups of nodes (i.e., subgraphs or communities). The existence of this transient containment signals the presence of a natural partition of the graph. As the process evolves, the ink diffuses out of those initial communities but might get transiently contained in other, larger subgraphs. By analysing this Markov dynamics over time, MS detects the structure of the graph across scales. The Markov time t thus acts as a resolution parameter that allows us to extract robust partitions that persist over particular time scales, in an unsupervised manner. Given the adjacency matrix AN×N of the graph obtained as described previously, let us define the diagonal matrix D=diag(d), where d=A1 is the degree vector. The random walk Laplacian matrix is defined as LRW=IN−D−1A where IN is the identity matrix of size N, and the transition matrix (or kernel) of the associated continuous-time Markov process is \(\phantom {\dot {i}\!}P(t)=e^{-t L_{\text {RW}}}, \, t>0\) (Lambiotte et al. 2014). 
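Before turning to the partitioning itself, the MST-kNN sparsification described above can be sketched as follows, assuming D_hat and S_hat from the previous step. This is an illustrative implementation rather than the authors' code; note that SciPy treats zero entries of a dense distance matrix as absent edges.

```python
# MST-kNN sparsification of the normalised similarity matrix.
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def mst_knn_graph(D_hat, S_hat, k=13):
    N = D_hat.shape[0]
    # Minimum spanning tree of the distance matrix: N-1 edges guaranteeing global connectivity
    mst = minimum_spanning_tree(D_hat).toarray()
    E = (mst > 0) | (mst.T > 0)                  # symmetric binary MST adjacency

    # Add edges from each node to its k nearest neighbours (smallest distances, excluding itself)
    for i in range(N):
        order = np.argsort(D_hat[i])
        neighbours = [j for j in order if j != i][:k]
        E[i, neighbours] = True
        E[neighbours, i] = True

    # Hadamard product with the normalised similarities gives the weighted adjacency matrix A
    return E * S_hat
```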
For each partition, a binary membership matrix HN×C maps the N nodes into C clusters. We can then define the C×C clustered autocovariance matrix: $$\begin{array}{*{20}l} R(t,H) = H^{T}[\Pi P(t)-\pi\pi^{T}]H \end{array} $$ where π is the steady-state distribution of the process and Π=diag(π). The element [R(t,H)]αβ quantifies the probability that a random walker starting from community α will end in community β at time t, subtracting the probability that the same event occurs by chance at stationarity. We then define our cost function measuring the goodness of a partition over time t, termed the Markov Stability of partition H: $$ r(t,H) = \text{trace} \left[R(t,H)\right]. $$ A partition H that maximises r(t,H) is comprised of communities that preserve the flow within themselves over time t, since in that case the diagonal elements of R(t,H) will be large and the off-diagonal elements will be small. For details, see Delvenne et al. (2010), Schaub et al. (2012a), Lambiotte et al. (2014) and Bacik et al. (2016). MS searches for partitions at each Markov time that maximise r(t,H). Although the maximisation of (2) is an NP-hard problem (hence with no guarantees for global optimality), there are efficient optimisation methods that work well in practice. Our implementation here uses the Louvain Algorithm (Blondel et al. 2008; Lambiotte et al. 2008) which is efficient and known to give good results when applied to benchmarks. To obtain robust partitions, we run the Louvain algorithm 500 times with different initialisations at each Markov time and pick the best 50 with the highest Markov Stability value r(t,H). We then compute the variation of information (Meilă 2007) of this ensemble of solutions VI(t), as a measure of the reproducibility of the result under the optimisation. In addition, the relevant partitions are required to be persistent across time, as given by low values of the variation of information between optimised partitions across time VI(t,t′). Robust partitions are thus indicated by Markov times where VI(t) shows a dip and VI(t,t′) has an extended plateau, indicating consistent results from different Louvain runs and validity over extended scales (Bacik et al. 2016; Lambiotte et al. 2014). Visualisation and interpretation of the results Graph layouts: We use the ForceAtlas2 (Jacomy et al. 2014) layout to represent the graph of 3229 NRLS Patient Incident reports. This layout follows a force-directed iterative method to find node positions that balance attractive and repulsive forces. Hence similar nodes tend to be grouped together on the planar layout. We colour the nodes by either hand-coded categories (Fig. 2) or multiscale MS communities (Fig. 3). Spatially consistent colourings on this layout imply good clusters of documents in terms of the similarity graph. Tracking membership through Sankey diagrams: Sankey diagrams allow us to visualise the relationship of node memberships across different partitions and with respect to the hand-coded categories. In particular, two-layer Sankey diagrams (e.g., Fig. 4) reflect the correspondence between MS clusters and the hand-coded external categories, whereas the multilayer Sankey diagram in Fig. 3 represents the results of the multi-resolution MS community detection across scales. Summary of the 44-community found with the MS algorithm in an unsupervised manner directly from the text of the incident reports, as seen in Fig. 3. 
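The quantities defined above can be assembled into a single score function. The following sketch uses dense linear algebra, which is feasible for N of a few thousand but cubic in N; A is the weighted adjacency matrix and H a binary membership matrix.

```python
# Markov Stability r(t, H) of a partition encoded by the binary membership matrix H (N x C).
import numpy as np
from scipy.linalg import expm

def markov_stability(A, H, t):
    d = A.sum(axis=1)                                   # degree vector d = A 1
    pi = d / d.sum()                                    # stationary distribution of the random walk
    L_rw = np.eye(A.shape[0]) - np.diag(1.0 / d) @ A    # random-walk Laplacian L_RW = I - D^{-1} A
    P_t = expm(-t * L_rw)                               # transition kernel P(t) = exp(-t L_RW)
    R = H.T @ (np.diag(pi) @ P_t - np.outer(pi, pi)) @ H  # clustered autocovariance R(t, H)
    return np.trace(R)                                  # Markov Stability r(t, H)
```

In the full pipeline this score is maximised with the Louvain algorithm at each Markov time, and the robustness of the optima is then assessed through the variation of information, as described above.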
To interpret the 44 content communities, we have compared them a posteriori to the 15 external, hand-coded categories (indicated by names and colours). This comparison is presented in two equivalent ways: through a Sankey diagram showing the correspondence between categories and communities (left); and through a normalised contingency table based on z-scores (right). The communities have been assigned a content label based on their word clouds presented in Figure Additional file 1 in the SI Normalised contingency tables: In addition to Sankey diagrams between our MS clusters and the hand-coded categories, we also provide a complementary visualisation as heatmaps of normalised contingency (z-score) tables, e.g., Fig. 4. This allows us to compare the relative association of content clusters to the external categories at different resolution levels. A quantification of this correspondence is provided by the NMI score introduced in Eq. (5). Word clouds of increased intelligibility through lemmatisation: Our method clusters text documents according to their intrinsic content. This can be understood as a type of topic detection. To understand the content of the clusters, we use Word Clouds as basic, yet intuitive, tools that summarise information from a group of documents. Word clouds allow us to evaluate the results and extract insights when comparing a posteriori with hand-coded categories. They can also provide an aid for monitoring results when used by practitioners. The stemming methods described in the "Text preprocessing" section truncate words severely. Such truncation enhances the power of the language processing computational methods, as it reduces the redundancy in the word corpus. Yet when presenting the results back to a human observer, it is desirable to report the content of the clusters with words that are readily comprehensible. To generate comprehensible word clouds in our a posteriori analyses, we use a text processing method similar to the one described in (Schubert et al. 2017). Specifically, we use the part of speech (POS) tagging module from NLTK to leave out sentence parts except the adjectives, nouns, and verbs. We also remove less meaningful common verbs such as 'be', 'have', and 'do' and their variations. The residual words are then lemmatised and represented with their lemmas in order to normalise variations of the same word. Once the text is processed in this manner, we use the Python library wordcloudFootnote 2 to create word clouds with 2 or 3-gram frequency list of common word groups. The results present distinct, understandable word topics. Quantitative benchmarking of topic clusters Although our dataset has attached a hand-coded classification by a human operator, we do not use it in our analysis and we do not consider it as a 'ground truth'. Indeed, one of our aims is to explore the relevance of the fixed external classes as compared to the content-driven groupings obtained in an unsupervised manner. Hence we provide a double route to quantify the quality of the clusters by computing two complementary measures: an intrinsic measure of topic coherence and a measure of similarity to the external hand-coded categories, defined as follows. Topic coherence of text: As an intrinsic measure of consistency of word association without any reference to an external 'ground truth', we use the pointwise mutual information (PMI) (Newman et al. 2009; Newman et al. 2010). 
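Before the quantitative benchmarks, the word-cloud pipeline described above (POS filtering, removal of common verbs, lemmatisation, n-gram frequencies) can be sketched as follows; cluster_text and the list of common verbs are illustrative placeholders, and the required NLTK corpora (punkt, averaged_perceptron_tagger, wordnet) must be downloaded beforehand.

```python
# Lemmatised word-cloud sketch for the documents of one content cluster.
import nltk
from nltk import pos_tag, word_tokenize
from nltk.stem import WordNetLemmatizer
from wordcloud import WordCloud

lemmatizer = WordNetLemmatizer()
COMMON_VERBS = {"be", "is", "are", "was", "were", "have", "has", "had", "do", "does", "did"}

tokens = word_tokenize(cluster_text.lower())   # cluster_text: concatenated raw text (placeholder)
kept = []
for word, tag in pos_tag(tokens):
    if tag.startswith("JJ"):                   # adjectives
        kept.append(lemmatizer.lemmatize(word, pos="a"))
    elif tag.startswith("NN"):                 # nouns
        kept.append(lemmatizer.lemmatize(word, pos="n"))
    elif tag.startswith("VB"):                 # verbs, dropping uninformative common verbs
        lemma = lemmatizer.lemmatize(word, pos="v")
        if lemma not in COMMON_VERBS:
            kept.append(lemma)

# Frequencies of 2-grams and 3-grams of the retained lemmas
ngram_counts = {}
for n in (2, 3):
    for gram in nltk.ngrams(kept, n):
        key = " ".join(gram)
        ngram_counts[key] = ngram_counts.get(key, 0) + 1

WordCloud(width=800, height=600).generate_from_frequencies(ngram_counts).to_file("cluster_wordcloud.png")
```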
The PMI is an information-theoretical score that captures the probability of being used together in the same group of documents. The PMI score for a pair of words (w1,w2) is: $$ PMI(w_{1},w_{2})=\log{\frac{P(w_{1} w_{2})}{P(w_{1})P(w_{2})}} $$ where the probabilities of the words P(w1),P(w2), and of their co-occurrence P(w1w2) are obtained from the corpus. To obtain the aggregate \(\widehat {PMI}\) for the graph partition C={ci} we compute the PMI for each cluster, as the median PMI between its 10 most common words (changing the number of words gives similar results), and we obtain the weighted average of the PMI cluster scores: $$\begin{array}{*{20}l} \widehat{PMI} (C) = \sum\limits_{c_{i} \in C} \frac{n_{i}}{N} \, \underset{\substack{w_{k}, w_{\ell} \in S_{i} \\ k<\ell}}{\text{median}}\ PMI(w_{k},w_{\ell}), \end{array} $$ where ci denotes the clusters in partition C, each with size ni; \(N=\sum \nolimits _{c_{i} \in C} n_{i}\) is the total number of nodes; and Si denotes the set of top 10 words for cluster ci. We use this \(\widehat {PMI}\) score to evaluate partitions without requiring a labelled ground truth. The PMI score has been shown to perform well (Newman et al. 2009, 2010) when compared to human interpretation of topics on different corpora (Newman et al. 2011; Fang et al. 2016), and is designed to evaluate topical coherence for groups of documents, in contrast to other tools aimed at short forms of text. See Agirre et al. (2016), Cer et al. (2017), Rychalska et al. (2016), and Tian et al. (2017) for other examples. Similarity between the obtained partitions and the hand-coded categories: To compare against the external classification a posteriori, we use the normalised mutual information (NMI), a well-used information-theoretical score that quantifies the similarity between clusterings considering both the correct and incorrect assignments in terms of the information (or predictability) between the clusterings. The NMI between two partitions C and D of the same graph is: $$ NMI(C,D)=\frac{I(C,D)}{\sqrt{H(C)H(D)}}=\frac{\sum\limits_{c \in C} \sum\limits_{d \in D} p(c,d) \, \log\frac{p(c,d)}{p(c)p(d)}}{\sqrt{H(C)H(D)}} $$ where I(C,D) is the Mutual Information and H(C) and H(D) are the entropies of the two partitions. The NMI is bounded (0≤NMI≤1) with a higher value corresponding to higher similarity of the partitions (i.e., NMI=1 when there is perfect agreement between partitions C and D). The NMI score is directly relatedFootnote 3 to the V-measure used in the computer science literature (Rosenberg and Hirschberg 2007). We use the NMI to compare the partitions obtained by MS (and other methods) against the hand-coded classification assigned by the operator. Application to the analysis of hospital incident reports Multi-resolution community detection extracts content clusters at different levels of granularity We applied Markov Stability across a broad span of Markov times (t∈ [ 0.01,100] in steps of 0.01) to the MST-kNN similarity graph of N=3229 incident records. At each Markov time, we ran 500 independent optimisations of the Louvain algorithm and selected the optimal partition at each time. Repeating the optimisation from 500 different initial starting points enhances the robustness of the outcome and allows us to quantify the robustness of the partition to the optimisation procedure. To quantify this robustness, we computed the average variation of information VI(t) (a measure of dissimilarity) between the top 50 partitions for each t. 
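A sketch of the two scores defined above is given below. X is a placeholder binary document–word occurrence matrix used to estimate the PMI probabilities, and the NMI with geometric-mean normalisation (matching Eq. (5)) is available in scikit-learn.

```python
# Topic coherence (aggregate PMI) and similarity to the external classes (NMI).
import numpy as np
from itertools import combinations
from sklearn.metrics import normalized_mutual_info_score

def pmi(X, i, j, eps=1e-12):
    """PMI(w_i, w_j) = log P(w_i w_j) / (P(w_i) P(w_j)), probabilities estimated from
    document occurrence frequencies in the binary (0/1) matrix X (documents x words)."""
    p_i = X[:, i].mean()
    p_j = X[:, j].mean()
    p_ij = (X[:, i] & X[:, j]).mean()
    return np.log((p_ij + eps) / (p_i * p_j + eps))

def aggregate_pmi(X, clusters, top_words):
    """Weighted average over clusters of the median PMI among each cluster's top-10 words.
    `clusters`: cluster id -> document indices; `top_words`: cluster id -> top-10 word indices."""
    N = sum(len(docs) for docs in clusters.values())
    score = 0.0
    for c, docs in clusters.items():
        pairs = [pmi(X, i, j) for i, j in combinations(top_words[c], 2)]
        score += (len(docs) / N) * np.median(pairs)
    return score

# NMI against the hand-coded categories, with geometric-mean normalisation as in Eq. (5):
# nmi = normalized_mutual_info_score(hand_coded_labels, ms_labels, average_method="geometric")
```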
Once the full scan across Markov time was finalised, a final comparison of all the optimal partitions obtained was carried out, so as to assess if any of the optimised partitions was optimal at any other Markov time, in which case it was selected. We then obtained the VI(t,t′) across all optimal partitions found across Markov times to ascertain when partitions are robust across levels of resolution. This layered process of optimisation enhances the robustness of the outcome given the NP-hard nature of MS optimisation, which prevents guaranteed global optimality. Figure 3 presents a summary of our analysis. We plot the number of clusters of the optimal partition and the two metrics of variation of information across all Markov times. The existence of a long plateau in VI(t,t′) coupled to a dip in VI(t) implies the presence of a partition that is robust both to the optimisation and across Markov time. To illustrate the multi-scale features of the method, we choose several of these robust partitions, from finer (44 communities) to coarser (3 communities), obtained at five Markov times and examine their structure and content. We also present a multi-level Sankey diagram to summarise the relationships and relative node membership across the levels. The MS analysis of the graph of incident reports reveals a rich multi-level structure of partitions, with a strong quasi-hierarchical organisation, as seen in the graph layouts and the multi-level Sankey diagram. It is important to remark that, although the Markov time acts as a natural resolution parameter from finer to coarser partitions, our process of optimisation does not impose any hierarchical structure a priori. Hence the observed consistency of communities across level is intrinsic to the data and suggests the existence of content clusters that naturally integrate with each other as sub-themes of larger thematic categories. The detection of intrinsic scales within the graph provided by MS thus enables us to obtain clusters of records with high content similarity at different levels of granularity. This capability can be used by practitioners to tune the level of description to their specific needs. Interpretation of MS communities: content and a posteriori comparison with hand-coded categories To ascertain the relevance of the different layers of content clusters found in the MS analysis, we examined in detail the five levels of resolution presented in Fig. 3. For each level, we prepared word clouds (lemmatised for increased intelligibility), as well as a Sankey diagram and a contingency table linking content clusters (i.e., graph communities) with the hand-coded categories externally assigned by an operator. We note again that this comparison was only done a posteriori, i.e., the external categories were not used in our text analysis. The results are shown in Figs. 4, 5, and 6 (and Supplementary Figures in Additional file 1–Additional file 2) for all levels. Analysis of the results of the 12-community partition of documents obtained by MS based on their text content and their correspondence to the external categories. Some communities and categories are clearly matched while other communities reflect strong medical content Results for the coarser MS partitions of the document similarity graph into: a 7 communities and b 3 communities, showing in each case their correspondence to the external hand-coded categories. 
Some of the MS communities with strong medical content (e.g., labour ward, radiotherapy, pressure ulcer) remain separate in our content-driven, unsupervised clustering and are not integrated with other procedural records due to their semantic distinctiveness even to this coarsest level of clustering The partition into 44 communities presents content clusters with well-defined characterisations, as shown by the Sankey diagram and the highly clustered structure of the contingency table (Fig. 4). The content labels for the communities were derived by us from the word clouds presented in detail in the Supplementary Information (Figure in Additional file 1 in the SI). Compared to the 15 hand-coded categories, this 44-community partition provides finer groupings of records with several clusters corresponding to sub-themes or more specific sub-classes within large, generic hand-coded categories. This is apparent in the external classes 'Accidents', 'Medication', 'Clinical assessment', 'Documentation' and 'Infrastructure', where a variety of subtopics are identified corresponding to meaningful subclasses (see Figure in Additional file 1 for details). In other cases, however, the content clusters cut across the external categories, or correspond to highly specific content. Examples of the former are the content communities of records from labour ward, chemotherapy, radiotherapy and infection control, whose reports are grouped coherently based on content by our algorithm, yet belong to highly diverse external classes. At this level of resolution, our algorithm also identified highly specific topics as separate content clusters. These include blood transfusions, pressure ulcer, consent, mental health, and child protection. We have studied two levels of resolution where the number of communities (12 and 17) is close to that of hand-coded categories (15). The results of the 12-community partition are presented in Fig. 5 (see Figure in Additional file 2 in the SI for the slightly finer 17-community partition). As expected from the quasi-hierarchical nature of our multi-resolution analysis, we find that some of the communities in the 12-way partition emerge from consistent aggregation of smaller communities in the 44-way partition. In terms of topics, this means that some of the sub-themes observed in Fig. 4 are merged into a more general topic. This is apparent in the case of Accidents: seven of the communities in the 44-way partition become one larger community (community 2 in Fig. 5), which has a specific and complete identification with the external category 'Patient accidents'. A similar phenomenon is seen for the Nursing community (community 1) which falls completely under the external category 'Infrastructure'. The clusters related to 'Medication' similarly aggregate into a larger community (community 3), yet there still remains a smaller, specific community related to Homecare medication (community 12) with distinct content. Other communities strand across a few external categories. This is clearly observable in communities 10 and 11 (Samples/ lab tests/forms and Referrals/appointments), which fall naturally across the external categories 'Documentation' and 'Clinical Assessment'. Similarly, community 9 (Patient transfers) sits across the 'Admission/Transfer' and 'Infrastructure' external categories, due to its relation to nursing and other physical constraints. 
The rest of the communities contain a substantial proportion of records that have been hand-classified under the generic 'Treatment/Procedure' class; yet here they are separated into groups that retain medical coherence, i.e., they refer to medical procedures or processes, such as Radiotherapy (Comm. 4), Blood transfusions (Comm. 7), IV/cannula (Comm. 5), Pressure ulcer (Comm. 8), and the large community Labour ward (Comm. 6). The high specificity of the Radiotherapy, Pressure ulcer and Labour ward communities means that they are still preserved as separate groups on the next level of coarseness given by the 7-way partition (Fig. 6a). The mergers in this case lead to a larger communities referring to Medication, Referrals/Forms and Staffing/Patient transfers. Figure 6b shows the final level of agglomeration into 3 communities: a community of records referring to accidents; another community broadly referring to procedural matters (referrals, forms, staffing, medical procedures) cutting across many of the external categories; and the labour ward community still on its own as a subgroup of incidents with distinctive content. This process of agglomeration of content, from sub-themes into larger themes, as a result of the multi-scale hierarchy of graph partitions obtained with Markov Stability is shown explicitly with word clouds in Fig. 7 for the 17, 12 and 7-way partitions. The word clouds of the partitions into 17, 12 and 7 communities show a multi-resolution coarsening in the content descriptive power mirroring the multi-level, quasi-hierarchical community structure found in the document similarity graph Robustness of the results and comparison with other methods Our framework consists of a series of steps for which there are choices and alternatives. Although it is not possible to provide comparisons to the myriad of methods and possibilities available, we have examined quantitatively the robustness of the results to parametric and methodological choices in different steps of the framework: (i) the importance of using Doc2Vec embeddings instead of BoW vectors, (ii) the size of training corpus for Doc2Vec; (iii) the sparsity of the MST-kNN similarity graph construction. We have also carried out quantitative comparisons to other methods, including: (i) LDA-BoW, and (ii) clustering with other community detection methods. We provide a brief summary here and additional material in the SI. Quantifying the importance of Doc2Vec compared to BoW: The use of fixed-sized vector embeddings (Doc2Vec) instead of standard bag of words (BoW) is an integral part of our pipeline. Doc2Vec produces lower dimensional vector representations (as compared to BoW) with higher semantic and syntactic content. It has been reported that Doc2Vec outperforms BoW representations in practical benchmarks of semantic similarity, as well as being less sensitive to hyper-parameters (Dai et al. 2014). To quantify the improvement provided by Doc2Vec in our framework, we constructed a MST-kNN graph following the same steps but starting with TF-iDF vectors for each document. We then ran Markov Stability on this TF-iDF similarity graph, and compared the results to those obtained from the Doc2Vec similarity graph. Figure 8 shows that the Doc2vec version outperforms the BoW version across all resolutions in terms of both NMI and \(\widehat {PMI}\) scores. Comparison of Markov Stability applied to Doc2Vec versus BoW (using TF-iDF) similarity graphs obtained under the same graph constructions steps. 
a Similarity against the externally hand-coded categories measured with NMI; b intrinsic topic coherence of the computed clusters measured with \(\widehat {PMI}\) Robustness to the size of dataset to train Doc2Vec : As shown in Table 1, we have tested the effect of the size of the training corpus on the Doc2Vec model. We trained Doc2Vec on two additional training sets of 1 million and 2 million records (randomly chosen from the full set of ∼13 million records). We then followed the same procedure to construct the MST-kNN similarity graph and carried out the MS analysis. The results, presented in Figure in Additional file 3 in the SI, show that the performance is affected only mildly by the size of the Doc2Vec training set. Robustness of the MS results to the level of sparsification: To examine the effect of sparsification in the graph construction, we have studied the dependence of quality of the partitions against the number of neighbours, k, in the MST-kNN graph. Our numerics, shown in Figure in Additional file 4 in the SI, indicate that both the NMI and \(\widehat {PMI}\) scores of the MS clusterings reach a similar level of quality for values of k above 13-16, with minor improvement after that. Hence our results are robust to the choice of k, provided it is not too small. Due to computational efficiency, we thus favour a relatively small k, but not too small. Comparison of MS to Latent Dirichlet Allocation with Bag-of-Words (LDA): We carried out a comparison with LDA, a widely used methodology for text analysis. A key difference between standard LDA and our MS method is the fact that a different LDA model needs to be trained separately for each number of topics pre-determined by the user. To offer a comparison across the methods, we obtained five LDA models corresponding to the five MS levels we considered in detail. The results in Table 2 show that MS and LDA give partitions that are comparably similar to the hand-coded categories (as measured with NMI), with some differences depending on the scale, whereas the MS clusterings have higher topic coherence (as given by \(\widehat {PMI}\)) across all scales. Table 2 Benchmarking of Markov Stability clusters versus LDA topics at different levels of resolution To give an indication of the computational cost, we ran both methods on the same servers. Our method takes approximately 13 h in total to compute both the Doc2Vec model on 13 million records (11 h) and the full MS scan with 400 partitions across all resolutions (2 h). The time required to train just the 5 LDA models on the same corpus amounts to 30 h (with timings ranging from ∼2 h for the 3 topic LDA model to 12.5 h for the 44 topic LDA model). This comparison also highlights the conceptual difference between our multi-scale methodology and LDA topic modelling. While LDA computes topics at a pre-determined level of resolution, our method obtains partitions at all resolutions in one sweep of the Markov time, from which relevant partitions are chosen based on their robustness. However, the MS partitions at all resolutions are available for further investigation if so needed. Comparison of MS to other partitioning and community detection algorithms: We have used several algorithms readily available in code libraries (i.e., the iGraph module for Python) to cluster/partition the same kNN-MST graph. Figure in Additional file 5 in the SI shows the comparison against several well-known partitioning methods (Modularity Optimisation (Clauset et al. 2004), InfoMap (Rosvall et al. 
2009), Walktrap (Pons and Latapy 2005), Label Propagation (Raghavan et al. 2007), and Multi-resolution Louvain (Blondel et al. 2008)) which give just one partition (or two in the case of the Louvain implementation in iGraph) into a particular number of clusters, in contrast with our multiscale MS analysis. Our results show that MS provides improved or equal results to other graph partitioning methods for both NMI and \(\widehat {PMI}\) across all scales. Only for very fine resolution with more than 50 clusters, Infomap, which partitions graphs into small clique-like subgraphs (Schaub et al. 2012a, b), provides a slightly improved NMI for that particular scale. Therefore, Markov Stability allows us to find relevant, good quality clusterings across all scales by sweeping the Markov time parameter. This work has applied a multiscale graph partitioning algorithm (Markov Stability) to extract content-based clusters of documents from a textual dataset of healthcare safety incident reports in an unsupervised manner at different levels of resolution. The method uses paragraph vectors to represent the records and obtains an ensuing similarity graph of documents constructed from their content. The framework brings the advantage of multi-resolution algorithms capable of capturing clusters without imposing a priori their number or structure. Since different levels of resolution of the clustering can be found to be relevant, the practitioner can choose the level of description and detail to suit the requirements of a specific task. Our a posteriori analysis evaluating the similarity against the hand-coded categories and the intrinsic topic coherence of the clusters showed that the method performed well in recovering meaningful categories. The clusters of content capture topics of medical practice, thus providing complementary information to the externally imposed classification categories. Our analysis shows that some of the most relevant and persistent communities emerge because of their highly homogeneous medical content, although they are not easily mapped to the standardised external categories. This is apparent in the medically-based content clusters associated with Labour ward, Pressure ulcer, Chemotherapy, Radiotherapy, among others, which exemplify the alternative groupings that emerge from free text content. The categories in the top level (Level 1) of the pre-defined classification hierarchy are highly diverse in size (as shown by their number of assigned records), with large groups such as 'Patient accident', 'Medication', 'Clinical assessment', 'Documentation', 'Admissions/Transfer' or 'Infrastructure' alongside small, specific groups such as 'Aggressive behaviour', 'Patient abuse', 'Self-harm' or 'Infection control'. Our multi-scale partitioning finds corresponding groups in content across different levels of resolution, providing additional subcategories with medical detail within some of the large categories (as shown in Fig. 4 and Additional file 1). An area of future research will be to confirm if the categories found by our analysis are consistent with a second level in the hierarchy of external categories (Level 2, around 100 categories) that is used less consistently in hospital settings. 
The use of content-driven classification of reports could also be important within current efforts by the World Health Organisation (WHO) under the framework for the International Classification for Patient Safety (ICPS) (World Health Organization and WHO Patient Safety 2010) to establish a set of conceptual categories to monitor, analyse and interpret information to improve patient care. One of the advantages of a free-text analytical approach is the timely provision of an intelligible description of incident report categories derived directly from the rich description in the words of the reporters themselves. The insight gained from analysing the free-text entries could play a valuable role and add richer information than would otherwise be obtained from the existing approach of pre-defined classes. Not only could this improve the current state of play, in which much of the free text of these reports goes unused, but it also avoids the fallacy of assigning incidents to a pre-defined category that, through a lack of granularity, can miss an important opportunity for feedback and learning. The nuanced information and classifications extracted from free-text analysis thus suggest a complementary axis to existing approaches to characterise patient safety incident reports. Currently, local incident reporting systems are used by hospitals to submit reports to the NRLS and require risk managers to improve the data quality of reports, due to errors or uncertainty in categorisation from reporters, before submission. The application of free-text analytical approaches, like the one presented here, has the potential to free risk managers' time from the labour-intensive tasks of classification and correction by human operators, redirecting it towards quality improvement activities informed by the intelligence of the data itself. Additionally, the method allows for the discovery of emerging topics or classes of incidents directly from the data when such events do not fit the pre-assigned categories, by using projection techniques alongside methods for anomaly and innovation detection. In ongoing work, we are examining the use of our characterisation of incident reports to enable comparisons across healthcare organisations and to monitor their change over time. This line of research requires the quantification of in-class text similarities and the dynamic management of the report embeddings through updates and recalculation of the vector embedding. Improvements in the process of robust graph construction are also part of our future work. Detecting anomalies in the data to decide whether new topic clusters should be created, and providing online classification suggestions to users based on the text they input, are further improvements we aim to add to aid decision support and data collection, and potentially to help fine-tune some of the predefined categories of the external classification.
The code for Markov Stability is open and accessible at https://github.com/michaelschaub/PartitionStabilityand http://wwwf.imperial.ac.uk/~mpbara/Partition_Stability/, last accessed on March 24, 2018 The word cloud generator library for Python is open and accessible at https://github.com/amueller/word_cloud, last accessed on March 25, 2018 http://scikit-learn.org/stable/modules/generated/sklearn.metrics.v_measure_score.html NRLS: National Reporting and Learning System NLTK: BoW: Bag of Words TF-iDF: Term Frequency - inverse Document Frequency D2V: Doc2vec, document to vector PV-DBOW: Paragraph vectors using distributed bag of words kNN: k-Nearest Neighbour MST: Minimum Spanning Tree Markov Stability NMI: Normalised Mutual Information Pairwise Mutual Information Agirre, E, Banea C, Cer D, Diab M, Gonzalez-Agirre A, Mihalcea R, Rigau G, Wiebe J (2016) Semeval-2016 task 1: Semantic textual similarity, monolingual and cross-lingual evaluation In: Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), 497–511.. Association for Computational Linguistics, San Diego. Bacik, KA, Schaub MT, Beguerisse-Díaz M, Billeh YN, Barahona M (2016) Flow-Based Network Analysis of the Caenorhabditis elegans Connectome. PLoS Comput Biol 12(8):1–27. https://doi.org/10.1371/journal.pcbi.1005055. Beguerisse-Diaz, M, Vangelov B, Barahona M (2013) Finding role communities in directed networks using Role-Based Similarity, Markov Stability and the Relaxed Minimum Spanning Tree In: 2013 IEEE Global Conference on Signal and Information Processing, GlobalSIP 2013 - Proceedings, 937–940, London. https://doi.org/10.1109/GlobalSIP.2013.6737046. Beguerisse-Díaz, M, Garduño-Hernández G, Vangelov B, Yaliraki SN, Barahona M (2014) Interest communities and flow roles in directed networks: the Twitter network of the UK riots. J R Soc Interface R Soc 11(101):20140,940. https://doi.org/10.1098/rsif.2014.0940. Bird, S, Klein E, Loper E (2009) Natural Language Processing with Python, 1st edn. O'Reilly Media, Inc. ISBN 0596516495, 9780596516499. 1st Edition. Blei, DM, Ng AY, Jordan MI (2003) Latent Dirichlet Allocation. J Mach Learn Res 3:993–1022. http://dl.acm.org/citation.cfm?id=944919.944937. MATH Google Scholar Blondel, VD, Guillaume JL, Lambiotte R, Lefebvre E (2008) Fast unfolding of communities in large networks. J Stat Mech Theory Exp 2008(10):P10,008. https://doi.org/10.1088/1742-5468/2008/10/P10008. Cer, D, Diab M, Agirre E, Lopez-Gazpio I, Specia L (2017) Semeval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation In: Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), Association for Computational Linguistics, Vancouver, Canada, 1–14. http://aclweb.org/anthology/S17-2001. Clauset, A, Newman ME, Moore C (2004) Finding community structure in very large networks. Phys Rev E 70(6):066,111. Colijn, C, Jones N, Johnston IG, Yaliraki S, Barahona M (2017) Toward precision healthcare: context and mathematical challenges. Front Physiol 8:136. Dai, AM, Olah C, Le QV, Corrado GS (2014) Document embedding with paragraph vectors In: NIPS Deep Learning Workshop. Delvenne, JC, Yaliraki SN, Barahona M (2010) Stability of graph communities across time scales. Proc Natl Acad Sci U S A 107(29):12,755–60. http://www.ncbi.nlm.nih.gov/pubmed/20615936. http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=PMC2919907. 
Delvenne, JC, Schaub MT, Yaliraki SN, Barahona M (2013) The Stability of a Graph Partition: A Dynamics-Based Framework for Community Detection. Springer New York, New York. https://doi.org/10.1007/978-1-4614-6729-8_11. Fang, A, Macdonald C, Ounis I, Habel P (2016) Topics in Tweets: A User Study of Topic Coherence Metrics for Twitter Data. In: Ferro N, Crestani F, Moens MF, Mothe J, Silvestri F, Di Nunzio GM, Hauff C, Silvello G (eds)Advances in Information Retrieval, 492–504.. Springer International Publishing, Cham. Friedman, J, Hastie T, Tibshirani R (2008) Sparse inverse covariance estimation with the graphical lasso. Biostatistics 9(3):432–441. MATH Article Google Scholar Hashimoto, K, Kontonatsios G, Miwa M, Ananiadou S (2016) Topic detection using paragraph vectors to support active learning in systematic reviews. J Biomed Inform 62:59–65. https://www.sciencedirect.com/science/article/pii/S1532046416300442. Jacomy, M, Venturini T, Heymann S, Bastian M (2014) ForceAtlas2, a continuous graph layout algorithm for handy network visualization designed for the Gephi software. PLoS ONE 9(6):1–12. Jones, E, Oliphant T, Peterson P, et al. (2001) {SciPy}: Open source scientific tools for {Python}. http://www.scipy.org/. Lambiotte, R, Delvenne JC, Barahona M (2008) Laplacian Dynamics and Multiscale Modular Structure in Networks. ArXiv e-prints. 0812.1770, 0812.1770. Lambiotte, R, Delvenne JC, Barahona M (2014) Random Walks, Markov Processes and the Multiscale Modular Organization of Complex Networks. IEEE Trans Netw Sci Eng 1(2):76–90. Lancichinetti, A, Sirer MI, Wang JX, Acuna D, Körding K, Amaral LAN (2015) High-Reproducibility and High-Accuracy Method for Automated Topic Classification. Phys Rev X 5(1):11,007. https://link.aps.org/doi/10.1103/PhysRevX.5.011007. Lau, JH, Baldwin T (2016) An Empirical Evaluation of doc2vec with Practical Insights into Document Embedding Generation In: Proceedings of the 1st Workshop on Representation Learning for NLP, Rep4NLP@ACL 2016, 78–86.. Berlin, Germany. August 11, 2016, https://doi.org/10.18653/v1/W16-1609. Le, Q, Mikolov T (2014) Distributed representations of sentences and documents In: Proceedings of the 31st International Conference on International Conference on Machine Learning - Volume 32, JMLR.org, ICML'14, II–1188–II–1196.. JMLR.org, Beijing. http://dl.acm.org/citation.cfm?id=3044805.3045025. Meilă, M (2007) Comparing clusterings—an information based distance. J Multivar Anal 98(5):873–895. https://www.sciencedirect.com/science/article/pii/S0047259X06002016. MathSciNet MATH Article Google Scholar Mikolov, T, Chen K, Corrado G, Dean J (2013a) Efficient estimation of word representations in vector space. CoRR abs/1301.3781. http://dblp.uni-trier.de/db/journals/corr/corr1301.html#abs-1301-3781. Mikolov, T, Sutskever I, Chen K, Corrado G, Dean J (2013b) Distributed Representations of Words and Phrases and Their Compositionality In: Proceedings of the 26th International Conference on Neural Information Processing Systems - Volume 2, 3111–3119.. Curran Associates Inc., USA, NIPS'13. http://dl.acm.org/citation.cfm?id=2999792.2999959. Newman, D, Karimi S, Cavedon L (2009) External evaluation of topic models. In: Kay J, Thomas P, Trotman A (eds)Australasian Doc. Comp. Symp., 2009, 11–18.. School of Information Technologies, University of Sydney. 
Newman, D, Lau JH, Grieser K, Baldwin T (2010) Automatic Evaluation of Topic Coherence In: Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, Association for Computational Linguistics, 100–108.. Stroudsburg, PA, USA, HLT '10. http://dl.acm.org/citation.cfm?id=1857999.1858011. Newman, D, Bonilla EV, Buntine W (2011) Improving topic coherence with regularized topic models. In: Shawe-Taylor J, Zemel RS, Bartlett PL, Pereira F, Weinberger KQ (eds)Proceedings of the 24th International Conference on Neural Information Processing Systems, Curran Associates Inc., USA, NIPS'11, 496–504.. Curran Associates, Inc.http://dl.acm.org/citation.cfm?id=2986459.2986515. Pons, P, Latapy M (2005) Computing communities in large networks using random walks In: International symposium on computer and information sciences, 284–293.. Springer-Verlag, Berlin. ISCIS'05. http://doi.org/10.1007/11569596_31. Porter, M (1980) An algorithm for suffix stripping. Program 14(3):130–137. https://doi.org/10.1108/eb046814. Porter, MF (2001) Snowball: A language for stemming algorithms. http://snowball.tartarus.org/texts/introduction.html. Accessed 11.03.2008, 15.00h. Raghavan, UN, Albert R, Kumara S (2007) Near linear time algorithm to detect community structures in large-scale networks. Phys Rev E 76(3):036,106. Rehurek, R, Sojka P (2010) Software Framework for Topic Modelling with Large Corpora In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, 45–50.. ELRA, Valletta, Malta. Rosenberg, A, Hirschberg J (2007) V-measure: A conditional entropy-based external cluster evaluation measure In: Proceedings of the 2007 joint conference on empirical methods in natural language processing and computational natural language learning (EMNLP-CoNLL), 410–420.. The Association for Computational Linguistics, Prague. Rosvall, M, Axelsson D, Bergstrom CT (2009) The map equation. Eur Phys J Spec Top 178(1):13–23. Rychalska, B, Pakulska K, Chodorowska K, Walczak W, Andruszkiewicz P (2016) Samsung Poland NLP Team at SemEval-2016 Task 1: Necessity for diversity; combining recursive autoencoders, WordNet and ensemble methods to measure semantic similarity In: Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), 602–608.. Association for Computational Linguistics, San Diego. http://www.aclweb.org/anthology/S16-1091. Schaub, MT, Delvenne JC, Yaliraki SN, Barahona M (2012a) Markov dynamics as a zooming lens for multiscale community detection: Non clique-like communities and the field-of-view limit. PLoS ONE 7:1–11. Schaub, MT, Lambiotte R, Barahona M (2012b) Encoding dynamics for multiscale community detection: Markov time sweeping for the map equation. Phys Rev E 86(2):026,112. Schaub, MT, Delvenne JC, Rosvall M, Lambiotte R (2017) The many facets of community detection in complex networks. Appl Netw Sci 2(1):4. https://doi.org/10.1007/s41109-017-0023-6. Schubert, E, Spitz A, Weiler M, Gertz JGM (2017) Semantic Word Clouds with Background Corpus Normalization and t-distributed Stochastic Neighbor Embedding. CoRR abs/1708.0. Spielman, DA, Srivastava N (2011) Graph sparsification by effective resistances. SIAM J Comput 40(6):1913–1926. Strehl, A, Ghosh J (2003) Cluster Ensembles — a Knowledge Reuse Framework for Combining Multiple Partitions. J Mach Learn Res 3:583–617. https://doi.org/10.1162/153244303321897735. 
MathSciNet MATH Google Scholar Tian, J, Zhou Z, Lan M, Wu Y (2017) ECNU at SemEval-2017 Task 1: Leverage Kernel-based Traditional NLP features and Neural Networks to Build a Universal Model for Multilingual and Cross-lingual Semantic Textual Similarity In: Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), 191–197.. Association for Computational Linguistics, Vancouver. http://www.aclweb.org/anthology/S17-2028. Tumminello, M, Aste T, Di Matteo T, Mantegna RN (2005) A tool for filtering information in complex systems. Proc Natl Acad Sci U S A 102(30):10,421–6. http://www.ncbi.nlm.nih.gov/pubmed/16027373. http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=PMC1180754. Veenstra, P, Cooper C, Phelps S (2017) Spectral clustering using the kNN-MST similarity graph In: 2016 8th Computer Science and Electronic Engineering Conference, CEEC 2016 - Conference Proceedings, 222–227.. IEEE, Essex. Willett, P (2006) The Porter stemming algorithm: then and now. Program 40(3):219–223. https://www.emeraldinsight.com/doi/10.1108/00330330610681295. World Health Organization, WHO Patient Safety (2010) Conceptual framework for the international classification for patient safety version 1.1: final technical report. Tech. Rep. January. Geneva, World Health Organization. http://www.who.int/iris/handle/10665/70882. We thank Joshua Symons for help with accessing the data. We also thank Elias Bamis, Zijing Liu and Michael Schaub for helpful discussions. This research was supported by the National Institute for Health Research (NIHR) Imperial Patient Safety Translational Research Centre and NIHR Imperial Biomedical Research Centre. The views expressed are those of the authors and not necessarily those of the NHS, the NIHR, or the Department of Health. MB, SNY and EM acknowledge funding from the EPSRC through award EP/N014529/1 funding the EPSRC Centre for Mathematics of Precision Healthcare. The dataset in this work is managed by the Big Data and Analytics Unit (BDAU), Imperial College London, and consists of incident reports submitted to the NRLS. Analysis of the data was undertaken within the Secure Environment of the BDAU. Due to its nature, we cannot publicise any part of the dataset, beyond that already provided within this manuscript. No individual identifiable patient information is disclosed in this work. Only aggregated information is used to describe the clusters. Department of Mathematics, Imperial College London, South Kensington campus, London, SW7 2AZ, UK M. Tarik Altuncu & Mauricio Barahona Department of Chemistry, Imperial College London, South Kensington campus, London, SW7 2AZ, UK Sophia N. Yaliraki Centre for Health Policy, Institute of Global Health Innovation, Imperial College London, St Mary's campus, London, W2 1NY, UK Erik Mayer EPSRC Centre for Mathematics of Precision Healthcare, Imperial College London, South Kensington campus, London, SW7 2AZ, UK M. Tarik Altuncu, Erik Mayer, Sophia N. Yaliraki & Mauricio Barahona M. Tarik Altuncu Mauricio Barahona MTA conducted the computational research. MTA and MB analysed the data. MB, EM and SNY conceived the study. All authors wrote the manuscript. All authors read and approved the final manuscript. Correspondence to Mauricio Barahona. MTA is a PhD student at Imperial College London, Department of Mathematics. He holds an MSc degree in finance from Sabanci University and a BSc in Electrical and Electronics Engineering from Bogazici University. 
EM is a Clinical Senior Lecturer in the Department of Surgery and Cancer and Centre for Health Policy at Imperial College London and Transformation Chief Clinical Information Officer (Clinical Analytics and Informatics), ICHNT. SNY is a Professor of Theoretical Chemistry in the Department of Chemistry at Imperial College London and also with the EPSRC Centre for Mathematics of Precision Healthcare. MB is Professor of Mathematics and Chair in Biomathematics in the Department of Mathematics at Imperial College London, and Director of the EPSRC Centre for Mathematics of Precision Healthcare at Imperial.
Additional file 1: Word clouds for the 44 community partition. (PDF 209 kb)
Additional file 2: Word cloud and Sankey diagram for the 17 community partition. (PDF 169 kb)
Additional file 3: Effect of the corpus size. (PDF 107 kb)
Additional file 4: Effect of the sparsification. (PDF 123 kb)
Additional file 5: Comparison with other clustering methods. (PDF 103 kb)
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Altuncu, M., Mayer, E., Yaliraki, S. et al. From free text to clusters of content in health records: an unsupervised graph partitioning approach. Appl Netw Sci 4, 2 (2019). https://doi.org/10.1007/s41109-018-0109-9
Keywords: Topic clustering; Unsupervised multi-resolution clustering; Markov Stability partition algorithm
Network Medicine in the era of Big Data in Science and Healthcare
Synergistic effects of melphalan and Pinus kesiya Royle ex Gordon (Simaosong) extracts on apoptosis induction in human cancer cells Natthida Weerapreeyakul1, Sasipawan Machana2 & Sahapat Barusrux3 Chinese Medicine volume 11, Article number: 29 (2016) Cite this article This study aims to determine the synergistic effects of the chemotherapeutic drug melphalan and the phytoconstituents extracted from Pinus kesiya Royle ex Gordon (Simaosong) in human cancer cells. P. kesiya twigs extracted from 50 % ethanol–water were evaluated alone (6–500 µg/mL) and in combination with melphalan (0.75–15 µg/mL). The cytotoxic effects of single extract or extract and melphalan combination were examined by a neutral red assay to investigate their antiproliferative and apoptosis induction effects in the U937 and HepG2 cell lines. Nuclei morphological change and DNA fragmentation were examined by DNA nuclei staining with 4´6-diamidino-2-phenylindole (DAPI) and agarose gel electrophoresis, respectively. The chemical constituents of the P. kesiya extract were assessed using gas chromatography–mass spectrometry (GC–MS) analysis. The synergistic effects of different IC50 ratios of the P. kesiya extract and melphalan combination were analyzed in each cancer cell line. The dose reduction index (DRI) was calculated to determine the extent of concentration reduction in the combination treatment compared with the concentration of each single treatment. The IC50 ratios for melphalan to P. kesiya extract that caused 75 % antiproliferation could be reduced after combination. This response was greater in the U937 cells than in the HepG2 cells (all P < 0.001). Melphalan and P. kesiya extract had a similar effect on apoptosis induction both singly and in combination. P. kesiya extract synergized the antiproliferation and apoptosis induction effects of melphalan. Combining the P. kesiya extract with melphalan reduced toxicity while retaining the therapeutic efficacy of melphalan. Specific and selective apoptosis induction in targeted chemotherapy is limited due to multidrug resistance and intolerably severe side effects [1, 2]. Synergistic combinations of two or more agents could overcome the toxicity and side effects associated with the high doses of chemotherapeutic drugs in monotherapy [3]. Several herbal plant extracts are rich sources of bioactive constituents that inhibit, reverse, or retard tumorigenesis [3, 4]. Additionally, some research on herbal synergistic action indicates that the whole herb produces a better effect than any single isolated active ingredient [5, 6]. The rosin of Pinus kesiya Royle ex Gordon (Simaosong) contains szemaoenin (diterpenoid) isopimaric acid, abiet-13(14)-en-8,12-epoxy-18-oic acid, abiet-8,11,13-trien-15-hydroxy-18-oic acid, pimarol, isopimarol, abiet-8,11,13-trien-18-oic acid, and 15-hydroxyabietic acid [7]. The main compounds found in P. kesiya turpentine are α- and β-pinene [8]. The needle oil is rich in α- and β-pinene, citronellol, bornyl acetate, β-phellandrene, camphene, and β-caryophyllene [9]. The needles of P. kesiya contain dichlorobenzene isomer, 1,4-cineol, α-terpinene, o-cimene and imonene enantiomers [10, 11], and monoterpenes [12]. P. kesiya is used to relieve flatulence, stomachache, and cough in complementary and alternative medicine [13]. The 50 % ethanol–water crude extract of the woody twigs of P. kesiya has an apoptotic-induction effect on both the human hepatocellular carcinoma HepG2 and the human leukemic U937 cell lines because of its cytotoxicity [14, 15]. 
Many studies indicate that this effect arises from the whole crude extract with its lower activity of isolated individual compounds [3–6]. The synergistic effect of the whole extract of P. kesiya plus the chemotherapeutic drug melphalan is of interest as it may expand the use of this herbal plant. Melphalan is an alkylating anticancer drug, leading to DNA cross-linking, DNA damage, and finally apoptosis of cancer cells [16]; however, melphalan resistance and highly toxic side effects in other tissues limit its use [17, 18]. We hypothesized that plant extracts that exhibited an anticancer activity in vitro might enhance the antiproliferative activity of melphalan, which would permit the use of a lower dosage and reduced side effects. This study aims to determine the synergistic effects of P. kesiya and melphalan phytoconstituents ​in human cancer cells. P. kesiya woody twigs were collected and taxonomically authenticated by Assistant Professor Thaweesak Thitimetharoach. The species of samples were determined using the Flora of Thailand [19] and the Thai Forest Bulletin [20]. The voucher (TT-OC-SK-910) was deposited at the Herbal Herbarium, Faculty of Pharmaceutical Sciences, Khon Kaen University, Khon Kaen Province, Thailand. The 50 % ethanol–water extract of P. kesiya twig was prepared as previously reported [14, 15]. Dried plants were cut and macerated with 50 % ethanol–water (1 kg:6 L) for 7 days with occasional manual shaking. The solvent was filtered, distilled by a rotary evaporator (RV 8 V, IKA, Germany) below 40°C, and freeze-dried to obtain the crude extract. The percent yield of the 50 % ethanol–water extract of P. kesiya was 4.3 %. Stock solution of P. kesiya extract was freshly prepared in dimethyl sulfoxide (DMSO) to make 100 mg/mL stock solution and further diluted with culture media to create a working solution (10 to 500 µg/mL). Gas chromatography–mass spectrometry (GC–MS) analysis GC–MS analysis was performed as described by Weerapreeyakul et al. [21] on an Agilent 6890 N gas chromatograph (Agilent Technologies, China) coupled to an Agilent 5973 N mass selective detector (Agilent Technologies, USA) to determine the extract composition for further standardization. Capillary GC analysis was performed using a DB-5 ms (3 m × 0.25 mm id, 0.25 µm) capillary column from Agilent Technologies (J&W Scientific, USA) with helium as the carrier gas. The column initially flowed at 80 °C for 6 min at a rate of 2 mL/min and an average velocity of 52 cm/s. The temperature was raised to 280 °C (at a rate of 5 °C/min) for 24 min. The total runtime was 70 min. The injector temperature was maintained at 250 °C and the injection volume was set at 2.0 µL in the splitless mode. The interface temperature was held at 280 °C. Mass spectra were scanned from m/z 50.0 to m/z 500.0 at a rate of 1.5 scans/s with a threshold of 150. The electron impact ionization energy was 70 eV. The chemical components of the crude extracts were identified from the chromatograms and mass spectra using the Wiley 7 N.l database (Agilent Technologies, USA). The human leukemic (U937) cell line was cultured in RPMI 1640. The human hepatocellular carcinoma (HepG2) cell line and normal African green monkey kidney epithelial (Vero) cell line were cultured in DMEM (GIBCO, Invitrogen Corporation, USA). Both media were supplemented with 10 % fetal bovine serum (GIBCO, Invitrogen Corporation), 100 units/mL penicillin and 100 µg/mL streptomycin (GIBCO, Invitrogen Corporation). 
The cells were cultured at 37 °C in a humidified atmosphere containing 5 % CO2. Antiproliferative effect The antiproliferative activity of the P. kesiya extract in the leukemia (U937), hepatocellular carcinoma (HepG2), and normal (Vero) cell lines was assessed by neutral red assay [14, 15]. Briefly, 100 µL of U937 cells at a density of 5 × 105 cells/mL and 100 µL of HepG2 or Vero cells at a density of 3 × 105 cells/mL were independently seeded in 96-wells plates and incubated for 24 h. After cell growth, cells were treated with various concentrations (10 to 500 µg/mL) of the P. kesiya crude extract or melphalan (purity 95 %, Sigma–Aldrich Chemie GmbH, Germany) dissolved in DMSO (United States Biological, USA). The maximum final concentration of the compound was 500 µg/mL to maintain 1 % v/v DMSO with a cytotoxicity <10 % compared with the untreated cells. After treated cells were exposed to the test compound for 24 h, cells were stained directly with a final concentration of 50 µg/mL neutral red dye (Sigma Chemicals Co., USA) and incubated for another 2 h. The neutral red stained viable cells were dissolved by 0.33 % hydrogen chloride in isopropanol and detected using a colorimetric-based method. The absorbance was measured at 520 and 650 nm (reference wavelength) by a spectromicroplate reader (TECAN, Grödig, Austria). A plot of percentage cytotoxicity vs. test compound concentration was used to calculate the IC50. Apoptosis induction effect Nuclei morphological study by 4´6-diamidino-2-phenylindole (DAPI) staining assay The apoptosis effect of the plant crude extracts was primarily determined by fluorescent dye staining and DAPI to identify the condensation and fragmentation of nuclear DNA [14, 15]. Briefly, the cancer cells (HepG2 or U937) were treated with various concentrations of the test compounds (6 to 500 µg/mL of P. kesiya and 0.75 to 15 µg/mL of melphalan) for 24 h, after which the culture medium was removed and the cells were washed with fresh medium. Cells were then fixed by cold methanol. DAPI dye (Sigma–Aldrich Chemie GmbH, Germany) was then added to stain the nuclei DNA for 1 h. The excess dye was removed and 1X PBS (pH 7.4; 10 mM) was added to glycerin (at a 1:1 ratio). The DAPI staining assay was performed in triplicate in independent experiments. An inverted fluorescence microscope ​(Nikon eclipse 80i, Kanagawa, Japan) was used to record images of the DAPI staining. The average percentage of apoptotic cells was calculated from three independent wells with 10 eye views per well under the inverted fluorescence microscopy at a magnitude of 40×. Analysis of DNA fragmentation DNA fragmentation was analyzed [14, 15]. Briefly, after cancer cells were treated with various concentrations of P. kesiya extracts (60, 150, 300, 450, and 500 µg/mL) and melphalan (15 µg/mL) for 24 h, they were collected and washed with fresh medium. Then, the cell suspension was transferred to microcentrifuge tubes and centrifuged (Daihan Scientific, Seoul, Korea) at 300×g for 5 min to collect the cell pellet. The DNA in the cell pellet was extracted using a FlexiGene DNA kit (QIAGEN GmbH, Germany) then 2 µg of the DNA was analyzed using electrophoresis on 2 % agarose gels (Bio-Rad, USA) containing 0.1 mg/mL ethidium bromide (AppliChem, USA). DNA was mixed with loading dye (SibEnzyme Ltd., Russia) and the gel electrophoresed in 0.5X Tris–Borate-EDTA buffer (pH 8.3; 40 mMTris base, 45 mM boric acid, 1 mM EDTA; Sigma–Aldrich Chemie GmbH, Germany) at 250 V for 1 min and 20 V for 4 h. 
After electrophoresis, the DNA fragments were analyzed by In Genius L gel documentation (Syngene, USA).

Enhancement of anticancer effect of P. kesiya extract in vitro

The enhanced chemosensitivity of melphalan combined with P. kesiya on the U937 and HepG2 cell lines was determined as per Chou and Talalay [22]. The antiproliferation assay on the melphalan and P. kesiya extract combination treatment against the HepG2 cancer cell line after 24 h was performed using the same method as for melphalan or P. kesiya extract alone. The cells were treated with the P. kesiya extract and melphalan was added subsequently. Different concentrations of the P. kesiya extract and melphalan combination treatment were used (Table 1). Briefly, the melphalan concentration was fixed at 1 × IC50 and the P. kesiya concentration was varied at 0.02, 0.05, 0.1, 0.2, 0.5, 1, 1.5, and 2 × IC50 in the U937 and HepG2 cell lines. In another series, the P. kesiya extract concentration was fixed at 1 × IC50 and the melphalan concentration was varied at 0.05, 0.1, 0.2, 0.5, 1, 1.5, and 2 × IC50 in both cancer cell lines. The maximum concentration that could be used in the experiment while keeping the DMSO level below 1 % v/v was 500 µg/mL; therefore, instead of the 2 × IC50 concentration of P. kesiya (600 µg/mL), a concentration of 500 µg/mL was used. After 24 h exposure, the percentage of antiproliferation for each treatment was calculated.

Table 1 Concentration ratios of P. kesiya extract and melphalan combinations used in the combination anticancer study

The combined effects were analyzed to determine whether the enhanced growth inhibitory effect was antagonistic, additive, or synergistic. The combination index (CI) was calculated (adapted from the multiple-drug effect analysis based on the median effect principle and the isobologram technique developed by Chou [22] and Eid et al. [23]). The CI takes into account both the potency and the shape of the dose–effect curve, and is given by Eq. (1):
$$\mathrm{CI} = \frac{(D)_{1}}{(D_{X})_{1}} + \frac{(D)_{2}}{(D_{X})_{2}}$$
CI < 1, CI = 1, and CI > 1 correspond to synergism, additivity, and antagonism, respectively. The denominators (DX)1 and (DX)2 are the doses of the first and second compound (designated 1 and 2) given alone that produce x % antiproliferation. Additionally, (D)1 and (D)2 are the doses of compounds 1 and 2 in the combination treatment that exhibit the same x % antiproliferation. The calculated dose reduction index (DRI) was used to determine the extent of dose reduction in the combination treatment compared with the dose of each single treatment. For the combination effect, the DRI was defined as
$$\mathrm{DRI}_{1} = \frac{(D_{X})_{1}}{(D)_{1}} \quad \text{and} \quad \mathrm{DRI}_{2} = \frac{(D_{X})_{2}}{(D)_{2}}.$$
The relationship between DRI and CI was represented as
$$\mathrm{CI} = \frac{1}{\mathrm{DRI}_{1}} + \frac{1}{\mathrm{DRI}_{2}}.$$
The same antiproliferation experiment was performed on the normal Vero cells to provide a comparison to determine the cytotoxic effect of combining the compounds against the normal cells.
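For readers who want to redo this arithmetic on their own dose–effect data, the CI and DRI definitions above reduce to a few lines of code. The sketch below is only an illustration (it is not the authors' analysis software, and the example doses are hypothetical, not values from this study):

```python
def combination_index(d1, d2, dx1, dx2):
    """Chou-Talalay combination index (Eq. 1).

    d1, d2   -- doses of compounds 1 and 2 used in combination that give x % effect
    dx1, dx2 -- doses of compounds 1 and 2 used alone that give the same x % effect
    CI < 1 indicates synergism, CI = 1 an additive effect, CI > 1 antagonism.
    """
    return d1 / dx1 + d2 / dx2


def dose_reduction_index(dx, d):
    """DRI_i = (Dx)_i / (D)_i: fold reduction of compound i allowed by the combination."""
    return dx / d


# Hypothetical illustration: suppose 90 % antiproliferation needs 80 ug/mL of the drug
# alone and 600 ug/mL of the extract alone, but only 10 + 300 ug/mL in combination.
ci = combination_index(d1=10, d2=300, dx1=80, dx2=600)
dri_drug = dose_reduction_index(dx=80, d=10)       # 8.0-fold reduction
dri_extract = dose_reduction_index(dx=600, d=300)  # 2.0-fold reduction

print(f"CI = {ci:.3f}")                                          # 0.625 -> synergism
print(f"1/DRI_1 + 1/DRI_2 = {1/dri_drug + 1/dri_extract:.3f}")   # same value, as expected
```

The identity CI = 1/DRI1 + 1/DRI2 also provides a convenient internal check when tabulating DRI values such as those reported in Table 3.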
The synergistic apoptotic effects of the combination of P. kesiya extract and melphalan were evaluated against the U937 and HepG2 cells by DAPI staining and DNA fragmentation assay. Experiments were performed in triplicate and the results were expressed as a mean ± standard deviation ​from three independent experiments. Statistical analysis was performed by IBM SPSS Statistics 17.0 (SPSS Inc., Chicago, IL, USA). A two-way analysis of variance (ANOVA) test followed by Fisher's least significant difference test for multiple comparisons of independent experiments were used to determine the effective enhancement of the inhibition growth by P. kesiya and melphalan in various cell lines. The significance level was set at P < 0.05. Statistical differences of percentage apoptotic cells between the P. kesiya treated group and the positive control group were compared by one-way ANOVA followed by Tukey's honest significant difference test with a significance level of P < 0.05. Antiproliferative effects of P. kesiya extract and melphalan singly versus in combination on U937 and HepG2 cell lines The antiproliferative effects of P. kesiya extract and melphalan singly or in combination were investigated in U937 and HepG2 cell lines. The IC50 values of P. kesiya in U937 and HepG2 were 299.0 ± 5.2 µg/mL and 52.0 ± 5.8 µg/mL, respectively. The IC50 values of melphalan in U937 and HepG2 were 15.0 ± 1.0 µg/mL and 37.7 ± 9.8 µg/mL, respectively. Both melphalan and P. kesiya extract caused significantly greater antiproliferation than was observed in untreated cancer cells (control). P. kesiya extract exerted potent antiproliferation in the HepG2 cells at an IC50 value of 52.0 ± 5.8 µg/mL. P. kesiya extract had a stronger selectivity against the two cancer cell lines than melphalan. A significant enhanced antiproliferative effect occurred with the combination when treating human U937 and HepG2 cell lines (Table 2). A two-way ANOVA was conducted that examined the effect of IC50 ratio and cell types on percentage antiproliferation. There was a statistically significant interaction between the effects of IC50 ratio and cell types on percentage antiproliferation, F(26, 84) = 42.654, P < 0.001. A two-way ANOVA detected significant differences in the antiproliferative activity of the various IC50 ratios (all P < 0.001) and among the various cell lines (P < 0.001). Significantly higher antiproliferative activities were observed at P. kesiya to melphalan concentration ratios of 0.2:1, 0.5:1, 1:1, 1.5:1, 2:1, 1:0.2, 1:0.5, 1:1.5, and 1:2 compared with normal Vero cells (P < 0.001). When the concentration of melphalan was constant, an enhanced antiproliferative effect was observed at P. kesiya to melphalan concentration ratios of 0.05:1 and 0.5:1 in the U937 and HepG2 cell lines, respectively. Alternatively, when P. kesiya was constant, the enhanced antiproliferative effect was observed at melphalan to P. kesiya concentration ratios of 0.1:1 and 0.2:1 in the U937 and HepG2 cell lines, respectively. Table 2 Antiproliferation of P. kesiya extract and melphalan combination in U937, HepG2, and Vero cell lines Melphalan alone exhibited strong antiproliferation in normal Vero cells at an IC50 value of 59.9 ± 3.2 µg/mL, while P. kesiya extract alone was inactive in normal Vero cells. When treating the Vero cells, the antiproliferation of the combination treatment was <25 % when the concentration of melphalan was constant (Table 2). The result showed an antagonistic effect of P. 
kesiya extract on the antiproliferation of melphalan in Vero cells. In the presence of a 1 × IC50 concentration of P. kesiya, the concentration of melphalan used could be as high as 2 × IC50, producing only 9.5 ± 1.9 % antiproliferation against Vero cells. Synergistic effect of melphalan and P. kesiya extract combination analyzed in U937 leukemic and HepG2 hepatocellular carcinoma cell lines The CIs of the combined effect of P. kesiya and melphalan in the U937 and HepG2 cell lines at IC75 and IC90 values are presented in Table 3 and Fig. 1. Our results indicated that the combination of P. kesiya and melphalan exhibited high synergistic effects in U937 and HepG2 cells at concentrations producing 75 and 90 % antiproliferation (IC75 and IC90). As observed from the higher synergistic effect (low CI value), the combination therapy was more effective on the U937 cells than on the HepG2 cells. Table 3 Dose reduction index (DRI) and combination index (CI) Isobolograms of the plot between combination index (CI) and fa in U937 (black symbol) and HepG2 (white symbol). fa = fraction affected by D (i.e., percentage antiproliferation/100). Isobolograms of the combination of compounds when P. kesiya was fixed (a, c; square), and when melphalan was fixed (b, d; circle) in each cancer cell lines Extent of dose reduction in the combination treatment compared with single doses of each treatment Table 3 shows the decreased concentration of melphalan or P. kesiya extract in the combination therapy to produce 90 and 75 % antiproliferation in U937 cells and HepG2 cells. The concentration of melphalan (when combined with a fixed concentration of P. kesiya extract) necessary to inhibit cancer growth by 90 % (IC90) represents a 7.8- and 3.6-fold decrease in U937 and HepG2 cells, respectively. By comparison, the concentration of P. kesiya extract (when combined with a fixed concentration of melphalan) needed for the IC90 represents a 6.5- and 2.4-fold decrease in U937 and HepG2 cells, respectively. A similar DRI trend (i.e., a greater melphalan dose reduction) for U937 cells compared with HepG2 cells was observed at 75 % antiproliferation. Dose reduction also depended on the type of cancer cells: 90 and 75 % cell death occurred when the P. kesiya extract concentration was fixed. The doses for melphalan per dose of P. kesiya extract could be reduced in the U937 cells (10.55:300 = 0.04:1 and 6.6:300 = 0.02:1) more than in the HepG2 cells (33.7:55.0 = 0.61:1 and 23.7:55.0 = 0.43:1). The synergistic effect appears to be greater for the melphalan because a greater potential dose reduction could be achieved; thus, melphalan concentrations could be significantly reduced. Apoptotic induction effect of single treatments and of P. kesiya extract and melphalan combination treatment Apoptosis induction by P. kesiya extract and melphalan when used singly or in combination was evaluated in HepG2 and U937 cell lines using a method based on the DAPI staining assay (Table 4) and DNA fragmentation assays (Fig. 2). In the control group, the nuclei were roundish and homogeneously stained by DAPI; in contrast, the apoptotic nuclei in the treated U937 and HepG2 cells were irregularly shaped, small, detached, and had apoptotic bodies. At a 1 × IC50 concentration, P. kesiya extract induced 42.5 ± 4.8 and 39.7 ± 2.6 % apoptosis in the U937 and HepG2 cells, respectively; melphalan induced 43.1 ± 16.3 and 53.0 ± 6.1 % apoptosis, respectively. The combination therapy of P. 
kesiya extract and melphalan was significantly more effective in inducing apoptosis against U937 and HepG2 cells. Table 4 Effects of P. kesiya and melphalan on apoptosis induction in U937 and HepG2 cells DNA fragments after combination treatment of P. kesiya (Pk) extract with melphalan in HepG2 (a) and U937 (b) cell lines DNA fragmentation was determined according to whether the synergistic apoptosis induction in both cancer cells occurred toward the late stage of apoptosis, as previously observed for single therapy [15]. The combined treatment of P. kesiya with melphalan resulted in DNA laddering in both the HepG2 and U937 cells compared with the control cells (Fig. 2). GC–MS analysis of P. kesiya extract GC–MS analysis of P. kesiya extract was performed to obtain its characteristic fingerprint (Table 5). The GC peaks obtained were used for further chemical constituent identification and revealed several compounds:podocarpa-8,11,13-trien-15-oic acid, neopine, rosin acid, pimaric acid, oleic acid, pyrocatecol, and vanillin (Table 5). Table 5 Identified compounds and their relative distribution in P. kesiya To our knowledge, this is the first report of a synergistic effect of P. kesiya extract on melphalan anticancer activity via the apoptosis induction mechanism in vitro. The U937 cells were only sensitive to melphalan, based on the lower IC50 value; however, the HepG2 cells were sensitive to both P. kesiya extract and melphalan. A wide chemopreventive index was observed in the combination treatment, probably because the cytotoxic effect was lower in the normal Vero cells. P. kesiya extract's potent antiproliferation effect in the HepG2 cell line and high selective antiproliferation effect in both cancer cell lines requires further investigation. Drug and herb combinations could significantly decrease the toxicity and chemoresistance of drugs and increase their efficacy [24, 25] i.e., increasing the efficacy of the therapeutic effect; decreasing the dosage to avoid toxicity; reducing the development of drug resistance; and providing selective synergism against a target (or efficacy synergism) vs. host (or toxicity antagonism). The ability of herbal extracts to enhance anticancer activity might also arise from the potentiation of pharmacokinetics, wherein one ingredient enhanced the therapeutic effect of another (active ingredient or drug) by modulating its pharmacokinetic properties (i.e., absorption, distribution, metabolism, and/or excretion) [24, 25]. The administration of multiple therapeutic agents is often associated with additional toxicities, which can be life threatening; however, a combination treatment with a non-toxic herbal extract like P. kesiya might provide superior benefits. Previous work indicated that, in addition to the compounds identified here in P. kesiya extract (Table 5), gallic acid, chlorogenic acid, caffeic acid, vanillin, and coumaric acid in the 50 % ethanol–water extract of P. kesiya exhibited an anticancer effect [14]. The main compounds found in various parts of Pinaceae were terpenes, such as α- and β-pinene [26]. Rosin is composed of a complex mixture of different compounds, including resin acids such as abietic acid, plicatic acid, and pimaric acid. Resin acids might be derived from terpenes through partial oxidation and undergo isomerization in the presence of strong acids or with heat [27]. Hence, terpenes such as α- and β-pinene might undergo thermal conversion into resin acids under the GC–MS condition studied. 
One previous study reported that α-pinene expressed both apoptosis induction and antimetastatic activity against melanoma cells [26]. A previous work has also shown that citronellol—found in the family Pinaceae—inhibited an efflux P-gp protein at an IC50 value of 504 µM [28]. Combining the P. kesiya extract with melphalan reduced toxicity while retaining the therapeutic efficacy (reduced approximately 4–9 fold) of melphalan. Dulbecco's modified Eagle's medium DMSO: dimethylsulfoxide HepG2: human hepatocellular carcinoma cell line neutral red phosphate buffer solution NCI: National Cancer Institute (USA) U937: human leukemic cell line Vero: normal African green monkey kidney epithelial cell line Au JLS, Panchal N, Li D, Gan Y. Apoptosis: a new pharmacodynamic endpoint. Pharm Res. 1997;14:1659–71. Lee C, Raffaghello L, Longo VD. Starvation, detoxification, and multidrug resistance in cancer therapy. Drug Resist Updates. 2012;15:114–22. Pinmai K, Chunlaratthanabhorn S, Ngamkitidechakul C, Soonthornchareon N, Hahnvajanawong C. Synergistic growth inhibitory effects of Phyllanthus emblica and Terminalia bellerica extracts with conventional cytotoxic agents: doxorubicin and cisplatin against human hepatocellular carcinoma and lung cancer cells. World J Gastroentero. 2008;14:1491–7. Tang SN, Singh C, Nall D, Meeker D, Shankar S, Srivastava RK. The dietary bioflavonoid quercetin synergizes with epigallocathechin gallate (EGCG) to inhibit prostate cancer stem cell characteristics, invasion, migration and epithelial-mesenchymal transition. J Mol Signal. 2010;5:14. Wagner H, Ulrich-Merzenich G. Synergy research: approaching a new generation of phytopharmaceuticals. Phytomedicine. 2009;16:97–110. Machana S, Weerapreeyakul N, Barusrux S, Thumanu K, Tanthanuch W. Synergistic anticancer effect of the extracts from Polyalthia evecta caused apoptosis in human hepatoma (HepG2) cells. Asian Pac J Trop Biomed. 2012;2:589–96. Ya C, Ming-Hua Q, Kun G, Lin Z, ZhongRong L. A new abietane diterpenoid from the rosin of Pinus kesiya var. langbianensis (Pinaceae). Acta Botanica Yunnanica (ABY). 2006;3:323–5. Jingkai D, Lisheng D, Yuanfen Y, Yu W, Handong S. Chemical constituents of the turpentine of Pinus kesiya var. langbianensis. Yunnan Zhiwu Yanjiu. 1983;5:224–46. Koukos PK, Papadopoulou KI, Patiaka DT, Papagiannopoulos AD. Chemical composition of essential oils from needles and twigs of balkan pine (Pinus peuce Grisebach) grown in Northern Greece. J Agric Food Chem. 2000;48:1266–8. Mateus EP, Gomesdasilva MD, Ribeiro AB, Marriott PJ. Qualitative mass spectrometric analysis of the volatile fraction of creosote-treated railway wood sleepers by using comprehensive two-dimensional gas chromatography. J Chromatogr. 2008;1178:215–22. Mateus E, Baratab RC, Zrostlíková J, Gomesdasilva MDR, Paiva MR. Characterization of the volatile fraction emitted by Pinus spp. by one- and two-dimensional chromatographic techniques with mass spectrometric detection. J Chromatogr. 2010;1217:1845–55. Hiltunen R, Löyttyniemi K. Monoterpene composition of needle oil in Pinus kesiya Royle ex Gordon. Seloste: Pinus kesiya-männyn neulasöljyn monoterpeenikoostumus. Commun Inst For Fenn. 1978;94:1–9. Hynniewta SR, Kumar Y. Herbal remedies among the Khasi traditional healers and village folk Mekhalaya. Indian J Traditional Knowl. 2008;7:581–6. Machana S, Weerapreeyakul N, Barusrux S, Nonpanya A, Sripanidkulchai B, Thitimetharoch T. Cytotoxic and apoptotic effects of six herbal plants against the human hepatocarcinoma (HepG2) cell line. Chin Med. 
2011;6:39. Machana S, Weerapreeyakul N, Barusrux S, Thumanu K, Tanthanuch W. FTIR microspectroscopy discriminates anticancer action on human leukemic cells by extracts of Pinus kesiya; Cratoxylum formosum spp. pruniflorum and melphalan. Talanta. 2012;93:371–82. Cloutier JF, Castonguay A, O'Connor TR, Drouin R. Alkylating agent and chromatin structure determine sequence context-dependent formation of alkylpurines. J Mol Biol. 2001;306:169–88. Lialiaris T, Lyratzopoulos E, Papachristou F, Simopoulou M, Mourelatos C, Nikolettos N. Supplementation of melatonin protects human lymphocytes in vitro from the genotoxic activity of melphalan. Mutagenesis. 2008;23:347–54. Boegsted M, Holst JM, Fogd K, Falgreen S, Sørensen S, Schmitz A, Bukh A, Johnsen HE, Nyegaard K, Dybkaer K. Generation of a predictive melphalan resistance index by drug screen of B-cell cancer cell lines. PLoS ONE. 2011;6:e19322. Phengklai C. Pinaceae/Cephalotaxaceae/Cupressaceae. Flor Thail. 1972;2:193–6. Phengklai C. Studies in flora of Thailand : Pinaceae. Thai For Bull. (Bot.). 1973;7:1–4. Weerapreeyakul N, Nonpunya A, Barusrux S, Thitimetharoch T, Sripanidkulchai B. Evaluation of the anticancer potential of six herbs against a hepatoma cell line. Chin Med. 2012;7:15. Chou TC. Theoretical basis, experimental design, and computerized simulation of synergism and antagonism in drug combination studies. Pharmacol Rev. 2006;58:621–81. Eid SY, El-Readi MZ, Wink M. Synergism of three-drug combinations of sanguinarine and other plant secondary metabolites with digitonin and doxorubicin in multi-drug resistant cancer cells. Phytomedicine. 2012;19:1288–97. Greco WR, Bravo G, Parsons JC. The search for synergy: a critical review from a response surface perspective. Pharmacol Rev. 1995;47:331–85. Feng-Zhu JJ, Ma X, Cao ZW, Li YX, Chen YZ. Mechanisms of drug combinations from interaction and network perspectives. Nat Rev Drug Discov. 2009;8:111–28. Matsuo AL, Figueiredo CR, Arruda DC, Pereira FV, Scutti JAB, Massaoka MH, Travassos LR, Sartorelli P, Lago JHG. α-Pinene isolated from Schinus terebinthifolius Raddi (Anacardiaceae) induces apoptosis and confers antimetastatic protection in a melanoma model. Biochem Biophys Res Commun. 2011;411:449–54. Parimal K, Khale A, Pramod K. Resins from herbal origin and a focus on their applications. IJPSR. 2011;2:1077–85. Yoshida N, Takagi A, Kitazawa H, Kawakami J, Adachi I. Inhibition of P-glycoprotein-mediated transport by extracts of and monoterpenoids contained in Zanthoxyli Fructus. Toxicol Appl Pharm. 2005;209:167–73. NW designed the study. SM performed the experiments. NW and SM collected and analyzed data. SM, NW and SB analyzed and interpreted data. SM and NW wrote the manuscript. NW and SB revised manuscript. All the authors read and approved the final manuscript. We thank (a) The Office of the Higher Education Commission, Thailand, under the Strategic Scholarships for Thai Doctoral Degree Programs (CHE-PhD-THA-RG 3/2549), (b) Khon Kaen University for financial support, (c)The authors thank the Plant Genetics Conservation Project under the Royal Initiation of her Royal Highness Princess Maha Chakri Sirindhorn, The Bureau of The Royal Household for permission to conduct the research and Electricity Generating Authority of Thailand (EGAT) for field support and (d) Mr. Bryan Roderick Hamman and Mrs. Janice Loewen-Hamman for assistance with the English-language presentation. 
Faculty of Pharmaceutical Sciences, Khon Kaen University, Khon Kaen, 40002, Thailand Natthida Weerapreeyakul Graduate School, Khon Kaen University, Khon Kaen, 40002, Thailand Sasipawan Machana Faculty of Associated Medical Sciences, Khon Kaen University, Khon Kaen, 40002, Thailand Sahapat Barusrux Correspondence to Natthida Weerapreeyakul. Weerapreeyakul, N., Machana, S. & Barusrux, S. Synergistic effects of melphalan and Pinus kesiya Royle ex Gordon (Simaosong) extracts on apoptosis induction in human cancer cells. Chin Med 11, 29 (2016) doi:10.1186/s13020-016-0103-z HepG2 Cell U937 Cell Combination Index HepG2 Cell Line
The angles of elevation of the top of a lighthouse 60 m high, from two points on the ground on its opposite sides, are 45° and 60°. What is the distance between these two points?
A. 45 m B. 30 m C. 103.8 m D. 94.6 m
Answer: Option D
Solution (by Apex Team): Let BD be the lighthouse and A and C be the two points on the ground. Then BD, the height of the lighthouse, = 60 m.
$\begin{aligned}\angle BAD&=45^{\circ},\quad \angle BCD=60^{\circ}\\ \tan 45^{\circ}&=\frac{BD}{BA}\Rightarrow 1=\frac{60}{BA}\Rightarrow BA=60\mathrm{~m}\quad\ldots(\text{i})\\ \tan 60^{\circ}&=\frac{BD}{BC}\Rightarrow \sqrt{3}=\frac{60}{BC}\Rightarrow BC=\frac{60}{\sqrt{3}}=\frac{60\sqrt{3}}{3}=20\sqrt{3}\approx 20\times 1.73=34.6\mathrm{~m}\quad\ldots(\text{ii})\end{aligned}$
Distance between the two points A and C = AC = BA + BC = 60 + 34.6 = 94.6 m [substituting the values of BA and BC from (i) and (ii)].
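A quick numerical check of the selected option, written as a small Python snippet (standard library only; the variable names follow the labels used in the solution above):

```python
import math

bd = 60.0                              # height of the lighthouse BD, in metres
ba = bd / math.tan(math.radians(45))   # distance to point A, where the elevation angle is 45 degrees
bc = bd / math.tan(math.radians(60))   # distance to point C, where the elevation angle is 60 degrees
print(round(ba + bc, 1))               # 94.6, matching option D
```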
High-pressure densification and hydrophobic coating for enhancing the mechanical properties and dimensional stability of soft poplar wood boards Yong Yu1,2, Aqiang Li1,2, Kaiya Yan1,2, Hosahalli S. Ramaswamy3, Songming Zhu1,2 & Huanhuan Li4 Journal of Wood Science volume 66, Article number: 45 (2020) Cite this article Effects of high-pressure (HP) treatment on densification of poplar sapwood boards and subsequent coatings were evaluated. Tung oil (TO) and epoxy resin (ER)-coated treatments were used to improve the dimensional stability of HP-densified wood. The density of the wood after HP densification increased from 450 ± 50 kg/m3 for the control to 960 ± 20 kg/m3 at 125 MPa. This process also resulted in the average thickness of HP-densified boards to reduce significantly from 29.7 ± 0.11 mm for the control to 18.8 ± 0.53 mm after HP densification at 25 MPa and 14.3 ± 0.10 mm after 125 MPa treatment for 30 s. The mechanical strength measured as the hardness of densified wood significantly increased from 35% at 25 MPa to 96% at 125 MPa treatment, compared to untreated wood. As expected both TO and ER-coated treatments significantly reduced set-recovery of densified wood when stored at four relative humidity environments. ER showed better anti-swelling performance than TO, and would be a better choice. As a porous biomass material, low-density plantation-tree wood strength and hardness could be increased by (1) impregnating the void volume of wood with bulking chemicals (e.g., monomers, polymers, resins and waxes); (2) compressing in the transverse direction to reduce the void volume [1]. These two kinds of method could significantly improve the wood density, while the wood density correlates well with its mechanical properties. Based on these two principles, many specific methods have been proposed and improved [2,3,4,5,6,7]. The development of densification technology has opened the door for broader use of low-density plantation-tree species (for example, Scots pine wood, poplar wood, Norway spruce, European beech, Paulownia wood, etc.), whose normal mechanical properties are often inadequate for use in structural applications [5]. It is well recognized that the mechanical strength of wood can be increased by densification which also increases its density. There are many different densification methods available in the literature [8]. These include thermal compression methods [6, 9,10,11] and semi-isostatic densification method [12,13,14]. Thermal compression methods have the same two main phases—wood softening and wood compression. Furthermore, the methods usually include a post-treatment phase, which decreases irreversible thickness swelling when the densified wood is exposed to humid or wet conditions. One wood densification method developed is known as "thermo-hydro-mechanical processing" [15,16,17,18]. This method involves the use of steam and thermal compression, resulting in mechanical compression of wood with more plasticizing properties, which makes densified wood more dimensionally stable. Another wood densification method developed by Kamke and Sizemore [19] is known as "viscoelastic thermal compression". The process has a wood softening phase above the glass transition temperature (Tg), under saturated steam, followed by a compression phase, heat-treatment, and cooling [20]. High densities are achieved through a dynamic process which combines temperature ranging from 150 to 190 °C, saturated steam and mechanical compression perpendicular to the grain. 
This process needs energy and time. Gao's method [6] was developed to compress Chinese white poplar wood by adjusting the moisture and temperature distributions within the lumber which is also based on thermal compression. The wood is soaked to adjust wood moisture content instead of steam preheating. The temperatures of the upper and lower plates of the hot press are controlled at 180 °C, and a pressure of 6–10 MPa is applied. A new compression method in which no heating was needed is developed by Blomberg and Persson [14]. This method is known as semi-isostatic densification or the Quintus press. This method can yield pressures of up 140 MPa, mediated through a flexible oil-filled rubber diaphragm that is pressed against a rigid press table. This processing can give densified wood irregular shape but homogenous density. The Quintus press makes use of a high-pressure equipment and can give densified wood with only one side uniform and regular shape. It is semi-isostatic compression for wood densification, and as used against a flat surface produces smooth surface on one side. High-pressure (HP) treatment has been proved to be a useful densification method for fast-growing wood species by quickly reducing the void volume of wood through isostatic hydrostatic pressure, in a high-pressure equipment [20, 21]. HP densification works as shown in Fig. 1. The wood product, already sealed in its final package, are introduced into a high-pressure cylindrical vessel and subjected to a high level of isostatic pressure (up to 600 MPa) transmitted by water. In this technology, applied pressure is uniformly distributed throughout the entire sample, whether in direct contact or in flexible container [22]. This method is a new alternative to the classic thermomechanical ones. With no thermal pre-treatment needed, this process allows for very short treatment times and make it very efficient with possibility of applications on an industrial scale. Plastic compressive strains occur predominately in HP-densified wood and the delayed elastic strain is very small. This advantage makes HP-densified wood attractive for long-term indoor use when the environment relative humidity (RH) is low and the climate is relatively stable. Moreover, results showed that HP densification could make the densities of Chinese fir wood/poplar/paulownia wood to increase 2–3 times within 5 min [20, 21, 23]. This treatment is shown to be universally effective for various low-density species of wood. Studies on HP treatment of wood for densification applications have just begun, and basic research are still scarce. Schematic illustration of the high-pressure treatment process Poplar (Populus euramericana) is a fast-growing tree with short cultivation time and is mainly used for making paper, veneers, and sawn wood [24]. Poplar is a genus of 25–35 species of deciduous flowering plants in the family Salicaceae, native to most of the Northern Hemisphere. In addition to the use as raw materials, poplar wood can be subjected to special treatments for added values, such as enhancing mechanical strength and other structural properties and/or proving new functionalities relative to the natural wood, and thereby enabling wider applications. Poplar wood is targeted in this study because of its broad range of use in China. Yu et al. [21] demonstrated the densification effects of poplar wood treated by HP densification in which technological parameters of 50–200 MPa pressure levels for 30 s holding time were applied. 
Their results show that densities and mechanical properties of treated wood could be significantly improved with HP densification up to 150 MPa, and they tend to stabilize or show a decline beyond 150 MPa. This was a first study on effects of poplar wood densification by HP densification and the operational range is focused on the lower side in this study to explore the possibility for more meaningful industrial applications. A major disadvantage of HP densification is the high recovery of the original dimensions when densified wood is exposed to high air relative humidity. A significant positive correlation between set-recovery and moisture/water absorption has been found [20]. When densified wood is subsequently exposed to a humid environment, both reversible and irreversible swellings can occur through moisture adsorption/desorption. Reversible swelling is due to the hygroscopic nature of wood, but irreversible swelling is a result of the densified wood partly or completely returning towards its original dimensions. This is commonly referred to as set-recovery [25]. This phenomenon was found in semi-isostatic densification, which is also based on HP principle [8]. A combination experiment of semi-isostatic densification and heat-treated method was conducted by Boonstra and Blomberg [1]. Their results showed no or only a limited effect on the shape-recovery when densified radiata pine was exposed to moisture. It is well known that water repellents can reduce the rate of water uptake into porous materials. The rate of water uptake can be considerably reduced either by providing a water barrier or by rendering the wood hydrophobic [26]. These hydrophobic treatments only slow down water penetration, and they do not fully prevent it. However, most wood in Class 2 (above ground covered) and Class 3 (above ground uncovered) (European Committee for Standardization, 2006) applications are exposed only for limited periods to extreme weathers, and water repellents perform well in such conditions. Tung oil (TO) is a drying oil obtained by pressing seeds from the nut of the Tung tree (Vernicia fordii; V. montana). TO dries quickly when exposed to air, forming a transparent film [27]. Its excellent water repellence has been proven in several laboratory and field trials [26, 27]. Epoxy resin (ER) has been widely used to fabricate superhydrophobic coating to reduce the abrasion and corrosion in many fields due to its high adhesion and chemical corrosion resistance [28]. Research indicates that ER can be used as a high-quality water repellent for wood members [29]. In this study, these two hydrophobic-coated treatments were used to improve the dimensional stability of HP-densified wood in high air humidity environments. Simple and effective water resistance methods are very meaningful for HP-densified wood to improve their dimensional stability. Set-recovery is a significant issue for the usability and stability of densified wood, and therefore, it is important to explore effective methods to prevent the set-recovery. Our previous research showed set-recovery of HP-densified wood is significantly related to water/moisture absorption and therefore water repellent offered an alternative treatment. These tests therefore, even if effective, do not reflect of the normal end-use conditions [7]. In the current study, densified wood with uncoated/coated treatments were exposed to different RH conditions (five RH environments at 25 °C) and the dimensional stability was assessed by measuring the set-recovery. 
Effects on preventing the set-recovery of densified wood of these two water repellents could be confirmed. The purpose of this study was to investigate the HP densification process for poplar (Populus euramericana) sapwood between 25 and 125 MPa pressure levels followed by surface coating of TO or ER. Treatment efficiency was monitored by evaluating compression ratio and hardness. Their dimensional integrity was evaluated under five RH storage conditions at 25 °C. Materials preparation Poplar (Populus euramericana) sapwood board specimens machine cut to size 1000 mm (longitudinal) × 300 mm (tangential) × 30 mm (thickness) with no defects were obtained from XinZheng, Henan province, China. The material was pre-conditioned (at 65% RH, 20 °C) for at least 2 months prior to use, and the equilibrated moisture content was ~ 12% (dry basis). The average density (at 65% RH, 20 °C) of the pre-cut boards was 460 ± 50 kg/m3. From these pre-cut boards, smaller specimens with dimensions of 150 mm (longitudinal) × 70 mm (tangential) × 30 mm (thickness) were cut for HP densification. In total, 75 specimens were treated and 15 specimens were used without treatment as control. All specimen boards were conditioned at 65% RH and 20 °C prior to use. High pressure treatment Schematic of the HP densification process (Fig. 1) as well as positioning of the test specimen was reported by Li et al. [20]. A test specimen board [150 mm (longitudinal) × 70 mm (tangential) × 30 mm (thickness)] was sandwiched between two stainless steel plates (the area and thickness of each plate were 70 mm × 150 mm and 5 mm, respectively, and matched the two parallel surface areas of the board) first, is shown in Fig. 2. Then this specimen board along with the two steel plates was wrapped in a flat polyethylene pouch (16 silk, Shandong Xinhua Packing Co., Ltd. Shandong, China) and vacuum-sealed with a vacuum package machine before HP densification. Width of steel plates was less than that of wood specimen, since the width of wood specimens would shrink after high pressure treatment. Schematic diagram of fixing method of poplar board. Poplar board was fixed between two steel plates, and then wrapped in a flat polyethylene pouch and vacuum-packed The HP densification process was carried out in a single vessel (5 L) HP apparatus (UHPF-750, Kefa, Baotou, China), which also has been described in the previous study [20]. Briefly, the system consisted of the HP unit and equipped with K-type thermocouples (Omega Engineering, Stamford, CT, USA) and a data logger (34970A, Agilent Technologies GMBH, Germany) for temperature measurement and a thermostat jacket connected to a water bath (SC-25, Safe, China) for maintaining the processing temperature. Water was used as pressure transmitting medium, and the pressure vessel was maintained at 25 °C before pressurization. Since test samples were packaged in a flat polyethylene pouch, they did not directly contact the process water. The pressure come up rate was 100–150 MPa per minute, so the come up times for various treatments were less than a minute. The pressure release time was kept less than 5 s. Normally sample temperature is expected to increase by 3 °C every 100 MPa pressure rise (assuming no heat loss to pressure vessel) due to adiabatic compression. This was not a limitation because the vessel was jacketed and the maximum pressure was only 125 MPa. 
Pre-packaged board specimen was placed in the pressure chamber and the system was brought to the required pressure level, held for exactly 30 s and then the pressure immediately released. So the total treatment time was less than 2 min. Each pressure treatment was given with one piece of the test board at a time, and then repeated again with new specimens to obtain 15 replicates. Five pressure levels (25 MPa, 50 MPa, 75 MPa, 100 MPa and 125 MPa) were used for treatment. Wood specimens without HP densification were used as control samples. Determination method for compression ratio After HP densification, densified woods were conditioned at RH 65% and 20 °C until equilibrium moisture was reached. The thickness of densified poplar boards was measured, both before and after densification, for calculating the compression ratio. The compression ratio (CR) was calculated according to Eq. 1, where To is the original thickness and Tc the final compressed thickness: $$ {\text{Compression}}\;{\text{ratio}}\;\left( \% \right) = \frac{{T_{\text{O}} - T_{\text{C}} }}{{T_{\text{O}} }} \times 100. $$ Measurements of hardness Each test board was polished by sandpaper first before the following test. Then a section of 50 mm (longitudinal) × 50 mm (tangential) × thickness (radial) (different thickness after various pressure levels densification) was cut from the center of each specimen for hardness measurement (Fig. 3). Totally 15 specimens for each group was prepared and measured. Hardness was measured with a static hardness standard ISO 3350:1975 with minor modification. In this method, a semicircular steel ball of 5.64 mm radius is pressed into the wood surface with an average speed of 6 mm/min to create an indentation of 5.64 mm depth. The hardness is calculated using the following equation (Eq. 2): Illustration of the sampling presenting one specimen: DM for hardness measurement, and SR for equilibrium moisture experiment and set-recovery experiment $$ H_{ 1 2} = KP, $$ where H12 is the hardness of the wood specimen at 12% moisture content; P is the applied force (N); and K is the coefficient of radius for the steel ball indenter at a depth of 5.64 mm, which is 1. Tung oil and epoxy resin-coated treatments Tung oil and epoxy resin-coated treatments were made with samples cut to size 20 mm (longitudinal) × 20 mm (tangential) × thickness (radial) (Fig. 3). Each group (pressure treatment and control) had a total of 40 treatment samples which were randomly divided into two groups (TO coated and ER coated). For TO (100% concentration, commercially available from Huangshi Gongjiang Company, Shanghai, China) coated treatment, test samples were immersed into TO for 5 h for better penetration and adherence (tested initially for 5 days to make sure change in mass was negligible) and then dried in room temperature. For ER-coated treatment, epoxy and curing agent (Yituo Company, Kunshan, China) were mixture at volume ratio of 2.3:1. Two thin layers of ER coats were applied on to each test sample. Each group contained 20 treatment samples and these samples were randomly divided into five groups for subsequent test. Set-recovery measurement and equilibrium moisture experiment Uncoated, TO-coated and ER-coated samples were used to measure set-recovery in five RH environments at a constant temperature of 25 °C. Five saturated inorganic solutions [MgCl2, Mg(NO3)2, KI, KCl and KNO3 solutions] were used to maintain five RH (33%, 52%, 69%, 86% and 93%) conditions [30]. 
These five conditions were created by placing saturated salt solutions onto the bottom of desiccators, respectively. Each pressure treatment group involved five of these desiccators. These sorption tests were carried out in temperature-controlled incubators and maintained at 25 ± 1 °C, allowing a precise RH control. The specimens were weighed periodically until the change in mass was negligible (at least < 0.01% per day). It was assumed that the equilibrium moisture content (EMC) was then reached. After establishing EMC, the thickness of specimen samples was measured with a micrometer. For uncoated specimens, after above measurements, all test specimens were oven-dried (103 °C) to evaluate the EMC. The EMC was calculated according to Eq. 3: $$ {\text{EMC}}_{\text{uncoated}} \left( \% \right) = \frac{{m_{\text{c}} - m_{\text{dry}} }}{{m_{\text{dry}} }} \times 100, $$ where mc and mdry were the weight of the specimen at each given RH environment and after oven-drying, respectively. For coated specimens, the EMC was calculated according to Eq. 4: $$ {\text{EMC}}_{\text{coated}} \;\left( \% \right) = \frac{{m_{\text{c}} + m_{\text{o}} - m_{\text{dry}} }}{{m_{\text{dry}} }} \times 100, $$ where mc was the mass change of the coated specimens before and after EMC established at each given RH environment; mo was the weight of the test specimens before coated treatments; and mdry was the calculated average dry mass of each specimen according to corresponding uncoated treatment specimens. For set-recovery of test samples stored under five RH conditions, the thickness of each specimen used was measured. Set-recovery was calculated according to Eq. 5: $$ {\text{Set-recovery }}\left( \% \right) = \frac{{T_{\text{s}} - T_{\text{d}} }}{{T_{\text{o}} - T_{\text{d}} }} \times 100, $$ where Ts is the recovery thickness under various RH conditions, Td the actual thickness after densification and To is the original thickness before densification. All measured values were represented as mean ± standard deviation calculated using SPSS version 20.0 (IBM, America) and an analysis of variance with least significant difference set at p < 0.05 used to compare the means. Densification effects on thickness, density and mechanical strength Effects of HP densification (30 s) at different pressures levels on poplar boards are shown in Fig. 4. HP densification significantly reduced the thickness of test specimen boards, and the reduction gradually progressed with increasing pressure level. Figure 4 also confirms that the HP densification technique with the poplar board sandwiched between two steel plates resulted in more uniform radial densification of the boards which could not be obtained in previous studies [20]. The shaping tool used in this research is very simple and works reasonably well. However, at present, this simple shaping tool could not create the degree of uniform thickness as traditional hot-plate compression methods do. Even so, HP densification offers some clear advantages for wood densification—requires no heat and completed in a short treatment time of 30 s—therefore saves time and energy. The steel plates were slightly bent (~ 4 mm) following the pressure treatments. Hence, better and more effective shaping tools and packaging methods may be needed and to be further explored. Photograph image of cross-section of poplar wood after various pressure treatments for 30 s Quantitative data on board densification effects are detailed in Table 1. 
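Before turning to the Table 1 data, note that every derived quantity reported in this and the following sections (CR from Eq. 1, EMC from Eqs. 3 and 4, set-recovery from Eq. 5) is a simple ratio of measured thicknesses and masses. The sketch below is a minimal illustration of that bookkeeping, not the authors' analysis code, and the example numbers are hypothetical:

```python
def compression_ratio(t_original, t_compressed):
    """Eq. 1: compression ratio (%) from board thickness before and after densification."""
    return (t_original - t_compressed) / t_original * 100


def emc_uncoated(m_conditioned, m_dry):
    """Eq. 3: equilibrium moisture content (%) of an uncoated specimen."""
    return (m_conditioned - m_dry) / m_dry * 100


def emc_coated(delta_m, m_before_coating, m_dry_estimated):
    """Eq. 4: EMC (%) of a coated specimen, from its mass change at a given RH,
    its mass before coating, and the dry mass estimated from matched uncoated specimens."""
    return (delta_m + m_before_coating - m_dry_estimated) / m_dry_estimated * 100


def set_recovery(t_recovered, t_densified, t_original):
    """Eq. 5: set-recovery (%) of a densified specimen after storage at a given RH."""
    return (t_recovered - t_densified) / (t_original - t_densified) * 100


# Hypothetical example: a 30 mm board densified to 15 mm that swells back to 18 mm
print(compression_ratio(30.0, 15.0))   # 50.0 (%)
print(set_recovery(18.0, 15.0, 30.0))  # 20.0 (%)
```

In practice the same functions would simply be applied to each specimen's measured values and averaged per treatment group.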
After the densification treatment, the thickness of the poplar boards decreased with increasing pressure, by 35.4% at 25 MPa and up to 50.9% at 125 MPa. The CR calculated from the reduced thickness is also shown in Table 1; under the various HP densification conditions it ranged from 36.8 ± 1.05% to 51.9 ± 0.92%. With HP densification at 100 MPa and above, the further increase in CR was small.

Table 1 Average compression ratios and density of wood samples after various HP densification for 30 s

As expected, with a decrease of board thickness and increase in CR, the density of the treated board increased significantly (p < 0.05) by 55.6%, 86.7%, 91.1%, 102% and 113% at 25 MPa, 50 MPa, 75 MPa, 100 MPa and 125 MPa, respectively. In our preliminary experiments, density was observed to decrease when the pressure level exceeded 150 MPa [21]. These results indicate that HP densification improves compression effects up to a certain pressure level by gradually compressing the void spaces in the boards, after which the additional gain in densification is small. Further increase in pressure level could decrease the degree of densification and eventually may result in board rupture. This result also supports our previous research [20], which illustrated the same trend with Chinese fir, although the densification magnitudes were different. The applicable optimal pressure levels could therefore depend on the tree species.

In traditional hot-plate methods, CR is an important processing parameter [10, 30]. CR primarily influences density; both average density and peak density are improved when the CR is increased. CR, however, also has an impact on the width of the density peak and the distance of the peak from the surface [30]. In Laine's [11] study, when CR was set to 40, 50 and 60%, the average density of compressed Scots pine wood increased to 760 kg/m3, 900 kg/m3 and 920 kg/m3, respectively. In Gao's [6] report, when the CR of plantation poplar wood was 9, 20, 33, 40 and 47%, the average density of the high-densification layer reached 730, 790, 820, 870 and 890 kg/m3, respectively. In the current study, HP densification yielded a much higher density at a relatively lower CR. In addition, in Blomberg's [13] report, when Scots pine was semi-isostatically compressed at 140 MPa in a Quintus press, the density reached about 1000 kg/m3, which is similar to the present findings. This may indicate that different pressure treatments could have similar densification effects, but this needs further validation.

As an important processing parameter, the CR of traditionally densified wood must be set first. In HP densification, by contrast, the final CR of the board specimens can only be adjusted through the pressure level used, which is very different from the convention. Figure 5 shows the relationship between CR and pressure level after treatment. CR increased linearly with pressure between 25 and 125 MPa, with an R2 value of about 0.98, in accordance with the experimental data. This relationship could be used to predict the required density and thickness from pressure levels (a short fitting sketch is given below).

Relationship between compression ratios and pressure levels after treatment

Hardness of treated/untreated poplar wood

As expected, the hardness value significantly increased following the densification treatment (Fig. 6). The average hardness for untreated samples was 1140 N, while those for the densified samples at 25 and 125 MPa were 1540 and 2240 N, respectively.
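Returning briefly to the CR-pressure relationship in Fig. 5 (the fitting sketch promised above): the prediction it enables amounts to an ordinary least-squares fit. The code below is illustrative only; apart from the 25 and 125 MPa mean CR values quoted in the text (36.8 % and 51.9 %), the intermediate CR entries are placeholders, not the actual Table 1 data.

```python
import numpy as np

pressure = np.array([25.0, 50.0, 75.0, 100.0, 125.0])    # MPa
# 36.8 and 51.9 are the reported end points; the middle three are placeholders.
cr = np.array([36.8, 42.0, 46.5, 49.8, 51.9])             # compression ratio, %

slope, intercept = np.polyfit(pressure, cr, 1)             # linear fit CR = a*P + b
fitted = np.polyval([slope, intercept], pressure)
r_squared = 1 - np.sum((cr - fitted) ** 2) / np.sum((cr - cr.mean()) ** 2)

print(f"CR ~ {slope:.3f} * P + {intercept:.2f}   (R^2 = {r_squared:.3f})")
print(f"predicted CR at 110 MPa: {np.polyval([slope, intercept], 110.0):.1f} %")
```

With the real Table 1 means substituted for the placeholders, the same two-line fit reproduces the relationship plotted in Fig. 5.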
This result suggests that hardness could significantly enhance even by the 25 MPa treatment for 30 s, and nearly doubled at 125 MPa. Average hardness of specimens which were control (0.1 MPa), 25 MPa treatment, 50 MPa treatment, 75 MPa treatment, 100 MPa treatment and 125 MPa treatment. (Error bars present standard deviations, and letters mean significantly different levels at p = 0.05) As we know, the hardness of wood is generally measured by the ability of a steel ball to penetrate the wood surface. Slightly different approaches to evaluating hardness are adopted in various standard test methods [31].So care should be taken when comparing hardness results on nonhomogeneous materials when different indentation techniques are used in the test. According to EN 1534 (2000), Brinell hardness was calculated by measuring 10-mm-diameter metal ball indentation diameter and the applied force, while in our study, hardness values were calculated by measuring 5.64 mm indentation depth and applied force (according to ISO 3350:1975). A similar procedure conducted by Boonstra and Blomberg [1], which used semi-isostatic densification of radiata pine wood at the pressure level of 140 MPa, confirmed that Brinell hardness of densified wood improved nearly three times than untreated wood. Our results were supported by these reports. Excluding differences in hardness measurement standards restrictions, hardness has been confirmed to significantly increase during wood densification in various studies [31, 32]. Set-recovery of densified wood with uncoated/coated treatments Set-recovery of wood samples under various conditions is presented in Fig. 7. Set-recovery of uncoated treatments during storage at 58% RH was lower than those at other RH storage environments. When RH was maintained at 68% or above, set-recovery increased significantly with increasing RH. Moreover, the set-recovery decreased with an increasing HP densification treatment pressure level. TO and ER-coated treatments could significantly decrease set-recovery of densified wood in high air humidity, and the effect of ER was better than TO. Average set-recovery (%) of uncoated treated, TO-coated treated and ER-coated treated wood samples under different RH environments at 25 °C (error bars present standard deviations, and letters mean significantly different levels at p = 0.05) An unusual phenomenon was observed in that the set-recovery of densified wood was higher for storage at 33% RH than at higher at 58% RH. This is obviously due to the prevalence of lower moisture environment resulting in a moisture loss from the sample (desorption) rather than the adsorption behavior at other storage conditions. The correlation coefficient (R2) between mass (moisture) change and set-recovery of densified wood with uncoated/coated treated was in the range 0.98 (25 MPa) and 0.92 (100 MPa) (data not shown). These results in general indicate that set-recovery of HP-densified wood were significantly correlated with moisture absorption. Preventing moisture absorption/desorption could significantly improve dimensional stability of HP-densified wood. In this study, hydrophobic coating treatments although cannot totally prevent moisture absorption, could be an alternative method, which could dramatically improve dimensional stability of HP-densified wood in high air humidity. A few theories could explain the phenomenon of set-recovery of densified wood. 
For example, during traditional hot-plate compression, the crystalline regions of the microfibrils are mainly deformed affecting its elasticity (but also plastically under very high forces). The elastic strain energy is stored in the cellulose macromolecules, and the release of this energy causes high set-recovery [7], while in HP densification method, plastic compressive strain occurred predominately, and delayed elastic strain of densified wood was very small. Moisture/water adsorption plays a dominant role in thickness swelling and set-recovery of HP-densified wood. Hydrophobic coating methods were preliminarily explored in this research, and the significant effects were indicated. More hydrophobic coating methods need be further explored. EMC of uncoated treated, TO-coated treated and ER-coated treated wood samples under different RH environments at 25 °C (error bars present standard deviations, and letters mean significantly different levels at p = 0.05) EMC of densified/control poplar wood with uncoated/coated treatments In order to characterize the effect of HP densification with and without follow-up coating on the hygroscopicity of treated wood, EMC data under various RH environments at 25 °C were measured. As shown in Fig. 8, HP densification of wood without the follow-up coating significantly increased its EMC at 68%, 86% and 95% RH environments as compared with control (EMC, 12–18%), with treated samples showing slightly more moisture pick up than the control. There was no correlation between pressure level and EMC. This may be due to the destruction/disruption of cellular structure during the densification HP densification, making the uptake/evaporate of moisture easier. However, HP-densified wood only had a decrease in EMC when stored at 33% RH (from 12 to 8%), and the difference between treated and untreated was statistically not significant. This obviously results from the moisture loss potential when stored at 33% RH (12% moisture in the samples when equilibrated at about 65% RH at 20 °C). TO and ER coating of HP-densified wood reduced the moisture pick up from samples when stored at different RH environments. With TO and ER coating, the related moisture had a significant decrease compared to uncoated samples at higher RH environments. In addition, there were lower differences among difference RH environments in the related moisture compared to uncoated samples. For example, with ER coating, the resulting ranges were 9.67–13.6% for control, 9.84–13.8% for 25 MPa, 9.92–13.7% for 50 MPa, 9.81–13.3% for 75 MPa, 9.76–13.5% for 100 MPa and 9.91–13.4% for 125 MPa HP-densified wood, respectively. Data therefore demonstrate lower differences in moisture change between test samples. HP-densified wood with ER-coated treatments illustrated best water resistance. Combined with set-recovery effects, an obvious advantage of coated treatments on improving dimensional stability of HP-densified wood was found. In the present study, the potential of high pressure treatment as an alternative rapid densification method of poplar wood and its combination with TO or ER-coated treatment to improve dimensional stability of densified wood in high air humidity environments were demonstrated. The HP densification treatment could increase the wood density from 450 ± 50 kg/m3 to 960 ± 20 kg/m3 for 30 s holding time, the density increasing pressure up to 125 MPa pressure level. The density parameter was well correlated with pressure level. 
Hardness of the densified wood was significantly increased by densification, again depending on pressure, with an increase of 35% at 25 MPa to 96% at 125 MPa compared with untreated wood. For the uncoated treatments, the set-recovery of densified wood changed minimally at 58% RH, but increased when stored at 33%, 68%, 86% and 95% RH. The TO- and ER-coated treatments showed a similar trend to the uncoated treatments, but significantly reduced set-recovery at 33%, 68%, 86% and 95% RH. The coating treatments showed good anti-swelling properties, and the ER coating performed better than TO. The results provide a reference for the development of HP densification combined with hydrophobic coating methods to increase the value of soft, low-density wood in the wood industry.

The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request.

ER: Epoxy resin
EMC: Equilibrium moisture content
CR:
RH: Relative humidity

Boonstra MJ, Blomberg J (2007) Semi-isostatic densification of heat-treated radiata pine. Wood Sci Technol 41(7):607. https://doi.org/10.1007/s00226-007-0140-y
Cai S, Jebrane M, Terziev N, Daniel G (2016) Mechanical properties and decay resistance of Scots pine (Pinus sylvestris L) sapwood modified by vinyl acetate-epoxidized linseed oil copolymer. Holzforschung 70(9):885–894. https://doi.org/10.1515/hf-2015-0248
Ang AF, Ashaari Z, Bakar ES, Ibrahim NA (2017) Possibility of enhancing the dimensional stability of jelutong (Dyera costulata) wood using glyoxalated alkali lignin-phenolic resin as bulking agent. Eur J Wood Wood Prod 76:269–282. https://doi.org/10.1007/s00107-016-1139-6
Seki M, Kiryu T, Miki T, Tanaka S, Shigematsu I, Kanayama K (2016) Extrusion of solid wood impregnated with phenol formaldehyde (PF) resin: effect of resin content and moisture content on extrudability and mechanical properties of extrudate. Bioresources 11(3):7697–7709. https://doi.org/10.15376/biores.11.3.7697-7709
Gabrielli CP, Kamke FA (2010) Phenol–formaldehyde impregnation of densified wood for improved dimensional stability. Wood Sci Technol 44(1):95–104. https://doi.org/10.1007/s00226-009-0253-6
Gao Z, Huang R, Lu J, Chen Z, Fei G, Zhan T (2016) Sandwich compression of wood: control of creating density gradient on lumber thickness and properties of compressed wood. Wood Sci Technol 50(4):833–844. https://doi.org/10.1007/s00226-016-0824-2
Laine K, Rautkari L, Ramsay J, Hill CAS, Hughes M (2013) Measuring the thickness swelling and set-recovery of densified and thermally modified Scots pine solid wood. J Mater Sci 48(24):8530–8538. https://doi.org/10.1007/s10853-013-7671-4
Blomberg J, Persson B, Bexell U (2006) Effects of semi-isostatic densification on anatomy and cell-shape recovery on soaking. Holzforschung 13(3):151–331. https://doi.org/10.1515/HF.2006.052
Navi P, Pittet V, Plummer CJG (2002) Transient moisture effects on wood creep. Wood Sci Technol 36(6):447–462. https://doi.org/10.1007/s00226-002-0157-1
Rautkari L, Laflin N, Hughes M (2011) Surface modification of Scots pine: the effect of process parameters on the through thickness density profile. J Mater Sci 46(14):4780–4786. https://doi.org/10.1007/s10853-011-5388-9
Laine K, Segerholm K, Wålinder M, Rautkari L, Hughes M (2016) Wood densification and thermal modification: hardness, set-recovery and micromorphology. Wood Sci Technol 50(5):1–12. https://doi.org/10.1007/s00226-016-0835-z
Blomberg J, Persson B (2005) An algorithm for comparing density in CT-images taken before and after compression of Pinus sylvestris.
Holz als Roh- und Werkstoff 63(1):23–29. https://doi.org/10.1007/s00107-004-0544-4 Blomberg J (2005) Elastic strain at semi-isostatic compression of Scots pine (Pinus sylvestris). J Wood Sci 51(4):401–404. https://doi.org/10.1007/s10086-004-0666-7 Blomberg J, Persson B (2004) Plastic deformation in small clear pieces of Scots pine (Pinus sylvestris) during densification with the CaLignum process. J Wood Sci 50(4):307–314. https://doi.org/10.1007/s10086-003-0566-2 Navi P, Girardet F (2000) Effects of thermo-hydro-mechanical treatment on the structure and properties of wood. Holzforschung 54(3):287–293. https://doi.org/10.1515/hf.2000.048 Navi P, Heger F (2004) Combined densification and thermo-hydro-mechanical processing of wood. MRS Bull 29(5):332–336. https://doi.org/10.1557/mrs2004.100 Welzbacher CR, Wehsener J, Rapp AO, Haller P (2008) Thermo-mechanical densification combined with thermal modification of Norway spruce (Picea abies Karst) in industrial scale—dimensional stability and durability aspects. Holz als Roh- und Werkstoff 66(1):39–49. https://doi.org/10.1007/s00107-007-0198-0 Diouf PN, Stevanovic T, Cloutier A, Fang CH, Blanchet P, Koubaa A, Mariotti N (2011) Effects of thermo-hygro-mechanical densification on the surface characteristics of trembling aspen and hybrid poplar wood veneers. Appl Surf Sci 257(8):3558–3564. https://doi.org/10.1016/j.apsusc.2010.11.074 Kutnar A, Sernek M, Kamke FA Viscoelastic thermal compression (VTC) of wood. In: New technologies & materials in industries based on the forestry sector international scientific conference. 2007 Li H, Zhang F, Ramaswamy HS, Zhu S, Yong Y (2016) High-pressure treatment of Chinese fir wood: effect on density, mechanical properties, humidity-related moisture migration, and dimensional stability. Bioresources 11(4):10497–10510. https://doi.org/10.15376/biores.11.4.10497-10510 Yu Y, Zhang F, Zhu S, Li H (2017) Effects of high-pressure treatment on poplar wood: density profile, mechanical properties, strength potential index, and microstructure. BioResources 12(3):6283–6297. https://doi.org/10.15376/biores.12.3.6283-6297 Balasubramaniam VM, Barbosa-Cánovas GV, Lelieveld HLM (2016) High pressure processing of food-principles, technology and application. Springer, New York. https://doi.org/10.1007/978-1-4939-3234-4 Li H, Jiang X, Ramaswamy HS, Zhu S, Yong Y (2018) High-pressure treatment effects on density profile, surface roughness, hardness, and abrasion resistance of paulownia wood boards. Trans ASABE 61:1181–1188. https://doi.org/10.13031/trans.12718 Chen H, Qian L, Zeng B, Miao X, Yu L, Pu J (2013) Impregnation of poplar wood (Populus euramericana) with methylolurea and sodium silicate sol and induction of in situ gel polymerization by heating. Holzforschung 68(1):45–52. https://doi.org/10.1515/hf-2013-0028 Kutnar A, Kamke FA (2012) Influence of temperature and steam environment on set recovery of compressive deformation of wood. Wood Sci Technol 46(5):953–964. https://doi.org/10.1007/s00226-011-0456-5 Humar M, Lesar B (2013) Efficacy of linseed- and tung-oil-treated wood against wood-decay fungi and water uptake. Int Biodeterior Biodegrad 85(7):223–227. https://doi.org/10.1016/j.ibiod.2013.07.011 Žlahtič M, Mikac U, Serša I, Merela M, Humar M (2017) Distribution and penetration of tung oil in wood studied by magnetic resonance microscopy. Ind Crops Prod 96:149–157. 
https://doi.org/10.1016/j.indcrop.2016.11.049
Wang H, Liu Z, Wang E, Zhang X, Yuan R, Wu S, Zhu Y (2015) Facile preparation of superamphiphobic epoxy resin/modified poly(vinylidene fluoride)/fluorinated ethylene propylene composite coating with corrosion/wear-resistance. Appl Surf Sci 357:229–235. https://doi.org/10.1016/j.apsusc.2015.09.017
Yang YL, Yin YG, Xiong GJ (2013) Study of water resistance of wood coated with epoxy resin. J Build Mater 16(1):170–174. https://doi.org/10.3969/j.issn.1007-9629.2013.01.032
Belt T, Laine K, Hill CAS (2013) Cupping behaviour of surface densified Scots pine wood: the effect of process parameters and correlation with density profile characteristics. J Mater Sci 48(18):6426–6430. https://doi.org/10.1007/s10853-013-7443-1
Rautkari L, Kamke FA, Hughes M (2011) Density profile relation to hardness of viscoelastic thermal compressed (VTC) wood composite. Wood Sci Technol 45(4):693–705. https://doi.org/10.1007/s00226-010-0400-0
Laine K, Rautkari L, Hughes M (2013) The effect of process parameters on the hardness of surface densified Scots pine solid wood. Eur J Wood Wood Prod 71(1):13–16. https://doi.org/10.1007/s00107-012-0649-0
The authors are grateful for the support of Yulin Wood Industry Co., Ltd, and the College of Biosystems Engineering and Food Science of Zhejiang University, Hangzhou, China.
College of Biosystems Engineering and Food Science, Zhejiang University, 866 Yuhangtang Road, Hangzhou, 310058, China: Yong Yu, Aqiang Li, Kaiya Yan & Songming Zhu
Key Laboratory of Equipment and Information in Environment Controlled Agriculture, Ministry of Agriculture, 866 Yuhangtang Road, Hangzhou, 310058, China
Department of Food Science and Agricultural Chemistry, McGill University, St-Anne-de-Bellevue, Quebec, Canada: Hosahalli S. Ramaswamy
Institute of Food Science, Zhejiang Academy of Agriculture Sciences, Hangzhou, 310021, China: Huanhuan Li
YY, HL, and AL designed the study. HL conducted the high-pressure densification treatment of the poplar sapwood boards and was a major contributor in writing the manuscript. YY and AL conducted the tung oil and epoxy resin coating treatments on the densified samples and the subsequent data analysis. KY performed image drawing and analysis. SZ and HSR contributed to writing the manuscript. All authors read and approved the final manuscript.
Correspondence to Huanhuan Li.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Yu, Y., Li, A., Yan, K. et al. High-pressure densification and hydrophobic coating for enhancing the mechanical properties and dimensional stability of soft poplar wood boards. J Wood Sci 66, 45 (2020).
https://doi.org/10.1186/s10086-020-01892-1
Received: 10 October 2019
Keywords: High-pressure densification; Low-density wood densification; Hardness improvement
Numerical Simulation of the Impact of Urban Non-uniformity on Precipitation
Yuqiang SONG1,2, Hongnian LIU1, Xueyuan WANG1, Ning ZHANG1, Jianning SUN1
1. School of Atmospheric Sciences, Nanjing University, Nanjing 210093
2. Dalian Meteorological Station, Dalian 116000
Abstract: To evaluate the influence of urban non-uniformity on precipitation, the area of a city was divided into three categories (commercial, high-density residential, and low-density residential) according to the building density data from Landsat satellites. Numerical simulations of three corresponding scenarios (urban non-uniformity, urban uniformity, and non-urban) were performed in Nanjing using the WRF model. The results demonstrate that the existence of the city results in more precipitation, and that urban heterogeneity enhances this phenomenon. For the urban non-uniformity, uniformity, and non-urban experiments, the mean cumulative summer precipitation was 423.09 mm, 407.40 mm, and 389.67 mm, respectively. Urban non-uniformity has a significant effect on the amount of heavy rainfall in summer. The cumulative precipitation from heavy rain in the summer for the three numerical experiments was 278.2 mm, 250.6 mm, and 236.5 mm, respectively. In the non-uniformity experiments, the amount of precipitation between 1500 and 2200 (LST) increased significantly. Furthermore, the adoption of urban non-uniformity into the WRF model could improve the numerical simulation of summer rain and its daily variation.
Keywords: urban non-uniformity, urban precipitation, WRF model
Chen L. X., W. Q. Zhu, and X. J. Zhou, 2000: Characteristics of environmental and climate change in Changjiang Delta and its possible mechanism. Acta Meteorologica Sinica, 14(2), 129–140. (in Chinese)
Hu X. M., P. M. Klein, and M. Xue, 2013: Impact of low-level jets on the nocturnal urban heat island intensity in Oklahoma City. Journal of Applied Meteorology and Climatology, 52(8), 1779–1802.
Jauregui E., E. Romales, 1996: Urban effects on convective precipitation in Mexico City. Atmos. Environ., 30(20), 3383–3389.
Li S. Y., H. B. Chen, and W. Li, 2008: The impact of urbanization on city climate of Beijing region. Plateau Meteorology, 27(5), 1102–1110. (in Chinese)
Liao J. B., X. M. Wang, Y. X. Li, and B. C. Xia, 2011: An analysis study of the impacts of urbanization on precipitation in Guangzhou. Scientia Meteorologica Sinica, 31(4), 384–390. (in Chinese)
Lowry W. P., 1998: Urban effects on precipitation amount. Progress in Physical Geography, 22(4), 477–520.
Miao S. G., F. Chen, Q. C. Li, and S. Y. Fan, 2011: Impacts of urban processes and urbanization on summer precipitation: A case study of heavy rainfall in Beijing on 1 August 2006. Journal of Applied Meteorology and Climatology, 50(4), 806–825.
Rosenfeld D., 2000: Suppression of rain and snow by urban and industrial air pollution. Science, 287(5459), 1793–1796.
Shepherd J. M., H. Pierce, and A. J. Negri, 2002: Rainfall modification by major urban areas: Observations from spaceborne rain radar on the TRMM satellite. J. Appl. Meteor., 41(7), 689–701.
Skamarock W. C., and Coauthors, 2008: A description of the Advanced Research WRF version 3. NCAR Tech.
Note NCAR/TN-475+STR, 88 pp.
Song J., J. P. Tang, and J. N. Sun, 2009: Simulation study of the effects of urban canopy on the local meteorological field in the Nanjing area. Journal of Nanjing University (Natural Sciences), 45(6), 779–789. (in Chinese)
Song Y. Q., H. N. Liu, X. Y. Wang, N. Zhang, and J. N. Sun, 2014a: The influence of urban heterogeneity on the surface energy balance and characters of temperature and wind. Journal of Nanjing University (Natural Sciences), 50(6), 810–819. (in Chinese)
Song Y. Q., H. N. Liu, Y. Zhu, and X. Y. Wang, 2014b: Numerical simulation of urban heterogeneity's influence on urban meteorological characteristic. Plateau Meteorology, 33(6), 1579–1588. (in Chinese)
Sun J. S., B. Yang, 2008: Meso-β scale torrential rain affected by topography and the urban circulation. Chinese Journal of Atmospheric Sciences, 32(6), 1352–1364. (in Chinese)
Sun J. S., H. Wang, L. Wang, F. Liang, Y. X. Kang, and X. Y. Jiang, 2006: The role of urban boundary layer in local convective torrential rain happening in Beijing on 10 July 2004. Chinese J. Atmos. Sci., 30(2), 221–234. (in Chinese)
Wu X., X. Y. Wang, X. N. Zeng, and L. Xu, 2000: The effect of urbanization on short duration precipitation in Beijing. Journal of Nanjing Institute of Meteorology, 23(1), 68–72. (in Chinese)
Zhang C. L., S. G. Miao, Q. C. Li, and F. Chen, 2007: Impacts of fine-resolution land use information for Beijing on a summer severe rainfall simulation. Chinese Journal of Geophysics, 50(5), 1373–1382. (in Chinese)
Manuscript received: 21 April 2015; revised: 29 December 2015.
Urbanization takes place at an exceptionally rapid rate in China. The areas of cities in China, particularly in the Beijing-Tianjin-Hebei, Yangtze River Delta, and Pearl River Delta urban agglomeration regions, have continued to grow. The process of urbanization has altered the natural surface and resulted in increases in anthropogenic heat and pollutant emissions, which inevitably have impacts on urban meteorological environments. Problems such as "urban heat islands" (UHIs), "dry islands", "rain islands" and "turbid islands" have arisen in association with urbanization. Many studies have been conducted on these phenomena, and important progress has already been made. However, the urban effect on precipitation is relatively complicated. Potential mechanisms of the effects of cities on precipitation include the dynamic actions of urban buildings and the thermodynamic actions of UHIs altering the flow field characteristics, the impervious surfaces of urban areas affecting the evaporation process and land-surface water vapor transfer, and urban air pollution (e.g., the direct and indirect effects of aerosols) affecting radiation and cloud microphysical processes.
Studying the urban effect on precipitation has become an important research focus in the atmospheric sciences, with a large number of studies devoted to the subject (Jauregui and Romales, 1996; Lowry, 1998; Li et al., 2008). Liao et al. (2011) analyzed the pattern of variation in precipitation in Guangzhou using daily observed precipitation data from 1959 to 2009 and found that the number of heavy rain days is increasing. The study conducted by Zhang et al. (2007) determined that urban expansion reduces the coverage of natural vegetation, which further reduces surface evaporation and the local water vapor supply. Meanwhile, it increases the boundary layer height and enhances the mixing of atmospheric water vapor, ultimately decreasing the amount of precipitation. Sun et al. (2006) analyzed the formation mechanism and the role of the urban boundary layer in a relatively independent meso-β-scale convective rainstorm system in Beijing. The observational study conducted by Chen et al. (2000) determined that precipitation in the Yangtze River Delta region increased in response to urbanization, while the temperature in surrounding regions decreased. Based on mesoscale weather dynamics theory and a scale analysis method, Sun and Yang (2008) found that a meso-β-scale rainstorm was affected by the interaction of the terrain and UHIs. Shepherd et al. (2002) analyzed the distribution of summer precipitation in several cities in the U.S. from 1998 to 2000 using TRMM satellite data. They reported a 28% increase in the mean monthly precipitation at locations 30-60 km from cities in the downwind direction, and an approximate 5.6% increase in urban areas. Rosenfeld (2000) suggested that urban and industrial air pollution can suppress rain and snowfall in downwind areas. Finally, the results of the study conducted by Wu et al. (2000) demonstrated that the most significant urban effect, attributable to the thermal and dynamic effects of cities, was the increase in short-duration precipitation. High-resolution numerical simulation is a common and effective tool for studying the effects of cities on precipitation. As a result of its performance, the WRF model has been widely applied in the field of urban meteorology. Details on the WRF model can be found in Skamarock et al. (2008). The WRF model classifies cities into three categories: commercial, high-density residential (hi-dens res), and low-density residential (low-dens res). The buildings in commercial areas are the tallest, whereas the buildings in low-dens res areas are the shortest. In most cases, when using WRF (version 3.3.1 in this study) to simulate urban meteorology, a city is assigned to one of the aforementioned categories based on its building density (Song et al., 2009). However, because urban density is non-uniform, all three types of urban area exist within a city. Therefore, it is difficult to conclude that a city belongs solely to one of these categories. Hu et al. (2013) considered urban non-uniformity in a study on low-level jets. In Nanjing, there are numerous skyscrapers in the downtown area, i.e., in Gulou and Xinjiekou, which is commercial. However, the southern part of the city, near the Qinhuai River, belongs to the hi-dens res category, and the eastern part, i.e., Xianlin, belongs to the low-dens res category. Assigning the whole city to a single category may therefore introduce relatively large errors into the model.
Some studies (Song et al., 2014a, b) show that the surface energy balance, temperature and wind are significantly influenced by urban non-uniformity, but there have been few studies on the link between urban non-uniformity and precipitation. Building dynamics, UHIs and aerosols all influence precipitation, and the former two also affect each other. When urban non-uniformity is considered, the urban dynamic parameters change, which in turn changes the UHI effect. Therefore, urban non-uniformity is an important factor and should be considered in precipitation simulations. In the present study, Nanjing was recategorized based on the characteristics of different areas of the city, and the potential effect of urban non-uniformity on precipitation was investigated.

2.1. Model description

The WRF model system is a new-generation mesoscale forecasting model and assimilation system, jointly established in 1997 by the Mesoscale & Microscale Meteorology Division of NCAR, the Environmental Modeling Center at NCEP, the Forecast Research Division of the Forecast Systems Laboratory, and the Center for Analysis and Prediction of Storms at the University of Oklahoma. The WRF model system has been applied extensively. The urban canopy model (UCM) in the WRF model was used in the present study. The UCM includes the following features: (1) streets with 2D structures are parameterized to calculate their thermal characteristics; (2) the shadows and reflections of buildings are considered; (3) the orientations of streets and the daily variation of the solar elevation angle are considered; (4) the thermal effects of road surfaces, wall surfaces and roofs are differentiated.

2.2. Non-uniform distribution of Nanjing

Three experiments (A, B and C) were designed for the present study. In experiment A, the non-uniformity of the city was considered; different areas of the city were classified into three categories: commercial, hi-dens res, and low-dens res. In experiment B, the city was considered to be uniform (hi-dens res). In experiment C, the effect of a non-urban environment was considered; the original land surface types of the city were replaced by irrigated cropland and pasture. The city classification was mainly based on building density (Song et al., 2014a, Fig. 1). The building densities were obtained by statistically analyzing the 25-m resolution land surface type data derived from Landsat satellite imagery. When the building density was less than 0.3, the area was defined as low-dens res; when it was greater than or equal to 0.45, the area was defined as commercial; densities in between were classified as hi-dens res (a simple sketch of this rule is given below). The distributions of the land surface types in the third level of the model for experiments A and B can be found in Song et al. (2014a, Fig. 2). The commercial area accounted for 23.7% of the non-uniform city, while the hi-dens res and low-dens res categories accounted for 41.2% and 35.1%, respectively. In the uniform case, however, the city area was entirely hi-dens res. When this spatial variation is considered, the land use types of the central and peripheral areas of Nanjing change. Although the mean building height of the city decreases, the non-uniformity of the urban surface increases. Compared with a uniform city, the mean values of sensible heat flux, UHI intensity and friction velocity are lower in a non-uniform city, but the extreme values are larger (Song et al., 2014a).
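To make the classification rule in section 2.2 concrete, the short Python sketch below maps a building-density field to the three WRF urban categories. It is an illustrative reconstruction, not the authors' processing code: the thresholds (0.3 and 0.45) come from the text, the assignment of intermediate densities to hi-dens res is implied rather than stated, and the category codes follow the 33-class USGS land use scheme commonly used with WRF (31 = low-dens res, 32 = hi-dens res, 33 = commercial); the function and variable names are hypothetical.

import numpy as np

# Assumed USGS 33-category urban codes used with WRF's urban canopy model
LOW_DENS_RES, HI_DENS_RES, COMMERCIAL = 31, 32, 33

def classify_urban_cell(building_density):
    # Thresholds from section 2.2: <0.3 -> low-dens res, >=0.45 -> commercial;
    # intermediate densities are assumed to map to hi-dens res.
    if building_density < 0.30:
        return LOW_DENS_RES
    elif building_density >= 0.45:
        return COMMERCIAL
    return HI_DENS_RES

# Toy example: fraction of urban grid cells falling in each category
density = np.array([0.18, 0.35, 0.52, 0.47, 0.28, 0.41])
categories = np.array([classify_urban_cell(d) for d in density])
for name, code in (("low-dens res", LOW_DENS_RES),
                   ("hi-dens res", HI_DENS_RES),
                   ("commercial", COMMERCIAL)):
    print(name, round(100.0 * float(np.mean(categories == code)), 1), "%")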
These comparisons demonstrate that a non-uniform city provides weaker mean land-surface forcing, but stronger land-surface turbulent forcing, than a uniform city. The enhancement of precipitation is therefore due to the urban non-uniformity rather than to the mean land-surface forcing.

Figure 1. Observed and simulated monthly mean precipitation at Nanjing station.

Figure 2. Spatial distribution of summer accumulated precipitation (mm) in the Nanjing region: (a) non-uniform experiment; (b) uniform experiment; (c) non-urban experiment; (d) non-uniform-uniform comparison. The wind fields in the figure are the mean wind fields at 10 m for rainy summer days (m s$^{-1}$).

2.3. Experimental design

A three-level nested grid with two-way nesting was used for the simulation. The central latitude and longitude of the domain were 32.1004°N and 118.8986°E, respectively. The outer domain was 900 km × 900 km, with 9-km horizontal grid spacing; the middle domain was 303 km × 303 km, with 3-km spacing; and the innermost domain was 101 km × 101 km, with 1-km spacing. The model top was set at 100 hPa and there were 27 vertical layers. The simulation period was from 0000 UTC 1 January 2011 to 1800 UTC 31 December 2011, and the model was run month by month. The 1° × 1° resolution NCEP data were used as the boundary conditions, which were updated every 6 h. Model output was saved every hour. The model parameterization can be found in Song et al. (2014a, Table 1). The Multi-layer Building Environment Model, the Monin-Obukhov surface-layer scheme, and the unified Noah land-surface scheme were chosen in this study.

Figure 3. Frequency of summer (June, July and August) precipitation in the Nanjing region (%): (a) non-uniform experiment; (b) uniform experiment; (c) non-urban experiment; (d) non-uniform-uniform comparison.

3.1. Total summer precipitation

The precipitation for Jiangsu province in the summer of 2011 ranged from 247.1 mm (Guanyun) to 1243.6 mm (Jiangyin), mainly concentrated in the southern part of Jiangsu and over the Yangtze River. The mean was 731.2 mm, which was approximately 50% of the annual value. Figure 1 compares the simulated and observed total monthly precipitation (mm) in the three experiments. The surface data from a representative station in Nanjing (58238; coordinates: 31.93°N, 118.90°E) were selected as the observation. The observed annual precipitation amount in 2011 was 989.2 mm. The simulated annual precipitation amount at the Nanjing station in the three experiments was 810.4 mm, 723.2 mm and 625.7 mm, respectively. In winter, spring and fall, the precipitation simulated in the non-uniform, uniform and non-urban experiments was relatively close to the observation. However, in summer, the accumulated precipitation simulated in the uniform and non-urban experiments was significantly lower than observed, whereas that simulated in the non-uniform experiment was closest to the observation. The error of the monthly mean accumulated precipitation for 2011 was 14.9 mm in the non-uniform experiment, which was lower than that of the uniform experiment. Thus, the simulation accuracy for urban precipitation can be effectively increased when urban non-uniformity is considered in the WRF model. Compared with the uniform experiment, the accumulated precipitation in the 12 months of the non-uniform experiment changed by 0.68%, -0.33%, 0.50%, -2.40%, -1.13%, 14.51%, -6.81%, 2.42%, -0.31%, -0.45%, -2.41% and -0.52%, respectively.
For summertime (June, July, August), the values were 14.51%, -6.81% and 2.42%, respectively. The differences among the three experiments were greatest in summer because the amount of precipitation is largest in this season. Therefore, the effect of urban non-uniformity on summer precipitation (June, July and August) is mainly discussed hereafter. Figure 2 presents the spatial distribution of the simulated accumulated precipitation in summer in the Nanjing region. It shows a mean northeasterly wind field on rainy days. The wind speed significantly decreased in the urban area, especially centrally in the non-uniform experiment. Compared with the uniform experiment, the airflows also converged at the center in the non-uniform experiment (Fig. 2d). The simulated summer mean accumulated precipitation amount for the entire region was 423.09 mm, 407.40 mm and 389.67 mm in experiments A, B and C, respectively. There was a significant difference among the simulations, and the non-uniform experiment result was the highest. Compared with the non-urban experiment, the existence of the city resulted in a significant increase in precipitation in the urban area and downstream locations. The distribution patterns of the summer precipitation were significantly different in the non-uniform and uniform experiments. Precipitation increased more significantly in the urban areas and the southeast in the non-uniform experiment, whereas it decreased in downstream areas. In addition, the maximum precipitation amount was greater in the non-uniform experiment. Figure 4. Intensity of summer precipitation in the Nanjing region (mm h$^{-1}$): (a) non-uniform experiment; (b) uniform experiment; (c) non-urban experiment; (d) non-uniform-uniform comparison. The wind fields in the figure are the mean wind fields at 10 m for rainy summer days (m s$^{-1}$). Figure 5. Daily variation of summer precipitation in the Nanjing region: (a) observation data from Nanjing station; (b) simulation results. Figure 6. Spatial distribution of accumulated precipitation on 20 July 2011, in Nanjing (mm): (a) non-uniform experiment; (b) uniform experiment; (c) non-urban experiment; (d) non-uniform-uniform comparison. The wind fields in the figure are the mean wind fields at 10 m on 20 July 2011 (m s$^{-1}$). Figure 3 presents the spatial distributions of the simulated summer precipitation frequency (the ratio of precipitation time to total summertime) in the Nanjing region. Compared to the non-urban experiment (experiment C), the precipitation frequency was higher in the urban areas of experiments A and B. In general, the presence of the city resulted in an increasing trend for the precipitation frequency in the urban area. Also, it increased in the southeastern direction of the city, but decreased, to a certain extent, in the northwestern direction. Figure 4 presents the spatial distributions of the simulated summer precipitation intensity (the ratio of accumulated summer precipitation to total precipitation time) in the Nanjing region. The distributions were relatively similar to the accumulated summer precipitation. The mean intensity of the precipitation simulated in the non-uniform, uniform and non-urban experiments was 1.37 mm h$^{-1}$, 1.33 mm h$^{-1}$ and 1.26 mm h$^{-1}$, respectively. Figure 7. Daily mean friction velocity on 20 July 2011 across the Nanjing region (m s$^{-1}$): (a) non-uniform experiment; (b) uniform experiment; (c) non-urban experiment; (d) non-uniform-uniform comparison.
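To make the three summer precipitation diagnostics above (accumulated amount, frequency and intensity) concrete, a minimal post-processing sketch is given below. It assumes hourly precipitation fields stacked in a NumPy array and uses a 0.02 mm wet-hour cutoff; the array shapes, threshold and synthetic data are illustrative assumptions rather than the authors' exact workflow.

```python
import numpy as np

def summer_precip_stats(hourly_precip, wet_threshold=0.02):
    """Accumulated amount, frequency and intensity from hourly precipitation.

    hourly_precip : array of shape (n_hours, ny, nx), hourly totals in mm
    wet_threshold : minimum hourly amount (mm) counted as a precipitation hour
                    (0.02 mm is an illustrative cutoff, not the paper's value)
    """
    accumulated = hourly_precip.sum(axis=0)                    # mm per grid cell
    wet_hours = (hourly_precip >= wet_threshold).sum(axis=0)   # hours with rain
    n_hours = hourly_precip.shape[0]

    frequency = 100.0 * wet_hours / n_hours                    # % of summer hours
    intensity = np.divide(accumulated, wet_hours,              # mm per rainy hour
                          out=np.zeros_like(accumulated), where=wet_hours > 0)
    return accumulated, frequency, intensity

# Toy example: one summer (92 days) of hourly output on a small grid
rng = np.random.default_rng(0)
rain = rng.exponential(0.05, size=(92 * 24, 51, 51))
acc, freq, inten = summer_precip_stats(rain)
print(acc.mean(), freq.mean(), inten.mean())
```

Applying the same routine to each experiment's output would reproduce the kind of comparison shown in Figs. 2-4.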
Figure 5a presents the observed accumulated precipitation every 6 h at Nanjing station. The data indicate that the occurrence of precipitation was greatest between 0200 and 0800 (LST), slightly lower between 0800 and 1400 (LST) and between 1400 and 2000 (LST), and lowest between 2000 and 0200 (LST). Figure 5b presents the daily variation of the summer mean accumulated precipitation for the entire Nanjing region. Compared to the observation, the lowest precipitation simulated by the three experiments occurred at night, while the highest occurred in the morning. In the morning, the precipitation was lowest in the non-uniform experiment. However, in the afternoon, the precipitation in the uniform and non-urban experiments was much lower than in the morning, while the non-uniform experiment results were much higher and equivalent to the simulations in the morning. Clearly, the results of the non-uniform experiment were better. Therefore, urban non-uniformity can significantly increase convective precipitation in summer afternoons, and improve the model's performance in terms of the pattern of daily precipitation. In the present study, light, moderate and heavy rain were defined as daily precipitation from 0.02 to 10 mm, from 10 to 20 mm, and greater than 20 mm, respectively. The effects of the three experiments on these three grades of rain were also compared. There was no significant difference between the spatial distribution of accumulated precipitation simulated in the three experiments for light and moderate rain (data not shown). The mean accumulated precipitation for light rain simulated in experiments A, B and C was 73.5 mm, 79.9 mm and 75.3 mm, respectively. And for moderate rain, the values were 71.5 mm, 76.9 mm and 77.9 mm, respectively. However, there were significant differences among the accumulated precipitation amounts for heavy rain in the three experiments: 278.2 mm (experiment A); 250.6 mm (experiment B), and 236.5 mm (experiment C). Hence, the effect of urban non-uniformity on precipitation was primarily manifested during heavy precipitation events in summer. 3.2. Analysis of a precipitation event Figure 6 shows the distribution of simulated precipitation for an event that occurred on 20 July 2011 in the Nanjing region. The precipitation and its intensity were smallest in the non-urban experiment and largest in the non-uniform experiment. In addition, the majority of precipitation was distributed in and around the urban area. For both the precipitation range and its intensity, the results of the uniform experiment (experiment B) were between those of the non-urban and non-uniform experiments. The mean accumulated precipitation simulated in the three experiments was 1.24 mm, 0.93 mm and 0.23 mm, respectively. Figure 7 presents the spatial distributions of mean daily friction velocity on 20 July 2011. The friction velocity in the urban area was significantly greater than that in the suburban area. When urban non-uniformity was considered, the friction velocity exhibited a more complicated spatial distribution. The friction velocity increased obviously in the central urban area and also increased over downstream locations, even where the friction velocity was relatively low. Though the overall urban building height decreased, the non-uniformity of the urban distribution increased in the non-uniform city. Overall, the roughness increased, resulting in significant disturbances in the flow field across the urban area. Figure 8. 
Daily mean vertical velocity profile on 20 July 2011 across the Nanjing region (cm s$^{-1}$) [vertical wind vector (w) $\times 25$; the blue lines at the bottom of (a, b, d) represent the region of the city]: (a) non-uniform experiment; (b) uniform experiment; (c) non-urban experiment; (d) non-uniform-uniform comparison. The blue boxes reflect the regions that have significant updrafts. The wind fields in the figure are the mean wind fields in the vertical direction on 20 July 2011 (m s$^{-1}$). Figure 9. Daily mean water vapor flux divergence at 850 hPa on 20 July 2011 across the Nanjing region (10$^{-6}$ kg m$^{-2}$ s$^{-2}$): (a) non-uniform experiment; (b) uniform experiment; (c) non-urban experiment; (d) non-uniform-uniform comparison. The red box reflects the region that has significant water vapor divergence. Figure 10. Daily mean vertical velocity at 700 hPa on 20 July 2011 across the Nanjing region (m s$^{-1}$): (a) non-uniform experiment; (b) uniform experiment; (c) non-urban experiment; (d) non-uniform-uniform comparison. Figure 8 presents the mean daily vertical velocity profiles in the y direction that passed through the center of the domain on 20 July 2011. Compared to experiments B and C, there was a significant updraft in the north of the city in experiment A (see the areas within the blue rectangles in Figs. 8a and d), which corresponded very well to the location of the precipitation. Figure 9 presents the spatial distributions of daily mean water vapor flux divergence at 850 hPa on 20 July 2011. For experiment A, there was significant water vapor divergence in the precipitation area (the area within the red rectangle in Fig. 9a). The water vapor flux divergence in the precipitation area was -0.268 × 10$^{-6}$, -0.055 × 10$^{-6}$ and -0.045 × 10$^{-6}$ kg m$^{-2}$ s$^{-2}$ in experiments A, B and C, respectively. The region of water vapor divergence within the precipitation area for the three experiments was 608 km$^2$, 554 km$^2$ and 543 km$^2$, respectively. Hence, water vapor divergence was the most intense and covered the largest area in the non-uniform experiment. Figure 10 presents the distributions of daily mean vertical velocity at 700 hPa on 20 July 2011. Compared to the other two experiments, there was significant upward movement in the precipitation area in experiment A, and the areas of positive vertical velocity were more concentrated. The mean upward velocity within the precipitation area at 700 hPa was 1.1 cm s$^{-1}$, 0.1 cm s$^{-1}$ and -0.5 cm s$^{-1}$ for experiments A, B and C, respectively. In addition, the area of upward movement was 553 km$^2$, 427 km$^2$ and 268 km$^2$, respectively. Hence, at 700 hPa the upward velocity was fastest and the area of upward movement largest in experiment A. The upward velocity and area of upward movement in experiment B were the second fastest and largest, respectively. In experiment C, the mean vertical movement was downward, and the area of upward movement was the smallest among the three experiments. Mechanistically, the urban impact on precipitation involves dynamic, thermodynamic and chemical effects. The dynamic effects involve increases in surface roughness and enhancements to the drag and lift effects on the airflow. Thermodynamic effects encompass changes in the surface energy balance and the impact of the UHI on the structure of the urban boundary layer. And chemical effects mainly relate to artificial increases in the influence of aerosols on the microstructure of clouds, otherwise known as "aerosol indirect effects".
In this study, the setup of the WRF model did not include chemical processes. Therefore, the impact of urban non-uniformity on precipitation was restricted to the other two aspects: thermodynamic and dynamic effects. Compared to the uniform city, the mean UHI intensity and heat flux of the non-uniform city were lower. Figure 11 shows the diurnal variation of the UHI intensity in the non-uniform and uniform experiments, indicating that the diurnal variation of the UHI could not explain the difference in the diurnal variation of precipitation between these two urban experimental setups (Fig. 5b). We believe that in this simulation, the influence of the UHI was not the main reason for the increased precipitation in the non-uniform city. Certainly, however, the UHI may play an important role in the increase of precipitation compared with the non-urban experiment (Miao et al., 2011). In the experiments carried out in this work, the total volume of buildings in the non-uniform and uniform setups was 6.41 km$^3$ and 6.07 km$^3$, respectively. The volume in the non-uniform experiment was 5.6% higher, which was close to the percentage increase of precipitation (3.85%) in summer. However, heavy rain in summer increased by 11%, much more than the increase in building volume in the non-uniform experiment. This shows that the increase in summer precipitation was due to two aspects, the increase in buildings and the urban non-uniformity, and that the effect of urban non-uniformity on convective precipitation was much greater than that on non-convective precipitation. Figure 11. Daily variation of the summer UHI in the Nanjing region. In the present study, sensitivity simulations using the WRF model were conducted to investigate the effect of urban non-uniformity on precipitation in Nanjing in 2011. The main findings can be summarized as follows: (1) The effect of urban non-uniformity on precipitation was relatively small in winter, spring and fall, but relatively large in summer. The precipitation simulated in the non-uniform experiment was the most comparable to observations, implying that consideration of urban non-uniformity can significantly improve model performance in terms of urban summer precipitation. (2) Urbanization will result in increases of total accumulated precipitation, precipitation intensity and precipitation frequency in urban areas, and this effect is further increased when urban non-uniformity is considered. The accumulated summer precipitation was 423.1 mm, 407.4 mm and 389.7 mm in the non-uniform, uniform and non-urban experiments, respectively. Therefore, the amount of precipitation simulated in the non-uniform experiment was largest. (3) The simulated contribution of heavy rain (daily accumulated precipitation >20 mm) to precipitation was significantly higher in the non-uniform experiment. The summer mean accumulated precipitation for heavy rain was 278.19 mm, 250.61 mm and 236.54 mm in the three experiments, respectively. The effect on light rain and moderate rain was relatively small. (4) When urban non-uniformity was considered, the precipitation in the morning decreased, but the precipitation between 1500 and 2200 (LST) increased significantly. The pattern of the daily variation was closest to observations in the non-uniform experiment.
(5) The effect of urban non-uniformity on precipitation is mainly realized through increased land surface roughness and surface friction velocity, which in turn increase the low-level water vapor divergence and enhance the mean upward velocity, promoting an increase in heavy precipitation in the afternoon. It is important to note that in investigating the effect of urban non-uniformity on precipitation in this study, the urban non-uniformity was represented by only three categories. Furthermore, the dynamic and thermodynamic effects relating to urban non-uniformity were not separated. This will be the next step in our continuing research.
Predicting breast cancer metastasis from whole-blood transcriptomic measurements Einar Holsbø ORCID: orcid.org/0000-0002-9728-20881, Vittorio Perduca2, Lars Ailo Bongo1, Eiliv Lund3,4 & Etienne Birmelé2 In this exploratory work we investigate whether blood gene expression measurements predict breast cancer metastasis. Early detection of increased metastatic risk could potentially be life-saving. Our data come from the Norwegian Women and Cancer epidemiological cohort study. The women who contributed to these data provided a blood sample up to a year before receiving a breast cancer diagnosis. We estimate a penalized maximum likelihood logistic regression. We evaluate this in terms of calibration, concordance probability, and stability, all of which we estimate by the bootstrap. We identify a set of 108 candidate predictor genes that exhibit a fold change in the average metastasized observation where there is none for the average non-metastasized observation. About one in ten women will at some point develop breast cancer (BC). About 25% have an aggressive cancer at the time of diagnosis, with metastatic spread. The absence or presence of metastatic spread largely determines the patient's survival. Early detection is hence very important in terms of reducing cancer mortality. A blood sample is cheaper and less invasive than the usual node biopsy. Were we able to detect signs of metastasis or metastatic potential by a blood sample, we could conceivably start treatment earlier. Several recent articles develop this idea of liquid biopsies [1]. A review in Cancer and Metastasis Reviews [2] lists liquid biopsies and large data analysis tools as important challenges in metastatic breast cancer research. The Norwegian Women and Cancer (NOWAC) postgenome cohort [3] is a prospective population-based cohort that contains blood samples from 50,000 women born between 1943 and 1957. Out of these, in total about 1600 BC case–control pairs (3200 blood samples) have at various times been processed to provide transcriptomic measurements in the form of mRNA abundance. These measurements combine with questionnaires, disease status from the Norwegian Cancer Registry, and death status from the Cause of Death Registry from Statistics Norway to provide a high-quality dataset. These data are used for exploration and hypothesis generation. We examine 88 breast cancer cases from the NOWAC study. The blood samples were provided 6–358 days before BC diagnosis. We fit a penalized likelihood logistic regression with the ElasticNet-type penalty [4]. This approach provides built-in variable selection in the estimation procedure. Our model suggests 108 predictor genes that form a potential direction for further research. We analyze 88 cases with breast cancer diagnoses from the NOWAC Post-genome cohort [5]. For each case, we have an age-matched control that we use to normalize the gene expression levels. For our analysis this is mainly done to mitigate batch effects from the lab processing of the blood samples, cases and controls being kept together for the whole pipeline. Only women who received a breast cancer diagnosis at most one year after providing a blood sample were considered as cases. This limits our sample size but it is more biologically plausible to see a signal in more recent blood samples. Out of the 88 breast cancers, 25% have metastases. The metastatic and non-metastatic cancers are fairly similar in terms of the usual covariates. The proportion of smokers is 13% against 25%, respectively.
The proportion of hormone treatment is 25% against 31%. The median age (with .05 and .95 quantiles) is 56 (51, 61) against 56 (51, 62). The median BMI is 24.5 (19.4, 35.9) against 25.5 (21.1, 32.4). The median parity is 2 (1, 3) against 2 (0, 3). The data were processed according to [6] and [7]. The pre-processed data is an \(88\times 12404\) fold change matrix, X, on the \(\log _2\) scale. For each gene, g, and each observation, i, we have the measurement \(\log _2 x_{ig} - \log _2 x^\prime _{ig}.\) Here \(x_{ig}\) is the expression level of gene g for the ith case, and \(x^\prime _{ig}\) is the corresponding control. The response variable, metastasis, indicates the presence of metastatic spread. We model the probability of metastasis, p(m), given gene expression across all genes, x, by a penalized likelihood logistic regression with an ElasticNet-type penalty [4]. The likelihood of the logistic model $$\begin{aligned} \log \frac{p(m)}{1-p(m)} = \beta _0 +\beta _1x_1 + \cdots + \beta _px_p \end{aligned}$$ is maximized under the constraint that \((1-\alpha )\sum \left| \beta _j\right| + \alpha \sum \beta _j^2 \le t\) for some user-specified penalty size t and mixing parameter \(\alpha \). We choose \(\alpha =0.5\) a priori and find a penalty size t in a data-driven way by optimizing for the modified version of Akaike's Information Criterion [8, 9], $$\begin{aligned} AIC^\prime = LR \chi ^2 - 2k, \end{aligned}$$ where \(LR \chi ^2\) is the likelihood ratio \(\chi ^2\) for the model and k is the number of non-zero coefficients. We use this criterion on the recommendation of Harrell [10], who states that maximizing this criterion in terms of penalty often leads to a reasonable choice. We prefer this to tuning by cross-validation since it does not require data splitting. Data splitting procedures tend to induce more variance, which is undesirable with as few observations as we have. A more detailed discussion of these choices can be found in [11]. We evaluate models by several criteria. Brier score [12] is the mean squared error, $$\begin{aligned} {{\bar{B}}} = n^{-1} \sum ({\hat{y}}_i - y_i)^2, \end{aligned}$$ between the probability that was predicted by the model, \({\hat{y}}\), and the known outcomes, y. It is a one-number summary of the calibration of predicted probabilities. We also assess calibration by means of a calibration curve. This is an estimate of the proportion of true successes as a function of predicted probability, which we calculate by smoothing the true zero/one outcome as a function of predicted probability (LOWESS with a span of \(\frac{2}{3}\)). If n observations receive a prediction of \({\hat{p}}\), \(n{\hat{p}}\) of them should have the predicted condition for a well-calibrated model. Concordance probability is the probability of ranking (in terms of predicted \({\hat{p}}\)) a randomly chosen positive higher than a randomly chosen negative. This is equivalent to the area under the receiver operating characteristic curve (AUC), and is proportional to the Mann-Whitney-Wilcoxon U statistic [13]. Stability is the proportion of overlap between predictor genes chosen during different realizations of the modeling procedure. We follow [14] and measure this by the Jaccard index, \(\frac{|S_1 \cap S_2|}{|S_1 \cup S_2|},\) where \(S_1\) and \(S_2\) are two sets of predictor genes.
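As a concrete sketch of this fitting procedure, the snippet below fits ElasticNet-penalized logistic regressions over a grid of penalty strengths with scikit-learn and keeps the fit that maximizes \(AIC^\prime\). This is an illustration of the approach described above rather than the authors' implementation (the original analysis appears to have been done in R), and the penalty grid is an arbitrary choice.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def log_likelihood(y, p):
    eps = 1e-12
    p = np.clip(p, eps, 1 - eps)
    return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

def fit_elasticnet_aic(X, y, inverse_penalties):
    """Fit ElasticNet-penalized logistic regressions over a grid of penalty
    strengths and keep the fit maximizing AIC' = LR chi^2 - 2k."""
    ll_null = log_likelihood(y, np.full(len(y), y.mean()))  # intercept-only model
    best = None
    for C in inverse_penalties:            # C is the inverse penalty strength
        model = LogisticRegression(penalty="elasticnet", solver="saga",
                                   l1_ratio=0.5, C=C, max_iter=5000)
        model.fit(X, y)
        ll = log_likelihood(y, model.predict_proba(X)[:, 1])
        lr_chi2 = 2.0 * (ll - ll_null)     # likelihood ratio chi-square
        k = int(np.sum(model.coef_ != 0))  # number of selected genes
        aic_prime = lr_chi2 - 2.0 * k
        if best is None or aic_prime > best[0]:
            best = (aic_prime, C, model)
    return best

# Usage: X is the 88 x 12404 fold-change matrix, y the 0/1 metastasis indicator
# aic, C, model = fit_elasticnet_aic(X, y, np.logspace(-3, 1, 20))
```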
Brier score and concordance probability are estimated using the optimism-corrected bootstrap approach described in [15], which has the advantage of using all of the data in estimating model performance, as opposed to data splitting procedures. Stability is estimated from regular bootstrap resampling. Evaluation metrics Figure 1 shows the bootstrap distributions for our estimates of Brier score, concordance probability, and stability. The solid lines show point estimates and the dotted lines indicate the middle .8 of each distribution. The Brier score for our model is roughly .1, while that of an intercept-only null model is roughly .18. Since Brier score is the mean square error of predicted probabilities we can take its root to get an average error on the probability scale; \(\sqrt{.1} \approx .32\), which suggests that the predicted probabilities are not very accurate on average. Figure 2 corroborates this. The figure shows the pointwise calibration of predicted probabilities, i.e., for a given predicted metastasis probability, how great a proportion of observations turned out to have metastases. For a predicted metastasis probability \(<.4\) the true proportion is \(\approx .1\), while for a predicted metastasis probability \(>.8\) the true proportion is \(\approx .7\). In other words we overestimate low probabilities and underestimate high ones. Bootstrap distribution of optimism-corrected estimates for Brier score, concordance/AUC, and stability for the Elasticnet model. The solid vertical lines show point estimates, and the dotted vertical lines show the middle .8 of each distribution. Expected calibration of predicted probabilities shown in solid black. The dotted line shows the middle .8 of the bootstrap distribution. Ideally, .8 of the observations for which .8 metastasis probability was predicted should turn out to show metastasis. In other words the ideal calibration is a diagonal line (shown in grey). Our model tends to overestimate lower probabilities and underestimate higher ones. Returning to Fig. 1, the concordance probability (or AUC) is quite high at roughly .88, with a lower bound for the middle .8 of the distribution at .81. Contrast this with a random guess at .5. This suggests that the model consistently selects gene sets that separate metastases from non-metastases in their expression levels in spite of the fact that the predicted probabilities are poorly calibrated. The stability of these chosen gene sets is around .16, which suggests the likely scenario that there are many correlated genes to choose from. With a stability of .16 for 108 genes you might expect a 17-gene overlap when fitting a similar model to similar data. Selected genes We list the 108 genes selected by penalized likelihood and describe them in general quantitative terms. We keep track of the selected gene sets under resampling and can hence calculate statistics for how often a given gene is selected and for how often a given gene is co-selected with any other gene. Table 1 shows the 108 selected genes ordered by their individual selection probabilities. Apart from the first few genes, the selection probabilities are not very high. It is quite likely that (i) a larger set of genes correlate with the ones we select and get selected in their place some of the time, and (ii) our selected genes correlate with one another and the selection of one sometimes makes the selection of another less likely. This is a natural consequence of doing variable selection: "redundant" information may shrink out of the model.
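The optimism-corrected bootstrap and the Jaccard-based stability estimate described in the Methods can be sketched as follows. Here `fit` is an assumed placeholder for any routine that refits the penalized model and returns the fitted object together with the indices of the selected genes; the stability shown compares each resample's gene set with the full-data selection, which is one simple variant of the Jaccard measure rather than the paper's exact procedure.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def brier(y, p):
    return np.mean((p - y) ** 2)

def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if (a | b) else 1.0

def optimism_corrected(X, y, fit, n_boot=200, seed=0):
    """Optimism-corrected Brier score and AUC plus a simple stability estimate.

    `fit(X, y)` is a placeholder returning (model, gene_indices), with the model
    exposing predict_proba; it is not an existing library function.
    """
    rng = np.random.default_rng(seed)
    model_full, genes_full = fit(X, y)
    p_full = model_full.predict_proba(X)[:, 1]
    apparent = {"brier": brier(y, p_full), "auc": roc_auc_score(y, p_full)}

    optimism = {"brier": [], "auc": []}
    stability = []
    n = len(y)
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)                 # resample with replacement
        m_b, genes_b = fit(X[idx], y[idx])
        p_in = m_b.predict_proba(X[idx])[:, 1]      # performance on the resample
        p_out = m_b.predict_proba(X)[:, 1]          # performance on original data
        optimism["brier"].append(brier(y[idx], p_in) - brier(y, p_out))
        optimism["auc"].append(roc_auc_score(y[idx], p_in) - roc_auc_score(y, p_out))
        stability.append(jaccard(genes_full, genes_b))

    corrected = {k: apparent[k] - np.mean(v) for k, v in optimism.items()}
    return corrected, float(np.mean(stability))
```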
Table 1 Resampling selection probability for the 108 elasticnet-selected genes The selected genes show a clear difference in fold change between metastasized and non-metastasized BC cases; we refer interested readers to Additional file 1. Further figures and discussion, as well as pairwise co-selection statistics, can be found in [11]. The prospective design of NOWAC yields data prior to the cancer diagnosis, thus allowing us to test prediction models on original data corresponding to early-stage cancer. However, there will perforce never be more cases where the blood sample was provided close to diagnosis in this particular study. As the data acquisition technology has changed, there is little hope to produce new comparable data outside of NOWAC. Since our data set is small (88 pairs of women for 12404 probes), we expect the success of both variable selection and prediction to be limited. Concerning variable selection, the set of genes kept in the model is highly unstable under perturbation by resampling, and only a few of them are selected in a meaningful fraction of resamples. Concerning prediction, the AUC is high enough that there is reason for suspicion. The same is the case for Brier score, which is suspiciously low. It is quite likely that the bootstrap corrections for optimism are themselves too optimistic. Moreover, the bootstrap shows high variability in high dimensions. The calibration curve suggests that the predicted probabilities need to be better calibrated for this model to be useful for prediction in a real setting. In model selection with small data sets it is recommended to use AICc, which places a stronger penalty on larger numbers of parameters than the formulation we use [16]. At the same time we overestimate the effective number of parameters by taking k as the number of non-zero parameters, which does not take into account the shrinkage on parameter size. This places a larger penalty than necessary on a given model. Since in our case all models lie on the regularization path decided by the penalty size, a stronger/weaker parameter penalty will lead to similar results in terms of selected genes with some additions/omissions as the case may be. The model we apply does not control for what is considered usual sources of confounding in breast cancer. This is both out of a desire to identify a pre-diagnostic gene signature for metastasis independent of questionnaire data, and from the realization that this would require the estimation of even more coefficients for already-inadequate data. The potential confounding from sources such as smoking and hormone therapy may not be a problem for prediction, but makes interpretation challenging. On the other hand what is considered a source of confounding for breast cancer may or may not be one when comparing breast cancers to one another in terms of metastasis. The explicit way to deal with this would be to derive a causal model to argue from. This study is exploratory and not validated in external data. It is important that this work be viewed as hypothesis generating. The datasets generated and/or analysed during the current study are not publicly available due to restrictions under Norwegian regulations for access to confidential data based on patient consent and Research Ethics terms, but are available from the corresponding author on reasonable request. AUC: Area under the (ROC) curve LOWESS: Locally weighted polynomial regression NOWAC: Norwegian Women and Cancer ROC: Receiver operating characteristic Chi KR. The tumour trail left in blood. Nature.
2016;532:269–71. Lim B, Hortobagyi GN. Current challenges of metastatic breast cancer. Cancer Metastasis Rev. 2016;. https://doi.org/10.1007/s10555-016-9636-y. Lund E, Dumeaux V, Braaten T, Hjartåker A, Engeset D, Skeie G, Kumle M. Cohort profile: the norwegian women and cancer study-nowac-kvinner og kreft. Int J Epidemiol. 2008;37(1):36–41. Zou H, Hastie T. Regularization and variable selection via the elastic net. J R Stat Soc Ser B. 2005;67(2):301–20. Dumeaux V, Børresen-Dale A-L, Frantzen J-O, Kumle M, Kristensen VN, Lund E. Gene expression analyses in breast cancer epidemiology: the Norwegian women and cancer postgenome cohort study. Breast Cancer Res. 2008;10(1):13. https://doi.org/10.1186/bcr1859. Bøvelstad HM, Holsbø E, Bongo LA, Lund E. A standard operating procedure for outlier removal in large-sample epidemiological transcriptomics datasets. bioRxiv 144519 (2017). https://doi.org/10.1101/144519. https://www.biorxiv.org/content/early/2017/05/31/144519.full.pdf. Lund E, Holden L, Bøvelstad H, Plancade S, Mode N, Günther C-C, Nuel G, Thalabard J-C, Holden M. A new statistical method for curve group analysis of longitudinal gene expression data illustrated for breast cancer in the nowac postgenome cohort as a proof of principle. BMC Med Res Methodol. 2016;16(1):28. https://doi.org/10.1186/s12874-016-0129-z. Akaike H. Information theory and an extension of the maximum likelihood principle. In: 2nd international symposium on information theory. Akademiai Kiado; 1973; p. 267–281. Verweij PJ, Van Houwelingen HC. Penalized likelihood in cox regression. Stat Med. 1994;13(23–24):2427–36. Harrell F. Regression modeling strategies as implemented in R package 'rms' version 2013;3(3) Holsbø E. Small data: practical modeling issues in human-model -omic data. PhD thesis, UiT—the arctic University of Norway (2019). Online: https://hdl.handle.net/10037/14660. Brier GW. Verification of forecasts expressed in terms of probability. Monthey Weather Rev. 1950;78(1):1–3. Hanley JA, McNeil BJ. The meaning and use of the area under a receiver operating characteristic (ROC) curve. Radiology. 1982;143(1):29–36. https://doi.org/10.1148/radiology.143.1.7063747. Haury A-C, Gestraud P, Vert J-P. The influence of feature selection methods on accuracy, stability and interpretability of molecular signatures. PLoS ONE. 2011;6(12):28210. https://doi.org/10.1371/journal.pone.0028210. Efron B, Gong G. A leisurely look at the bootstrap, the jackknife, and cross-validation. Am Stat. 1983;37(1):36–48. Burnham KP, Anderson DR. Model selection and multimodel inference: a practical information-theoretic approach. 2nd ed. New York: Springer; 2002. The publication charges for this article have been funded by a grant from the publication fund of UiT The Arctic University of Norway. This study was supported by a grant from the European Research Council (ERC-AdG 232997 TICE). Department of Computer Science, UiT – The Arctic University of Norway, Tromsø, Norway Einar Holsbø & Lars Ailo Bongo Laboratoire MAP5 (UMR CNRS 8145), Université Paris Descartes, Université de Paris, Paris, France Vittorio Perduca & Etienne Birmelé Cancer Registry of Norway, Oslo, Norway Eiliv Lund Department of Community Medicine, UiT – The Arctic University of Norway, Tromsø, Norway Einar Holsbø Vittorio Perduca Lars Ailo Bongo Etienne Birmelé EH provided most writing and data analysis. EB, VP, and LAB contributed substantially to design, interpretation and writing. EL conceived the project and provided study design and data acquisition on the NOWAC side. 
All authors read and approved the final manuscript. Correspondence to Einar Holsbø. The women in this study have given written informed consent for blood sampling. We have received approval from the Regional Committee for Medical Research Ethics for the basic collection and storing of questionnaire information, blood samples and tumour tissue from patients. All women have provided informed consent for later linkages to the Cancer Registry of Norway, the Norwegian Mammographic Screening Program, and the register of death certificates in Statistics Norway. The informed consent formula explicitly mentions that the blood samples can be used for gene–environment analyses. All data are stored and handled according to the permission given by the Norwegian Data Inspectorate. Expression levels of selected genes. This figure shows the expression levels of selected genes ordered by difference in medians between metastasized and non-metastasized observations. Holsbø, E., Perduca, V., Bongo, L.A. et al. Predicting breast cancer metastasis from whole-blood transcriptomic measurements. BMC Res Notes 13, 248 (2020). https://doi.org/10.1186/s13104-020-05088-0
There are several ways to measure helicopter performance. Top speed, range and acceleration are probably familiar to car drivers. However, other factors like hover ceiling and loiter time may not be familiar to the average person. Below we will explore these and other metrics for helicopter performance. We will also discuss properties of power consumption, which is important for many of these performance metrics. Helicopter Performance Metrics Hover Ceiling The hover ceiling is the maximum altitude at which a helicopter can hover. At higher altitudes air is less dense which makes both the rotor and engine less efficient – more power is required by the rotor and less power is available from the engine(s). A chart like the one below may be used to specify the hover ceiling. It shows the maximum pressure altitude at which a helicopter can hover as a function of weight and temperature. Air density is the important value governing hover ceiling, but pressure altitude and temperature are more readily available. Together these two values mostly determine the density. Charts typically use weight on the horizontal axis and pressure altitude on the vertical axis, as shown below. Notice separate lines in the chart for each temperature - higher temperatures correspond to lower air density and therefore lower pressure altitude. A few notes about this performance chart. This ceiling is out of ground effect (OGE), meaning it's well above the ground. When a helicopter is near the ground the rotor is more efficient and can hover at lower air density (higher pressure altitude). For this reason, two charts like the one above are normally provided: one for in ground effect (IGE) and another for out of ground effect (OGE). A little more about the physics of this is provided below. Also, contrary to what you might guess, a helicopter requires less power with moderate forward speed as explained below. Hence, a helicopter can fly above its ceiling with speed. Lastly, this ceiling is at maximum continuous power (MCP). Most helicopters can use more power than MCP, as long as this power is limited to a short time period. For example, higher two minute or 30 second power limits are often provided, enabling a pilot to exceed MCP when necessary. More about Hover Performance Hovering requires more power than cruising. Hence, it's possible to cruise at a given altitude, but not be able to slow and hover there. Why? Aircraft maintain altitude (counteract gravity) by accelerating air downward. In hover, a helicopter is engulfed in this downward flowing column of air. This makes the main rotor less efficient, like a kayaker paddling upstream. The phenomenon is called induced drag on the rotor. With forward speed, the helicopter moves out of this column of air, partially accessing less disturbed air in front of it. This reduces induced drag. (The speed of this downward flowing air may be estimated using momentum theory as described here.)
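To illustrate that momentum-theory estimate, the short sketch below computes the ideal hover induced velocity \(v_i=\sqrt{T/(2\rho A)}\) and the corresponding ideal induced power \(P=Tv_i\). The weight, rotor radius and density are made-up example numbers rather than data for any particular helicopter, and real rotors need somewhat more power than this ideal figure.

```python
import math

RHO_SL = 1.225  # sea-level air density, kg/m^3 (assumed standard value)

def hover_induced_velocity(thrust, rotor_radius, rho=RHO_SL):
    """Ideal induced velocity at the rotor disk, v_i = sqrt(T / (2*rho*A))."""
    disk_area = math.pi * rotor_radius ** 2
    return math.sqrt(thrust / (2.0 * rho * disk_area))

def ideal_induced_power(thrust, rotor_radius, rho=RHO_SL):
    """Ideal hover induced power P = T * v_i; real rotors need roughly 10-20% more."""
    return thrust * hover_induced_velocity(thrust, rotor_radius, rho)

# Example: a 2,000 kg helicopter with a 5.5 m rotor radius at sea level
thrust = 2000.0 * 9.81  # in hover, thrust roughly equals weight (N)
print(hover_induced_velocity(thrust, 5.5))        # about 9 m/s of downwash at the disk
print(ideal_induced_power(thrust, 5.5) / 1000.0)  # about 180 kW ideal induced power
```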
The air thrown down by the helicopter must slow to have no vertical velocity at ground level. When close to the ground, this ends up reducing the downward airflow all the way up to the main rotor. This means induced drag is reduced near the ground and the rotor becomes more efficient. This is the ground effect phenomena previously mentioned. This becomes relevant when the main rotor is within a diameter of the ground, and increases as the rotor gets closer to the ground. Loiter Time Loiter time is particularly important for smaller helicopters used in law enforcement, news and sightseeing. This is the maximum amount of time the helicopter can stay aloft before refueling. It's about 4 hours for commercial helicopters such as the Bell 505 and 412EPI. This requires the helicopter to fly at a specific airspeed, often around 60 knots, called the loiter speed. Loiter speed is the airspeed with the lowest fuel flow rate - smaller or larger speeds will burn fuel faster and reduce time aloft. The example plot below shows fuel flow rate versus airspeed and marks the loiter speed. Knowing this fuel flow rate \(R\) and the aircraft fuel capacity \(C\), one can estimate the loiter time as \(C/R\). To be more accurate, \(R\) also changes with weight and atmospheric conditions. Since weight drops as fuel is burned in a flight, \(R\) decreases within a flight. Another critical performance metric is a helicopter's range. Range is the distance a helicopter can fly without refueling and assuming no wind. This is about 300 to 400 nautical miles for modern commercial helicopters. Range is dependent on the weight (passengers and cargo onboard), airspeed and atmospheric conditions. Of course, range shrinks with added weight. The airspeed that provides maximum range is not the same as loiter speed. Flying faster, while burning more fuel per unit time, burns less fuel per unit distance. Hence, range increases as airspeed increases above loiter speed. This is true up to a limit called the max range airspeed (MRA). Beyond MRA, the fuel burn per unit distance increases. Flying at MRA will get the helicopter from point A to B while consuming the least fuel. However, helicopters are often flown faster for passenger convenience. Breaking Down the Power Required The power required by the helicopter governs most of the performance metrics we've discussed. This power may be subdivided into profile power, parasite power, induced power, and others we'll categorize as miscellaneous (this is how most books on helicopter performance do it). The plot below gives a rough idea of how these values change with airspeed. We'll explain all the values below. Profile Power Profile power is the power required to overcome drag on the rotor blades. This is often about 20% of the power required in hover. This power increases with airspeed. Initially this is due to the larger airspeeds on advancing rotor blades. At higher speeds, this increases further due to retreating blade stall and advancing blade compressibility effects. For technical readers, we'll provide a calculation here. We'll consider a section of the rotor \(dr\) at distance \(r\) from the hub. We'll assume it maintains a constant drag coefficient around the azimuth and denote the product of the chord and drag coefficient as \(c\). This section encounters a drag force \(D=\rho c v_\Psi ^2dr/2\), where \(v_\Psi\) is the airspeed of the section at azimuth location \(\Psi\). 
The airspeed will be \(\Omega r\) due to the rotor rotating at speed \(\Omega\), plus the component of the helicopter's speed \(v\) along the blade section's direction of motion, which is \( v \sin\Psi \). We've used the convention that \(\Psi =0\) corresponds to a blade over the tail of the helicopter, and \(\Psi = 90^o\) is an advancing blade. Averaging the associated power \(Dv_\Psi\) over the azimuth gives $$ \begin{equation} P = \frac{1}{2\pi} \int_0^{2\pi}d\Psi Dv_\Psi = \frac{1}{2\pi} \int_0^{2\pi} d\Psi \rho c(\Omega r+v\sin \Psi )^3dr/2 \label{eq:profpwr1} \end{equation}.$$ From here the arithmetic is tedious. One way to solve this is to expand the cubed sum to get a sum of terms with powers of \(\sin^n \Psi\). Anti-derivatives of such terms can be looked up and then evaluated at \(2\pi\) and \(0\) to give the following $$ \begin{equation} P = dr \rho c \Omega^3 r^3 (1 + \frac{3v^2}{2 \Omega^2 r^2} ) / 2 \label{eq:prof2} \end{equation}. $$ To compute profile power for the entire rotor, Equation \eqref{eq:prof2} can be multiplied by the number of blades and integrated from the blade root to \(r=R\). However, Equation \eqref{eq:prof2} is enough to see the trends. Profile power increases with the square of the helicopter airspeed \(v\) and is minimum in hover, where \(v=0\). Recall that this was assuming a constant drag coefficient. At large speeds, portions of the blade suffer from compressibility effects when advancing (\(\Psi \approx 90^o\)) and stall when retreating (\( \Psi \approx 270^o \)), causing \(c\) and therefore \(P\) to increase further. Parasite Power Parasite power is the power required to pull the helicopter (minus the rotors) through the air. This is associated with drag on the fuselage, skids and tail surfaces. At low speeds this is negligible, but this drag increases with the square of the airspeed so that the power increases with the cube of the airspeed. A simple method used in many calculations is to provide a table of fuselage drag, normalized by dynamic pressure \(\rho v^2/2\). These values may be provided in a 2D table, as a function of fuselage angle of attack \(\alpha\) and sideslip \(\beta\): \(c=c(\alpha ,\beta )\). Since the fuselage drag is then \(D=\rho v^2 c/2\), the parasite power becomes \(P=Dv=\rho v^3 c/2\). Induced Power Induced power is what we alluded to in the hover performance section above. The velocity of air thrown down by the main rotor is called induced velocity. Its magnitude can be estimated using momentum theory. It causes the net direction of air movement relative to a main rotor blade to be tilted down from the horizontal. The diagram below shows a cross section of a rotor blade and the velocity of air relative to it: the left blue arrow labeled V (total). Lift is perpendicular to this velocity and hence it's not directly upward, but it's partially tilted rightward in the diagram. The horizontal component of that lift vector (dashed line near the top of the diagram) is called induced drag and pushes back against the rotor rotation. The power required to overcome this component of lift (keep the rotor turning at full speed) is induced power. As you might expect from earlier discussion, this power is largest at hover and low speed and reduces at higher speeds where the rotor accesses "cleaner" air (with less induced velocity). Miscellaneous Power Miscellaneous power includes losses in the drive system, power required for avionics, hydraulic pumps, the tail rotor, etc. This is normally less than 20% of the total power required.
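As a rough numerical sketch of the profile and parasite power formulas above, the snippet below integrates Equation \eqref{eq:prof2} along the blade span (under the same constant-chord, constant-drag-coefficient assumption) and evaluates \(P=\rho v^3 c/2\) for a fixed normalized fuselage drag. All numbers are illustrative placeholders rather than values for a real aircraft.

```python
import numpy as np

RHO_SL = 1.225  # sea-level air density, kg/m^3 (assumed)

def profile_power(v, omega, radius, chord, cd0, n_blades, root_cut=0.1):
    """Rotor profile power: Equation (2) integrated along the span and summed
    over the blades, assuming constant chord and drag coefficient."""
    r = np.linspace(root_cut * radius, radius, 200)
    c = chord * cd0                                  # chord times drag coefficient
    dP = 0.5 * RHO_SL * c * omega**3 * r**3 * (1.0 + 1.5 * v**2 / (omega * r) ** 2)
    return n_blades * np.trapz(dP, r)                # W

def parasite_power(v, flat_plate_area):
    """Parasite power P = 0.5 * rho * v^3 * f, with f the fuselage drag
    normalized by dynamic pressure (a single number standing in for the
    c(alpha, beta) table described above)."""
    return 0.5 * RHO_SL * v**3 * flat_plate_area     # W

# Illustrative numbers only, not a specific helicopter
for v in (0.0, 30.0, 60.0):                          # airspeed in m/s
    print(v,
          profile_power(v, omega=40.0, radius=5.5, chord=0.3, cd0=0.008, n_blades=4),
          parasite_power(v, flat_plate_area=1.2))
```

Running it shows the trends discussed above: profile power grows with the square of airspeed while parasite power grows with its cube and dominates at high speed.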
Future Business Journal Foreign direct investment and poverty reduction in sub-Saharan Africa: does environmental degradation matter? James Temitope Dada1 & Taiwo Akinlo ORCID: orcid.org/0000-0002-1415-42412 Future Business Journal volume 7, Article number: 21 (2021) Cite this article This paper investigates the threshold effect of environmental degradation on the FDI-poverty nexus in sub-Saharan Africa for the period 1986–2018. The study used panel threshold regression for the empirical analysis. The evidence from threshold regression using different measures of poverty and environmental degradation shows that the poverty reduction effect of FDI is not eroded by environmental degradation. The study found overwhelming evidence that at the higher level of environmental degradation, FDI contributes significantly to poverty reduction except when Household final consumption is used to proxy poverty and FDI produces an insignificant effect on poverty reduction at the higher level of methane emissions and nitrous oxide emission. Based on this finding, any attempts to reduce environmental degradation by reducing the inflow of FDI will worsen poverty rates in the region. In the theoretical literature, foreign direct investment (FDI) is expected to reduce poverty through an increase in economic growth. FDI complements domestic investment by providing much-needed financial capital, transfer of valuable technology and know-how through its externalities. However, despite the positive effect of FDI on poverty reduction through economic growth in theoretical studies, previous empirical studies have produced ambiguous results as some studies found a positive relationship (e.g., [3, 10, 21, 29, 42, 53, 62, 67, 69]), while some studies (e.g., [2, 28, 43, 63] found an inverse relationship and Ogunniyi and Igberi [46] found no beneficial effect of foreign direct investment in reducing poverty especially in developing countries which has continually record increase in poverty level despite the massive inflow of foreign direct investment. As noted by Dhrifi et al. [18], most studies that examined the nexus between FDI and poverty reduction focused on the growth channel which could be responsible for the mixed result in the literature. The foreign direct investment comes with some negative externalities, of which one of them is environmental pollution [60, 74]. For instance, the negative externalities in terms of production and consumption pollution could worsen environmental quality, economic growth and increase poverty level [18]. This argument suggests that there is a need to consider the role of environmental quality in the FDI-poverty nexus in sub-Saharan Africa as the interdependence between the variables has not received much attention either from academia or policymakers. The impact of FDI on poverty reduction might significantly be influenced by environmental degradation. That is, FDI can reduce or increase poverty depending on the level of environmental quality. A strong environmental regulation that guarantees sound environmental quality will ensure that the multinational corporations in the host country adopt environmentally friendly technologies, clean energy and adopt best international practices which will protect the environment from degradation and hence reduce poverty. This implies that the poverty reduction effect of FDI can be eroded by the decrease in environmental quality in the long run. 
Likewise, a weak environmental quality will worsen environmental pollution and thereby increase the poverty rates if the attention is focused only on the inflow of FDI without any policies that can protect the environment and the people from unhealthy activities of the multinational corporation. Based on these premises, there could exist a threshold level of environmental quality where below the threshold level, FDI will significantly worsen the poverty reduction drive of developing countries and above it, FDI will significantly enhance poverty reduction. In addition, empirical evidence from Wang and Liu [71] revealed that there are double threshold levels of environmental regulation and when environmental regulation is between the two thresholds, foreign direct investment reduces poverty in two out of three Chinese regions. Evidence from the literature shows that many studies (e.g., [4, 19, 20, 23, 33]) have examined the FDI-poverty reduction nexus in sub-Saharan Africa, but no study has examined the threshold effect of environmental quality on the relationship between FDI and poverty reduction. Thus, this study intends to contribute to the existing studies by investigating the threshold effect of environmental degradation on the relationship between FDI and poverty reduction. This study is very important particularly in sub-Saharan Africa as knowing the threshold point where above and below FDI reduces or increases poverty, respectively, will help researchers, governments and policymakers to formulate policies that will attract FDI, protect the environment and as well reducing the level of poverty. The contributions of this paper are in fourfold. First, rather than focusing on the relationship between FDI and poverty reduction, the study seeks to investigate the threshold effect of environmental degradation on the relationship between FDI and poverty reduction. Second, aside from estimating the threshold values, we employ the threshold regression that permits the classification of our observations relative to whether or not they exceed the threshold values so that the exact effect of FDI on poverty reduction can easily be obtained for both when the region is below and above the threshold. Third, we use diverse robust variables to measure environmental degradation and poverty reduction for sensitivity analysis and to provide robust findings that will inform adequate policies. For instance, environmental degradation is measured by carbon dioxide emissions per capita, methane emission and nitrous oxide emission while for poverty, household final consumption expenditure, life expectancy and human development index are used as measures. Lastly, the study focuses on a region (sub-Saharan Africa) that has witnessed a massive inflow of foreign direct investment in the past decades and also, increase in her poverty rate. As noted by World Data Lab's Global Poverty Ranking [73], the number of extreme poor rose from 279 million in 1990 to over 400 million and 600 million in 2015 and 2019, respectively, signifying over 70% of the people living in sub-Saharan African countries lives in extreme poverty. Contrarily, the region witnessed an increase in the inflow of foreign direct investment to the tune of $42 billion and $46 billion in the year 2017 and 2018, respectively [68]. The rest of the paper is structured as follows: "Literature review" section consists of the methodology and data. "Methods" sections deals with empirical analysis. "Results" section presents the discussion of findings. 
"Discussion" section outlines the conclusion and policy recommendations. Empirical studies on the threshold effect of environmental degradation in the nexus between foreign direct investment and poverty are still scare; however, there are studies that examine the relationship among foreign direct investment, poverty and environmental degradation. In this regard, Wang and Huifang [70] applies provincial panel data from 2000 to 2014 to investigate the impact of foreign direct investment and environmental regulation on environmental pollution in China. Applying panel corrected standard error as the estimating technique, the result reveals that stricter environmental regulation abates environmental pollution in all the regions. Furthermore, foreign direct investment reduces environmental pollution in eastern and central regions, but it spurs pollution in the western region. The authors therefore conclude that there is evidence of double-threshold effects of environmental regulation on the effects of FDI on environmental pollution in each province considered. In a more recent study, Dhrifi et al. [18], in a panel of 98 developing countries, examine the interrelationship among FDI, carbon dioxide (CO2 emission and poverty, using simultaneous equation models between 1995 and 2017. Dividing the sample into sub-sample of Asia, Africa and Latin America, the authors found a two-way causality between foreign direct investment and poverty; and CO2 emission and poverty. Further, the authors conclude that FDI and poverty are negatively correlated for all the regions apart from the African region. Meanwhile, study by Rizk and Slimane [55] examines the relationship between poverty and carbon dioxide (CO2) emission in 146 nations over the period 1996 and 2014 using institutions as the moderating variable. Applying three-stage least squares as the estimation technique, the authors find that nonlinear relationship between poverty and carbon dioxide emissions worsen poverty and environmental degradation. However, strong institutional quality strengthens environmental quality and leads to reduction in poverty. Furthermore, Khan [35] explores the relationship between poverty and environment nexus in Pakistan. The author finds that the popular belief that poverty worsens environmental degradation cannot be supported by empirical evidence. Rather, the findings show that environmental degradation hurts the poor more. Besides, Gohou and Soumare [23] examine the direct relationship between foreign direct investment and welfare (poverty reduction) in 52 African countries from 1990 to 2007. Outcomes of the study show positive and significant relationship between foreign direct investment and poverty reduction. Furthermore, the authors conclude that the effect of foreign direct investment on poverty reduction is more felt among poorer countries than wealthier countries. Still in light of direct relationship between foreign direct investment and poverty, Muhammad et al. [45] found that foreign direct investment contributes significantly to poverty reduction in Pakistan using ARDL bound testing approach between the period 1985–2016. Similarly, Tsaurai [66] investigate the complementarity between foreign direct investment and natural resources in reducing poverty in southern and western African countries from 2002 to 2012. 
Using different methodologies and different proxies for poverty, findings from the study show that the interaction between foreign direct investment and natural resources reduces the poverty level in African countries. Contrary to the position in the theoretical literature that foreign direct investment has a positive effect in reducing poverty, Huang et al. [28], Ali et al. [2], Tsai and Huang [65] and Akinmulegun [1], among others, conclude in their studies that foreign direct investment worsens the poverty position of a country. This study therefore deviates from extant studies that have examined the relationship among foreign direct investment, environmental degradation and poverty by determining the optimum level of environmental degradation beyond which foreign direct investment will translate to poverty reduction in African countries. Model specification To investigate whether the effect of foreign direct investment on poverty reduction is affected by environmental degradation in sub-Saharan Africa, we adopt a panel threshold estimator. Hansen [26] developed a panel threshold estimator, but this threshold estimator is appropriate for static and balanced panels only. Due to persistence, which is usually common to macroeconomic variables, we consider a dynamic panel framework more suitable. To examine nonlinearity in dynamic panel data, Bick [6] and Kremer et al. [38] proposed a dynamic panel threshold estimator which is an extension of the threshold models by Hansen [26, 27], and Caner and Hansen [11]. We, therefore, follow this methodology proposed by Kremer et al. [38] to investigate the threshold effect of environmental degradation on the foreign direct investment–poverty reduction nexus. The model is specified as, $$\begin{aligned} {\text{Pov}}_{it} & = \mu_{i} + \delta_{1} I\left( {\text{env}}_{it} \le \gamma \right) + \beta_{1} \left( {\text{fdi}}_{it} \right)I\left( {\text{env}}_{it} \le \gamma \right) \\ & \quad + \beta_{2} \left( {\text{fdi}}_{it} \right)I\left( {\text{env}}_{it} > \gamma \right) + \sigma_{1} {\text{pov}}_{it - 1} + \sigma_{2} X_{it} + \varepsilon_{it} \end{aligned}$$ where \({\text{pov}}_{it}\) stands for poverty for country i at period t. fdi represents foreign direct investment, env stands for environmental quality, \({\text{pov}}_{it - 1}\) is the lagged poverty which is also the right-hand side endogenous variable. \(X_{it}\) represents the control variables. \(\mu_{i}\) indicates country-specific fixed effects. I(.) is an indicator function and depending on whether the threshold variable is larger or smaller than \(\gamma\), it divides the observations into two regimes distinguished by differing regression slopes [16], \(\beta_{1}\) and \(\beta_{2}\). \(\delta_{1}\) is the regime intercept which is the same for all individuals. Data and measurement of variables Poverty reduction is the dependent variable and is measured by three indicators. The first measure is household final consumption expenditure per capita (HCE) while life expectancy (LEX) and human development index (HDI) [15, 18, 23, 42, 69] are the second and third measures. The human development index (HDI) was instituted by the UNDP to indicate a deeper and sustainable poverty reduction perspective that takes life expectancy, education, and standard of living into consideration. Data on household final consumption as well as that of life expectancy are obtained from the World Bank Development Indicator [64].
We source data on the Human Development Index from the Africa Development Indicators (2019). Likewise, we measure environmental degradation through three indicators. Carbon dioxide (CO2) emission per capita is our first measure of environmental degradation; it is the most common proxy for environmental degradation in the literature. For instance, several studies, such as Kivyiro and Arminen [39] and Ren et al. [52], as well as recent studies like Sarkodie and Strezov [57] and Dhrifi et al. [18], proxied environmental degradation by carbon dioxide (CO2). However, for this study to provide a clear understanding of the diverse dynamics of environmental degradation, we use methane emissions (kt of CO2 equivalent) and nitrous oxide emissions (thousand metric tons of CO2 equivalent) as additional measures of environmental degradation. We use nitrous oxide emissions and methane emissions as alternative measures of environmental degradation because the GISS Surface Temperature Analysis [22] and the Intergovernmental Panel on Climate Change [30] classify them among the sources of global warming. Kubicova [40] and Opoku and Boachie [48] also used nitrous oxide emissions and methane emissions to measure environmental degradation. The three proxies of environmental degradation are obtained from the World Development Indicators [64]. Foreign direct investment (FDI) is obtained from the World Development Indicators [64] and is measured as the average of FDI net inflows to GDP. We include the following control variables: trade openness, inflation, financial development and corruption. The choice of control variables is based on their relevance and roles in the FDI–poverty nexus. These control variables are also considered in the literature as important variables in explaining poverty reduction. Trade openness is the sum of exports and imports (as % of GDP). Inflation is measured as the annual percentage change of the consumer price index. Financial development is measured by domestic credit to the private sector. Corruption is an assessment of corruption within the political system. It concerns possible corruption in the form of extreme sponsorship, favouritism, job reservations, 'favour-for-favours', secret party funding, and questionable links between politics and business. Corruption control is rated between 0 and 6: a score of 6 points indicates a low level of corruption, while 0 points indicates a high level of corruption. Data on corruption are obtained from the International Country Risk Guide [31], while data on trade openness, financial development and inflation are obtained from the World Development Indicators [64]. This study includes 39 sub-Saharan African countries and spans the period 1986 to 2018. We present the list of the countries included in the study in the "Appendix". The choice of the sample size and time frame of this study is dictated by data availability. The descriptive statistics of the variables are presented in Table 1.

Table 1 Descriptive statistics

We present the results of Eq. 1 in Tables 2, 3, and 4. In Table 2, we use household final consumption expenditure as the proxy for poverty reduction. We use life expectancy and the human development index to proxy poverty reduction in Tables 3 and 4, respectively. In each table, we use carbon dioxide emission as the measure of environmental degradation in the first column.
In the second column, we use methane emissions as the measure of environmental degradation, while in the third column we use nitrous oxide emissions.

Table 2 Results of the dynamic threshold effect of environmental degradation on foreign direct investment and poverty reduction (dependent variable: household final consumption expenditure)
Table 3 Results of the dynamic threshold effect of environmental degradation on foreign direct investment and poverty reduction (dependent variable: life expectancy)
Table 4 Results of the dynamic threshold effect of environmental degradation on foreign direct investment and poverty reduction (dependent variable: human development index)

Starting with Table 2, we estimate threshold values of 1.105 for carbon dioxide emission, and 3.720 and 3.197 for methane emission and nitrous oxide emission, respectively. Below the threshold level in the first column, we find that the coefficient of foreign direct investment is negative but insignificant. However, above the threshold level, the coefficient of foreign direct investment is positive and significant. This indicates that, above the threshold level, FDI enhances poverty reduction. This is contrary to what is obtained in the second column, where methane emission is used as the threshold variable: below the threshold level, foreign direct investment significantly enhances poverty reduction, while above the threshold level it produces no effect on poverty. In the third column, where nitrous oxide emission is used to proxy environmental degradation, foreign direct investment fails to significantly impact poverty both below and above the estimated threshold value. Regarding the regime intercept \(\delta_{1}\), Bick [6] states that it signifies that the difference in the regime intercepts is not individual-specific but the same for all cross-sections. This indicates that the growth rate of poverty reduction within the same regime is identical across countries. The inclusion of the regime intercept \(\delta_{1}\) in the model also reduces omitted-variable bias from a statistical perspective. The coefficients of the regime intercept \(\delta_{1}\) are positive and significant in all the models. Turning to the control variables, the coefficient of inflation is positive but insignificant in all the models, indicating that inflation had no effect on poverty during the study period. Financial development significantly contributed to poverty reduction, as its coefficient is significant at 1% in all the models. This is in line with Sehrawat and Giri [58], Boukhatem [8], Rewilak [54] and Keho [36], who found that financial development enhances poverty reduction. Trade openness produces a positive effect, which means that trade openness contributed to poverty reduction. This is consistent with Onakoya et al. [47], who found that trade openness reduces poverty levels in sub-Saharan Africa. Corruption has no effect on poverty, as its coefficient is insignificant in all the models. Lagged household final consumption expenditure contributes to current final consumption expenditure. In Table 3, where we use life expectancy to proxy poverty reduction, we estimate a threshold value of 0.125 for carbon dioxide emission in the first column. In the second column, a threshold value of 3.278 is obtained for methane emission, while we obtain a threshold value of 3.807 for nitrous oxide emission in the third column.
We find that at both lower and higher levels of carbon dioxide emission, FDI has a positive and significant effect on poverty reduction. However, the coefficient of FDI is larger at the lower level of carbon dioxide emission, which indicates that FDI contributes more to poverty reduction when carbon dioxide emission is lower. Likewise, in the second column, FDI contributes to poverty reduction both below and above the threshold level; however, based on the coefficient values, FDI contributes more to poverty reduction above the threshold level. In the third column, the coefficients of FDI are positive and significant in the two regimes. Comparing the coefficients of FDI in the two regimes, however, the coefficient of FDI is larger above the threshold level. This implies that a higher level of nitrous oxide emissions is not detrimental to the poverty reduction effect of FDI. The regime intercept is positive and significant in all the models. Regarding the control variables, as in Table 2, inflation produces no effect on poverty. Financial development reduces the level of poverty in all models. Trade openness proves to be a major determinant of poverty reduction, as it produces a positive and significant effect on poverty. Corruption fails to significantly impact poverty reduction. Lagged life expectancy contributes positively to current life expectancy. As we indicated earlier, the human development index is used as the measure of poverty reduction in Table 4. From the results, a threshold value of 1.819 is estimated for carbon dioxide emission in the first column. For methane emission in the second column, the estimated threshold value is 4.299, while a threshold value of 3.849 is estimated for nitrous oxide emission in the third column. In the first column, where environmental degradation is measured by carbon dioxide emission, FDI produces a positive but insignificant effect on poverty reduction below the threshold level, but the coefficient of FDI becomes significant above the threshold value. This means that FDI enhances poverty reduction above the threshold value. The same result is obtained in the second column, where we measured environmental degradation by methane emission: below the threshold level, the impact of FDI on poverty reduction is positive but insignificant, whereas above the threshold level the coefficient of FDI is positive and significant, indicating that FDI significantly contributed to poverty reduction above the threshold level. A slightly different result is obtained in the third column, where environmental degradation is proxied by nitrous oxide emission. Below the threshold value, the coefficient of FDI is negative but insignificant; however, above the threshold level, FDI significantly contributed to poverty reduction. These findings show that environmental degradation is not harmful to the poverty reduction effect of FDI. As in the earlier tables, inflation and corruption produce an insignificant effect on poverty, whereas financial development is beneficial to poverty reduction. The coefficient of trade openness remains positive and significant, as in the other tables. The coefficient of the lagged human development index is positive and significant, suggesting that the lagged human development index enhances the current human development index. We find that a higher level of carbon dioxide emission enhances the poverty reduction effect of FDI. This implies that carbon dioxide emission is not eroding the benefit of FDI in reducing poverty.
Evidence from our findings shows that the influence of methane emission on the FDI–poverty nexus is sensitive to the poverty measure used. For instance, when household final consumption expenditure is used to measure poverty, FDI promotes poverty reduction when methane emission is below the threshold. But when life expectancy is used as a proxy for poverty, FDI contributes more to poverty reduction when methane emission is above the threshold value. In contrast to the case where household final consumption is used to proxy poverty, FDI contributes to poverty reduction above the threshold level of methane emission when the human development index is used as the proxy. This indicates that while lower methane emission favours the poverty reduction effect of FDI when household final consumption is used as the proxy for poverty, higher methane emission favours the poverty reduction effect of FDI when poverty is proxied by life expectancy or the human development index. Likewise, the role of nitrous oxide emission in the FDI–poverty nexus depends on the poverty proxy used. When household consumption expenditure is used as the proxy, FDI produces no effect on poverty, contrary to the results for the other measures of environmental degradation. However, with life expectancy and the human development index as proxies for poverty, we find that FDI reduces the rate of poverty when nitrous oxide emission is above the threshold. Generally, we find overwhelming evidence that environmental degradation is not harmful to the poverty reduction effect of FDI. This suggests that the inflow of FDI might not contribute to environmental degradation as much as is generally believed. This might be a surprise; however, Jugurnath and Emrith [34] found that an increase in the inflow of FDI does not significantly contribute to an increase in the level of CO2 emissions in SIDS countries. It has also been argued that FDI is less polluting than domestic production because it uses new technologies that are cleaner than those of domestic producers; therefore, the inflow of FDI can significantly improve environmental quality. Likewise, Wilhelms [72] and Pigato [49] state that developing countries' ability to attract FDI, maximize its benefits and minimize its risks depends on the effectiveness of their policies and institutions, which suggests that environmental degradation might not be a major issue if appropriate policies are adopted. The study finds that financial development enhances poverty reduction. According to Keho [36], financial development can directly reduce poverty by easing transactions and enabling the poor to access financial services that boost their income. McKinnon [44] stated that even if the poor do not have access to the credit facilities of financial institutions, these institutions still make transaction services and saving opportunities available, which enhances the income of the poor and hence reduces poverty. Shahbaz [59] and Inoue and Hamori [32] claimed that the access of the poor to credit and financial services reinforces their productive assets through the use of productivity-enhancing technologies or investment in education and health, which in turn increases their income and thus lowers the rate of poverty. Aside from the direct effect of financial development on poverty reduction, De Gregorio [17] stated that financial development can also indirectly reduce poverty rates by stimulating economic growth. The effect of inflation on poverty is insignificant in all the estimations.
However, evidence from the literature shows that there are mixed results on the impact of inflation on poverty. Some studies (e.g., [9, 12, 13, 50, 51]) established a positive correlation between inflation and poverty, while other studies produced contrary results (e.g., [7, 14, 56]). Trade openness contributed significantly to poverty reduction in this study. Le Goff and Singh [41] found that trade openness reduces poverty in sub-Saharan African countries that have deep financial sectors and strong institutional quality. Trade openness can reduce poverty by increasing the economic activities, ideas and innovations available to the poor. Theoretically, it has been argued that trade openness can change relative factor prices in favor of the more abundant factor; if a labor surplus is the cause of low income and rising poverty, then an expansion of trade openness will stimulate an increase in labor prices and hence reduce poverty. The relationship between trade openness and poverty is, however, not without controversy in the literature, as there is no consensus on it. For instance, Beck et al. [5] and Kpodar and Singh [37] found no effect of trade openness on poverty, while Guillaumont-Jeanneney and Kpodar [24] found that trade openness reduces the income of the poor and Singh and Huang [61] found that trade openness increases the poverty rate. There is no significant relationship between poverty and corruption in this study. However, corruption is seen as an obstacle to poverty reduction, as it leads to the diversion of scarce resources, preventing the poor from having access to basic social amenities and resources that can enhance their livelihoods. Gupta et al. [25] stated that corruption promotes income inequality and poverty by slowing down economic growth. This study investigates the threshold effect of environmental degradation in the relationship between FDI and poverty in sub-Saharan Africa over the period 1986–2018. To provide valid and robust results, we employed CO2, methane emission and nitrous oxide emission as proxies for environmental degradation. In the same way, household final consumption expenditure, life expectancy and the human development index are used to measure poverty. The study employed the dynamic panel threshold regression proposed by Kremer et al. [38]. The empirical results from the threshold regression showed that environmental degradation is not detrimental to the poverty reduction effect of FDI during the study period. Two implications can be drawn from this finding. First, any policy that seeks to reduce the level of CO2, methane and nitrous oxide emissions by preventing the inflow of FDI into the region will be detrimental to the poverty reduction effect of FDI in the region. Therefore, policies that target a cleaner environment without reducing the inflow of FDI are necessary, as they will also boost the poverty reduction impact of FDI. The introduction of energy-efficient technologies will also help to reduce the level of environmental degradation without limiting the inflow of FDI into the region. Second, a greater inflow of FDI is still needed to address the problem of poverty in the region, which is regarded as the poorest in the world. This is particularly germane because the region lacks the capital required for development, which perpetuates poverty. A greater inflow of FDI will supplement the available domestic capital to transform the economies of the region by increasing employment, production, exports and per capita income.
The study also found that financial development contributes to poverty reduction in the region. This finding implies that more effort is required from policymakers to increase the level of financial deepening so that financial development has a greater impact on poverty reduction in the region. Evidence from past studies indicates that the financial sector in sub-Saharan Africa is among the least developed in the world. Likewise, policies that increase the access of poor people to credit facilities and other financial services will significantly reduce the level of poverty. It is important to note that this study has contributed to the literature by examining the threshold effect of environmental degradation in foreign direct investment-led poverty reduction in sub-Saharan African countries. Further, no known study from sub-Saharan Africa has examined this relationship, which distinguishes the present study. However, this study is limited by the availability of data on other proxies of poverty, such as the poverty headcount ratio and the poverty gap. Further, this study can be extended to other developing countries, especially countries in Latin America and Asia. The datasets used during the current study are available from the corresponding author on reasonable request.
Abbreviations: FDI, foreign direct investment; CO2, carbon dioxide emissions; HDI, human development index; LEX, life expectancy.
Akinmulegun SO (2012) Foreign direct investment and standard of living in Nigeria. J Appl Finance Bank 2(3):295–309 Ali M, Nishat M, Anwar T (2010) Do foreign inflows benefit Pakistan poor? Pak Dev Rev 48(4):715–738. https://doi.org/10.30541/v48i4IIpp.715-738 Agarwal M, Atri P, Kundu S (2017) Foreign direct investment and poverty reduction: India in regional context. South Asia Econ J 18(2):135–157 Anetor FO, Esho E, Verhoef G, Christian N (2020) The impact of foreign direct investment, foreign aid and trade on poverty reduction: evidence from Sub-Saharan African countries. Cogent Econ Finance. https://doi.org/10.1080/23322039.2020.1737347 Beck T, Demirguc-Kunt A, Levine R (2007) Finance, inequality and the poor. J Econ Growth 12:27–49 Bick A (2010) Threshold effects of inflation on economic growth in developing countries. Econ Lett 108(2):126–129 Blank RM, Blinder AS (1985) Macroeconomics, income distribution, and poverty. NBER working paper no.1567, Cambridge, MA Boukhatem J (2015) Assessing the direct effect of financial development on poverty reduction in a panel of low-and middle-income countries. Res Int Bus Finance 37:214–230. https://doi.org/10.1016/j.ribaf.2015.11.008 Braumann B (2004) High inflation and real wages. IMF staff papers, vol 51, no 1. Palgrave Macmillan, pp 1–6 Calvo CC, Hernandez MA (2006) Foreign direct investment and poverty in Latin America. Leverhulme Centre for Research on Globalization and Economic Policy, University of Nottingham Caner M, Hansen BE (2004) Instrumental variable estimation of a threshold model. Econom Theor 20(5):813–843 Cardoso E (1992) Inflation and Poverty. NBER working paper. no. 4006. Cambridge, MA Chauhary TT, Chaudhary AA (2008) The effects of rising food and fuel costs on poverty in Pakistan. The Lahore Journal of Economics, Special Edition (September), Lahore, Pakistan, pp 117–138 Cutler DM, Katz LF (1991) Macroeconomic performance and the disadvantaged. Brook Pap Econ Act 22(2):1–74 Dada JT, Fanowopo O (2020) Economic growth and poverty reduction: the role of institutions. Ilorin J Econ Policy 7(1):1–15 Dada JT, Abanikanda EO (2019) How important is oil revenue in Nigerian growth process?
Evidence from a threshold regression. Int J Sustain Econ 11(4):364–377. https://doi.org/10.1504/IJSE.2019.10025140d De Gregorio J (1996) Borrowing constraints, human capital accumulation and growth. J Monet Econ 37:49–71 Dhrifi A, Jaziri R, Alnahdi S (2019) Does foreign direct investment and environmental degradation matter for poverty? Evidence from developing countries. Struct Change Econ Dyn. https://doi.org/10.1016/j.strueco.2019.09.008 Fauzel S, Seetanah B, Sannassee RV (2015) Foreign direct investment and welfare nexus in sub Saharan Africa. J Dev Areas 49(4):271–283 Fowowe B, Shuaibu MI (2014) Is foreign direct investment good for the poor? New evidence from African countries. Eco Change Restruct 47:321–339 Ganić M (2019) Does foreign direct investment (FDI) contribute to poverty reduction? Empirical evidence from Central European and Western Balkan countries. Sci Ann Econ Bus 66(1):15–27 GISS Surface Temperature Analysis (2019) GISTEM version is v3. https://data.giss.nasa.gov/gistemp Gohou G, Soumare I (2012) Does foreign direct investment reduce poverty in Africa and are there regional differences? World Dev 40:75–95. https://doi.org/10.1016/j.worlddev.2011.05.014 Guillaumont-Jeanneney S, Kpodar K (2011) Financial development and poverty reduction: can there be a benefit without a cost? J Dev Stud 47(1):143–163 Gupta S, Davoodi H, Alonso-Terme R (1998) Does corruption affect income inequality and poverty? IMF working paper no. 98/76, Available at SSRN: https://ssrn.com/abstract=882360 Hansen BE (1999) Threshold effects in non-dynamic panels: estimation, testing, and inference. J Econom 93(2):345–368 Hansen BE (2000) Sample splitting and threshold estimation. Econometrica 68(3):575–603 Huang C, Teng K, Tsai P (2010) Inward and outward foreign direct investment and poverty reduction: East Asia versus Latin America. Rev World Econ 146(4):763–779 Hung TT (1999) Impact of foreign direct investment on poverty reduction in Vietnam. ID Program, GRIPS Intergovernmental Panel on Climate Change (2014) Summary for policymakers. In: Climate Change 2014: mitigation of climate change. Contribution of working group III to the fifth assessment report of the intergovernmental panel on climate change. Cambridge University Press, UK and New York, NY, USA International Country Risk Guide (ICRG) Researchers (2019) International country risk guide (ICRG) Researchers Dataset. Harvard Dataverse, V8. https://doi.org/10.7910/DVN/4YHTPU Inoue T, Hamori S (2012) How has financial deepening affected poverty reduction in India? Empirical analysis using state-level panel data. Appl Financial Econ 22(5):395–408 Jemiluyi OO, Dada JT (2018) Market size and foreign direct investment in Sub-Saharan Africa: the role of education. Jurnal Perspektif Pembiayaan dan Pembangunan Daerah (J Perspect Financing Reg Dev) 6(1):21–30. https://doi.org/10.22437/ppd.v6i1.5140 Jugurnath B, Emrith A (2018) Impact of foreign direct investment on environment degradation: evidence from SIDS countries. J Dev Areas 52(2):13–26 Khan H (2008) Poverty, environment and economic growth: exploring the links among three complex issues with specific focus on the Pakistan's case. Environ Dev Sustain 10:913–929. https://doi.org/10.1007/s10668-007-9092-5 Keho Y (2017) Financial development and poverty reduction: evidence from selected African countries. Int J Financial Res 8:90–98 Kpodar K, Singh R (2011) Does financial structure matter for poverty? Evidence from developing countries. 
World Bank policy research working paper, WPS5915 Kremer S, Bick A, Nautz D (2013) Inflation and growth: new evidence from a dynamic panel threshold analysis. Empir Econ 44(2):861–878 Kivyiro P, Arminen H (2014) Carbon dioxide emissions, energy consumption, economic growth, and foreign direct investment: causality analysis for Sub-Saharan Africa. Energy 74(C):595–606 Kubicova J (2014) Testing greenhouse gasses in Slovakia for environmental Kuznets curve and pollution haven hypothesis. J Int Stud 7(2):161–177 Le Goff M, Singh RJ (2014) Does trade reduce poverty? A view from Africa. J. Afr Trade 1(1):5–14. https://doi.org/10.1016/j.joat.2014.06.001 Magombeyi MT, Odhiambo NM (2017) Causal relationship between FDI and poverty reduction in South Africa. Cogent Econ Finance 5:1357901. https://doi.org/10.1080/23322039.2017.1357901 Meyer KE, Sinani E (2009) When and where does foreign direct investment generate positive spillover? A meta-analysis. J Int 40:1075–1094 McKinnon RI (1973) Money and capital in economic development. Brooking Institution, Washington Muhammad BK, Xie H, Hummera S (2019) Direct impact of inflow of foreign direct investment on poverty reduction in Pakistan: a bonds testing approach. Econ Res Ekon Istraž 32:3647–3666. https://doi.org/10.1080/1331677X.2019.1670088 Ogunniyi MB, Igberi CO (2014) The impact of foreign direct investment on poverty reduction in Nigeria. J Econ Sustain Dev 5(14):73–89 Onakoya A, Johnson B, Ogundajo G (2019) Poverty and trade liberalization: empirical evidence from 21 African countries. Econ Res Ekon Istraž 32(1):635–656. https://doi.org/10.1080/1331677X.2018.1561320 Opoku EEO, Boachie MK (2020) The environmental impact of industrialization and foreign direct investment. Energy Policy 137:111178 Pigato MA (2001) The foreign direct investment environment in Africa. World Bank Africa region working paper series no. 15 Powers ET (1995) Inflation, unemployment, and poverty revisited. Fed Reserve Bank Clevel Econ Rev 31(3):2–13 Ravallion M (1998) Reform, food prices and poverty in India. Econ Polit Wkl 33(1/2):29–36 Ren S, Yuan B, Ma X, Chen X (2014) International trade, FDI (foreign direct investment) and embodied CO2 emissions: a case study of Chinas industrial sectors. China Econ Rev 28:123–134 Reiter SL, Steensma HK (2010) Human development and foreign direct investment in developing countries: the influence of foreign direct investment policy and corruption. World Dev 38:1678–1691 Rewilak J (2017) The role of financial development in poverty reduction. Rev Dev Finance 7(2017):169–176 Rizk R, Slimane MB (2018) Modelling the relationship between poverty, environment, and institutions: a panel data study. Environ Sci Pollut Res 25(3):31459–31473 Romer CD, Romer DH (1999) Monetary policy and the well-being of the poor. Econ Rev 84(Q1):21–49 Sarkodie SA, Strezov V (2019) Effect of foreign direct investments, economic development and energy consumption on greenhouse gas emissions in developing countries. Sci Total Environ 646:862–871 Sehrawat M, Giri AK (2016) Financial development, poverty and rural-urban income inequality: evidence from South Asian countries. Qual Quant Int J Methodol 50(2):577–590 Shahbaz M (2009) Financial performance and earnings of poor people: a case study of Pakistan. J Yasar Univ 4:2557–2572 Shahbaz M, Nasreen S, Abbas F, Anis O (2015) Does foreign direct investment impede environmental quality in high-, middle-, and low-income countries? 
Energy Econ 51:275–287 Singh R, Huang Y (2011) Financial deepening, property rights, and poverty: evidence from sub-Saharan Africa. IMF working paper, WP/11/196. International Monetary Fund, Washington Soumare I (2015) Does Foreign Direct Investment Improve Welfare in North Africa? Africa Development Bank Sumner A (2005) Is foreign direct investment good for the poor? A review and stock take. Dev Pract 15:269–285 The World Bank (2019) World development indicators. The World Bank, Washington, DC Tsai P, Huang C (2007) Openness, growth and poverty: the case of Taiwan. World Dev 35:1858–1871. https://doi.org/10.1016/j.worlddev.2006.11.013 Tsaurai K (2018) Investigating the impact of foreign direct investment on poverty reduction efforts in Africa. Rev Galega Econ 27(2):139–154 Ucal MS (2014) Panel data analysis of foreign direct investment and poverty from the perspective of developing countries. Soc Behav Sci 109:1101–1105 United Nation Conference on Trade and Development (UNCTAD) (2019) World investment report 2019 Uttama NP (2015) Foreign direct investment and the poverty reduction nexus in Southeast Asia. Economic studies in inequality, social exclusion, and well-being. In: Heshmati A, Maasoumi E, Wan G (eds) Poverty reduction policies and practices in developing Asia, 127th ed, pp 281–298 Wang H, Huifang L (2019) Foreign direct investment, environmental regulation, and environmental pollution: an empirical study based on threshold effects for different Chinese regions. Environ Sci Pollut Res 26:5394–5409. https://doi.org/10.1007/s11356-018-3969-8 Wang H, Liu H (2019) Foreign direct investment, environmental regulation, and environmental pollution: An empirical study based on threshold effects for different Chinese regions. Environ Sci Pollut Res 26(6):5394–5409 Wilhelms SKS (1998) Foreign direct investment and its determinants in emerging economies. African economic policy paper discussion paper number 9, USAID World Data Lab's Poverty Clock (2018) www.worlddata.io/portfolio/world-poverty-clock Zhang YJ (2011) The impact of financial development on carbon emissions: an empirical analysis in China. Energy Policy 39:2197–2203 This work does not receive any funding. Department of Economics, Obafemi Awolowo University, Ile-Ife, Nigeria James Temitope Dada Department of Economics, Adeyemi College of Education, Ondo, Nigeria Taiwo Akinlo JTD writes the introduction and proofreading the study on Foreign Direct Investment and Poverty Reduction in Sub-Saharan Africa: Does Environmental Degradation Matter? TA analyzed and interpreted the results for the study. Both authors read and approved the final manuscript. Correspondence to Taiwo Akinlo. See Table 5. Table 5 List of countries Dada, J.T., Akinlo, T. Foreign direct investment and poverty reduction in sub-Saharan Africa: does environmental degradation matter?. Futur Bus J 7, 21 (2021). https://doi.org/10.1186/s43093-021-00068-7 Threshold regression
Order and stochasticity in the folding of individual Drosophila genomes Sergey V. Ulianov1,2 na1, Vlada V. Zakharova1,2,3 na1, Aleksandra A. Galitsyna ORCID: orcid.org/0000-0001-8969-56944 na1, Pavel I. Kos5 na1, Kirill E. Polovnikov4,6, Ilya M. Flyamer ORCID: orcid.org/0000-0002-4892-42087, Elena A. Mikhaleva8, Ekaterina E. Khrameeva ORCID: orcid.org/0000-0001-6188-91394, Diego Germini3, Mariya D. Logacheva4, Alexey A. Gavrilov1,9, Alexander S. Gorsky10,11, Sergey K. Nechaev12,13, Mikhail S. Gelfand4,10, Yegor S. Vassetzky3,14, Alexander V. Chertovich5,15, Yuri Y. Shevelyov ORCID: orcid.org/0000-0002-0568-92368 & Sergey V. Razin ORCID: orcid.org/0000-0003-1976-86611,2 Nature Communications volume 12, Article number: 41 (2021) Cite this article Chromatin analysis Chromatin structure Transcriptional regulatory elements Mammalian and Drosophila genomes are partitioned into topologically associating domains (TADs). Although this partitioning has been reported to be functionally relevant, it is unclear whether TADs represent true physical units located at the same genomic positions in each cell nucleus or emerge as an average of numerous alternative chromatin folding patterns in a cell population. Here, we use a single-nucleus Hi-C technique to construct high-resolution Hi-C maps in individual Drosophila genomes. These maps demonstrate chromatin compartmentalization at the megabase scale and partitioning of the genome into non-hierarchical TADs at the scale of 100 kb, which closely resembles the TAD profile in the bulk in situ Hi-C data. Over 40% of TAD boundaries are conserved between individual nuclei and possess a high level of active epigenetic marks. Polymer simulations demonstrate that chromatin folding is best described by the random walk model within TADs and is most suitably approximated by a crumpled globule build of Gaussian blobs at longer distances. We observe prominent cell-to-cell variability in the long-range contacts between either active genome loci or between Polycomb-bound regions, suggesting an important contribution of stochastic processes to the formation of the Drosophila 3D genome. The principles of higher-order chromatin folding in the eukaryotic cell nucleus have been disclosed thanks to the development of chromosome conformation capture techniques, or C-methods1,2. High-throughput chromosome conformation capture (Hi-C) studies demonstrated that chromosomal territories were partitioned into partially insulated topologically associating domains (TADs)3,4,5. TADs likely coincide with functional domains of the genome6,7,8, although the results concerning the role of TADs in the transcriptional control are still conflicting6,9,10,11,12. Analysis performed at low resolution suggested that active and repressed TADs were spatially segregated within A and B chromatin compartments13,14. However, high-resolution studies demonstrated that the genome was partitioned into relatively small compartmental domains bearing distinct chromatin marks and comparable in sizes with TADs15. In mammals, the formation of TADs by active DNA loop extrusion partially overrides the profile of compartmental domains15,16. Of note, TADs identified in studies of cell populations are highly hierarchical (i.e., comprising smaller subdomains, some of which are represented by DNA loops5,17). Partitioning of the genome into TADs is relatively stable across cell types of the same species3,4. The recent data suggest that mammalian TADs are formed by active DNA loop extrusion18,19. 
The boundaries of mammalian TADs frequently contain convergent binding sites for the insulator protein CTCF that are thought to block the progression of loop extrusion19,20,21. Contribution of DNA loop extrusion in the assembly of Drosophila TADs has not been demonstrated yet22; thus, Drosophila TADs might represent pure compartmental domains23. Large TADs in the Drosophila genome are mostly inactive and are separated by transcribed regions characterized by the presence of a set of active histone marks, including hyperacetylated histones5,24. Some insulator/architectural proteins are also overrepresented in Drosophila TAD boundaries24,25,26, but their contribution to the formation of these boundaries has not been directly tested. The results of computer simulations suggest that Drosophila TADs are assembled by the condensation of nucleosomes of inactive chromatin24. The current view of genome folding is based on the population Hi-C data that present integrated interaction maps of millions of individual cells. It is not clear, however, whether and to what extent the 3D genome organization in individual cells differs from this population average. Even the existence of TADs in individual cells may be questioned. Indeed, the DNA loop extrusion model considers TADs as a population average representing a superimposition of various extruded DNA loops in individual cells18. Heterogeneity in patterns of epigenetic modifications and transcriptomes in single cells of the same population was shown by different single-cell techniques, such as single-cell RNA-seq27, ATAC-seq28, and DNA-methylation analysis29. Studies performed using FISH demonstrated that the relative positions of individual genomic loci varied significantly in individual cells30. The first single-cell Hi-C study captured a low number of unique contacts per individual cell31 and allowed only the demonstration of a significant variability of DNA path at the level of a chromosome territory. Improved single-cell Hi-C protocols32,33 allowed to achieve single-cell Hi-C maps with a resolution of up to 40 kb per individual cell32,34 and investigate local and global chromatin spatial variability in mammalian cells, driven by various factors, including cell cycle progression33. Of note, TAD profiles directly annotated in individual cells demonstrated prominent variability in individual mouse cells32. The possible contribution of stochastic fluctuations of captured contacts in sparse single-cell Hi-C matrices into this apparent variability was not analyzed32. More comprehensive observations were made when super-resolution microscopy (Hi-M, 3D-SIM) coupled with high-throughput hybridization was used to analyze chromatin folding in individual cells at a kilobase-scale resolution. These studies demonstrated chromosome partitioning into TADs in individual mammalian cells and confirmed a trend for colocalization of CTCF and cohesin at TAD boundaries, although the positions of boundaries again demonstrated significant cell-to-cell variability35. Condensed chromatin domains coinciding with population TADs were also observed in Drosophila cells36,37. In accordance with previous observations made in cell population Hi-C studies24, the obtained results suggested that partitioning of the Drosophila genome into TADs was driven by the stochastic contacts of chromosome regions with similar epigenetic states at different folding levels38. 
Although studies performed using FISH and multiplex hybridization allowed to construct chromatin interaction maps with a very high resolution35, they cannot provide genome-wide information. Here, we present single-nucleus Hi-C (snHi-C) maps of individual Drosophila cells with a 10-kb resolution. These maps allow direct annotation of TADs that appear to be non-hierarchical and are remarkably reproducible between individual cells. TAD boundaries conserved in different cells of the population bear a high level of active chromatin marks supporting the idea that active chromatin might be among determinants of TAD boundaries in Drosophila24. High-resolution single-nucleus Hi-C reveals distinct TADs in Drosophila genome To investigate the nature of TADs in single cells and to characterize individual cell variability in Drosophila 3D genome organization, we performed single-nucleus Hi-C (snHi-C)32 (Fig. 1a) in 88 asynchronously growing Drosophila male Dm-BG3c2 (BG3) cells (Supplementary Fig. 1a) in parallel with the bulk BG3 in situ Hi-C analysis and obtained 2–5 million paired-end reads per single-cell library (for the data processing workflow, see Supplementary Fig. 1b). To select the libraries for deep sequencing, we subsampled the snHi-C data to estimate the expected number of unique contacts that could be extracted from the data (Supplementary Fig. 2a; also see "Methods"). Twenty libraries were additionally sequenced with 16.7–36.5 million paired-end reads, and we extracted 8032–107,823 unique contacts per cell (Supplementary Table 1). We developed a custom pairtools-based approach termed ORBITA (One Read-Based Interaction Annotation) (Fig. 1b) to eliminate artificial contacts generated by spontaneous template switches of the Phi29 DNA-polymerase39,40 (Fig. 1c, d) during the whole-genome amplification (WGA) step (see "Methods"). In contrast to the hiclib32,41 (see "Methods") annotations showing up to 20 contacts per restriction fragment (RF) in a single nucleus, ORBITA detects one or two unique contacts per RF (Fig. 1d, Supplementary Fig. 2b, c). We tested ORBITA by analyzing previously published snHi-C data from murine oocytes32 and found that ORBITA allowed us to filter out artificial junctions in this dataset (Supplementary Fig. 3a). Notably, hiclib and ORBITA detect a similar number of contacts per RF in single-cell Hi-C data obtained without the usage of Phi29 DNA-polymerase33 (Supplementary Fig. 3b). Thus, ORBITA efficiently filters out artificial Phi29 DNA-polymerase-produced DNA chimeras from snHi-C libraries. Fig. 1: ORBITA-processed Hi-C data from single Drosophila nuclei. a Single-nucleus Hi-C protocol scheme (see "Methods" for details). b Workflow of ORBITA function for detection of unique Hi-C contacts. ORBITA processes only chimeric reads with good mapping quality containing ligation junction marked by the cleavage site for restriction enzyme used for the snHi-C map construction. c Scheme of an artefactual DNA chimera formation by Phi29-DNA-polymerase. d Number of unique contacts per restriction fragment (RF) captured by ORBITA (orange) and hiclib (blue) for autosomes and the X chromosome. BG3 is a diploid male cell line; accordingly, in a single nucleus, each RF from autosomes and the X chromosome could establish no more than four and two unique contacts, respectively. Cell 1, autosomes: n = 148,415 and 159,060 for ORBITA and hiclib, respectively; ChrX: n = 22,016 and 26,674 for ORBITA and hiclib, respectively. 
Cell 2: autosomes: n = 113,988 and 119,066 for ORBITA and hiclib, respectively; ChrX: n = 16,384 and 19,429 for ORBITA and hiclib, respectively. e Visualization of a single-nucleus Hi-C map at 1-Mb, 100-kb, and 10-kb resolution for the cell with 107,823 captured unique contacts. f Dependence of the contact probability Pc(s) on the genomic distance s for single nuclei (shades of blue reflect the number of unique contacts captured in individual nuclei), merged snHi-C data (orange), and bulk in situ BG3 Hi-C data (red). Black lines show slopes for Pc(s) = s−1.5 and Pc(s) = s−1. We then constructed snHi-C maps with a resolution of up to 10 kb (Fig. 1e). In single nuclei, the dependence of the contact probability on the genomic distance, Pc(s), has a shape comparable to that observed in the bulk BG3 in situ Hi-C regardless of the number of captured contacts (Fig. 1f), indicating that the key steps of the snHi-C protocol such as fixation, DNA fragmentation, and in situ ligation were performed successfully. To estimate the overall quality of the snHi-C libraries, we first calculated the number of captured contacts per cell. On average, we extracted 33,291 unique contacts from individual nuclei that represented 5% of the theoretical maximum number of contacts and corresponded to four contacts per 10-kb genomic bin (see "Methods"); in the best cell, 17% of contacts were recovered (Fig. 2a, b, Supplementary Table 1). Relying on the number of captured contacts, we then estimated the proportion of the genome available for the downstream analysis. At 10-kb resolution, ~82% of the genome on average was covered with contacts in each individual cell, and 67% of genomic bins established more than 1 contact (Fig. 2c). Notably, in the previously published mouse snHi-C datasets, ~0.6% of theoretically possible contacts were detected on average (Fig. 2b). Because the top-20 mouse snHi-C libraries from Flyamer et al.32 demonstrated a comparable genome coverage with contacts and a number of contacts per 10-kb genomic bin (Fig. 2d), we could directly compare the Drosophila and mouse snHi-C maps (see below). Next, to verify that these sparse snHi-C matrices were not generated by random fluctuations of captured contacts, we calculated the distributions of the contact numbers in sliding non-intersecting windows of different fixed sizes. In contrast to the shuffled maps, these distributions in the original data are distinct from the Poisson shape typical for random matrices (Fig. 2e, see "Methods" and Supplementary Fig. 4). We conclude that the snHi-C maps obtained here are of acceptable quality and indeed reflect specific patterns of spatial contacts in chromatin. Fig. 2: snHi-C datasets in Drosophila represent a major portion of the genome and are not random matrices. a Number of ORBITA-captured contacts per individual nuclei obtained for Drosophila in the current work, compared with mouse oocytes from Flyamer et al.32 and G2 zygotes pronuclei from Gassler et al.34. **p < 0.01 using the Mann–Whitney two-sided test. n = 20, 120, and 32 nuclei for Drosophila in the current work, mouse oocytes from Flyamer et al.32 and G2 zygotes pronuclei from Gassler et al.34, respectively (the same is true for (b) and (c)). b Percentage of recovered contacts out of the total possible for Drosophila in the current work, compared with mouse oocytes from Flyamer et al.32 and G2 zygotes pronuclei from Gassler et al.34. P-values are calculated using the Mann–Whitney two-sided test. 
c Percentage of bins with non-zero coverage for autosomes and sex chromosome of Drosophila, murine oocytes, and G2 zygote pronuclei. Boxplots represent the median, interquartile range, maximum and minimum. d Mean number of contacts per 10-kb genomic bin in top-20 cells in the current work, compared with mouse oocytes from Flyamer et al.32 and G2 zygotes pronuclei from Gassler et al.34. Boxplots represent the median, interquartile range, maximum and minimum. e Distributions of the number of contacts in windows of fixed size (100 kb for the Cell 4, and 400 kb for the Cell 6; chr2R) in snHi-C data and shuffled maps for two individual cells (blue bars). The red curve shows the Poisson distribution expected for an entirely random matrix with the same number of contacts. P-values were estimated by the goodness of fit test. n = 211 and 52 windows for the cell 4 and for the cell 6, respectively. Visual inspection of snHi-C maps revealed distinct 50–200 kb contact domains that closely recapitulated the TAD profile in the bulk BG3 in situ Hi-C data (Fig. 3a). To call TADs in snHi-C data systematically, we used the lavaburst Python package with the modularity scoring function32. For each nucleus, we performed TAD segmentation in snHi-C maps of 10-kb resolution at a broad range of the gamma (γ) master parameter values (Fig. 3b, see "Methods" and Supplementary Fig. 5). Of note, the majority of the identified boundaries were resistant to the data downsampling, indicating that these boundaries did not result from fluctuations of captured contacts in sparse snHi-C matrices (Supplementary Fig. 6). In individual nuclei, we identified 554–1402 TADs with a median size of 60 kb covering 40–76% of the genome at the γ value corresponding to the maximal number of TADs called (γmax). At 10–20 kb resolution, the median size of Drosophila TADs was previously estimated as 100–150 kb5,24,25. To obtain a robust TAD profile, we used γmax/2 corresponding to TADs with a median size equal to that for TADs identified in the Drosophila cell population according to the previously published data24. At γmax/2, we identified 510–1175 TADs with a median size ~90 kb covering up to 89% of the genome in best snHi-C matrices (Supplementary Fig. 5). Fig. 3: Stable TAD boundaries are defined by high level of active epigenetic marks. a Example of a genomic region on Chromosome 2L with a high similarity of TAD profiles (black rectangles) in individual cells and bulk BG3 in situ Hi-C data. Number of unique captured contacts is shown in brackets. Positions of TAD boundaries identified in bulk BG3 in situ Hi-C data (top panel) are highlighted with gray lines. Here and below, TADs are identified using lavaburst software. b Dependence of the contact domain (CD) size (green), genome coverage by CDs (orange), and number of identified CDs (violet) on the γ value in bulk (left) and single-cell (right) BG3 Hi-C data. γ values selected for the calling of sub-TADs (γmax) and TADs (γmax/2) are marked with vertical gray lines. c Percentage of TAD boundaries shared between single cells, bulk BG3 in situ Hi-C, and merged snHi-C data. d Percentage of shared boundaries in real snHi-C, shuffled control maps, and bootstrap expected. Boxplots represent the median, interquartile range, maximum and minimum. **p < 0.01 using the Mann–Whitney two-sided test. n = 380 comparisons between individual cells. e Percentage of shared boundaries in real snHi-C for Drosophila, murine oocytes from Flyamer et al.32 and G2 zygote pronuclei from Gassler et al.34. 
Boxplots represent the median, interquartile range, maximum and minimum. n = 380 comparisons between individual cells. f Heatmaps of active (H3K4me3, RNA Polymerase II) and inactive (H1 histone) chromatin marks centered at single-cell TAD boundaries from different groups (±100 kb). Bulk—conventional BG3 in situ Hi-C; merged—aggregated snHi-C data from all individual cells; stable and unstable—boundaries found in more and in less than 50% of cells, respectively; cell-specific—boundaries identified in any one individual cell; TAD bins—genomic bins from TAD interior; random—randomly selected genomic bins. To additionally validate the single-cell TAD segmentations, we utilized a modification of the recently published42 spectral clustering method based on the non-backtracking random walks (NBT; see "Methods"). The non-backtracking operator is used to resolve communities in sufficiently sparse networks42,43, thus providing a useful tool for TAD annotation in single-cell Hi-C matrices. The method performs dimensionality reduction of the network using the leading eigenvectors of the non-backtracking operator, which has a distinctive disc-shape complex spectrum with a number of isolated eigenvalues on the real axis (Supplementary Fig. 7d). The resulting average size of the detected TADs was 110 kb, closely matching the typical TAD size in the population-averaged data and in the single-cell modularity-derived segmentations. The mean number of detected TADs per cell (855 and 920 for the NBT and modularity, respectively) and single-cell TAD segmentations were remarkably similar between the two methods (Supplementary Fig. 7a) and demonstrated the same epigenetic properties (Supplementary Fig. 7c, see below). Moreover, the modularity-derived TAD boundaries were robust to the data resolution changes. On average, 84.8% of modularity-derived boundaries at the 20-kb resolution and 78.6% of boundaries at the 40-kb resolution have a matching boundary at the 10-kb resolution. This is significantly higher than the 43 and 58% expected at random, respectively. Taken together, these results indicate that TAD profiles are robust and, thus, acceptable for the downstream analysis. TADs are largely conserved in individual Drosophila nuclei, and stable TAD boundaries are enriched with active chromatin We found that TADs tended to occupy similar positions in different cells regardless of the number of captured contacts (Fig. 3a, Supplementary Fig. 8). On average, 46.6% of population-identified TAD boundaries were present in each of the single cells analyzed (Fig. 3c), and 39.5% of boundaries were shared between different cells in pairwise comparisons (Supplementary Fig. 8). This is significantly higher than the percentage of shared boundaries for shuffled control maps (32.9%) and the percentage expected at random (33.1%, Fig. 3d). Notably, 44% of NBT-identified single-cell TAD boundaries were conserved in pairwise cell-to-cell comparisons (Supplementary Fig. 7b), supporting the results obtained in the analysis of modularity-derived TAD boundary profiles. In individual mammalian cells, TADs frequently overpassed the boundaries identified in the cell population, arguing for a substantial degree of stochasticity in genome folding32,35,44. We used the ORBITA algorithm to reanalyze previously published snHi-C data from murine oocytes32 and G2 zygote pronuclei34 and found that 31.2 and 21% of boundaries were shared on average between any two cells, respectively (Fig. 3e, Supplementary Fig. 9). 
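As an illustration of how such pairwise boundary sharing can be quantified, the sketch below assumes that TAD boundary positions have already been called independently in each cell (for example, with lavaburst) and are available as arrays of genomic bin indices; the matching tolerance of one 10-kb bin is an assumption made here for illustration and is not necessarily the exact criterion used in the paper.

import numpy as np

def shared_boundary_fraction(bounds_a, bounds_b, tol_bins=1):
    # Fraction of boundaries in cell A that have a boundary in cell B
    # within +/- tol_bins genomic bins
    bounds_b = np.sort(np.asarray(bounds_b))
    hits = 0
    for b in np.asarray(bounds_a):
        i = np.searchsorted(bounds_b, b)            # closest boundaries in B flank index i
        neighbours = bounds_b[max(i - 1, 0):i + 1]
        if neighbours.size and np.min(np.abs(neighbours - b)) <= tol_bins:
            hits += 1
    return hits / len(bounds_a) if len(bounds_a) else np.nan

def pairwise_sharing(boundaries):
    # boundaries: dict {cell_id: array of boundary bins}
    cells = list(boundaries)
    vals = [shared_boundary_fraction(boundaries[a], boundaries[b])
            for a in cells for b in cells if a != b]
    return np.mean(vals)

Averaging shared_boundary_fraction over all ordered pairs of cells gives the pairwise sharing statistic, and running the same comparison on shuffled contact maps provides the random expectation against which the observed sharing is tested.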
This result is reproduced at 40-kb resolution and persists for a broad range of snHi-C datasets' quality (Supplementary Fig. 10). We conclude that, in Drosophila, TADs have more stable boundaries as compared to mammals. This corroborates recent observations of the Cavalli lab37 and may reflect the differential impact of loop extrusion18,19,34 and internucleosomal contacts24 on TAD formation16,23. Population TADs in Drosophila identified at 10–20 kb resolution mostly correspond to inactive chromatin, whereas their boundaries and inter-TAD regions correlate with highly acetylated active chromatin24,45. These are further partitioned into much smaller domains with the size of about 9 kb25 and, thus, unavailable for the analysis at the resolution of our Hi-C maps. To examine the properties of TAD boundaries at the single-cell level, we divided all TAD boundaries from snHi-C data into three groups according to the proportion of cells where these boundaries were present and analyzed them separately (number of boundaries of each type and distances between neighboring boundaries within each type are shown in Supplementary Fig. 13). The boundaries present in a large fraction of cells (more than 50% of cells) defined here as "stable" overlapped 73% of conserved boundaries between BG3 and Kc167 cell lines46 and had high levels of active chromatin marks (RNA polymerase II, H3K4me3; Fig. 3f, Supplementary Figs. 11, 12). They were also slightly enriched in some architectural proteins associated with active promoters (BEAF-32, Chriz, CTCF, and GAF; Supplementary Fig. 11, 12). In contrast, boundaries identified in less than 50% of cells and defined here as "unstable" (as well as boundaries identified in just one cell termed cell-specific boundaries) were remarkably depleted of acetylated histones and features of transcriptionally active chromatin while being enriched in histone H1 and other proteins of repressed chromatin similarly to the internal TAD bins (Fig. 3f, Supplementary Fig. 11, 12). The epigenetic profiles of "unstable" boundaries may be due to the fact that actual profiles of active chromatin in individual cells differ from the bulk epigenetic profiles used in our analysis. However, it may also reflect a certain degree of stochasticity in chromatin fiber folding into contact domains35. Taking into consideration the fact that active chromatin regions mostly colocalize with stable boundaries, one would expect the "unstable" boundaries tend to be located in the inactive parts of the chromosome. TADs in individual Drosophila cells are not hierarchical Drosophila TADs are hierarchical in cell population-based Hi-C maps45,47. It is, however, not clear whether the hierarchy exists in individual cells or emerges in the bulk BG3 in situ Hi-C maps as a result of averaging of alternative chromatin configurations over a number of individual cells. To test this proposal, we focused on two TAD segmentations: at γmax/2 (TADs) and γmax (smaller domains referred to as sub-TADs located inside TADs, Fig. 4a). We analyzed only the haploid X chromosome to avoid combined folding patterns of diploid somatic chromosomes. We assumed that if TADs in individual nuclei are truly hierarchical, then sub-TADs belonging to the same TAD should be demarcated with well-defined boundaries arising from specific folding of the chromatin. To determine whether this is the case, we tested the resistance of sub-TAD boundaries to the data downsampling (two-fold depletion of total number of contacts in the snHi-C maps). 
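A minimal sketch of such a downsampling test is given below; it assumes that each cell's contacts are stored as an (N, 2) array of 10-kb bin indices and that call_boundaries is a user-supplied function (for instance, a lavaburst-based caller) that returns boundary bins from a dense contact matrix. The function names and the choice of ten replicates are illustrative assumptions rather than the authors' exact pipeline.

import numpy as np

def downsample_contacts(contacts, fraction=0.5, seed=0):
    # Randomly retain a given fraction of unique single-cell contacts
    rng = np.random.default_rng(seed)
    keep = rng.random(len(contacts)) < fraction
    return contacts[keep]

def contacts_to_matrix(contacts, n_bins):
    # Build a symmetric binned contact map from a contact list
    m = np.zeros((n_bins, n_bins), dtype=np.int32)
    for i, j in contacts:
        m[i, j] += 1
        m[j, i] += 1
    return m

def boundary_persistence(contacts, n_bins, call_boundaries, n_rep=10):
    # Fraction of boundaries called on the full map that are recovered
    # after repeated 50% downsampling of the contacts
    full = set(call_boundaries(contacts_to_matrix(contacts, n_bins)))
    rates = []
    for rep in range(n_rep):
        sub = downsample_contacts(contacts, 0.5, seed=rep)
        called = set(call_boundaries(contacts_to_matrix(sub, n_bins)))
        rates.append(len(full & called) / len(full) if full else np.nan)
    return np.mean(rates)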
In contrast to relatively stable TAD boundaries, sub-TAD boundaries showed a two-fold reduction in the probability of detection in downsampled datasets (Fig. 4b). Moreover, we found that profiles of sub-TADs were highly different in individual nuclei: only approximately 20% of sub-TAD boundaries in individual cells were shared in pairwise comparisons, similar to the shuffled controls (Supplementary Fig. 14). Hence, a potential hierarchy of TAD structure in single cells appears to reflect local Hi-C signal fluctuations. The hierarchical structure of TADs observed in bulk Drosophila Hi-C data45,48, thus, likely results from the superposition of multiple alternative chromatin folding patterns present in individual nuclei; this is also supported by the visual inspection of snHi-C maps (Fig. 4c). Fig. 4: Chromatin in individual Drosophila cells is compartmentalized and lacks folding hierarchy at the level of TADs. a Examples of TAD (black triangles) and sub-TAD (light blue triangles) positions in the haploid X chromosome in individual nuclei with 77,770 unique contacts. b Percentage of TAD and sub-TAD boundaries per cell (excluding TAD boundaries for the same cells) found as sub-TAD boundaries in the snHi-C maps after 50% downsampling. Downsampling was performed 10 times. At the top: TAD boundaries are highlighted with blue lines, sub-TAD boundaries located inside TADs are highlighted with red lines. Boxplots represent the median, interquartile range, maximum and minimum. ****p-value < 0.0001 using the Mann–Whitney two-sided test. n = 20 cells. c Genomic regions with alternative chromatin folding patterns in individual cells. Positions of sub-TADs and TADs identified in bulk BG3 in situ Hi-C data (top panels) are highlighted with light gray and dark gray rectangles, respectively. Positions of TAD boundaries in bulk BG3 in situ Hi-C data are shown with vertical light gray lines. d Heatmaps showing log2 values of contact enrichment between genomic regions belonging to putative A- (negative PC1 values) and B- (positive PC1 values) compartments (saddle plot). PC1 profile is constructed using the bulk BG3 in situ Hi-C data. e Contact probability Pc(s) between transcriptionally active (red) and inactive (blue) genomic bins in the snHi-C data. The light blue shading shows the genomic distances corresponding to the average TAD size in single nuclei. f Average plot of long-range interactions between top 1000 regions of A compartment annotated by bulk Hi-C data (in bulk Hi-C and merged snHi-C). g Average plot of long-range interactions between top 1000 regions of B compartment annotated by bulk Hi-C data (in bulk Hi-C and merged snHi-C). h Average plot of interactions between top 500 regions enriched in MSL (in bulk Hi-C and merged snHi-C) on chromosome X. i Average plot of interactions between top 500 regions enriched in dRING (in bulk Hi-C and merged snHi-C). A-compartment in individual Drosophila nuclei In animal cells, TADs of the same epigenetic type interact with each other across large genomic distances, forming compartments that spatially segregate active and inactive genomic loci in the nuclear space13. Similarly to Drosophila embryo5, S249, and Kc167 cells50, we observed an increased long-range interaction frequency within the A-compartment in the bulk BG3 in situ Hi-C data (Fig. 4d–f; Supplementary Fig. 15). Supporting this observation, we also found increased long-range interactions between genomic regions of the X chromosome activated by male-specific-lethal (MSL) complex binding51 (Fig. 
4h) in both BG3 in situ Hi-C data and the merged cell. In contrast, we observed a weak enrichment of long-range interactions between Polycomb-repressed regions52,53 bound by dRING (Fig. 4i)54 and nearly no enrichment for B-compartment regions (Fig. 4d, e, g). We could not directly detect compartments in individual nuclei due to the sparsity of the maps, but we observed a substantial enrichment of contacts in the A-compartment after averaging contacts in each individual nucleus across the population-based compartment mask (Fig. 4d, Supplementary Fig. 15). Compartmentalization might, thus, be a genuine feature of chromatin folding of Drosophila individual nuclei. The presence of extensive long-range contacts between the active genome regions in individual chromosomes is also supported by the contact probability Pc(s) plotted for active and inactive genomic bins separately: Pc(s) between active genome regions has a gentler slope outside TADs, indicating that active, but not inactive chromatin forms spatial contacts across large genomic distances (Fig. 4e). These results suggest that active and inactive genome loci are spatially segregated in individual Drosophila nuclei; active regions establish long-distance contacts, possibly at transcription factories and nuclear speckles55,56,57,58. Modeling of DNA fiber folding within X-chromosome by constrained polymer collapse We next applied dissipative particle dynamics (DPD) polymer simulations59 to reconstruct the 3D structures of haploid X chromosomes (Supplementary Fig. 16a) in individual cells using the snHi-C data (Fig. 5a, Supplementary Fig. 16b). The chromatin fiber path in these structures is strictly determined by the pattern of contacts derived from the snHi-C experiments and, thus, reflects the actual folding of the X chromosome in living cells60. As revealed by TAD annotation, the DPD simulations successfully reproduced chromatin fiber folding even at short and middle genomic distances because TAD positions along the X chromosome were largely preserved between the models and the original snHi-C data (Fig. 5a, Supplementary Figs. 17, 19a, b; also see "Methods"). Moreover, the simulations correctly reproduced chromatin folding at the scale of the whole chromosome with a well-defined A-compartment (Fig. 5a, Supplementary Fig. 18). Additionally, to validate the simulation results using an alternative approach, we performed multicolor in situ fluorescence hybridization (FISH) with two intra-TAD probes and one probe located outside the selected TAD. The distributions of inter-probe spatial distances extracted from the X chromosome model closely resemble those of the FISH analysis (Supplementary Fig. 19c). Taken together, these observations confirm the validity of our approach. Fig. 5: 3D folding of the haploid X chromosome. a Left panel: 3D structure of the haploid X chromosome from an individual nucleus derived from snHi-C data by the DPD polymer simulations. Right panel: averaging of contacts in the 3D model over TAD positions in the corresponding snHi-C data (top) and compartment (bottom) positions annotated in the bulk BG3 in situ Hi-C data. Source data are provided as a Source data file. b Coefficient of difference over a broad range of genomic distances. The central curves represent average values. Error bars show standard deviation (SD) for 20 independent model realizations, n = 2242 distinct ranges of genomic distances used for the curve construction. c Single-nucleus 3D structures of a genomic region covered by three TADs (left). 
Right, bulk BG3 in situ Hi-C map of this region. TAD positions are shown by colored rectangles below the map and by black squares on the map. d Dependence of the Euclidean spatial inter-particle distance R (see "Methods") on the genomic distance s between these particles along the chromatin fiber. Black lines show the slopes characteristic for the random walk behavior (s0.5) and the crumpled globule build-up from Gaussian blobs (s0.14). Error bars show standard deviation (SD) for 20 independent model realizations, n = 20. e Spatial distance from the surface of a chromosome territory (CT) to transcriptionally active (n = 8966) and inactive (n = 17,103; according to RNA-seq from ref. 24) regions, and Polycomb-bound domains (n = 2160; according to the 9-state chromatin type annotation54). Boxplots show data aggregated from all individual models analyzed. Boxplots represent the median, interquartile range, maximum and minimum. P-values are calculated using the Mann–Whitney two-sided test. f Examples of simulated ChrX 3D structures demonstrating the preferential location of transcriptionally inactive regions (blue particles) at the surface of the CT. g Heatmap showing cell-to-cell variability in interactions detected between Polycomb domains (upper panels) or between transcriptionally active regions (bottom panels) in individual cells. Red rectangles denote detected interactions. The total number of Polycomb and active regions identified in the X chromosome are 20 and 240, respectively (see "Methods"). Only interacting domains are numbered for Polycomb domains; interacting active regions are not numbered due to their multiplicity. The snHi-C maps show remarkable cell-to-cell variability in the distribution of captured contacts (Figs. 3a, 4c); therefore, we performed a pairwise comparison of 3D models of the X chromosome in individual cells using the coefficient of the difference at a broad range of genomic distances (Fig. 5b; see "Methods"). The higher the value of the coefficient, the higher the difference between the distance matrices obtained from the models. We have found that chromatin fiber conformation was strikingly different between individual models (red curve, Fig. 5b) in comparison to different configurations (at different time points) of each particular model (blue curve, Fig. 5b), showing the prominent cell specificity in the organization of the X chromosome territory (CT). Notably, shuffling of contacts (see "Methods") in the snHi-C data prior to simulations significantly decreased the variability in the chromatin fiber conformation at long distances (gray curve, Fig. 5b). Despite cell-to-cell differences in the overall 3D shape of a particular TAD (Fig. 5c, Supplementary Fig. 19d), the variability of the chromatin fiber conformation was substantially lower at short ranges (within TADs) as compared to long-range distances (Fig. 5b). This difference could be due to an increased flexibility in chromatin folding arising from larger genomic distances. In addition, the curve of the coefficient of difference between individual models reached the plateau outside TADs (Fig. 5b), suggesting that the variability of chromatin folding inside and outside TADs was governed by different rules. Due to the fact that TADs in Drosophila (at the 10–20 kb resolution of the Hi-C maps) are largely composed of inactive chromatin, we propose that the chromatin fiber conformation within TADs is mostly determined by interactions between adjacent non-acetylated nucleosomes. 
In contrast, at large genomic distances, TADs interact with each other in a stochastic manner, imposing the spherical form of the CT that is observed in all model structures (Fig. 5a, Supplementary Figs. 16, 20). In line with this hypothesis, the dependence of the spatial distance R between any two particles on the genomic distance s revealed two distinct modes of polymer folding (Fig. 5d). At the scale of ~100 kb (e.g., inside TADs), the chromatin fiber demonstrated a random walk behavior (s^0.5) similar to the chromatin of budding yeast. At larger distances, R(s) had a scaling similar to a crumpled globule built of Gaussian blobs (s^0.14)61. Thus, chromatin folding within TADs and at the scale of the whole CT could be driven by different molecular mechanisms. Analysis of the radial distributions of transcriptionally active, inactive, and Polycomb-bound genome regions in our models demonstrated that active chromatin tended to be located in the CT interior, whereas inactive regions were located near the CT surface (Fig. 5e, f); this can be driven by interactions with the nuclear lamina62. Formation of nuclear microcompartments such as Polycomb bodies63 represents another factor determining the large-scale spatial structure of the X chromosome territory. We analyzed patterns of interactions between individual Polycomb-occupied regions in the 3D models. To this end, each such region was assigned a consecutive number according to its position along the chromosome. Examples of 2D maps showing the regions residing in spatial proximity in each cell are presented in Fig. 5g (upper panels). We found that Polycomb-occupied regions interacted with each other in a cell-specific manner and, moreover, such contacts occurred between loci regardless of the genomic distances between them (Fig. 5g, upper panels). Using a similar approach, we constructed 2D interaction maps of active genomic regions (Fig. 5g, bottom panels). Active genome regions also interacted with each other across large genomic distances in a cell-specific manner (Fig. 5g, bottom panels). We propose that these two types of long-range interactions, the stochastic assembly of Polycomb bodies and of transcription-related microcompartments (factories64), underlie the cell-specific conformation of the chromatin fiber within CTs in Drosophila.

Discussion

Folding of interphase chromatin in eukaryotes is driven by multiple mechanisms operating at different genome scales and generating distinct types of 3D genome features16,20. In mammalian cells, cohesin-mediated chromatin fiber extrusion mainly impacts the genome topology at the scale of ~100–1000 kb by producing loops, resulting in the formation of TADs18,19 and establishing enhancer-promoter communication65. Chromatin loop formation by the loop extrusion complex (LEC) in mammalian cells is a largely deterministic process due to the preferential positioning of loop anchors encoded in DNA by CTCF binding sites (CBS). The cohesin-CTCF molecular tandem modulates folding of the intrinsically disordered chromatin fiber16,23. On the other hand, the association of active and repressed gene loci into chromatin compartments13,14 and the formation of Polycomb and transcription-related nuclear bodies66,67 in both mammalian and Drosophila cells shape the 3D genome at the scale of the whole chromosome. These associations appear to be stochastic: a particular Polycomb-bound or transcriptionally active region in individual cells interacts with different partners located across a wide range of genomic distances68.
Here, we applied single-nucleus Hi-C to probe the 3D genome in individual Drosophila cells at a relatively high resolution that was not achieved previously in single-cell Hi-C studies. Based on our observations, we suggest that, in Drosophila, both deterministic and stochastic forces govern the chromatin spatial organization (Fig. 6a). Fig. 6: Order and stochasticity in the Drosophila 3D genome. a Schematic representation of the ordered and stochastic components in Drosophila genome folding. Positions of TAD boundaries are largely conserved between individual cells and are determined by active chromatin. The chromatin fiber path within a particular TAD and within the whole chromosome territory is largely stochastic and demonstrates prominent cell-to-cell variability. b Fixed positions of active regions along the Drosophila genome define TAD boundaries that persist in individual cells. An inactive region is folded into a chromatin globule due to interactions between non-acetylated "sticky" nucleosomes. This region adopts different configurations in individual cells (and at different time points in a particular cell). In cell 1, it is folded into two globules separated by a stochastically formed fuzzy boundary. In cell 2, one part of the region is compact (left) and the other part (right) is transiently decondensed. In cell 3, the entire region forms one densely packed globule. Averaging of these configurations results in a TAD containing two sub-TADs in a population-based Hi-C map. Note that the hierarchical structure of the TAD emerging in a population Hi-C map reflects the different configurations of the region in individual cells. We note that the absence of any structure at inactive TAD borders denotes the ambiguity of folding of these regions as captured by snHi-C, not the absence of such structure. c Extended active regions serving as barrier elements for a potential loop extrusion complex (LEC) in Drosophila cells. It has been previously shown that transcription might interfere with loop extrusion71,72. Since stable TAD boundaries in Drosophila are enriched with transcribed genes, we propose that extended regions of active chromatin, rather than binding sites of architectural proteins, represent barrier elements for the LEC in Drosophila cells. In this scenario, the LEC loops out a TAD and terminates within the flanking active regions upon colliding with RNA polymerases, large chromatin-remodeling complexes, and other components of active chromatin. In different individual cells, termination occurs stochastically at different points within these regions. In a population-based Hi-C map, this results in a compartment-like signal rather than the conventional point loop observed in mammalian cells, where CTCF binding sites serve as barrier elements for the LEC. We found that entire individual Drosophila genomes were partitioned into TADs; this observation supports the results of recent super-resolution microscopy studies37. TAD profiles are highly similar between individual Drosophila cells and demonstrate lower cell-to-cell variability than mammalian TADs. According to our model24, large inactive TADs in Drosophila are assembled by multiple transient electrostatic interactions between non-acetylated nucleosomes in transcriptionally silent genome regions. Conversely, TAD boundaries and inter-TAD regions at the 10-kb resolution of Hi-C maps in Drosophila were found to be formed by transcriptionally active chromatin.
This result may explain why TADs in individual cells occupy virtually the same genomic positions (Fig. 6b). Gene expression profile is a characteristic feature of a particular cell type, and, thus, should be relatively stable in individual cells within the population. In agreement with this, we demonstrated that invariant TAD boundaries present in a major portion of individual cells were highly enriched in active chromatin marks. Moreover, stable boundaries were also largely conserved in other cell types (see "Results" and ref. 46), possibly due to the fact that TAD boundaries were frequently formed at the position of housekeeping genes. In contrast to stable TAD boundaries, the boundaries that demonstrate cell-to-cell variability bear silent chromatin. Some cell-specific TAD boundaries may originate at various positions due to a putative size limit of large inactive TADs or other restrictions in chromatin fiber folding. Indeed, it appears that the assembly of randomly distributed TAD-sized self-interacting domains is an intrinsic property of chromatin fiber folding35. In mammals, the positioning of these domains is modulated by cohesin-mediated DNA loop extrusion35, whereas in Drosophila, it may be modulated by segregation of chromatin domains bearing distinct epigenetic marks16,23. Even if cell-specific and unstable TAD boundaries are distributed in a random fashion, they should be depleted in active chromatin marks because active chromatin regions are mainly occupied by stable TAD boundaries. We also cannot exclude that variable boundaries and the TAD boundary shifts are caused by local variations in gene expression and active chromatin profiles in individual cells that we cannot assess simultaneously with constructing snHi-C maps. Our results are also compatible with an alternative mechanism of TAD formation. Given that the above-mentioned cohesin-driven loop extrusion is evolutionarily conserved from bacteria to mammals69, it is compelling to assume that extrusion works in Drosophila as well. Despite the presence of all potential components of LEC (cohesin, its loading and releasing factors), TAD boundaries in Drosophila are not significantly enriched with CTCF24,25 and do not form CTCF-enriched interactions or TAD corner peaks. These observations suggest that the binding sites of CTCF or other distinct proteins do not constitute barrier elements for the Drosophila LEC even if these proteins are enriched in TAD boundaries; this may be due to some other properties of a genomic region. For example, stably bound cohesins were proposed to act as the barriers for cohesin extrusion in yeast70. Active transcription interferes with DNA loop extrusion71,72. Because TAD boundaries in Drosophila are highly transcribed, we propose that open chromatin with actively transcribing polymerase and/or a high density of chromatin remodeling complexes could serve as a barrier for the Drosophila LEC. Contrary to the strictly positioned and short CBSs in mammals, active loci flanking Drosophila TADs represent relatively extended regions up to several dozens of kb in length. Probabilistic termination of LEC at varying points within such regions in different cells of the population could explain the absence of canonical loop signals and the presence of strong compartment-like interactions between active regions flanking a TAD (Fig. 6c). This model also provides a potential explanation for the relatively high stability of TAD positioning in individual Drosophila cells in comparison to mammals. 
The relative permeability of CBSs in mammalian cells allows the LEC to proceed through thousands of kilobases and to produce large contact domains17. Extended active regions, acting as "blurry" barrier elements where LEC termination occurs at multiple points, should stop the LEC more efficiently, making the TAD pattern more stable and pronounced. Taken together, the order in the Drosophila chromatin 3D organization is manifested in a TAD profile that is relatively stable between individual cells and likely dictated by the distribution of active genes along the genome. On the other hand, our molecular simulations of individual haploid X chromosomes indicate a prominent stochasticity in both the form of individual TADs and the overall folding of the entire chromosome territory. According to our data, the active A-compartment is easily detectable in individual cells, and the profiles of interactions between individual active regions are highly variable from cell to cell. Notably, this also holds true for Polycomb-occupied loci that are known to shape the chromatin fiber in living cells48. Although these highly variable long-range interactions of active regions and Polycomb-occupied loci are closely related to the shape of the chromosome territory (CT), the cause-and-effect relationships between them and the stochastic nature of the cell-specific chromatin chain path are currently unclear. The main question to be answered by future studies is whether these interactions are fully stochastic or at least partially specific. The possible molecular mechanisms that may provide specific communication between remote genomic loci separated by up to megabases of DNA are not known. In the absence of any specificity, the pattern of contacts inside the A-compartment and within Polycomb bodies in a particular cell is established by stochastic fluctuations of the large-scale chromatin fiber folding. In this case, the large-scale chromatin fiber folding dictates the cell-specific location of Polycomb-enriched and active chromatin regions in the 3D nuclear space. The formation of Polycomb bodies and transcription-related chromatin hubs is achieved by confined diffusion of these regions and might be further stabilized by specific protein-protein interactions and liquid-liquid phase separation73. This mechanism makes it possible to sort through alternative configurations of the 3D genome and to transiently stabilize those that are functionally relevant under specific conditions. A balance between order and stochasticity appears to be an intrinsic property of nuclear organization that enables rapid adaptation to changing environmental conditions.

Methods

The Drosophila melanogaster ML-DmBG3-c2 cell line (Drosophila Genomics Resource Center) was grown at 25 °C in a mixture (1:1 v/v) of Shields and Sang M3 insect medium (Sigma) and Schneider's Drosophila Medium (Gibco) supplemented with 10% heat-inactivated fetal bovine serum (FBS, Gibco), 50 units/ml penicillin, and 50 µg/ml streptomycin.

Single-nucleus Hi-C library preparation

We modified the previously published single-nucleus Hi-C protocol32 as follows: 5–10 million cells were fixed in 1× phosphate-buffered saline (PBS) with 2% formaldehyde for 10 min with occasional mixing. The reaction was stopped by the addition of 2 M glycine to a final concentration of 125 mM. Cells were centrifuged (1000 × g, 10 min, 4 °C), resuspended in 50 μl of 1× PBS, snap-frozen in liquid nitrogen, and stored at −80 °C.
Thawed cells were lysed in 1.5 ml of isotonic buffer (50 mM Tris-HCl pH 8.0, 150 mM NaCl, 0.5% (v/v) NP-40 substitute (Fluka), 1% (v/v) Triton X-100 (Sigma), 1× Halt™ Protease Inhibitor Cocktail (Thermo Scientific)) on ice for 15 min. Cells were centrifuged at 2500 × g for 5 min, resuspended in 100 μl of 1× DpnII buffer (NEB), and pelleted again. The pellet was resuspended in 200 μl of 0.3% SDS in 1.1× DpnII buffer and incubated at 37 °C for 1 h. Then, 330 μl of 1.1× DpnII buffer and 53 μl of 20% Triton X-100 (Sigma) were added, and the suspension was incubated at 37 °C for 1 h. Next, 600 U of DpnII enzyme (NEB) were added, and the chromatin was digested overnight (14–16 h) at 37 °C with shaking (1400 rpm). On the following day, 200 U of DpnII enzyme were added, and the cells were incubated for an additional 2 h. DpnII was then inactivated by incubation at 65 °C for 20 min. Nuclei were centrifuged at 3000 × g for 5 min, resuspended in 100 μl of 1× T4 DNA ligase buffer (Fermentas), and pelleted again. The pellet was resuspended in 400 μl of 1× T4 DNA ligase buffer, and 75 U of T4 DNA ligase (Fermentas) were added. Chromatin fragments were ligated at 16 °C for 6 h. Next, the nuclei were centrifuged at 5000 × g for 5 min, resuspended in 100 μl of sterile 1× PBS, stained with Hoechst, and single nuclei were isolated into wells of a standard 96-well PCR plate (Thermo Fisher) using FACS (BD FACSAria III). Each well contained 3 μl of sample buffer from the Illustra GenomiPhi v2 DNA amplification kit (GE Healthcare). Sample buffer drops containing isolated nuclei were covered with 5 μl of mineral oil (Thermo Fisher) and incubated at 65 °C for 3 h to reverse formaldehyde cross-links. Total DNA was amplified according to a previously published protocol74. The amplification was considered successful if the sample contained ≥1 μg DNA. The DNA was then dissolved in 500 μl of sonication buffer (50 mM Tris-HCl (pH 8.0), 10 mM EDTA, 0.1% SDS) and sheared to a size of ~100–1000 bp using a VirSonic 100 (VerTis). The samples were concentrated (and simultaneously purified) using AMICON Ultra Centrifugal Filter Units to a total volume of about 50 μl. The DNA ends were repaired by adding 62.5 μl MQ water, 14 μl of 10× T4 DNA ligase reaction buffer (Fermentas), 3.5 μl of 10 mM dNTP mix (Fermentas), 5 μl of 3 U/μl T4 DNA polymerase (NEB), 5 μl of 10 U/μl T4 polynucleotide kinase (NEB), 1 μl of 5 U/μl Klenow DNA polymerase (NEB), and then incubating at 20 °C for 30 min. The DNA was purified with Agencourt AMPure XP beads and eluted with 50 μl of 10 mM Tris-HCl (pH 8.0). To perform an A-tailing reaction, the DNA samples were supplemented with 6 μl 10× NEBuffer 2, 1.2 μl of 10 mM dATP, 1 μl of MQ water, and 3.6 μl of 5 U/μl Klenow (exo−) (NEB). The reactions were carried out for 30 min at 37 °C in a PCR machine, and the enzyme was then heat-inactivated by incubation at 65 °C for 20 min. The DNA was purified using Agencourt AMPure XP beads and eluted with 100 μl of 10 mM Tris-HCl (pH 8.0). Adapter ligation was performed at 22 °C for 2.5 h in the following mixture: 41.5 μl MQ water, 5 μl 10× T4 DNA ligase reaction buffer (Fermentas), 2.5 μl of Illumina TruSeq adapters, and 1 μl of 5 U/μl T4 DNA ligase (Fermentas). Test PCR reactions containing 4 μl of the ligation mixture were performed to determine the optimal number of PCR cycles required to generate sufficient PCR products for sequencing.
The PCR reactions were performed using KAPA High Fidelity DNA Polymerase (KAPA) and Illumina PE1.0 and PE2.0 PCR primers (10 pmol each). The temperature profile was 5 min at 98 °C, followed by 6, 9, 12, 15, and 18 cycles of 20 s at 98 °C, 15 s at 65 °C, and 20 s at 72 °C. The PCR reactions were separated on a 2% agarose gel containing ethidium bromide, and the number of PCR cycles necessary to obtain a sufficient amount of DNA was determined based on the visual inspection of gels (typically 12–15 cycles). Four preparative PCR reactions were performed for each sample. The PCR mixtures were combined, and the products were separated on a 1.8% agarose gel. 200–600 bp DNA fragments were excised from the gel and purified with a QIAGEN Gel Extraction Kit. Bulk BG3 in situ Hi-C library preparation Bulk BG3 in situ Hi-C libraries were prepared as described previously24 with minor modifications. The first steps of the protocol (from fixation to DpnII enzyme inactivation) were completely identical to the corresponding steps in the single-cell Hi-C library preparation procedure described above. After DpnII inactivation, the nuclei were harvested for 10 min at 5000 × g, washed with 100 μl of 1× NEBuffer 2, and resuspended in 50 μl of 1× NEBuffer 2. Cohesive DNA ends were biotinylated by the addition of 7.6 μl of the biotin fill-in mixture prepared in 1× NEBuffer 2 (0.025 mM dATP (Thermo Scientific), 0.025 mM dGTP (Thermo Scientific), 0.025 mM dTTP (Thermo Scientific), 0.025 mM biotin-14-dCTP (Invitrogen), and 0.8 U/μl Klenow enzyme (NEB)). The samples were incubated at 37 °C for 75 min with shaking (1400 rpm). Nuclei were centrifuged at 3000 × g for 5 min, resuspended in 100 μl of 1× T4 DNA ligase buffer (Fermentas), and pelleted again. The pellet was resuspended in 400 μl of 1× T4 DNA ligase buffer, and 75 U of T4 DNA ligase (Fermentas) were added. Chromatin fragments were ligated at 20 °C for 6 h. The cross-links were reversed by overnight incubation at 65 °C in the presence of proteinase K (100 μg/ml). After cross-link reversal, the DNA was purified by single phenol-chloroform extraction followed by ethanol precipitation with 20 μg/ml glycogen (Thermo Scientific) as the co-precipitator. After precipitation, the pellets were dissolved in 100 μl 10 mM Tris-HCl pH 8.0. To remove residual RNA, samples were treated with 50 μg of RNase A (Thermo Scientific) for 45 min at 37 °C. To remove residual salts and DTT, the DNA was additionally purified using Agencourt AMPure XP beads (Beckman Coulter). Biotinylated nucleotides from the non-ligated DNA ends were removed by incubating the Hi-C libraries (2 μg) in the presence of 6 U of T4 DNA polymerase (NEB) in NEBuffer 2 supplied with 0.025 mM dATP and 0.025 mM dGTP at 20 °C for 4 h. Next, the DNA was purified using Agencourt AMPure XP beads. The DNA was then dissolved in 500 μl of sonication buffer (50 mM Tris-HCl (pH 8.0), 10 mM EDTA, 0.1% SDS) and sheared to a size of approximately 100–1000 bp using a VirSonic 100 (VerTis). The samples were concentrated (and simultaneously purified) using AMICON Ultra Centrifugal Filter Units to a total volume of approximately 50 μl. The DNA ends were repaired by adding 62.5 μl MQ water, 14 μl of 10× T4 DNA ligase reaction buffer (Fermentas), 3.5 μl of 10 mM dNTP mix (Fermentas), 5 μl of 3 U/μl T4 DNA polymerase (NEB), 5 μl of 10 U/μl T4 polynucleotide kinase (NEB), 1 μl of 5 U/μl Klenow DNA polymerase (NEB), and then incubating at 20 °C for 30 min. 
The DNA was purified with Agencourt AMPure XP beads and eluted with 50 μl of 10 mM Tris-HCl (pH 8.0). To perform an A-tailing reaction, the DNA samples were supplemented with 6 μl 10× NEBuffer 2, 1.2 μl of 10 mM dATP, 1 μl of MQ water, and 3.6 μl of 5 U/μl Klenow (exo−) (NEB). The reactions were carried out for 30 min at 37 °C in a PCR machine, and the enzyme was then heat-inactivated by incubation at 65 °C for 20 min. The DNA was purified using Agencourt AMPure XP beads and eluted with 100 μl of 10 mM Tris-HCl (pH 8.0). Biotin pulldown of the ligation junctions was performed as described previously, with minor modifications. Briefly, 4 μl of MyOne Dynabeads Streptavidin C1 (Invitrogen) beads were used to capture the biotinylated DNA, and the volumes of all buffers were decreased by 4-fold. The washed beads with captured ligation junctions were resuspended in 50 μl of adapter ligation mixture comprising 41.5 μl MQ water, 5 μl 10× T4 DNA ligase reaction buffer (Fermentas), 2.5 μl of Illumina TruSeq adapters, and 1 μl of 5 U/μl T4 DNA ligase (Fermentas). Adapter ligation was performed at 22 °C for 2.5 h, and the beads were sequentially washed twice with 100 μl of TWB (5 mM Tris-HCl (pH 8.0), 0.5 mM EDTA, 1 M NaCl, 0.05% Tween-20), once with 100 μl of 1× binding buffer (10 mM Tris-HCl (pH 8.0), 1 mM EDTA, 2 M NaCl), once with 100 μl of CWB (10 mM Tris-HCl (pH 8.0) and 50 mM NaCl), and then resuspended in 20 μl of MQ water. Test PCR reactions containing 4 μl of the streptavidin-bound Hi-C library were performed to determine the optimal number of PCR cycles required to generate sufficient PCR products for sequencing. The PCR reactions were performed using KAPA High Fidelity DNA Polymerase (KAPA) and Illumina PE1.0 and PE2.0 PCR primers (10 pmol each). The temperature profile was 5 min at 98 °C, followed by 6, 9, 12, 15, and 18 cycles of 20 s at 98 °C, 15 s at 65 °C, and 20 s at 72 °C. The PCR reactions were separated on a 2% agarose gel containing ethidium bromide, and the number of PCR cycles necessary to obtain a sufficient amount of DNA was determined based on the visual inspection of gels (typically 12–15 cycles). Four preparative PCR reactions were performed for each sample. The PCR mixtures were combined, and the products were separated on a 1.8% agarose gel. 200–600 bp DNA fragments were excised from the gel and purified with a QIAGEN Gel Extraction Kit. Two biological replicates were performed. snHi-C raw data processing and contact annotation The whole-genome amplification step of snHi-C uses the Phi29 DNA polymerase, which is known to produce chimeric DNA molecules by randomly switching the DNA template40. DNA molecules created by the template switch were further amplified during the snHi-C protocol and resulted in chimeric reads. Notably, in theory, template switches can be detected by the presence of two consecutive parts of the same read that map to different genomic locations and do not align immediately next to the restriction sites at the DNA breakpoint. This situation is different from the standard Hi-C, where each read pair is considered to be a true contact pair regardless of the DNA breakpoint presence and annotation. Standard Hi-C processing tools, such as hiclib32,41, Juicer75, and HiCExplorer26, typically rely on mapping of both reads in a Hi-C pair and do not account for the presence of chimeric parts in a single side of paired-end sequencing. 
We devised a more accurate approach for processing of snHi-C data that annotates each DNA breakpoint observed in each single-end read, and selects the contacts that do not represent possible template switches of Phi29 polymerase. Thus, we developed a custom approach for snHi-C data processing termed ORBITA (One Read-Based Interaction Annotation), as described below. Reads mapping As the first step of the approach, FASTQ files with paired-end sequencing data are mapped to Drosophila reference genome dm3 using Burrows-Wheeler Aligner (BWA-MEM, console version 0.7.17-r1188)76 with default parameters. Notably, this mapping procedure allows independent alignment of chimeric parts of both forward and reverse reads. This step results in BAM files with paired-end mapping information. Annotated pairs retrieval In the next step, the BAM files are parsed with an adapted version of pairtools (https://github.com/mirnylab/pairtools) with our newly implemented option ORBITA. Among many other utilities for Hi-C data processing, we selected pairtools from the Mirny lab as the basis of our approach, due to the convenience and modular structure of its code. This version of the tool can be accessed at the GitHub repository https://github.com/agalitsyna/pairtools. ORBITA treats each read in the BAM file independently, regardless of whether it is forward or reverse. Reads that are uniquely mapped to a single location of the genome are marked as type P, meaning that they are part of a standard Hi-C Pair with no DNA breakpoint evidence. Reads that contain precisely two successive regions uniquely mapped to different genomic locations (MAPQ > 1) are selected for further DNA breakpoint annotation. ORBITA takes the genome restriction annotation (provided as a BED file with DpnII restriction fragments positions, produced by cooler digest77) and compares each breakpoint against the list of restriction sites. For each 3′-end of the right chimeric part and 5′-end of the left chimeric part (in other words, ligated ends), both upstream and downstream restriction sites are annotated, and the distance to the closest one is calculated. If both ends are located sufficiently close (<10 bp) to any restriction site in the genome, ORBITA considers them as a true ligation junction of restricted fragments in the snHi-C proximity ligation step. These cases are marked as J type (ligation Junction), with the evidence of traversing the ligation junction of DpnII restriction fragments. If at least one ligated end of the chimeric read was not mapped to the restriction site, ORBITA marks it as H (template switch, or Hopping of Phi29 DNA polymerase). To simplify the ORBITA approach, we omit the cases with more complicated scenarios of read mapping, when three or more uniquely mapped chimeric parts of a single-end read were present. If the read contains multiple mapped chimeric parts, it is discarded. ORBITA produces the resulting PAIRS file with annotation of JJ pairs (with the evidence of the ligation) that are accepted for further processing. If not explicitly mentioned, the generic names "pair" or "contact" are used for snHi-C contacts with the evidence of the ligation junction. Amplification duplicates removal In the next step, we performed a correction for amplified duplicates of snHi-C contacts. Standard Hi-C uses amplification by the Illumina PCR protocol with primers that are ligated to the ends of sheared DNA17. Thus, two independent Hi-C pairs can be PCR duplicates if their mapping positions coincide (e.g., see hiclib). 
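As an illustration of the junction test described above, here is a minimal sketch of the per-read classification into pair (P), junction (J), or template-switch (H) categories; the read representation, the restriction-site lookup, and the choice of ligated ends are simplified assumptions rather than the actual pairtools/ORBITA data structures:

```python
import bisect

MAX_DIST_TO_SITE = 10  # bp; ligated ends must fall within this distance of a DpnII site

def near_restriction_site(chrom, pos, restriction_sites, max_dist=MAX_DIST_TO_SITE):
    """restriction_sites: dict chrom -> sorted list of DpnII site coordinates (bp)."""
    sites = restriction_sites.get(chrom, [])
    i = bisect.bisect_left(sites, pos)
    neighbors = sites[max(i - 1, 0):i + 1]  # closest sites on either side
    return any(abs(pos - s) <= max_dist for s in neighbors)

def classify_read(alignments, restriction_sites):
    """Classify one sequenced read by its uniquely mapped chimeric parts.

    alignments: list of dicts with 'chrom', 'start', 'end' for each uniquely
    mapped part, in read order (a simplified stand-in for BWA-MEM records).
    Returns 'P' (no breakpoint), 'J' (ligation junction), 'H' (template switch),
    or None for reads with three or more chimeric parts, which are discarded.
    """
    if len(alignments) == 1:
        return "P"
    if len(alignments) != 2:
        return None
    left, right = alignments
    # The two ligated ends are taken here as the inner ends of the chimeric parts;
    # proper handling of strand/orientation is omitted in this sketch.
    left_ok = near_restriction_site(left["chrom"], left["end"], restriction_sites)
    right_ok = near_restriction_site(right["chrom"], right["start"], restriction_sites)
    return "J" if (left_ok and right_ok) else "H"
```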
In snHi-C32, however, the amplification is followed by sonication, resulting in random breaks of the ligated DNA fragments. Hence, coinciding mapping positions cannot be used as a criterion of PCR duplication. Notably, we cannot distinguish amplified copies of a contacting pair of restriction fragments from contacts of the same regions on the homologous chromosomes. Thus, we removed all multiple copies of restriction fragment pairs and retained unique contacts for each combinatorial pair of restriction fragments.

Fragment filtration

In the next step, we used restriction fragment filtration to reduce the possible contribution of copy number variation, read misalignment, and Phi29 DNA polymerase template switches that had not been removed by the ORBITA filter. In theory, each restriction fragment of DNA has two ends and is present twice in the diploid nucleus of ML-DmBG3-c2 Drosophila cells; thus, we expect an upper limit of four unique contacts per restriction fragment if no unannotated genomic rearrangements, mismappings, or template switches occurred. For each restriction fragment, we calculated the observed number of contacts and removed fragments that had more than four contacts. Before filtering contacts by this rule, we compared the number of restriction fragments with more than four unique contacts obtained with ORBITA and with a previous approach (hiclib, as used in Flyamer et al. 2017). We obtained datasets for mouse nuclei from Flyamer et al. 2017 and Nagano et al. 2017 and mapped them with the hiclib and ORBITA pipelines. We found a significant reduction in the number of unique contacts per fragment for snHi-C datasets that involve Phi29 DNA polymerase amplification (Flyamer et al. 2017, present work), but not for scHi-C obtained without Phi29 DNA polymerase (Nagano et al. 2017) (Supplementary Figs. 2, 3). Thus, we conclude that ORBITA is an effective approach to reduce the number of snHi-C artefactual contacts arising from random template switches of Phi29 DNA polymerase.

Cell selection by raw data subsampling

We obtained filtered contacts for 88 individual nuclei after the initial round of sequencing. Before the second round of sequencing, we assessed the robustness of the number of unique contacts by subsampling of the raw datasets (Supplementary Fig. 2a). For each library, we created a uniform grid of sequencing depths (from 0 to the total number of reads, with a step of 100,000 reads). For each depth X from the grid, we randomly selected X reads from the full library and calculated the number of unique contacts (as described above). We repeated this procedure ten times and plotted the mean number of unique contacts for each sequencing depth from the grid. We reasoned that a significant number of cells contained PCR duplicates and that, for these cells, the number of contacts increased only slowly with sequencing depth due to the poor efficiency of the snHi-C protocol; further sequencing of these cells would result in a relatively small improvement in the detectable number of unique contacts. In another group of cells, the number of contacts increased more rapidly with the number of reads but reached a plateau once the maximum number of unique contacts was achieved; additional sequencing of these cells would mostly result in reading duplicated contacts. In the remaining cells, the number of contacts grew more slowly with sequencing depth (Supplementary Fig. 2a), but the number of unique contacts increased steadily with no plateau signature.
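A minimal sketch of this saturation analysis, with a hypothetical contacts_from_reads() helper standing in for the mapping, ORBITA parsing, and deduplication steps described above:

```python
import random

def saturation_curve(reads, contacts_from_reads, step=100_000, n_iter=10, seed=0):
    """Mean number of unique contacts as a function of the number of sequenced reads.

    `contacts_from_reads` is a hypothetical callable returning the set of unique
    (fragment_i, fragment_j) contacts recovered from a subset of reads.
    """
    rng = random.Random(seed)
    curve = []
    for depth in range(step, len(reads) + 1, step):
        counts = []
        for _ in range(n_iter):
            subset = rng.sample(reads, depth)
            counts.append(len(contacts_from_reads(subset)))
        curve.append((depth, sum(counts) / n_iter))
    return curve  # plot depth vs mean unique contacts and look for a plateau
```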
We selected the cells displaying the best growth of the number of contacts, indicative of good dataset quality. The top 20 cells by the number of unique contacts were subjected to an additional round of sequencing. The same mapping and parsing pipeline was used for these datasets. Technical replicates (initial and additional rounds of snHi-C library sequencing) were merged at the annotated PAIRS file stage.

snHi-C interaction map construction

The resulting pair data were binned at 1-kb, 10-kb, 20-kb, 40-kb, and 100-kb resolutions with cooler version 0.8.5 (ref. 77) and stored in the COOL format. We constructed the merged dataset by summing all snHi-C maps. To exclude self-interacting genomic bins and the possible contribution of dangling ends, self-circles41, and mirror reads78, we removed the first diagonal in both the single-cell and merged maps. The HiGlass server was used for data visualization79. The 10-kb resolution was used throughout the paper unless another resolution is specified.

Bulk BG3 in situ Hi-C raw data processing

For bulk BG3 in situ Hi-C (two biological replicates), reads were mapped to the Drosophila reference genome dm3 with Burrows-Wheeler Aligner (BWA-MEM, console version 0.7.17-r1188)76 with default parameters. For consistency with the snHi-C analysis, the resulting BAM files were parsed with pairtools v0.3.0 (https://github.com/mirnylab/pairtools) using default parameters. The resulting files were sorted with the pairtools module "sort"; replicates were merged with the pairtools module "merge", and duplicates were removed, allowing one mismatch between possible duplicates (pairtools dedup with the --max-mismatch 1 and --mark-dups options). The resulting PAIRS file was binned with cooler77 at the same resolutions as the single-cell datasets. To remove the contribution of possible Hi-C technical artifacts, such as backward ligation, dangling ends, self-circles41, and mirror reads78, the first two diagonals of the Hi-C maps were removed. As the last step of bulk Hi-C processing, the maps were iteratively corrected to remove coverage bias41 using the cooler balance tool with default parameters77. As a reproducibility control, both replicates were converted to interaction maps independently by the above pipeline. The resulting maps demonstrated a correlation of 0.9–0.95 as estimated by the HiCRep stratum-adjusted correlation coefficient for intrachromosomal maps smoothed with a one-bin offset, for genomic distances up to 300 kb at 20-kb resolution80.

snHi-C background model construction

We sought to create a background model for snHi-C that can be used as a control for the subsequent analysis of intrachromosomal snHi-C interaction maps. For that, we considered two major factors contributing to the intrachromosomal contact frequency of a genomic region: the contact probability for a particular genomic distance, Pc(s)13, and the region visibility81. For bulk BG3 in situ Hi-C, Pc(s) is assessed as the mean number of contacts for a certain genomic distance13. However, the same procedure cannot be readily used for snHi-C due to data sparsity and missing data. Thus, to calculate Pc(s) for a snHi-C dataset, we counted the number of contacts for a certain genomic distance and normalized it by the number of genomic bins that had a contact in at least one snHi-C experiment at any distance. Notably, we use the same procedure for the visualization of the snHi-C Pc(s) dependence on the genomic distance s (Fig. 1f and Fig. 4e); the genomic distance step size was set to 1 kb.
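A minimal sketch of this Pc(s) estimate for a sparse single-nucleus map (a dense numpy array stands in for the binned snHi-C matrix; the normalization by the number of visible bins follows the description above):

```python
import numpy as np

def snhic_pc_s(matrix, visible_bins=None):
    """Contact probability Pc(s) for a single-nucleus Hi-C map.

    matrix: square binned contact matrix of one chromosome.
    visible_bins: number of bins with at least one contact in the experiment;
    if None, it is estimated from the matrix itself.
    Returns arrays of genomic distances (in bins) and Pc values.
    """
    n = matrix.shape[0]
    if visible_bins is None:
        visible_bins = int((matrix.sum(axis=0) + matrix.sum(axis=1) > 0).sum())
    distances = np.arange(1, n)
    pc = np.array([np.trace(matrix, offset=s) for s in distances], dtype=float)
    pc /= max(visible_bins, 1)  # normalize by the number of visible bins
    return distances, pc
```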
For the snHi-C background models, we used a Pc(s) genomic distance step size of 10 kb. We assessed the region visibility in snHi-C by the marginal number of contacts of the region, marg_i (in other words, the total number of observed intrachromosomal contacts for a genomic region), using maps at 10-kb resolution. For each snHi-C map, we calculated Pc(s) and the marginal distribution of contacts and shuffled the positions of the contacts for each chromosome, so that the marginal distribution was preserved and Pc(s) was at least approximated (Supplementary Fig. 4a–d). Note that for 3D modeling, we used a cruder shuffling that does not preserve the marginal distribution of contacts.

Assessment of the percentage of recovered contacts

To compare snHi-C datasets across species (Fig. 2a–c), we assessed the percentage of recovered contacts out of all possible contacts per nucleus. First, we determined the theoretical size of the pool of restriction fragments for the nucleus of each species and cell type. For Drosophila, we used a diploid male cell line. Thus, the total number of restriction fragments was ~600,000, composed of twice the number of fragments in the autosomes (2 × 265,167, as assessed by in silico digestion of dm3) plus the number of fragments on chromosome X (64,108). For mice, Flyamer et al. (2017) analyzed oocytes with four copies of the genome, resulting in a total of 4 × 6,407,802 ~ 25,600,000 fragments. Gassler et al. (2017) analyzed G2 zygote pronuclei with two copies of the genome, resulting in a total of 2 × 6,407,802 ~ 12,800,000 fragments (we did not distinguish between the maternal and paternal pronuclei because the contribution of chromosome X is not as significant for the mouse genome). We next assessed the upper limit of the total number of possible contacts per single nucleus, which is achieved when each restriction fragment forms two contacts with the ends of other restriction fragments from the pool. Because the valency of each fragment is two, the theoretical upper limit is equal to the number of restriction fragments. We then divided the total number of observed contacts (recovered by ORBITA) by this upper bound of the possible number of contacts, and we recovered up to ~16% of the total number of possible contacts for Drosophila (see Fig. 2b); this number is approximately 2.6% for the best mouse dataset. The mean percentage of recovered contacts is 4.9% for our dataset and <1% for Flyamer et al. (2017) and Gassler et al. (2017). However, this assessment of the percentage of recovered contacts is not exact for several reasons: (1) we did not perform sorting prior to snHi-C to isolate G1 cells; hence, some regions of the genome might have an increased copy number in S or G2 cells; (2) some regions of the genome might be affected by deletions and copy number variations that were not accounted for in our analysis. However, even in the worst-case scenario, if we imagine that all Drosophila cells are in the G2 phase of the cell cycle, we recovered at least 8% of all possible contacts for the best cells in our analysis, which is still a substantial improvement compared to the recovery for the best cells from mammalian studies.

TAD calling in snHi-C and bulk BG3 in situ Hi-C data

We used Hi-C map segmentation with lavaburst (v0.2.0) (https://github.com/nvictus/lavaburst) with the modularity scoring function for TAD calling in Hi-C maps at 10-kb resolution32. All segments smaller than or equal to 3 bins (30 kb) were considered to be inter-TADs24.
lavaburst has a gamma (γ) parameter controlling the size and the number of resulting TADs. We varied γ from 0 to 375 with a step of 0.1 for the Drosophila datasets. The range and the step were selected to guarantee comprehensive coverage of both extremes (a small number of unusually large TADs and a large number of the smallest possible TADs). We observed a sharp decrease in median TAD size and an increase in the number of TADs with increasing γ (Fig. 3b, Supplementary Fig. 5). After reaching the peak, the number of TADs starts to drop because many segments fall below the minimal allowed TAD size. For large γ, both the number of TADs and the mean TAD size reach a plateau at low levels. We considered the γ value yielding the maximum number of TADs (γmax) as the most informative segmentation reachable by the algorithm for a particular dataset. At γmax, the mean TAD size averaged across cells is ~70 kb, compared to the expected 120-kb size of Drosophila TADs24; thus, we considered segmentation at this level to correspond to sub-TADs. To guarantee a uniform γ selection procedure for all the cells, we arbitrarily selected γmax/2 to obtain the resulting TAD segmentation (mean TAD size ~90 kb). For the other resolutions of the snHi-C maps, the same TAD calling protocol was applied, except that the inter-TAD size threshold was set to 60 kb (3 bins) for the 20-kb resolution and 120 kb (3 bins) for the 40-kb resolution.

Robustness of TAD calling

To assess TAD calling robustness and filter out potentially artifactual TAD boundaries, we performed TAD calling on snHi-C maps with random subsampling of the contacts as a control. For each cell, we performed ten iterations of independent subsampling of contacts, leaving 95%, 90%, …, 5% of the initial number of unique contacts per dataset. For each subsampling, we performed the TAD calling in the same manner as for the full dataset. We then assumed the bins found as TAD boundaries in the full snHi-C maps with no subsampling to be positives and inner TAD bins to be negatives. Based on this definition, we calculated both false positive rates (FPR) and false negative rates (FNR) for each cell and all subsampling levels. As expected, the FNR gradually decreased with the percentage of remaining contacts. The FPR reached a maximum at the 10–30% subsampling levels and then gradually decreased (Supplementary Fig. 6a, b). We then defined TAD boundary support for a given subsampling level (X%): for each genomic bin, the boundary support is the number of subsampling iterations retaining at least X% of the contacts in which the bin was annotated as a TAD boundary (allowing a one-bin offset). We used TAD boundary support as a predictor of the observed TAD boundaries in each cell (with no subsampling of the snHi-C dataset). We plotted receiver operating characteristic (ROC) curves for each X = (95%, 90%, …, 5%) and calculated the ROC area under the curve (AUC) for each case (Supplementary Fig. 6c). Based on the largest ROC AUC, we selected the subsampling level most predictive of boundaries, X = 90% (ROC AUC 0.9969; Supplementary Fig. 6c). We then chose the TAD boundary support threshold by optimizing the accuracy. We obtained an accuracy of 0.9765 for the final criterion that the TAD boundary support be larger than 45% across the 90–95% subsampling levels. We refined the boundaries based on this final criterion and observed only a mild decrease in the number of boundaries per cell (Supplementary Fig. 6d). Thus, we conclude that the TAD calling procedure is robust to subsampling.
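A minimal sketch of the γ scan described above; segment_at_gamma() is a hypothetical stand-in for the lavaburst modularity segmentation (the actual calls go through the lavaburst API), so only the γmax selection logic is spelled out:

```python
import numpy as np

MIN_TAD_BINS = 4  # segments of <= 3 bins (30 kb at 10-kb resolution) are inter-TADs

def count_tads(segments, min_bins=MIN_TAD_BINS):
    """Count segments large enough to be called TADs; segments are (start_bin, end_bin)."""
    return sum(1 for start, end in segments if end - start >= min_bins)

def scan_gamma(matrix, segment_at_gamma, gammas=np.arange(0, 375, 0.1)):
    """Pick gamma_max (maximum number of TADs) and return the segmentation at gamma_max / 2.

    `segment_at_gamma(matrix, gamma)` is a hypothetical wrapper around lavaburst
    returning a list of (start_bin, end_bin) segments.
    """
    n_tads = [count_tads(segment_at_gamma(matrix, g)) for g in gammas]
    gamma_max = gammas[int(np.argmax(n_tads))]
    tads = [seg for seg in segment_at_gamma(matrix, gamma_max / 2)
            if seg[1] - seg[0] >= MIN_TAD_BINS]
    return gamma_max, tads
```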
We used the non-refined boundary set throughout the paper unless stated otherwise. For the refined boundary set, we allowed a 10-kb offset for each boundary and assessed the number of cells in which each genomic bin was annotated as a boundary. We then defined stable boundaries as bins that were annotated as boundaries in at least 50% of cells (≥7), and unstable boundaries as bins annotated as boundaries in less than 50% of cells (<7). We compared stable boundaries with the boundaries conserved between Kc167 and BG3 cells46. For that, we obtained TAD positions from ref. 46, mapped them to the dm3 genome with liftOver, and coarse-grained the coordinates to 10-kb bins. We then allowed a 10-kb offset and counted the boundaries that overlapped with the stable boundaries obtained in the single-cell analysis.

Segmentation comparison

We introduced two types of similarity scores for TAD/sub-TAD segmentation comparison. (1) The percentage of shared boundaries: we fixed the first segmentation and compared it with the second one. Each TAD boundary bin of the second segmentation was allowed to include its two closest neighbors at a 10-kb distance (one-bin offset). The number of shared boundaries between two segmentations was calculated as a simple intersection of sets, and the percentage was obtained by dividing by the total number of bins annotated as TAD boundaries in the first segmentation. (2) The Jaccard index for TAD bins: only the bins inside a TAD (excluding the boundaries) were considered. The number of TAD bins shared between two segmentations was calculated and divided by the total number of bins annotated as TAD bins in both segmentations. To assess the significance of the obtained TAD similarity scores, we randomized the locations of TAD boundaries, preserving the distributions of TAD and inter-TAD sizes and the number of TADs/inter-TADs per chromosome. Each randomization was performed 1000 times; the distribution of scores was approximated by a Gaussian distribution, and p-values were inferred from these backgrounds. The same procedure was used for sub-TADs.

Non-backtracking approach for annotation of TADs in single-cell contact maps

The chromatin network, constructed on the basis of the single-cell Hi-C data, can be classified as sparse (i.e., the number of actual contacts per bin in a single-cell contact matrix (the adjacency matrix of the network) is much less than the matrix size N). The sparsity of the data significantly complicates the community detection problem in single cells. It is known that upon dilution of the network, there is a fundamental resolution threshold for all community detection methods82. Furthermore, traditional operators (adjacency, Laplacian, modularity) fail far above this resolution limit (i.e., their leading eigenvectors become uncorrelated with the true community structure above the threshold)43. This is explained by the emergence of tree-like subgraphs (hubs) that overlap with true clusters in the isolated part of the spectrum of these operators. Localization on hubs rather than on true communities of the network is a drawback of all conventional spectral methods in the sparse regime. To overcome the sparsity issue and to make spectral methods useful in the sparse regime, Krzakala et al.43 proposed constructing the transfer matrix of non-backtracking (NBT) random walks on a directed network.
The NBT operator B is defined on the edges i → j, k → l as follows:

$$B_{i \to j,k \to l} = \delta_{il}(1 - \delta_{jk})$$

By construction, NBT walks cannot revisit the same node on the subsequent step and, thus, they do not concentrate on hubs. It has been shown that the non-backtracking operator is able to resolve the community structure in a sparse stochastic block model up to the theoretical resolution limit. In a recently published paper42, we proposed an NBT operator in which the expected contact probability is neutralized, for the purpose of large-scale splitting of a sparse polymer network into two compartments. Here, we are interested in the small-scale clustering into TADs, for which the conventional NBT operator is appropriate. To eliminate the compartmental signal from the data, we first cleared all chromosome contact matrices starting from the diagonal corresponding to a 1-Mb separation distance (the 100th diagonal at 10-kb resolution). To respect the polymeric nature of the contact matrices, we filled all empty cells on the leading sub-diagonals with 1. Then, the NBT spectra of all single-cell contact matrices were computed. The majority of eigenvalues of the non-Hermitian NBT operator are located inside a disc in the complex plane, while a number of isolated eigenvalues with large amplitudes lie on the real axis. The edge of the isolated part of the spectrum was defined as the real part of the largest-in-absolute-value eigenvalue with a non-zero imaginary part. All eigenvalues λi such that Re(λi) > rc are isolated, and the corresponding eigenvectors correlate with the annotation into TADs. The position of the spectral edge, determined by the procedure above, has been found to be very close to the edge of the disc for the stochastic block model, \(r_c = \sqrt {d^{ - 1}\left\langle {\frac{d}{{d - 1}}} \right\rangle }\), where d is the vector of degrees83. The typical number of isolated eigenvalues was around 100 for dense contact matrices and somewhat smaller for sparser ones. The leading eigenvectors define the coordinates \(u_j^{(i)}, j = 1,2, \ldots ,N\) of the nodes (bins) of the network in a space of reduced dimension k << N. At the second step, the clustering of the data was performed using the spherical k-means method, implemented in the Python library spherecluster84. Since the respective leading eigenvectors are linearly independent, the number of isolated eigenvalues establishes a lower bound on the dimension k of the reduced space used for the clustering algorithm, and this dimension in turn establishes a lower bound on the number of clusters. To take into account the hierarchical organization of TADs, we provided the spherical k-means with a number of clusters somewhat larger than this lower bound. Although the final splitting was not particularly sensitive to this number, we chose to split the network into 2.5k clusters in order to obtain the same mean number of TADs per chromosome as with the modularity method (171 TADs). The annotations produced by the spherical k-means on the single-cell Hi-C matrices were contiguous (i.e., the clusters respected the genomic sequence order, thus resembling TADs). Clusters (i) of size less than 30 kb and (ii) with the number of contacts equal to 2(l − 1), where l is the cluster length in bins (i.e., with no contacts other than on the sub-diagonals), were excluded from the set as inter-TAD regions.
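A minimal sketch of the NBT construction on a small binary contact network, following the definition given above; sparse matrices and scipy's eigenvalue solver are used for illustration of the operator and spectral-edge logic, not as the optimized pipeline used in the paper:

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import eigs

def non_backtracking_spectrum(adjacency, n_eigs=150):
    """Build the NBT operator B on directed edges and return its leading eigenvalues.

    adjacency: symmetric 0/1 numpy array (a single-cell contact matrix with
    sub-diagonals filled and long-range diagonals removed).
    Following the definition above, B[(i->j), (k->l)] = 1 if l == i and j != k.
    Not optimized: the edge-by-edge construction is quadratic in the number of edges.
    """
    edges = [(i, j) for i in range(adjacency.shape[0])
             for j in np.nonzero(adjacency[i])[0]]
    n_edges = len(edges)
    B = lil_matrix((n_edges, n_edges))
    for row, (i, j) in enumerate(edges):
        for col, (k, l) in enumerate(edges):
            if l == i and j != k:
                B[row, col] = 1
    vals = eigs(B.tocsr(), k=min(n_eigs, n_edges - 2), return_eigenvectors=False)
    # Spectral edge r_c: real part of the largest-modulus eigenvalue with a
    # non-zero imaginary part; real eigenvalues beyond it are "isolated".
    bulk = [v for v in vals if abs(v.imag) > 1e-8]
    r_c = max(bulk, key=abs).real if bulk else 0.0
    isolated = sorted(v.real for v in vals if abs(v.imag) <= 1e-8 and v.real > r_c)
    return vals, r_c, isolated
```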
The final median size of the TADs obtained by this algorithm across all single cells was 110 kb (range 60–260 kb), and the mean chromosome coverage was 82% (range 57–93%). The same analysis of shuffled contact maps revealed a similar number, size, and coverage of domains formed purely due to fluctuations. The boundaries of the NBT TADs in single cells were significantly conserved from cell to cell: the mean pairwise fraction of matched boundaries was 44% for all the cells and 59% for the five densest ones (for the shuffled cells with preserved stickiness and scaling (see the MSS model below), the corresponding fractions were 38% and 50%). In comparison, the mean fraction of conserved modularity-based boundaries is somewhat lower: 42% for all pairs of cells in the analysis and 52% for the five densest cells, whereas the number of TADs per chromosome is the same for the two methods (171). Between the two methods, the mean fraction of matched boundaries for corresponding cells is 61%.

Compartment annotation in snHi-C and bulk BG3 in situ Hi-C

For compartment annotation in bulk BG3 in situ Hi-C, we used eigenvector decomposition of the cis-interaction maps for each chromosome, as implemented in the cooltools call-compartments tool version 0.2.0 (https://github.com/mirnylab/cooltools). We then oriented the sign of the eigenvector based on GC content (positive values corresponding to an A compartment with larger GC content)26. We next carried out a saddle plot analysis for each snHi-C dataset based on the bulk BG3 in situ Hi-C compartment annotation32. For this procedure, the bins in raw snHi-C maps were reordered by ascending first-eigenvector values and averaged into 5 × 5 saddle plots32.

Epigenetic analysis of TAD boundaries

For the functional annotation of TAD boundaries, we downloaded modENCODE normalized array files85: total RNA of the ML-DmBG3-c2 cell line assessed by RNA tiling array (modENCODE id 713) and ChIP-chip data for MOF (id 3041), BEAF-32 (id 921), Chriz (275), CP190 (924), CTCF (3280), dmTopo-II (5058), GAF (2651), H1 (3299), HP1a (2666), HP1b (3016), HP1c (942), HP2 (3026), HP4 (4185), ISWI (3030), JIL-1 (3035), mod(mdg4) (324), MRG15 (3045), NURF301 (5063), Pc (325), RNA-polymerase-II (950), Su(Hw) (951), Su(var)3-7 (2671), Su(var)3-9 (952), WDS (5148), H3 (3302), H3K27ac (295), H3K27me3 (297), H3K36me1 (299), H3K36me3 (301), H3K4me1 (2653), H3K4me3 (967), H3K9me2 (310), H3K9me3 (312), and H4K16ac (316). For RNA-Seq coverage, we used the data from ref. 24. The files were binned at 10-kb resolution by summation. We plotted the ChIP-chip signal around different types of boundaries with the pybbi utility (https://github.com/nvictus/pybbi.git), which is based on UCSC tools86, and constructed six sets of boundaries: boundaries found in the bulk in situ Hi-C, boundaries found in the merged snHi-C dataset, boundaries present in ≥50% of cells (≥7 cells, stable boundaries), boundaries present in <50% of cells (<7 cells, unstable boundaries), boundaries present in just one cell, and random boundaries. To obtain randomized boundaries, we shuffled the bulk in situ Hi-C boundaries across the Drosophila genome, preserving the number of boundaries per chromosome. We also used the bins from the inner parts of TADs as a control for the epigenetic analysis.
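A minimal sketch of how boundary bins can be partitioned into the stable, unstable, and cell-specific sets described above, assuming per-cell boundary calls are available as sets of 10-kb bin indices; for simplicity, cell-specific boundaries (support of one cell) are kept separate from the remaining unstable ones:

```python
def classify_boundaries(boundaries_per_cell, stable_min_cells=7, offset=1):
    """Split boundary bins into stable, unstable, and cell-specific sets.

    boundaries_per_cell: list of sets of boundary bin indices (10-kb bins), one per cell.
    A bin is supported by a cell if that cell has a boundary within `offset` bins.
    Bins supported by >= stable_min_cells cells are stable; bins supported by a
    single cell are cell-specific; the remaining bins are unstable.
    """
    all_bins = set().union(*boundaries_per_cell)
    support = {
        b: sum(any(abs(b - x) <= offset for x in cell) for cell in boundaries_per_cell)
        for b in all_bins
    }
    stable = {b for b, s in support.items() if s >= stable_min_cells}
    cell_specific = {b for b, s in support.items() if s == 1}
    unstable = {b for b, s in support.items() if 1 < s < stable_min_cells}
    return stable, unstable, cell_specific
```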
Functional annotation of distant contacts
For the functional annotation of distant contacts, the 10-kb genomic bins were separated into four groups based on the chromatin states for BG3 from Kharchenko et al.54: active chromatin (>0.5 of RED and MAGENTA colors), inactive chromatin (>0.5 LIGHT GRAY), Polycomb chromatin (>0.5 DARK GRAY), and unannotated (all the rest). The thresholds for functional enrichment of particular types of chromatin were selected to guarantee the selection of regions with the most prominent properties of active/inactive/Polycomb chromatin. For the annotation of expression activity, the 10-kb genomic bins were split into five groups based on the average expression from two RNA-seq replicates in BG3 cells24 (0 expression, 38.1–40%, 40–60%, 60–80%, and top 20% expression). We were not able to split the data using an even grid of percentiles (e.g., 0–20%, 20–40%) because ~38% of all genomic bins had zero expression in both replicates. The same functional annotation was used later for coloring the polymer models.

Average loop
For the construction of an average loop of A-compartment regions (Fig. 4f), B-compartment regions (Fig. 4g), the MSL complex (Fig. 4h), and Polycomb (Fig. 4i), we selected the top 1000 genomic regions with the highest abundance of the corresponding genomic annotation as potential looping positions. A and B compartments were assessed by the cis-derived eigenvector of the bulk BG3 Hi-C data. MSL ChIP-Seq was obtained from Ramirez et al.51 (GEO ID GSE58821). dRING binding data were obtained from modENCODE as a ChIP-chip normalized array file (ID 92754). We considered the pairs of potential looping positions corresponding to intrachromosomal interactions at genomic distances of more than 600 kb, separated by up to 50 other looping positions. The snipping of square 600-kb Hi-C windows centered on the corresponding looping positions was done with cooltools (https://github.com/mirnylab/cooltools/tree/master/cooltools). The aggregation was performed by summation, and log10 values were plotted as heatmaps.

Assessment of folding hierarchy of TADs
To assess the folding hierarchy at the level of TADs, we used the assumption that successive sub-TADs forming the same TAD will have more interactions in the observed real snHi-C maps than in the control maps described in the section "snHi-C background model" of these Methods. We calculated the number of contacts directly from the snHi-C maps and the control maps. Only sequential sub-TADs falling into the same TAD were considered. The distribution of the number of contacts in the windows between sequential sub-TADs was calculated, and we compared these distributions between the real snHi-C maps and the control maps. For each cell, we used either the TAD/sub-TAD annotation from the corresponding snHi-C map or the TAD/sub-TAD annotation from bulk in situ Hi-C.

Marginal scaling (MS) and marginal scaling and stickiness (MSS) models
We carried out a statistical analysis of the single-cell Hi-C maps to provide statistical arguments supporting the premise that the clustering observed in snHi-C contact matrices is not random. For this, we used two different models of a polymer network based on Erdos-Renyi graphs, where bins of the contact map correspond to graph vertices and contacts between bins are graph edges87 (Supplementary Fig. 4a). In the MS model, we require the probability of contact between nodes to respect the contact probability of the experimental contact map, i.e.
$$P(s) = P_{\mathrm{c}}(|i - j|)$$ The decay of the contact probability originates from the intrinsic linear connectivity of the chromatin nodes; therefore, it is an important ingredient for studying fluctuations in a polymer network. The probability of a link between nodes i and j, i, j = 1, 2, …, N, in the random graph is thus defined as follows: $$p_{ij} = \frac{P_{\mathrm{c}}(|i - j|)}{\sum_{s = 1}^{N - 1} (N - s)P_{\mathrm{c}}(s)}N_{\mathrm{c}}$$ where the normalization factor in the denominator guarantees that the mean number of links in the graph equals Nc (i.e., the number of experimentally observed links in each single cell). To obtain the average scaling, we merge all contacts from the available single cells and compute the average Pc(s). Given the probability pij from Eq. 2, we randomly generate adjacency matrices that have a homogeneous distribution of contacts along the diagonals and do not respect local peculiarities of the bins, such as insulation score, acetylation, and protein affinity. Nevertheless, some non-homogeneity (clustering) of contacts still emerges as a result of stochasticity in each realization of this graph (Supplementary Fig. 4e). The MSS model introduces probabilistic non-homogeneity along the diagonals of the adjacency matrices through the definition of a "stickiness" of bins. Specifically, by "stickiness" we understand a non-selective affinity ki of a bin i to other bins; the probability that the bin i forms a link with any other bin in the polymer graph is proportional to its stickiness. Thus, in the MSS model the clusters of contacts close to the main diagonal of the contact matrices form as a result of the different "stickiness" of bins. Stickiness might effectively emerge as a result of a particular distribution of "sticky" proteins, such as PcG proteins, which are known to mediate bridging interactions between nucleosomes and to participate in the stabilization of the repressed chromatin state. Assuming that the stickiness is distributed independently of the polymer scaling Pc(|i − j|), we use the following expression for the probability of a link, pij, in the MSS model: $$p_{ij} = \frac{k_ik_jP_{\mathrm{c}}(|i - j|)}{\sum_{i < j} k_ik_jP_{\mathrm{c}}(|i - j|)}N_{\mathrm{c}}$$ To derive the values of stickiness, we calculated the coverage of each bin in the merged contact map, \(\tilde k_i\), which stands for the average number of contacts at a particular bin. Due to the polymer scaling, the rates of contacts along each row (column) vary; thus, \(\tilde k_i\) is not equal to the stickiness, \(\tilde k_i \ne k_i\). To determine the stickiness values ki, one should relate the experimental coverage \(\tilde k_i\) to the theoretical mean number of contacts per bin, according to Eq. 3: $$\tilde{k}_{i} = \sum_{j} p_{ij} = k_{i}\alpha_{i}$$ where \(\alpha_i\) is the "activity" of the surrounding bins, measured for the i-th bin: $$\alpha_i = \frac{1}{Z}\sum_j k_jP_{\mathrm{c}}(|i - j|),\quad Z = \frac{1}{N_{\mathrm{c}}}\sum_{i < j} k_ik_jP_{\mathrm{c}}(|i - j|)$$ Equation 3 sets a system of N non-linear equations that cannot be solved analytically. To determine the stickiness values, we implement a numerical method of iterative approximations. Namely, we start with $$k_i^{(0)} = \tilde k_i,\quad \alpha_i^{(0)} = \alpha_i(\tilde k_i)$$ and recalculate \(k_i^{(1)}\) using Eqs. (4, 5) at the second step.
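This iteration can be written compactly in NumPy. The sketch below assumes that the merged contact-probability curve Pc(s) is given as an array indexed by genomic distance in bins, together with the observed per-bin coverage and the total number of contacts Nc of a map; the update rule k ← coverage/α and the convergence tolerance are illustrative implementation choices rather than details stated in the text.

```python
import numpy as np

def solve_stickiness(coverage, pc, n_contacts, n_iter=200, tol=1e-8):
    """Iteratively solve coverage_i = k_i * alpha_i for the stickiness k (MSS model).

    coverage   : (N,) observed number of contacts per bin (k~ in the text)
    pc         : (N,) contact probability versus genomic distance, pc[s] = Pc(s)
    n_contacts : total number of contacts Nc in the map
    """
    N = len(coverage)
    P = pc[np.abs(np.subtract.outer(np.arange(N), np.arange(N)))]  # P[i, j] = Pc(|i - j|)
    np.fill_diagonal(P, 0.0)
    k = coverage.astype(float).copy()                 # k^(0) = observed coverage
    for _ in range(n_iter):
        # Z = (1/Nc) * sum_{i<j} k_i k_j Pc(|i-j|)
        Z = np.triu(np.outer(k, k) * P, 1).sum() / n_contacts
        alpha = (P @ k) / Z                           # activity of the surroundings of bin i
        k_new = coverage / np.maximum(alpha, 1e-12)   # enforce coverage_i = k_i * alpha_i
        if np.max(np.abs(k_new - k)) < tol:
            k = k_new
            break
        k = k_new
    # MSS link probabilities p_ij built from the converged stickiness values.
    p_link = np.outer(k, k) * P / np.triu(np.outer(k, k) * P, 1).sum() * n_contacts
    return k, alpha, p_link
```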
After several recursive steps, we find good convergence of the stickiness and activity to their limiting values \(k_i^\infty\) and \(\alpha_i^\infty\). In particular, the derived values of the stickiness produce a theoretical coverage in good agreement with the experimental coverage \(\tilde k_i\); see Supplementary Fig. 4f, g. Therefore, the derived null model of single-cell maps reproduces, on average, the observed coverage of contacts of each bin by means of the individual stickiness assignment. We would like to point out the difference between the limiting values of the stickiness and \(\tilde k_i\), used as the starting approximation in the iterative procedure; Supplementary Fig. 4h. This difference is a result of the non-homogeneous redistribution of contacts in each particular row in accordance with the marginal polymeric scaling Pc(|i − j|).

Number of contacts in windows
The MS and MSS models introduced above demonstrate apparent clustering of the generated contacts close to the main diagonal in realizations of the adjacency matrices. In the MS model, this is purely due to fluctuations: the mean weight of a link wij = ps depends only on the genomic distance between the bins s = |i − j| in the respective Poisson version of the weighted network. In contrast, in the MSS model, the non-homogeneity of bin stickinesses allows for a deterministic non-homogeneous distribution of contacts along the main diagonal. To statistically compare the clustering of contacts generated by the two models with the clustering in the experimental single-cell Hi-C maps, we studied the distributions of the number of contacts in "windows" of different sizes. The inspected windows are isosceles triangles with the base located on the main diagonal and a fixed angle between the congruent sides. These windows look like TADs but, in contrast to the latter, have a fixed size throughout the genome. At a given window size W, we sampled the number of contacts falling in the defined windows in each snHi-C map. We compared these samples with samples originating from 100 random MS-generated maps and 100 random MSS-generated maps with the derived limiting values of stickiness (see the previous section for a discussion of the models). Note that in the theoretical models (MS and MSS), all contacts are statistically independent: in both models, the number of contacts falling in a window of size W can be interpreted as a number of "successes" occurring independently in a certain fixed interval. In the MS model, the "success" rate is constant along each diagonal; thus, for rather sparse MS maps (i.e., sufficiently small rates), one would expect the observed contacts in the windows to follow the Poisson distribution. In the MSS maps, the stickiness distributions introduce non-homogeneity into the "success" rates along the diagonals; however, as our analyses suggest, the random MSS maps still exhibit much more satisfactory Poisson statistics than their original experimental counterparts; Supplementary Fig. 4j, k. Deviations from the Poisson statistics of the snHi-C contact maps are evaluated by the p-value of the χ2 goodness-of-fit test (Supplementary Fig. 4k). The heatmaps of the common logarithm of the p-values for the top-10 single cells and the corresponding MS and MSS maps are presented in Supplementary Fig. 4j. The random maps (the second and third rows) demonstrate reasonably even distributions of the p-values across distinct single cells that rarely fall below the significance level α = \(10^{-5}\).
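A bare-bones version of the window statistics and of the goodness-of-fit evaluation might look as follows. The triangular windows are taken here as the upper-triangular parts of W × W blocks sitting on the main diagonal, the windows are placed without overlap, and the histogram binning and tail handling of the χ2 test are simplified; these are assumptions of the sketch, not details taken from the text.

```python
import numpy as np
from scipy import stats

def window_counts(contacts, W):
    """Number of contacts in triangular windows of base W along the main diagonal.

    contacts : (N, N) binary symmetric (or upper-triangular) contact matrix
    """
    N = contacts.shape[0]
    counts = []
    for start in range(0, N - W + 1, W):
        block = np.triu(contacts[start:start + W, start:start + W], 1)
        counts.append(int(block.sum()))
    return np.array(counts)

def poisson_gof_pvalue(counts):
    """Chi-square goodness-of-fit p-value of a Poisson fit to the window counts."""
    lam = counts.mean()
    kmax = counts.max()
    observed = np.bincount(counts, minlength=kmax + 1).astype(float)
    expected = stats.poisson.pmf(np.arange(kmax + 1), lam) * len(counts)
    # Lump the remaining tail mass into the last bin so the totals match.
    expected[-1] += stats.poisson.sf(kmax, lam) * len(counts)
    keep = expected > 1e-9
    chi2 = ((observed[keep] - expected[keep]) ** 2 / expected[keep]).sum()
    dof = keep.sum() - 1 - 1   # minus one fitted parameter (lambda)
    return stats.chi2.sf(chi2, max(dof, 1))
```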
Several atypically low p-values correspond either to the densest single cells and small window sizes (upper-left corner), for which the sparse Poisson limit is violated, or to a rather uneven distribution of stickiness for a given chromosome. Notably, the snHi-C maps demonstrate remarkable deviations from the Poisson statistics for small window sizes W < 40 bins (<400 kb). As can be seen from the heatmaps (Supplementary Fig. 4j), the χ2 test rejects the null hypothesis at the significance level α = \(10^{-5}\) for most of the single cells at small scales. Therefore, the probability that the experimental contact maps are described by the Poisson statistics is very low (below α). To understand the source of the inconsistency between the experimental and Poisson distributions, we plotted the histograms of the number of contacts along with their best Poisson fits for W = 10 (Supplementary Fig. 4k, left) and W = 40 (Supplementary Fig. 4k, right). The presence of heavy tails at large counts and shoulders at small counts in the experimental histograms results in the rejection of the null hypothesis. Finally, the samples corresponding to larger windows are notably better described by the Poisson distribution, exhibiting a level of p-values similar to the random maps. The crossover W0 ≈ 40 bins (400 kb) corresponds to the scale of 3–4 typical TADs; this implies that the positioning of the contacts inside a single TAD is significantly correlated. Correlations between the contacts of different pairs of loci can originate from a specific non-ideal folding of chromatin (e.g., fractal globule) or be a signature of active processes (e.g., loop extrusion) operating at the scale of one TAD. Larger window sizes accumulate contacts from different TADs, whereas most of the inter-TAD contacts are much less correlated. As a result, we see reasonable Poisson statistics of the number of contacts in larger windows with W > W0. Taken together, we conclude that correlation between contacts is a structural feature of the experimental single-cell maps and that the clusters (TADs) identified in the maps cannot be reduced to random fluctuations imposed by white noise or imperfections of the experimental setup.

The cells were harvested overnight on poly-L-lysine-coated coverslips placed in culture flasks. The cells were fixed in 4% paraformaldehyde for 10 min, permeabilized in 0.5% Triton X-100, washed in PBS, dehydrated in an ethanol series, air-dried, stored at room temperature for 2 days, and then frozen at −80 °C. Probes were prepared from fosmids by labeling with fluorophore-conjugated dUTPs using nick-translation. Approximately 150 ng of each probe was used in hybridization. Denaturation was performed at 80 °C for 30 min in 70% formamide (pH 7.5), 2× SSC. Hybridization of probes was done for 24 h in 50% formamide, 2× SSC, 10% dextran sulfate, 1% Tween 20. Washing steps were performed in 2× SSC at 45 °C, followed by 0.1× SSC at 60 °C and 4× SSC, 0.1% Triton X-100. For imaging, cells were counterstained with DAPI, and epifluorescent images were acquired using a microscope setup comprising a Zeiss Axiovert 200 fluorescence microscope (Carl Zeiss UK, Cambridge, UK), an X-Cite ExFo 120 Mercury Halide fluorescent source (Exfo X-cite 120, Excelitas Technologies) with liquid light guide and 10-position excitation, neutral density, and emission filter wheels (Sutter Instrument, Novato, CA), an ASI PZ2000 3-axis XYZ stage with integrated piezo Z-drive (Applied Scientific Instrumentation, Eugene, OR), and a Retiga R1 CCD camera (QImaging, Surrey, BC, Canada).
The filter wheels were populated with a #89903 ET BV421/BV480/AF488/AF568/AF647 quinta filter set (Chroma Technology Corp., Rockingham, VT). Hardware control and image capture were carried out using µManager 1.4 (https://open-imaging.com/)88. Images were deconvolved using Nikon NIS-Elements, and measurements were taken using Imaris.

Polymer simulations
Simulation of the 3D chromatin fiber enabled us to substantiate assumptions about the factors that play key roles in chromatin organization and to obtain important information about its packaging. We focused on the static properties of the system and did not consider its dynamic properties.

Modeling pipeline, general description of the procedure
Many methods are currently used to perform computer modeling of polymers. Due to the actual size and complexity of chromatin, all-atom or united-atom models cannot be used to simulate the spatial scales of interest. The dissipative particle dynamics (DPD) technique was used because it enables modeling of the physical properties of polymer systems59. DPD is a coarse-grained molecular dynamics method. Newton's equations are solved numerically for each particle in the system at every time step. The total force consists of conservative, dissipative, random, and elastic forces. The conservative force is described by a soft potential within a sphere with cutoff radius Rc = 1.0. The soft potential has no singularity at the zero point (Supplementary Fig. 21a), which makes it possible to use a large time step in the velocity Verlet integration scheme, in contrast to classical molecular dynamics (CMD) with the Lennard-Jones potential; the typical time step in CMD is 20 times smaller than in DPD. The solvent is taken into account explicitly; this is necessary for the DPD thermostat to work89,90. Temperature control of the system is ensured by a balance of the dissipative and random forces, which conserve momentum. The elastic force simulates the presence of a bond between beads. An NVT ensemble (constant number of particles, volume, and temperature) is used. A detailed description of the simulation method can be found elsewhere91. We used our own implementation of DPD, which is 2D-parallelized and lightweight92. In all simulations, the following parameters were used: app = ass = 25.0, aps = 26.63 (soft-potential repulsion coefficients), corresponding in terms of Flory-Huggins theory to \(\chi = 0.5 = 0.306 \ast (a_{\mathrm{ps}} - a_{\mathrm{pp}})\), where app is the repulsion coefficient between polymer and polymer beads, ass between solvent and solvent beads, and aps between polymer and solvent beads; l0 = 0.5 (undeformed bond length), k = 40 (bond stiffness), dt = 0.04 (integration timestep), σ = 3 (number density), and a simulation box of 22 × 22 × 22 DPD a.u. With these parameters, the polymer chain (or chromatin fiber) is able to self-intersect but still has an effective excluded volume. At χ = 0.5, a single polymer chain in dilute solution has a Gaussian conformation (i.e., it corresponds to a simple random walk). Each simulation was organized as follows: (1) the polymer chain is generated as a random walk within a cubic cell with a size of 10 DPD units; (2) solvent particles are added into the simulation cell of size 22 DPD units until the number density reaches σ = 3; (3) additional bonds between beads are added according to the snHi-C contact matrix: if the i-th and j-th beads have a contact and |i − j| > 1, an additional harmonic bond between them is added to the system.
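For step (3), the list of harmonic bonds fed to the simulation can be assembled directly from the binary contact matrix. The sketch below uses illustrative function and argument names rather than the actual input format of the in-house DPD code; it returns the backbone bonds plus one extra bond per long-range contact.

```python
import numpy as np

def build_bond_list(contact_matrix, l0=0.5, k_bond=40.0):
    """Harmonic-bond list for the DPD chain from a binary snHi-C matrix.

    Backbone bonds connect consecutive beads; every contact (i, j) with |i - j| > 1
    adds one extra harmonic bond with the same rest length and stiffness.
    Returns a list of (i, j, l0, k) tuples.
    """
    N = contact_matrix.shape[0]
    bonds = [(i, i + 1, l0, k_bond) for i in range(N - 1)]   # backbone, never removed
    ii, jj = np.nonzero(np.triu(contact_matrix, 2))          # contacts with j - i > 1
    bonds += [(int(i), int(j), l0, k_bond) for i, j in zip(ii, jj)]
    return bonds
```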
We define a contact as an event in which the distance between two beads (i, j) meets the criterion Dij < Rcut = 0.7; this Rcut value corresponds to the average bond length. We count all the contacts in the system, so any bead can have more than one contact. The additional bonds could be overstretched; therefore, the system is equilibrated over \(10^6\) steps. The simulation time is two orders of magnitude longer than the necessary equilibration time (Supplementary Fig. 21b); hence, there are no doubts regarding the equilibration of the system. According to our calculations, the equilibration time is ~20k steps. The equilibrated system contained overstretched bonds, which were removed one by one until the maximum bond length became smaller than the threshold lmax < 1.5 DPD a.u. (Supplementary Fig. 21c, Supplementary Table 2). Backbone bonds were not removed, because they represent reliable information. The system was equilibrated for 20k steps after each bond removal. The values of the single-cell Hi-C matrix elements could vary because the restriction fragments are smaller than the selected resolution (10 kb); since the data regarding the exact number of contacts between two fragments were not used, the contact matrix was considered to be binary. Only the X chromosome was simulated because it is haploid. The X chromosome corresponds to a polymer chain consisting of 2242 beads at 10-kb resolution; every single chain bead represents 50 nucleosomes. Our model does not consider the shape of a 10-kb region or any other internal properties. Control simulations were organized in the same manner, but the contacts were shuffled. Shuffling was performed while maintaining the number of contacts at each genomic distance. We also performed simulations with shuffling at long genomic distances only and with sampling of the contacts from two cells (Supplementary Table 3). The second case shows that reconstruction of the 3D conformation from diploid chromosomes is meaningless in comparison with haploid chromosomes.

Coefficient of the difference
To compare two 3D structures, the corresponding distance matrices were calculated. The orientation of the chain in 3D space does not affect the elements of the distance matrices. The coefficient of the difference is introduced as K = Masym/Msym, where Masym = ||D − D′||/2 and Msym = ||D + D′||/2, with D and D′ the two distance matrices and ||·|| the Euclidean (Frobenius) norm, \(d = \sqrt{a_{11}^2 + a_{12}^2 + \ldots + a_{21}^2 + \ldots}\), where aij are the matrix elements. To avoid the contribution of thermal fluctuations, each distance matrix was averaged over 100 conformations with an output rate of 10k steps. To demonstrate the independence of the final result from the initial conformation, we repeated the calculation ten times for the system with the maximal number of contacts. For each repeat, we created a new independent initial conformation but kept the same set of additional bonds. The initial conformation does not affect the final result in this simulation protocol.

Visualization of epigenetic states
The visualization was performed using the PyMOL software v. 2.3.2 (https://pymol.org/2/). 1D epigenetic data were added to the structure as a bead type and represented with a corresponding color. Analysis of the different epigenetic states was performed via Python scripts (https://github.com/polly-code/DPD_withRemovingBonds). Before visualization, some of the conformations were smoothed by averaging coordinates within a window of 15 beads along the chain.
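Both the coefficient of the difference and the smoothing step reduce to a few lines of NumPy. In the sketch below the distance matrices are assumed to be already averaged over conformations; the handling of the chain ends in the running average (mode="same") is an implementation detail not specified in the text.

```python
import numpy as np

def coefficient_of_difference(D1, D2):
    """K = M_asym / M_sym for two averaged distance matrices (Frobenius norms)."""
    m_asym = np.linalg.norm(D1 - D2) / 2.0
    m_sym = np.linalg.norm(D1 + D2) / 2.0
    return m_asym / m_sym

def smooth_chain(coords, window=15):
    """Running average of bead coordinates along the chain, used before visualization."""
    kernel = np.ones(window) / window
    return np.column_stack([np.convolve(coords[:, d], kernel, mode="same")
                            for d in range(coords.shape[1])])
```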
This approach ensured that thermal fluctuations were avoided (Supplementary Figs. 16, 21). Radial distances and center of mass We calculated the surface of the chromosome territory as a convex hull. The distance to the surface was evaluated as the minimal distance from the particle to the surface, and then the distance arrays were averaged. Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. Raw and processed snHi-C and bulk BG3 in situ Hi-C data are available in the GEO NCBI under accession number "GSE131811". List of publicly available GEO sources used in this study: "GSE122603" (Hi-C for Kc167 and BG3 cell lines for comparison of stable TAD boundaries), "GSE58821" (MSL; ChIP-seq), "GSE69013" (RNA-Seq). List of publicly available modENCODE data sources used in this study: total RNA of ML-DmBG3-c2 cell line assessed by RNA tiling array (modENCODE id 713) and the ChIP-chip for MOF (id 3041), BEAF-32 (id 921), Chriz (id 275), CP190 (id 924), CTCF (id 3280), dmTopo-II (id 5058), GAF (id 2651), H1 (id 3299), HP1a (id 2666), HP1b (id 3016), HP1c (id 942), HP2 (id 3026), HP4 (id 4185), ISWI (id 3030), JIL-1 (id 3035), mod(mdg4) (id 324), MRG15 (id 3045), NURF301 (id 5063), Pc (id 325), RNA-polymerase-II (id 950), Su(Hw) (id 951), Su(var)3-7 (id 2671), Su(var)3-9 (id 952), WDS (id 5148), H3 (id 3302), H3K27ac (id 295), H3K27me3 (id 297), H3K36me1 (id 299), H3K36me3 (id 301), H3K4me1 (id 2653), H3K4me3 (id 967), H3K9me2 (id 310), H3K9me3 (id 312), H4K16ac (id 316). dRING binding data were obtained from modENCODE as a ChIP-chip normalized array file (id 927). All other relevant data supporting the key findings of this study are available within the article and its Supplementary Information files or from the corresponding author upon reasonable request. A reporting summary for this Article is available as a Supplementary Information file. Source data are provided with this paper. The data processing pipeline is available at https://github.com/agalitsyna/sc_dros. The modeling pipeline is available at https://github.com/polly-code/DPD_withRemovingBonds. Dekker, J., Rippe, K., Dekker, M. & Kleckner, N. Capturing chromosome conformation. Science 295, 1306–1311 (2002). CAS PubMed Article ADS PubMed Central Google Scholar Kim, T. H. & Dekker, J. 3C-based chromatin interaction analyses. Cold Spring Harbor protoc. https://doi.org/10.1101/pdb.top097832 (2018). Dixon, J. R. et al. Topological domains in mammalian genomes identified by analysis of chromatin interactions. Nature 485, 376–380 (2012). CAS PubMed PubMed Central Article ADS Google Scholar Nora, E. P. et al. Spatial partitioning of the regulatory landscape of the X-inactivation centre. Nature 485, 381–385 (2012). Sexton, T. et al. Three-dimensional folding and functional organization principles of the Drosophila genome. Cell 148, 458–472 (2012). Lupianez, D. G. et al. Disruptions of topological chromatin domains cause pathogenic rewiring of gene-enhancer interactions. Cell 161, 1012–1025 (2015). Symmons, O. et al. Functional and topological characteristics of mammalian regulatory domains. Genome Res. 24, 390–400 (2014). Dixon, J. R., Gorkin, D. U. & Ren, B. Chromatin domains: The Unit of Chromosome Organization. Mol. CeLL 62, 668–680 (2016). Franke, M. et al. Formation of new chromatin domains determines pathogenicity of genomic duplications. Nature 538, 265–269 (2016). Akdemir, K. C. et al. 
Disruption of chromatin folding domains by somatic genomic rearrangements in human cancer. Nat. Genet. 52, 294–305 (2020). Schwarzer, W. et al. Two independent modes of chromatin organization revealed by cohesin removal. Nature 551, 51–56 (2017). PubMed PubMed Central Article ADS Google Scholar Rao, S. S. P. et al. Cohesin loss eliminates all loop domains. Cell 171, 305–320 e324 (2017). Lieberman-Aiden, E. et al. Comprehensive mapping of long-range interactions reveals folding principles of the human genome. Science 326, 289–293 (2009). Hildebrand, E. M. & Dekker, J. Mechanisms and Functions of Chromosome Compartmentalization. Trends Biochem Sci. 45, 385–396 (2020). Drucker, J. L. & King, D. H. Management of viral infections in AIDS patients. Infection 15, S32–S33 (1987). Nuebler, J., Fudenberg, G., Imakaev, M., Abdennur, N. & Mirny, L. A. Chromatin organization by an interplay of loop extrusion and compartmental segregation. Proc. Natl Acad. Sci. USA 115, E6697–E6706 (2018). Rao, S. S. et al. A 3D map of the human genome at kilobase resolution reveals principles of chromatin looping. Cell 159, 1665–1680 (2014). Fudenberg, G. et al. Formation of chromosomal domains by loop extrusion. Cell Rep. 15, 2038–2049 (2016). Sanborn, A. L. et al. Chromatin extrusion explains key features of loop and domain formation in wild-type and engineered genomes. Proc. Natl Acad. Sci. USA 112, E6456–E6465 (2015). Rowley, M. J. & Corces, V. G. Organizational principles of 3D genome architecture. Nat. Rev. Genet. 19, 789–800 (2018). Wutz, G. et al. Topologically associating domains and chromatin loops depend on cohesin and are regulated by CTCF, WAPL, and PDS5 proteins. EMBO J. 36, 3573–3599 (2017). Matthews, N. E. & White, R. Chromatin architecture in the fly: living without CTCF/cohesin loop extrusion?: Alternating chromatin states provide a basis for domain architecture in Drosophila. BioEssays 41, e1900048 (2019). Rowley, M. J. et al. Evolutionarily conserved principles predict 3D chromatin organization. Mol. Cell 67, 837–852 e837 (2017). Ulianov, S. V. et al. Active chromatin and transcription play a key role in chromosome partitioning into topologically associating domains. Genome Res. 26, 70–84 (2016). Wang, Q., Sun, Q., Czajkowsky, D. M. & Shao, Z. Sub-kb Hi-C in D. melanogaster reveals conserved characteristics of TADs between insect and mammalian cells. Nat. Commun. 9, 188 (2018). PubMed PubMed Central Article ADS CAS Google Scholar Ramirez, F. et al. High-resolution TADs reveal DNA sequences underlying genome organization in flies. Nat. Commun. 9, 189 (2018). Kolodziejczyk, A. A., Kim, J. K., Svensson, V., Marioni, J. C. & Teichmann, S. A. The technology and biology of single-cell RNA sequencing. Mol. Cell 58, 610–620 (2015). Cusanovich, D. A. et al. Multiplex single cell profiling of chromatin accessibility by combinatorial cellular indexing. Science 348, 910–914 (2015). Clark, S. J., Lee, H. J., Smallwood, S. A., Kelsey, G. & Reik, W. Single-cell epigenomics: powerful new methods for understanding gene regulation and cell identity. Genome Biol. 17, 72 (2016). Fraser, J., Williamson, I., Bickmore, W. A. & Dostie, J. An Overview of Genome Organization and How We Got There: from FISH to Hi-C. Microbiol Mol. Biol. Rev. 79, 347–372 (2015). Nagano, T. et al. Single-cell Hi-C reveals cell-to-cell variability in chromosome structure. Nature 502, 59–64 (2013). Flyamer, I. M. et al. Single-nucleus Hi-C reveals unique chromatin reorganization at oocyte-to-zygote transition. Nature 544, 110–114 (2017). 
Nagano, T. et al. Cell-cycle dynamics of chromosomal organization at single-cell resolution. Nature 547, 61–67 (2017). Gassler, J. et al. A mechanism of cohesin-dependent loop extrusion organizes zygotic genome architecture. EMBO J. 36, 3600–3618 (2017). Bintu, B. et al. Super-resolution chromatin tracing reveals domains and cooperative interactions in single cells. Science https://doi.org/10.1126/science.aau1783 (2018). Cardozo Gizzi, A. M. et al. Microscopy-based chromosome conformation capture enables simultaneous visualization of genome organization and transcription in intact organisms. Mol. Cell 74, 212–222 e215 (2019). Szabo, Q. et al. TADs are 3D structural units of higher-order chromosome organization in Drosophila. Sci. Adv. 4, eaar8082 (2018). Cattoni, D. I. et al. Single-cell absolute contact probability detection reveals chromosomes are organized by multiple low-frequency yet specific interactions. Nat. Commun. 8, 1753 (2017). Murthy, V., Meijer, W. J., Blanco, L. & Salas, M. DNA polymerase template switching at specific sites on the phi29 genome causes the in vivo accumulation of subgenomic phi29 DNA molecules. Mol. Microbiol. 29, 787–798 (1998). Lasken, R. S. & Stockwell, T. B. Mechanism of chimera formation during the multiple displacement amplification reaction. BMC Biotechnol. 7, 19 (2007). Imakaev, M. et al. Iterative correction of Hi-C data reveals hallmarks of chromosome organization. Nat. Methods 9, 999–1003 (2012). Polovnikov, K., Gorsky, A., Nechaev, S., Razin, S. V. & Ulianov, S. V. Non-backtracking walks reveal compartments in sparse chromatin interaction networks. Sci. Rep. https://doi.org/10.1038/s41598-020-68182-0 (2020). Krzakala, F. et al. Spectral redemption in clustering sparse networks. Proc. Natl Acad. Sci. USA 110, 20935–20940 (2013). MathSciNet CAS PubMed MATH Article ADS PubMed Central Google Scholar Hansen, A. S., Cattoglio, C., Darzacq, X. & Tjian, R. Recent evidence that TADs and chromatin loops are dynamic structures. Nucleus 9, 20–32 (2018). Luzhin, A. V. et al. Quantitative differences in TAD border strength underly the TAD hierarchy in Drosophila chromosomes. J. Cell Biochem. 120, 4494–4503 (2019). Chathoth, K. T. & Zabet, N. R. Chromatin architecture reorganization during neuronal cell differentiation in Drosophila genome. Genome Res. 29, 613–625 (2019). Wang, X. T., Cui, W. & Peng, C. HiTAD: detecting the structural and functional hierarchies of topologically associating domains from chromatin interactions. Nucleic Acids Res. 45, e163 (2017). Schwartz, Y. B. & Cavalli, G. Three-dimensional genome organization and function in drosophila. Genetics 205, 5–24 (2017). Ulianov, S. V. et al. Nuclear lamina integrity is required for proper spatial organization of chromatin in Drosophila. Nat. Commun. 10, 1176 (2019). Rowley, M. J. et al. Condensin II counteracts cohesin and RNA polymerase II in the establishment of 3D chromatin organization. Cell Rep. 26, 2890–2903 e2893 (2019). Ramirez, F. et al. High-affinity sites form an interaction network to facilitate spreading of the MSL complex across the X chromosome in Drosophila. Mol. Cell 60, 146–162 (2015). Eagen, K. P., Aiden, E. L. & Kornberg, R. D. Polycomb-mediated chromatin loops revealed by a subkilobase-resolution chromatin interaction map. Proc. Natl Acad. Sci. USA 114, 8764–8769 (2017). Ogiyama, Y., Schuettengruber, B., Papadopoulos, G. L., Chang, J. M. & Cavalli, G. Polycomb-dependent chromatin looping contributes to gene silencing during Drosophila development. Mol. CeLL 71, 73–88 e75 (2018). 
Kharchenko, P. V. et al. Comprehensive analysis of the chromatin landscape in Drosophila melanogaster. Nature 471, 480–485 (2011). Osborne, C. S. et al. Active genes dynamically colocalize to shared sites of ongoing transcription. Nat. Genet. 36, 1065–1071 (2004). Iborra, F. J., Pombo, A., Jackson, D. A. & Cook, P. R. Active RNA polymerases are localized within discrete transcription "factories' in human nuclei. J. Cell Sci. 109, 1427–1436 (1996). Quinodoz, S. A. et al. Higher-Order Inter-chromosomal Hubs Shape 3D Genome Organization in the Nucleus. Cell 174, 744–757 e724 (2018). Chen, Y. et al. Mapping 3D genome organization relative to nuclear compartments using TSA-Seq as a cytological ruler. J. Cell Biol. 217, 4025–4048 (2018). Español, P. & Warren, P. B. Perspective: dissipative particle dynamics. The. J. Chem. Phys. 146, 150901 (2017). PubMed Article ADS CAS PubMed Central Google Scholar Stevens, T. J. et al. 3D structures of individual mammalian genomes studied by single-cell Hi-C. Nature 544, 59–64 (2017). Chertovich, A. & Kos, P. Crumpled globule formation during collapse of a long flexible and semiflexible polymer in poor solvent. J. Chem. Phys. 141, 134903 (2014). Shevelyov, Y. Y. & Ulianov, S. V. The nuclear lamina as an organizer of chromosome architecture. Cells https://doi.org/10.3390/cells8020136 (2019). Pirrotta, V. & Li, H. B. A view of nuclear Polycomb bodies. Curr. Opin. Genet Dev. 22, 101–109 (2012). Razin, S. V. et al. Transcription factories in the context of the nuclear and genome organization. Nucleic Acids Res. 39, 9085–9092 (2011). Robson, M. I., Ringel, A. R. & Mundlos, S. Regulatory landscaping: how enhancer-promoter communication is sculpted in 3D. Mol. CeLL 74, 1110–1122 (2019). Loubiere, V., Martinez, A. M. & Cavalli, G. Cell fate and developmental regulation dynamics by polycomb proteins and 3D genome architecture. BioEssays 41, e1800222 (2019). Cook, P. R. & Marenduzzo, D. Transcription-driven genome organization: a model for chromosome structure and the regulation of gene expression tested through simulations. Nucleic Acids Res 46, 9895–9906 (2018). Rhodes, J. D. P. et al. Cohesin disrupts polycomb-dependent chromosome interactions in embryonic stem cells. Cell Rep. 30, 820–835 e810 (2020). Banigan, E. J. & Mirny, L. A. Loop extrusion: theory meets single-molecule experiments. Curr. Opin. Cell Biol. 64, 124–138 (2020). Costantino, L., Hsieh, T.-H. S., Lamothe, R., Darzacq, X. & Koshland, D. Cohesin residency determines chromatin loop patterns. eLife 9, e59889 (2020). Brandao, H. B. et al. RNA polymerases as moving barriers to condensin loop extrusion. Proc. Natl Acad. Sci. USA 116, 20489–20499 (2019). Davidson, I. F. et al. Rapid movement and transcriptional re-localization of human cohesin on DNA. EMBO J. 35, 2671–2685 (2016). Yoshizawa, T., Nozawa, R. S., Jia, T. Z., Saio, T. & Mori, E. Biological phase separation: cell biology meets biophysics. Biophysical Rev. 12, 519–539 (2020). Kumar, G., Garnova, E., Reagin, M. & Vidali, A. Improved multiple displacement amplification with phi29 DNA polymerase for genotyping of single human cells. Biotechniques 44, 879–890 (2008). Durand, N. C. et al. Juicer provides a one-click system for analyzing loop-resolution Hi-C experiments. Cell Syst. 3, 95–98 (2016). Li, H. & Durbin, R. Fast and accurate short read alignment with burrows-wheeler transform. Bioinformatics 25, 1754–1760 (2009). Abdennur, N. & Mirny, L. A. Cooler: scalable storage for Hi-C data and other genomically labeled arrays. 
Bioinformatics 36, 311–316 (2020). Gavrilov, A. A., Gelfand, M. S., Razin, S. V., Khrameeva, E. E. & Galitsyna, A. A. "Mirror reads" in Hi-C data. Genomics Comput. Biol. 3, 36 (2017). Kerpedjiev, P. et al. HiGlass: web-based visual exploration and analysis of genome interaction maps. Genome Biol. 19, 125 (2018). Yang, T. et al. HiCRep: assessing the reproducibility of Hi-C data using a stratum-adjusted correlation coefficient. Genome Res. 27, 1939–1949 (2017). Chandradoss, K. R. et al. Biased visibility in Hi-C datasets marks dynamically regulated condensed and decondensed chromatin states genome-wide. BMC Genomics 21, 175 (2020). Decelle, A., Krzakala, F., Moore, C. & Zdeborova, L. Asymptotic analysis of the stochastic block model for modular networks and its algorithmic applications. Phys. Rev. E 84, 066106 (2011). Article ADS CAS Google Scholar Newman, M. E. J. Spectral methods for community detection and graph partitioning. Phys. Rev. https://doi.org/10.1103/PhysRevE.88.042822 (2013). Banerjee, A., Dhillon, I. S., Ghosh, J. & Sra, S. Clustering on the unit hypersphere using von Mises-Fisher distributions. J. Mach. Learn Res 6, 1345–1382 (2005). MathSciNet MATH Google Scholar Celniker, S. E. et al. Unlocking the secrets of the genome. Nature 459, 927–930 (2009). Kent, W. J., Zweig, A. S., Barber, G., Hinrichs, A. S. & Karolchik, D. BigWig and BigBed: enabling browsing of large distributed datasets. Bioinformatics 26, 2204–2207 (2010). Anderson, G. W., Guionnet, A. & Zeitouni, O. An Introduction to Random Matrices (Cambridge University Press, 2010). Edelstein, A. D. et al. Advanced methods of microscope control using muManager software. J. Biol. Methods https://doi.org/10.14440/jbm.2014.36 (2014). Hoogerbrugge, P. J. & Koelman, J. M. V. A. Simulating microscopic hydrodynamic phenomena with dissipative particle dynamics. Europhys. Lett. 19, 155–160 (1992). Article ADS Google Scholar Koelman, J. M. V. A. & Hoogerbrugge, P. J. Dynamic simulations of hard-sphere suspensions under steady shear. Europhys. Lett. 21, 363–368 (1993). CAS Article ADS Google Scholar Groot, R. D. & Warren, P. B. Dissipative particle dynamics: bridging the gap between atomistic and mesoscopic simulation. J. Chem. Phys. 107, 4423–4435 (1997). Gavrilov, A. A., Chertovich, A. V., Khalatur, P. G. & Khokhlov, A. R. Effect of nanotube size on the mechanical properties of elastomeric composites. Soft Matter. 9, 4067 (2013). This work was supported by Russian Science Foundation (RSF) grant #19-14-00016 to S.V.R. Bioinformatics analysis of the data was supported by RSF grant #19-74-00112 to E.E.K and Russian Foundation for Support of Fundamental Science (RFBR) grant #18-29-13013 to S.K.N. A.A.Gal. was supported by RFBR grant #19-34-90136. The research is carried out using the equipment of the shared research facilities of HPC computing resources at Lomonosov Moscow State University and the Makarich HPC cluster provided by the Faculty of Bioengineering and Bioinformatics. The research of P.I.K. is supported partly by RFBR grant #18-29-13041 and by Skoltech Systems Biology Fellowship. The research of A.V.C. is supported by RFBR grant #18-29-13041. S.V.U. and S.V.R. were supported by the Interdisciplinary Scientific and Educational School of Moscow University «Molecular Technologies of the Living Systems and Synthetic Biology». 
We thank the Center for Precision Genome Editing and Genetic Technologies for Biomedicine, IGB RAS, and IGB RAS facilities supported by the Ministry of Science and Higher Education of the Russian Federation for providing research equipment. These authors contributed equally: Sergey V. Ulianov, Vlada V. Zakharova, Aleksandra A. Galitsyna, Pavel I. Kos. Institute of Gene Biology, Russian Academy of Sciences, Moscow, Russia Sergey V. Ulianov, Vlada V. Zakharova, Alexey A. Gavrilov & Sergey V. Razin Faculty of Biology, M.V. Lomonosov Moscow State University, Moscow, Russia Sergey V. Ulianov, Vlada V. Zakharova & Sergey V. Razin UMR9018, CNRS, Université Paris-Sud Paris-Saclay, Institut Gustave Roussy, Villejuif, France Vlada V. Zakharova, Diego Germini & Yegor S. Vassetzky Skolkovo Institute of Science and Technology, Moscow, Russia Aleksandra A. Galitsyna, Kirill E. Polovnikov, Ekaterina E. Khrameeva, Mariya D. Logacheva & Mikhail S. Gelfand Faculty of Physics, M.V. Lomonosov Moscow State University, Moscow, Russia Pavel I. Kos & Alexander V. Chertovich Institute for Medical Engineering and Science, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA Kirill E. Polovnikov MRC Human Genetics Unit, Institute of Genetics and Molecular Medicine, University of Edinburgh, Edinburgh, UK Ilya M. Flyamer Institute of Molecular Genetics, National Research Centre "Kurchatov Institute", Moscow, Russia Elena A. Mikhaleva & Yuri Y. Shevelyov Center for Precision Genome Editing and Genetic Technologies for Biomedicine, Institute of Gene Biology, Russian Academy of Sciences, Moscow, Russia Alexey A. Gavrilov Institute for Information Transmission Problems (the Kharkevich Institute), Russian Academy of Sciences, Moscow, Russia Alexander S. Gorsky & Mikhail S. Gelfand Moscow Institute for Physics and Technology, Dolgoprudnyi, Russia Alexander S. Gorsky Interdisciplinary Scientific Center Poncelet (CNRS UMI 2615), Moscow, Russia Sergey K. Nechaev P.N. Lebedev Physical Institute, Russian Academy of Sciences, Moscow, Russia Koltzov Institute of Developmental Biology, Russian Academy of Sciences, Moscow, Russia Yegor S. Vassetzky Semenov Federal Research Center for Chemical Physics, Moscow, Russia Alexander V. Chertovich Sergey V. Ulianov Vlada V. Zakharova Aleksandra A. Galitsyna Pavel I. Kos Elena A. Mikhaleva Ekaterina E. Khrameeva Diego Germini Mariya D. Logacheva Mikhail S. Gelfand Yuri Y. Shevelyov Sergey V. Razin S.V.R., S.V.U., and I.M.F. conceived the project; D.G. performed cell sorting; V.V.Z. and Y.S.V. prepared snHi-C and bulk BG3 in situ Hi-C libraries; A.A.Gal., K.E.P., E.E.K., S.V.U., A.A.Gav., A.S.G., S.K.N., and M.S.G. analyzed snHi-C, bulk BG3 in situ Hi-C, and publicly available data; P.I.K. and A.V.C. performed polymer simulations; I.M.F. performed FISH; E.A.M. and Y.Y.S. maintained cell cultures; M.D.L. performed sequencing of snHi-C and bulk BG3 in situ Hi-C libraries; S.V.U., V.V.Z., Y.S.V., A.A.Gal., E.E.K., and S.V.R. wrote the manuscript with input from all authors. Correspondence to Sergey V. Razin. Peer review information Nature Communications thanks Nicolae Radu Zabet and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. Peer reviewer reports are available. 
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/. Ulianov, S.V., Zakharova, V.V., Galitsyna, A.A. et al. Order and stochasticity in the folding of individual Drosophila genomes. Nat Commun 12, 41 (2021). https://doi.org/10.1038/s41467-020-20292-z Received: 13 February 2020. Nature Communications ISSN 2041-1723 (online)
Trudinger-Moser type inequality for radially symmetric functions in a ring and applications to Keller-Segel in a ring. Tomasz Cieślak, Institute of Mathematics, Polish Academy of Sciences, Śniadeckich 8, 00-956 Warszawa, Poland. Discrete & Continuous Dynamical Systems - B, December 2013, 18(10): 2505-2512. doi: 10.3934/dcdsb.2013.18.2505. Received April 2013; Revised July 2013; Published October 2013. We prove that for radially symmetric functions in a ring $\Omega = \{ x \in \mathbb{R}^n,\ n \geq 2 : r \leq |x| \leq R \}$ a special type of Trudinger-Moser-like inequality holds. Next we show how to infer from it a lack of blowup of radially symmetric solutions to a Keller-Segel system in $\Omega$. Keywords: Chemotaxis. Mathematics Subject Classification: 35B44, 35K20, 35K55, 92C1. Citation: Tomasz Cieślak. Trudinger-Moser type inequality for radially symmetric functions in a ring and applications to Keller-Segel in a ring. Discrete & Continuous Dynamical Systems - B, 2013, 18 (10): 2505-2512. doi: 10.3934/dcdsb.2013.18.2505
Is it possible to make an "almost-perfectly" sealed ship? In a series I remember that a group of people were trying to build a colonization ship and they had a big problem: the air. Ships (like every object) aren't perfect; they have micro-fissures in the armor or at the edges where two hull plates meet, and through those fissures the air slowly escapes. For a normal ship this isn't a big problem, because the air loss is very small and oxygen can be refilled at another station, but a colonization ship (with a travel time of several hundred years) can't "refill" its air. I first thought it was possible, but then I remembered reading that NASA's rocket fuel tanks are only filled a few days before launch, because the pressure of the hydrogen and its small molecular size let it pass through matter, so the tanks usually lose around 1% of their fuel. Maybe that could also happen with oxygen. My question is: Is it possible to make an almost perfectly sealed ship? (You can use technology a few centuries ahead of ours.) (By "almost perfectly" I mean that the same air could stay inside the ship for several millennia, not for the rest of eternity.) science-based spaceships shipbuilding oxygen Ender Look
Engineering does not work this way. You don't make a perfect bridge, you make a maintainable bridge. You don't make a perfectly sealed ship, you make a ship with good enough seals and plan to replace the lost gas; for example, you may want to carry liquefied oxygen and nitrogen. And don't plan to store helium or hydrogen for a very long time; they diffuse through most materials. – AlexP Jul 24 '17 at 8:38
Use a planet. You'll lose some air but not enough to matter over the course of the journey. – Whelkaholism Jul 24 '17 at 9:39
I hear that Maxwellian daemons can be hired for such tasks, but their fees are almost infinite. – can-ned_food Jul 24 '17 at 10:31
@AlexP, the question is perfectly practical. He does use the "perfect" hyperbole, but specifies that he means "several millennia". So the question is "how would one construct a ship that keeps its atmosphere for several millennia", with "technology some centuries in the future". OP, I'd suggest you remove the "perfect" wording, it is a bit misleading. – AnoE Jul 24 '17 at 10:36
Anything will escape through anything given enough time. I remember how a local company thought someone was stealing their platinum until they realized it was escaping through the hull of their reactor. – Raditz_35 Jul 24 '17 at 15:25
You face a few challenges... Cort's answer provides the math for an ideal system. But your system won't be an ideal, sealed sphere. It will therefore likely outgas faster, in part because it isn't an ideal sphere, and in part because things fail over time. Case in point: I used to own a high-cost medical device that was sealed. It was rated to be waterproof up to about 10 meters. Not that much, really. However, after normal use for a few months, that rating was destroyed. It got wet, it got ruined, because the seals weakened due to casual wear and tear. Your station must be able to withstand the general abuse your ship will take. That abuse takes many forms:
- Micro-meteor impacts at your cruising speed.
- Friction wear from cycling open/closed any exit portals.
- Aging parts.
- Design faults.
- Stresses while under acceleration or as thrust vectors change.
Solving these will require multiple approaches:
- Use the best materials and manufacturing techniques.
- Design seals in depth, so if one seal fails, another can take the pressure.
- Constantly monitor for micro-leaks, so they can be patched before they become mega-leaks.
- Maybe design some kinds of self-sealing materials that can flow into and fill any micro-leaks.
And all of this must be crazy simple to maintain while in flight. No one can just fly down to your local neighborhood Ship-Mart for spare parts (shop smart, shop Ship-Mart). I suggest you make at least five independent hulls. Pressurize each to some degree with a high-weight noble gas: a noble gas, because they don't like to react with other things, and a partial pressure so that there is less pressure differential between the vacuum of space and the pressure vessel full of people. It wouldn't need to be anywhere near 1 atmosphere of pressure, just enough to help ease the pressure (pun intended). These exterior safety hulls should be designed as some sort of honeycomb grid of cells, so that a rupture of one cell doesn't empty the entire hull; this also greatly increases the structural strength of that hull. Or you could store water ice in at least one of these hulls, to provide a radiation shield / water supply / hydrogen+oxygen supply as needed.
In a word, no. Perfect seals simply do not exist in the real world. Fortunately, neither do perfect ships. Even if you had perfect seals, you wouldn't be able to cruise the skies for all eternity, because you'd eventually break down as you impact small particulate matter. Entropy always wins. Fortunately, for practical purposes, you can do okay. You mention hydrogen, and hydrogen is indeed quite special. It's far smaller than anything else, and is notorious for doing evil things in high-vacuum setups. Normal steel is typically sufficient for most gases, but hydrogen can diffuse right through it. That's why every vacuum chamber you see is made out of stainless steel (and thus costs more than my house!). The most important thing you can do is minimize the seals, and minimize thermal effects. Don't have any fancy seals like those which can rotate or which can open. Focus almost entirely on joints like the copper knife-edge seals used on high-vacuum setups. These are joints that feature a knife edge which cuts into a copper gasket to create a very strong seal. These seals are trusted in high-vacuum situations, so they should be good for you. Also, make sure you pay attention to thermal effects: as long as the ship is at relative equilibrium, you won't see too many microfissures. For some perspective, you can look at the high-vacuum community. These aren't the normal vacuums you're used to. Most of us deal with low vacuum, which might bring the pressure down from a normal 760 Torr to 100 Torr. The high-vacuum community likes to operate in the nanotorr region and below. At these pressures (or lack of pressure), everything outgasses. They actually care about this because the tiniest flow in will ruin their experiment. From documentation, one can expect stainless steel to "outgas" at $3\cdot10^{-13} \frac{Torr\cdot Liter}{sec\cdot cm^2}$. This means that you expect gas to flow through the steel at roughly this rate. You can use that number to determine how long the pressure can remain in your ship. Let's make up some numbers. The ISS has a volume of about 1000 cubic meters (1,000,000 L).
If we made it a sphere (the best shape for minimizing losses), it'd be about 12m in diameter, so it would have a surface area of about 2000 square meters (2*10^7 cm^2). Multiplying/dividing these through, along with that constant for stainless steel, and you get $6\cdot10^{-12}\frac{Torr}{sec}$. That's your pressure loss per second. That's $0.000185274 \frac{Torr}{year}$, or $0.185274 \frac{Torr}{millennia}$. If you started with atmospheric pressure (760 Torr), it would take 4 million years to deplete out, using these rough estimates. You will have better results with larger spheres, so you can easily get into the 10s of millions of years. But it's not perfect. $\begingroup$ @Erik no. No seal is ever perfect. Not against any of the gases. It's only that hydrogen is worst of them, so it works as decent "worst case scenario", and numbers are relatively easy to find. $\endgroup$ – Mołot Jul 24 '17 at 9:27 $\begingroup$ Primary contribution to outgassing are release of adsorbed and absorbed gasses, not permeation through. This outgassing drops with time, until potentially other effects (permeation and sublimation) take over. Permeation should sharply drop with material thickness. In short: it looks to me that you are using wrong data, quoted number is mostly for desorption, not permeation. $\endgroup$ – M i ech Jul 24 '17 at 9:43 $\begingroup$ How is called this property? I want to the values of other materials. $\endgroup$ – Ender Look Jul 24 '17 at 15:32 $\begingroup$ @CortAmmon Actually, that makes your numbers useless. They don't answer the question at all. Principle is correct in that permeation does exist, though practically limited only to Hydrogen, but your estimation bears NO relation to mechanism you describe. There is no way to infer real permeative leak from your answer. Furthermore, question is concerned with atmosphere, where hydrogen escape is a non-issue. Hydrogen is mentioned specifically to ask if oxygen is subject to same principle. While it's certainly interesting piece of information, IMO you didn't actually answer question at all. $\endgroup$ – M i ech Jul 24 '17 at 18:46 $\begingroup$ @erik hydrogen scooping doesn't actually work. The thickest gas cloud in outer space (ie a hydrogen nebula) is eight times closer to a perfect vacuum than we've created in a lab here on Earth. $\endgroup$ – Draco18s no longer trusts SE Jul 24 '17 at 21:52 After reading Cort's impressive answer; I'd offer an alternative, to "nearly" perfectly sealed. Construct a cover that fits over the ship; as close as possible with the constraint of being only two pieces with a single seal between them (as small as possible). Or for practicality, as few pieces as possible with seals as obvious as possible. Make the cover of glass diamond$^1$ and stainless steel. Then pressurize the gap between the cover and the ship to match (or very slightly exceed) the ship's pressure. $^1$ added: The OP allows future tech; present tech allows us to deposit diamond film and use high pressure to create gemstones; presumably future tech will be able to make pure diamond windows and thick diamond films for the cover described. The point here is to use some non-toxic commonly available gas (the most common are hydrogen, helium, oxygen, nitrogen, neon, in that order) to pressurize the gap between the cover and the ship wall. Neon is probably your best bet, it is non-toxic and chemically inert, meaning it forms no compounds (unlike nitrogen and oxygen which both form compounds), and has an atomic mass of 20 (vs. 
1 and 4 for hydrogen and helium resp.) As Neon outgasses from the cover, it can be scooped up for replacement purposes. This makes it a good "sacrificial" gas, i.e. we may leak neon, but we don't leak our oxygen and other special recipe of gasses inside the ship that sustain life comfortably. To the extent that Neon ingasses to the interior of the ship; it is non-toxic and we can filter it out for re-injection into the gap (the ship walls can have ports for this; remember only the outer shell needs to have as few joins as possible). The advantage of the cover is also maintainability; with just a few simple straight seals that are easily accessible, we can mount equipment there to monitor the seals for leakage and fix them with relative ease. Such equipment can operate in a vacuum; the communication can be by magnetic field fluctuation, acoustic or radio wave through the cover without penetrating it. The same goes for other sensory equipment the ship may require, or antennae, lasers, telescopes, dishes, armaments, etc. Of course the entire outer cover is sacrificial, as well. In the event of damage by space debris it can be repaired; but because it is not pieces bolted together and has no "components" other than the single seal (or a few simple seals) repairs can be hard welds and permanently fused glass melted into place. In dock near planets the neon gas can be depressurized and re-liquefied for storage (yes, neon of all elements has the narrowest range of temperatures for liquefaction; just a 5.5F degree window, but we have future science on our side!). Then the cover can be detached; perhaps stored in space while the ship heads to the planet surface. Of course the ship would still be constructed to be pressurized itself, and would suffice in an emergency (like the cover being breached by an impact that does not breach the ship's hull); but it can be designed with many components for maneuverability, landing, loading and unloading cargo or passengers and so on. $\begingroup$ It won't be "perfect", which is what the question asks for. There will be diffusion between the outer and inner hulls, at approximately the same rate as between a single hull and vacuum, as the gases seek to find their correct partial pressure. Some of this diffused gas will then diffuse through to vacuum. You can probably shave off a couple of orders of magnitude from an already small number, but it still won't be "perfect" $\endgroup$ – nzaman Jul 24 '17 at 12:54 $\begingroup$ @nzaman No there won't, the gap between the outer and inner hull is pressurized and filled with neon gas, it is not a vacuum. We have already stipulated that "perfect" is impossible, short of magic. perhaps with tech a few centuries from now, we can make the outer hull from perfectly formed diamond. After all, we can already deposit diamond film on a surface. But sure, quantum tunneling alone prevents perfection. Given that, the only answer to this question is "NO", but to help the OP, further explanation on how to get asymptotically closer to perfection is in order. $\endgroup$ – Amadeus-Reinstate-Monica Jul 24 '17 at 13:24 $\begingroup$ Yes, but there's no oxygen or nitrogen, so as far as those gases are concerned, it's a vacuum and they'll move to fill it. Another way of looking at it is that there is no movement of $N_2$ and $O_2$ from the inter-hull gap inwards but there is one from the outward. Dynamic equilibrium will be reached when the two flows are equal, and that requires a sufficient amount of those gases in the gap. 
Until equilibrium is reached, $N_2$ and $O_2$ will keep leaking into the hull gap and some part will escape through the outer hull--but that will be a couple of orders of magnitude less than otherwise. $\endgroup$ – nzaman Jul 24 '17 at 17:13 $\begingroup$ @nzaman I don't believe that is how it works; nitrogen and oxygen aren't magical thinkers, they don't care if "their kind" is elsewhere or "recognize" their own kind. This is mass and size, pure and simple, and you are engaged in magical thinking. $\endgroup$ – Amadeus-Reinstate-Monica Jul 24 '17 at 18:12 $\begingroup$ @Amadeus: You are the one engaged in "magical thinking" unfortunately. Not only do nitrogen and oxygen not recognize "their kind", they don't even recognize backpressure. They will permeate outward no matter what. The reason that equal partial pressures cause the loss to stop is not because the movement stops, but because movement rates in both directions are equal and cancel: equilibrium! When you think about the problem correctly, it does indeed matter what substance is outside causing the backpressure, because now you have oxygen leaking out, and neon leaking in, and they don't cancel. $\endgroup$ – Ben Voigt Jul 25 '17 at 2:19 It just came to my mind, and I'm not sure how practical it would be: how about an additional layer of hull? The gap between both layers can be wide enough to send a robot, or a man in a space suit, to perform repairs and maintenance. And use vacuum pumps to gather air back under the internal layer. mpasko256 $\begingroup$ This is already popular in hard SF. In The Expanse, season 2, you could see it in action. In other works it happened as well. $\endgroup$ – Mołot Jul 24 '17 at 10:55 The issue is refilling the lost air (and presumably other materials), so what you need is a way to carry sufficient quantities of replacement elements without adding excessive mass to the ship (which makes engineering more difficult and costs more energy to accelerate, decelerate or make any course changes, and so on). Fortunately, there is a way to achieve this. Since the ship will be in the hard radiation environment of space, you need massive shielding. If the ship is moving at any appreciable velocity, interstellar dust, gas molecules and so on will be impacting the hull and slowly eroding it. So the ship needs to be both massively shielded and have some sort of protective armour in the front to protect against erosion in the direction of travel. The "ideal" shape of the ship would resemble a golf tee, with the wide end up front acting as a "wake shield" and a massive cylindrical sheath over the remainder of the ship. Typical golf tee In order to keep the rest of the mass down, this shielding is made of ice, and serves double duty, both as the shield and as a reservoir of hydrogen and oxygen to supplement the life support system. In the cold of interstellar space, a thin metalized foil cover over the outer surface is probably all the protection you need for the ice. Since water, oxygen and hydrogen are not sufficient in and of themselves, the ice is mixed with other frozen "ices", such as methane, CO2, nitrogen and so on, so tapping the ice reservoir provides many important elements for the life support system. If your recycling is efficient enough, then the amount of mass being drawn from the ice shield reservoir will only be a fraction of the total amount of ice actually available.
Drawing the ice from the rear (near where the engines would probably be) doesn't sacrifice the protection over the rest of the ship overly much, and if necessary, some excess heat can be leaked into the hull to allow the ice to "flow" like a glacier to cover or recover thinned out or damaged spots in the protective shield. Over the long term, materials degrade under environmental stress (and therefore in unpredictable ways). They require endless, regular observation and repair. "Intelligent" materials (e.g. impregnated with nanomachines) or structural integrity systems could automate the job of keeping materials in top shape, provided they have ready access to an unlimited supply of repair material, appropriate infrastructure for removing waste, and the ability to completely maintain and/or replicate themselves. Assuming the ship structure and materials manage zero gas loss when their state is within some margin of ideal that this system is capable of maintaining, you'd be fine. Even if the structure or materials do permit some small loss in their ideal state, this kind of system (especially nanomachine-impregnated materials) could also enable a kind of active reverse-osmosis by consuming energy to repel escaping gases back into the ship. Unfortunately excessive damage would still lead to rapid gas loss before the automated maintenance system could repair it, and given enough time such damage is virtually guaranteed. But as others have suggested, replacing the lost gases would make up for it, and a system that already has the infrastructure and materials to fabricate repairs to containment systems could use its resources to generate lost gases as well. All of this assuming we're within a few centuries of cost-effective, micro-/nano-scale matter-energy conversion, which is probably the only way to solve all of these problems (maybe even just one of them) with a single system. talrnu
Pathogen Specific Persistence Modeling Data Mitchell, J. and Akram, S. 2017. Pathogen Specific Persistence Modeling Data. In: J.B. Rose and B. Jiménez-Cisneros, (eds) Global Water Pathogen Project. http://www.waterpathogens.org (M. Yates (eds) Part 4 Management of Risk from Excreta and Wastewater) http://www.waterpathogens.org/book/pathogen-specific-persistence-modeling-data Michigan State University, E. Lansing, MI, UNESCO. Acknowledgements: K.R.L. Young, Project Design editor; Website Design: Agroknow (http://www.agroknow.com); K. Dean, E. Willis and A. Wissler, literature review; reviewed and organized data. Last published: October 26, 2017 Jade Mitchell (Michigan State University), Sina Akram (Michigan State University) Persistence modeling facilitates the accurate simulation of different stages of growth, survival, and death of microorganisms in environmental matrices by describing the changes in population size of microorganisms over time. The most commonly used model for simulating persistence patterns of pathogens is the first order exponential one-parameter model. However, persistence curves for many microorganisms do not follow this classic linear trend, as evidenced by decades of studies across the growing field of predictive microbiology and for microbes in many different matrices. Therefore, an evaluation of linear and non-linear curves (models) is essential in order to provide an accurate description of both pathogen- and matrix-specific persistence (or inactivation). Seventeen linear and nonlinear persistence models were used to find the best models for describing the persistence of water microbes - bacteria, viruses, bacteriophages, bacteroidales, and protozoa - in human urine, wastewater, freshwater, marine water, groundwater matrices, biosolids and manure. A total number of 30 datasets were used in this study, containing 180 different pathogen (or indicator)/matrix combinations, to find the best fitting models through regression techniques to describe persistence subject to various conditions. Like the exponential decay model, these models contain general parameter(s) to mathematically describe the relationship between reductions in microbial populations and time. The models do not contain explanatory variables to isolate the effects of environmental conditions (e.g. temperature, UV exposure) on inactivation. Overall, three models (JM2, JM1, and Gamma) were found to be the best fitting models across the entire data set and represented 59%, 34% and 25% of the data sets, respectively. JM2 fit the persistence data best across environmental matrices except in human urine and groundwater, in which JM1 performed best at describing the persistence patterns. Across pathogens, JM2 was the best model for bacteria, bacteriophages, and bacteroidales. However, viruses were best fit by JM1. The models which best describe the persistence pattern of each pathogen or indicator in a matrix under different treatments, and their corresponding parameters, are presented in this chapter. In addition, T90 and T99 values, which are commonly used to specify the time required for a pathogen concentration to decrease by one and two log units, respectively, are reported for all the datasets and compared between matrices and microorganism types.
While this metric is often used to describe pathogen persistence, it is only relevant in the linear region of the persistence curve and will be misleading if an incorrect model is assumed or a model other than the best fitting model is used to estimate these values. Therefore, the results in this chapter, which contain the best fitting models and parameters along with the associated calculation of T90 and T99 for various pathogen/matrix combinations, can reduce uncertainty in estimations of pathogen population size in water environments over time. Mathematical models are commonly used to describe the microbial inactivation of pathogens persisting in environmental matrices and to predict population sizes for subsequent human health risk calculations, which may lead to treatment decisions. Persistence modeling facilitates the accurate simulation of the stages of growth, survival, and death of microorganisms in different matrices by describing the mathematical relationship between the population size of microorganisms and time through regression techniques to estimate kinetic parameters (constants). While T90 and T99 values are commonly used to indicate the time required for a pathogen concentration to decrease by one and two log units, respectively, these metrics are only relevant in the linear region of a persistence curve and may be misleading if more complex persistence patterns accurately describe a specific pathogen in a specific matrix (e.g. curves with shoulders or tails). The significance of utilizing mathematical models to describe kinetics is in their ability to describe the persistence pattern of specific pathogens in different environments, enabling engineers and policy makers to predict absolute microorganism populations at any given time through interpolation or extrapolation, not just the population size relative to the initial conditions like the T90 or T99. The most commonly used model for simulating persistence patterns of pathogens is the first order exponential one-parameter model, which was originally developed to describe microbial inactivation by chemical disinfectants (Chick 1908). However, as predictive microbial modeling has developed as a field over many years, persistence curves that do not follow this classic linear pattern have been observed for microorganisms in a number of environments. Therefore, accurate description of non-linear persistence curves is essential. A common explanation for this non-linearity in persistence is that the population of microorganisms may consist of several sub-populations, each with different inactivation kinetics. Along with linear curves, curves with a "shoulder" (a delay before attenuation begins), curves with "tailing" (attenuation slows with time), and sigmoidal curves (both a shoulder and a tailing) are the four most typically observed patterns of bacterial decay (Xiong et al. 1999). Shoulders in curves represent a slow initial phase of inactivation, while tailing can represent an intrinsic resistance of some microorganisms or protection by various factors. Figure 1 shows a schematic representation of different patterns of persistence of microorganisms. Figure 1. Schematic representation of four different persistence patterns As the first order model cannot describe more complex persistence patterns such as the shoulders, tailing, and sigmoidal curves noted above, several other models were developed and tested over time.
Table 1 shows the list of the microbial persistence models which were utilized for the studies described in this chapter, together with their corresponding equations. These models are mostly empirical and have three or fewer model parameters. Table 1. Models utilized in this study (model; equationa; reference)
Exponential; $$\frac{Nt}{N0}=exp(-k_{1}t)$$; Chick, 1908
Logistic; $$\frac{Nt}{N0}=\frac{2}{1+exp(-k_{1}t)}$$; Kamau et al., 1990
Fermi; $$\frac{Nt}{N0}=\frac{1}{1+exp(-k_{1}(t-k_{2}))}$$; Peleg, 1995
Exponential damped; $$\frac{Nt}{N0}=10^{exp(-k_{1}t\times exp(-k_{2}t))}$$; Cavalli-Sforza et al., 1983
Juneja and Marks 1; $$\frac{Nt}{N0}=1-(1-exp(-k_{1}t))^{k_{2}}$$; Juneja et al., 2001
Juneja and Marks 2; $$\frac{Nt}{N0}=\frac{1}{1+exp(k_{1}+k_{2}log(t))}$$
Gompertz 2; $$\frac{Nt}{N0}=exp\left [ \frac{-k_{1}}{k_{2}}exp((k_{2}t)-1) \right ]$$; Wu et al., 2004
Weibull; $$\frac{Nt}{N0}=10^{-((t/k_{1})^{k_{2}})}$$
Lognormal; $$\frac{Nt}{N0}=1-\left \{ (ln(t)-k_{1}/k_{2}) \right \}$$; Aragao et al., 2007
Gamma; $$\frac{Nt}{N0}=exp\left \{(t^{k_{1}-1})exp^{(\frac{-t}{k_{2}})} \right \}$$; van Gerwen and Zwietering, 1998
Broken-line; $$\frac{Nt}{N0}=exp(-k_{1}t), t<k_{3}$$; Muggeo, 2003
Broken-line 2; $$\frac{Nt}{N0}=exp(-k_{1}t+k_{2}(t-k_{3})), t \geq k_{3}$$
Double exponential; $$\frac{Nt}{N0}=k_{3}exp(-k_{1}t)+(1-k_{3})exp(-k_{2}t)$$; Gerard Abraham et al., 1990
Gompertz 3; $$\frac{Nt}{N0}=10^{k_{1}exp\left [ -exp(\frac{-k_{2}exp(1)(k_{3}-t)}{k_{1}}+1)\right ] }$$; Gil et al., 2011
Gompertz-Makeham (Gzm); $$\frac{Nt}{N0}=10^{(-k_{3}t-k_{1}/k_{2}(exp(k_{2}t)-1))}$$; Jodrá, 2009
Sigmoid type A; $$\frac{Nt}{N0}=10^{( (k_{1}t) / \left \{ (1+k_{2}t)(k_{3}-t) \right \} )}$$
Sigmoid type B; $$\frac{Nt}{N0}=10^{- (k_{1} \times t^{k_{3}}) / (k_{2}+t^{k_{3}}) }$$
aNt denotes the number of organisms remaining at time t; N0 denotes the initial number of organisms; k1, k2, and k3 are model parameters. Model parameters, ki, are constants in each model that have different units depending on the model. 2.0 Best-Fitting Models A literature review was conducted as described in the previous chapter, "Persistence of Pathogens in Sewage and Other Water Types". However, it was also expanded to include papers presenting die-off data for additional water matrices: sludge, urine and manure. For this analysis, raw data of microorganism concentration vs. time were obtained from the original author of the peer-reviewed journal article or digitized from the figures in the publications. A total of 95 studies contained acceptable data for modeling, which comprised 304 individual data sets describing a pathogen and matrix combination under various environmental conditions (urine, n=9; freshwater, n=131; wastewater, n=16; biosolids, n=42; marine, n=60; groundwater, n=46). While the environmental conditions - temperature, the presence of UV light or indigenous microbiota, for example - are known to influence decay rates, the focus of this analysis is not to describe empirical data sets with explanatory variables through regression modeling, as this is reported in the original papers. The purpose of this chapter is to summarize pathogen-specific decay rates using generalized persistence models which best fit decay data sets for specific water environments under specific conditions. Seventeen previously established linear and nonlinear persistence models were evaluated to determine the best models for describing the persistence of different bacteria, viruses, bacteriophages, bacteroidales, and protozoa in human urine, wastewater, biosolids and manure, freshwater, marine water, and groundwater matrices.
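As an illustration of how the models in Table 1 are applied, the following Python sketch (not part of the original analysis; the decay data, initial parameter guesses, and function names are illustrative assumptions) implements three of the models, fits them to a hypothetical data set by least squares on the log10-transformed survival ratio, compares the fits with BIC as described in Section 5.0, and solves numerically for T90 and T99 from the best-fitting model.

```python
import numpy as np
from scipy.optimize import curve_fit, brentq

# Survival ratio Nt/N0 for three of the models in Table 1.
def exponential(t, k1):            # Exponential (Chick, 1908)
    return np.exp(-k1 * t)

def jm1(t, k1, k2):                # Juneja and Marks 1
    return 1.0 - (1.0 - np.exp(-k1 * t)) ** k2

def jm2(t, k1, k2):                # Juneja and Marks 2
    return 1.0 / (1.0 + np.exp(k1 + k2 * np.log(t)))

# Hypothetical observations: days and log10(Nt/N0) (illustrative values only).
t_obs = np.array([1.0, 2, 4, 7, 14, 21, 28])
log_ratio_obs = np.array([-0.2, -0.5, -0.9, -1.4, -2.1, -2.6, -2.9])

def bic(log_ratio_pred, log_ratio_obs, n_params):
    # Gaussian log-likelihood on the log10-transformed survival ratios.
    resid = log_ratio_obs - log_ratio_pred
    n = len(resid)
    sigma2 = np.mean(resid ** 2)
    log_lik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    return n_params * np.log(n) - 2 * log_lik

models = {"exponential": (exponential, [0.1]),
          "JM1": (jm1, [0.1, 1.0]),
          "JM2": (jm2, [0.0, 1.0])}

fits = {}
for name, (f, p0) in models.items():
    popt, _ = curve_fit(lambda t, *k: np.log10(np.clip(f(t, *k), 1e-12, None)),
                        t_obs, log_ratio_obs, p0=p0, maxfev=10000)
    pred = np.log10(np.clip(f(t_obs, *popt), 1e-12, None))
    fits[name] = (popt, bic(pred, log_ratio_obs, len(popt)))
    print(name, "k =", np.round(popt, 3), "BIC =", round(fits[name][1], 2))

# T90 / T99: time at which the survival ratio falls to 10^-1 / 10^-2,
# found numerically from the model with the lowest BIC.
best = min(fits, key=lambda m: fits[m][1])
f_best, k_best = models[best][0], fits[best][0]
for label, target in [("T90", 0.1), ("T99", 0.01)]:
    t_x = brentq(lambda t: f_best(t, *k_best) - target, 1e-6, 1e4)
    print(best, label, "=", round(t_x, 1), "days")
```

Note that the chapter's own fits used maximum likelihood estimation; least squares on the log-transformed data is used here only to keep the sketch short.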
The chapter includes a selected representation of pathogen, marker, or indicator data sets across the 6 environmental matrices noted above. A total of 30 studies are summarized in this chapter, which include 180 different pathogen or indicator/matrix combinations. Indicator bacteria are typically used to detect the level of fecal contamination in environments and are generally not pathogenic to humans, while pathogens are microorganisms which can produce disease. Table 2 shows the best fitting models for all collected data. The JM2, JM1, and Gamma models were the best fit for 58.7%, 33.7% and 25.0% of the experimental data sets, respectively. JM2 fit the persistence data best in all matrices except human urine and groundwater, where JM1 performed best at describing the persistence patterns. Table 2. Best fitted models to different matrices Percent of the combinations for which the model was the best fit All Dataset Human Urine Marine Water Manure & Biosolids aModels described in Table 1 Table 3 shows the best fitting models for different pathogen and indicator types. JM2 was the best model for bacteria, bacteriophages, and bacteroidales. For viruses, JM1 best described the persistence curves. Table 3. Best fitted models to different types of microorganisms Microbe Group Bacteriophages Bacteroidales Table 4 shows the best fitting models for specific pathogens and indicators. The JM2 model best described the persistence curves for E. coli, Enterococci, Salmonella, and HF183. Persistence of MS2 was best described by the JM1 model, and the exponential model was the best fitting model for adenovirus. Table 4. Best fitted models to specific pathogens and indicators Indicator or Pathogen Type Number of Pathogen and Indicator/Matrix Combinations Percent of the Pathogen and Indicator/Matrix Combinations for Which the Model was the Best Fit Enterococci MS2 bacteriophage aFor model description see Table 1; bBacteroides human source tracking gene target 3.0 Summary of T90 and T99 data The time required for the concentration of a type of microorganism in a specific environment to decrease by one or two log units is called T90 or T99, respectively. These values are calculated and summarized using the best fitting models and their corresponding parameters for every pathogen (or indicator)/matrix combination. The predicted number of days needed to achieve 90% and 99% reductions (T90 and T99) of pathogens and indicators in different matrices are summarized in Tables 5 and 6. The tables also show the range of variation and the standard deviations in T90 and T99 values calculated for every pathogen (or indicator)/matrix combination. Table 5. Range, average and standard deviations in T90 and T99 values in different matrices Number of Pathogen and Indicator/Matrix T90 Days Average (SDa) 0.1 to 62.5 Average (SD) 21.1 (25.6) 0.1 to 125.8 aSD: Standard Deviation Table 6.
Range, average and standard deviations in T90 and T99 values for different pathogen or indicator types Pathogen or 0.1 to127.8 Bacteroidalesb 11.4 to 55.2 aSD: Standard Deviation; bGene persistence 4.0 Persistence Modeling in Pathogen/Matrix Combinations The results from the different studies on the persistence of various types of pathogenic human bacteria, viruses, bacteroidales, protozoa, and bacteriophages in different environments of human urine, wastewater, freshwater, marine water, groundwater, and biosolids were collected to find the best-fit mathematical models for different pathogen (or indicator)/matrix combinations. 4.1 Persistence Modeling in Human Urine The model parameters for different pathogen combinations in a human urine matrix along with the corresponding fitted parameters are presented in Table 7. Pathogens studied consisted of adenovirus and MS2 bacteriophage. Data were obtained from a yet unpublished study conducted by Dr. Tamar Kohn from Swiss Federal Institute of Technology in Lausanne. Table 7. Best fitted models and fitting parameters for MS2 bacteriophage (virus indicator) and adenovirus in human urine matrix Virus Type T90 (T99) Days ̊C Best Fit Modela Pointsb Modelsc Depd pH=8.47; NH3=15.8 Epd, Gam, Bi3, lg1 pH=8.72; EC=33.6; NH3=81 Ep, Epd pH=8.79; EC=33.0; NH3=106 pH=8.49; EC=16.0; NH3=28.2 Bi3, Ep, Epd JM1, Gam aFor model description see Table 1; bNumber of data points modeled in the experiment; cOther models that provided an equally statistical best-fit; dBest fit a model with three parameters, where k3= 0.00000001 4.2 Persistence Modeling in Wastewater The model parameters for different pathogen combinations in wastewater matrices along with the corresponding fitting parameters are presented in Table 8. Studied pathogens consisted of bacteria, viruses, bacteroidales, and protozoa and matrices consisted of treated and untreated wastewaters. Table 8. Best fitted models and fitting parameters for pathogens in wastewater matrices (Other Modelsb) Data Pointsc INDICATOR or PATHOGEN (Ep, Epd, ln) Walters et al., 2009 Walters and Field, 2009 Gz3e NRd Czajkowska et al., 2008 Salmonella Thompson Ravva and Sarreal, 2014 Salmonella enterica Boehm et al., 2012 0.12 (0.22) (ln, JM1,Gam) Humbacg PROTOZOA AND VIRUSES Cryptosporidium parvum Jenkins et al., 2013 (Gz3, Gam, ln, JM1, Sb) Skraber et al., 2009 (JM2, Gam, ln) aFor model description see Table 1; bOther models that provided an equally statistical best-fit; cNumber of data points modeled in the experiment; dNR: Not Reported; eBest fit a model with three parameters, where k3=7.74; fBest fit a model with three parameters, where k3=63.8; gA gene target 4.3 Persistence Modeling in Manure and Biosolids The model parameters for different pathogen combinations in different manure and biosolids matrices along with the corresponding fitting parameters are presented in Table 9. Studied pathogens consisted of bacteria and viruses, bacteroidales, and bacteriophages and matrices consisted of different biosolid types of composted manure, sludge, manure, and freshwaters contaminated with feces. Table 9. 
Best fitted models and fitting parameters for indicators and pathogens in manure and biosolids matrices (T99) (ln, Gam, JM2) Klein et al., 2011 (Dep) (JM1) Clostridium sporogenes (Ep, lg1, Epd, JM1, JM2, ln) Composted manure (lg1, Epd, JM1, Gam) N/Ae (Epd, JM1, lg1) Depg (Gzm) (lg2, JM2, Gam) (JM1, lg1, JM2, Gam) In Dark Treatment In Light Treatment Gz3h Oladeinde et al., 2014 Gz3i No Shading BACTERIA SOURCE TRACKING GENE MARKERSf CF128 DNA (ln, Ep, lg1) CF128 RNA (Wb, ln) 0.6 9 (1.1) (ln, JM1) (lg1, Bi3) (ln) Liang et al., 2012 (ln, Wb) (JM1, JM2, Gam, Bi3, lg1) Genbac Depj Cow M3 Depk Rum-2-Bac Depl BACTERIOPHAGE AND VIRUSES (Epd, Gam, JM1, JM2, Wb) Manure, pH=8.08; ECl=2.0 (mS/cm); NH4=99 (mM) Decrey and Kohn, 2017 (JM1, JM2, Gam) EC=4.6 (mS/cm); NH4=505 (mM) Bi3m NH4=121(mM) (Epd, JM1, JM2, Gam) Sludge, pH=7.76; (JM1, Gam) (lg1) aFor model description see Table 1; bOther models that provided an equally statistical best-fit; cNumber of data points modeled in the experiment; dNot reported; eN/A not applicable as model only has one parameter; fBelonging to the order of the Bacteroidales; gBest fit a model with three parameters, where k3=0.000001; hBest fit a model with three parameters, where k3=2.14; iBest fit a model with three parameters, where k3=3.84; jBest fit a model with three parameters, where k3=0.01; kBest fit a model with three parameters, where k3=0.02; lBest fit a model with three parameters, where k3=4.17; mEC Electrical conductivity 4.4 Persistence Modeling in Freshwater The model parameters for different pathogen combinations in different freshwater matrices along with the corresponding fitting parameters are presented in Tables 10a,10b, and 10c. Studied pathogens consisted of bacteria, viruses, bacteroidales, bacteriophages, and protozoa and matrices consisted of different freshwater types of lakes and rivers. Table 10a. Best fitted models and fitting parameters for bacteria indicators and pathogens in freshwater matrices Indicator or (Other Models)b (Gam, JM2) Indigenous Microbiota and Ambient Sunlight Korajkic et al., 2014 (JM1, ln, Gam, Epd) Indigenous Microbiota (Ep, Epd, JM2) Ambient Sunlight (Ep, JM1, ln) N/Ad Without a Treatment (Epd, JM2) Dick et al., 2010 Reduced Predation Entero1ae (JM1, Gamma, lg1) (Ep, Lg1) Fecal streptococci Phosphate buffered freshwater with sunlight Fujioka et al., 1981 (Dep, Gzm, sB) Bacteroides fragilis Balleste and Blanch, 2010 aFor model description see Table 1; bOther models that provided an equally statistical best-fit; cNumber of data points modeled in the experiment; dN/A not applicable as model only has one parameter; eMolecularly based fecal indicator bacteria; fBest fit a model with three parameters, where k3=84.2; gNR: Not Reported Table 10b. Best fitted models and fitting parameters for bacterial indicator and microbial source tracking genes in freshwater matrices Best Fit Modela (Other Models)b ENT 23s 23.5 (278.5) JM2 (ln, Ep, Epd) JM2 (Ep, ln, Epd) BACTERIA SOURCE TRACKING GENETIC MARKERSd GenBac3 JM2 (JM1) JM2 (Gam, JM1) HumM2 BifAd 4.07(8.26) Jeanneau et al., 2012 Humbac 0.66(21.06) B. thetaiotamicron Ep (JM1, Gam, ln, JM2) aFor model description see Table 1; bOther models that provided an equally statistical best-fit; cNumber of data points modeled in the experiment; dBelonging to the order of the Bacteroidales; eNR: Not Reported Table 10c. 
Best fitted models and fitting parameters for bacteriophage and viruses in freshwater (JM1, lg1, Epd) (Epd, Bi) Espinosa et al., 2008 F+DNA, f1 Long and Sobsey, 2004 F+DNA, fd F+DNA, M13 (188.4) (Ep, ln, JM2, JM1) F+DNA, OW (Ep, ln, JM2, JM1, Gam) F+DNA, SD F+DNA, ZJ2 Depi F+RNA, Dm3 (Gam, Dep, Epd) F+RNA, Go1 (Bi, JM2) (Gz3) F+RNA, MS2 F+RNA, SG1 (Epd, JM2, Wbl, ln,Gam, JM1) F+RNA, SG42 F+RNA, sp2 FRNAPH aFor model description see Table 1; bOther models that provided an equally statistical best-fit; cNumber of data points modeled in the experiment; dN/A not applicable as model only has one parameter; eExperiment conducted in the dark; fExperiment conducted in the light; gBest fit a model with three parameters, where k3=0.0008; iBest fit a model with three parameters, where k3=0.000005; jBest fit a model with three parameters, where k3=0.00017; kBest fit a model with three parameters, where k3=0.00005 4.5 Persistence Modeling in Marine Water The model parameters for different pathogen combinations in different marine water matrices along with the corresponding fitting parameters are presented in Table 11. Studied pathogens consisted of bacteria and viruses, and matrices consisted of different marine water types of seawater, laboratory prepared saltwater and isolated estuarine waters. Table 11. Best fitted models and fitting parameters for indicators and pathogens in marine water matrices (T99) Days Beach Sand Microcosms Zhang et al., 2015 Chandran and Hatha, 2005 Depf Miyagi et al., 2001 (Bi2) No UV radiation Beckinghausen et al., 2014 (JM1, ln, Gam, Wb, sB, lg1) UV radiation and algae association UV radiation and no algae association (Ep, ln, lg1) (ln, Ep, sB, JM1, Gam, lg1) Deph (Gam, lg1, Dep, JM2) Without Sunlight (Wb, Gam) With Sunlight Salmonella Heidelberg Salmonella Mbandaka (Bi, Gzm) (JM2, Epd) Seawater w/o Alginic Acid Davidson et al., 2015 Seawater with Filtered Seawater w/o Alginic Acid (Wb, Gam, Bi3, Gz3, Gzm) with Alginic Acid Unfiltered seawater with Alginic acid (ln, JM1, Gam) Unfiltered seawater without Alginic acid Gzml -4*10-7 No UV radiation and algae association Vibrio fluvialis (Dep, Bi3) Amel et al., 2008 No Sediment (Dep, Gam, JM1, JM2, ln) Konishi et al., 2007 (Gam, JM2, Dep) No UV irradiation de Abreu Correa et al., 2012 (Gam, Dep) UV irradiation (Gam, JM2, Bi3, Dep) Murine Norovirus-1 (Gam, ln, JM2, Epd) (Epd, JM1, Gam) Enriquez et al., 1995 a For model description see Table 1; b Other models that provided an equally statistical best-fit; cNumber of data points modeled in the experiment; dNot Reported; eBest fit a model with three parameters, where k3=0.97; fBest fit a model with three parameters, where k3=6.94; gBest fit a model with three parameters, where k3=2.70; hBest fit a model with three parameters, where k3=0.000002; iBest fit a model with three parameters, where k3=3.61; jBest fit a model with three parameters, where k3=2.24; kBest fit a model with three parameters, where k3=4.0; lBest fit a model with three parameters, where k3=0.000003; The T90 and T99 values were eliminated in decay rate analysis (section 2.0) due to considerably different experimental conditions. 4.6 Persistence Modeling in Groundwater The model parameters for different pathogen combinations in different groundwater matrices along with the corresponding fitting parameters are presented in Table 12. Studied pathogens consisted of viruses, a protozoa, and a bacteriophage. Table 12. 
Best fitted models and fitting parameters for pathogens in groundwater Pathogen Agent Depe Sidhu and Toze, 2012 (ln, Epd, JM1, Gam, Dep, Ep) Echovirus (Ep, ln) Yates et al., 1990 (JM1, ln, Dep) (Wb) aFor model description see Table 1; bOther models that provided an equally statistical best-fit; cNumber of data points modeled in the experiment; dNR: Not reported; eFit the model with three parameters K3=0.16 5.0 Models and Methodological Approach As the first order model could not describe the persistence pattern of various pathogen/matrix combinations, several other models were developed and tested over time. The logistic model was developed originally to describe the sigmoid shaped decay curves in microbiology (Kamau et al. 1990). The Fermi model was applied originally to describe the influence of electric field intensity on the persistence of a microbial population (Peleg 1995). In addition, an exponentially damped polynomial model can be used to describe tailing survival curves (Cavalli-Sforza et al. 1983). Juneja and Marks developed two models, denoted here as JM1 and JM2, to describe the fate of foodborne pathogens in food processing operations (Juneja et al. 2003; Juneja et al. 2006). JM1 was mostly utilized to simulate the decay which had a convex shape and describe the tail in decay curves over long periods. JM2 was frequently used to simulate the non-linear persistence curves of thermal inactivation rates. Different variations of Gompertz models, denoted Gz2 and Gz3, were developed to predict the number of microorganisms persisting under stressed environmental conditions (Wu et al. 2004; Gil et al. 2011). Along with the logistic model, Gompertz functions are frequently used to fit sigmoidal kinetics (Membre et al. 1997). Gz2 and Gz3 models can describe log-linear kinetics, shoulders, and tailing effects (Zwietering et al. 1990; Chhabra et al. 1999). Weibull and lognormal models are commonly used for thermal and non-thermal disinfection in different matrices. Weibull model can predict linear, concave, and convex and sigmoidal curves (Coroller et al. 2006). The other commonly used persistence model is Gamma. This is a simple model with few parameters and is commonly being utilized to simulate the microbial persistence in water matrices under varying environmental conditions (van Gerwen & Zwietering 1998). The broken line models of Bi and Bi2 were originally developed to simulate multiple break-points in decay curves (Muggeo 2003). The double exponential model that was originally developed to simulate thermal inactivation is able to represent linear and biphasic persistence curves (Abraham et al. 1990). On the other hand, sigmoidal models of sA and sB are typically being used to describe concave inactivation curves (Peleg 2006). This chapter specifies and discusses the most appropriate mathematical models to describe the persistence patterns of various types of pathogens in different matrices, and compares the best fit models and decay rates in the specific pathogen (or indicator)/matrix combinations. Maximum likelihood estimation and Bayesian Information Criterion (BIC) values were used in order to assess the goodness of fit among the 17 persistence models which were fit to each pathogen (or indicator)/matrix combination. The models with the lowest absolute BIC values were selected as the best fitting models. Differences less than 2 in BIC values are not considered strong evidence for model selection. 
If the difference between the lowest BIC values of some models in a pathogen/matrix combination was less than 2, all these models were considered as the best fit models in this chapter. BIC is defined as: BIC = k ln (n) - 2 ln (Lm) where k is the total number of parameters, n is the number of data points in the observed data (x), and Lm is the maximum likelihood of the model. Lm is defined as: Lm = p(x|θ, Μ) where θ are the parameter values which maximize the likelihood function, and M is the model used. Raw data from different studies were obtained/extracted, analyzed, and fit to various persistence models using R statistical language (R Development Core Team, 2013). Data-capturing software, GetData Graph Digitizer (http://www.getdata-graph-digitizer.com), was used to digitize figures in the published papers where data values were not specifically stated. Abraham, G., Debray, E., Candau, Y. and Piar, G. (1990). Mathematical Model of Thermal Destruction of Bacillus stearothermophilus Spores. Appl. Envir. Microbiol. 56, pp. 3073–3080. Amel, B.Kahla Nakb, Amine, B. and Amina, B. (2008). Survival of Vibrio fluvialis in seawater under starvation conditions. Microbiological Research. 163, pp. 323–328. doi: 10.1016/j.micres.2006.06.006. Aragao, G.M.F., Corradini, M.G., Normand, M.D. and Peleg, M. (2007). Evaluation of the Weibull and log normal distribution functions as survival models of Escherichia coli under isothermal and non-isothermal conditions. International Journal of Food Microbiology. 119, pp. 243–257. doi: 10.1016/j.ijfoodmicro.2007.08.004. Ballesté, E. and Blanch, A.R. (2010). Persistence of Bacteroides species populations in a river as measured by molecular and culture techniques. Applied and Environmental Microbiology. 76, American Society for Microbiology. pp. 7608–16. doi: 10.1128/AEM.00883-10. Beckinghausen, A., Haznedaroglu, B.Z., Martinez, A. and Blersch, D. (2014). Association of nuisance filamentous algae Cladophora spp. with E. coli and Salmonella in public beach waters: impacts of UV protection on bacterial survival. Environmental Science: Processes and Impacts. 16, pp. 1267–1274. doi: 10.1039/c3em00659j. Boehm, A.B., Soetjipto, C. and Wang, D. (2012). Solar inactivation of four Salmonella serovars in fresh and marine waters. Journal of Water and Health. 10, pp. 504–510. doi: 10.2166/wh.2012.084. Cavalli-Sforza, L.Tommaso, Menozzi, P. and Strata, A. (1983). A model and program for study of a tolerance curve: Application to lactose absorption tests. International Journal of Bio-Medical Computing. 14, pp. 31–41. doi: http://dx.doi.org/10.1016/0020-7101(83)90084-3. Chandran, A. and Hatha, A.A.Mohamed (2005). Relative survival of Escherichia coli and Salmonella typhimurium in a tropical estuary. Water Research. 39, pp. 1397–1403. doi: 10.1016/j.watres.2005.01.010. Chhabra, A.T., Carter, W.H., Linton, R.H. and Cousin, M.A. (1999). A predictive model to determine the effects of pH, milkfat, and temperature on thermal inactivation of Listeria monocytogenes. Journal of Food Protection. 62, pp. 1143–1149. Chick, H. (1908). An Investigation of the Laws of Disinfection. The Journal of Hygiene. 8, pp. 92–158. Coroller, L., Leguerinel, I., Mettler, E., Savy, N. and Mafart, P. (2006). General model, based on two mixed weibull distributions of bacterial resistance, for describing various shapes of inactivation curves. Applied and Environmental Microbiology. 72, pp. 6493–6502. Czajkowska, D., Boszczyk-Maleszak, H., Sikorska, I. and Sochaj, A. (2008). 
Studies on the survival of enterohemorrhagic and environmental Escherichia coli strains in wastewater and in activated sludges from dairy sewage treatment plants. Polish Journal of Microbiology. 57, pp. 165–171. Davidson, M.C.F., Berardi, T., Aguilar, B., Byrne, B.A. and Shapiro, K. (2015). Effects of transparent exopolymer particles and suspended particles on the survival of Salmonella enterica serovar Typhimurium in seawater. FEMS Microbiology Ecology. 91, pp. fiv005. doi: 10.1093/femsec/fiv005. de Abreu Corrêa, A., Souza, D.S.M., Moresco, V., Kleemann, C.R., Garcia, L.A.T. and Barardi, C.R.M. (2012). Stability of human enteric viruses in seawater samples from mollusc depuration tanks coupled with ultraviolet irradiation. Journal of Applied Microbiology. 113, pp. 1554–1563. doi: 10.1111/jam.12010. Decrey, L. and Kohn, T. (2017). Virus inactivation in stored human urine, sludge and animal manure under typical conditions of storage or mesophilic anaerobic digestion. Environmental Science: Water Research and Technology. 3, pp. 492-501. Dick, L.K., Stelzer, E.A., Bertke, E.E., Fong, D.L. and Stoeckel, D.M. (2010). Relative decay of Bacteroidales microbial source tracking markers and cultivated Escherichia coli in freshwater microcosms. Applied and Environmental Microbiology. 76, pp. 3255–3262. doi: 10.1128/AEM.02636-09. Enriquez, C.E., Hurst, C.J. and Gerba, C.P. (1995). Survival of the enteric adenoviruses 40 and 41 in tap, sea, and waste water. Water Research. 29, Elsevier. pp. 2548–2553. Espinosa, A.Cecilia, Mazari-Hiriart, M., Espinosa, R., Maruri-Avidal, L., Méndez, E. and Arias, C.F. (2008). Infectivity and genome persistence of rotavirus and astrovirus in groundwater and surface water. Water Research. 42, pp. 2618–2628. doi: https://doi.org/10.1016/j.watres.2008.01.018. Fujioka, R.S., Hashimoto, H.H., Siwak, E.B. and Young, R.H. (1981). Effect of sunlight on survival of indicator bacteria in seawater. Applied and Environmental Microbiology. 41, pp. 690–696. Gil, M.M., Miller, F.A., Brandão, T.R.S. and Silva, C.L.M. (2011). On the use of the Gompertz model to predict microbial thermal inactivation under isothermal and non-isothermal conditions. Food Engineering Reviews. 3, pp. 17–25. doi: 10.1007/s12393-010-9032-2. Jeanneau, L., Solecki, O., Wery, N., Jardé, E., Gourmelon, M., Communal, P.-.Y. et al. (2012). Relative Decay of Fecal Indicator Bacteria and Human-Associated Markers: A Microcosm Study Simulating Wastewater Input into Seawater and Freshwater. Environmental Science and Technology. 46, American Chemical Society. pp. 2375–2382. doi: 10.1021/es203019y. Jenkins, M.B., Liotta, J.L. and Bowman, D.D. (2013). Inactivation Kinetics of Cryptosporidium parvum Oocysts in a Swine Waste Lagoon and Spray Field. Journal of Parasitology. 99, pp. 337–342. doi: 10.1645/GE-3193.1. Jodrá, P. (2009). A closed-form expression for the quantile function of the Gompertz–Makeham distribution. Mathematics and Computers in Simulation. 79, pp. 3069–3075. doi: http://dx.doi.org/10.1016/j.matcom.2009.02.002. Juneja, V.K., Eblen, B.S. and Marks, H.M. (2001). Modeling non-linear survival curves to calculate thermal inactivation of Salmonella in poultry of different fat levels. International Journal of Food Microbiology. 70, pp. 37–51. doi: http://dx.doi.org/10.1016/S0168-1605(01)00518-9. Juneja, V.K., Huang, L. and Thippareddi, H.H. (2006). Predictive model for growth of Clostridium perfringens in cooked cured pork.
International Journal of Food Microbiology. 110, pp. 85–92. doi: http://dx.doi.org/10.1016/j.ijfoodmicro.2006.01.038. Juneja, V.K., Marks, H.M. and Mohr, T. (2003). Predictive thermal inactivation model for effects of temperature, sodium lactate, NaCl, and sodium pyrophosphate on Salmonella serotypes in ground beef. Applied and Environmental Microbiology. 69, pp. 5138–5156. Kamau, D.N., Doores, S. and Pruitt, K.M. (1990). Enhanced thermal destruction of Listeria monocytogenes and Staphylococcus aureus by the lactoperoxidase system. Applied and Environmental Microbiology. 56, pp. 2711–2716. Klein, M., Brown, L., Ashbolt, N.J., Stuetz, R.M. and Roser, D.J. (2011). Inactivation of indicators and pathogens in cattle feedlotmanures and compost as determined bymolecularand culture assays. FEMS Microbiology Ecology. 77, pp. 200–210. doi: 10.1111/j.1574-6941.2011.01098.x. Konishi, K., Saito, N., Shoji, E., Takeda, H., Kato, M., Asaka, M. et al. (2007). Helicobacter pylori: longer survival in deep ground water and sea water than in a nutrient-rich environment. APMIS. 115, Blackwell Publishing Ltd. pp. 1285–1291. doi: 10.1111/j.1600-0643.2007.00594.x. Korajkic, A., Wanjugi, P. and Harwood, V.J. (2013). Indigenous Microbiota and Habitat Influence Escherichia coli Survival More than Sunlight in Simulated Aquatic Environments. Applied and Environmental Microbiology. 79, pp. 5329–5337. doi: 10.1128/AEM.01362-13. Korajkic, A., McMinn, B.R., Shanks, O.C., Sivaganesan, M., G Fout, S. and Ashbolt, N.J. (2014). Biotic interactions and sunlight affect persistence of fecal indicator bacteria and microbial source tracking genetic markers in the upper mississippi river. Applied and Environmental Microbiology. 80, pp. 3952–3961. doi: 10.1128/AEM.00388-14. Liang, Z., He, Z., Zhou, X., Powell, C.A., Yang, Y., Roberts, M.G. et al. (2012). High diversity and differential persistence of fecal Bacteroidales population spiked into freshwater microcosm. Water Research. 46, pp. 247–257. doi: http://dx.doi.org/10.1016/j.watres.2011.11.004. Long, S.C. and Sobsey, M.D. (2004). A comparison of the survival of F+RNA and F+DNA coliphages in lake water microcosms. Journal of Water and Health. 2, pp. 15–22. Membre, J.M., Majchrzak, V. and Jolly, I. (1997). Effects of temperature, ph, glucose, and citric acid on the inactivation of salmonella typhimurium in reduced calorie mayonnaise. Journal of Food Protection. 60, pp. 1497–1501. Miyagi, K., Omura, K., Ogawa, A., Hanafusa, M., Nakano, Y., Morimatsu, S. et al. (2001). Survival of Shiga toxin-producing Escherichia coli O157 in marine water and frequent detection of the Shiga toxin gene in marine water samples from an estuary port. Epidemiology and Infection. 126, pp. 129–33. Muggeo, V.M.R. (2003). Estimating regression models with unknown break-points. Statistics in Medicine. 22, pp. 3055–71. doi: 10.1002/sim.1545. Oladeinde, A., Bohrmann, T., Wong, K., Purucker, S.T., Bradshaw, K., Brown, R. et al. (2014). Decay of fecal indicator bacterial populations and bovine-associated source-tracking markers in freshly deposited cow pats. Applied and Environmental Microbiology. 80, pp. 110–118. doi: 10.1128/AEM.02203-13. Peleg, M. (2006). Advanced quantitative microbiology for foods and biosystems: models for predicting growth and inactivation. CRC Series in Contemporary Food Science. Taylor and Francis. Boca Raton. pp. 417. Peleg, M. (2003). Calculation of the Non-Isothermal Inactivation Patterns of Microbes Having Sigmoidal Isothermal Semi-Logarithmic Survival Curves. 
Critical Reviews in Food Science and Nutrition. 43, Taylor & Francis. pp. 645–658. doi: 10.1080/10408690390251156. Peleg, M. (1995). A model of microbial survival after exposure to pulsed electric fields. Journal of the Science of Food and Agriculture. 67, pp. 93–99. doi: 10.1002/jsfa.2740670115. Ravva, S.V. and Sarreal, C.Z. (2014). Survival of Salmonella enterica in aerated and nonaerated wastewaters from dairy lagoons. International Journal of Environmental Research and Public Health. 11, pp. 11249–11260. doi: 10.3390/ijerph111111249. Sidhu, J.P.S. and Toze, S. (2012). Assessment of pathogen survival potential during managed aquifer recharge with diffusion chambers. Journal of Applied Microbiology. 113, pp. 693–700. doi: 10.1111/j.1365-2672.2012.05360.x. Skraber, S., Ogorzaly, L., Helmi, K., Maul, A., Hoffmann, L., Cauchie, H.M. et al. (2009). Occurrence and persistence of enteroviruses, noroviruses and F-specific RNA phages in natural wastewater biofilms. Water Research. 43, pp. 4780–4789. doi: 10.1016/j.watres.2009.05.020. van Gerwen, S.J. and Zwietering, M.H. (1998). Growth and inactivation models to be used in quantitative risk assessments. Journal of Food Protection. 61, pp. 1541–9. Walters, S.P., Yamahara, K.M. and Boehm, A.B. (2009). Persistence of nucleic acid markers of health-relevant organisms in seawater microcosms: Implications for their use in assessing risk in recreational waters. Water Research. 43, pp. 4929–4939. doi: http://dx.doi.org/10.1016/j.watres.2009.05.047. Walters, S.P. and Field, K.G. (2009). Survival and persistence of human and ruminant-specific faecal Bacteroidales in freshwater microcosms. Environmental Microbiology. 11(6), pp. 1410–1421. Wu, J.W., Hung, W.L. and Tsai, C.H. (2004). Estimation of parameters of the Gompertz distribution using the least squares method. Applied Mathematics and Computation. 158, pp. 133–147. doi: 10.1016/j.amc.2003.08.086. Xiong, R., Xie, G., Edmondson, A.E. and Sheard, M.A. (1999). A mathematical model for bacterial inactivation. International Journal of Food Microbiology. 46, pp. 45–55. Yates, M.V. et al. (1990). Modelling microbial transport in soil and groundwater. ASM News (Washington). 54, pp. 324–327. Zhang, Q., He, X. and Yan, T. (2015). Differential Decay of Wastewater Bacteria and Change of Microbial Communities in Beach Sand and Seawater Microcosms. Environmental Science and Technology. 49, pp. 8531–8540. doi: 10.1021/acs.est.5b01879. Zwietering, M.H., Jongenburger, I., Rombouts, F.M. and van 't Riet, K. (1990). Modeling of the bacterial growth curve. Applied and Environmental Microbiology. 56, pp. 1875–1881. doi: 10.1111/j.1472-765X.2008.02537.x.
FindStat GelfandTsetlinPatterns [[4,0],[0]] [[4,0],[1]] [[4,0],[2]] [[4,0],[3]] [[4,0],[4]] [[3,1],[1]] [[3,1],[2]] [[3,1],[3]] [[2,2],[2]] 2. Additional information The two defining conditions ensure that every row is an integer partition and that any two adjacent rows define a skew shape. The shape of a GT-pattern is the partition given by its top row. Its weight (or type) is the sequence of differences of consecutive row sums; equivalently, the multiplicities of the entries in the corresponding semi-standard Young tableau. The following GT-pattern has shape $(5,4,3,1)$ and weight $(4,4,3,1,1)$: $$\begin{matrix}5 & & 4 & & 3 & & 1 & & 0\\ & 5 & & 4 & & 3 & & 0 \\ & & 5 & & 3 & & 3 & \\ & & & 5 & & 3 \\ & & & & 4\\ \end{matrix}$$ There is a natural bijection between GT-patterns with parameters $(l,s)$ and semi-standard Young tableaux with $s$ boxes and maximal entry at most $l$. The skew shape defined by row $j$ and $j+1$ (counting from the bottom) in a GT-pattern describes which cells in the corresponding tableau contain $j$. 3. Gelfand-Tsetlin polytopes The inequalities above (with a fixed top row and fixed row sums) describe a polytope in $\mathbb{R}^{\frac{n(n+1)}{2}}$, and each GT-pattern therefore naturally corresponds to a lattice point inside this polytope. This type of polytope has integer vertices. The usual definition of a Gelfand-Tsetlin polytope is to impose an additional restriction on the row sums. Thus, the GT-polytope $P_{\lambda,w}$ is defined as all real Gelfand-Tsetlin patterns $(x_{ij})$ such that $x_{ni}=\lambda_i$ and $\sum_i (x_{j,i}-x_{j-1,i}) = w_j$ where $x_{ij}=0$ whenever the indexing is outside the triangular array. Then, the lattice points in $P_{\lambda,w}$ are in bijection with semi-standard Young tableaux of shape $\lambda$ and type $w$, and the lattice points are counted by the Kostka numbers $K_{\lambda w}$. For small examples the vertices of these polytopes are integral [KTT04], but there are families of GT-patterns where the GT-polytope has arbitrarily large denominators, see [Lo04]. Hence, for each pair $(\lambda,w)$ the polytope $P_{\lambda,w}$ can be either integral or non-integral. All non-empty $P_{\lambda,w}$ contain at least one integral vertex (this is a consequence of Knutson and Tao's proof of the Saturation conjecture). All polytopes $P_{\lambda,\lambda}$ consist of exactly one lattice point. Since $K_{\lambda w}$ is invariant under permutation of the entries in $w$ (proved by Bender and Knuth), most research has been focused on the case when $w$ is a partition. However, $P_{\lambda,w}$ might be integral for some $w$ but non-integral for some permutation of $w$. 3.1. Tiles In [Lo04], the notion of tiles is introduced as an important statistic on GT-patterns. The related statistic is linked to the dimension of faces of Gelfand-Tsetlin polytopes. The tiling of a pattern is the finest partition of the entries in the pattern such that adjacent (NW, NE, SW, SE) entries that are equal belong to the same part. These parts are called tiles, and each entry in a pattern belongs to exactly one tile. A tile is free if it does not intersect the first or the last row. One example pattern (not reproduced here) has 7 tiles, and the tile consisting of the two 3s is the only free tile. "The machinery of tilings allows us easily to find non-integral vertices of GT-polytopes by looking for a tiling with a tiling matrix satisfying certain properties. [See Lemma 2.2 in source] Then the tilings can be "filled" in a systematic way with the entries of a GT pattern that is a non-integral vertex." [Lo04] 4. Applications 4.1.
Schur polynomials The close connection between GT-patterns and Schur polynomials is quite apparent. The Schur polynomial $s_\lambda(x_1,\dotsc,x_n)$ can be expressed as a sum over lattice points in $GT(\lambda)$, the set of integral triangular GT-patterns with top row equal to $\lambda$. We have that $$s_\lambda(x_1,\dotsc,x_n) = \sum_{G \in GT(\lambda) } x^{w(G)}$$ where $w$ is the weight of the pattern $G$. 4.2. Hall–Littlewood polynomials The identity for Schur functions can be generalized to Hall–Littlewood polynomials, via a weighted version of Brion's theorem. See [FM06] Let $\lambda$ be a partition. Then $$P_\lambda(x;t) = \sum_{G \in GT(\lambda) } p_G(t) x^{w(G)}$$ where $p_G(t)$ is a certain statistic that depends on $G$. This theorem can also be proved via other means, see [Mac95] 4.3. Key polynomials The key polynomials generalize the Schur polynomials. In [KST12], a formula for key polynomials is proved, where the sum runs over lattice points in a union of so called reduced Kogan faces of $GT(\lambda)$. 4.4. Schubert polynomials It was recently proved in 1903.08275 that Schubert polynomials can be expressed as a sum over lattice points in a Minkowski-sum of GT-polytopes. Alternatively, Schubert polynomials can also be expressed as a sum over reduced Kogan faces of a certain Gelfand-Tsetlin polytope, where each face contribute with a single monomial. The reduced Kogan faces are in bijection with pipe dreams, which are used to describe Schubert polynomials. 5. Remarks It is possible to extend this definition to a parallelogram GT-pattern, in which case the GT-pattern corresponds to a skew semi-standard Young tableau. For example, here is such a GT-pattern and the corresponding skew semi-standard Young tableau: $$\begin{matrix}5 & & 4 & & 3 & & 1\\ & 4 & & 4 & & 3 & & 1\\ & & 4 & & 4 & & 2 & & 1\\ & & & 4 & & 3 & & 2 & & 1\\ & & & & 3 & & 2 & & 2 & & 0\\ & & & & & 2 & & 2 & & 0 & & 0 \\ \end{matrix} \qquad \begin{matrix} & & 1 & 2 & 5\\ & & 2 & 3\\1 & 1 & 4\\2\\ \end{matrix}.$$ The skew shape is $(5, 4, 3, 1)/(2, 2)$ and the weight is $(3, 3, 1, 1, 1)$. 5.1. Alternating sign matrices There is also a correspondence between Gelfand Tsetlin patterns and Alternating Sign Matrices. In this case we use a special type of Gelfand Tsetlin pattern called a monotone triangle. Monotone triangles are Gelfand Tsetlin patterns with strict inequalities instead of weak, and the top row equals $1 2 \cdots n$. Using the alternating sign matrix and its monotone triangle as above we have a map which gives the following Gelfand Tsetlin pattern: \begin{align*} 1 & &2 & &3 & &4 \\ &\; \; \; \; \; \; \;1 & &\; \; \; \; \; \; \;\;3 & &\; \; \; \; \; \; \;4 & \\ & & 1 & &3 & & \\ & & &\; \; \; \; \; \; \; \;2 & & & \end{align*} which has column partial sum matrix $\begin{pmatrix}0 &1 &0 &0\\1 &0 &1 &0 \\1 &0 &1 &1 \\1 &1 &1 &1 \end{pmatrix}$ and maps to the alternating sign matrix $\begin{pmatrix}0 &1 &0 &0 \\1 &-1 &1 &0 \\0 &0 &0 &1 \\0 &1 &0 &0 \end{pmatrix}$ 6. Additional resources See here an interesting video showing the bijection from a GT Pattern to a SSYT. See [Sta99] p. 313 for further references. Gelfand-Tsetlin patterns in PerAlexandersson's catalog of symmetric functions. [FM06] Boris Feigin and Igor Makhlin, A combinatorial formula for affine Hall–Littlewood functions via a weighted Brion theorem, Selecta Mathematica, 22(3):1703–1747, March 2016. [KST12] Valentina A. Kirichenko, Evgeny Yu Smirnov, and Vladlen A. 
Timorin, Schubert calculus and Gelfand–Zetlin polytopes, Russian Mathematical Surveys, 67(4):685, 2012. [KTT04] R.C. King, C. Tollu, and F. Toumazet, Stretched Littlewood–Richardson coefficients and Kostka coefficients, CRM Proceedings and Lecture Notes (2004), no. 24, 99–112. [Lo04] J. A. De Loera and T. B. McAllister, Vertices of Gelfand–Tsetlin polytopes, Discrete Comput. Geom. 32 (2004), no. 4, 459–470. [Mac95] Ian G. Macdonald, Symmetric functions and Hall polynomials, The Clarendon Press, Oxford University Press, New York, second edition, 1995. [Sta99] R. Stanley, Enumerative Combinatorics II, Cambridge University Press (1999). 8. Sage examples 9. Technical information for database usage A GT-pattern is uniquely represented as a list of lists containing its rows from top to bottom. GT-patterns are parametrized by two parameters: the length of the first row and the sum of the first row. The database contains all GT-patterns where the sum of the two parameters is at most 6.
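Since the Sage examples section above is left empty, the following is a minimal plain-Python sketch (not FindStat's own Sage code) that checks the defining interlacing conditions of a GT-pattern and computes its shape and weight, using the list-of-rows representation described in the database section and the example pattern displayed earlier.

```python
# A plain-Python sketch: verify the defining conditions of a GT-pattern and
# compute its shape and weight. A pattern is a list of rows, top to bottom.

def is_gt_pattern(rows):
    """Adjacent rows must interlace: rows[i][j] >= rows[i+1][j] >= rows[i][j+1]."""
    for upper, lower in zip(rows, rows[1:]):
        if len(lower) != len(upper) - 1:
            return False
        if not all(upper[j] >= lower[j] >= upper[j + 1] for j in range(len(lower))):
            return False
    return True

def shape(rows):
    """The partition given by the top row (trailing zeros dropped)."""
    top = list(rows[0])
    while top and top[-1] == 0:
        top.pop()
    return tuple(top)

def weight(rows):
    """Differences of consecutive row sums, read from the bottom row up."""
    sums = [sum(r) for r in reversed(rows)]
    return tuple(s - p for s, p in zip(sums, [0] + sums[:-1]))

# The example from the text: shape (5, 4, 3, 1) and weight (4, 4, 3, 1, 1).
G = [[5, 4, 3, 1, 0], [5, 4, 3, 0], [5, 3, 3], [5, 3], [4]]
print(is_gt_pattern(G), shape(G), weight(G))
# True (5, 4, 3, 1) (4, 4, 3, 1, 1)
```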
9. Applications of Integration Let \(R\) be the region under the curve \(y=f(x)\) between \(x=a\) and \(x=b\) (\(0\leq a<b\)) (Figure 1(a)). In Section 9.2, we computed the volume of the solid obtained by revolving \(R\) about the \(x\)-axis. Another way of generating a totally different solid is to revolve the region \(R\) about the \(y\)-axis as shown in Figure 1(b). To compute the volume of this solid, consider a vertical thin rectangle of height \(f(x)\) and width \(dx\). When this rectangle is revolved around the \(y\)-axis, it generates a hollow, thin-walled shell of radius \(x\), height \(f(x)\) and thickness \(dx\) (Figure 2(a)). If the cylindrical shell is cut and rolled out flat like a thin sheet of tin, it becomes a thin slab of height \(f(x)\), thickness \(dx\), and length \(2\pi x\), which is the circumference of the shell (Figure 2(b)). Therefore, the element of volume is \[dV=\underbrace{2\pi x}_{\small \rm circumference}\underbrace{f(x)}_{\small \rm height}\underbrace{dx}_{\small \rm thickness}.\] The total volume is then obtained by adding up the volumes of the infinitesimal shells. \[V=\int_{a}^{b}dV=\int_{a}^{b}2\pi xf(x)dx.\] In principle, the volume of this solid can also be obtained by considering thin disks generated by revolving infinitesimally thin horizontal rectangles; however, it often turns out to be more difficult because (1) the equation \(y=f(x)\) has to be solved for \(x\) in terms of \(y\), and (2) the formula for the length of the horizontal rectangle may vary in the region. In such cases, we have to compute more than one integral. If the region is between two curves \(y=f(x)\) and \(y=g(x)\) (with \(f(x)\geq g(x)\)), then the height of the vertical rectangle, which is the same as the height of the cylindrical shell, is \(f(x)-g(x)\) (Figure 3). Therefore, in this case \[dV=2\pi x[f(x)-g(x)]dx\] and \[V=2\pi\int_{a}^{b}x[f(x)-g(x)]dx.\] In general, we can write \[ \bbox[#F2F2F2,5px,border:2px solid black]{V=\int_{a}^{b}2\pi(\text{shell radius})(\text{area of thin rectangle})=\int_{a}^{b}2\pi\rho\, dA}\] The region in the first quadrant between \(y=4-x^{2}\) and \(y=x^{2}-4\) is revolved about the \(y\)-axis. Find the volume of the resulting solid by the shell method. Let \(y_{1}=4-x^{2}\) and \(y_{2}=x^{2}-4\). Then the height of the typical shell \(MN\) is \[\begin{aligned} MN & =y_{1}-y_{2}\\ & =\left(4-x^{2}\right)-\left(x^{2}-4\right)\\ & =8-2x^{2}\end{aligned}\] So the volume of the shell shown in the figure is \[dV=2\pi\rho\ dA,\] where \[dA=MN\ dx=\left(8-2x^{2}\right)dx\] and \[\rho=\text{the distance of the thin rectangle from the axis of rotation}=x.\] Therefore, \[dV=2\pi x\left(8-2x^{2}\right)dx\] and the total volume is \[\begin{aligned} V & =\int_{0}^{2}dV=\int_{0}^{2}2\pi x\left(8-2x^{2}\right)dx\\ & =4\pi\int_{0}^{2}\left(4x-x^{3}\right)dx\\ & =4\pi\left[2x^{2}-\frac{1}{4}x^{4}\right]_{0}^{2}\\ & =4\pi\left(8-\frac{16}{4}\right)=16\pi.\end{aligned}\] The region inside the circle \(x^{2}+y^{2}=a^{2}\) is revolved about the \(y\)-axis. Find the volume of the resulting solid (which is a sphere) by the shell method.
We write the equation of the circle as \[y_{1}=\sqrt{a^{2}-x^{2}}\quad\text{and}\quad y_{2}=-\sqrt{a^{2}-x^{2}}.\] Then \[\begin{aligned} dA & =\text{area of the thin rectangle}\\ & =(y_{1}-y_{2})dx\\ & =2\sqrt{a^{2}-x^{2}}dx\end{aligned}\] and the volume of the shell is \[dV=2\pi\rho\,dA\] where \[\rho=\text{distance of the thin rectangle from the axis of rotation}=x.\] Therefore, \[dV=2\pi x\left(2\sqrt{a^{2}-x^{2}}\right)dx.\] Because \(x\) (= the distance of the thin rectangle from the \(y\)-axis) varies between \(0\) and \(a\), the total volume is: \[\begin{aligned} V & =\int_{0}^{a}dV\\ & =\int_{0}^{a}2\pi x\left(2\sqrt{a^{2}-x^{2}}\right)dx\\ & =2\pi\int_{0}^{a}2x\sqrt{a^{2}-x^{2}}dx\end{aligned}\] Let \(u=a^{2}-x^{2}\). Then \(du=-2x\,dx\) and \[\begin{aligned} V & =-2\pi\int_{a^{2}}^{0}\sqrt{u}\,du\qquad\small{(u=a^{2} \text{ when } x=0 \text{ and } u=0 \text{ when } x=a)}\\ & =2\pi\int_{0}^{a^{2}}\sqrt{u}\,du\\ & =2\pi\left[\frac{2}{3}u^{\frac{3}{2}}\right]_{0}^{a^{2}}\\ & =\frac{4\pi}{3}a^{3}\end{aligned}\] The region inside the curve \[\frac{x^{2}}{a^{2}}+\frac{y^{2}}{b^{2}}=1\] is revolved about the \(y\)-axis. Find the volume of the resulting ellipsoid. If we solve the equation of the ellipse for \(y\), we get \[y_{1}=b\sqrt{1-\frac{x^{2}}{a^{2}}},\quad y_{2}=-b\sqrt{1-\frac{x^{2}}{a^{2}}}.\] Then \[\begin{aligned} dA & =\text{area of the thin rectangle}\\ & =(y_{1}-y_{2})dx\\ & =2b\sqrt{1-\frac{x^{2}}{a^{2}}}dx\end{aligned}\] and the volume of the shell is \[\begin{aligned} dV & =2\pi x\,dA\\ & =2\pi x\left(2b\sqrt{1-\frac{x^{2}}{a^{2}}}\right)\,dx\end{aligned}\] Because \(x\) (= the distance between the thin rectangle and the \(y\)-axis) varies between \(0\) and \(a\), the total volume is \[\begin{aligned} V & =\int_{0}^{a}dV\\ & =\int_{0}^{a}4b\pi x\sqrt{1-\frac{x^{2}}{a^{2}}}\,dx\end{aligned}\] Let \(u=1-\frac{x^{2}}{a^{2}}\). Then \[du=\frac{-2}{a^{2}}x\,dx.\] We know that \(u=1\) when \(x=0\) and \(u=0\) when \(x=a\). Therefore \[\begin{aligned} V & =\int_{0}^{a}4b\pi x\sqrt{1-\frac{x^{2}}{a^{2}}}dx\\ & =4b\pi\int_{1}^{0}\sqrt{u}\underbrace{\left(-\frac{a^{2}}{2}du\right)}_{x\,dx}\\ & =-2a^{2}b\pi\left[\frac{2}{3}u^{\frac{3}{2}}\right]_{1}^{0}\\ & =\frac{4}{3}\pi a^{2}b\end{aligned}\] Given that a disk with radius \(1\) and center \((5,0)\) is revolved about the \(y\)-axis, compute the volume of the resulting doughnut-shaped solid. The equation of a circle with radius \(1\) and center \((5,0)\) is \[(x-5)^{2}+y^{2}=1\] This equation gives us two functions \[y_{1}=\sqrt{1-(x-5)^{2}}\quad\text{and}\quad y_{2}=-\sqrt{1-(x-5)^{2}}\] Therefore, the height of the typical thin rectangle is \[\begin{aligned} MN & =y_{1}-y_{2}\\ & =\sqrt{1-(x-5)^{2}}-\left(-\sqrt{1-(x-5)^{2}}\right)\\ & =2\sqrt{1-(x-5)^{2}},\end{aligned}\] and the area of a thin rectangle is \[\begin{aligned} dA & =(MN)\,dx\\ & =2\sqrt{1-(x-5)^{2}}dx,\end{aligned}\] and the volume of the shell is \[\begin{aligned} dV & =2\pi\rho\,dA\\ & =2\pi x\left(2\sqrt{1-(x-5)^{2}}\right)dx\\ & =4\pi x\sqrt{1-(x-5)^{2}}dx.\end{aligned}\] Because \(\rho=x\) (\(=\) the distance between the thin rectangle and the \(y\)-axis) varies between \(4\) and \(6\), the total volume is \[V=\int_{4}^{6}dV=\int_{4}^{6}4\pi x\sqrt{1-(x-5)^{2}}dx\] Let \(u=x-5\).
Then \[\begin{aligned} V & =\int_{-1}^{1}4\pi({\color{red}u}+{\color{blue}5})\sqrt{1-u^{2}}du\\ & =4\pi\int_{-1}^{1}{\color{red}u}\sqrt{1-u^{2}}du+4\pi\int_{-1}^{1}{\color{blue}5}\sqrt{1-u^{2}}du\end{aligned}\] Because the function \(f(u)=u\sqrt{1-u^{2}}\) is odd, that is \[f(-u)=-u\sqrt{1-(-u)^{2}}=-f(u),\] we know \[\int_{-1}^{1}u\sqrt{1-u^{2}}du=0.\] On the other hand, \(\int_{-1}^{1}\sqrt{1-u^{2}}du\) is equal to the area of a semicircle of radius \(1\) (Figure 8). Therefore \[\int_{-1}^{1}\sqrt{1-u^{2}}du=\frac{\pi}{2}(1)^{2}=\frac{\pi}{2}.\] Figure 8: The integral \(\int_{-1}^{1}\sqrt{1-u^{2}}du\) is equal to the area of a semicircle of radius 1. Therefore, the volume of the doughnut-shaped solid is \[\begin{aligned} V & =4\pi\underbrace{\int_{-1}^{1}u\sqrt{1-u^{2}}du}_{=0}+20\pi\underbrace{\int_{-1}^{1}\sqrt{1-u^{2}}du}_{=\frac{\pi}{2}}\\ & =10\pi^{2}.\end{aligned}\] If a circle is revolved about an axis in its plane that does not intersect it, the doughnut-shaped solid is called a torus. If the radius of the circle is \(r\) and the distance between the center of the circle and the axis is \(a\), then the volume of the torus is \[V=\left(\pi r^{2}\right)(2\pi a)=2\pi^{2}r^{2}a.\] Figure 9: A torus and its volume.
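The volumes obtained in the examples above can be double-checked symbolically; the following SymPy sketch evaluates exactly the shell integrals that were set up in each example.

```python
# A quick symbolic check (SymPy) of the shell-method volumes derived above.
# Each integral has the form V = ∫ 2*pi*x*(height of the shell) dx.
import sympy as sp

x, a, b = sp.symbols('x a b', positive=True)

# Region between y = 4 - x^2 and y = x^2 - 4 (first quadrant), about the y-axis
V1 = sp.integrate(2*sp.pi*x*(8 - 2*x**2), (x, 0, 2))
print(V1)                  # 16*pi

# Sphere of radius a
V2 = sp.integrate(2*sp.pi*x*2*sp.sqrt(a**2 - x**2), (x, 0, a))
print(sp.simplify(V2))     # 4*pi*a**3/3

# Ellipsoid obtained from x^2/a^2 + y^2/b^2 = 1 revolved about the y-axis
V3 = sp.integrate(2*sp.pi*x*2*b*sp.sqrt(1 - x**2/a**2), (x, 0, a))
print(sp.simplify(V3))     # 4*pi*a**2*b/3

# Torus: circle of radius 1 centred at (5, 0) revolved about the y-axis
V4 = sp.integrate(2*sp.pi*x*2*sp.sqrt(1 - (x - 5)**2), (x, 4, 6))
print(sp.simplify(V4))     # 10*pi**2
```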
How can I read off the fact that gravity is associated with spin-2 particles from the Einstein-Hilbert action? I have often heard that the gravitational field has spin $2$. How can I read the spin of the field from the Einstein-Hilbert action $$S=\int \! \mathrm{d}^4x \,\sqrt{|g|} \, \mathcal{R} \, \, \, ?$$ general-relativity quantum-spin gravitational-waves One way of doing it, as Peskin and Schroeder demonstrate for other quantum field theories, is to compute the conserved currents using Noether's theorem, quantize the theory, promote the conserved currents of internal angular momentum to an operator, and act on a state to determine the spin. – JamalS Apr 13 '14 at 14:48 But I don't know how to quantize the Einstein-Hilbert action. – 346699 Apr 13 '14 at 15:04 See the answer I posted, it should make things clearer. – JamalS Apr 13 '14 at 15:16 @JamalS in P&S they act with the operator of conserved currents of total angular momentum on a massive state which they have given zero momentum (i.e. the state is at rest). From there you can conclude how much intrinsic angular momentum the state has. But is this also possible for massless states? They cannot be at rest, so I don't see how you can conclude what their intrinsic angular momentum is. – Hunter Apr 15 '14 at 14:57 For massless states, helicity is the correct quantum number (people often use spin and helicity interchangeably and this causes confusion). To compute helicity, one does not go to the rest frame. (since there isn't any!) – Prahar Apr 15 '14 at 15:15 A common procedure to determine the spin of the excitations of a quantum field is to first determine the conserved currents arising from quasi-symmetries via Noether's theorem. For example, in the case of the Dirac field, described by the Lagrangian $$\mathcal{L}=\bar{\psi}(i\gamma^\mu \partial_\mu -m)\psi, $$ the associated conserved currents under a translation are $$T^{\mu \nu} = i \bar{\psi}\gamma^\mu \partial^\nu \psi - \eta^{\mu \nu} \mathcal{L}$$ and the currents corresponding to Lorentz symmetries are given by $$(\mathcal{J}^\mu)^{\rho \sigma} = x^\rho T^{\mu \sigma} - x^\sigma T^{\mu \rho}-i\bar{\psi}\gamma^\mu S^{\rho \sigma} \psi,$$ where the matrices $S^{\mu \nu}$ form the appropriate representation of the Lorentz algebra. After canonical quantization, the currents $\mathcal{J}$ become operators, and acting on the states will confirm that, in this case, the excitations carry spin $1/2$. In gravity, we proceed similarly. The metric can be expanded as $$g_{\mu \nu} = \eta_{\mu \nu} + f_{\mu \nu}$$ and we expand the field $f_{\mu \nu}$ as a plane wave with operator-valued Fourier coefficients, i.e. $$f_{\mu \nu} \sim \int \frac{\mathrm{d}^3 p}{(2\pi)^3} \frac{1}{\sqrt{\dots}} \left\{ \epsilon_{\mu \nu} a_p e^{ipx} + \dots\right\}$$ We only keep terms of linear order $\mathcal{O}(f_{\mu \nu})$, compute the conserved currents analogously to other quantum field theories, and, once they are promoted to operators, act on the states to determine that the excitations indeed have spin $2$. Counting physical degrees of freedom The graviton has spin $2$ and, being massless, only two degrees of freedom. We can verify this in gravitational perturbation theory. We know $h^{ab}$ is a symmetric matrix, with only $d(d+1)/2$ distinct components.
In de Donder gauge, $$\nabla^{a}\bar{h}^{ab} = \nabla^a\left(h^{ab}-\frac{1}{2}h g^{ab}\right) = 0,$$ which provides $d$ gauge constraints. There is also a residual gauge freedom: infinitesimally, we may shift by a vector field, i.e. $$X^\mu \to X^\mu + \xi^\mu,$$ provided $\square \xi^\mu + R^\mu_\nu \xi^\nu = 0$, which restricts us by $d$ as well. Therefore the total physical degrees of freedom are $$\frac{d(d+1)}{2}-2d = \frac{d(d-3)}{2}.$$ If $d=4$, the graviton indeed has only two degrees of freedom. Important Caveat Although we often find a field with a single vector index has spin one, with two indices spin two, and so forth, it is not always the case, and determining the spin should be done systematically. Consider, for example, the Dirac matrices, which satisfy the Clifford algebra, $$\{ \Gamma^a, \Gamma^b\} = 2g^{ab}$$ On an $N$-dimensional Kahler manifold $K$, if we work in local coordinates $z^a$, with $a = 1,\dots,N$, and the metric satisfies $g^{ab} = g^{\bar{a} \bar{b}} = 0$, the expression simplifies: $$\{ \Gamma^a, \Gamma^b\} = \{ \Gamma^{\bar{a}}, \Gamma^{\bar{b}}\} = 0$$ $$\{ \Gamma^a, \Gamma^{\bar{b}}\} = 2g^{ab}$$ Modulo constants, we see that we can think of $\Gamma^a$ as an annihilation operator, and $\Gamma^{\bar{b}}$ as a creation operator for fermions. Given that we define $\lvert \Omega \rangle$ as the Fock vacuum, we can define a general spinor field $\psi$ on the Kahler manifold $K$ as, $$\psi(z^a,\bar{z}^{\bar{a}}) = \phi(z^a,\bar{z}^{\bar{a}}) \lvert \Omega \rangle + \phi_{\bar{b}}(z^a,\bar{z}^{\bar{a}}) \Gamma^{\bar{b}} \lvert \Omega \rangle + \dots$$ Given that $\phi$ has no indices, we would expect it to be a spinless field, but it can interact with the $U(1)$ part of the spin connection. Interestingly, we can only guarantee that $\phi$ is neutral if the manifold $K$ is Ricci-flat, in which case it is a Calabi-Yau manifold. – JamalS If you linearise the theory such that $$ g^{\mu \nu}(x) = \eta^{\mu \nu} + h^{\mu \nu}(x) $$ say, you will find that your quantum of gravitation is this tensor $h^{\mu \nu}(x)$. Then clearly it has two free indices, and is what we call a 'spin-2 particle'. The maths to do the linearisation and prove that it transforms as a spin-2 particle would under Lorentz transformations is quite long and difficult, I find. But if you want a more 'complete' answer I can try. I need to go over it I guess... – Flint72 I'm so grateful. If you feel it's so tedious, you can give me a reference. Thanks! – 346699 Apr 13 '14 at 15:10 @user34669 I guess Birrell & Davies - Quantum Field Theory In Curved Space would be a good place to start. It will teach you about one-loop semi-classical gravity, where you can do this linear approximation and get a graviton. I don't know exactly where in the book though. – Flint72 Apr 13 '14 at 15:29 If I remember correctly, papers by Wheeler present one loop calculations in linearized gravity as well. In fact, I think he was the first to derive the vertex rules, which infamously have over 100+ terms :( – JamalS Apr 13 '14 at 15:38
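As a quick arithmetic restatement of the counting in the first answer above, one can tabulate the symmetric-tensor components and the resulting physical polarizations in various dimensions; this is nothing more than the formula $d(d+1)/2-2d=d(d-3)/2$ evaluated numerically.

```python
# Count graviton polarizations in d spacetime dimensions:
# symmetric tensor components d(d+1)/2, minus d de Donder gauge conditions,
# minus d residual gauge parameters, i.e. d(d-3)/2.
for d in range(4, 12):
    sym = d * (d + 1) // 2
    dof = sym - 2 * d
    print(f"d={d}: symmetric components = {sym}, physical dof = {dof}")
# d=4 gives 2, the two helicity states of the massless graviton.
```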
Continuous tenor extension of affine LIBOR models with multiple curves and applications to XVA Antonis Papapantoleon ORCID: orcid.org/0000-0002-9504-28221 & Robert Wardenga2 We consider the class of affine LIBOR models with multiple curves, which is an analytically tractable class of discrete tenor models that easily accommodates positive or negative interest rates and positive spreads. By introducing an interpolating function, we extend the affine LIBOR models to a continuous tenor and derive expressions for the instantaneous forward rate and the short rate. We show that the continuous tenor model is arbitrage-free, that the analytical tractability is retained under the spot martingale measure, and that under mild conditions an interpolating function can be found such that the extended model fits any initial forward curve. This allows us to compute value adjustments (i.e. XVAs) consistently, by solving the corresponding 'pre-default' BSDE. As an application, we compute the price and value adjustments for a basis swap, and study the model risk associated to different interpolating functions. In the aftermath of the credit crisis and the European sovereign debt crisis, several of the classical paradigms in finance were no longer able to describe the new reality and needed to be designed afresh. On the one hand, significant spreads have appeared between rates of different tenors, which led to the development of multiple curve interest rate models. On the other hand, counterparty credit risk has emerged as the native form of default risk, along with liquidity risk, funding constraints and the collateralization of trades. Therefore, in post-crisis markets the quoted price of a derivative product (or, better, the cost of its hedging portfolio) is computed as the "clean" price of the product together with several value adjustments that reflect counterparty credit risk, liquidity risk, funding constraints, etc. In the context of interest rate derivatives, the "clean" price is typically computed as the discounted expected payoff under a martingale measure using a (discrete tenor) LIBOR market model, while the value adjustments are provided via the solution of a BSDE, which requires the existence of a short rate to discount the cash flows. The aim of this work is to compute prices and value adjustments consistently, in the sense that we only calibrate a discrete tenor LIBOR model and then infer the dynamics of the short rate from it, instead of resorting to an additional, external short rate model. In the sequel, we will work with the class of affine LIBOR models with multiple curves. This class of models easily produces positive interest rates and positive spreads, as well as negative interest rates alongside positive spreads. Moreover, the models are analytically tractable in the sense that the driving process remains affine under all forward measures, which allows deriving of explicit expressions for the prices of caplets and semi-analytic expressions for swaptions. Thus, these models can be efficiently calibrated to market data; cf. Grbac et al. (2015) for more details. Once the affine LIBOR model has been set up, we introduce an interpolating function that allows extending the model from a discrete to a continuous tenor, and derive explicit expressions for the dynamics of the instantaneous forward rate and of the short rate. This part follows and extends Keller-Ressel (2009), while a similar interpolation for affine LIBOR models has been recently introduced by Cuchiero et al. (2016). 
Moreover, we show that the resulting continuous tenor model is arbitrage-free and belongs to the class of affine term structure models. Let us mention that, on the contrary, the arbitrage-free interpolation of "classical" LIBOR market models is a challenging task; see e.g., Beveridge and Joshi (2012). In addition, we show that the driving process remains an affine process under the spot martingale measure, hence also the short rate is analytically tractable under this measure. The choice of the interpolating function is not innocuous though, as it may lead to undesirable behavior of the short rate; e.g., it may induce jumps at fixed times. Thus, we investigate what properties the (discrete tenor) affine LIBOR model and the interpolating function should have in order to avoid such situations. In particular, we show that under mild assumptions there exists an interpolating function such that the extended model can fit any initial forward curve. Then, we can compute value adjustments via solutions of a "pre-default" BSDE using the framework of Crépey (2015a,b). As an illustration, we design and calibrate an affine LIBOR model, and consider a simple post-crisis interest rate derivative, namely a basis swap. Using the methodology outlined above, we derive the dynamics of the short rate and of the basis swap using an interpolating function, and compute the value adjustments for different specifications of the contract. As the choice of an interpolating function is still arbitrary, we study the model risk associated to different choices. This paper is organized as follows: Section "Affine processes on \({\mathbb {R}}_{\geqslant 0}^{d}\)" reviews affine processes and Section "Affine LIBOR models with multiple curves" presents an overview of multiple curve markets and affine LIBOR models. Section "Continuous tenor extension of affine LIBOR models" focuses on the continuous tenor extension of affine LIBOR models and studies the properties of interpolating functions. The final Section "Computation of XVA in affine LIBOR models" outlines the computation of value adjustments in affine LIBOR models, and discusses the model risk associated with the choice of interpolating functions. The Appendix contains a useful result on the time integration of affine processes. Affine processes on \({\mathbb {R}}_{\geqslant 0}^{d}\) This section provides a brief overview of the basic notions and properties of affine processes. Proofs and further details can be found in Duffie et al. (2003), in Keller-Ressel (2008), and in Filipović (2005) for the time-inhomogeneous case. Let denote a complete stochastic basis in the sense of Jacod and Shiryaev (2003, Def. I.1.3), where \(\mathbb F=(\mathcal {F}_{t})_{t\in [0,T]}\) and T∈[0,∞) denotes the time horizon. In the sequel, we will consider a process X that satisfies the following: Assumption ( \(\mathbb {A}\) ) Let be a conservative, time-homogeneous, stochastically continuous Markov process taking values in \(D=\mathbb {R}_{\geqslant 0}^{d}\), i.e., is a family of probability measures on \((\Omega,\mathcal {F})\)and X=(X t )t∈[0,T] is a Markov process such that for every x∈D it holds X0=x, -almost surely. Denote by the expectation w.r.t. the measure and by 〈·,·〉 the inner product in \({\mathbb {R}}^{d}\). 
Setting we assume that \(0\in \mathcal {I}_{T}^{\circ }\), where \(\mathcal I_{T}^{\circ }\) denotes the interior of \(\mathcal I_{T}\); The conditional moment generating function of X t under has exponentially-affine dependence on x; that is, there exist functions \(\phi \colon [0,T]\times \mathcal {I}_{T}\rightarrow \mathbb {R}\) and \(\psi \colon [0,T]\times \mathcal {I}_{T}\rightarrow {\mathbb {R}}^{d}\) such that for all \((t,u,x)\in [0,T]\times \mathcal {I}_{T}\times D\). The functions ϕ and ψ satisfy the semi-flow equations, that is, for all 0≤t+s≤T and \(u\in \mathcal {I}_{T}\) $$ \begin{aligned} \phi_{t+s}(u) &= \phi_{t}(u) + \phi_{s}(\psi_{t}(u)),\\ \psi_{t+s}(u) &= \psi_{s}(\psi_{t}(u)), \end{aligned} $$ $$\phi_{0}(u)=0 \quad\text{and }\quad \psi_{0}(u)=u. $$ Using the semi-flow equations we can derive the generalized Riccati equations $$ \begin{aligned} \frac{\partial}{\partial t}\phi_{t}(u) &= F(\psi_{t}(u)), \qquad \phi_{0}(u)=0,\\ \frac{\partial}{\partial t}\psi_{t}(u) &= R(\psi_{t}(u)), \qquad \psi_{0}(u)=u, \end{aligned} $$ for \((t,u)\in [0,T]\times \mathcal {I}_{T}\), where F and R=(R1,…,R d ) are functions of Lévy–Khintchine form: $$ \begin{aligned} F(u) &= \langle b,u\rangle + \int_{D}\left(\mathrm{e}^{\langle\xi,u\rangle}-1\rangle\right)m(\mathrm{d} \xi),\\ R_{i}(u) &= \langle \beta_{i},u\rangle + \left\langle\frac{\alpha_{i}}2u,u\right\rangle + \int_{D}\left(\mathrm{e}^{\langle\xi,u\rangle}-1-\langle u,h_{i}(\xi)\rangle\right)\mu_{i}(\mathrm{d} \xi), \end{aligned} $$ while (b,m,α i ,β i ,μ i )1≤i≤d are admissible parameters—see Definition 2.6 in Duffie et al. (2003) for details—and \(h_{i}\colon {\mathbb {R}_{\geqslant 0}}^{d}\rightarrow \mathbb {R}^{d}\) are suitable truncation functions. The infinitesimal generator of a process satisfying Assumption (\(\mathbb A\)) is provided by $$ \begin{aligned} \mathcal{A}f(x) &= \left\langle b+\sum\limits_{i=1}^{d}\beta_{i}x_{i},\nabla f(x)\right\rangle + \frac1{2} \sum\limits_{i=1}^{d} \alpha_{i,kl}x_{i} \frac{\partial^{2}}{\partial x_{k}\partial x_{l}}f(x)\\ &\quad + \int_{D} \left(f(x+\xi)-f(x)\right) m(\mathrm{d} \xi) \\ &\quad + \sum\limits_{i=1}^{d} \int_{D} \left(f(x+\xi) - f(x) -\langle h_{i}(\xi),\nabla f(x)\rangle \right)x_{i} \mu_{i}(\mathrm{d} \xi), \end{aligned} $$ for all \(f\in C_{0}^{2}(D)\) and x∈D. Additional results are summarized in the following lemma. In the sequel, inequalities have to be understood componentwise, in the sense that (a1,a2)≤(b1,b2) if and only if a1≤b1 and a2≤b2. Lemma 1 The functions ϕ and ψ satisfy the following: ϕ t (0)=ψ t (0)=0 for all t∈[0,T]. \(\mathcal {I}_{T}\) is a convex set; moreover, for each t∈[0,T], the functions u↦ϕ t (u) and u↦ψ t (u), for \(u\in \mathcal {I}_{T}\), are (componentwise) convex. ϕ t (·) and ψ t (·) are order preserving: let \((t,u),\,(t,v)\in [0,T]\times \mathcal {I}_{T}\), with u≤v. Then $$ \phi_{t}(u)\leq\phi_{t}(v) \quad \text{and} \quad \psi_{t}(u)\leq\psi_{t}(v). $$ ψ t (·) is strictly order-preserving: let \((t,u),\,(t,v)\in [0,T]\times \mathcal {I}_{T}^{\circ }\), with u<v. Then ψ t (u)<ψ t (v). ϕ and ψ are jointly continuous on \([0,T]\times \mathcal {I}_{T}^{\circ }\). The partial derivatives $$ \frac{\partial}{\partial u_{i}}\phi_{t}(u) \quad \text{and} \quad \frac{\partial}{\partial u_{i}}\psi_{t}(u),\quad i=1,\dots,d $$ exist and are continuous for \(\left (t,u\right)\in [0,T]\times \mathcal {I}_{T}^{\circ }\). See Keller-Ressel et al. (2013, Lem. 4.2) for statements (1)–(4) and Keller-Ressel (2008, Prop. 3.16 and Lem. 3.17) for the last two. 
□ Affine processes have rich structural properties which have been proved particularly useful when it comes to financial modeling. However, there are situations where the condition of time-homogeneity cannot be met; for example, time-inhomogeneity may be introduced through an equivalent change of measure. Filipović (2005) introduced time-inhomogeneous affine processes, whose conditional moment generating function takes the form for all 0≤s≤t≤T and \(u\in \mathcal {I}_{T}\). Theorem 2.7 in Filipović (2005) yields that the infinitesimal generator is provided by where the functions F and R retain the same form as in the time-homogeneous case, however, the (admissible) parameters are now time-dependent. If the process X is strongly regular affine—that is, the parameters satisfy some continuity conditions, see Definition 2.9 in Filipović (2005) for more details—then ϕs,t(u) and ψs,t(u) satisfy generalized Riccati equations with time-dependent functional characteristics F(s,u) and R(s,u), i.e., for all 0≤s≤t≤T $$ \begin{aligned} -\frac{\partial}{\partial s}\phi_{s,t}(u) &= F\left(s,\psi_{s,t}(u)\right),\quad\phi_{t,t}(u)=0,\\ -\frac{\partial}{\partial s}\psi_{s,t}(u) &= R\left(s,\psi_{s,t}(u)\right),\quad\psi_{t,t}(u)=u. \end{aligned} $$ Affine LIBOR models with multiple curves A multiple curve setting We start by introducing some basic notation and the main concepts used in multiple curve LIBOR models, following the approach introduced by Mercurio (2010); see also Grbac et al. (2015) for an overview and more details. The emergence of significant spreads between the OIS and LIBOR rates which depend on the investment horizon, also called tenor, means that we cannot work with a single tenor structure any longer. Let \(\mathcal {T}=\{0=T_{0}<T_{1}<\cdots <T_{N} = T\}\) denote a discrete, equidistant time structure where T k , for \(k\in \mathcal {K}=\{1,\dots,N\}\), denote the relevant market dates, e.g., payment dates and maturities of traded instruments. The set of tenors is denoted by \(\mathcal {X}=\{x_{1},\dots,x_{n}\}\), where we typically have \(\mathcal {X}=\{1,3,6,12\}\) months. Then, for every \(x\in \mathcal {X}\) we consider the corresponding tenor structure \(\mathcal {T}^{x}=\{0=T_{0}^{x}<T_{1}^{x}<\cdots <T_{N^{x}}^{x}=T_{N}\}\) with constant tenor length \(\delta _{x}=T_{k}^{x}-T_{k-1}^{x}\). We denote by \(\mathcal {K}^{x}=\{1,\dots,N^{x}\}\) the collection of all subscripts related to the tenor structure \(\mathcal {T}^{x}\), and assume that \(\mathcal {T}^{x}\subseteq \mathcal {T}\) for all \(x\in \mathcal {X}\). The Overnight Indexed Swap (OIS) rate is regarded as the best market proxy for the risk-free interest rate. Moreover, the majority of traded interest rate derivatives are nowadays collateralized and the remuneration of the collateral is based on the overnight rate. Therefore, the discount factors B(0,T) are assumed to be stripped from OIS rates and defined for every possible maturity \(T\in \mathcal {T}\); see also Grbac and Runggaldier (2015, §1.3.1). B(t,T) denotes the discount factor, i.e., the time-t price of a zero coupon bond with maturity T, which is assumed to coincide with the corresponding OIS-based zero coupon bond. Let be a complete stochastic basis, where denotes the terminal forward measure, i.e., the martingale measure associated to the numeraire B(·,T N ). We consider the forward measures associated to the numeraires \(\{B(\cdot,T_{k}^{x})\}_{x,k}\) for every pair (x,k) with \(x\in \mathcal {X}\) and \(k\in \mathcal {K}^{x}\). 
Assuming that the processes \(B(\cdot,T_{k}^{x})/B(\cdot,T_{N})\) are true -martingales for every pair (x,k), the forward measures are absolutely continuous with respect to and defined in the usual way, i.e., via the Radon–Nikodym density Therefore, the forward measures are associated to each other via hence they are related to the terminal measure via The expectations with respect to the forward measures and the terminal measure are denoted by and , respectively. Next, we define the main modeling objects in the multiple curve LIBOR setting: the OIS forward rate, the forward LIBOR rate and the corresponding spread. The time-tOIS forward rate for the time interval \([T_{k-1}^{x},T_{k}^{x}]\) is defined by $$ F_{k}^{x}(t) := \frac1{\delta_{x}} \left(\frac{B\left(t,T_{k-1}^{x}\right)}{B\left(t,T_{k}^{x}\right)}-1 \right). $$ The time-tforward LIBOR rate for the time interval \([T_{k-1}^{x},T_{k}^{x}]\) is defined by where \(L\left (T_{k-1}^{x},T_{k}^{x}\right)\) denotes the spot LIBOR rate at time \(T_{k-1}^{x}\) for the time interval \(\left [T_{k-1}^{x},T_{k}^{x}\right ]\). The forward LIBOR rate is the rate implied by a forward rate agreement where the future spot LIBOR rate is exchanged for a fixed rate; cf. Mercurio (2009, pp. 12–13). The spot LIBOR rate \(L\left (T_{k-1}^{x},T_{k}^{x}\right)\) is set in advance, hence it is \({\mathcal {F}}_{T_{k-1}^{x}}\)-measurable, therefore we have that the forward LIBOR rate coincides with the spot LIBOR rate at the corresponding tenor dates, i.e., The (additive) spread between the forward LIBOR rate and the OIS forward rate is defined by $$S_{k}^{x}(t) := L_{k}^{x}(t) - F_{k}^{x}(t). $$ In a single curve setup, the forward LIBOR rate is defined via (11) and the spread equals zero for all times. However, in a multiple curve model these rates are not equal any more and we typically have that \(L_{k}^{x} \geq F_{k}^{x}\). \(F_{k}^{x}\) and \(L_{k}^{x}\) can also be interpreted as forward rates corresponding to a riskless and a risky bond, respectively; see e.g., Crépey et al. (2012). We now turn our attention to the affine LIBOR models developed by Keller-Ressel et al. (2013) and extended to the multiple curve setting by Grbac et al. (2015). An important ingredient are martingales that are greater than, or equal to, one. Consider a process X satisfying Assumption (\(\mathbb {A}\)) and starting at the canonical value 1=(1,1,…,1), and let \(u\in \mathcal {I}_{T}\). Then, the process \({{M^{u}=(M^{u}_{t})_{t\in [0,T]}}}\) defined by is a martingale. Moreover, if \(u\in \mathcal {I}_{T}\cap \mathbb {R}_{\geqslant 0}^{d}\) the mapping \(u \mapsto M_{t}^{u}\) is increasing and \(M^{u}_{t}\ge 1\) for every t∈[0,T]; see Keller-Ressel et al. (2013, Thm. 5.1) and Papapantoleon (2010). The multiple curve affine LIBOR models are defined as follows: A multiple curve affine LIBOR model \((X,\mathcal {X},T_{N},u,v)\) consists of the following elements: An affine process X under satisfying Assumption (\(\mathbb {A}\)) and starting at the canonical value 1. A set of tenors \(\mathcal {X}\). A terminal maturity T N . A sequence of vectors u=(u1,…,u N ) with \(u_{l}=:u_{k}^{x}\in \mathcal {I}_{T}\cap \mathbb {R}_{\geqslant 0}^{d}\), for all \(l=kT_{1}^{x}/T_{1}\) and \(x\in \mathcal {X}\), such that $$ u_{1} \ge u_{2} \ge \cdots \ge u_{N}=0. 
$$ A collection of sequences of vectors \(v={\left \{(v_{1}^{x},\dots,v^{x}_{N^{x}})\right \}}_{x\in \mathcal {X}}\) with \(v_{k}^{x}\in \mathcal {I}_{T}\cap \mathbb {R}_{\geqslant 0}^{d}\), such that $$ \begin{aligned} v_{k}^{x} \ge u_{k}^{x} \quad \text{for all}~ k\in\mathcal{K}^{x},x\in\mathcal{X}. \end{aligned} $$ The dynamics of the OIS forward rates and the forward LIBOR rates in the model evolve according to $$ 1 + \delta_{x} F_{k}^{x}(t) = \frac{M_{t}^{u_{k-1}^{x}}}{M_{t}^{u_{k}^{x}}} \quad\text{and }\quad 1 + \delta_{x} L_{k}^{x}(t) = \frac{M_{t}^{v_{k-1}^{x}}}{M_{t}^{u_{k}^{x}}}, $$ for all \(t\in [0,T_{k}^{x}]\), \(k\in \mathcal {K}^{x}\) and \(x\in \mathcal {X}\). The definition of multiple curve affine LIBOR models implies that the dynamics of OIS forward rates and forward LIBOR rates, more precisely of \(1+\delta _{x}F_{k}^{x}\) and \(1+\delta _{x}L_{k}^{x}\), exhibit an exponential-affine dependence in the driving process X; see (13) and (16). Glau et al. (2016) recently showed that models that exhibit this exponential-affine dependence are the only ones that produce structure preserving LIBOR models; cf. Proposition 3.11 therein. The denominators in (16) are the same in both cases, since both rates have to be -martingales by definition. On the other hand, different sequences \((u_{l})_{l\in \mathcal {K}}\) and \(\left (v_{k}^{x}\right)_{k\in \mathcal {K}^{x}}\) are used in the numerators in (16) producing different dynamics for OIS and LIBOR rates. These sequences are used to fit the multiple curve affine LIBOR model to a given initial term structure of OIS and LIBOR rates. In particular, the subsequent propositions show that by fitting the model to the initial term structure we automatically obtain sequences \((u_{l})_{l\in \mathcal {K}}\) and \((v_{k}^{x})_{k\in \mathcal {K}^{x}}\) that satisfy (14) and (15), respectively; see also Grbac et al. (2015, Rem. 4.4 and 4.5) for further comments on these sequences. The following quantity measures the ability of a multiple curve affine LIBOR model to fit a given initial term structure In several models commonly used in mathematical finance, such as the Cox–Ingersoll–Ross model and Ornstein–Uhlenbeck processes driven by subordinators, this quantity equals infinity. The following propositions show that the affine LIBOR models are well-defined and can fit any initial term structure under mild conditions. Consider the time structure \(\mathcal {T}\), let B(0,T l ), \(l\in \mathcal {K}\), be the initial term structure of OIS discount factors and assume that $$ B(0,T_{1})\geq\cdots\geq B(0,T_{N})>0. $$ Then, the following hold: If γ X >B(0,T1)/B(0,T N ), there exists a sequence \((u_{l})_{l\in \mathcal {K}}\) in \(\mathcal {I}_{T}\cap \mathbb {R}_{\geqslant 0}^{d}\) satisfying (14) such that $$ M_{0}^{u_{l}} = \frac{B(0,T_{l})}{B(0,T_{N})}\quad\text{for all }l\in\mathcal{K}. $$ In particular, if γ X =∞, then the multiple curve affine LIBOR model can fit any initial term structure of OIS rates. If X is one-dimensional, the sequence \((u_{l})_{l\in \mathcal {K}}\) is unique. If all initial OIS rates are positive, the sequence \((u_{l})_{l\in \mathcal {K}}\) is strictly decreasing. See Proposition 6.1 in Keller-Ressel et al. (2013). □ Consider the setting of the previous proposition, fix \(x\in \mathcal {X}\) and the corresponding tenor structure \(\mathcal {T}^{x}\). 
Let \(L_{k}^{x}(0)\), \(k\in \mathcal {K}^{x}\), be the initial term structure of non-negative forward LIBOR rates and assume that for every \(k\in \mathcal {K}^{x}\) $$ L_{k}^{x}(0) \geq \frac1{\delta_{x}}\left(\frac{B\left(0,T_{k-1}^{x}\right)}{B\left(0,T_{k}^{x}\right)}-1\right) = F_{k}^{x}(0). $$ If \(\gamma _{X}>(1+\delta _{x}L_{k}^{x}(0))B(0,T_{k}^{x})/B(0,T_{N})\) for all \(k\in \mathcal {K}^{x}\), then there exists a sequence \((v_{k}^{x})_{k\in \mathcal {K}^{x}}\) in \({\mathcal {I}}_{T}\cap {\mathbb {R}_{\geqslant 0}}^{d}\) satisfying (15) such that $$ M_{0}^{v_{k}^{x}} = \left(1+\delta_{x}L_{k+1}^{x}(0)\right) M_{0}^{u_{k+1}^{x}}, \quad \text{for all}~ k\in\mathcal{K}^{x}\setminus\{N^{x}\}. $$ In particular, if γ X =∞, then the multiple curve affine LIBOR model can fit any initial term structure of forward LIBOR rates. If X is one-dimensional, the sequence \((v_{k}^{x})_{k\in \mathcal {K}^{x}}\) is unique. If all initial LIBOR-OIS spreads are positive (i.e., (19) becomes strict), then \(v_{k}^{x}>u_{k}^{x}\), for all \(k\in \mathcal {K}^{x}\setminus \{N^{x}\}\). See Proposition 4.2 in Grbac et al. (2015). □ The proofs of these propositions are constructive and provide an easy algorithm for fitting an affine LIBOR model to a given initial term structure of OIS and LIBOR rates. However, for d>1 the sequences u and vx are not unique, hence questions about optimality arise; see the discussion in Subsections "On the choice of the interpolating function" and "Discussion". In the proof of Proposition 1, the sequence \((u_{l})_{l\in \mathcal {K}}\) is chosen along a straight line in \(\mathcal {I}_{T}\cap \mathbb {R}^{d}_{\geqslant 0}\) from some \(u\in \mathcal {I}_{T}\) to 0, such that u satisfies However, any other continuous path from another u′ to 0, that satisfies (20) and is componentwise decreasing, would have worked as well. The next proposition shows that multiple curve affine LIBOR models are analytically tractable, in the sense that the affine structure is preserved under any forward measure. The underlying process X is a time-inhomogeneous affine process under the measure , for every \(x\in \mathcal {X}\) and \(k\in \mathcal {K}^{x}\). The moment generating function is provided by for every w such that \(w+\psi _{T_{N}-t}(u_{k}^{x})\in \mathcal {I}_{T}\), where $$\begin{array}{@{}rcl@{}} \phi_{t}^{x,k}(w) & = \phi_{t}\left(\psi_{T_{N}-t}(u_{k}^{x})+w\right)-\phi_{t}\left(\psi_{T_{N}-t}(u_{k}^{x})\right),\\ \psi_{t}^{x,k}(w) & = \psi_{t}\left(\psi_{T_{N}-t}(u_{k}^{x})+w\right)-\psi_{t}\left(\psi_{T_{N}-t}(u_{k}^{x})\right). \end{array} $$ The multiple curve affine LIBOR models defined above and satisfying the prerequisites of Propositions 1 and 2 are arbitrage-free discrete tenor models, in the sense that \(F_{k}^{x}\) and \(L_{k}^{x}\) are -martingales for every \(k\in \mathcal {K}^{x}\), \(x\in \mathcal {X}\), while the interest rates and the spread are positive, i.e., \(F_{k}^{x}(t)\ge 0\) and \(S_{k}^{x}(t)=L_{k}^{x}(t)-F_{k}^{x}(t)\ge 0\) for every \(t\in [0,T_{k-1}^{x}], k\in \mathcal {K}^{x}\), \(x\in \mathcal {X}\); cf. Proposition 4.3 in Grbac et al. (2015). The class of affine LIBOR models with multiple curves can be extended to accommodate negative interest rates alongside positive spreads; see Grbac et al. (2015, §4.1) for the details. We could use time-dependent parameters, i.e., time-inhomogeneous affine processes, in the construction of affine LIBOR models, in particular since the dynamics of X are time-dependent under forward measures; see Proposition 3. 
We use affine processes instead, in order to ease the presentation of the model and its properties, and to be consistent with the relevant literature (cf. Keller-Ressel et al. 2013 and Grbac et al. 2015). Continuous tenor extension of affine LIBOR models Discrete to continuous tenor This section is devoted to the extension of the affine LIBOR models from a discrete to a continuous tenor structure, and the derivation of the dynamics of the corresponding instantaneous forward rate and short rate. The main tool is an interpolating function , which is a function defined on [0,T N ] that matches u l at each tenor date T l . This subsection follows and extends Keller-Ressel (2009). An interpolating function for the multiple curve affine LIBOR model \((X,\mathcal {X},T_{N},u,v)\) is a continuous, componentwise decreasing function with for all t∈[0,T N ] and bounded right-hand derivatives, such that for all \(T_{l}\in \mathcal {T}\). Since is a mapping from [0,T N ], it makes sense to define a element. This can be chosen such that which is consistent with Proposition 1. The interpolating function allows deriving of an explicit expression for the dynamics of zero coupon bond prices in the multiple curve affine LIBOR model. In particular, they belong to the class of affine term structure models; see e.g., Björk (2009, §24.3). Let be an interpolating function for the multiple curve affine LIBOR model \((X,\mathcal {X},T_{N},u,v)\) and define the (OIS zero coupon) bond price B(t,T) by for 0≤t≤T≤T N , where is the martingale defined by (13). Then, bond prices satisfy Using the definition of the OIS forward rate, (16), and the positivity of bond prices, we get in the discrete tenor case, $$B(T_{k},T_{i}) = \prod\limits_{l=k}^{i-1} \frac{B\left(T_{k},T_{l+1}\right)}{B(T_{k},T_{l})} = \prod\limits_{l=k}^{i-1} \frac{M_{T_{k}}^{u_{l+1}}}{M_{T_{k}}^{u_{l}}} = \frac{M_{T_{k}}^{u_{i}}}{M_{T_{k}}^{u_{k}}} $$ for every \(T_{k},T_{i}\in \mathcal {T}\) such that T k ≤T i ≤T N . Similarly in the continuous tenor case, using (21) we get for 0≤t≤T≤T N that where ⌊t⌋ is such that T⌊t⌋ is the largest element in the time structure \(\mathcal {T}\) less than or equal to t. Hence, since depends exponentially-affine on X t , we arrive at (22)–(23). □ Next, we will show that the extension of an affine LIBOR model from a discrete to a continuous tenor is an arbitrage-free term structure model. Following Musiela and Rutkowski (1997, Def. 2.3), we say that a family of bond prices satisfies a no-arbitrage condition if there exists a measure such that is a -local martingale and for any Theorem 1 Let be an interpolating function for the multiple curve affine LIBOR model \((X,\mathcal {X},T_{N},u,v)\). Then, is a continuous tenor extension of the affine LIBOR model, i.e., an arbitrage-free model for all maturities , such that for all maturities \(T\in \mathcal {T}\) the bond prices coincide with those of the (discrete tenor) affine LIBOR model. The definition of the interpolating function yields immediately that bond prices in the continuous tenor extension coincide with those of the discrete tenor affine LIBOR model for all maturities. According to Musiela and Rutkowski (1997, §2.3), in order to show that the model is arbitrage-free it suffices to verify the following conditions on the family of bond prices: is a strictly positive special semimartingale and the left-hand limit process is also strictly positive, for every The bond price quotients are -martingales. B(t,S)≤B(t,U) for all 0≤t≤S≤U≤T N . 
The second condition follows immediately from (21) and the construction of Mu as a -martingale. In order to check the first and the third conditions, we shall use the representation for the bond prices from Lemma 2. Indeed, the last condition follows directly from representation (22)–(23), using the monotonicity of the function and the order preserving property of ϕ and ψ; cf. Lemma 1. Moreover, the continuity of ϕ and ψ together with (22) imply that which ensures the positivity of Finally, is a smooth function of X hence it is also a semimartingale, which is special if its associated jump process is absolutely bounded; cf. Jacod and Shiryaev (2003, Lem. I.4.24). The processes X and X− are non-negative a.s. and the same is true for ΔX, since the compensator of the jump measure of X is entirely supported on the positive half-space; cf. Duffie et al. (2003, Def. 2.6). Using again Lemma 1 (4) and the monotonicity of we get that and take values in the negative half space. Thus, we can estimate the jump process as follows: Having bond prices for all maturities at hand, we can now calculate the dynamics of the instantaneous forward rate with maturity prevailing at time t and of the short rate r t prevailing at time t. These quantities are commonly defined as Together with the requirement that we get that the former is equivalent to Let be an interpolating function and consider the continuous tenor extension of the affine LIBOR model \({(X,\mathcal {X},T_{N},u,v)}\). Then, the instantaneous forward rate and the short rate are provided by for all Here, a∘b denotes the componentwise multiplication of two vectors a and b having the same dimension. We know from Lemma 2 that bond prices are log-affine functions of X, in particular they are provided by (22)–(23). The result now follows by taking the right-hand derivative of w.r.t. , which exists by Definition 5 and Lemma 1(6). Note also that and are positive, for all □ Moreover, the continuously compounded bank account B⋆ is defined as usual: $$ B^{\star} = \exp\left(\int_{0}^{\cdot} r_{s}\mathrm{d} s\right), $$ while the associated spot measure , under which bond prices are provided by is calculated next. Let be an interpolating function and consider the continuous tenor extension of the affine LIBOR model \({(X,\mathcal {X},T_{N},u,v)}\). Then, the spot measure is determined by the density process where and . The spot measure and the terminal forward measure are related via cf. Musiela and Rutkowski (2005, §13.2.2). Then, the representation above follows easily from (28), (22)–(23), Lemma 1(1), and Remark 5, using the fact that hence □ The next result resembles Proposition 3 and shows that the driving process X remains an affine process under the spot measure . In other words, the multiple curve affine LIBOR model remains analytically tractable under the spot measure as well. Let be an interpolating function and consider the continuous tenor extension of the affine LIBOR model \({(X,\mathcal {X},T_{N},u,v)}\). Then the underlying process X is a time-inhomogeneous affine process under the spot measure . In particular, X is strongly regular affine and the functional characteristics under are provided by $$F^{\star}\left(t,w\right)=F\left(w+Q_{t}\right)-F\left(Q_{t}\right) \quad \text{and} \quad R^{\star}\left(t,w\right)=R\left(w+Q_{t}\right)-R\left(Q_{t}\right), $$ for every w such that \(w+Q_{t}\in \mathcal {I}_{T}\). We will first show that the moment generating function of X has an exponential-affine form under . 
Starting from the moment generating function of X under , and using the conditional density process in Lemma 4 and the dynamics of the short rate process in (26)–(27), we arrive at Theorem 4.10 in Keller-Ressel (2008) provides an elegant way to calculate the functional characteristics of a time integrated affine process. This result is proved for \(Y_{\cdot }=\int _{0}^{\cdot } X_{u} \mathrm {d} u\) and is extended to \(\widetilde {Y}_{\cdot }=\big (\int _{0}^{\cdot } \theta _{u}^{i}X_{u}^{i} \mathrm {d} u\big)_{1\le i\le d}\) for a deterministic, bounded and positive θ in Theorem 3 of the appendix. Then, we have that the functional characteristics of the joint process \(\left (X_{t},\widetilde {Y}_{t}\right)\) are provided by $$\widetilde{F}(t,w_{x},w_{y}) = F(w_{x}) \quad\text{and}\quad \widetilde{R}(t,w_{x},w_{y}) = \left(\begin{array}{c} R\left(w_{x}\right)+\theta_{t} \circ w_{y}\\0 \end{array}\right). $$ The definition of the interpolating function together with Lemma 1 (6) yield that q in (27) is bounded. Hence, applying Theorem 3 yields that B in (29) takes the form $$B = \exp\left(\widetilde{\phi}_{t-s}(w+Q_{t},\mathbf1) + \left\langle \widetilde{\psi}_{t-s}(w+Q_{t},\mathbf1), (X_{s},\widetilde{Y}_{s}) \right\rangle \right), $$ where \(\widetilde {\phi }\) and \(\widetilde {\psi }\) are the solutions of the generalized Riccati equations defined by \(\widetilde {F}\) and \(\widetilde {R}\); cf. (3). The form of \(\widetilde {R}\) implies that the components of \(\widetilde {\psi }\) corresponding to \(\widetilde {Y}\) satisfy \(\widetilde {\psi }_{t-s}(w_{x},w_{y})_{y}=w_{y}\). Hence, we get from (29) that Now, conditioning on X s =x and taking the right-hand derivatives with respect to s at t=s, we arrive at the generator of X under : $$ \begin{aligned} \mathcal{A}_{t} \mathrm{e}^{\left\langle w,x\right\rangle} = \left(F(w+Q_{t})-F(Q_{t}) + \langle R(w+Q_{t})-R(Q_{t}),x\rangle \right)\mathrm{e}^{\langle w,x\rangle} ; \end{aligned} $$ compare with (7). The semigroup of the affine process X under is weakly regular in the sense of Filipović (2005, Def. 2.3), since the process X is stochastically continuous under and the generator exists and is continuous at w=0 for all (t,x)∈[0,T]×D. Moreover, X is strongly regular affine under since the weakly admissible parameters (α⋆(t),b⋆(t),β⋆(t),m⋆(t),μ⋆(t)) implied by (30) are continuous transformations of (α,b,β,m,μ). □ Using the last proposition, we get that the conditional moment generating function of X under is given by where \(\phi _{s,t}^{\star }\) and \(\psi _{s,t}^{\star }\) are the solutions of the generalized Riccati equations with functional characteristics F⋆ and R⋆; cf. (8). Since both the instantaneous forward rate and the short rate are time-dependent affine transformations of the driving process X, they will inherit many (distributional) properties from X. In fact, once we have computed the characteristics of the driving process X under the spot measure , it is easy to see that also the short rate r has time-inhomogeneous characteristics, that are affine w.r.t. X. Indeed, from (26) and (31) we get On the choice of the interpolating function The requirements on the interpolating function are rather weak, such that even a linear interpolation between the u k 's corresponding to the maturities T k , \(k\in \mathcal K\), can be used. However, looking at Eqs. 
(26)–(27) for the dynamics of the short rate process r, we can immediately observe that jumps will occur at fixed times, the maturities T k , if is not continuously differentiable. A more sophisticated, but still arbitrary, choice for an interpolating function are cubic splines, i.e., piecewise polynomials of degree three, which are continuously differentiable and thus do not lead to deterministic discontinuities; see Fig. 1 for an illustration. Twenty sample paths of the short rate when X is the 1D CIR process, together with the mean as a function of time (computed over 105 paths). Left panel: using a cubic spline interpolation of (u k ) to obtain . Right panel: using a linear interpolation The next corollary is an immediate consequence of Lemma 3 and the fact that X is stochastically continuous. Corollary 1 Let be a continuously differentiable interpolating function, i.e., , and consider the continuous tenor extension of the affine LIBOR model \({(X,\mathcal {X},T_{N},u,v)}\). Then the short rate process r is stochastically continuous. However, even when the interpolating function is continuously differentiable there can be sources of undesirable behavior of the short rate inherited from the sequence (u k ) itself (which is not unique unless d=1; cf. Remark 2). Consider, for example, the following "diagonal" structure for (u k ), which is similar to the one employed by Grbac et al. (2015) to model independence between rates of different maturities: with \(\bar {u}_{i}\in \mathbb {R}_{\geqslant 0}\), for 1≤i≤N. The only paths that can be used to interpolate in this case are the ones connecting the elements u k and uk+1 of the sequence (u k ) via straight lines, otherwise the interpolating function will not be component-wise decreasing. Hence, any interpolating function maps onto a non-smooth manifold. Then, requiring that the interpolating function is continuously differentiability (in time) will lead to a short rate that drops to zero at every T k , \(k\in \mathcal {K}\), since the derivative of the interpolating function will equal zero at each T k ; see again (26) and (27), and the illustration in Fig. 2. Twenty sample paths of the short rate together with the 2.5/97.5 percentiles induced by a diagonal structure and a continuously differentiable interpolating function We would like in the sequel to provide conditions such that the short rate resulting from a continuous tenor extension of an affine LIBOR model exhibits "reasonable" behavior, in the sense that it neither jumps at fixed times, nor drops to zero at each maturity date. Moreover, we would like to identify a method for choosing an interpolating function that removes the arbitrariness from this choice. A condition for the former is that the sequence \((u_{k})_{k\in \mathcal {K}}\) lies on a smooth manifold. Regarding the latter, we could require that a continuum of bond prices are fitted as well. These together lead to a uniquely defined, continuously differentiable interpolating function. Let \({(X,\mathcal {X},T_{N},u,v)}\) be an affine LIBOR model and assume that the sequence \((u_{k})_{k\in \mathcal {K}}\) admits an interpolating function \(\hat {{U}}\) that maps onto a smooth manifold Moreover, let \(\tilde {f}(0,\cdot)\colon [0,T_{N}]\rightarrow \mathbb {R}_{\geqslant 0}\) be an initial forward curve—belonging, e.g., to the Nelson–Siegel or Svensson family—that is consistent with the initial bond prices, i.e., $$ B(0,T_{k})=\exp\left(-\int_{0}^{T_{k}} \tilde f(0,s)\mathrm{d} s \right). 
$$ Then, we can find an interpolating function such that the continuous-tenor extended affine LIBOR model fits the given initial forward curve \(\tilde f\). In order to achieve this, the interpolating function should satisfy the following: for all This equation follows directly from the requirement that (34) holds for all together with (22)–(23) and Remark 5. Moreover, the dynamics of the instantaneous forward rate are provided by (26)–(27) for all T∈[0,T N ], and satisfy the initial condition The curve fitting problem in (35) can be solved analogously to the problem of fitting the sequence (u k ) to an initial term structure of bond prices; compare with Proposition 6.1 in Keller-Ressel et al. (2013) and the corresponding proof. The resulting interpolating function is then differentiable with respect to time, as the following result shows. Let \({(X,\mathcal {X},T_{N},u,v)}\) be a multiple curve affine LIBOR model, assume that \(\tilde f(0,\cdot):[0,T_{N}]\to \mathbb {R}_{\geqslant 0}\) is continuous and that \((u_{k})_{k\in \mathcal {K}}\) allows for an interpolating function that maps onto a C1-manifold Then, there exists a unique continuously differentiable interpolating function satisfying (35). This statement is an easy consequence of the implicit function theorem, where the differentiability of ϕ and ψ in space and time as well as their order preserving property (cf. Lemma 1) are used. Indeed, the function $$G{\left(t,u\right)} = \phi_{T_{N}}{\left(u\right)} + {\left\langle \psi_{T_{N}}{\left(u\right)}, X_{0} \right\rangle} - \int^{T_{N}}_{t}\tilde{f}{\left(0, s\right)}\mathrm{d} s $$ is continuously differentiable in t∈[0,T N ] and \(u\in \mathcal {I}_{T}\). Let be an atlas for the C1-manifold , where is a finite index set, is an open covering of and \(g_{\alpha }:U_{\alpha }\rightarrow I_{\alpha }\subseteq \mathbb {R}\) is a C1-homeomorphism. Define \(G^{\star }\colon [0,T_{N}]\times I_{\alpha }\rightarrow {\mathbb {R}}\) by \({\left (t,x\right)}\mapsto G{\left (t,g^{-1}_{\alpha {\left (x\right)}}\right)}\). By the (strict) order preserving property of ϕ and ψ we know that the partial derivative \(\frac {\partial }{\partial x}G^{\star {\left (t,x\right)}}\) is not zero, hence by a compactness argument there exists a unique continuously differentiable function \(x\colon [0,T_{N}]\rightarrow \mathbb {R}\) such that G⋆(t,x(t))=0 for all t∈[0,T N ]. The interpolating function is then given by □ Figure 1 reveals another interesting behavior of the short rate implied by an affine LIBOR model. In particular, there exists a lower bound for the short rate that is greater than zero. Indeed, since the state space of our driving affine process is \(\mathbb {R}_{\geqslant 0}^{d}\), we have that r t ≥p t , which is greater than zero as ϕ is strictly order preserving and u is decreasing. A similar phenomenon was already observed in the discrete tenor model for the LIBOR rate, compare with Keller-Ressel et al. (2013, Rem. 6.4). Computation of XVA in affine LIBOR models The quoted price of a derivative product in pre-crisis markets was equal to its discounted expected payoff (under a martingale measure), since counterparties were considered default-free, there was abundance of liquidity in the markets, and other frictions were also negligible. 
In post-crisis markets, however, these assumptions have been challenged; in particular, counterparty credit risk has emerged as the natural form of default risk, there is shortage of liquidity in financial markets, while other frictions have also gained importance. These facts have thus to be factored into the quoted price. One way to do that, is to compute first the so-called "clean" price of the derivative, which equals its discounted expected payoff (under a martingale measure), and then add to it several value adjustments, collectively abbreviated as XVA, that reflect counterparty credit risk, liquidity costs, etc. We refer to Brigo et al. (2013), Crépey (2015a,b), Crépey and Bielecki (2014), and Bichuch et al. (2016) among others for more details on XVA. Clean valuation This section reviews basis swaps and provides formulas for their clean price in affine LIBOR models with multiple curves. The clean valuation of caps, swaptions, and basis swaptions in these models is extensively studied in Grbac et al. (2015). The typical example of an interest rate swap is where a floating rate is exchanged for a fixed rate; see, e.g., Musiela and Rutkowski (2005, §9.4). The appearance of significant spreads between rates of different tenors has given rise to a new kind of interest rate swap, called basis swap, where two streams of floating payments linked to underlying rates of different tenors are exchanged. As an example, in a 3M-6M basis swap linked to the LIBOR, the 3-month LIBOR is paid quarterly and the 6-month LIBOR is received semiannually. Let \(\mathcal {T}_{p_{1}q_{1}}^{x_{1}} = \left \{T_{p_{1}}^{x_{1}},\dots,T_{q_{1}}^{x_{1}}\right \}\) and \(\mathcal {T}_{p_{2}q_{2}}^{x_{2}} = \left \{T_{p_{2}}^{x_{2}},\dots,T_{q_{2}}^{x_{2}}\right \}\) denote two tenor structures, where \(T_{p_{1}}^{x_{1}}=T_{p_{2}}^{x_{2}}\), \(T_{q_{1}}^{x_{1}}=T_{q_{2}}^{x_{2}}\), and \(\mathcal {T}_{p_{2}q_{2}}^{x_{2}}\subset \mathcal {T}_{p_{1}q_{1}}^{x_{1}}\). Consider a basis swap that is initiated at \(T_{p_{1}}^{x_{1}}=T_{p_{2}}^{x_{2}}\), with the first payments due at \(T_{p_{1}+1}^{x_{1}}\) and \(T_{p_{2}+1}^{x_{2}}\), respectively. In order to reflect the possible discrepancy between the floating rates at initiation, the interest rate \(L\left (T_{i-1}^{x_{1}},T_{i}^{x_{1}}\right)\) corresponding to the shorter tenor length x1 is replaced by \(L\left (T_{i-1}^{x_{1}},T_{i}^{x_{1}}\right)+S\) for a fixed S, which is called the basis swap spread. The time-r value of a basis swap with notional amount normalized to 1, for \(0\le r\le T_{p_{1}}^{x_{1}}\), is given by The fair basis swap spread \(S_{r}\left (\mathcal {T}_{p_{1}q_{1}}^{x_{1}},\mathcal {T}_{p_{2}q_{2}}^{x_{2}}\right)\) is then computed so that the value of the swap at inception is zero, i.e., \(\mathbb {BS}_{r}\left (S,\mathcal {T}_{p_{1}q_{1}}^{x_{1}},\mathcal {T}_{p_{2}q_{2}}^{x_{2}}\right)=0\) for \(0\leq r\leq T_{p_{1}}^{x_{1}}\). Hence, the fair spread is given by $$\begin{aligned} \!S_{r}\!\left(\mathcal{T}_{p_{1}q_{1}}^{x_{1}},\mathcal{T}_{p_{2}q_{2}}^{x_{2}}\right) = \frac{\sum_{i=p_{2+1}}^{q_{2}}\delta_{x_{2}}B\left(r,T_{i}^{x_{2}}\right)L_{i}^{x_{2}} (r)-\sum_{i=p_{1}+1}^{q_{1}}\delta_{x_{1}}B\left(r,T_{i}^{x_{1} }\right)L_{i}^{x_{1}}(r)}{\sum_{i=p_{1}+1}^{q_{1}}\delta_{x_{1}} B(r,T_{i}^{x_{1}})}. 
\end{aligned} $$ Moreover, the time-t value of the basis swap, for \(t\in \left [T_{p_{1}}^{x_{1}},T_{q_{2}}^{x_{2}}\right ]\), using (16) and (24), takes the form: where \(S_{r}=S_{r}\left (\mathcal {T}_{p_{1}q_{1}}^{x_{1}},\mathcal {T}_{p_{2}q_{2}}^{x_{2}}\right)\), for \(r\in \left [0,T_{p_{1}}^{x_{1}}\right ]\) being the date of inception, while \(\lceil t \rceil _{i} = \min \left \{ k\in \mathcal {K}^{x_{i}}: t<T_{k}^{x_{i}} \right \}\). Basis swaps are post-crisis financial products, which can only be priced in models accounting for the multiple curve nature of interest rates. In a single curve model, the price of a basis swap is zero; cf. Crépey et al. (2012, p. 181) XVA equations The pricing formulas in the previous subsection reflect valuation in an environment without counterparty credit risk, funding constraints, and other market frictions. In order to include the latter into the pricing framework, several value adjustments have been introduced: credit and debt valuation adjustment (CVA and DVA), funding valuation adjustment (FVA), as well as replacement cost (RC), among others. However, we disregard the capital valuation adjustment (KVA), that was introduced due to increasing capital requirements and the cost associated to them, and the margin valuation adjustment (MVA), arising when an initial margin is required, by assuming perfect hedging by the bank and the absence of initial margins. The various valuation adjustments are typically abbreviated by XVA, while we will refer to their sum as the valuation adjustment (VA), i.e., $$\text{VA} := \text{CVA}+\text{DVA}+\text{FVA}+\text{RC}. $$ Our approach to the computation of VA closely follows the work of Crépey (2015a,b), while our exposition and numerical examples are based on Crépey et al. (2013), and Crépey et al. (2015, §5.1). We consider two counterparties, called a bank and an investor in the sequel, that are both defaultable, and denote by τ b the default time of the bank, by τ i the default time of the investor, while we set τ=τ b ∧τ i ∧T. The default intensities of τ b ,τ i , and τ are denoted γ b ,γ i , and γ, respectively. We also consider the "full model" filtration \(\mathbb {G}\), which is given by \(\mathbb {F}\) enlarged by the natural filtrations of the default times τ b and τ i , and assume that the immersion hypothesis holds, that is, every \(\mathbb {F}\)-martingale stopped at τ is a \(\mathbb {G}\)-martingale. The VA can be viewed as the price of a dividend paying option on the debt of the bank to the investor, paying off at the first-to-default time τ. Here, we have implicitly adopted the point of view of the bank. The VA from the point of view of the investor is similar, but not identical, due to, e.g., different funding conditions. The effective conclusion of Crépey (2015b) is that the VA in the setting described above can be computed in a "pre-default" framework, where the default risk of the counterparties appears only through the default intensities; see, in particular, Section 3 therein. More specifically, the VA Θ is the solution of the following BSDE under a martingale measure where r denotes the short rate process, P the clean price process, and g the VA coefficient. The overall price of the contract for the bank, in other words, the cost of the hedge incorporating the various risks, is then given by the difference between the clean price and the VA: $$\Pi_{t} = P_{t} - \Theta_{t}, \quad t\in[0,T]. $$ The VA coefficient g has the following form: $$\begin{aligned} & \!\!\!\!\!\!\!\!\!\!\!\!\! 
g_{t}(r_{t}, P_{t}, \Theta_{t}) + r_{t}\Theta_{t} \ = \\ & -\gamma_{t}^{i}\left(1-\rho^{i}\right)\left(Q_{t}-\Gamma_{t}\right)^{-} & \text{CVA}\\ & +\gamma_{t}^{b}\left(1-\rho^{b}\right)\left(Q_{t}-\Gamma_{t}\right)^{+} & \text{DVA}\\ & + b_{t}\Gamma_{t}^{+} - \bar{b}_{t}\Gamma_{t}^{-} + \lambda_{t}\left(P_{t}-\Theta_{t}-\Gamma_{t}\right)^{+} - \tilde\lambda_{t} \left(P_{t}-\Theta_{t}-\Gamma_{t}\right)^{-} &\text{FVA}\\ & +\gamma_{t}\left(P_{t}-\Theta_{t}-Q_{t}\right), &\text{RC} \end{aligned} $$ where \(\tilde \lambda _{t} = \bar {\lambda }_{t}-\gamma _{t}^{b}(1-\mathfrak {r})\), while each line on the right side corresponds to one of the four components of the VA. The parameters in the above equation have the following financial interpretation: \(\gamma _{t}^{i},\,\gamma _{t}^{b}\), and γ t are the default intensities of the investor, the bank, and the first to default intensity, respectively. ρi, ρb are the recovery rates of the investor and the bank to each other, and \(\mathfrak {r}\) is the recovery rate of the bank to its unsecured funder (which is a third party that jumps in when the banks' internal sources of funding have been depleted; this funder is assumed to be risk free). Q t is the value of the contract according to some valuation scheme specified in the credit support annex (CSA), which is a common part in an over-the-counter contract. \(\Gamma _{t}=\Gamma _{t}^{+}-\Gamma _{t}^{-}\) is the value of the collateral posted by the bank to the investor. \(b_{t},\,\bar {b}_{t}\) and \(\lambda _{t},\,\bar {\lambda }_{t}\) are the spreads over the risk free rate r t corresponding to the remuneration of collateral and external lending and borrowing (from the unsecured funder). The value of the contract Q and of the collateral Γ, as well as the funding coefficients b and \(\bar {b}\) are specified in the CSA of the contract, which is used to mitigate counterparty risk. Different CSA specifications will result in different behavior of the VA; see Crépey et al. (2013, Sec. 3) and the next subsection for more details. The immersion hypothesis implies weak or indirect dependence between the contract and the default times of the involved parties. Therefore, not every contract can be priced within the pre-default VA framework. As interest rate contracts exhibit weak dependence on the default times, this approach is appropriate for our setting; see also Crépey (2015b, Rem. 2.3). XVA computation in affine LIBOR models We are now interested in computing the value adjustments for interest rate derivatives, and focus on basis swaps as a prime example of a post-crisis product. The OIS forward rate and the forward LIBOR rate for each tenor are modeled according to the affine LIBOR models with multiple curves, and the model is calibrated to caplet data; see Grbac et al. (2015, §8) for details on the calibration of affine LIBOR models. An interpolating function is subsequently chosen and the dynamics of the short rate process are derived. Afterwards, the computation of the value adjustments is a straightforward application of the VA BSDE in (37). This methodology allows us to compute option prices and value adjustments consistently since we only have to calibrate the discrete-tenor affine LIBOR model, while the dynamics of the short rate process, which is essential in the computation of the VA, follows from the interpolation. In particular, we do not need to introduce and calibrate (or estimate) an "exogenous" model for the short rate, as is done in other approaches. 
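For concreteness, the VA coefficient displayed above can be transcribed directly into code. This is only a sketch: the function and argument names are ours, and the numerical inputs are illustrative, using the recovery rates of the first CSA specification and the default intensities and spreads quoted later in the numerical section, with made-up values for r, P, Θ and Q.

import numpy as np

def pos(x):
    return np.maximum(x, 0.0)          # x^+

def neg(x):
    return np.maximum(-x, 0.0)         # x^-, so that x = x^+ - x^-

def va_coefficient(r, P, Theta, Q, Gamma,
                   gamma_i, gamma_b, gamma,     # default intensities: investor, bank, first-to-default
                   rho_i, rho_b, r_f,           # recovery rates; r_f = recovery to the unsecured funder
                   b, b_bar, lam, lam_bar):     # collateral and external funding spreads
    # Transcription of the displayed formula: g + r*Theta = CVA + DVA + FVA + RC.
    lam_tilde = lam_bar - gamma_b * (1.0 - r_f)
    cva = -gamma_i * (1.0 - rho_i) * neg(Q - Gamma)
    dva = gamma_b * (1.0 - rho_b) * pos(Q - Gamma)
    fva = (b * pos(Gamma) - b_bar * neg(Gamma)
           + lam * pos(P - Theta - Gamma) - lam_tilde * neg(P - Theta - Gamma))
    rc = gamma * (P - Theta - Q)
    return cva + dva + fva + rc - r * Theta

# Illustrative evaluation: Q = P and Gamma = 0 (no collateral), recoveries (0.4, 0.4, 0.4).
g = va_coefficient(r=0.02, P=0.01, Theta=0.0, Q=0.01, Gamma=0.0,
                   gamma_i=0.07, gamma_b=0.05, gamma=0.10,
                   rho_i=0.4, rho_b=0.4, r_f=0.4,
                   b=0.015, b_bar=0.015, lam=0.015, lam_bar=0.045)
print(g)

Each summand corresponds to one line of the displayed formula, so the CVA, DVA, FVA and RC contributions can also be inspected separately.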
The interpolating function thus plays a crucial role in our methodology, since this is the only "free'" ingredient once the affine LIBOR model has been calibrated. At the same time, it introduces an element of model risk, through the different possible choices of interpolating functions. In the sequel, we are going to examine the impact of different interpolating functions on the value adjustments. The data we use for our numerical experiments correspond to the EUR market on 27 May 2013 and were collected from Bloomberg; see also Grbac et al. (2015, §8.4) for more details. The affne LIBOR model with multiple curves was calibrated to caplet data on a 10-year horizon, where the tenor lengths were 3 and 6 months. The driving affine process consists of three independent CIR processes. The sequences (u k ) and (v k ) were constructed so that they lie on a smooth manifold on \(\mathcal {I}_{T} \cap {\mathbb {R}}^{3}_{\geqslant 0}\); see Fig. 3. In particular, u k lies on straight lines for k∈{0,…,k1}∪{k2,…,k3}∪{k4,…,N} and on elliptical segments for k∈{k1+1,…,k2−1}∪{k3+1,…,k4−1}. The sequence u looks as follows: $$\begin{array}{ccccc} u_{N-1} & = & (0 & 0 & \bar{u}_{N-1}) \\ & \vdots & & & \\ u_{k_{4}} & = & (0 & 0 & \bar{u}_{k_{4}}) \\ u_{k_{4}-1} & = & (0 & \tilde{u}_{k_{4}-1} & \bar{u}_{k_{4}-1}) \\ u_{k_{4}-2} & = & (0 & \tilde u_{k_{4}-2} & \bar{u}_{k_{4}-2}) \\ & \vdots & & & \\ \end{array} $$ where \(\bar u_{j}, \tilde u_{j}\in \mathbb {R}_{\geqslant 0}\) and satisfy \(\bar u_{j}\ge \bar u_{j+1}\) and \(\tilde u_{j}\ge \tilde u_{j+1}\) for all relevant \(j\in \mathcal K\); see (14) again. The structure of the sequence vx for each tenor x is analogous. In other words, short-term forward LIBOR rates are driven by all three components of the driving process X, medium term rates by two components, while long-term rates are only driven by the last component of X. Once the manifolds have been constructed, the sequences (u k ) and (v k ) were obtained by fitting the model to OIS and EURIBOR data from the same date. (Note that in this example we have N=40 and we chose k1=9,k2=16,k3=21, and k4=28.) The smooth manifold used for fitting the sequences (u k ) and (v k ) In order to illustrate the effect of different interpolating functions on the value adjustments, we consider three different specifications for the interpolating function U: (IF1): Interpolation by fitting an entire forward curve (see Example 1); (IF2): Linear interpolation between the u k 's; (IF3): Spline interpolation on sectors where all but one component of the vector u k are constant in k, and linear interpolation in between these sectors (i.e., when the u k 's lie on curved segments of the manifold). Let us now turn our attention to the computation of value adjustments. We consider a 3M-6M basis swap on the LIBOR, with inception at t=0 and maturity in 10 years. We follow Crépey et al. 
(2013) and consider five different CSA specifications, provided by $$\begin{array}{lcclccl} (\text{CSA}_{1}): &&&\left(\mathfrak{r},\rho^{b},\rho^{i}\right)=\left(0.4,0.4,0.4\right), & Q=P, && \Gamma=0, \\ (\text{CSA}_{2}): &&&\left(\mathfrak{r},\rho^{b},\rho^{i}\right)=\left(1,0.4,0.4\right), & Q=P, && \Gamma=0, \\ (\text{CSA}_{3}): &&&\left(\mathfrak{r},\rho^{b},\rho^{i}\right)=\left(1,1,0.4\right), & Q=P, && \Gamma=0, \\ (\text{CSA}_{4}): &&&\left(\mathfrak{r},\rho^{b},\rho^{i}\right)=\left(1,1,0.4\right), & Q=\Pi, && \Gamma=0, \\ (\text{CSA}_{5}): &&&\left(\mathfrak{r},\rho^{b},\rho^{i}\right)=\left(1,0.4,0.4\right), & Q=P, && \Gamma=Q=P, \end{array} $$ while the default intensities and spreads equal $$\gamma^{b}=5\%, \ \gamma^{i}=7\%, \ \gamma=10\%, \ b=\bar{b}=\lambda=1.5\%, \quad \text{and} \quad \bar{\lambda}=4.5\%. $$ The first three CSA specifications correspond to a "clean" recovery scheme without collateralization, since the value of the contract Q equals the clean price and there is no collateral posted. The fourth specification corresponds to a "pre-default" recovery scheme without collateralization, while the last one corresponds to a fully collateralized contract. Moreover, the first specification yields a linear BSDE in the VA Θ, which allows to use (forward) Monte Carlo simulations for the computation of the VA. The price P t of the basis swap is provided by (36) for each \(t\in [T_{p_{1}}^{x_{1}},T_{q_{2}}^{x_{2}}]\), and we can observe that P t is a deterministic transformation of X t . Moreover, the short rate r t is a deterministic, affine, transformation of X t ; cf. (26). Therefore, the VA coefficient g t (r t ,P t ,Θ t ) is also a deterministic transformation of X t , and we can define a deterministic function \(\hat g\) such that $$\hat g(t,X_{t},\Theta_{t}) := g_{t}(r_{t},P_{t},\Theta_{t}). $$ In other words, the VA BSDE is Markovian in this case, and the VA is also provided by the solution of a semi-linear PDE. In order to compute the VA for the basis swap numerically, we worked under the spot martingale measure using a space grid consisting of 105 paths and a time grid with n=200 steps of step size h. We applied a backwards regression on the space-time grid, i.e., and approximated the conditional expectation using an m-nearest neighbors estimator with m=3. This choice turned out to be optimal when compared to (forward) Monte Carlo simulations in the case of a linear VA coefficient. The outcome of the numerical experiments is summarized in Figs. 4, 5 and 6. Starting with the top panel in Fig. 4, we observe that there are significant structural differences in the dynamics of the short rate due to the different interpolating functions; this is mostly visible when looking at the averages and the percentile lines. The bottom panel in the same figure displays the price process of the 3M-6M basis swap for the different interpolating functions. As the differences are not as clearly visible as before, we have plotted the absolute differences in prices due to the different interpolating functions in Fig. 5. There we see that notable differences in prices appear when using different interpolating functions (keep in mind, that the notional amount of the swap equals one, thus the deviations in prices are not negligible). As expected, the largest discrepancies between prices stemming from the first vs. second and the first vs. third interpolating functions occur on the curved section of the manifold used to construct (u k ) and (v k ). 
On the contrary, the discrepancies between prices from the second vs. third interpolating functions on the curved section of the manifold are zero, since both functions interpolate linearly in that segment. Twenty sample paths of the short rate (top panels) and the price process of a basis swap (bottom panels) for each interpolating function along with the mean (black line) and 97.5% and 2.5% percentiles computed over 105 realizations Absolute difference between the basis swap price processes for different interpolating functions Twenty sample paths of the VA process for the basis swap, together with the mean and percentiles, for the 5 different CSA specifications (left panels, top to bottom), and the difference between the VA processes for the different interpolating functions (left to right) Figure 6 depicts the sample path of the VA process using the first interpolating function (left panels) for the five different CSA specifications (top to bottom), while the other figures show the differences in the VA due to the different interpolating functions. The differences in the VA are one order of magnitude smaller than the differences in prices, however, the VA itself is an order of magnitude smaller than the basis swap price. Reflecting the situation for the prices, the largest discrepancies between VAs using the first and the other two interpolating functions occur around the curved section of the manifold. However, the discrepancies in the VA in the flat sections of the manifold are more pronounced than the corresponding discrepancies in prices. The reason is that the interpolation affects value adjustments both via the basis swap price and via the short rate used for discounting, and its effect is propagated in different segments through the backward regression. This becomes clear when one looks at the differences between prices and value adjustments stemming from using the second and third interpolating functions; although the difference in prices is flat zero, the difference in value adjustments is far from zero. The numerical examples presented above show that the choice of the interpolating function entails significant model risk. The functions we chose are not especially far apart, in terms of their supremum norm, thus the differences above could become even higher. In fact, the coefficients of the short rate can become arbitrarily large due to the interpolating function. Therefore, both the manifolds on which the sequences u and v lie and the interpolating function have to be selected with caution, as they can fundamentally change the behavior of the model. Time-integration of Affine processes The following result is an extension of Theorem 4.10 in Keller-Ressel (2008). Let \(\theta \colon [0,T] \rightarrow {\mathbb {R}}^{d}_{\geqslant 0}\) be a bounded function and (X t )t∈[0,T] be an affine process on \(\mathbb {R}^{d}_{\geqslant 0}\), with functional characteristics F and R. Then \({\left (X_{t},\int _{0}^{t} \theta _{u} \circ X_{u} \mathrm {d} u\right)}_{t\in [0,T]}\) is a time-inhomogeneous affine process on \({\mathbb {R}}^{d}_{\geqslant 0} \times {\mathbb {R}}^{d}_{\geqslant 0}\) with functional characteristics $$\widetilde{F}{\left(t,u_{X},u_{Y}\right)}=F{\left(u_{X}\right)} \quad {\text and} \quad \widetilde{R}{\left(t,u_{X},u_{Y}\right)}=\left(\begin{array}{c} R\left(u_{X}\right)+\theta_{t}\circ u_{Y}\\0 \end{array}\right). $$ Here, ∘ denotes the componentwise multiplication between vectors. 
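A small numerical check of this statement, written before the argument that follows: take a one-dimensional CIR process with illustrative parameters, let θ be constant for simplicity (the statement allows any bounded, non-negative, time-dependent θ), solve the associated Riccati system dp/ds = F(q), dq/ds = R(q) + θ u_Y with a crude Euler scheme, and compare the resulting exponential-affine expression for E[exp(u_X X_t + u_Y ∫_0^t θ X_s ds)] with a Monte Carlo estimate. All parameter values and function names below are ours, not the paper's.

import numpy as np

# A 1D CIR process dX_t = kappa*(mu - X_t) dt + eta*sqrt(X_t) dW_t (illustrative parameters).
kappa, mu, eta, X0 = 1.2, 0.03, 0.25, 0.02
F = lambda q: kappa * mu * q                        # F(u) = kappa*mu*u
R = lambda q: -kappa * q + 0.5 * eta ** 2 * q ** 2  # R(u) = -kappa*u + eta^2*u^2/2

def ode_transform(t, uX, uY, theta=1.0, n=5000):
    """Euler scheme for dp/ds = F(q), dq/ds = R(q) + theta*uY, p(0)=0, q(0)=uX,
    giving E[exp(uX*X_t + uY*theta*Int_0^t X_s ds)] = exp(p(t) + q(t)*X0)."""
    h = t / n
    p, q = 0.0, uX
    for _ in range(n):
        p, q = p + h * F(q), q + h * (R(q) + theta * uY)
    return np.exp(p + q * X0)

def mc_transform(t, uX, uY, theta=1.0, n=1000, n_paths=100_000, seed=1):
    """Full-truncation Euler simulation of the CIR path and its time integral."""
    rng = np.random.default_rng(seed)
    h = t / n
    X = np.full(n_paths, X0)
    I = np.zeros(n_paths)
    for _ in range(n):
        I += theta * X * h
        X = X + kappa * (mu - X) * h + eta * np.sqrt(np.maximum(X, 0.0) * h) * rng.standard_normal(n_paths)
        X = np.maximum(X, 0.0)
    return np.exp(uX * X + uY * I).mean()

print("Riccati ODEs :", ode_transform(5.0, uX=-0.5, uY=-1.0))
print("Monte Carlo  :", mc_transform(5.0, uX=-0.5, uY=-1.0))

The two printed numbers should agree to roughly two or three decimal places, and the discrepancy shrinks as both grids are refined.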
Let \(n\in \mathbb {N}\), k∈{0,…,n}, and define \(s_{k}=\frac {k}{n}s\) and \(h=\frac {s}{n}\), hence s0,…,s n is an equidistant partition of [0,s] with step size h. Approximating the integral with Riemann sums and using the dominated convergence theorem, we have Using next the tower law of conditional expectations and the affine property of X, A n can be written as follows Iterating this procedure, we arrive at $$A_{n}=\exp\left(p_{n}{\left(u_{X},u_{Y}\right)}+{\left\langle q_{n}{\left(u_{X},u_{Y}\right)}, X_{t} \right\rangle} \right), $$ with p0(u X ,u Y )=0, q0(u X ,u Y )=u X +hu Y ∘θt+s and $$\begin{aligned} p_{k+1}{\left(u_{X},u_{Y}\right)}&=p_{k}{\left(u_{X},u_{Y}\right)}+\phi_{h}{\left(q_{k}{\left(u_{X},u_{Y}\right)}\right)} \\ q_{k+1}{\left(u_{X},u_{Y}\right)}&=\psi_{h}{\left(q_{k}{\left(u_{X},u_{Y}\right)}\right)}+h u_{Y}\circ \theta_{t+s_{n-(k+1)}}. \end{aligned} $$ Using the generalized Riccati Eq. (3), we can expand ϕ and ψ linearly around the origin. Thus, we get $$\begin{aligned} p_{k+1}{\left(u_{X},u_{Y}\right)}&=p_{k}{\left(u_{X},u_{Y}\right)}+h F{\left(q_{k}{\left(u_{X},u_{Y}\right)}\right)}+o{\left(h\right)}, \\ q_{k+1}{\left(u_{X},u_{Y}\right)}&=q_{k}{\left(u_{X},u_{Y}\right)}+h{\left(R{\left(q_{k}\left(u_{X},u_{Y}\right)\right)}+u_{Y}\circ \theta_{s_{n-(k+1)}}\right)}+o(h). \end{aligned} $$ As θ is non-negative and bounded, the second part of the proof of Theorem 4.10 in Keller-Ressel (2008) remains the same. Hence, the recursive scheme above is an Euler-type approximation, starting from the terminal time, to the ODE $$\begin{aligned} \frac{\partial}{\partial s} p{\left(s,t,u_{X},u_{Y}\right)}&=F{\left(q{\left(s,t,u_{X},u_{Y}\right)}\right)}, \\ \frac{\partial}{\partial s} q{\left(s,t,u_{X},u_{Y}\right)}&=R{\left(q{\left(s,t,u_{X},u_{Y}\right)}\right)}+u_{Y}\circ \theta_{s} \end{aligned} $$ with initial conditions p(r,r,u X ,u Y )=0 and q(r,r,u X ,u Y )=u X , for all r≥0. □ Beveridge, C, Joshi, M: Interpolation schemes in the displaced-diffusion LIBOR market model. SIAM. J. Finan. Math. 3, 593–604 (2012). Bichuch, M, Capponi, A, Sturm, S: Arbitrage-free XVA. Math.Finan. (2016). https://arxiv.org/abs/1608.02690. Björk, T: Arbitrage Theory in Continuous Time. 3rd edition. Oxford University Press, Chichester (2009). Brigo, D, Morini, M, Pallavicini, A: Counterparty Credit Risk, Collateral and Funding: with Pricing Cases for all Asset Classes. Wiley (2013). Crépey, S: Bilateral Counterparty risk under funding constraints — Part I: Pricing. Math. Finan. 25, 1–22 (2015a). Crépey, S: Bilateral Counterparty risk under funding constraints — Part II: CVA. Math. Finan. 25, 23–50 (2015b). Crépey, S, Bielecki, TR: Counterparty Risk and Funding: A Tale of two Puzzles. Chapman & Hall/CRC Financial Mathematics Series. CRC Press, Boca Raton (2014). With an introductory dialogue by Damiano Brigo. Crépey, S, Grbac, Z, Nguyen, H-N: A multiple-curve HJM model of interbank risk. Math. Financ. Econ. 6, 155–190 (2012). Crépey, S, Gerboud, R, Grbac, Z, Ngor, N: Counterparty risk and funding: The four wings of the TVA. Int. J. Theor. Appl. Financ. 16(1350006) (2013). Crépey, S, Grbac, Z, Ngor, N, Skovmand, D: A Lévy HJM multiple-curve model with application to CVA computation. Quant. Financ. 15, 401–419 (2015). Cuchiero, C, Fontana, C, Gnoatto, A: Affine multiple yield curve models. Preprint. arXiv:1603.00527 (2016). Duffie, D, Filipović, D, Schachermayer, W: Affine processes and applications in finance. Ann. Appl. Probab. 13, 984–1053 (2003). Filipović, D: Time-inhomogeneous affine processes. 
Stoch. Process. Appl. 115, 639–659 (2005). Glau, K, Grbac, Z, Papapantoleon, A: A unified view of LIBOR models. In: Kallsen, J, Papapantoleon, A (eds.)Advanced Modelling in Mathematical Finance – In Honour of Ernst Eberlein, pp. 423–452. Springer, Cham (2016). Grbac, Z, Runggaldier, WJ: Interest Rate Modeling: Post-Crisis Challenges and Approaches. Springer, Cham (2015). Grbac, Z, Papapantoleon, A, Schoenmakers, J, Skovmand, D: Affine LIBOR models with multiple curves: theory, examples and calibration. SIAM. J. Financ. Math. 6, 984–1025 (2015). Jacod, J, Shiryaev, AN: Limit Theorems for Stochastic Processes. 2nd edition. Springer, Berlin Heidelberg (2003). Keller-Ressel, M: Affine Processes: Theory and Applications to Finance. PhD thesis, TU Vienna (2008). Keller-Ressel, M: Affine LIBOR models with continuous tenor (2009). Unpublished manuscript. Keller-Ressel, M, Papapantoleon, A, Teichmann, J: The affine LIBOR models. Math. Financ. 23, 627–658 (2013). Mercurio, F: Interest rates and the credit crunch: New formulas and market models. Preprint. SSRN/1332205 (2009). Mercurio, F: A LIBOR market model with a stochastic basis. Risk.84–89 (2010). Musiela, M, Rutkowski, M: Continuous-time term structure models: forward measure approach. Financ. Stoch. 1, 261–291 (1997). Musiela, M, Rutkowski, M:Martingale Methods in Financial Modelling. 2nd edition. Springer, Berlin Heidelberg (2005). Papapantoleon, A: Old and new approaches to LIBOR modeling. Stat. Neerlandica. 64, 257–275 (2010). RW acknowledges funding from the Excellence Initiative of the German Research Foundation (DFG) under grant ZUK 64. Financial support from the Europlace Institute of Finance project "Post-crisis models for interest rate markets" is gratefully acknowledged. Department of Mathematics, National Technical University of Athens, Zografou Campus, Athens, 15780, Greece Antonis Papapantoleon Institut für Mathematische Stochastik, TU Dresden, Dresden, 01062, Germany Robert Wardenga Both authors read and approved the final manuscript. Correspondence to Antonis Papapantoleon. Papapantoleon, A., Wardenga, R. Continuous tenor extension of affine LIBOR models with multiple curves and applications to XVA. Probab Uncertain Quant Risk 3, 1 (2018). https://doi.org/10.1186/s41546-017-0025-4 Affine LIBOR models Multiple curves Discrete tenor Continuous tenor XVA Model risk 91G30
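Referring back to the interpolation choices compared at the beginning of the article above (linear versus cubic-spline interpolation of the sequence (u_k)): the following is a minimal numerical sketch, assuming a one-dimensional driving process and an invented decreasing sequence, of why the linear choice produces derivative jumps at the tenor dates T_k (and hence deterministic jumps of the short rate through (26)–(27)) while a cubic spline does not. The tenor grid, the sample values and the names are illustrative, not taken from the paper.

import numpy as np
from scipy.interpolate import CubicSpline, interp1d

# Illustrative tenor grid T_0 < ... < T_N and a non-negative, decreasing sequence (u_k),
# as required when the driving process is one-dimensional (d = 1).
T = np.linspace(0.0, 10.0, 41)          # quarterly tenor dates over 10 years (assumed)
u = 2.0 * np.exp(-0.25 * T)             # stand-in values for u_k, decreasing in k

U_spline = CubicSpline(T, u)            # C^1 (indeed C^2) interpolating function
U_linear = interp1d(T, u)               # continuous, but not C^1 at the tenor dates

# Compare one-sided difference quotients around each interior tenor date T_k:
# their gap is the jump of the time-derivative of the interpolating function.
eps = 1e-5
def derivative_jump(U, Tk):
    right = (U(Tk + eps) - U(Tk)) / eps
    left = (U(Tk) - U(Tk - eps)) / eps
    return abs(float(right) - float(left))

jumps_linear = [derivative_jump(U_linear, Tk) for Tk in T[1:-1]]
jumps_spline = [derivative_jump(U_spline, Tk) for Tk in T[1:-1]]
print("largest derivative jump, linear interpolation:", max(jumps_linear))
print("largest derivative jump, cubic spline        :", max(jumps_spline))

With a sequence this smooth the two interpolations are visually close, but only the spline keeps the induced short-rate dynamics free of deterministic jumps at the T_k.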
By Will Kenton
What is Disposable Income
Disposable income, also known as disposable personal income (DPI), is the amount of money that households have available for spending and saving after income taxes have been accounted for. Disposable personal income is often monitored as one of the many key economic indicators used to gauge the overall state of the economy.
$$ \text{DPI} = \text{Personal Income} - \text{Personal Income Taxes} $$
BREAKING DOWN Disposable Income
Disposable income is an important measure of household financial resources. For example, consider a family with a household income of $100,000 and an effective income tax rate of 25% (as opposed to the marginal tax rate). This household's disposable income would then be $75,000 ($100,000 - $25,000). Economists use DPI as a starting point to gauge households' rates of savings and spending.
Statistical Uses of Disposable Income
Many useful statistical measures and economic indicators derive from disposable income. For example, economists use disposable income as a starting point to calculate metrics such as discretionary income, personal savings rates, marginal propensity to consume (MPC), and marginal propensity to save (MPS).
Disposable income minus all payments for necessities (mortgage, health insurance, food, transportation) equals discretionary income. This portion of disposable income can be spent on what the income earner chooses or, alternatively, it can be saved. Discretionary income is the first to shrink amid a job loss, pay reduction, or economic downturn. As such, businesses that sell discretionary goods tend to suffer the most during recessions and are watched closely by economists for signs of both recession and recovery.
The personal savings rate is the percentage of disposable income that goes into savings for retirement or use at a later date. Marginal propensity to consume represents the percentage of each additional dollar of disposable income that gets spent, while marginal propensity to save denotes the percentage that gets saved. For several months in 2005, the average personal savings rate dipped into negative territory for the first time since 1933. This means that in 2005, Americans were spending all of their disposable income each month and then tapping into savings or debt for further spending.
Disposable Income for Wage Garnishment
The federal government uses a slightly different method to calculate disposable income for wage garnishment purposes. Sometimes, the government garnishes an income earner's wages for payment of back taxes or delinquent child support. It uses disposable income as a starting point to determine how much to seize from the earner's paycheck. As of 2019, the amount garnished may not exceed 25% of a person's disposable income or the amount by which a person's weekly income exceeds 30 times the federal minimum wage, whichever is less. In addition to income taxes, the government subtracts health insurance premiums and involuntary retirement plan contributions from gross income when calculating disposable income for wage garnishment purposes. Returning to the above example, if the family described pays $10,000 per year in health insurance premiums and is required to contribute $5,000 to a retirement plan, its disposable income for wage garnishment purposes shrinks from $75,000 to $60,000.
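The arithmetic described above fits in a few lines of code. The example reuses the household from the text; the function names and the $7.25 federal minimum hourly wage (the rate in effect in 2019) are our additions, so treat the sketch as illustrative rather than as legal guidance.

FEDERAL_MIN_WAGE = 7.25  # assumed 2019 federal minimum hourly wage

def disposable_income(gross_income, income_taxes):
    """DPI = personal income - personal income taxes."""
    return gross_income - income_taxes

def garnishment_disposable_income(gross_income, income_taxes,
                                  health_premiums=0.0, retirement_contrib=0.0):
    """For wage-garnishment purposes, health insurance premiums and involuntary
    retirement-plan contributions are also subtracted from gross income."""
    return gross_income - income_taxes - health_premiums - retirement_contrib

def max_weekly_garnishment(weekly_disposable_income):
    """Lesser of 25% of disposable income or the amount by which weekly income
    exceeds 30 times the federal minimum wage."""
    cap_25 = 0.25 * weekly_disposable_income
    cap_min_wage = max(weekly_disposable_income - 30 * FEDERAL_MIN_WAGE, 0.0)
    return min(cap_25, cap_min_wage)

# The household from the example: $100,000 income, 25% effective tax rate,
# $10,000 in health premiums, $5,000 in required retirement contributions.
dpi = disposable_income(100_000, 25_000)                                   # 75,000
garn_dpi = garnishment_disposable_income(100_000, 25_000, 10_000, 5_000)   # 60,000
print(dpi, garn_dpi, round(max_weekly_garnishment(garn_dpi / 52), 2))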
RUTGERS ALGEBRA SEMINAR - Fall 2022 Wednesdays at 2:00-3:00 PM in H705 A more comprehensive listing of all Math Department seminars is available. Here is a link to the algebra seminars in previous semesters Spring 2023 Seminars (Wednesdays at 2:00 PM in H705) 8 Feb Abi Ali (Rutgers) TBA 15 Mar no seminar ------------------- Spring Break ---------- 4 May Classes end Monday May 1 Spring 2023 classes begin Tuesday January 17 and end Monday May 1; Finals are May 4-10. Fall 2022 Seminars (Wednesdays at 2:00 PM in H705) 14 Sept Laurent Vera (RU) "Super-equivalences and odd categorification of sl2" 21 Sept Tamar Blanks (RU) "Trace forms and the Witt invariants of finite groups" 28 Sept Lauren Heller (Berkeley) "Characterizing multigraded regularity on products of projective spaces" 12 Oct Yael Davidov (RU) "Admissibility of Groups over Semi-Global Fields in the 'Bad Characteristic' Case" 19 Oct Mandi Schaeffer Fry (Metropolitan State U) TBA (group representation theory) 26 Oct Chuck Weibel (Rutgers) "Grothendieck-Witt groups of singular schemes" 2 Nov Marco Zaninelli (U.Antwerp) "The Pythagoras number of a function field in one variable" 9 Nov Anders Buch (Rutgers) "Pieri rules for quantum K-theory of cominuscule Grassmannians" 16 Nov Francesca Tombari (KTH Sweden) "Realisations of posets and tameness" 23 Nov --- no seminar --- Thanksgiving is Nov. 24 30 Nov --- no seminar --- 7 Dec Eilidh McKemmie (Rutgers) "Galois groups of random additive polynomials" 14 Dec --- no seminar --- Fall 2022 classes begin Tuesday September 6 and end Wednesday Dec. 14. Next semester: Andrzej Zuk (Univ Paris VII) TBA Spring 2022 Seminars (Wednesdays at 2:00 PM in H705) no seminar January 19 as we start in remote mode 26 Jan Weihong Xu (Rutgers) "Quantum $K$-theory of Incidence Varieties" (remote) 2 Feb Rudradip Biswas (Manchester) "Cofibrant objects in representation theory" (remote) 9 Feb Chuck Weibel (RU) "An introduction to monoid schemes" 16 Feb Ian Coley (RU) "Hochster's description of Spec(R)" 23 Feb no seminar -------------------------------------------- 2 Mar Alexei Entin (Tel Aviv) "The minimal ramification problem in inverse Galois theory" (remote) 9 Mar Pham Tiep (RU) "Representations and tensor product growth" 30 Mar Tim Burness (Bristol, UK) "Fixed point ratios for primitive groups and applications" 6 Apr Eugen Rogozinnikov (Strasbourg) "Hermitian Lie groups of tube type as symplectic groups over noncommutative algebras" 13 Apr Eilidh McKemmie (RU) "A survey of various random generation problems for finite groups" 20 Apr Yael Davidov (RU) "Exploring the admissibility of Groups and an Application of Field Patching" 27 Apr Daniel Douglas (Yale) "Skein algebras and quantum trace maps" Spring 2022 classes begin Tuesday January 18 and end Monday May 2.
Spring break is March 12-20, 2022 Fall 2021 Seminars (Wednesdays at 11:00 AM in H525) 15 Sept Yom Kippur 29 Sept Yoav Segev "A characterization of the quaternions using commutators" 6 Oct Lev Borisov (RU) "Explicit equations for fake projective planes" 13 Oct Ian Coley (RU) "Introduction to topoi" 20 OctAnders Buch (RU) POSTPONED to 12/8 ("Tevelev Degrees") 27 Oct Eilidh McKemmie (RU) "The probability of generating invariably a finite simple group" 3 Nov Ian Coley and Chuck Weibel "Localization, and the K-theory of monoid schemes" 10 Nov Max Peroux (Penn) "Equivariant variations of topological Hochschild homology" 17 Nov Shira Gilat (RU) "The infinite Brauer group" 29 Nov (Monday) Wednesday class schedule 1 Dec --- no seminar --- 8 Dec Anders Buch (RU) "Tevelev Degrees" Fall 2021 classes begin Tuesday September 1 and end Monday Dec. 13. Spring 2021 Seminars (Wednesdays at 2:00 PM, on-line) 27 Jan Ian Coley (Rutgers) "Tensor Triangulated Geometry?" 3 Feb Aline Zanardini (U.Penn) "Stability of pencils of plane curves" 10 Feb Svetlana Makarova (U.Penn) "Moduli spaces of stable sheaves over quasipolarized K3 surfaces, and Strange Duality" 17 Feb no seminar 24 Feb Patrick McFaddin (Fordham) "Separable algebras and rationality of arithmetic toric varieties" 3 Mar Christian Klevdal (U.Utah) "Integrality of G-local systems" 10 Mar Justin Lacini (U.Kansas) "On log del Pezzo surfaces in positive characteristic" 17 Mar no seminar ------------------- Spring Break ---------- 24 Mar John Kopper (Penn State) "Ample stable vector bundles on rational surfaces" 14 Apr Allechar Serrano Lopez (U.Utah) "Counting elliptic curves with prescribed torsion over imaginary quadratic fields" 21 Apr Franco Rota (Rutgers) "Motivic semiorthogonal decompositions for abelian varieties" 28 Apr Ben Wormleighton (Washington U./St.Louis) "Geometry of mutations: mirrors and McKay" 5 May Morgan Opie (Harvard) "Complex rank 3 vector bundles on CP5" 12 May Avery Wilson (N. Carolina) "Compactifications of moduli of G-bundles and conformal blocks" Fall 2020 Seminars (Wednesdays at 2:00 PM, on-line) 9 Sept Lev Borisov Rutgers "A journey from the octonionic P2 to a fake P2" 16 Sept Stefano Filipazzi (UCLA) "On the boundedness of n-folds of Kodaira dimension n-1" 23 Sept Yifeng Huang (U.Michigan) "Betti numbers of unordered configuration spaces of a punctured torus" 30 Sept Andrea Ricolfi (SISSA, Italy) "Moduli of semiorthogonal decompositions" 7 Oct Giacomo Mezzedimi (Hannover) "The Kodaira dimension of some moduli spaces of elliptic K3 surfaces" 14 Oct Katrina Honigs (Oregon) "An obstruction to weak approximation on some Calabi-Yau threefolds" 21 Oct Alex Wertheim (UCLA) "Degree One Milnor K-Invariants of Groups of Multiplicative Type" 28 Oct Inna Zakharevich (Cornell) 3:30PM Colloquium "The Dehn complex: scissors congruence, K-theory, and regulators" 4 Nov Michael Wemyss (Glasgow) "Tits cone intersections and Applications" 11 Nov Pieter Belmans (Univ. Bonn) 12:00 noon "Graph potentials as mirrors to moduli of vector bundles on curves" 18 Nov David Hemminger (UCLA) "Lannes' T-functor and Chow rings of classifying spaces" 25 Nov --- no seminar --- Thanksgiving is Nov. 
26; Friday class schedule 2 Dec Clover May (UCLA) "Classifying perfect complexes of Mackey functors" 9 Dec Be'eri Greenfeld (UCSD) "Combinatorics of words, symbolic dynamics and growth of algebras" Fall 2020 classes begin Tuesday September 1 and end Thursday December 10 Spring 2020 Seminars (Wednesdays at 2:00 in H425) Until March 2020, the Algebra Seminar met on Wednesdays at 2:00-3:00PM in the Hill Center, on Busch Campus of Rutgers University. After Spring Break, the seminar moved on-line. 22 Jan Shira Gilat (Bar-Ilan U.) "Higher norm principles for norm varieties" 12 Feb Chuck Weibel (Rutgers) "The K'-theory of monoid sets" 19 Feb Linhui Shen (Michigan State) "Quantum geometry of moduli spaces of local system" 26 Feb Lev Borisov Rutgers "Six explicit pairs of fake projective planes" 4 Mar Saurabh Gosavi (Rutgers) "Generalized Brauer dimension" 25 Mar CANCELLED -- Future seminars moved on-line 1 Apr Joaquin Moraga (Princeton) "On the Jordan property for local fundamental groups" 8 Apr Ian Coley (Rutgers) "Higher K-theory via generators and relations" 15 Apr Franco Rota (Rutgers) "Moduli spaces on the Kuznetsov component of Fano threefolds of index 2" 22 Apr James Cameron (UCLA) "Group cohomology rings via equivariant cohomology" 29 Apr Luca Schaffler (U. Mass) "Compactifications of moduli of points and lines in the projective plane" 6 May Angela Gibney (Rutgers) CANCELLED Spring 2020 classes begin Tuesday January 22 and end Monday May 4 Fall 2019 Seminars (Wednesdays at 2:00 in H525) 18 Sep Sándor Kovács (U.Washington) "Rational singularities and their cousins in arbitrary characteristics" 20 Sep Jacob Lurie (IAS) "On Makkai's Strong Conceptual Completeness Theorem" 25 Sep Saurabh Gosavi (Rutgers) "Generalized Brauer dimension and other arithmetic invariants of semi-global fields" 2 Oct Danny Krashen (Rutgers) "The arithmetic of semiglobal fields via combinatorial topology" 9 Oct Ian Coley (Rutgers) "What is a derivator?" 16 Oct Sumit Chandra Mishra (Emory U.) "Local-global principle for norm over semi-global fields" 23 Oct Chengxi Wang (Rutgers) "strong exceptional collections of line bundles" 30 Oct Angela Gibney (Rutgers) "Vertex algebras of CohFT-type" 6 Nov Robert Laugwitz (Nottingham, UK) "dg categories and their actions" 13 Nov Volodia Retakh (Rutgers) "Noncommutative Laurent Phenomenon: two examples" 20 Nov Carl Lian (Columbia) "Enumerating pencils with moving ramification on curves" 27 Nov --- no seminar --- Thanksgiving is Nov. 28; Friday class schedule 11 Dec Diane Mclagan (U.Warwick) "Tropical scheme theory" Fall 2019 classes begin Tuesday September 3 and end Wednesday Dec.11. Finals are December 14-21, 2019 Spring 2019 Seminars (Wednesdays at 2:00 in SERC 206) Note: First 3 seminars were in H-005; 4th in H705; all others in SERC-206 23 Jan Patrick Brosnan U.Maryland "Palindromicity and the local invariant cycle theorem" 30 Jan Khashayar Sartipi UIUC "Paschke Categories, K-homology and the Riemann-Roch Transformation" 6 Feb Chuck Weibel Rutgers "The Real graded Brauer group" 13 Feb Volodia Retakh Rutgers "An analogue of mapping class groups and noncommutative triangulated surfaces" 20 Feb Dawei Chen Boston College&IAS "Volumes and intersection theory on moduli spaces of abelian differentials" 6 Mar no seminar 13 Mar Jeanne Duflot Colorado State U. 
"A Degree Formula for Equivariant Cohomology" 27 Mar Louis Rowen Bar-Ilan Univ "The algebraic theory of systems" 3 Apr Iulia Gheorghita Boston College "Effective divisors in the Hodge bundle" 10 Apr Gabriel Navarro U.Valencia "Character Tables and Sylow Subgroups of Finite Groups" 17 Apr John Sheridan Stony Brook "Continuous families of divisors on symmetric powers of curves" 24 Apr Yaim Cooper IAS "Severi degrees via representation theory" 1 May Dave Anderson Ohio State "Schubert calculus and the Satake correspondence" Classes end Monday May 6; Finals are May 9-15, 2019 19 Sep Nicola Tarasca Rutgers "Geometry and Combinatorics of moduli spaces of curves" 26 Sep Angela Gibney Rutgers "Basepoint free loci on $M_{0,n}$-bar from Gromov-Witten theory of smooth homogeneous varieties" 5 Oct(FRI) Michael Larsen Indiana U "Irrationality of Motivic Zeta Functions" *** Friday at 10:00 AM in Hill 005 *** 10 Oct Yotam Hendel Weizmann Inst. "On singularity properties of convolutions of algebraic morphisms" 17 Oct Qixiao Ma Columbia Univ. "Brauer class over the Picard scheme of curves" 24 Oct Sandra Di Rocco KTH-Sweden "Generalized Polar Geometry" 31 Oct Igor Rapinchuk Michigan State "Algebraic groups with good reduction and unramified cohomology" 7 Nov Isabel Vogt MIT "Low degree points on curves" 14 Nov Bob Guralnick USC "Low Degree Cohomology" 28 Nov Julie Bergner U.Virginia "2-Segal spaces and algebraic K-theory" 5 Dec Chengxi Wang Rutgers "Quantum Cohomology of Grassmannians" 12 Dec Patrick Brosnan U.Maryland POSTPONED Classes end Wednesday Dec. 12; Finals begin Dec. 15, 2018 Abstracts of seminar talks Galois groups of random additive polynomials (Eilidh McKemmie, Dec. 7, 2022) The Galois group of an additive polynomial over a finite field is contained in a finite general linear group. We will discuss three different probability distributions on these polynomials, and estimate the probability that a random additive polynomial has a "large" Galois group. Our computations use a trick that gives us characteristic polynomials of elements of the Galois group, so we may use our knowledge of the maximal subgroups of GL(n,q). This is joint work with Lior Bary-Soroker and Alexei Entin. Realisations of posets and tameness (Francesca Tombari, Nov.16, 2022) Persistent homology is commonly encoded by vector space-valued functors indexed by posets. These functors are called tame, or persistence modules, and capture the life-span of homological features in a dataset. Every poset can be used to index a persistence module, however some posets are particularly well suited. We introduce a new construction called realisation, which transforms posets into posets. Intuitively, it associates a continuous structure to a locally discrete poset by filling in empty spaces. Realisations share several properties with upper semi-lattices. They behave similarly with respect to certain notions of dimension for posets that we introduce. Moreover, as indexing posets of persistence modules, they allow for good discretisations and effective computation of homological invariants via Koszul complexes. Pieri rules for quantum K-theory of cominuscule Grassmannians (Anders Buch, Nov.9, 2022) The quantum K-theory ring QK(X) of a flag variety X is constructed using the K-theoretic Gromov-Witten invariants of X, defined as arithmetic genera of Gromov-Witten varieties parametrizing curves meeting fixed subvarieties in X, and can be used to compute these invariants. 
A Pieri formula means a formula for multiplication with a set of generators of QK(X). Such a formula makes it possible to compute efficiently in this ring. I will speak about a Pieri formula for QK(X) when X is a cominuscule Grassmannian, that is, an ordinary Grassmannian, a maximal orthogonal Grassmannian, or a Lagrangian Grassmannian. This formula is expressed combinatorially in terms of counting diagrams of boxes labeled by positive integers, also known as tableaux. This is joint work with P.-E. Chaput, L. Mihalcea, and N. Perrin. The Pythagoras number of a function field in one variable (Marco Zaninelli, Nov.2, 2022) The Pythagoras number of a field K is the minimum number n such that any sum of squares in K can be written as a sum of n squares in K. Despite its elementary definition, the computation of the Pythagoras number of a field can be a very complicated task, to the point that for many families of fields we are not even able to produce an upper bound for it. When we are, it is usually thanks to local-global principles for quadratic forms and to modern techniques from algebraic geometry. In this seminar we will focus on the Pythagoras number of function fields in one variable, and more precisely we will show how to obtain the upper bound 5 for the Pythagoras number of a large family of such fields by exploiting a recent local-global principle due to V. Mehmeti. Grothendieck-Witt groups of singular schemes (Chuck Weibel, Oct.26, 2022) We establish some new structural results for the Witt and Grothendieck–Witt groups of schemes over Z[1/2], including homotopy invariance of Witt groups Witt groups of X[t,1/t] Witt groups of punctured affine spaces. Admissibility of Groups over Semi-Global Fields in the "Bad Characteristic" Case (Yael Davidov, Oct.12, 2022) We say a finite group, G, is admissible over a field, F, if there exists a division algebra with center F and a maximal subfield K such that K/F is Galois with group G. The question of which groups are admissible over a given field is generally difficult to answer but has been solved in the case that F is a transcendence degree 1 extension of a complete discretely valued field with algebraically closed residue field, so long as the characteristic of the residue field does not divide the order of the group. This result was obtained in a paper by Harbater, Hartmann and Krashen using field patching techniques in 2009. In this talk we will be discussing progress towards generalizing this result and trying to answer the question, what happens when the characteristic of the residue field does divide the order of G? We will restrict our attention to a special case to make the discussion accessible. Trace forms and the Witt invariants of finite groups (Tamar Blanks, Sept. 21, 2022): A Witt invariant of an algebraic group G over a field k is a natural transformation from G-torsors to the Witt ring, that is, a rule that assigns quadratic forms to algebraic objects in a way that respects field extensions over k. An important example is the invariant sending each etale algebra to its trace form. Serre showed that the ring of Witt invariants of the symmetric group is generated by the trace form invariant and its exterior powers. In this talk we will discuss work towards generalizing Serre's result to other Weyl groups, and more generally to other finite groups. We will also describe the connection between Witt invariants and cohomological invariants via the Milnor conjecture. 
Super-equivalences and odd categorification of sl2 (Laurent Vera, Sept. 14, 2022): In their seminal work on categorifications of quantum groups, Chuang and Rouquier showed that an action of sl2 on a category gives rise to derived equivalences. These equivalences can be used to prove Broué's abelian defect group conjecture for symmetric groups. In this talk, I will present a "super version" of these results. I will introduce the odd 2-category associated with sl2 and describe the properties of its 2-representation theory. I will then describe the super analogues of the Chuang-Rouquier complexes and explain how they give rise to derived equivalences on 2-representations. These derived equivalences lead to a proof of the abelian defect group conjecture for spin symmetric groups. This is joint work with Mark Ebert and Aaron Lauda. Skein algebras and quantum trace maps (Daniel Douglas, April 27, 2022): Skein algebras are certain noncommutative algebras associated to surfaces, appearing at the interface of low-dimensional topology, representation theory, and combinatorics. They occur as quantum deformations of character varieties with respect to their natural Poisson structure, and in particular possess fascinating connections to quantum groups. In this talk, I will discuss the problem of embedding skein algebras into quantum tori, the latter of which have a relatively simple algebraic structure. One such embedding, called the quantum trace map, has been used to shed light on the representation theory of skein algebras, and is related to Fock and Goncharov's quantum higher Teichmüller theory. Exploring the admissibility of Groups and an Application of Field Patching (Yael Davidov, April 20, 2022): Similarly to the inverse Galois problem, one can ask if a group G is admissible over a given field F. This is answered in the affirmative if there exists a division algebra with F as its center that contains a maximal subfield that is a Galois extension of F, with Galois group G. We will review admissibility results over the rationals that have been proven by Schacher and Sonn. We will also give some idea as to how one might try to construct division algebras that prove the admissibility of a particular group. Finally, we will briefly outline how Harbater, Hartmann and Krashen were able to obtain admissibility criteria for groups over a particular class of fields using field patching techniques. Hermitian Lie groups of tube type as symplectic groups over noncommutative algebras (Eugen Rogozinnikov, Aprli 6, 2022): We introduce the symplectic group Sp_2(A,σ) over a noncommutative algebra A with an anti-involution σ. We realize several classical Lie groups as Sp2 over various noncommutative algebras, which provides new insights into their structure theory. We construct several geometric spaces, on which the groups Sp2(A,σ) act. We introduce the space of isotropic A-lines, which generalizes the projective line. We describe the action of Sp2(A,σ) on isotropic A-lines, generalize the Kashiwara-Maslov index of triples and the cross ratio of quadruples of isotropic A-lines as invariants of this action. When the algebra A is Hermitian or the complexification of a Hermitian algebra, we introduce the symmetric space XSp2(A,σ), and construct different models of this space. 
Applying this to classical Hermitian Lie groups of tube type (realized as Sp2(A,σ)) and their complexifications, we obtain different models of the symmetric space as noncommutative generalizations of models of the hyperbolic plane and of the three-dimensional hyperbolic space. Fixed point ratios for primitive groups and applications (Tim Burness, March 30, 2022): Let G be a finite permutation group and recall that the fixed point ratio of an element x, denoted fpr(x), is the proportion of points fixed by x. Fixed point ratios for finite primitive groups have been studied for many decades, finding a wide range of applications. In this talk, I will present some of the main results and applications, focussing on recent joint work with Bob Guralnick where we determine the triples (G,x,r) such that G is primitive, x has prime order r and fpr(x) > 1/(r+1). The latter result allows us to prove new results on the minimal degree and minimal index of primitive groups, and we have used it in joint work with Moreto and Navarro to study the commuting probability of p-elements in finite groups. Representations and tensor product growth (Pham Tiep, March 9, 2022): The deep theory of approximate subgroups establishes 3-step product growth for subsets of finite simple groups G of Lie type of bounded rank. We will discuss 2-step growth results for representations of such groups G (including those of unbounded rank), where products of subsets are replaced by tensor products of representations. This is joint work with M. Larsen and A. Shalev. The minimal ramification problem in inverse Galois theory (Alexei Entin, March 2, 2022): For a number field K and a finite group G the Boston-Markin Conjecture predicts the minimal number of ramified places (of K) in a Galois extension L/K with Galois group G. The conjecture is wide open even for the symmetric and alternating groups S_n, A_n over the field of rational numbers Q. We formulate a function field version of this conjecture, settle it for the rational function field K=F_q(T) and G=S_n with a mild restriction on q,n and make significant progress towards the G=A_n case. We also discuss some other groups and the connection between the minimal ramification problem and the Abhyankar conjectures on the etale fundamental group of the affine line in positive characteristic. An introduction to monoid schemes (Chuck Weibel, Feb. 9, 2022): If A is a pointed abelian monoid, its prime ideals make sense and form a topological space, analogous to Spec of a ring; the notion of a monoid scheme is analogous to the notion of a scheme in Algebraic Geometry. The monoid ring construction k[A] gives a link to geometry. In this talk I will give an introduction to the basic ideas, including toric monoid schemes, which model toric varieties. Cofibrant objects in representation theory (Rudradip Biswas, Feb. 2, 2022): Cofibrant modules, as defined by Benson, play important roles in many cohomology questions of infinite discrete groups. In this talk, I will (a) talk about my new work on the relation between the class of these modules and Gorenstein projectives where I'll build on Dembegioti-Talelli's work, and (b) highlight new results from one of my older papers on the behaviour of an invariant closely related to these modules. If time permits, I'll show a possible generalization of many of these results to certain classes of topological groups. Quantum $K$-theory of Incidence Varieties (Weihong Xu, Jan. 
26, 2022): Buch, Mihalcea, Chaput, and Perrin proved that for cominuscule flag varieties, (T-equivariant) K-theoretic (3-pointed, genus 0) Gromov-Witten invariants can be computed in the (equivariant) ordinary K-theory ring. Buch and Mihalcea have a related conjecture for all type A flag varieties. In this talk, I will discuss work that proves this conjecture in the first non-cominuscule case--the incidence variety Fl(1,n-1;n). The proof is based on showing that Gromov-Witten varieties of stable maps to Fl(1,n-1;n) with markings sent to a Schubert variety, a Schubert divisor, and a point are rationally connected. As applications, I will also discuss positive formulas that determine the equivariant quantum K-theory ring of Fl(1,n-1;n). The talk is based on the arxiv preprint at https://arxiv.org/abs/2112.13036. Tevelev degrees (Anders Buch, December 8, 2021): Let X be a non-singular complex projective variety. The virtual Tevelev degree of X associated to (g,d,n) is the (virtual) degree of the forgetful map from theKontsevich moduli space Mg,n(X,d) of n-pointed stable maps to X of genus g and degree d, to the product Mg,n × Xn. Recent work of Lian and Pandharipande shows that this invariant is enumerative in many cases, that is, it is the number of degree-d maps from a fixed genus-g curve to X, that send n fixed points in the curve to n fixed points in X. I will speak about a simple formula for this degree in terms of the (small) quantum cohomology ring of X. If X is a Grassmann variety (or more generally, a cominuscule flag variety) then the virtual Tevelev degrees of X can be expressed in terms of the (real) eigenvalues of a symmetric endomorphism of the quantum cohomology ring. If X is a complete intersection of low degree compared to its dimension, then the virtual Tevelev degrees of X are given by an explicit product formula. I will do my best to keep this talk student-friendly, so the most of it will be about explaining the ingredients of this abstract. The results are joint work with Rahul Pandharipande. Equivariant variations of topological Hochschild homology (Maximilien Peroux, November 10, 2021): Topological Hochschild homology (THH) is an important variant for rings and ring spectra. It is built as a geometric realization of a cyclic bar construction. It is endowed with an action of the circle, because it is a geometric realization of a cyclic object. The simplex category factors through Connes' category Λ. Similarly, real topological Hochschild homology (THR) for rings (and ring spectra) with anti-involution is endowed with a O(2)-action. Here instead of the cyclic category Λ, we use the dihedral category Ξ. From work in progress with Gabe Angelini-Knoll and Mona Merling, I present a generalization of Λ and Ξ called crossed simplicial groups, introduced by Fiedorwicz and Loday. To each crossed simplical group G, I define THG, an equivariant analogue of THH. Its input is a ring spectrum with a twisted group action. THG is an algebraic invariant endowed with different action and cyclotomic structure, and generalizes THH and THR. Localization, and the K-theory of monoid schemes (Ian Coley and Chuck Weibel, November 3, 2021): We develop the K-theory of sets with an action of a pointed monoid (or monoid scheme), analogous to the $K$-theory of modules over a ring (or scheme). In order to form localization sequences, we construct the quotient category of a nice regular category by a Serre subcategory. A special case is the localization of an abelian category by a Serre subcategory. 
The probability of generating invariably a finite simple group (Eilidh McKemmie, October 27, 2021): We say a group is invariably generated by a subset if every tuple in the product of conjugacy classes of elements in that subset is a generating tuple. We discuss the history of computational Galois theory and probabilistic generation problems to motivate some results about the probability of generating invariably a finite simple group, joint work with Daniele Garzoni. We also highlight some methods for studying probabilistic invariable generation. Introduction to topoi (Ian Coley, October 13, 2021): The theory of sheaves on a topological space or scheme admits a generalization to sheaves on a category equipped with a topology, which we call a site. This level of generality allows us access to interesting cohomology theories on schemes that don't make sense at the point-set level. We'll give the basic definitions, warm up by categorifying the notion of sheaves on a topological space, then get into these new topologies and their associated sheaf cohomologies. Explicit equations for fake projective planes (Lev Borisov, October 6, 2021): There are 50 complex conjugate pairs of fake projective planes, realized as quotients of the complex 2-ball. However, in most cases there are no known explicit embeddings into a projective space. In this talk I will describe my work over the past several years (with multiple co-authors) which resulted in explicit equations for 9 out of the 50 pairs. It is a wild ride in the field of computer assisted AG computations. A characterization of the quaternions using commutators (Yoav Segev, September 29, 2021): Let D be a quaternion division algebra over a field F. Thus D=F +F i +F j+ F k, with i^2, j^2 in F and k=ij=-ji. A pure quaternion is an element p in D such that p is in F i+F j+F k. It is easy to check that p^2 is in F, for a pure quaternion p, and that given x,y in D, the commutator (x,y)=xy-yx is a pure quaternion. We show that this characterizes quaternion division algebras, namely, any associative ring R with 1, such that the commutator (x,y) is not a zero divisor and satisfies (x,y)^2 is in the center of R, for all nonzero x,y in R, is a quaternion division algebra. The proof is elementary and self contained. This is joint work with Erwin Kleinfeld Compactifications of moduli of G-bundles and conformal blocks (Avery Wilson, May 12, 2021): I will talk about Schmitt and Munoz-Castaneda's compactification of the moduli space of G-bundles on a curve and its relation to conformal blocks. I use this compactification to prove finite generation of the conformal blocks algebra over the stack of stable curves of genus >1, which Belkale-Gibney had previously proven for G=SL(r). This yields a nice compactification for the relative moduli space of G-bundles. Complex rank 3 vector bundles on CP5 (Morgan Opie, May 5,2021: Given the ubiquity of vector bundles, it is perhaps surprising that there are so many open questions about them -- even on projective spaces. In this talk, I will outline my ongoing work on complex rank 3 topological vector bundles on CP5. I will describe a classification of such bundles, involving a connection to topological modular forms. I will also discuss a topological, rank-preserving additive structure which allows for the construction of new rank 3 bundles on CP^5 from "simple" ones. This construction is an analogue to an algebraic construction of Horrocks. 
As time allows, I will discuss future algebro-geometric directions related to this project. Geometry of mutations: mirrors and McKay (Ben Wormleighton, April 28, 2021): There are several notions of mutation that arise in different parts of algebra, geometry, and combinatorics. I will discuss some of these appearances in mirror symmetry and in the McKay correspondence with a view towards approaching classification problems for Fano varieties and for crepant resolutions of orbifold singularities. Motivic semiorthogonal decompositions for abelian varieties (Fanco Rota, April 21, 2021): A motivic semiorthogonal decomposition is the decomposition of the derived category of a quotient stack [X/G] into components related to the "fixed-point data". They represent a categorical analog of the Atiyah-Bott localization formula in equivariant cohomology, and their existence is conjectured for finite G (and an additional smoothess assumption) by Polishchuk and Van den Bergh. I will present joint work with Bronson Lim, in which we construct a motivic semiorthogonal decomposition for a wide class of smooth quotients of abelian varieties by finite groups, using the recent classification by Auffarth, Lucchini Arteche, and Quezada. Counting elliptic curves with prescribed torsion over imaginary quadratic fields (Allechar Serrano Lopez, April 14, 2021): A generalization of Mazur's theorem states that there are 26 possibilities for the torsion subgroup of an elliptic curve over a quadratic extension of Q. If G is one of these groups, we count the number of elliptic curves of bounded naive height whose torsion subgroup is isomorphic to G in the case of imaginary quadratic fields. Ample stable vector bundles on rational surfaces (John Kopper, March 24, 2021): Ample vector bundles are among the most important "positive" vector bundles in algebraic geometry, but have resisted attempts at classification, especially in dimensions two and higher. In this talk, I will discuss a moduli-theoretic approach to this problem that dates to Le Potier and is particularly powerful on rational surfaces: study Chern characters for which the general stable bundle is ample. After reviewing the ideas of stability and ampleness for vector bundles, I will discuss some new results in this direction for minimal rational surfaces. First, I will give a complete classification of Chern characters on these surfaces for which the general stable bundle is both ample and globally generated. Second, I will explain how this classification also holds in an asymptotic sense without the assumption of global generation. This is joint work with Jack Huizenga. On log del Pezzo surfaces in positive characteristic (Justin Lacini, March 10, 2021): A log del Pezzo surface is a normal surface with only Kawamata log terminal singularities and anti-ample canonical class. Over the complex numbers, Keel and McKernan have classified all but a bounded family of log del Pezzo surfaces of Picard number one. In this talk we will extend their classification to positive characteristic. In particular, we will prove that for p>3 every log del Pezzo surface of Picard number one admits a log resolution that lifts to characteristic zero over a smooth base. As a consequence, we will see that Kawamata-Viehweg vanishing holds in this setting. Finally, we will conclude with some counterexamples in characteristic two, three and five. 
Integrality of G-local systems (Christian Klevdal, March 3,2021): Simpson conjectured that for a reductive group G, rigid G-local systems on a smooth projective complex variety are integral. I will discuss a proof of integrality for cohomologically rigid G-local systems. This generalizes and is inspired by work of Esnault and Groechenig for GL_n. Surprisingly, the main tools used in the proof (for general G and GL_n) are the work of L. Lafforgue on the Langlands program for curves over function fields, and work of Drinfeld on companions of \ell-adic sheaves. The major differences between general G and GL_n are first to make sense of companions for G-local systems, and second to show that the monodromy group of a rigid G-local system is semisimple. All work is joint with Stefan Patrikis. Separable algebras and rationality of arithmetic toric varieties (Patrick McFaddin, February 24, 2021): The class of toric varieties defined over the complex numbers gives a robust testing ground for computing various invariants, e.g., algebraic K-theory and derived categories. To obtain a broader sense of the capabilities of these invariants, we look to the arithmetic setting and twisted forms of toric varieties. In this talk, I will discuss work on distinguishing forms of toric varieties using separable algebras and how this sheds light on the connection between derived categories and rationality questions. This is joint work with M. Ballard, A. Duncan, and A. Lamarche. Moduli spaces of stable sheaves over quasipolarized K3 surfaces, and Strange Duality (Svetlana Makarova, February 10, 2021): I will talk about a construction of relative moduli spaces of stable sheaves over the stack of quasipolarized surfaces. For this, I first retrace some of the classical results in the theory of moduli spaces of sheaves on surfaces to make them work over the nonample locus. Then I will recall the theory of good moduli spaces, whose study was initiated by Alper and concerns an intrinsic (stacky) reformulation of the notion of good quotients from GIT. Finally, I use a criterion by Alper-Heinloth-Halpern-Leistner, coupled with some categorical arguments, to prove existence of the good moduli space. Stability of pencils of plane curves (Aline Zanardini, February 3, 2021): I will discuss some recent results on the problem of classifying pencils of plane curves via geometric invariant theory. We will see how the stability of a pencil is related to the stability of its generators, to the log canonical threshold of its members, and to the multiplicities of its base points. What is Tensor Triangulated Geometry? (Ian Coley, January 27, 2021): Based on work of Thomason, Balmer defined a way to think about varieties from a purely category-theoretic point of view. By considering not only the triangulated structure of the derived category but also the tensor product, one can (nearly) do geometry within the category Db(X) itself. I will discuss the construction of the 'Balmer spectrum' and give some pertinent examples. Combinatorics of words, symbolic dynamics and growth of algebras (Be'eri Greenfeld, December 9, 2020): The most important invariant of a finite dimensional algebra is its dimension. Let A be a finitely generated, infinite dimensional associative or Lie algebra over some base field F. A useful way to 'measure its infinitude' is to study its growth rate, namely, the asymptotic behavior of the dimensions of the spaces spanned by (at most n)-fold products of some fixed generators. 
Up to a natural asymptotic equivalence relation, this function becomes a well-defined invariant of the algebra itself, independent of the specification of generators. The question of 'how do algebras grow?', or, which functions can be realized as growth rates of algebras (perhaps with additional algebraic properties, as grading, simplicity etc.) plays an important role in classifying infinite dimensional algebras of certain classes, and is thus connected to ring theory, noncommutative projective geometry, quantum algebra, arithmetic geometry, combinatorics of infinite words, symbolic dynamics and more. We present new results on possible and impossible growth rates of important classes of associative and Lie algebras, thereby settling several open questions in this area. Among the tools we apply are novel techniques and recent constructions arising from noncommutative algebra, combinatorics of (infinite trees of) infinite words and convolution algebras of étale groupoids attached to them. Classifying perfect complexes of Mackey functors (Clover May, December 2, 2020): Mackey functors were introduced by Dress and Green to encode operations that behave like restriction and induction in representation theory. They play a central role in equivariant homotopy theory, where homotopy groups are replaced by homotopy Mackey functors. In this talk I will discuss joint work with Dan Dugger and Christy Hazel classifying perfect chain complexes of Mackey functors for G=Z/2. Our classification leads to a computation of the Balmer spectrum of the derived category. It has topological consequences as well, classifying all modules over the equivariant Eilenberg--MacLane spectrum HZ/2. Lannes' T-functor and Chow rings of classifying spaces (David Hemminger, November 18, 2020): Equivariant Chow rings, including Chow rings of classifying spaces of algebraic groups, appear often in nature but are difficult to compute. Like singular cohomology in topology, these Chow rings modulo a prime p carry the additional structure of unstable modules over the Steenrod algebra. We utilize this extra structure to refine estimates of equivariant Chow rings mod p. As a special case, we prove an analog of Quillen's stratification theorem, generalizing and recovering prior results of Yagita and Totaro. Graph potentials as mirrors to moduli of vector bundles on curves (Pieter Belmans, November 11, 2020): In a joint work with Sergey Galkin and Swarnava Mukhopadhyay we have introduced a class of Laurent polynomials associated to decorated trivalent graphs which we called graph potentials. These Laurent polynomials satisfy interesting symmetry and compatibility properties. Under mirror symmetry they are related to moduli spaces of rank 2 bundles (with fixed determinant of odd degree) on a curve of genus $g\geq 2$, which is a class of Fano varieties of dimension $3g-3$. I will discuss (parts of) the (enumerative / homological) mirror symmetry picture for Fano varieties, and then explain what we understand for this class of varieties and what we can say about the (conjectural) semiorthogonal decomposition of the derived category. Tits cone intersections and Applications (Michael Wemyss, November 4, 2020): In the first half of the talk, I will give an overview of Tits cone intersections, which are structures that can be obtained from (possibly affine) ADE Dynkin diagrams, together with a choice of nodes. This is quite elementary, but visually very beautiful, and it has some really remarkable features and applications. 
In the second half of the talk I will highlight some of the applications to algebraic geometry, mainly to 3-fold flopping contractions, through mutation and stability conditions. This should be viewed as a categorification of the first half of my talk. Parts are joint work with Yuki Hirano, parts with Osamu Iyama. The Dehn complex: scissors congruence, K-theory, and regulators (Inna Zakharevich, October 28, 2020): Hilbert's third problem asks: do there exist two polyhedra with the same volume which are not scissors congruent? In other words, if P and Q are polyhedra with the same volume, is it always possible to write P as the union of P_i, and Q as the union of Q_i, such that the P's and Q's intersect only on the boundaries and such that P_i is congruent to Q_i? In 1901 Dehn answered this question in the negative by constructing a second scissors congruence invariant now called the "Dehn invariant," and showing that a cube and a regular tetrahedron never have equal Dehn invariants, regardless of their volumes. We can then restate Hilbert's third problem: do the volume and Dehn invariant separate the scissors congruence classes? In 1965 Sydler showed that the answer is yes; in 1968 Jessen showed that this result extends to dimension 4, and in 1982 Dupont and Sah constructed analogs of such results in spherical and hyperbolic geometries. However, the problem remains open past dimension 4. By iterating Dehn invariants Goncharov constructed a chain complex, and conjectured that the homology of this chain complex is related to certain graded portions of the algebraic K-theory of the complex numbers, with the volume appearing as a regulator. In joint work with Jonathan Campbell, we have constructed a new analysis of this chain complex which illuminates the connection between the Dehn complex and algebraic K-theory, and which opens new routes for extending Dehn's results to higher dimensions. In this talk we will discuss this construction and its connections to both algebraic and Hermitian K-theory, and discuss the new avenues of attack that this presents for the generalized Hilbert's third problem. Degree One Milnor K-Invariants of Groups of Multiplicative Type (Alex Wertheim, October 21, 2020): Many important algebraic objects can be viewed as G-torsors over a field F, where G is an algebraic group over F. For example, there is a natural bijection between F-isomorphism classes of central simple F-algebras of degree n and PGL_n(F)-torsors over Spec(F). Much as one may study principal bundles on a manifold via characteristic classes, one may likewise study G-torsors over a field via certain associated Galois cohomology classes. This principle is made precise by the notion of a cohomological invariant, which was first introduced by Serre. In this talk, we will determine the cohomological invariants for algebraic groups of multiplicative type with values in H^1(-, Q/Z(1)). Our main technical analysis will center around a careful examination of mu_n-torsors over a smooth, connected, reductive algebraic group. Along the way, we will compute a related group of invariants for smooth, connected, reductive groups. An obstruction to weak approximation on some Calabi-Yau threefolds (Katrina Honigs, October 14, 2020): The study of Q-rational points on algebraic varieties is fundamental to arithmetic geometry. One of the few methods available to show that a variety does not have any Q-points is to give a Brauer-Manin obstruction.
Hosono and Takagi have constructed a class of Calabi-Yau threefolds that occur as a linear section of a double quintic symmetroid and given a detailed analysis of them as complex varieties in the context of mirror symmetry. This construction can be used to produce varieties over Q as well, and these threefolds come tantalizingly equipped with a natural Brauer class. In work with Hashimoto, Lamarche and Vogt, we analyze these threefolds and their Brauer class over Q and give a condition under which the Brauer class obstructs weak approximation, though it cannot obstruct the existence of Q-rational points. The Kodaira dimension of some moduli spaces of elliptic K3 surfaces (Giacomo Mezzedimi, October 7, 2020): Let $\mathcal{M}_{2k}$ denote the moduli space of $U\oplus \langle -2k\rangle$-polarized K3 surfaces. Geometrically, the K3 surfaces in $\mathcal{M}_{2k}$ are elliptic and contain an extra curve class, depending on $k\ge 1$. I will report on a joint work with M. Fortuna and M. Hoff, in which we compute the Kodaira dimension of $\mathcal{M}_{2k}$ for almost all $k$: more precisely, we show that it is of general type if $k\ge 220$ and unirational if $k\le 50$, $k\not\in \{11,35,42,48\}$. After introducing the general problem, I will compare the strategies used to obtain both results. If time permits, I will show some examples arising from explicit geometric constructions. Moduli of semiorthogonal decompositions (Andrea Ricolfi, September 30, 2020): We discuss the existence of a moduli space parametrising semiorthogonal decompositions on the fibres of a smooth projective morphism X/U. More precisely, we define a functor on (Sch/U) sending V/U to the set of semiorthogonal decompositions on Perf(X_V). We show this functor defines an etale algebraic space over U. As an application, we prove that if the generic fibre of X/U is indecomposable, then so are all fibres. We discuss some examples and applications. Joint work with Pieter Belmans and Shinnosuke Okawa. Betti numbers of unordered configuration spaces of a punctured torus (Yifeng Huang, September 23, 2020): Let X be a elliptic curve over C with one point removed, and consider the unordered configuration spaces Conf^n(X)={(x_1,...,x_n): x_i\ne x_j for i\ne j} / S_n. We present a rational function in two variables from whose coefficients we can read off the i-th Betti numbers of Conf^n(X) for all i and n. The key of the proof is a property called "purity", which was known to Kim for (ordered or unordered) configuration spaces of the complex plane with r >= 0 points removed. We show that the unordered configuration spaces of X also have purity (but with different weights). This is a joint work with G. Cheong. On the boundedness of n-folds of Kodaira dimension n-1 (Stefano Filipazzi, September 16, 2020): One of the main topics in the classification of algebraic varieties is boundedness. Loosely speaking, a set of varieties is called bounded if it can be parametrized by a scheme of finite type. In the literature, there is extensive work regarding the boundedness of varieties belonging to the three building blocks of the birational classificaiton of varieties: varieties of Fano type, Calabi--Yau type, and general type. Recently, work of Di Cerbo--Svaldi and Birkar introduced ideas to deduce boundedness statements for fibrations from boundedness results concerning these three classes of varieties. Following this philosophy, in this talk I will discuss some natural conditions for a set of n-folds of Kodaira dimension n-1 to be bounded. 
Part of this talk is based on joint work with Roberto Svaldi. A journey from the octonionic P2 to a fake P2 (Lev Borisov, September 9, 2020): This is joint work with Anders Buch and Enrico Fatighenti. We discover a family of surfaces of general type with K2=3 and p=q=0 as free C13 quotients of special linear cuts of the octonionic projective plane OP2. A special member of the family has 3 singularities of type A2, and is a quotient of a fake projective plane, which we construct explicitly. Compactifications of moduli of points and lines in the projective plane (Luca Schaffler, April 29, 2020): Projective duality identifies the moduli space Bn parametrizing configurations of n general points in projective plane with X(3,n), parametrizing configurations of n general lines in the dual plane. When considering degenerations of such objects, it is interesting to compare different compactifications of the above moduli spaces. In this work, we consider Gerritzen-Piwek's compactification Bn and Kapranov's Chow quotient compactification X(3,n), and we show they have isomorphic normalizations. We prove that Bn does not admit a modular interpretation claimed by Gerritzen and Piwek, namely a family of n-pointed central fibers of Mustafin joins associated to one-parameter degenerations of n points in the plane. We construct the correct compactification of Bn which admits such a family, and we describe it for n=5,6. This is joint work in progress with Jenia Tevelev. Group cohomology rings via equivariant cohomology (James Cameron, April 22, 2020): The cohomology rings of finite groups are typically very complicated, but their geometric properties are often tractable and retain representation theoretic information. These geometric properties become more clear once one considers group cohomology rings in the context of equivariant cohomology. In this talk I will discuss how to use techniques involving flag varieties dating back to Quillen and a filtration of equivariant cohomology rings due to Duflot to study the associated primes and local cohomology modules of group cohomology rings. This talk will be online, using webex Moduli spaces on the Kuznetsov component of Fano threefolds of index 2 (Franco Rota, April 15, 2020): The derived category of a Fano threefold Y of Picard rank 1 and index 2 admits a semiorthogonal decomposition. This defines a non-trivial subcategory Ku(Y) called the Kuznetsov component, which encodes most of the geometry of Y. I will present a joint work with M. Altavilla and M. Petkovic, in which we describe certain moduli spaces of Bridgeland-stable objects in Ku(Y), via the stability conditions constructed by Bayer, Macri, Lahoz and Stellari. Furthermore, in our work we study the behavior of the Abel-Jacobi map on these moduli space. As an application in the case of degree d=2, we prove a strengthening of a categorical Torelli Theorem by Bernardara and Tabuada. Higher K-theory via generators and relations (Ian Coley, April 8, 2020): K0 (the Grothendieck group) of an exact category has a nice description in terms of generators and relations. Nenashev (after Quillen and Gillet-Grayson) proved that K1 can also be described in terms of generators and relations, and Grayson extended that argument to all higher K-groups. I will sketch Grayson's argument and (ideally) show some advantages of the generators and relations approach. 
On the Jordan property for local fundamental groups (Joaquin Moraga, April 1, 2020): We discuss the Jordan property for the local fundamental group of klt singularities. We also show how the existence of a large Abelian subgroup of such a group reflects on the geometry of the singularity. Finally, we show a characterization theorem for klt 3-fold singularities with large local fundamental group. Six explicit pairs of fake projective planes (Lev Borisov, February 26, 2020): I will briefly review the history of fake projective planes and will talk about my latest work on the subject, joint with Enrico Fatighenti. Quantum geometry of moduli spaces of local system (Linhui Shen, February 19, 2020): Let G be a split semi-simple algebraic group over Q. We introduce a natural cluster structure on moduli spaces of G-local systems over surfaces with marked points. As a consequence, the moduli spaces of G-local systems admit natural Poisson structures, and can be further quantized. We will study the principal series representations of such quantum spaces. It will recover many classical topics, such as the q-deformed Toda systems, quantum groups, and the modular functor conjecture for such representations. This talk will mainly be based on joint work with A.B. Goncharov. The K'-theory of monoid sets (Chuck Weibel, February 5, 2020): There are three flavors of K-theory for a pointed abelian monoid A; they depend on the A-sets one allows. This talk considers the well-behaved family of partially cancellative (pc) A-sets, and its K-theory. For example, if A is the natural numbers, then pc A-sets are just rooted trees. Higher norm principles for norm varieties (Shira Gilat, January 22, 2020): The norm principle for a division algebra states that the image of the reduced norm is an invariant of its Brauer-equivalence class. This can be generalized to symbols in the Milnor K-group KMn(F). We prove a generalized norm principle for symbols in KMn(F) for a prime-to-p closed field F of characteristic zero (for some prime p). We also give a new proof for the norm principle for division algebras, using the decomposition theorem for (noncommutative) polynomials over the algebra. Tropical scheme theory (Diane Mclagan, December 11, 2019): Tropical geometry can be viewed as algebraic geometry over the tropical semiring (R union infinity, with operations min and +). This perspective has proved surprisingly effective over the last decade, but has so far has mostly been restricted to the study of varieties and cycles. I will discuss a program to construct a scheme theory for tropical geometry. This builds on schemes over semirings, but also introduces concepts from matroid theory. This is joint work with Felipe Rincon, involving also work of Jeff and Noah Giansiracusa and others. Enumerating pencils with moving ramification on curves, (Carl Lian, November 20, 2019): We consider the general problem of enumerating branched covers of the projective line from a fixed general curve subject to ramification conditions at possibly moving points. Our main computations are in genus 1; the theory of limit linear series allows one to reduce to this case. We first obtain a simple formula for a weighted count of pencils on a fixed elliptic curve E, where base-points are allowed. We then deduce, using an inclusion-exclusion procedure, formulas for the numbers of maps E→ P1 with moving ramification conditions. A striking consequence is the invariance of these counts under a certain involution. 
Our results generalize work of Harris, Logan, Osserman, and Farkas-Moschetti-Naranjo-Pirola. Noncommutative Laurent Phenomena: two examples (Volodia Retakh, November 13, 2019): We discuss two examples when iterations of the noncommutative rational map are given by noncommutative Laurent polynomials. The first example is related to noncommutative triangulation of surfaces. The second example, which leads to a noncommutative version of the Catalan numbers, is related to solutions of determinant-like equations. The talk is based on joint papers with A. Berenstein from U. of Oregon. Vertex algebras of CohFT-type (Angela Gibney, October 30, 2019): Finitely generated admissible modules over "vertex algebras of CohFT-type" can be used to construct vector bundles of coinvariants and conformal blocks on moduli spaces of stable curves. In this talk I will say what vertex algebras of CohFT-type are, and explain how such bundles define semisimple cohomological field theories. As an application, one can give an expression for their total Chern character in terms of the fusion rules. I'll give some examples. Strong exceptional collections of line bundles (Chengxi Wang, October 23, 2019): We study strong exceptional collections of line bundles on Fano toric Deligne-Mumford stacks with rank of Picard group at most two. We prove that any strong exceptional collection of line bundles generates the derived category of the stack, as long as the number of elements in the collection equals the rank of the (Grothendieck) K-theory group of the stack. The problem reduces to an interesting combinatorial problem and is solved by combinatorial means. Local-global principle for norm over semi-global fields, Sumit Chandra Mishra, Oct. 16, 2019): Let K be a complete discretely valued field with residue field κ. Let F be a function field in one variable over K and X a regular proper model of F with reduced special fibre X a union of regular curves with normal crossings. Suppose that the graph associated to X is a tree (e.g. F = K(t). Let L/F be a Galois extension of degree n with Galois group Gand n coprime to char(κ). Suppose that κ is algebraically closed field or a finite field containing a primitive nth root of unity. Then we show that an element in F* is a norm from the extension L/F if it is a norm from the extensions L⊗FFν (i.e., $L\otimes_F F_\nu/F_\nu$) for all discrete valuations ν of F. What is a derivator? (Ian Coley, October 9, 2019): Derivators were introduced in the 90s by Grothendieck, Heller, and Franke (independently) to generalize triangulated categories and answer questions in homotopy theory and algebraic geometry using a more abstract framework. Since then, applications to modular representation theory, tensor triangulated geometry, tilting theory, K-theory, equivariant homotopy theory, and more have been developed by scores of mathematicians. This talk will give the basic definition of a derivator, motivated by the initial question of enhancing a triangulated category, describe some of these useful applications to the "real world" away from category theory. We assume a priori the listener's interest in triangulated category theory and one or more of the above disciplines. In particular, no knowledge of infinity/quasicategories is required! 
Generalized Brauer dimension and other arithmetic invariants of semi-global fields (Saurabh Gosavi, October 2, 19): Given a finite set of Brauer classes B of a fixed period ℓ, we define ind(B) to be the gcd of degrees of field extensions L/F such that α⊗FL=0 for every α in B. We provide upper-bounds for ind(B) which depends upon arithmetic invariants of fields of lower arithmetic complexity. As a simple application of our result, we will obtain upper-bounds for the splitting index of quadratic forms and finiteness of symbol length for function fields of curves over higher-local fields. Rational singularities and their cousins in arbitrary characteristics (Sándor Kovacs, Sept. 18, 2019): I will discuss several results about rational and closely related singularities in arbitrary characteristics. The results concern various properties of these singularities including their behavior with respect to deformations and degenerations, and applications to moduli theory. On Makkai's Strong Conceptual Completeness Theorem (Jacob Lurie, Sept. 20, 2019): One of the most fundamental results of mathematical logic is the celebrated Godel completeness theorem, which asserts that every consistent first-order theory T admits a model. In the 1980s, Makkai proved a much sharper result: any first-order theory T can be recovered, up to a suitable notion of equivalence, from its category of models Mod(T) together with some additional structure (supplied by the theory of ultraproducts). In this talk, I'll explain the statement of Makkai's theorem and sketch a new proof of it, inspired by the theory of "pro-etale sheaves" studied by Scholze and Bhatt-Scholze. Severi degrees via representation theory (Dave Anderson, May 1, 2019): As a vector space, the cohomology of the Grassmannian Gr(k,n) is isomorphic to the k-th exterior power of C^n. The geometric Satake correspondence explains how to naturally upgrade this isomorphism to one of $gl_n$-representations. Inspired by work of Golyshev and Manivel from 2011, we use these ideas to find new proofs of Giambelli formulas for ordinary and orthogonal Grassmannians, as well as rim-hook rules for quantum cohomology. This is joint with Antonio Nigro. Severi degrees via representation theory (Yaim Cooper, April 24, 2019): The Severi degrees of $P^1$ x $P^1$ can be computed in terms of an explicit operator on the Fock space $F[P^1]$. We will discuss this and variations on this theme. We will explain how to use this approach to compute the relative Gromov-Witten theory of other surfaces, such as Hirzebruch surfaces and Ex$P^1$. We will also discuss operators for calculating descendants. Joint with R. Pandharipande. Continuous families of divisors on symmetric powers of curves (John Sheridan, April 17, 2019): For X a smooth projective variety, we consider its set of effective divisors in a fixed cohomology class. This set naturally forms a projective scheme and if X is a curve, this scheme is a smooth, irreducible variety (fibered in linear systems over the Picard variety). However, when X is of higher dimension, this scheme can be singular and reducible. We study its structure explicitly when X is a symmetric power of a curve. Character Tables and Sylow Subgroups of Finite Groups (Gabriel Navarro, April 10, 2019): Brauer's Problem 12 asks which properties of Sylow subgroups can be detected in the character table of a finite group. We will talk about recent progress on this problem. 
Effective divisors in the Hodge bundle (Iulia Gheorghita, April 3, 2019): Computing effective divisor classes can reveal important information about the geometry of the underlying space. For example, in 1982 Harris and Mumford computed the Brill-Noether divisor class and used it to determine the Kodaira dimension of the moduli space of curves. In this talk I will explain how to compute the divisor class of the locus of canonical divisors in the projectivized Hodge bundle over the moduli space of curves which have a zero at a Weierstrass point. I will also explain the extremality of the divisor class arising from the stratum of canonical divisors with a double zero. The algebraic theory of systems (Louis Rowen, March 27, 2019): The notion of ``system'' is introduced to unify classical algebra with tropical mathematics, hyperfields, and other related areas for which we can embed a partial algebraic structure into a fuller structure from which we can extract more information. The main ideas are a generalized negation map since our structures lack classical negatives, and a ``surpassing relation'' to replace equality. We discuss this theory with emphasis on the main applications, which will be described from the beginning: 1. Classical algebra 2. Supertropical mathematics (used for valuations and tropicalization) 3. Symmetrized systems (used for embedding additively idempotent semi structures into systems) 4. Hyperfields A Degree Formula for Equivariant Cohomology (Jeanne Duflot, March 13, 2019): I will talk about a generalization of a result of Lynn on the "degree" of an equivariant cohomology ring $H^*_G(X)$. The degree of a graded module is a certain coefficient of its Poincaré series, expanded as a Laurent series about t=1. The main theorem, which is joint with Mark Blumstein, is an additivity formula for degree: $$\deg(H^*_G(X)) = \sum_{[A,c] \in \mathcal{Q'}_{max}(G,X)}\frac{1}{|W_G(A,c)|} \deg(H^*_{C_G(A,c)}(c)).$$ Volumes and intersection theory on moduli spaces of abelian differentials (Dawei Chen, February 20, 2019): Computing volumes of moduli spaces has significance in many fields. For instance, the celebrated Witten's conjecture regarding intersection numbers on the Deligne-Mumford moduli space of stable curves has a fascinating connection to the Weil-Petersson volume, which motivated Mirzakhani to give a proof via Teichmueller theory, hyperbolic geometry, and symplectic geometry. The initial two other proofs of Witten's conjecture by Kontsevich and by Okounkov-Pandharipande also used various ideas in ribbon graphs, Gromov-Witten theory, and Hurwitz theory. In this talk I will introduce an analogous formula of intersection numbers on the moduli spaces of abelian differentials that computes the Masur-Veech volumes. This is joint work with Moeller, Sauvaget, and Zagier (arXiv:1901.01785). The Real graded Brauer group (Chuck Weibel, February 6, 2019): We introduce a version of the Brauer--Wall group for Real vector bundles of algebras (in the sense of Atiyah), and compare it to the topological analogue of the Witt group. For varieties over the reals, these invariants capture the topological parts of the Brauer--Wall and Witt groups. Paschke Categories, K-homology and the Riemann-Roch Transformation (Khashayar Sartipi, January 30, 2019): For a separable C*-algebra A, we introduce an exact C*-category called the Paschke Category of A, which is completely functorial in A, and show that its K-theory groups are isomorphic to the topological K-homology groups of the C*-algebra A. 
Then we use the Dolbeault complex and ideas from the classical methods in Kasparov K-theory to construct an acyclic chain complex in this category, which in turn, induces a Riemann-Roch transformation in the homotopy category of spectra, from the algebraic K-theory spectrum of a complex manifold X, to its topological K-homology spectrum. Palindromicity and the local invariant cycle theorem (Patrick Brosnan, January 23, 2019): In its most basic form, the local invariant cycle theorem of Beilinson, Bernstein and Deligne (BBD) gives a surjection from the cohomology of the special fiber of a proper morphism of smooth varieties to the monodromy invariants of the general fiber. This result, which is one of the last theorems stated in the book by BBD, is a relatively easy consequence of their famous decomposition theorem. In joint work with Tim Chow on a combinatorial problem, we needed a simple condition ensuring that the above surjection is actually an isomorphism. Our theorem is that this happens if and only if the special fiber has palindromic cohomology. I will explain the proof of this theorem and a generalization proved using the (now known) Kashiwara conjecture. I will also say a little bit about the combinatorial problem (the Shareshian-Wachs conjecture on Hessenberg varieties) which motivated our work. 2-Segal spaces and algebraic K-theory (Julie Bergner, November 28, 2018): The notion of a 2-Segal space was defined by Dyckerhoff and Kapranov and independently by Galvez-Carrillo, Kock, and Tonks under the name of decomposition space. Although these two sets of authors had different motivations for their work, they both saw that a key example is obtained by applying Waldhausen's S-construction to an exact category, showing that 2-Segal spaces are deeply connected to algebraic K-theory. In joint work with Osorno, Ozornova, Rovelli, and Scheimbauer, we show that any 2-Segal space arises from a suitable generalization of this construction. Furthermore, our generalized input has a close relationship to the CGW categories of Campbell and Zakharevich. In this talk, I'll introduce 2-Segal structures and discuss what we know and would like to know about the role they play in algebraic K-theory. Low Degree Cohomology (Bob Guralnick November 14, 2018): Let G be a finite group with V an absolutely irreducible kG-module with k a field of positive characteristic. We are interested in bounds on the dimension of the first and second degree cohomology groups of G with coefficients in V. We will discuss some old and new bounds, conjectures and applications. Low degree points on curves (Isabel Vogt, November 7, 2018): We will discuss an arithmetic analogue of the gonality of a curve over a number field: the smallest positive integer $e$ such that the points of residue degree bounded by $e$ are infinite. By work of Faltings, Harris-Silverman and Abramovich-Harris, it is understood when this invariant is 1, 2, or 3; by work of Debarre-Fahlaoui these criteria do not generalize to $e$ at least 4. We will focus on scenarios under which we can guarantee that this invariant is actually equal to the gonality using the auxiliary geometry of a surface containing the curve. This is joint work with Geoffrey Smith. Algebraic groups with good reduction and unramified cohomology (Igor Rapinchuk, October 31, 2018): Let $G$ be an absolutely almost simple algebraic group over a field K, which we assume to be equipped with a natural set V of discrete valuations. 
In this talk, our focus will be on the K-forms of $G$ that have good reduction at all v in V . When K is the fraction field of a Dedekind domain, a similar question was considered by G. Harder; the case where $K=\mathbb{Q}$ and V is the set of all p-adic places was analyzed in detail by B.H. Gross and B. Conrad. I will discuss several emerging results in the higher-dimensional situation, where K is the function field $k(C)$ of a smooth geometrically irreducible curve $C$ over a number field k, or even an arbitrary finitely generated field. These problems turn out to be closely related to finiteness properties of unramified cohomology, and I will present available results over various classes of fields. I will also highlight some connections with other questions involving the genus of $G$ (i.e., the set of isomorphism classes of K-forms of $G$ having the same isomorphism classes of maximal K-tori as $G$), Hasse principles, etc. The talk will be based in part on joint work with V. Chernousov and A. Rapinchuk Generalized Polar Geometry (Sandra Di Rocco, October 24, 2018): Polar classes are very classical objects in Algebraic Geometry. A brief introduction to the subject will be presented and ideas and preliminarily results towards generalizations will be explained. These ideas can be applied towards variety sampling and relevant applications in Kinematics and Biochemistry. Brauer class over the Picard scheme of curves (Qixiao Ma, October 24, 2018): We study the Brauer class rising from the obstruction to the existence of a tautological line bundle on the Picard scheme of curves. If we consider the universal totally degenerate curve with a fixed dual graph, then, using symmetries of the graph, we give bounds on the period and index of the Brauer classes. As a result, we provide some division algebra of prime degree, serving as candidates for the cyclicity problem. On singularity properties of convolutions of algebraic morphisms (Yotam Hendel, October 10, 2018): In analysis, the convolution of two functions results in a smoother, better behaved function. It is interesting to ask whether an analogue of this phenomenon exists in the setting of algebraic geometry. Let $f$ and $g$ be two morphisms from algebraic varieties X and Y to an algebraic group $G$. We define their convolution to be a morphism $f*g$ from $X\times Y$ to $G$ by first applying each morphism and then multiplying using the group structure of $G$. In this talk, we present some properties of this convolution operation, as well as a recent result which states that after finitely many self convolutions every dominant morphism $f:X\to G$ from a smooth, absolutely irreducible variety X to an algebraic group G becomes flat with reduced fibers of rational singularities (this property is abbreviated FRS). The FRS property is of particular interest since by works of Aizenbud and Avni, FRS morphisms are characterized by having fibers whose point count over the finite rings $Z/p^kZ$ is well-behaved. This leads to applications in probability, group theory, representation growth and more. We will discuss some of these applications, and if time permits, the main ideas of the proof which utilize model-theoretic methods. Joint work with Itay Glazer. Irrationality of Motivic Zeta Functions (Michael Larsen, October 5, 2018): It is a remarkable fact that the Riemann zeta function extends to a meromorphic function on the whole complex plane. 
A conjecture of Weil, proved by Dwork, asserts that the zeta function of any variety over a finite field is likewise meromorphic, from which it follows that it can be expressed as a rational function. In the case of curves, Kapranov observed that this is true in a very strong sense, which continues to hold even in characteristic zero. He asked whether this remains true for higher dimensional varieties. Valery Lunts and I disproved his conjecture fifteen years ago, and recently disproved a weaker conjecture due to Denef and Loeser. This explains, in some sense, why Weil's conjecture was so much easier in dimension 1 than in higher dimension. Charles Weibel / weibel @ math.rutgers.edu / November 1, 2021
CommonCrawl
On neighbourhood degree sequences of complex networks
Keith M. Smith (ORCID: orcid.org/0000-0002-4615-9020), Scientific Reports volume 9, Article number: 8340 (2019)

Network topology is a fundamental aspect of network science that allows us to gather insights into the complicated relational architectures of the world we inhabit. We provide a first specific study of neighbourhood degree sequences in complex networks. We consider how to explicitly characterise important physical concepts such as similarity, heterogeneity and organization in these sequences, as well as updating the notion of hierarchical complexity to reflect previously unnoticed organizational principles. We also point out that neighbourhood degree sequences are related to a powerful subtree kernel for unlabeled graph classification. We study these newly defined sequence properties in a comprehensive array of graph models and over 200 real-world networks. We find that these indices are neither highly correlated with each other nor with classical network indices. Importantly, the sequences of a wide variety of real world networks are found to have greater similarity and organisation than is expected for networks of their given degree distributions. Notably, while biological, social and technological networks all showed consistently large neighbourhood similarity and organisation, hierarchical complexity was not a consistent feature of real world networks. Neighbourhood degree sequences are an interesting tool for describing unique and important characteristics of complex networks.

Contemplating the roles of components in natural and man-made systems, we begin to realise their diversity. Take, for example, the structure of an organisation. At face value, employees are assigned titles and pay-scales which place the workforce in a convenient hierarchy, with each level comprising equivalencies based on the competitive value of the work done. However, in large and multifaceted organisations the work done is often highly variable and it is beneficial to have employees with a diverse range of skills and talents interacting in different ways. Network science provides a natural framework to understand relationship patterns of such complex systems, and we shall here formulate and study hierarchical equivalency in terms of neighbourhood degree sequences of complex networks. Figure 1A provides an illustration of how neighbourhood degree sequences intuitively help to understand global hierarchical patterns.
Figure 1. (A) How can we efficiently capture the organisation of this graph mathematically without reference to node placements on the plane? We can note that the neighbourhoods of nodes of a given degree are equivalent with respect to the degrees of nodes they connect to – e.g. all yellow nodes (degree 4) connect to the same number of green (degree 3), orange (degree 5) and red (degree 8) nodes. Thus neighbourhood degree sequences appear as a promising avenue. (B) Illustration of a multi-ordered degree graph whose equal-degree nodes are organised into two distinct classes with different degree sequences. (C) Illustration of a subtree of height 2 for node i in panel A. The number of nodes at height 1 is the degree of i, while for a node at height 1, its degree is the number of nodes at height 2 extending from it, all captured by i's neighbourhood degree sequence, \(s_i\).

The distribution of connections among nodes in complex networks, known as the degree distribution, is a key consideration of its topology. Predated by the study of degree sequences [1], interest in degree distributions arose from the study of real-world networks, where it was noted that they approximated various statistical distributions with heavy tails [2], being particularly driven by the prevalence of strong hubs in real-world networks which are not present, for example, in random graphs [3], random geometric graphs [4] and small-world models [5]. Pertinent random null models, called configuration models, have since been developed in which the degree distribution is fixed, allowing unbiased random controls for studying network topologies [6,7]. Although often explicitly mentioned with regard to real-world networks, what is meant by concepts such as organisation and complexity has largely been left to intuition. In seeking to understand the complexity of real world networks, Smith & Escudero [8] recently proposed to look at neighbourhood degree sequences. For a given node, its neighbourhood degree sequence was defined as the ordered degrees of nodes in its neighbourhood. This was based on observations that ordered networks such as regular networks, quasi-star networks, grid networks and highly patterned networks shared the common feature of highly homogeneous neighbourhood degree sequences for nodes of the same degree. Conceptualising the degree distribution as a hierarchy of nodes, they proposed an index called hierarchical complexity to characterise the heterogeneity of hierarchically equivalent (i.e. same degree) nodes. Note, the term 'hierarchy' in networks is also associated with the scaling of community structure [9,10]. Here, it is used – in the more lexically familiar sense – with respect to levels of importance, where nodes of higher degree are often considered of higher importance in the network topology [11]. Hierarchical complexity was developed in the context of electroencephalogram functional connectivity, which, in contrast to ordered and random systems, was found to have inordinately high levels of heterogeneity amongst its neighbourhood degree sequences [8]. This concept has since been utilised to help understand how best to binarise EEG functional connectivity for topological analysis [12] and has been validated in structural MRI networks [13]. However, the prevalence of such topology amongst complex networks in general is unknown.
In pure mathematics, Barrus & Donovan independently initiated study of neighbourhood degree lists as a topological invariant more refined than both the degree sequence and the joint degree graph matrix [14], while Nishimura & Subramanya proposed to study neighbourhood degree lists for the combinatorial problem of changing a graph into one with given neighbourhood degrees [15]. That is as far as has been done with neighbourhood degree sequences to date. Yet, the intriguing insights provided by hierarchical complexity in brain networks make a broader study of neighbourhood degree sequences across a broader range of domains worthwhile. This work follows earlier work involving neighbouring degrees and centralities such as the eigenvector centrality, a centrality index which is larger depending on the centralities of the nodes a node is connected to [16]; assortativity, an index of degree-degree correlation between connected nodes [17]; and network entropy, a measure of edgewise node degree eccentricity [18]. Neighbourhood degree sequences, however, are a completely separate consideration of networks. Most notably, rather than comparing nodes which are connected to each other, we compare nodes which have the same degree, irrespective of whether they are connected or not, regarding such nodes as hierarchically equivalent within the network topology. In this study a number of ways to analyse neighbourhood degree sequences are proposed. Notably, indices of node heterogeneity and neighbourhood similarity are introduced. We also consider a new notion of multi-orderedness in a network. This is based on the observation that nodes of a given degree in an ordered network may have several distinct neighbourhood degree sequences. This gives rise to another index, defined as neighbourhood organisation, which measures the extent to which such multi-orderedness is present in the network. We then show that the existence of multi-ordered degrees can artificially raise the network's hierarchical complexity. Thus, we utilise the formulation of neighbourhood organisation to provide a version of hierarchical complexity which corrects for multi-ordered degrees. We also describe how neighbourhood degree sequences have clear links with powerful and efficient subtree kernels for graph classification. The proposed indices are then applied to a range of network models and compared with existing classical network indices, the aim of which is to ascertain to what extent these indices explain unique topological properties in complex networks. They are also applied to 215 real world networks from various disciplines of study in order to assess the characteristics of neighbourhood degree sequences in the world around us and the insights these new indices offer.

Neighbourhood Degree Sequences

For \(k_i\) the degree of node i, the neighbourhood degree sequence, \(s_i\), of node i is
$$s_i=\{k_1^i,k_2^i,\ldots ,k_{k_i}^i\},$$
where the \(k_j^i\)s are the degrees of the nodes to which i is connected and such that \(k_1^i\le k_2^i\le \ldots \le k_{k_i}^i\). For example, the graph in Fig. 1A has four degree 4 nodes (yellow), all with neighbourhood degree sequence {3, 3, 5, 8}, and four degree 5 nodes (orange), all with neighbourhood degree sequence {3, 4, 5, 5, 8}. In the following we shall consider a number of ways to study these sequences.
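As a concrete illustration of this definition, the following is a minimal sketch (not code from the paper) of how the degree vector and neighbourhood degree sequences could be computed from a binary, symmetric adjacency matrix; the function name and the use of numpy are choices made here for illustration only.

```python
import numpy as np

def neighbourhood_degree_sequences(A):
    """Return (k, seqs): the degree of each node and, for each node i,
    the sorted degrees of i's neighbours (empty list for isolated nodes)."""
    A = np.asarray(A)
    k = A.sum(axis=0)                      # node degrees k_i
    seqs = []
    for i in range(A.shape[0]):
        neighbours = np.flatnonzero(A[i])  # nodes j with A[i, j] = 1
        seqs.append(sorted(int(k[j]) for j in neighbours))
    return k, seqs

# Small check: a path on 4 nodes has degrees 1, 2, 2, 1, so the
# neighbourhood degree sequences are [2], [1, 2], [1, 2], [2].
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])
k, seqs = neighbourhood_degree_sequences(A)
print(seqs)
```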
Node heterogeneity

One way to characterise neighbourhood degree sequences would be to employ the same methods used to characterise degree distributions and then average over all nodes. As a pertinent example of this, a common index of graph heterogeneity is the degree variance v = var(k) [19]. We can then define node heterogeneity, \(V_n\), as the average variance of neighbourhood degree sequences of a graph for all nodes of degree greater than 1:
$$V_n(G)=\frac{1}{n}\sum_{i\ \mathrm{s.t.}\ k_i>1}\mathrm{var}(s_i).$$
Of course, it is then interesting to understand how average node heterogeneity compares to graph heterogeneity, i.e. comparing local and global heterogeneities of a graph. To do this we can simply divide \(V_n\) by v, giving
$$\hat{V}_n(G)=\frac{1}{n\,\mathrm{var}(k)}\sum_{i\ \mathrm{s.t.}\ k_i>1}\mathrm{var}(s_i).$$
High values of this measure tell us that nodes tend to be connected to nodes of heterogeneous degrees, given the degree distribution, and low values tell us the opposite. Specifically, if this value is below 1, the degree variance within the neighbourhoods is on average less than the global degree variance, indicating that the nodes have more homogeneous neighbourhood degrees. It is worth highlighting the distinction between this and assortativity, which seeks to measure the similarity of degrees of connected nodes. Node heterogeneity is a measure of the similarity of the degrees of all neighbouring nodes, irrespective of the degree of the node itself. Note that v is clearly minimal for regular graphs and is known to be maximal for quasi-star and quasi-complete graphs for any given number of nodes and edges [20]. On the other hand, \(V_n\) is zero for regular graphs but is also small for quasi-star and quasi-complete graphs. For instance, the star graph consists of one node connected to all other nodes and no other edges. Thus it has one node of degree n − 1 with neighbourhood degree sequence {1, 1, …, 1} and n − 1 nodes of degree 1, each with neighbourhood degree sequence {n − 1}. Clearly, these all have zero variance, giving \(V_n = 0\) for the star graph. This is interesting because, while some believe star graphs should have maximum heterogeneity [21], \(V_n\) points to a possibly different view. The degree distribution of a star graph is just 1 node away from being completely regular – take the dominant node out and you have an empty graph (redundantly regular). Heterogeneity could perhaps be alternatively formulated in the sense that removing or adding nodes does not relegate the graph to being regular.

Neighbourhood similarity

The other way of characterising neighbourhood degree sequences we shall consider is to compare all neighbourhood degree sequences of equal length. Indeed, this is the perspective employed to formulate hierarchical complexity, looking at the element-wise variance of equal-length neighbourhood degree sequences. Another, simpler characteristic can be posed by considering the number of nodes in the network whose neighbourhood degree sequence matches that of another node in the graph. We call this neighbourhood similarity (reflecting the concept of geometric similarity) and, using the Kronecker delta function δ(x, y), which is 1 if x = y and 0 otherwise, write
$$S(G)=\frac{\sum_{i=1}^{n}\left(1-\delta\left(\sum_{j\ne i}\delta(s_i,s_j),0\right)\right)}{n}.$$
Notice, this uses the δ function twice. The first time is to find the number of matching neighbourhood degree sequences for node i.
The second δ is used to determine whether there are any matching sequences, i.e. whether the sum of the first δs is different from 0. Since this is a negation (the outer δ returns 0 if there are any matches), we then subtract the result from 1 to obtain the answer to whether any match exists for node i. Summing over all i and dividing by n gives the proportion of nodes which have at least one matching neighbourhood degree sequence. It is clear that 0 ≤ S ≤ 1 for all graphs, since it concerns a fraction of the network nodes. It certainly attains 1 for regular graphs. Moreover, we prove the following result with respect to graph symmetry on the plane, establishing the link between neighbourhood similarity and graph symmetry.

Proposition 1: Let G be a graph which can be arranged on the plane such that G has mirror or rotational symmetry whose axis does not pivot on any node. Then S(G) = 1.

Proof: Let si be the neighbourhood degree sequence of an arbitrary node i. Then the node j in the position symmetric to i with respect to the axis of symmetry has neighbourhood degree sequence sj and has the same degree as i. Further, each node pi in the neighbourhood of i also has a node pj in the position symmetric to pi with respect to the axis of symmetry; pj is connected to j and, by symmetry, \(k_{p_i} = k_{p_j}\). Thus si = sj and, since i was arbitrary and no nodes lie on the axis of symmetry itself, S(G) = 1, as required.

Thus, the neighbourhood similarity of a graph is indeed related to the planar symmetry of the graph. That being said, the converse is not true– not every graph with S(G) = 1 is planar symmetric, as can be quickly seen by considering non-symmetric regular graphs such as the Frucht graph22.

Hierarchical complexity: oversights of multi-ordered degree graphs

Hierarchical complexity is an index developed with the aim of being low for all highly ordered graphs and for graphs with simple generative mechanisms– simple in the sense that one needs only a few rules to generate the graph, as in random graphs (edges exist with uniformly random probabilities) or random geometric graphs (nodes are randomly sampled in an n-dimensional Euclidean space and then connected based on distances in that space). In this sense, one can describe precisely how one can expect the graph and subsamples of the graph to behave. On the other hand, attempts to model real world networks indicate that a larger and more complicated set of rules would be required to generate complex network-like topologies, where subsamples of the graph (such as node neighbourhoods) would be less likely to show similar behaviours13. The hypothesis is that nodes of a given degree in highly ordered graphs play equivalent roles in the topology, which implies that they have the same or similar neighbourhood degree sequences. However, what is not taken into account in its formulation is the possibility of a high degree of order in which nodes of a given degree split into several groups, within each of which the sequences are identical. For example, Fig. 1B shows a graph whose nodes have degree 1 or 6. The six-degree nodes have one of two neighbourhood degree sequences, {1, 1, 6, 6, 6, 6} and {6, 6, 6, 6, 6, 6}, as illustrated by the green and orange nodes, respectively. One-degree nodes are connected to either one- or six-degree nodes, as illustrated by the grey and yellow nodes, respectively. We call such a graph a multi-ordered degree graph.
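Before formalising multi-ordered degrees, here is a minimal sketch (our own, in Python with networkx and numpy) of the two indices just introduced: node heterogeneity, its normalised form, and neighbourhood similarity. The normalisation choices follow our reading of the formulas above.

```python
# Sketch: node heterogeneity V_n, its normalised form, and neighbourhood
# similarity S(G). Function names and normalisation choices are ours.
import networkx as nx
import numpy as np

def nds(G):
    """Neighbourhood degree sequence of every node, as a sorted tuple."""
    deg = dict(G.degree())
    return {i: tuple(sorted(deg[j] for j in G.neighbors(i))) for i in G.nodes()}

def node_heterogeneity(G, normalised=False):
    seqs = nds(G)
    n = G.number_of_nodes()
    # average (population) variance of the sequences of nodes with degree > 1
    vn = sum(np.var(s) for s in seqs.values() if len(s) > 1) / n
    if normalised:
        vn /= np.var([d for _, d in G.degree()])  # divide by global degree variance
    return vn

def neighbourhood_similarity(G):
    seqs = list(nds(G).values())
    # fraction of nodes whose sequence matches that of at least one other node
    matched = sum(1 for i, s in enumerate(seqs)
                  if any(s == t for j, t in enumerate(seqs) if j != i))
    return matched / len(seqs)

if __name__ == "__main__":
    G = nx.erdos_renyi_graph(50, 0.1, seed=1)
    print(node_heterogeneity(G), node_heterogeneity(G, normalised=True))
    print(neighbourhood_similarity(G))
```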
Definition 1: Let qp be the number of all p-length neighbourhood degree sequences in the graph and \(\sigma_p = \{s_i\}_{k_i = p}\) be the set of (unique) p-length neighbourhood degree sequences. Then p is a multi-ordered degree of the graph if 1 < |σp| ≪ qp. A graph in which, for every degree p, either 1 < |σp| ≪ qp or |σp| = 1 is called a multi-ordered degree graph.

Neighbourhood organisation

We can pose a measure for this sense of multi-ordered degrees using neighbourhood degree sequences. We could simply divide the number of unique p-length sequences by the total number of p-length sequences, giving
$$\frac{|\sigma_p|}{q_p}, \quad (5)$$
however this is the same no matter how many of the unique degree sequences occur more than once. Consider the following. Let cpj denote the number of neighbourhood degree sequences of length p in G that are equal to sj ∈ σp. Then, for example, take qp = 5 and |σp| = 3. We could have cp1 = 1, cp2 = 1 and cp3 = 3, or cp1 = 1, cp2 = 2 and cp3 = 2. Both of these options would have the same value of (5), yet the latter better exhibits multi-orderedness, since there are two distinct sequences which occur more than once, rather than just one as in the former case. We can offset (5) by considering the differences between the number of p-length sequences, qp, and the number of occurrences of each (unique) neighbourhood degree sequence in σp. Then \(\sum_{j=1}^{|\sigma_p|} c_{pj} = q_p\) and we consider the quantity
$$\sum_{j=1}^{|\sigma_p|} (q_p - c_{pj}).$$
This is maximal, qp(qp − 1), when all p-length neighbourhood degree sequences are unique, and zero (i.e. minimal) when all p-length neighbourhood degree sequences are equal. We can thus normalise this term as
$$\frac{\sum_{j=1}^{|\sigma_p|} (q_p - c_{pj})}{q_p(q_p - 1)}. \quad (6)$$
Just taking (6) would also not reflect the multi-order requirement. It is really the combination of (5) and (6) that is required to realise a measure of multi-ordered degrees– elements of σp should occur frequently and, at the same time, the number of unique sequences should be as large as possible. Combining (5) and (6), then, we get
$$\omega_p = \frac{|\sigma_p| \sum_{j=1}^{|\sigma_p|} (q_p - c_{pj})}{q_p^2 (q_p - 1)}.$$
Taking the mean of this over all degrees and subtracting from 1, we have the neighbourhood organisation coefficient
$$\Omega(G) = 1 - \frac{1}{|\mathscr{D}_2|} \sum_{p=1}^{n-1} \omega_p,$$
where \(\mathscr{D}_2\) is the set of degrees of the graph taken by at least 2 nodes.

Updated hierarchical complexity

Given the above consideration of multi-ordered degrees and the neighbourhood organisation index, we can formulate an update to hierarchical complexity that takes multi-ordered degrees into account. In the terminology of this paper, hierarchical complexity can be written
$$R(G) = \frac{1}{|\mathscr{D}_2|} \sum_{p \in \mathscr{D}_2} \frac{1}{p(q_p - 1)} \sum_{j=1}^{p} \sum_{i \in \mathscr{V}_p} \left(s_i^p(j) - \mu^p(j)\right)^2,$$
where \(\mathscr{V}_p\) is the set of nodes of degree p and μp(j) is the mean of the jth entries of all p-length neighbourhood degree sequences.
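The neighbourhood organisation coefficient and hierarchical complexity as written above can be sketched as follows (our own implementation; D2 is taken to be the set of degrees shared by at least two nodes, and the element-wise variance uses the sample variance, matching the qp − 1 factor).

```python
# Sketch: omega_p, the neighbourhood organisation coefficient Omega(G), and
# hierarchical complexity R(G) as written above. Implementation details are ours.
from collections import Counter, defaultdict
import networkx as nx
import numpy as np

def sequences_by_degree(G):
    deg = dict(G.degree())
    by_p = defaultdict(list)
    for i in G.nodes():
        by_p[deg[i]].append(tuple(sorted(deg[j] for j in G.neighbors(i))))
    return by_p

def omega(seqs):
    q = len(seqs)                      # q_p: number of p-length sequences
    counts = Counter(seqs)             # c_pj: occurrences of each unique sequence
    if q < 2:
        return 0.0
    return len(counts) * sum(q - c for c in counts.values()) / (q**2 * (q - 1))

def neighbourhood_organisation(G):
    by_p = sequences_by_degree(G)
    D2 = [p for p, s in by_p.items() if len(s) >= 2]
    if not D2:
        return 0.0
    return 1.0 - sum(omega(by_p[p]) for p in D2) / len(D2)

def hierarchical_complexity(G):
    by_p = sequences_by_degree(G)
    terms = []
    for p, seqs in by_p.items():
        if p > 0 and len(seqs) >= 2:
            a = np.array(seqs)                             # shape (q_p, p)
            terms.append(a.var(axis=0, ddof=1).sum() / p)  # sample variance per entry
    return float(np.mean(terms)) if terms else 0.0

if __name__ == "__main__":
    G = nx.barabasi_albert_graph(100, 3, seed=0)
    print(neighbourhood_organisation(G), hierarchical_complexity(G))
```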
To correct for multi-ordered degrees in this index, we can insert the term ωp inside the first summand to give
$$R_{\Omega}(G) = \frac{1}{|\mathscr{D}_2|} \sum_{p \in \mathscr{D}_2} \frac{\omega_p}{p(q_p - 1)} \sum_{j=1}^{p} \sum_{i \in \mathscr{V}_p} \left(s_i^p(j) - \mu^p(j)\right)^2.$$
When ωp is small, multi-orderedness is present among the degree-p nodes and thus the contribution of these degrees to hierarchical complexity is suppressed, and vice versa. Computing this for the example in Fig. 1A we obtain RΩ = 0.0029– a 65-fold decrease from R and a more reasonable expected value of neighbourhood degree sequence diversity.

Link to the graph isomorphism problem

The Weisfeiler-Lehman graph isomorphism test23 is a powerful method for distinguishing labelled graph topologies which succeeds for almost all graphs24. Based on this test, subtree kernels have been produced for assessing graph similarity in machine learning approaches which are highly efficient compared to other successful kernels25. Indeed, these subtree kernels have been shown to outperform the competition when implemented in a graph neural network approach, while mapping similar graph topologies to similar embeddings in a low-dimensional space26. The subtree of node i of height h is the tree rooted at i which extends out to i's neighbours and then out again to i's neighbours' neighbours and so on for h steps, see Fig. 1C. The kernel is a reduction of these subtrees to identifying labels which are then compared between two graphs to check their similarity. Subtrees of height h = 2 or 3 have been shown to achieve the best performance in most cases25. The link to neighbourhood degree sequences can then be established by realising that the information in a subtree of height 2 in an unlabelled graph is completely captured by the node's neighbourhood degree sequence. The length of the neighbourhood degree sequence tells us how many nodes are at height 1 of the subtree (i.e. the degree of the node), while the entries of the sequence tell us how many nodes at height 2 are linked to each node at height 1 (the degrees of each neighbouring node).

Real-world networks

Thirty networks were obtained from the network repository27, from different research domains. Descriptions are kept to a minimum; for further details, we refer the reader to the references. The classical Zachary's karate club network28, a dolphin social network29, the Advogato network30; the anybeat network; the Hamsterster network31; and a wikivote network32.

Biological networks
The macaque cortex network freely available from the BCT was used33. This comes as a binary, directed network. To make it undirected we simply took all connections as undirected, to signify whether or not any connection exists between two regions. We also look at the undirected C. elegans metabolic network34; bioGRID protein networks of the fruitfly, mouse and a plant; a yeast protein interaction network35; and a mouse brain network36.

Ecological networks
The everglades, florida and mangwet ecosystem networks37.

Economic networks
The global city network is a network of economic ties between cities38. This is a weighted network which was binarised at 20% density (the 20% of largest weights kept) for our analysis. We also used the beacxc and beaflw economic networks.

Interaction networks
A university email network39; a Dublin infection network40; and an Enron email network41.
Infrastructure networks
A US and Canada airport network found in the Graph Algorithms in Matlab Code toolbox42; the euroroad network43; and a power grid network5.

Web networks
The EPA hyperlink network44; the edu hyperlink network45; and the indochina 2004 hyperlink network46.

Technological networks
A router network.

In addition, we study a benchmark dataset of 406 real world networks used in47, from the Colorado Index of Complex Networks48. This includes 186 static networks, of which just 3 overlap with the above (the dolphin social network, the macaque cortex and the university email network). It also includes two temporal networks relating to the same data on organisation affiliations, each with 111 samples taken monthly from May 2002 until August 201149. The first of these is a network of organisation co-affiliations of directors while the other is a network of co-directorship among organisations.

Configuration models
Random graphs with fixed degree distributions7 were generated using a freely available algorithm in the Brain Connectivity Toolbox33. Fifty randomisations were computed for each real world network.

Classical global network indices

Clustering coefficient
The global clustering coefficient, C, measures the ratio of closed to open triples in the network. A triple is a path of length two, {(i, j), (j, k)}; it is closed if (k, i) also exists in the network and open otherwise. It is a measure of network segregation.

Degree variance
The degree variance, v = var(k), is a measure of network heterogeneity19. Here we use the normalised version50.

Characteristic path length
The characteristic path length, L, is the average of the shortest paths existing between all pairs of nodes in the network. It is known as a measure of network integration.

Assortativity
Assortativity, r, is a correlation of the degrees of nodes which are connected in the network. It is positive if similar degree nodes are generally connected to one another, negative if similar degree nodes are generally not connected to one another, and zero if there is no pattern of correlation17.

Modularity, Q, measures the propensity of nodes to form highly connected communities which are less connected to the rest of the network51.

The supplementary material contains results of these indices for a variety of different models– random graphs3, random geometric graphs4, small-world models5, scale-free models52 and random hierarchy models8. The main article shall focus on experiments using the most relevant data of all– over 200 real world networks.

Index correlations
Spearman correlations were computed between the proposed indices alongside classical network indices across all real networks, Fig. 2. We used Spearman's correlation since the values clearly did not follow a normal distribution (i.e. Pearson's correlation would not have been valid). The red box contains all correlations between neighbourhood degree sequence indices and classical network indices. It is clear that there are no observable high correlations between the proposed indices and classical indices, providing strong evidence that these new indices indeed explain previously unrealised properties of network topology. Unsurprisingly, R and RΩ were highly correlated, although the correlation between Ω and RΩ was only low to moderate. The fact that there were no strong correlations (>0.8) other than between R and RΩ suggests there is a rich amount of information to be obtained from neighbourhood degree sequences.
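For reference, the classical indices listed above are all available in standard libraries; the sketch below (our own, using networkx and numpy) computes them for a single graph. A matrix of Spearman correlations across a collection of networks, as in Fig. 2, can then be obtained with scipy.stats.spearmanr. The community partition used for Q is greedy modularity maximisation, which is our own choice; the paper does not specify the partitioning algorithm.

```python
# Sketch: the classical global indices used for comparison, via networkx.
# The community partition for Q uses greedy modularity maximisation (our choice).
import networkx as nx
import numpy as np
from networkx.algorithms.community import greedy_modularity_communities, modularity

def classical_indices(G):
    degrees = np.array([d for _, d in G.degree()])
    return {
        "C": nx.transitivity(G),                        # global clustering coefficient
        "v": float(np.var(degrees)),                    # degree variance (unnormalised)
        "L": nx.average_shortest_path_length(G),        # assumes a connected graph
        "r": nx.degree_assortativity_coefficient(G),    # assortativity
        "Q": modularity(G, greedy_modularity_communities(G)),
    }

if __name__ == "__main__":
    G = nx.connected_watts_strogatz_graph(100, 6, 0.1, seed=2)
    print(classical_indices(G))
```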
Figure 2: Absolute values of index correlations (Spearman's correlation coefficient) for combined values of small-world, scale-free, random, random geometric, and random hierarchy network models and the thirty real world networks considered (a total of 150 samples). Within the red square are the correlations between the neighbourhood degree indices (S– neighbourhood similarity, Vn– node heterogeneity, Ω– neighbourhood organisation, R– hierarchical complexity, RΩ– hierarchical complexity corrected for multi-ordered degrees) and the classical network indices (C– transitivity, v– degree variance, L– characteristic path length, r– assortativity, and Q– modularity).

On the other hand, among the classical network indices, strong correlations were found between L, v and Q, indicating that these indices all pointed mostly towards a single topological property of the networks. We suggest that this property is likely to concern the dominance of hub nodes, since these nodes are those which enable generally short path lengths, while Newman's modularity is known to be confounded by hubs53. Although only correlations above the standard threshold of 0.8 have been highlighted as high, there are notable moderate correlations between L and S (0.6477), Q and S (0.6419) and v and R (0.6274). However, the average correlation across all metric pairs has a magnitude of 0.4283, which would be regarded as a low-to-moderate correlation. We should expect measurements of a network to show some degree of correlation simply because they are measuring the same topologies, and because complex networks tend to show broadly consistent features in comparison with random null models. Nonetheless, the standard deviation of the metric correlation magnitudes is 0.2269, putting one standard deviation above the mean at 0.6552, above which none of the moderate correlations previously mentioned lie. Thus, although in usual terms these are moderate correlations, with respect to complex network metrics they appear to be within reasonable limits to suggest they broadly measure different network properties. It is also worth recalling that correlation does not imply causation. The general tendency of complex networks to exhibit correlated metrics does not necessarily mean the metrics measure the same or similar property of the network; it may be, for instance, that networks which have greater modularity have greater characteristic path lengths by virtue of an underlying joint causation.

Characteristics of real-world networks

All proposed indices were applied to the thirty real-world networks of the Network Repository and the 181 non-overlapping static networks of the ICON, alongside median values taken over the two temporal networks. In addition, ten realisations of configuration models with fixed degree distributions were generated for each real-world network and we compared the neighbourhood indices of the real networks with the average values obtained from the configuration models. The results are described for each Network Repository network in Table 1. Scatter plots of all real network values against configuration model values are shown in Fig. 3.

Table 1: Neighbourhood degree sequence characteristics of 30 real-world networks from the Network Repository.

Figure 3: Scatter plots (log-log scale) of real network index values against average values for configuration models.

Real world networks have generally well organised neighbourhoods for their degree distributions.
They also show a tendency towards hierarchical complexity, although there is a strong drop in statistical significance when taking multi-ordered degrees into account. Although all indices showed significant differences between real networks and configuration models (Table 2, first row), the greatest general differences were found in neighbourhood similarity, p = 4.74 × 10−28 with a paired ranked effect size of 0.5320, and in neighbourhood organisation, p = 1.66 × 10−23 with a paired ranked effect size of 0.4841. This is clearly observed in Fig. 3, first and centre plots, respectively. On the other hand, hierarchical complexity was only weakly greater in real networks than in their configuration models. This was even less convincing when we took account of multi-orderedness, which increased the p-value to just below 0.05. This is interesting in light of the work done on the hierarchical complexity of human brain function and structure. Hierarchical complexity was not a consistent feature of real world networks and can thus be conjectured to be a special feature of brain networks, where a great diversity of functional roles is present13.

Table 2: Statistical differences between neighbourhood degree sequence characteristics of 213 real-world networks.

Tentatively, hierarchical complexity also appears to be a strong property of ecological networks. We only studied three such networks here, but all had substantially higher hierarchical complexity than expected for their degree distributions, while their other characteristics were not notably different from the expected values, Table 1.

We then looked at neighbourhood degree sequence properties among different network classes. We applied Wilcoxon signed-rank tests, as before, but this time restricted to classes and subclasses of networks; see47 for more details. Results are shown in Fig. 4. Greater neighbourhood organisation and similarity were found consistently among all classes with high enough statistical power. On the other hand, for technological networks, including digital circuit networks, no difference in node heterogeneity between real networks and their configuration models was found, suggesting a general topological difference between technological networks and, particularly, biological and social networks. Interestingly, technological networks (including digital circuit networks) were found to have less hierarchical complexity than their configuration models. We expect that this is due to the higher degree of order present in digital circuit networks, where different components connect in limited ways, constrained by the logical ordering of electronics. It was also very noticeable that the difference in hierarchical complexity in biological and social networks dropped away when updating for multi-orderedness, suggesting that multi-orderedness is a distinct feature of biological and social networks. In biological networks, this appeared to be driven by protein networks, since food webs and connectomes were not found to be more hierarchically complex than configuration models even under the original definition. The fact that the connectomes of animals (3 cat, 5 primate, 2 macaque, 2 nematode, and 2 visual-cortex neuron-level networks in human) were not found to have a general property of hierarchical complexity again suggests the specialness of this feature in the macro-level human brain particularly13 and hints towards possible links with intelligence.
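The comparison between real networks and their degree-preserving null models described above can be sketched as follows. We use networkx's double_edge_swap (Maslov-Sneppen rewiring) to build configuration-model surrogates and scipy's Wilcoxon signed-rank test across the collection of networks; this is one plausible reading of the procedure, not the authors' code.

```python
# Sketch: comparing an index on a collection of networks against degree-
# preserving null models, followed by a Wilcoxon signed-rank test.
import networkx as nx
import numpy as np
from scipy.stats import wilcoxon

def rewired_null(G, swaps_per_edge=10, seed=0):
    """Degree-preserving randomisation via repeated double edge swaps."""
    H = G.copy()
    nswap = swaps_per_edge * H.number_of_edges()
    nx.double_edge_swap(H, nswap=nswap, max_tries=100 * nswap, seed=seed)
    return H

def null_mean(G, index_fn, n_models=10):
    return np.mean([index_fn(rewired_null(G, seed=s)) for s in range(n_models)])

def compare(graphs, index_fn, n_models=10):
    real = [index_fn(G) for G in graphs]
    null = [null_mean(G, index_fn, n_models) for G in graphs]
    return wilcoxon(real, null)       # paired test across the collection

if __name__ == "__main__":
    graphs = [nx.barabasi_albert_graph(80, 3, seed=s) for s in range(12)]
    print(compare(graphs, nx.transitivity, n_models=5))
```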
Figure 4: The p-values of Wilcoxon signed-rank tests of the statistical difference between network indices for classes of real networks and their configuration models.

Neighbourhood organisation in Norwegian director co-affiliation temporal networks

In a specific example of revealing new insights into networks using these methods, we undertook an analysis of the two temporal networks included in the ICON corpus. These were monthly sampled social networks of Norwegian company directors, where an edge between two directors appeared if they were affiliated with at least one common company, and concurrently sampled Norwegian company networks, where an edge between two companies existed if they shared a director49. Both spanned the same time period from May 2002 to August 2011, and the significance of the data is that, during this time period, legislation was passed to ensure proportional representation of women in directorships to counteract structural inequalities49. From an organisational standpoint, it stands to reason that this may have caused a fairly dramatic disruption to these networks. Figure 5 shows neighbourhood organisation over time for both networks alongside that of their configuration models constructed at each time point.

Figure 5: Temporal progression of neighbourhood organisation in the director-organisation affiliation networks.

It is striking that while the company network maintained similar levels of neighbourhood organisation throughout the period, the neighbourhood organisation of the director network steadily decreased, from roughly 0.8 down to around 0.4 (coinciding with company network levels) by mid 2008, where it stayed until the end of the sampling. No particular trends were noticed in either of the configuration models. Looking more closely at the director network trend, it was apparent that the decrease in neighbourhood organisation proceeded almost stepwise in two-year cycles, with steps down around May 2004, 2006 and 2008. This supports the hypothesis that the overhaul of directorships in a short space of time contributed to a substantial disruption of the neighbourhood organisation of the network. Although it is beyond the scope of this study, it would be of interest to seek out explanations for this trend as well as possible correlations between this phenomenon and other factors.

Limitations and Future Work

There is significant scope to extend and improve on these proposed methods. Many of the methods developed here depend on comparing nodes of the same degree; however, it would be of great relevance to relax this requirement so that comparisons can be made across nodes of similar but not necessarily identical degrees. This is particularly the case for real-world networks and configuration models, where the greater spontaneity of connections means that nodes which exhibit similar properties may differ in degree by one or two connections. Furthermore, this may help to create more reliable indices with less variability within populations. We demonstrated a link between neighbourhood degree sequences and Weisfeiler-Lehman graph subtree kernels25, which provide powerful graph learning results26 based on long-standing graph isomorphism results23. It would be of high interest to undertake a detailed study of the relevance of the neighbourhood degree sequence analyses for interpreting the embedding space of these graph classification approaches as network phenomena.
At the same time, this link hints that analysing the diversity and structure of neighbourhood degree sequences within a network– such as through hierarchical complexity and neighbourhood organisation– is indeed a powerful and efficient way to describe topological similarity within a network. Further detailed work is required to substantiate this conjecture.

We introduced several methods to understand complex networks through neighbourhood degree sequences. These targeted key concepts such as similarity and symmetry, organisation, complexity and heterogeneity. The developed network indices were not found to be strongly correlated with each other nor with classical network indices over 215 real world networks, indicating that neighbourhood degree sequences offer a rich and unique branch of analysis. We found that neighbourhood similarity and neighbourhood organisation were consistent general characteristics of complex networks. Evidence suggested that the hierarchical complexity evident in the human brain is not a general property of animal connectomes. Also, neighbourhood organisation was found to decrease over time in a company director network in which the composition of directors went through major alterations, while neighbourhood organisation in the company network remained steady. It is expected that this study will act as a springboard for new methods and applications relating to neighbourhood degree sequences, revealing important insights into networks across various disciplines.

The real data used in the manuscript were obtained freely online as noted in section III.A. Code for computing the network models and novel indices is available on the Open Science Framework at https://doi.org/10.17605/OSF.IO/W7BK6.

References

Bollobás, B. Degree sequences of random graphs. Discret. Math. 33, 1–19 (1981).
Strogatz, S. H. Exploring complex networks. Nat. 410, 268–276 (2001).
Erdös, P. & Rényi, A. On random graphs. Publicationes Math. Debrecen 6, 290–297 (1959).
Dall, J. & Christensen, M. Random geometric graphs. Phys. Rev. E 66, 016121 (2002).
Watts, D. J. & Strogatz, S. H. Collective dynamics of small-world networks. Nat. 393, 440–442 (1998).
Newman, M. E. J., Strogatz, S. H. & Watts, D. J. Random graphs with arbitrary degree distributions and their applications. Phys. Rev. E 6402, 6118 (2001).
Maslov, S. & Sneppen, K. Specificity and Stability in Topology of Protein Networks. Sci. 296, 910–913 (2002).
Smith, K. & Escudero, J. The complex hierarchical topology of EEG functional connectivity. J. Neurosci. Methods 276, 1–12 (2017).
Ravasz, E. & Barabasi, A. L. Hierarchical organization in complex networks. Phys. Rev. E 67, 26112 (2003).
Kaiser, M., Hilgetag, C. C. & Kötter, R. Hierarchy and dynamics of neural networks. Front. Neuroinformatics 4, 112 (2010).
Barthélemy, M., Barrat, A., Pastor-Satorras, R. & Vespignani, A. Velocity and hierarchical spread of epidemic outbreaks in scale-free networks. Phys. Rev. Lett. 92, 178701 (2004).
Smith, K., Abásolo, D. & Escudero, J. Accounting for the Complex Hierarchical Topology of EEG Phase-based Functional Connectivity in Network Binarisation. PLOS One 12, e0186164 (2017).
Smith, K. et al. Hierarchical Complexity of the Adult Human Structural Connectome. Neuroimage 191, 205–215 (2019).
Barrus, M. & Donovan, E. Neighbourhood degree lists of graphs. Discret. Math. 341, 175–183 (2018).
Nishimura, N. & Subramanya, V.
Graph editing to a given neighbourhood degree list is fixed-parameter tractable. In Gao, X., Du, H. & Han, M. (eds) COCOA 2017: Combinatorial optimization and applications, vol. 10628 of Lecture Notes in Computer Science, 138–153 (Springer, Cham, 2017). Bonacich, P. Factoring and weighting approaches to clique identification. J. Math. Sociol 2, 113–120 (1972). Newman, M. Assortative mixing in networks. Phys. Rev. Lett. 89, 208701 (2002). Solé R. & Valverde, S. Complex Networks. vol. 650 of Lecture Notes in Physics, chap. Informatio, 189–207 (Springer, 2004). Snijders, T. A. B. The degree variance: an index of graph heterogeneity. Soc. Networks 3, 163–174 (1981). Bell, F. K. A note on the irregularity of graphs. Lin. Alg. Appl. 161, 45–64 (1992). Estrada, E. Quantifying network heterogeneity. Phys. Rev. E 82, 066102 (2010). Frucht, R. Herstellung von Graphen mit vorgegebener abstrakter Gruppe. Compos. Math. 6, 239–250 (1939). Weisfeiler, B. & Lehman, A. A reduction of a graph to a canonical form and an algebra arising during this reduction. Nauchno-Technicheskaya Informatsiya 2, 12–16 (1968). Babai, L. & Kucera, L. Canonical labelling of graphs in linear average time. In Proceedings Symposium on Foundations of Computer Science, 39–46 (1979). Shervashidze, N., Schweitzer, P., van Leeuwen, E., Mehlhorn, K. & Borgwardt, K. Weisfeiler-lehman graph kernels. J. Mach. Learn. Res. 12, 2539–2561 (2011). Xu, K., Hu, W., Leskovec, J. & Jegelka, S. How powerful are graph neural networks? https://arxiv.org/abs/1810.00826 (2018). Rossi, R. A. & Ahmed, N. K. The network data repository with interactive graph analytics and visualization. In Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence (2015). Zachary, W. W. An Information Flow Model for Conflict and Fission in Small Groups. J. Anthro. Research 33, 452–473 (1977). Lusseau, D. et al. The bottlenose dolphin community of doubtful sound features a large proportion of long-lasting associations. Behavioral Ecology and Sociobiology 54, 396–405 (2003). Massa, P., Salvetti, M. & Tomasoni, D. Bowling alone and trust decline in social network sites. In Dependable, Autonomic and Secure Computing, 2009. DASC'09. Eighth IEEE International Conference on, 658–663 (IEEE, 2009). Hamsterster. Hamsterster social network, http://www.hamsterster.com. Leskovec, J., Huttenlocher, D. & Kleinberg, J. Signed networks in social media. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 1361–1370 (ACM, 2010). Rubinov, M. & Sporns, O. Complex network measures of brain connectivity: uses and interpretations. NeuroImage 52, 1059–1069 (2010). Duch, J. & Arenas, A. Community identification using extremal optimization phys. Rev. E 72, 027104 (2005). Jeong, H., Mason, S., Barabasi, A. & Oltvai, Z. Lethality and centrality in protein networks. arXiv preprint cond-mat/0105306 (2001). Amunts, K. et al. Bigbrain: An ultrahigh-resolution 3d human brain model. Sci. 340, 1472–1475 (2013). Melián, C. J. & Bascompte, J. Food web cohesion. Ecol. 85, 352–358 (2004). Taylor, P. Specification of the world city network. Geogr. Analysis 33, 181–194 (2001). Guimera, R., Danon, L., Diaz-Guilera, A., Giralt, F. & Arenas, A. Self-similar community structure in a network of human interactions. Phys. Rev. E 68, 065103 (2003). SocioPatterns. Infectious contact networks, http://www.sociopatterns.org/datasets/. Accessed 09/12/12. Cohen, W. Enron email dataset. http://www.cs.cmu.edu/enron/. Accessed in 2009. The US airport network. 
https://www.mathworks.com/matlabcentral/mlc-downloads/downloads/submissions/24134/versions/1/previews/gaimc/demo/html/airports.html?access_key=. Bader, D. A., Meyerhenke, H., Sanders, P. & Wagner, D. Graph partitioning and graph clustering. In 10th DIMACS Implementation Challenge Workshop (2012). De Nooy, W., Mrvar, A. & Batagelj, V. Exploratory social network analysis with Pajek, vol. 27 (Cambridge University Press, 2011). Gleich, D., Zhukov, L. & Berkhin, P. Fast parallel pagerank: A linear system approach. Yahoo! Research Technical Report YRL-2004-038 13, 22 (2004). Boldi, P., Rosa, M., Santini, M. & Vigna, S. Layered label propagation: A multiresolution coordinate-free ordering for compressing social networks. In WWW, 587–596 (2011). Ghasemian, A., Hosseinmardi, H. & Clauset, A. Evaluating overfit and underfit in models of network community structure, https://arxiv.org/abs/1802.10582. Clauset, A., Tucker, E. & Sainz, M. The colorado index of complex networks. Seierstad, C. & Opsahl, T. For the few not the many? the effects of affirmative action on presence, prominence, and social capital of women directors in norway. Scand. J. Manag 27, 44–54 (2011). Smith, K. & Escudero, J. Normalised degree variance, https://arxiv.org/abs/1803.03057. Newman, M. E. J. & Girvan, M. Finding and evaluating community structure in networks. Phys. Rev. E 69, 26113 (2004). Barabási, A.-L. & Albert, R. Emergence of Scaling in Random Networks. Sci. 286, 509–512 (1999). Yang, J. & Leskovec, J. Overlapping communities explain core-periphery organization of networks. Proceedings of the IEEE 102, 1892–1902 (2014).

We would like to thank Aaron Clauset for helpful discussions and provision of the data from the Colorado Index of Complex Networks. This work was supported by Health Data Research UK (MRC ref Mr/S004122/1), which is funded by the UK Medical Research Council, Engineering and Physical Sciences Research Council, Economic and Social Research Council, National Institute for Health Research (England), Chief Scientist Office of the Scottish Government Health and Social Care Directorates, Health and Social Care Research and Development Division (Welsh Government), Public Health Agency (Northern Ireland), British Heart Foundation and Wellcome. A version of this article has been made available on an online preprint server at https://arxiv.org/abs/1901.02353. Usher Institute of Population Health Science and Informatics, University of Edinburgh, 9 BioQuarter, Little France, Edinburgh, EH16 4UX, UK. Keith M. Smith. K.S. is the sole author and did all the work. Correspondence to Keith M. Smith. The author declares no competing interests. Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. Smith, K.M. On neighbourhood degree sequences of complex networks. Sci Rep 9, 8340 (2019). https://doi.org/10.1038/s41598-019-44907-8 Received: 25 February 2019
A New Notion of Causal Closedness
Leszek Wroński & Michał Marczyk
Erkenntnis volume 79, pages 453–478 (2014)

In recent years part of the literature on probabilistic causality concerned notions stemming from Reichenbach's idea of explaining correlations between not directly causally related events by referring to their common causes. A few related notions have been introduced, e.g. that of a "common cause system" (Hofer-Szabó and Rédei in Int J Theor Phys 43(7/8):1819–1826, 2004) and "causal (N-)closedness" of probability spaces (Gyenis and Rédei in Found Phys 34(9):1284–1303, 2004; Hofer-Szabó and Rédei in Found Phys 36(5):745–756, 2006). In this paper we introduce a new and natural notion similar to causal closedness and prove a number of theorems which can be seen as extensions of earlier results from the literature. Most notably we prove that a finite probability space is causally closed in our sense iff its measure is uniform. We also present a generalisation of this result to a class of non-classical probability spaces.

The so-called Principle of the Common Cause is usually taken to say that any surprising correlation between two factors which are believed not to directly influence one another is due to their (possibly hidden) common cause. The original version of the Principle, as introduced by Hans Reichenbach in his book The Direction of Time (1956), includes precise mathematical conditions connected to the notion (see definition 4 below); the Principle became a hot topic for philosophers of science in the last decades of the previous century, after van Fraassen (1982) had linked it with the issues regarding causality in the context of EPR correlations. The Principle was widely criticised (see e.g. Arntzenius 1992 for a collection of its difficulties), but in recent years a number of researchers explored various mathematical questions regarding it, in at least one case leading even to the statement that the principle is "unfalsifiable" (Hofer-Szabó et al. 2000). This paper contributes to the discussion about the mathematical notions relevant to the Reichenbachian approach to explaining correlations. We prove a number of results concerning the (types of) probability spaces in which one can find Reichenbach-style explanations for correlations between events given an independence relation.

Suppose a probability space contains a correlation between two events we believe to be causally independent. Does the space contain a common cause for the correlation? If not, can the probability space be extended to contain such a cause while 'preserving' the old measure? This question has been asked and answered in the positive in Hofer-Szabó et al. (1999), where the notion of common cause completability was introduced: speaking a bit informally, a probability space S is said to be common cause completable with respect to a set A of pairs of correlated events iff there exists an extension of the space containing statistical common causes of all the correlated pairs in A. Gyenis and Rédei (2004) introduced the notion of common cause closedness, which (in our slightly different terminology) is equivalent to the following: a probability space S is common cause closed (or "causally closed") with respect to a relation of independence \(R_{ind} \subseteq S^{2}\) iff it contains statistical common causes (see definition 4 below) for all pairs of correlated events belonging to \(R_{ind}\).
The authors have proven therein that a finite classical probability space with no atoms of probability 0 is non-trivially common cause closed w.r.t. the relation of logical independence iff it is the space consisting of a Boolean algebra with 5 atoms and the uniform probability measure. In other words, finite classical probability spaces (big enough to contain correlations between logically independent events) are in general not common cause closed w.r.t. the relation of logical independence, i.e. they contain a correlation between logically independent events for which no statistical common cause in the space exists; the only exception to this rule is the space with precisely 5 atoms of probability \(\frac{1}{5}\) each. More spaces are common cause closed w.r.t. a more stringent relation of logical independence modulo measure zero event (\(L_{ind}^{+}\), see definition 6 below): they are the spaces with 5 atoms of probability \(\frac{1}{5}\) each and any number of atoms of probability 0. Still, a (statistical) common cause is not the only entity which could be used as an explanation for a correlation. Hofer-Szabó and Rédei (2004) generalized the idea of a statistical common cause, arriving at statistical common cause systems ("SCCSs"; see definition 5 below). SCCSs may have any countable size greater than 1; the special case of size 2 reduces to the usual notion of common cause. It was natural for corresponding notions of causal closedness to be introduced; a probability space is said to be causally n-closed w.r.t. a relation of independence \(R_{ind}\) iff it contains an SCCS of size n for any correlation between A, B such that \(\langle A,B \rangle \in R_{ind}\). It is one of the results of the present paper that, with the exception of the 5-atom uniform distribution probability space, no finite probability spaces without 0-probability atoms are causally n-closed w.r.t. the relation of logical independence, for any \(n\geqslant 2\). Similarly, with the exception of the spaces with 5 atoms of probability \(\frac{1}{5}\) each and any number of atoms of probability 0, no finite probability spaces with 0-probability atoms are causally n-closed w.r.t. \(L_{ind}^{+}\), for any \(n\geqslant 2\).

We are interested in a slightly different version of causal closedness. If the overarching goal is to find explanations for correlations, why should we expect all explanations to be SCCSs of the same size? Perhaps some correlations are explained by common causes and others by SCCSs of a bigger size. We propose to explore the idea of causal up-to-n-closedness—a probability space is causally up-to-n-closed w.r.t. a relation of independence \(R_{ind}\) iff it contains an SCCS of size at most n for any correlation between events A, B such that \(\langle A,B \rangle \in R_{ind}\). It turns out that, in the class of finite classical probability spaces with no atoms of probability 0, just as the space with 5 atoms and the uniform measure is unique with regard to common cause closedness, the whole class of spaces with uniform distribution is special with regard to causal up-to-3-closedness—see theorem 2: a finite classical probability space with no atoms of probability 0 is causally up-to-3-closed w.r.t. the relation of logical independence iff it has the uniform distribution. We provide a method of constructing a statistical common cause or an SCCS of size 3 for any correlation between logically independent events in any finite classical probability space with the uniform distribution.
We require (following Gyenis and Rédei) of a causally closed probability space that all correlations be explained by means of proper statistical common causes—that is, ones differing from both correlated events by a non-zero measure event. This has the consequence that a space causally closed w.r.t. the relation of logical independence can be transformed into a space which is not causally closed w.r.t. this relation just by adding a 0-probability atom. Perhaps, to avoid this unfortunate consequence, the notion of logical independence modulo measure zero event should be required? We discuss the matter in Sect. 4. In this paper we also briefly consider other independence relations, and a generalisation of our results to finite non-classical probability spaces.

Causal (up-to-n-)closedness

Preliminary Definitions

Throughout this paper the sample spaces of the probability spaces involved are irrelevant. The crucial elements are the Boolean algebra containing the events (of which, due to Stone's theorem, we always think as a field of sets, and which is therefore compatible with set-theoretical operations) and the measure defined on that algebra. This motivates the phrasing of the following definition in terms of pairs, instead of triples:

Definition 1 (Probability space) A (classical) probability space is a pair \(\langle S, P \rangle\) such that S is a Boolean algebra and P is a function from S to \([0,1]\subseteq {\mathbb R}\) such that: \(P(\mathbf{1}_{S}) = 1\); P is countably additive: for a countable family \(\mathcal{G}\) of pairwise disjoint members of S, \(\cup\mathcal{G} \in S\) and \(P(\cup \mathcal{G}) = \sum_{A \in \mathcal{G}} P(A)\).

In the following the context will usually be that of a finite classical probability space, i.e., a space \(\langle S, P \rangle\) in which S is finite. By Stone's representation theorem, in such a case S is isomorphic to—and will be identified with—the algebra of all subsets of the set \(\{0, \ldots, n-1 \}\) for some \(n \in {\mathbb N}\). In such a case the requirement of countable additivity reduces to the simple condition that for two disjoint events \(A, B \in S\), \(P(A \cup B) = P(A) + P(B)\). In Sect. 6 nonclassical spaces are considered, in which the Boolean algebra is exchanged for a nondistributive orthomodular lattice. The required definitions are presented therein. In the sequel we will sometimes consider spaces of the form \(\langle S^+, P^+ \rangle\), where S+ and P+ are as defined below:

Definition 2 Let \(\langle S, P \rangle\) be a finite classical probability space. S+ is the unique Boolean algebra whose set of atoms consists of all the non-zero probability atoms of S. P+ is the restriction of P to S+.

This paper concerns a certain approach to explaining correlations; loosely speaking, this is to be done by events which screen off the correlated events and are positively statistically relevant for them. We introduce all these important notions in the following definition:

Definition 3 (Correlation, screening off, statistical relevance) Let \(\langle S, P\rangle\) be a probability space and let \(A, B \in S\). We say that: A and B are (positively) correlated whenever P(AB) > P(A)P(B); an event \(C \in S\) screens off A and B whenever P(AB | C) = P(A|C)P(B|C); an event \(C \in S\) is positively statistically relevant for A if \(P(A|C) > P(A|{C}^{\perp})\); a partition \(\{C_i\}_{i \in I}\) of \(\mathbf{1}_S\) is statistically relevant for A and B if, whenever i ≠ j,
$$\left(P(A \mid C_i) - P(A \mid C_j)\right) \left(P(B \mid C_i) - P(B \mid C_j)\right) > 0.$$
Notice that, according to the above definition, if C is positively statistically relevant for both A and B, then \(\{C, {C}^{\perp} \}\) is statistically relevant for A and B. In The Direction of Time (1971) Hans Reichenbach offered a causal theory of time in which a central role was played by "conjunctive forks"—triples of events A, B, C in which C is positively statistically relevant for both A and B and both C and \({C}^{\perp}\) screen off A and B (see def. 4 below). A part of the literature refers to events meeting Reichenbach's conditions for the "C" in such a conjunctive fork as ("Reichenbachian") "common causes"; see e.g. Hofer-Szabó and Rédei (2004). Hofer-Szabó et al. (2000) and Hofer-Szabó and Rédei (2006) even go so far as to state that Reichenbach himself defined common causes as the middle elements of conjunctive forks with correlated extreme elements; in other words, that fulfilling the statistical requirements for being the middle element of a conjunctive fork is sufficient for being a common cause of the correlated events. This is unfortunate, since Reichenbach himself noticed that common effects could also meet his probabilistic requirements (Reichenbach 1971, pp. 161–162) and also suggested that if there is more than one common cause for a given correlation, the conditions are to be met by their disjunction, not by the causes themselves (p. 159). Reichenbach's "Principle of the Common Cause" maintains simply that in the case of a correlation between A and B there is a common cause C such that A, B and C meet the statistical requirements for a conjunctive fork (p. 163). Nevertheless, the main results of this work pertain to problems posed in various papers by the above-cited authors. Therefore, some slight terminological changes are in order.

Definition 4 (Statistical common cause) Let \( \langle S, P \rangle\) be a probability space and let \(A, B \in S\). Any \(C \in S\) different from both A and B such that: (i) C screens off A and B; (ii) \({C}^{\perp}\) screens off A and B; (iii) C is positively statistically relevant for both A and B; is called a statistical common cause of A and B.

Statistical common causes (henceforth "SCCs") have at least two features relevant from the perspective of explaining correlations. First, the screening off conditions mean the correlation disappears after conditionalisation on the SCC. Second (as noted by Reichenbach), from the fact that there exists an SCC for A and B one can derive the correlation between A and B. It is intuitive that a similar notion could be considered which permits the cause to be more complicated than a simple "yes" / "no" event. This is indeed the path taken without further comment by van Fraassen (1982), but there only the screening off requirement is retained. A generalisation which also takes into account the conditions of statistical relevance was developed by Hofer-Szabó and Rédei (2004); the resulting constructs were originally called "Reichenbachian common cause systems", but, for reasons given above, we will abstain from the adjective "Reichenbachian".

Definition 5 (Statistical common cause system) Let \( \langle S, P \rangle\) be a probability space. A partition of \(\mathbf{1}_S\) is said to be a statistical common cause system (SCCS) for A and B iff: (i) all its members are different from both A and B; (ii) all its members screen off A and B; (iii) it satisfies the statistical relevance condition w.r.t. A and B. The cardinality of the partition is called the size of the statistical common cause system.
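The conditions of definitions 3–5 are purely arithmetical, so they can be checked mechanically on a finite probability space. The following sketch (our own illustration in Python; the names are not from the paper) represents events as sets of atoms and tests the common cause conditions for the 5-atom uniform space mentioned above; the SCCS conditions can be checked analogously over a partition.

```python
# Sketch: checking the statistical common cause conditions (definition 4)
# on a finite classical probability space. Events are frozensets of atoms,
# P maps atoms to their probabilities. Our illustration, not code from the paper.
def prob(P, A):
    return sum(P[x] for x in A)

def cond(P, A, C):
    return prob(P, A & C) / prob(P, C)

def correlated(P, A, B):
    return prob(P, A & B) > prob(P, A) * prob(P, B)

def screens_off(P, A, B, C, tol=1e-12):
    return abs(cond(P, A & B, C) - cond(P, A, C) * cond(P, B, C)) <= tol

def is_statistical_common_cause(P, A, B, C, atoms):
    Cc = frozenset(atoms) - C
    if C in (A, B) or prob(P, C) == 0 or prob(P, Cc) == 0:
        return False
    return (screens_off(P, A, B, C) and screens_off(P, A, B, Cc)
            and cond(P, A, C) > cond(P, A, Cc)
            and cond(P, B, C) > cond(P, B, Cc))

if __name__ == "__main__":
    # Five atoms with the uniform measure; the correlated pair {0,1}, {1,2}.
    atoms = frozenset(range(5))
    P = {x: 1 / 5 for x in atoms}
    A, B = frozenset({0, 1}), frozenset({1, 2})
    C = frozenset({0, 1, 2, 3})       # a proper (0-type) common cause candidate
    print(correlated(P, A, B))                              # True
    print(is_statistical_common_cause(P, A, B, C, atoms))   # True
```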
As remarked above, statistical common cause systems (henceforth "SCCSs") come in different cardinalities; they may have any countable size greater than 1. SCCSs share the "deductive" explanatory feature of SCCs: from the assumption that one exists for A and B, the correlation between A and B is derivable. Throughout this paper, by a "common cause" we always mean a "statistical common cause". At the beginning we usually supply the additional adjective, but then sometimes refrain from using it to conserve space, as the arguments unfortunately become rather cluttered even without the additional vocabulary. We will now define two relations of independence. Intuitively, we will regard two events as logically independent if, upon learning that one of the events occurs (or does not occur), we cannot infer that the other occurs (or does not occur), for all four Boolean combinations.

Definition 6 (Logical independence) We say that events \(A, B \in S\) are logically independent (\(\langle A, B \rangle \in L_{ind}\)) iff all of the following sets are nonempty: \(A \cap B\), \(A \cap {B}^{\perp}\), \({A}^{\perp} \cap B\) and \({A}^{\perp} \cap {B}^{\perp}\). We say that events \(A, B \in S\) are logically independent modulo measure zero event (\(\langle A, B \rangle \in L_{ind}^+\)) iff all of the following numbers are positive: \(P(A \cap B)\), \(P(A \cap {B}^{\perp})\), \(P({A}^{\perp} \cap B)\) and \(P({A}^{\perp} \cap {B}^{\perp})\).

Equivalently, two events are logically independent if neither of the events is contained in the other one, their intersection is non-empty and the union of the two is less than the whole space. Two events are logically independent modulo measure zero event if every Boolean combination of them has a non-zero probability of occurring. It is always true that \(L_{ind}^+ \subseteq L_{ind}\); if there are 0-probability atoms in the space, the inclusion may be strict. The following definition is a refinement of the SCC idea, expressing the requirement that a common cause should be meaningfully different from both correlated events.

Definition 7 (Proper SCC(S)) A statistical common cause C of events A and B is a proper statistical common cause of A and B if it differs from both A and B by more than a measure zero event. It is an improper SCC of these events otherwise. An SCCS \(\{C_i\}_{i \in I}\) of events A and B is a proper SCCS of A and B if all its elements differ from both A and B by more than a measure zero event. It is an improper SCCS of these events otherwise.

We will sometimes say that a probability space contains an SCCS, which means that the SCCS is a partition of unity of the event algebra of the space. We now come to the main topic of this paper. Should someone prefer it, the following definition could be phrased in terms of SCCSs only.

Definition 8 (Causal (up-to-n-)closedness) We say that a classical probability space is causally up-to-n-closed w.r.t. a relation of independence \(R_{ind}\) if all pairs of correlated events independent in the sense of \(R_{ind}\) possess a proper statistical common cause or a proper statistical common cause system of size at most n. A classical probability space is causally n-closed w.r.t. a relation of independence \(R_{ind}\) if all pairs of correlated events independent in the sense of \(R_{ind}\) possess a proper statistical common cause system of size n. If the space is causally up-to-2-closed, in other words causally 2-closed, we also say that it is causally closed or common cause closed.
Note that, in terms of providing explanation for correlations, a space which is causally up-to-3-closed (or up-to-n-closed for any other finite n) is as good (or as bad) as a causally closed space. Namely, any correlation is provided with something that screens the correlation off and from the existence of which the given correlation can be deduced. This is the reason why it is interesting to check whether we might have luck finding up-to-3-closed spaces, as opposed to searching "just" for causally closed spaces. Forgetting about the measure-zero-related issues for a second, it turns out that while among finite classical probability spaces there is only one that is (non-trivially) causally closed, infinitely many are causally up-to-3-closed.

Summary of Results

Theorem 1 will be our main tool in proving the lemmas featured in Table 1.

Theorem 1 Let \(\langle S, P \rangle\) be a finite classical probability space with S+ having at least 4 atoms of non-zero probability. Then P+ is uniform if and only if \(\langle S^+ , P^+ \rangle\) is causally up-to-3-closed w.r.t. \(L_{ind}^{+}\).

Lemmas 1–3 tie uniformity of P and P+ with causal up-to-3-closedness of \(\langle S, P \rangle\) with respect to the two notions of independence introduced above.

Lemma 1 Let \(\langle S, P \rangle\) be a finite classical probability space with S having at least 4 atoms. If P is uniform, then \(\langle S, P \rangle\) is causally up-to-3-closed w.r.t. \(L_{ind}\) and \(L_{ind}^{+}\).

Lemma 2 Let \(\langle S, P \rangle\) be a finite classical probability space with S+ having at least 4 atoms. If P+ is not uniform, then \(\langle S, P \rangle\) is not causally up-to-3-closed w.r.t. either \(L_{ind}\) or \(L_{ind}^{+}\).

Lemma 3 Let \(\langle S, P \rangle\) be a finite classical probability space with S+ having at least 4 atoms. If P+ is uniform, then \(\langle S, P \rangle\) is causally up-to-3-closed w.r.t. \(L_{ind}^{+}\). All correlated pairs from \(L_{ind} \setminus L_{ind}^+\) have statistical common causes, but some only have improper ones.

Some Useful Parameters

For expository reasons, we will not prove theorem 1 directly, but rather demonstrate its equivalent, theorem 2 (stated below). Before proceeding with the proof, we shall introduce a few useful parameters one may associate with a pair of events A, B in a finite classical probability space \(\langle S, P \rangle\). Let n be the number of atoms in the Boolean algebra S. The size of the set of atoms lying below A in the lattice ordering of S will from now on be referred to as a, and likewise for B and b. The analogous parameter associated with the conjunction of events A and B is just the size of the intersection of the relevant sets of atoms and will be called k. It will soon become apparent that while a and b have some utility in the discussion to follow, the more convenient parameters describe A and B in terms of the number of atoms belonging to one, but not the other. Thus we let a′ = a − k and b′ = b − k. In fact, if we set z = n − (a′ + k + b′), we obtain a set of four numbers precisely describing the blocks of the partition of the set of atoms of S into the four classes which need to be non-empty for A and B to be logically independent. It is clear that in the case of logically independent events a′, b′, k and z are all non-zero. Lastly, before we begin the proof of the main result of this paper, let us state the following important lemma: when searching for statistical common causes, screening off is enough.
If both an event and its complement screen off a correlation, then one of them is a statistical common cause for the correlation. More precisely: let \(\langle S, P \rangle\) be a probability space, let \(A, B, C \in S\), and suppose A and B are positively correlated. If both C and \({C}^{\perp}\) screen off A from B, then either C or \({C}^{\perp}\) is a statistical common cause of A and B.

Proof: As the reader may check, if events A and B are correlated, then for all events C such that 0 < P(C) < 1,
$$\frac{P(AB\mid C)-P(A\mid C)P(B\mid C)}{P(\neg C)} + \frac{P(AB\mid \neg C)-P(A\mid \neg C)P(B\mid \neg C)}{P(C)} > - \left[ P(A\mid C)-P(A\mid \neg C) \right]\left[P(B\mid C) - P(B\mid \neg C)\right]. \quad (1)$$
Then, if both C and \({C}^{\perp}\) screen off A from B, the left-hand side of inequality (1) is 0. Therefore \(\left[P(A\mid C)-P(A\mid \neg C) \right]\left[P(B\mid C) - P(B\mid \neg C)\right]\) is positive, which means that both differences have the same sign—so either C or \({C}^{\perp}\) meets the conditions for being a statistical common cause for A and B. \(\square\)

Proof of Theorem 1

In this section we will provide a proof of the main tool in this paper—theorem 1, formulated above. The form in which it was stated there is dictated by its use in the proofs of lemmas 1–3. However, when treated in isolation, it is better phrased in the following way:
The same is true of logical independence: a pair may not consist of logically independent events at stage n, because their union is the whole set of n atoms, but may become a pair of logically independent events at stage n + 1, when an additional atom is introduced, which does not belong to either of the events in question.Footnote 6 Some remarks on the shape of events considered are in order. We will always be talking about pairs of events A, B, with numbers a, a′, b, b′, k, z and n defined as above (see Sect. 3.1). We assume (without loss of generality) \(a \geqslant b\). Also, since we are dealing with the uniform measure, all relevant characteristics of a pair of events A, B are determined by the numbers a′, b′, k, and z; therefore, for any combination of these numbers it is sufficient only to consider a single example of a pair displaying them. The rest is just a matter of renaming the atoms. For example, if we are looking for an explanation for the pair {{8, 7, 3, 5}, {2, 8, 7}} at stage 10, or the pair {{1, 3, 5, 6}, {1, 6, 4}} at the same stage, we shall search for an explanation for the pair {{0, 1, 2, 3}, {2, 3, 4}} at stage 10 and then just appropriately 'translate' the result (explicit examples of this follow in Sect. 3.2.1). In general: the convention we adopt is for A to be a set of consecutive atoms beginning with 0, and B a set of consecutive atoms beginning with a − k For illustrative purposes we propose to examine the situation at the early stages. The proof proper begins with definition 9 below. For the remainder of Sect. 3.2, by "common cause" we will always mean "proper common cause"; similarly with "common cause system". There are no correlated pairs of logically independent events at stage 1; similarly for stages 2, 3 and 4. (Remember the measure is uniform and so at stage 4 e.g. the pair {{0, 1}, {1, 2}}, while composed of logically independent events, is not correlated.) First correlated pairs of logically independent events appear at stage 5. These are of one of the two following types: either a′ = b′ = k = 1, or a′ = b′ = 1 and k = 2. Proposition 3 from Gyenis and Rédei (2004) says that all pairs of these types have statistical common causes at stage 5. As noted above, we can without loss of generality consider just two tokens of these types—the pairs {{0, 1}, {1, 2}} and {{0, 1, 2}, {1, 2, 3}}. In the first case, the events already formed a logically independent pair at stage 4, but were not correlated—we will say that the pair appears from below at stage 5 (see definition 9 below). In the second case, stage 5 is the first stage where the events form a logically independent pair, and they are already correlated at that stage. We will say that the pair {{0, 1, 2}, {1, 2, 3}} appears from above at stage 5. There are no other correlated pairs of logically independent events at stage 5. It will turn out that we can always find statistical common causes for pairs which appear from above or from below at a given stage. Let us move to stage 6. A new (type of) pair appears from above—{{0, 1, 2, 3}, {1, 2, 3, 4}}. No pairs appear from below, but both pairs which appeared at stage 5 are still correlated and logically independent at stage 6 (as well as at all later stages), so they are again in need of an explanation at this higher stage. It turns out that if a correlated pair of logically independent events at stage n is 'inherited' from the earlier stages, i.e. 
it appears neither from above nor from below at stage n, we can modify the common cause which we know how to supply for it at the stage where it originally appeared to provide it with an explanation adequate at stage n. This takes the form of a statistical common cause or, in some cases, an SCCS of size 3.

Definition 9 (Appearing from above or below) A pair {A, B} of events of the form {0, ..., a − 1}, {a − k, ..., a − k + b − 1} appears from above at stage n if it is (1) logically independent at stage n, (2) not logically independent at stage n − 1 and (3) correlated at stage n. A pair {A, B} of events of the same form appears from below at stage n if it is (1) logically independent at stage n, (2) logically independent at stage n − 1 and (3) correlated at stage n, but (4) not correlated at stage n − 1.

We will divide common causes into types depending on whether the occurrence of a given common cause makes the occurrence of at least one member of the correlation it explains necessary, impossible, or possible with probability less than 1.Footnote 7

Definition 10 (1-, 0-, and #-type statistical common causes) A proper statistical common cause C for a correlated pair of logically independent events A, B is said to be: 1-type iff \(P(A \mid C) = 1\) or \(P(B \mid C) = 1\); 0-type iff \(P(A \mid {C}^{\perp}) = 0\) or \(P(B \mid {C}^{\perp}) = 0\); #-type iff it is neither 1-type nor 0-type.

Notice that no proper statistical common cause C for some two logically independent, correlated events A and B can be both 1-type and 0-type at the same time.

Definition 11 (0-type statistical common cause system) A proper statistical common cause system of size \(n \,\{C_i\}_{i \in \{0, \ldots, n-1\}}\) is a 0-type statistical common cause system (0-type SCCS) for the correlation iff \(P(A \mid C_{n-1}) = 0\) or \(P(B \mid C_{n-1}) = 0\).

We do not need to worry about the fact that rearranging the elements of a 0-type SCCS necessarily makes it lose the 0-type status, because during the proof the SCCSs will be explicitly constructed so that their "last" element gives conditional probability 0 to both correlated events to be explained. Were this notion to be used in general, its definition should be rephrased as an existential condition: "there exists m ⩽ n − 1 such that \(P(A \mid C_m) = 0\) and \(P(B \mid C_m) = 0\)".

We will prove the following: if a pair appears from above at stage n, it has a statistical common cause at that stage (lemma 6); if a pair appears from below at stage n, it has a statistical common cause at that stage (lemma 7); if a pair of logically independent events is correlated at stage n and has a statistical common cause or a 0-type SCCS of size 3 at that stage, it has a statistical common cause or a 0-type SCCS of size 3 at stage n + 1 (lemma 8). It should be straightforward to see that this is enough to prove theorem 2 (p. 9) in its 'downward' direction. Consider a correlated pair of logically independent events A, B at stage n. If it appears from above, we produce a common cause using the technique described in lemma 6. If it appears from below, we use the method from lemma 7. If it appears neither from above nor from below, it means that it was logically independent at stage n − 1 and was correlated at that stage, and we repeat the question at stage n − 1. This descent terminates at the stage where our pair first appeared, which clearly must have been either from below or from above.
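Definition 9 is easy to operationalise. The sketch below (again our own illustration; the function names are not from the paper) classifies how a pair of events enters the sequence of stages, and reproduces the verdicts for the two stage-5 pairs mentioned above:

from fractions import Fraction

def prob(event, n):
    return Fraction(len(event), n)

def correlated(a, b, n):
    return prob(a & b, n) > prob(a, n) * prob(b, n)

def logically_independent(a, b, n):
    # all four blocks A∩B, A\B, B\A and the complement of A∪B must be non-empty
    return all((a & b, a - b, b - a, set(range(n)) - (a | b)))

def appears_from_above(a, b, n):
    return (logically_independent(a, b, n)
            and not logically_independent(a, b, n - 1)
            and correlated(a, b, n))

def appears_from_below(a, b, n):
    return (logically_independent(a, b, n)
            and logically_independent(a, b, n - 1)
            and correlated(a, b, n)
            and not correlated(a, b, n - 1))

print(appears_from_below({0, 1}, {1, 2}, 5))        # True
print(appears_from_above({0, 1, 2}, {1, 2, 3}, 5))  # True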
Once the descent terminates, we can apply either lemma 6 or lemma 7, as appropriate, followed by lemma 8 to move back up to stage n, where we will now be able to supply the pair with an SCC or an SCCS of size 3. As said before, the SCCs and SCCSs we will construct will always be proper SCCs and SCCSs.

Put Corr(A, B) := P(AB) − P(A)P(B). Corr(A, B) can always be expressed as a fraction with denominator \(n^2\). Of special interest to us will be the numerator of this fraction. Let us call this number \(SC_n(A, B)\). (For example, if A = {0, 1, 2} and B = {2, 3}, \(SC_5(A, B) = -1\).) If \(SC_n(A, B) \leqslant 0\), the events are not correlated at stage n. If \(SC_n(A, B) > 0\), A and B are correlated at stage n and we need to find either a common cause or a common cause system of size 3 for them. The following lemma will aid us in our endeavour (remember the definitions from Sect. 3.1):

Lemma 5 Let \(\langle S_n, P \rangle\) be a finite classical probability space, \(S_n\) being the Boolean algebra with n atoms and P the uniform measure on \(S_n\). Let \(A, B \in S_n\). Then \(SC_n(A, B) = kz - a'b'\).

Proof \(Corr(A,B) = P(AB) - P(A)P(B) = \frac{k}{n} - \frac{k+a'}{n}\cdot\frac{k+b'}{n} = \frac{k(n - k - a' - b') - a'b'}{n^2} = \frac{kz-a'b'}{n^2}\). Therefore \(SC_n(A, B) = kz - a'b'\). \(\square\)

An immediate consequence of this lemma is that any pair of logically independent events will eventually (at a high enough stage) be correlated—it is just a matter of injecting enough atoms into z. For example, consider events A = {0, 1, 2, 3, 4, 5, 6}, B = {6, 7, 8, 9, 10, 11}. At any stage n, \(SC_n(A, B)\) is equal to z − 30. This means that the pair is correlated at all stages in which z > 30; in other words, at stages 43 and up. At some earlier stages (from 13 to 42) the pair is logically independent but not correlated; at stage 12 it is not logically independent; and the events constituting it do not fit in the algebras from stages lower than that. Notice that since for any A, B: \(SC_{n+1}(A, B) = SC_n(A, B) + k\), it follows that at the stage m where the pair first appears (either from above or from below) \(SC_m(A, B)\) is positive but less than or equal to k.

We now have all the tools we need to prove theorem 2.

Proof (of theorem 2) Measure uniformity ⇒ Causal up-to-3-closedness w.r.t. \(L_{ind}\)

Lemma 6 Suppose a pair A, B appears from above at stage n. Then there exists a 1-type common cause for the correlation at that stage.

Proof We are at stage n. Since the pair A, B appears from above at this stage, z = 1 and so (by lemma 5) \(SC_n(A, B) = k - a'b'\). (If z was equal to 0, the events would not be logically independent at stage n; if it was greater than 1, the events would be logically independent at stage n − 1 too, and so the pair would not appear from above at stage n.) Notice that since A, B are logically independent (so both a′ and b′ are non-zero) but correlated at stage n, \(0 < SC_n(A, B) = k - a'b' < k\). Let C consist of exactly \(SC_n(A, B)\) atoms from the intersection A ∩ B. Such a C will be a screener-off for the correlation, since \(P(AB \mid C) = 1 = P(A \mid C)P(B \mid C)\). What remains is to show that \({C}^{\perp}\) is a screener-off as well.
This follows from the observation that \(P(AB \mid {C}^{\perp}) = \frac{k-(k-a'b')}{n-(k-a'b')} = \frac{a'b'}{n-k+a'b'} = \frac{a'b'(n-k+a'b')}{(n-k+a'b')^2} = \frac{a'b'(1+a'+b'+k) - a'b'k + {a}^{'2}{b}^{'2}}{{(n-k+a'b')}^{2}} = \frac{a'b' + a'{b}^{'2} + {a}^{'2}b' + {a}^{'2}{b}^{'2}}{{(n-k+a'b')}^{2}} = \frac{a'+a'b'}{n-k+a'b'}\cdot\frac{b'+a'b'}{n-k+a'b'} = \frac{k+a'-(k-a'b')}{n-k+a'b'} \cdot \frac{k+b'-(k-a'b')}{n-k+a'b'} = \frac{k+a'-SC_{n}(A,B)}{n-k+a'b'} \cdot \frac{k+b'-SC_{n}(A,B)}{n-k+a'b'} = P(A \mid {C}^{\perp})P(B \mid {C}^{\perp})\). \(\square\) Suppose a pair A, B appears from below at stage n. Then there exists a 1-type common cause or a 0-type common cause for the correlation at that stage. Case 1: k > b′ and a′ > z. In this case we will construct a 1-type common cause. Let C consist of k − b′ atoms from A ∩ B and a′ − z atoms from \(A \setminus B\). Since \(C \subset A\), it screens off the correlation: \(P(AB \mid C) = P(B \mid C) = 1 \cdot P(B \mid C) = P(A \mid C)P(B \mid C)\). We need to show that \({C}^{\perp}\) screens off the correlation as well. This follows from the fact that \(P(AB \mid {C}^{\perp}) = \frac{b'}{n-(k-b')-(a'-z)} = \frac{b'}{2b'+2z} = \frac{2b^{'2} + 2zb'}{(2b' + 2z)^2} = \frac{(b'+z)2b'}{(2b' + 2z)^2} = \frac{b'+z}{2b'+2z} \cdot \frac{2b'}{2b'+2z} = \frac{b'+z}{n-(k-b')-(a'-z)} \cdot \frac{2b'}{n-(k-b')-(a'-z)} = P(A \mid {C}^{\perp})P(B \mid {C}^{\perp})\). Case 2: z > b′ and a′ > k. In this case we will construct a 0-type common cause. Let \({C}^{\perp}\) consist of a′ − k atoms from \(A \setminus B\) and z − b′ atoms from \((A \cup B)^{\perp}\). Since \({C}^{\perp} \subset {B}^{\perp}\), it screens off the correlation: \(P(AB \mid {C}^{\perp}) = 0 = P(A \mid {C}^{\perp}) \cdot 0 = P(A \mid {C}^{\perp})P(B \mid {C}^{\perp})\). We need to show that C too screens off the correlation. This follows from the fact that \(P(AB \mid C) = \frac{k}{n-(a'-k)-(z-b')} = \frac{k}{2k+2b'} = \frac{2k^2 + 2kb'}{(2k+2b')^2} = \frac{2k(k+b')}{(2k+2b')^2} = \frac{2k}{2k+2b'} \cdot \frac{k+b'}{2k+2b'} = \frac{2k}{n-(a'-k)-(z-b')} \cdot \frac{k+b'}{n-(a'-k)-(z-b')} = P(A \mid C)P(B \mid C)\). Case 3a: \(z \geqslant a', \,k \geqslant a'\) and a′ > b′. As can be verified easily, in this case k = z = a′ and b′ = a′ − 1. We can construct both a 0-type common cause and a 1-type common cause. Suppose we choose to produce the former. An appropriate \({C}^{\perp}\) would consist just of a single atom from \((A \cup B)^\perp\). \({C}^{\perp}\) screens off the correlation because \(P(AB \mid {C}^{\perp}) = 0 = P(A \mid {C}^{\perp})P(B \mid {C}^{\perp})\). That C is also a screener-off is guaranteed by the fact that \(P(AB \mid C) - P(A \mid C)P(B \mid C) = \frac{k}{k+a'+b'+z-1} - \frac{k+a'}{k+a'+b'+z-1} \cdot \frac{k+b'}{k+a'+b'+z-1} = \frac{k}{4k-2} - \frac{2k}{2(2k-1)} \cdot \frac{2k-1}{4k-2} = 0\). To produce a 1-type common cause instead, let C consist just of a single atom from A ∩ B. C screens off the correlation because \(P(AB \mid C) = 1 = P(A \mid C)P(B \mid C)\). That \({C}^{\perp}\) is also a screener-off follows from the fact that \(P(AB \mid {C}^{\perp}) = \frac{k-1}{k-1+a'+b'+z} = \frac{b'}{2b'+2a'} = \frac{2b^{'2}+2a'b'}{(2b'+2a')^2} = \frac{(a'+b')2b'}{(2b'+2a')^2} = \frac{a'+b'}{2b'+2a'} \cdot \frac{2b'}{2b'+2a'} = \frac{k-1+a'}{2b'+2a'} \cdot \frac{k-1+b'}{2b'+2a'} = P(A \mid {C}^{\perp})P(B \mid {C}^{\perp})\). Case 3b: z = a′ + 1 and k = a′ = b′. In this case we will construct a 0-type common cause. 
Let \({C}^{\perp}\) consist of just a single atom from \((A \cup B)^\perp\). \({C}^{\perp}\) screens off the correlation because \(P(AB \mid {C}^{\perp}) = 0 = P(A \mid {C}^{\perp})P(B \mid {C}^{\perp})\). C screens off the correlation because \(P(AB \mid C) = \frac{k}{4k} = \frac{4k^2}{16k^2} = \frac{2k}{4k} \cdot \frac{2k}{4k} = \frac{k+a'}{k+a'+b'+z-1} \cdot \frac{k+b'}{k+a'+b'+z-1} = P(A \mid C)P(B \mid C)\).

Case 3c: k = a′ + 1 and z = a′ = b′. In this case we will construct a 1-type common cause. Let C consist of just a single atom from \((A \cap B)\). As in case 3a, C screens off the correlation. That \({C}^{\perp}\) is also a screener-off follows from \(P(AB \mid {C}^{\perp}) = \frac{a'}{4a'} = \frac{4a^{'2}}{16a^{'2}} = \frac{2a'}{4a'} \cdot \frac{2a'}{4a'} = \frac{k-1+a'}{k-1+a'+b'+z} \cdot \frac{k-1+b'}{k-1+a'+b'+z} = P(A \mid {C}^{\perp})P(B \mid {C}^{\perp})\). \(\square\)

Notice that the five cases used in the proof above are exhaustive. To see this, consider that \(a' \geqslant b'\) (by our convention) and \(SC_n(A, B) = kz - a'b' > 0\) (because A and B are correlated). The latter inequality rules out the possibility that k, z ⩽ a′, b′. Also, if b′ ⩽ k ⩽ z ⩽ a′, then the leftmost inequality must be strict, since b′ = k ⩽ z ⩽ a′ clearly violates the condition on \(SC_n(A, B)\). The remaining possibilities are as follows:
(1) k ⩽ b′ ⩽ a′ < z,
(2) z ⩽ b′ ⩽ a′ < k,
(3) b′ < k ⩽ z < a′,
(4) b′ < k ⩽ z = a′.
Possibility (1) is further subdivided into the following cases: k = b′ = a′ < z—this is Case 3b (if additionally z > a′ + 1, then the pair A, B would have been already logically independent and correlated at the prior stage and would not appear from below at stage n); k = b′ < a′ < z—this matches the conditions in Case 2; k < b′ ⩽ a′ < z—likewise. Possibility (2) is subdivided analogously: z = b′ = a′ < k—this is Case 3c (a remark similar to that on the first subcase of possibility (1) applies); z = b′ < a′ < k—this matches the conditions in Case 1; z < b′ ⩽ a′ < k—likewise. Possibility (3) matches the conditions in Case 2. Possibility (4) is further subdivided into two cases depending on whether the inequality k ⩽ z is strict: k < z—this matches the conditions in Case 2; k = z—this matches the conditions in Case 3a.

Lemma 8 Suppose A, B form a pair of logically independent events correlated at stage n. Suppose further that they have a common cause or a 0-type SCCS of size 3 at that stage. Then they have a common cause or a 0-type SCCS of size 3 at stage n + 1.

Proof (Note that the cases are not exclusive; they are, however, exhaustive, which is enough for the present purpose.)

Case 1: A, B have a 0-type common cause at stage n. Let C be a 0-type common cause for the correlation. When moving from stage n to n + 1, a new atom ({n + 1}) is added. Let \({C'}^{\perp} = {C}^{\perp} \cup \{n+1\}\). Notice that C and \({C'}^{\perp}\) form a partition of unity of the algebra at stage n + 1. C contains exclusively atoms from the algebra at stage n and so continues to be a screener-off. Notice that since C was a 0-type common cause at stage n, at that stage \(P(A \mid {C}^{\perp}) = 0\) or \(P(B \mid {C}^{\perp}) = 0\). Since the atom n + 1 lies outside the events A and B, at stage n + 1 we have \(P(A \mid {C'}^{\perp}) = 0\) or \(P(B \mid {C'}^{\perp}) = 0\), and so \({C'}^{\perp}\) is a screener-off too. Thus C and \({C'}^{\perp}\) are both screener-offs and compose a partition of unity at stage n + 1. By lemma 4 (p. 8), this is enough to conclude that A, B have a 0-type common cause at stage n + 1.

Case 2: A, B have a common cause which is not a 0-type common cause at stage n.
Let C be a non-0-type common cause for the correlation at stage n. Notice that both \(P(AB \mid C)\) and \(P(AB \mid {C}^{\perp})\) are non-zero. In this case the 'new' atom cannot be added to C or \({C}^{\perp}\) without breaking the corresponding screening-off condition. However—as we remarked in the previous case—the atom n + 1 lies outside the events A and B, so the singleton {n + 1} is trivially a screener-off for the pair. Since conditioning on {n + 1} gives probability 0 for both A and B, the statistical relevance condition is satisfied. Therefore our explanation of the correlation at stage n + 1 will be a 0-type SCCS of size 3: \(C' = \{C, {C}^{\perp},\{n+1\}\}\).Footnote 8 Case 3: A, B have a 0 -type SCCS of size 3 at stage n. Let the partition \(C = \{C_i\}_{i \in \{0,1,2\}}\) be a 0-type SCCS of size 3 at stage n for the correlation, with C 2 being the zero element (that is \(P(A \mid C_2) = 0\) or \(P(B \mid C_2) = 0\) (or possibly both), with the conditional probabilities involving C 0 and C 1 being positive). Let C′ = {C 0, C 1, C 2 ∪ {n + 1}}. Appending the additional atom to C 2 does not change any conditional probabilities involved, so the statistical relevance condition is satisfied. Since \(n+1 \notin A \cup B, \,C_2 \cup \{n+1\}\) screens off the correlation at stage n + 1 and C′ is a 0-type SCCS of size 3 at stage n + 1 for the correlation. \(\square\) As mentioned above, lemmas 6–8 complete the proof of this direction of the theorem since a method is given for obtaining a statistical common cause or an SCCS of size 3 for any correlation between logically independent events in any finite probability space with uniform distribution. We proceed with the proof of the 'upward' direction of theorem 2. Causal up-to-3-closedness w.r.t. L ind ⇒ Measure uniformity In fact, we will prove the contrapositive: if in a finite probability space with no 0-probability atoms the measure is not uniform, then there exist logically independent, correlated events A, B possessing neither a common cause nor an SCCS of size 3.Footnote 9 In the remainder of the proof we extend the reasoning from case 2 of proposition 4 of Gyenis and Rédei (2004), which covers the case of common causes. Consider the space with n atoms; arrange the atoms in the order of decreasing probability and label them as numbers \(0, 1, \ldots, n-1\). Let A = {0, n − 1} and B = {0,n − 2}. Gyenis and Rédei (2004) prove that A, B are correlated and do not have a common cause. We will now show that they do not have an SCCS of size 3 either. Suppose \(C = \{C_i\}_{i \in \{0,1,2\}}\) is an SCCS of size 3 for the pair A, B. If for some \(i \in \{0,1,2\} \,A \subseteq C_i, \,C\) violates the statistical relevance condition, since for the remaining \(j,k \in \{0,1,2\}, j \neq k, i \neq j, i \neq k, \,P(A \mid C_j) = 0 = P(A \mid C_k)\). Similarly if B is substituted for A in the above reasoning. It follows that none of the elements of C can contain the whole event A or B. Notice also that no C i can contain the atoms n − 1 and n − 2, but not the atom 0, as then it would not be a screener-off. This is because in such a case \(P(AB \mid C_i) = 0\) despite the fact that \(P(A \mid C_i) \neq 0\) and \(P(B \mid C_i) \neq 0\). But since C is a partition of unity of the space, each of the three atoms forming \(A \cup B\) has to belong to an element of C, and so each C i contains exactly one atom from \(A \cup B\). 
Therefore for some \(j,k \in \{0,1,2\}\) \(P(A \mid C_j) > P(A \mid C_k)\) but \(P(B \mid C_j) < P(B \mid C_k)\), which means that C violates the statistical relevance condition. All options exhausted, we conclude that the pair A, B does not have an SCCS of size 3; thus the probability space is not causally up-to-3-closed. \(\square\)

The reasoning from the 'upward' direction of the theorem can be extended to show that if a probability space with no 0-probability atoms has a non-uniform probability measure, it is not causally up-to-n-closed for any \(n \geqslant 2\). The union of the two events A and B described above only contains 3 atoms; it follows that the pair cannot have an SCCS of size greater than 3, since it would have to violate the statistical relevance condition (two or more of its elements would, when conditioned upon, give probability 0 to event A or B). This, together with proposition 3 of Gyenis and Rédei (2004), justifies the following claims:

Theorem 3 No finite probability space with a non-uniform measure and without 0-probability atoms is causally up-to-n-closed w.r.t. \(L_{ind}\) for any \(n \geqslant 2\).

Corollary 9 No finite probability space with a non-uniform measure and without 0-probability atoms is causally n-closed w.r.t. \(L_{ind}\) for any \(n \geqslant 2\).

The proofs of lemmas 2 and 3 in Sect. 3.3 will make it clear how to generalize both theorem 3 and corollary 9 to arbitrary finite spaces (also those possessing some 0-probability atoms) with a non-uniform measure. We omit the tedious details.

We will now present a few examples of how our method of finding explanations for correlations works in practice, analysing a few cases of correlated logically independent events in probability spaces of various sizes (with uniform probability distribution).

Example 1 n = 7, A = {0, 2, 3, 5, 6}, B = {1, 2, 5, 6}. We see that a′ = 2, b′ = 1 and k = 3, so we will analyse the pair \(A_1\) = {0, 1, 2, 3, 4}, \(B_1\) = {2, 3, 4, 5}. We now check whether \(A_1\) and \(B_1\) were logically independent at stage 6, and since at that stage \(A_{1}^{\perp} \cap B_{1}^{\perp} = \emptyset\) we conclude that they were not. Therefore the pair \(A_1, B_1\) appears from above at stage 7. Notice that \(SC_7(A_1, B_1) = 1\). By the construction from lemma 6 we know that an event consisting of just a single atom from the intersection of the two events satisfies the requirements for being a common cause of the correlation. Therefore C = {2} is a common cause of the correlation between A and B at stage 7.

Example 2 n = 10, A = {2, 3, 8}, B = {2, 8, 9}. We see that a′ = 1, b′ = 1 and k = 2, so we will analyse the pair \(A_1\) = {0, 1, 2}, \(B_1\) = {1, 2, 3}. Since \(SC_{10}(A_1, B_1) = 11\), we conclude that the lowest stage at which the pair is correlated is 5 (as remarked earlier, SC changes by k from stage to stage). \(A_1\) and \(B_1\) are logically independent at that stage, but not at stage 4, which means that the pair appears from above at stage 5. We employ the same method as in the previous example to come up with a 1-type common cause of the correlation at that stage—let it be the event {1}. Now the reasoning from case 2 of lemma 8 is used to 'translate' the explanation to stage 6, where it becomes the following 0-type SCCS: {{1}, {0, 2, 3, 4}, {5}}. Case 3 of the same lemma allows us to arrive at an SCCS for \(A_1, B_1\) at stage 10: {{1}, {0, 2, 3, 4}, {5, 6, 7, 8, 9}}.
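As a quick check of this construction (the check itself is ours, not part of the original exposition), one can verify that the partition just obtained is indeed a 0-type SCCS of size 3 for \(A_1, B_1\) at stage 10:

from fractions import Fraction
from itertools import combinations

def cond(event, given):
    # P(event | given) under the uniform measure; 'given' is a non-empty set of atoms
    return Fraction(len(event & given), len(given))

def screens_off(a, b, c):
    return cond(a & b, c) == cond(a, c) * cond(b, c)

def is_0type_sccs(a, b, partition):
    # every cell screens off A from B, the statistical relevance condition holds,
    # and some cell gives conditional probability 0 to A or B (the 0-type condition)
    if not all(screens_off(a, b, c) for c in partition):
        return False
    if not all((cond(a, c) - cond(a, d)) * (cond(b, c) - cond(b, d)) > 0
               for c, d in combinations(partition, 2)):
        return False
    return any(cond(a, c) == 0 or cond(b, c) == 0 for c in partition)

A1, B1 = {0, 1, 2}, {1, 2, 3}
print(is_0type_sccs(A1, B1, [{1}, {0, 2, 3, 4}, set(range(5, 10))]))  # True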
The structure of this SCCS is as follows: one element contains a single atom from the intersection of the two events, another the remainder of \(A_1 \cup B_1\) as well as one atom not belonging to either of the two events, while the third element of the SCCS contains the rest of the atoms of the algebra at stage 10. We can therefore produce a 0-type SCCS for A and B at stage 10: {{2}, {0, 3, 8, 9}, {1, 4, 5, 6, 7}}.

Example 3 n = 12, A = {2, 4, 6, 8, 9, 10, 11}, B = {1, 3, 6, 10, 11}. We see that a′ = 4, b′ = 2 and k = 3, so we will analyse the pair \(A_1\) = {0, 1, 2, 3, 4, 5, 6}, \(B_1\) = {4, 5, 6, 7, 8}. We also see that \(A_1\) and \(B_1\) were logically independent at stage 11, but were not correlated at that stage. Therefore the pair \(A_1, B_1\) appears from below at stage 12. Notice that z = 3. Therefore we see that z > b′ and a′ > k, which means we can use the method from case 2 of lemma 7 to construct a 0-type common cause, whose complement consists of 1 atom from \(A_1 \setminus B_1\) and 1 atom from \((A_1 \cup B_1)^\perp\). Going back to A and B, we see that the role of the complement of our common cause can be fulfilled by \({C}^{\perp} = \{0,2\}\). Therefore C = {1, 3, 4, 5, 6, 7, 8, 9, 10, 11} is a 0-type common cause of the correlation between A and B at stage 12.Footnote 10

Proofs of Lemmas 1–3

Proof (of lemma 1) If P is uniform, then \(\langle S, P \rangle\) has no 0-probability atoms, which means that \(S = S^+\) and \(P = P^+\). Therefore \(P^+\) is uniform, so (by theorem 1) \(\langle S^+ , P^+ \rangle\) (and, consequently, \(\langle S, P\rangle\)) is causally up-to-3-closed w.r.t. \(L_{ind}^+\). But in a space with no 0-probability atoms \(L_{ind} = L_{ind}^+\), therefore \(\langle S, P \rangle\) is also causally up-to-3-closed w.r.t. \(L_{ind}\). \(\square\)

The next two proofs will require "jumping" from \(\langle S^+, P^+ \rangle\) to \(\langle S, P \rangle\) and vice versa. We will now have to be careful about the distinction between proper and improper SCC(S)s. Some preliminary remarks are in order. Let \(A \in S\). As before, we can think of A as a set of atoms of S. Let \(A^+\) be the set of non-zero probability atoms in A:

$$ A^{+} := A \setminus \{a \mid a \, \hbox{is an atom of} \, S \,\hbox{and} \, P(a) = 0 \}. $$

Notice that

$$ P(A) = \sum_{a \in A} P(a) = \sum_{a \in A^+} P(a) = P(A^+) = P^+(A^+). \quad (2) $$

Suppose \(A,B,C \in S\). From (2) it follows that if A, B are correlated in \(\langle S,P \rangle\), then \(A^+, B^+\) are correlated in \(\langle S^+,P^+ \rangle\). Similarly, for any \(D \in S\), \(P(D \mid C) = P^+(D^+ \mid C^+)\). So, if C screens off the correlated events A, B in \(\langle S, P \rangle\), then \(C^+\) screens off the correlated events \(A^+, B^+\) in \(\langle S^+, P^+ \rangle\). Also, if a family \(\mathbf{C} = \{C_i\}_{i \in I}\) satisfies the statistical relevance condition w.r.t. A, B in \(\langle S, P \rangle\), then the family \(\mathbf{C^+} = \{C_i^+\}_{i \in I}\) satisfies the statistical relevance condition w.r.t. \(A^+, B^+\) in \(\langle S^+, P^+ \rangle\). If \(\mathbf{C} = \{C_i\}_{i \in \{0,\ldots,n-1\}}\) is a proper SCCS of size n for the correlation between events A, B in \(\langle S, P \rangle\), then all its elements differ from both A and B by more than a measure zero event. It follows that in such a case \(\mathbf{C^+} = \{C_i^+\}_{i \in \{0,\ldots,n-1\}}\) is a proper SCCS of size n for the correlation between events \(A^+, B^+\) in \(\langle S^+, P^+ \rangle\).

Proof (of lemma 2) Since \(P^+\) is not uniform, by theorem 1 \(\langle S^+ , P^+ \rangle\) is not causally up-to-3-closed w.r.t. \(L_{ind}^+\) (and, consequently, \(L_{ind}\)).
Then there exist logically independent, correlated events A +, B + in S + which do not have a proper SCCS of size at most 3 in \(\langle S^+, P^+ \rangle\). The two events are also logically independent and correlated in \(\langle S, P \rangle\); it is easy to show that in \(\langle S, P \rangle\) the pair \(\langle A^+, B^+ \rangle\) also belongs both to L + ind and to L ind . We will show that \(\langle S, P \rangle\) also contains no proper SCCS of size at most 3 for these events. For suppose that for some \({m \in \{2,3\}, \,\mathbf{C} = \{C_i\}_{i \in {\mathbb N}, i < m}}\) was a proper SCCS of size m for the correlation between A + and B + in \(\langle S, P \rangle\). Then \({\mathbf{C^+} := \{C_i^+\}_{i \in {\mathbb N}, i < m}}\) would be a proper SCCS of size m for the correlation between A + and B + in \(\langle S^+, P^+ \rangle\), but by our assumption no such SCCSs exist. We infer that the correlated events A +, B + have no proper SCCS of size up to 3 in \(\langle S, P \rangle\), so the space \(\langle S, P \rangle\) is not causally up-to-3-closed w.r.t. either L ind or L + ind . \(\square\) (of lemma 3) Since P + is uniform, by theorem 1 \(\langle S^+ , P^+ \rangle\) is causally up-to-3-closed w.r.t. L + ind . We will first show that also \(\langle S, P \rangle\) is causally up-to-3-closed w.r.t. L + ind . Notice that if \(A,B \in S\) are correlated in \(\langle S, P \rangle\) and \(\langle A, B \rangle \in L_{ind}^+\), then \(A^+,B^+ \in S^+\) are correlated in \(\langle S^+, P^+ \rangle\) and \(\langle A^+, B^+ \rangle \in L_{ind}^+\). We know that in that case there exists in \(\langle S^+, P^+ \rangle\) a proper SCCS of size 2 or 3 for A + and B +. If we add the 0-probability atoms of S to one of the elements of the SCCS, we arrive at a proper SCCS of size 2 or 3 for \(A,B \in S\). It remains to consider correlated events \(A,B\in S\) such that \(\langle A, B \rangle \in L_{ind}\) but \(\langle A, B \rangle \notin L_{ind}^+\). In such a case at least one of the probabilities from definition 6 has to be equal to 0. It is easy to show that, since we know the two events are correlated, it can only be the case that \(P(A \cap {B}^{\perp}) = 0\) or \(P(B \cap {A}^{\perp}) = 0\); equivalently, \(A^+ \subseteq B^+\) or \(B^+ \subseteq A^+\). It may happen that A + = B +. Let us first deal with the case of a strict inclusion; suppose without loss of generality that \(A^+ \subset B^+\). If \(| B^+ \setminus A^+| > 1\), take an event C such that \(A^+ \subset C \subset B^+\). Since both inclusions in the last formula are strict, in such a case C is a proper statistical common cause for A and B. Notice that since \(\langle A,B \rangle \in L_{ind}\), from the fact that \(A^+ \subset B^+\) it follows that A ≠ A +. Therefore, if \(| B^+ \setminus A^+| = 1\), put C = A +. Such a C is an improper statistical common cause of A and B. The last case is that in which A + = B +. From the fact that A and B are logically independent it follows that \(A \setminus B^+ \neq \emptyset\) and \(B \setminus A^+ \neq \emptyset\). Therefore A ≠ A + and B ≠ B +. We can thus put C = A + (=B +) to arrive at an improper statistical common cause of A and B. When \(A^+ \subseteq B^+\), it is also impossible to find (even improper) SCCSs of size 3 for A and B. For suppose \(\mathbf{C} = \{C_i\}_{i \in \{0,1,2\}}\) was an SCCS for A and B. 
If for some \(j \neq l; j,l \in \{0,1,2\}\) it is true that C j ∩ A + = C l ∩ A + = ∅, then P(A | C j ) = 0 = P(A | C l ) and so \(\mathbf{C}\) cannot be an SCCS of A and B due to the statistical relevance condition being violated. Thus at least two elements of \(\mathbf{C}\) have to have a nonempty intersection with A +. Every such element C j screens off A from B. Since by our assumption \(A^+ \subseteq B^+\), it follows that P(AB | C j ) = P(A|C j ). Therefore the screening off condition takes the form of P(A|C j ) = P(A|C j )P(B|C j ); and so P(B|C j ) = 1. Since we already established that C contains at least two elements which can play the role of C j in the last reasoning, it follows that in this case the statistical relevance condition is violated too; all options exhausted, we conclude that no SCCSs of size 3 exist for A and B when \(A^+ \subseteq B^+\). The argument from this paragraph can also be applied to show that if \(A^+ \subseteq B^+\) and \(| B^+ \setminus A^+| \leqslant 1\), no proper statistical common causes for the two events exist. \(\square\) The "proper" / "improper" common cause distinction and the relations of logical independence A motivating intuition for the distinction between proper and improper common causes is that a correlation between two events should be explained by a different event. The difference between an event A and a cause C can manifest itself on two levels: the algebraical (A and C being not identical as elements of the event space) and the probabilistic (\(P(A \cap {C}^{\perp}\)) or \(P(C \cap {A}^{\perp})\) being not equal to 0). As per definition 7, in the case of improper common causes the difference between them and at least one of the correlated events (say, A) is only algebraical. For some this is intuitively enough to dismiss C as an explanation for any correlation involving A. One could, however, have intuitions to the contrary. First, events which differ by a measure zero event can be conceptually distinct. Second, atoms with probability 0 should perhaps be irrelevant when it comes to causal features of the particular probability space, especially when the independence relation considered is defined without any reference to probability. If the space is causally up-to-n-closed w.r.t. L ind , adding 0-probability atoms should not change its status. But consider what happens when we add a single 0-probability atom to a space which is up-to-2-closed (common cause closed) w.r.t. L ind by Proposition 3 from Gyenis and Rédei (2004): the space \(\langle S_5, P_u \rangle\), where S 5 is the Boolean algebra with 5 atoms \(\{0,1,\ldots,4\}\) and P u is the uniform measure on S 5. Label the added 0-probability atom as the number 5. It is easy to check that the pair \(\langle\{3,4\}, \{4,5\} \rangle\) belongs to L ind , is correlated and has no proper common cause. The only common cause for these events, {4}, is improper. Therefore the space is not common cause closed w.r.t. L ind in the sense of Gyenis and Rédei (2004) and our definition 8; this change in the space's status has been accomplished by adding a single atom with probability 0. It should be observed that the pair of events belongs to L ind , but not to L + ind ; and that the bigger space is still common cause closed with respect to L + ind (although not L ind ). In general, suppose \(\langle S, P \rangle\) is a space without any 0 probability atoms, causally up-to-n-closed w.r.t. 
L ind , and suppose some "extra" atoms were added, so that a new space \(\langle S', P' \rangle\) is obtained, where for any atom a of S′, $$P'(a) =\left\{\begin{array}{ll}P(a) &\quad\hbox{for }\, a \in S\\0&\quad \hbox{for }\, a \in S'-S\end{array}\right.$$ It is easy to prove, using the techniques employed in the proof of lemma 3, that all "new" correlated pairs in \(\langle S', P' \rangle\) belonging to L ind have (sometimes only improper) SCCSs of size up to n. This is also true in the special case of \(\langle S_5, P_u \rangle\) augmented with some 0 probability atoms. Perhaps, then, we should omit the word "proper" from the requirements for a probability space to be causally up-to-n-closed (definition 8)? This, however, is only one half of the story. Suppose the definition of causal up-to-n-closedness were relaxed in the above way, so that explaining correlations by means of improper SCC(S)s would be admissible. Consider a space \(\langle S^+, P^+ \rangle\),Footnote 11 in which S + has at least 4 atoms and P + is not the uniform measure on S +. This space, as we know, is not causally up-to-3 closed in the sense of definition 8, but it is also not causally up-to-3 closed in the "relaxed" sense, since the difference between proper and improper common causes can only manifest itself in spaces with 0 probability atoms.Footnote 12 When a new 0 probability atom m is added, every hitherto unexplained correlation between some events A and B gains an SCC in the form of the event \(C: = A \cup \{ m\}\). All such SCCs are, of course, improper. In short, the situation is this: if proper SCC(S)s are required, this leads to somewhat unintuitive consequences regarding causal up-to-n-closedness w.r.t. L ind . Omitting the requirement results, however, in unfortunate effects regarding causal up-to-n-closedness no matter whether L ind or L + ind is considered. We think the natural solution is to keep the requirement of proper SCC(S)s in the definition of causal up-to-n-closedness, but, of the two independence relations, regard L + ind as more interesting. It is the rightmost column of Table 1 that contains the most important results of this paper, then; this is fortunate, since they are a "pure" implication and an equivalence, without any special disclaimers. Table 1 The main results of the paper Other Independence Relations So far, the relation of independence under consideration—determining which correlations between two events require explanation—was either the relation of logical independence or its derivative L + ind . Let us consider using a 'broader' relation \(R_{ind} \supset L_{ind}\), which apart from all pairs of logically independent events would also include some pairs of logically dependent events. (The spaces under consideration are still finite.) For clarity, assume the space does not have any 0-probability atoms (so that e.g. L ind = L + ind ), but make no assumptions regarding the uniformity of the measure. Will we have more correlations to explain? If so, will they have common causes? First, observe that if A or B is \(\mathbf{1}_S\), and so P(A) or P(B) equals 1, there is no correlation. In the sequel assume that neither A nor B equals \(\mathbf{1}_S\). Second, note that if A ∩ B = \( \emptyset \), then P(AB) = 0 and no (positive) correlation arises. Third, if \({A}^{\perp} \cap {B}^{\perp} = \emptyset\), there is again no positive correlation. 
This is because in such a case \(P(AB)+P(A{B}^{\perp})+P({A}^{\perp} B)=1\), and since \(P(A)P(B)=P(AB)[P(AB)+P(A{B}^{\perp})+P({A}^{\perp} B)]+P(A{B}^{\perp})P({A}^{\perp} B) \geqslant P(AB)\), the events are not correlated. Consider the last possible configuration in which the events A, B are logically dependent: namely, that one is a subset of the other. Suppose \(A \subseteq B\). Since by our assumption both P(A) and P(B) are strictly less than 1, the events will be correlated. It can easily be checkedFootnote 13 that when \(A \subseteq B\) but \(B \neq \mathbf{1}_S\), any C which screens off the correlation and has a non-empty intersection with A (and so \(P(A \mid C) \neq 0\)) has to be a subset of B (because \(P(B \mid C) = 1\)). And since it cannot be that both C and \({C}^{\perp}\) are subsets of B, then if C is a common cause, it is necessary that \({C}^{\perp} \cap A = \emptyset\). In the other direction, it is evident that if \(A \subseteq C \subseteq B\), both C and \({C}^{\perp}\) screen off the correlation and the statistical relevance condition is satisfied. The only pitfall is that the definition of a common cause requires that it be distinct from both A and B, and so none exist when b′ = 1. To summarise, the only correlated pairs of logically dependent events A, B are those in which one of the events is included in the other. Assume \(A \subseteq B\). Then: if b′ = 1, there is no common cause of the correlation; otherwise the common causes of the correlation are precisely all the events C such that \(A \subset C \subset B\). Lastly, notice that in a space \(\langle S_n, P_u \rangle\) (\(S_n\) being the Boolean algebra with n atoms and \(P_u\) being the uniform measure) we could proceed in the opposite direction and restrict rather than broaden the relation \(L_{ind}\). If we take the independence relation \(R_{ind}\) to be the relation of logical independence restricted to the pairs which appear from above or below at stage n, then our probability space is common cause closed w.r.t. \(R_{ind}\).

A Slight Generalisation

In this section we will show that the results of this paper, which have only concerned classical probability spaces so far, are also meaningful for finite non-classical spaces. We go back to our former practice: by "common cause" we will always mean "proper common cause"; similarly with "common cause system".

(Non-classical probability space) An ortholattice L is orthomodular if \(\forall_{a,b\in L} \, a \leqslant b \Rightarrow b = a \vee ({a}^{\perp} \wedge b)\). Two elements a and b of L are orthogonal iff \(a \leqslant {b}^{\perp}\). An additive state on an orthomodular lattice (OML) L is a map P from L to [0,1] such that \(P(\mathbf{1}_L)=1\) and for any \(A \subseteq L\) such that A consists of mutually orthogonal elements, if \(\bigvee A\) exists, then \(P(\bigvee A) = \sum_{a \in A} P(a)\).Footnote 14 A non-classical probability space is a pair \(\langle L, P \rangle\), where L is a non-distributive OML and P is an additive state on L.Footnote 15

A relation of compatibility needs to be introduced. Only compatible events may be correlated; and a common cause needs to be compatible with both effects. We use the word "compatibility" because it was the one used in (Hofer-Szabó et al. 2000); "commutativity" is used in its place (see e.g. Kalmbach 1983).

Definition 13 (Compatibility, correlation, SCC(S) in non-classical spaces) Let \(\langle L, P \rangle\) be a non-classical probability space and \(a,b \in L\). Event a is said to be compatible with b (aCb) if \(a = (a \wedge b) \vee (a \wedge {b}^{\perp})\).
Events a, b are said to be correlated if aCb and the events are correlated in the sense of definition 3. The event \(c \in L\) is a proper statistical common cause of a and b if it fulfills the requirements from definition 7, differs from both a and b by more than a measure zero event, and is compatible both with a and with b (of course, \(c^\perp\) will then be compatible, too). A partition \(\{C_{i}\}_{i \in I }\) of \(\mathbf{1}_{L}\) is a proper statistical common cause system of size n of a and b if it satisfies the requirements of definition 7, all its elements differ from both a and b by more than a measure zero event, and all its elements are compatible both with a and b. The notion of causal up-to-n-closedness is then immediately transferred to the context of non-classical probability spaces by substituting "non-classical" for "classical" in definition 8 (p. 7). This leads us to the result of this section, which can be phrased colloquially in this way: a finite non-classical probability space is causally up-to-n-closed if and only if all its blocks are causally up-to-n-closed.

Suppose \(\langle L, P\rangle\) is a finite non-classical probability space. Suppose all blocks of L have at least 4 atoms a such that P(a) > 0. Then \(\langle L, P\rangle\) is causally up-to-n-closed w.r.t. \(L_{ind}\) if and only if for any block B of L, the classical probability space \(\langle B, P|_{B} \rangle\) is causally up-to-n-closed w.r.t. \(L_{ind}\).

Proof Suppose \(\langle L, P\rangle\) is causally up-to-n-closed w.r.t. \(L_{ind}\). Let B be a block of L; let a, b be correlated and logically independent events in \(\langle B, P|_{B} \rangle\). Then a, b are correlated and logically independent events in \(\langle L, P \rangle\), and so have an SCCS of size up to n in \(\langle L, P \rangle\). But since all elements of the SCCS have to be compatible with a and b, they also have to belong to B. And so the pair has an SCCS of size up to n in \(\langle B, P|_{B} \rangle\). For the other direction, suppose that for any block B of L, the space \(\langle B, P|_B \rangle\) is causally up-to-n-closed w.r.t. \(L_{ind}\). Let a, b be correlated and logically independent events in \(\langle L, P \rangle\). Being correlated entails being compatible; and so a and b belong to a common block B. Since the ordering on L is induced by the orderings of the elements of B, a and b are also logically independent in B. Therefore by our assumption they have an SCCS of size up to n in \(\langle B, P|_{B} \rangle\). This SCCS is a partition of unity of L, and so satisfies definition 13. Thus a and b have an SCCS of size up to n in \(\langle L, P \rangle\). \(\square\)

We will now present a few examples of causal closedness and up-to-3-closedness of non-classical probability spaces. Figure 1 depicts two non-classical probability spaces causally closed w.r.t. \(L_{ind}^+\). All blocks have exactly 5 atoms of non-zero probability and each such atom receives probability \(\frac{1}{5}\), and so each block is causally closed w.r.t. \(L_{ind}^+\). The left space is also causally closed w.r.t. \(L_{ind}\). Fig. 1 Greechie diagrams of two OMLs which, if supplied with the state which assigns the number \(\frac{1}{5}\) to all "white" atoms and 0 to both "black" atoms, form non-classical probability spaces which are causally up-to-2-closed (or simply "causally closed", to use the term of Gyenis and Rédei (2004)) w.r.t. \(L_{ind}^+\). The left OML in Fig. 2 has two blocks and the measure of the space is uniform on both of them, therefore the space is causally up-to-3-closed w.r.t. \(L_{ind}\).
This however is not the case with the right one: its measure is not uniform on the block with four atoms, and so there is a correlation among some two logically independent events from that block which has neither a common cause nor an SCCS of size 3. (One of these events will contain one "dotted" atom and the single "white" atom of the block; the other will contain two "dotted" atoms.) Therefore the space is not causally up-to-3-closed w.r.t. L ind . In these OMLs "white" atoms have probability \(\frac{1}{7}\) and the "dotted" ones \(\frac{2}{7}\). The space depicted on the left is causally up-to-3-closed, but the one on the right is not Conclusions and Problems The main result of this paper is that in finite classical probability spaces with the uniform probability measure (and so no atoms with probability 0) all correlations between logically independent events have an explanation by means of a common cause or a common cause system of size 3. A few remarks are in order. First, notice that the only SCCSs employed in our method described in Sect. 3.2 are 0-type SCCSs, and that they are required only when 'translating' the explanation from a smaller space to a bigger one. Sometimes (if the common cause we found in the smaller space is 0-type; see example 3 above) such a translation can succeed without invoking the notion of SCCS at all. Second, #-type common causes, which some would view as 'genuinely indeterministic', are never required to explain a correlation – that is, a correlation can always be explained by means of a 0-type SCCS, a 0-type statistical common cause, or a 1-type statistical common causeFootnote 16. Therefore one direction of the equivalence in theorem 2 can be strengthened: Let \(\langle S,P \rangle\) be a finite classical probability space. Let S + be the unique Boolean algebra whose set of atoms consists of all the non-zero probability atoms of S and let P + be the restriction of P to S +. Suppose S + has at least 4 atoms. If P + is the uniform probability measure on S +, then any pair of positively correlated and logically independent events in \(\langle S, P \rangle\) has a 1-type statistical common cause, a 0-type statistical common cause or a 0-type statistical common cause system of size 3 in \(\langle S, P \rangle\). The results of Gyenis and Rédei concerning the unique nature of the space with 5 atoms could lead one to think that in a sense it is not easy to find a space in which all correlations would be explained by Reichenbachian notions. We have shown that this is not the case—already on the level of finite spaces there are infinitely many such spaces. Moreover, recent results on causal completability show that in the case of classical probability spaces one can always extend (preserving the measure) the given (possibly infinite) space to a space which is causally closed Footnote 17 and in many cases such an extension to a finite causally up-to-3-closed space is possible. Footnote 18 One can think of extending a probability space while preserving the measure as of taking more factors into account when explaining some given family of correlations. We now know that it is always possible to extend the initial space so that all correlations are explained (in the Reichenbachian style) in the extension; sometimes (more often than thought before) all the explanations are there in the original space. So, we know much about explaining correlations in classical probability spaces using Reichenbachian notions: it is surprisingly easy! 
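Since the spaces involved are small, claims of this kind can also be confirmed by exhaustive search. The following sketch (our own check; the helper names are not from the paper) enumerates all correlated pairs of logically independent events at stage 6 under the uniform measure and looks for a proper SCC or a proper SCCS of size 3 for each:

from fractions import Fraction
from itertools import combinations, product

N = 6                      # stage to check; uniform measure assumed
ATOMS = frozenset(range(N))

def P(e):
    return Fraction(len(e), N)

def cond(e, given):
    return Fraction(len(e & given), len(given))

def screens(a, b, c):
    return cond(a & b, c) == cond(a, c) * cond(b, c)

def is_scc(a, b, c):
    # Reichenbachian common cause: C and its complement both screen off,
    # and C raises the probability of both A and B; properness: C differs from A, B
    cc = ATOMS - c
    if c in (a, b) or cc in (a, b):
        return False
    return (screens(a, b, c) and screens(a, b, cc)
            and cond(a, c) > cond(a, cc) and cond(b, c) > cond(b, cc))

def is_sccs3(a, b, cells):
    # SCCS of size 3: every cell screens off A from B and the statistical
    # relevance condition holds; cells must differ from A and B
    if any(c in (a, b) for c in cells):
        return False
    if not all(screens(a, b, c) for c in cells):
        return False
    return all((cond(a, c) - cond(a, d)) * (cond(b, c) - cond(b, d)) > 0
               for c, d in combinations(cells, 2))

def partitions3(atoms):
    # all partitions of the atoms into 3 non-empty cells
    atoms, seen = sorted(atoms), set()
    for labels in product(range(3), repeat=len(atoms)):
        if set(labels) != {0, 1, 2}:
            continue
        cells = frozenset(frozenset(x for x, l in zip(atoms, labels) if l == i)
                          for i in range(3))
        if cells not in seen:
            seen.add(cells)
            yield list(cells)

events = [frozenset(s) for r in range(1, N) for s in combinations(range(N), r)]
logically_independent = lambda a, b: all((a & b, a - b, b - a, ATOMS - (a | b)))

unexplained = [(set(a), set(b)) for a, b in combinations(events, 2)
               if logically_independent(a, b) and P(a & b) > P(a) * P(b)
               and not any(is_scc(a, b, c) for c in events)
               and not any(is_sccs3(a, b, cells) for cells in partitions3(ATOMS))]
print(unexplained)   # expected to be empty at stage 6, in line with theorem 2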
That correlations in classical probability spaces are so easy to explain strengthens the argument (which perhaps hardly needed strengthening) that a good account of causality (and causal explanation) inspired by Reichenbach should introduce something more than just bare-bones probability conditions. The account needs to be philosophically fleshed out. Another direction is investigating the fate of Reichenbach's principle in non-classical probability spaces common in physics: in these cases decidedly less is known. Footnote 19 The last option would be to move the discussion to the more general context of random variables, as opposed to events. First steps in this direction have been provided by Gyenis and Rédei (2010).

The phrasing of the paper was in fact stronger, omitting the assumption about non-0 probabilities on the atoms (due to a missed special sub-case in the proof of case 3 of proposition 4 on p. 1299). The issue is connected to the distinction between proper and improper common causes and is discussed below in Sect. 4. See Wroński and Marczyk (2010) and Hofer-Szabó and Rédei (2006). The notion was introduced in Hofer-Szabó and Rédei (2006). See Hofer-Szabó and Rédei (2004). It is easy to verify that if S has 3 atoms or less, then \(\langle S, P \rangle\) contains no correlations between logically independent events. Note that the space at stage n + 1 is not to be thought of as an extension of the space at stage n in the sense of the latter being embeddable in the former; we propose no measure-preserving homomorphism providing such an embedding (and indeed no such homomorphism exists between any two adjacent stages except stages 1 and 2). Thus when we speak of "the same" events being present at different stages, we simply mean that they are equal as sets of natural numbers—a property useful in the proofs to follow. We believe the conceptual difference between necessity and probability 1 is not important for the present topic. The fact that a correlation has an SCCS of size 3 does not necessarily mean it has no statistical common causes. Recall that we assume the probability space under consideration has at least 4 atoms. Incidentally, if we wanted to find a 1-type common cause for A and B at stage 12, we could put C = {2,11}, in which case \(P(A \mid C) = 1\). However, this is not always possible and there are cases in which only 0-type common causes (or only 1-type common causes) are possible. For a concrete example, take the pair {{0, 1, 2, 3, 4}, {4, 5}}, which appears from below at stage 11 and has only 0-type common causes at that stage. Recall that by our convention such a space has no 0 probability atoms. This is because the spaces we are dealing with are finite—so that we can be sure the Boolean algebras considered do, indeed, have atoms—and we already require an SCC for two events A and B to be distinct from both A and B, see definition 4, p. 5. See the last paragraph of the proof of lemma 3, p. 20. Of course, in the finite case—since a lattice always contains all suprema of doubletons by virtue of being a lattice—it would suffice to say that for any two orthogonal elements a and b, P(a ∨ b) = P(a) + P(b). Notice that if L were distributive, \(\langle L, P \rangle\) would be a classical probability space. But #-type common causes do exist: e.g. in the space with 12 atoms and the uniform measure the pair of events { A, B }, where A = {0, 1, 2, 3, 4, 5, 6}, B = {4, 5, 6, 7, 8} (the same we dealt with in example 3, p.
18) has, apart from both 0- and 1-type common causes, a #-type common cause of shape \(C = \{1, 2, 4, 5, 7, 9\}, \,{C}^{\perp} = \{0, 3, 6, 8, 10, 11\}; P(A \mid C) = \frac{2}{3}, \,P(B \mid C) = \frac{1}{2}, \,P(A \mid {C}^{\perp}) = \frac{1}{2}, \,P(B \mid {C}^{\perp}) = \frac{1}{3}.\) This was proven in Chapter 7 of Wroński (2010), but see also Marczyk and Wroński (2012) for more results on the topic and Gyenis and Rédei (2011) for a more general construction. See Chapter 7 of Wroński (2010) and Marczyk and Wroński (2012). See e.g. Rédei and Summers (2002). Arntzenius, F. (1992). The common cause principle. PSA: Proceedings of the Biennial meeting of the philosophy of science association (vol. 1992). Volume Two: Symposia and Invited Papers. Gyenis, B., & Rédei, M. (2004). When can statistical theories be causally closed? Foundations of Physics, 34(9), 1284–1303. Gyenis, B., & Rédei, M. (2010). Causal completeness in general probability theories. In M. Suárez (Ed.), Probabilities, Causes, and Propensities in Physics. Synthese Library, Springer. Gyenis, Z., & Rédei, M. (2011). Characterizing common cause closed probability spaces. Philosophy of Science, 78(3), 393–409. Hofer-Szabó, G., & Rédei, M. (2004). Reichenbachian common cause systems. International Journal of Theoretical Physics, 43(7/8), 1819–1826. Hofer-Szabó, G., & Rédei, M. (2006). Reichenbachian common cause systems of arbitrary finite size exist. Foundations of Physics, 36(5), 745–756. Hofer-Szabó, G., Rédei, M., & Szabó, L. E. (1999). On Reichenbach's common cause principle and Reichenbach's notion of common cause. The British Journal for the Philosophy of Science, 50(3), 377–399. Hofer-Szabó, G., Rédei, M., & Szabó, L. E. (2000). Reichenbach's common cause principle: Recent results and open questions. Reports on Philosophy, 20, 85–107. Kalmbach, G. (1983). Orthomodular lattices. London: Academic Press. Marczyk, M., & Wroński, L. (2012). Completion of the causal completability problem. Accepted in the British Journal for the Philosophy of Science. Rédei, M., & Summers, S. (2002). Local primitive causality and the common cause principle in quantum field theory. Foundations of Physics, 32(3), 335–355. Reichenbach, H. (1971). The direction of time. University of California Press. Reprint of the 1956 edition. van Fraassen, B. C. (1982). The Charybdis of realism: Epistemological implications of Bell's inequality. Synthese, 52, 25–38. Reprinted with additions in: Cushing, J. T., & McMullin, E. (Eds.), (1989). Philosophical consequences of quantum theory. Reflections on Bell's theorem. University of Notre Dame Press, Indiana. Wroński, L. (2010). The common cause principle. Explanation via screening off, PhD thesis, Jagiellonian University, Kraków, archived at jagiellonian.academia.edu/LeszekWroński. http://www.academia.edu/267382/The_Common_Cause_Principle Wroński, L., & Marczyk, M. (2010). Only countable Reichenbachian common cause systems exist. Foundations of Physics, 40, 1155–1160. The joint research was aided by The Polish Ministry for Science and Higher Education (grant no. 668/N-RNP-ESF/2010/0). L. Wroński's research was aided by a PhD grant "Zasada wspólnej przyczyny" ("The Common Cause Principle") of The Polish Ministry for Science and Higher Education (grant no. N N101 131936) and by the Foundation for Polish Science START Fellowship. We would like to thank the referees for their insightful comments. 
Department of Philosophy, Jagiellonian University, Grodzka 52, 31-044, Kraków, Poland
Leszek Wroński & Michał Marczyk
Correspondence to Leszek Wroński.
Open Access This article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.
Wroński, L., Marczyk, M. A New Notion of Causal Closedness. Erkenntnis 79 (Suppl 3), 453–478 (2014). https://doi.org/10.1007/s10670-013-9457-0
Differentiation of Functions of Several Variables

21 Functions of Several Variables

Recognize a function of two variables and identify its domain and range.
Sketch a graph of a function of two variables.
Sketch several traces or level curves of a function of two variables.
Recognize a function of three or more variables and identify its level surfaces.

Our first step is to explain what a function of more than one variable is, starting with functions of two independent variables. This step includes identifying the domain and range of such functions and learning how to graph them. We also examine ways to relate the graphs of functions in three dimensions to graphs of more familiar planar functions.

Functions of Two Variables

The definition of a function of two variables is very similar to the definition for a function of one variable. The main difference is that, instead of mapping values of one variable to values of another variable, we map ordered pairs of variables to another variable. A function of two variables maps each ordered pair \((x, y)\) in a subset \(D\) of the real plane \(\mathbb{R}^2\) to a unique real number \(z\). The set \(D\) is called the domain of the function. The range of the function is the set of all real numbers \(z\) for which there is at least one ordered pair \((x, y) \in D\) such that \(f(x, y) = z\), as shown in the following figure. The domain of a function of two variables consists of ordered pairs \((x, y)\). Determining the domain of a function of two variables involves taking into account any domain restrictions that may exist. Let's take a look.

Domains and Ranges for Functions of Two Variables

Find the domain and range of each of the following functions: This is an example of a linear function in two variables.
There are no values or combinations of and that cause to be undefined, so the domain of is To determine the range, first pick a value for We need to find a solution to the equation or One such solution can be obtained by first setting which yields the equation The solution to this equation is which gives the ordered pair as a solution to the equation for any value of Therefore, the range of the function is all real numbers, or For the function to have a real value, the quantity under the square root must be nonnegative: This inequality can be written in the form Therefore, the domain of is The graph of this set of points can be described as a disk of radius centered at the origin. The domain includes the boundary circle as shown in the following graph. The domain of the function is a closed disk of radius 3. To determine the range of we start with a point on the boundary of the domain, which is defined by the relation It follows that and If (in other words, then This is the maximum value of the function. Given any value c between we can find an entire set of points inside the domain of such that Since this describes a circle of radius centered at the origin. Any point on this circle satisfies the equation Therefore, the range of this function can be written in interval notation as Find the domain and range of the function The domain is the shaded circle defined by the inequality which has a circle of radius as its boundary. The range is Determine the set of ordered pairs that do not make the radicand negative. Graphing Functions of Two Variables Suppose we wish to graph the function This function has two independent variables and one dependent variable When graphing a function of one variable, we use the Cartesian plane. We are able to graph any ordered pair in the plane, and every point in the plane has an ordered pair associated with it. With a function of two variables, each ordered pair in the domain of the function is mapped to a real number Therefore, the graph of the function consists of ordered triples The graph of a function of two variables is called a surface. To understand more completely the concept of plotting a set of ordered triples to obtain a surface in three-dimensional space, imagine the coordinate system laying flat. Then, every point in the domain of the function has a unique associated with it. If is positive, then the graphed point is located above the if is negative, then the graphed point is located below the The set of all the graphed points becomes the two-dimensional surface that is the graph of the function Create a graph of each of the following functions: In (Figure), we determined that the domain of is and the range is When we have Therefore any point on the circle of radius centered at the origin in the maps to in If then so any point on the circle of radius centered at the origin in the maps to in As gets closer to zero, the value of z approaches 3. When then This is the origin in the If is equal to any other value between then equals some other constant between The surface described by this function is a hemisphere centered at the origin with radius as shown in the following graph. Graph of the hemisphere represented by the given function of two variables. This function also contains the expression Setting this expression equal to various values starting at zero, we obtain circles of increasing radius. 
The minimum value of this function is zero (attained when $x=y=0$). When $x=0,$ the function becomes $z=y^{2},$ and when $y=0,$ then the function becomes $z=x^{2}.$ These are cross-sections of the graph, and are parabolas. Recall from Introduction to Vectors in Space that the name of the graph of is a paraboloid. The graph of appears in the following graph. A paraboloid is the graph of the given function of two variables. A profit function for a hardware manufacturer is given by $f(x,y)=16-{(x-3)}^{2}-{(y-2)}^{2},$ where $x$ is the number of nuts sold per month (measured in thousands) and $y$ represents the number of bolts sold per month (measured in thousands). Profit is measured in thousands of dollars. Sketch a graph of this function. This function is a polynomial function in two variables. The domain of consists of coordinate pairs that yield a nonnegative profit: This is a disk of radius $4$ centered at $(3,2).$ A further restriction is that both must be nonnegative. When and Note that it is possible for either value to be a noninteger; for example, it is possible to sell thousand nuts in a month. The domain, therefore, contains thousands of points, so we can consider all points within the disk. For any we can solve the equation $$\begin{aligned} 16-{(x-3)}^{2}-{(y-2)}^{2} &= z\\ {(x-3)}^{2}+{(y-2)}^{2} &= 16-z. \end{aligned}$$ Since we know that so the previous equation describes a circle with radius centered at the point Therefore, the range of is The graph of is also a paraboloid, and this paraboloid points downward as shown. The graph of the given function of two variables is also a paraboloid. Level Curves If hikers walk along rugged trails, they might use a topographical map that shows how steeply the trails change. A topographical map contains curved lines called contour lines. Each contour line corresponds to the points on the map that have equal elevation ((Figure)). A level curve of a function of two variables is completely analogous to a contour line on a topographical map. (a) A topographical map of Devil's Tower, Wyoming. Lines that are close together indicate very steep terrain. (b) A perspective photo of Devil's Tower shows just how steep its sides are. Notice the top of the tower has the same shape as the center of the topographical map. Given a function and a number in the range of level curve of a function of two variables for the value is defined to be the set of points satisfying the equation Returning to the function we can determine the level curves of this function. The range of is the closed interval First, we choose any number in this closed interval—say, The level curve corresponding to is described by the equation To simplify, square both sides of this equation: Now, multiply both sides of the equation by and add to each side: This equation describes a circle centered at the origin with radius Using values of between yields other circles also centered at the origin. If then the circle has radius so it consists solely of the origin. (Figure) is a graph of the level curves of this function corresponding to Note that in the previous derivation it may be possible that we introduced extra solutions by squaring both sides.
This is not the case here because the range of the square root function is nonnegative. Level curves of the function using and corresponds to the origin). A graph of the various level curves of a function is called a contour map. Making a Contour Map Given the function find the level curve corresponding to Then create a contour map for this function. What are the domain and range of To find the level curve for we set and solve. This gives We then square both sides and multiply both sides of the equation by Now, we rearrange the terms, putting the terms together and the terms together, and add to each side: Next, we group the pairs of terms containing the same variable in parentheses, and factor from the first pair: Then we complete the square in each pair of parentheses and add the correct value to the right-hand side: Next, we factor the left-hand side and simplify the right-hand side: Last, we divide both sides by This equation describes an ellipse centered at The graph of this ellipse appears in the following graph. Level curve of the function corresponding to We can repeat the same derivation for values of less than Then, (Figure) becomes for an arbitrary value of (Figure) shows a contour map for using the values When the level curve is the point Contour map for the function using the values Find and graph the level curve of the function corresponding to The equation of the level curve can be written as which is a circle with radius centered at First, set and then complete the square. Another useful tool for understanding the graph of a function of two variables is called a vertical trace. Level curves are always graphed in the but as their name implies, vertical traces are graphed in the – or Consider a function with domain A vertical trace of the function can be either the set of points that solves the equation for a given constant or for a given constant Finding Vertical Traces Find vertical traces for the function corresponding to and First set in the equation This describes a cosine graph in the plane The other values of appear in the following table. Vertical Traces Parallel to the for the Function Vertical Trace for In a similar fashion, we can substitute the in the equation to obtain the traces in the as listed in the following table. The three traces in the are cosine functions; the three traces in the are sine functions. These curves appear in the intersections of the surface with the planes and as shown in the following figure. Vertical traces of the function are cosine curves in the (a) and sine curves in the (b). Determine the equation of the vertical trace of the function corresponding to and describe its graph. This function describes a parabola opening downward in the plane Set in the equation and complete the square. Functions of two variables can produce some striking-looking surfaces. The following figure shows two examples. Examples of surfaces representing functions of two variables: (a) a combination of a power function and a sine function and (b) a combination of trigonometric, exponential, and logarithmic functions. Functions of More Than Two Variables So far, we have examined only functions of two variables. However, it is useful to take a brief look at functions of more than two variables. Two such examples are In the first function, represents a point in space, and the function maps each point in space to a fourth quantity, such as temperature or wind speed. In the second function, can represent a point in the plane, and can represent time. 
The function might map a point in the plane to a third quantity (for example, pressure) at a given time The method for finding the domain of a function of more than two variables is analogous to the method for functions of one or two variables. Domains for Functions of Three Variables Find the domain of each of the following functions: For the function to be defined (and be a real value), two conditions must hold: The denominator cannot be zero. The radicand cannot be negative. Combining these conditions leads to the inequality Moving the variables to the other side and reversing the inequality gives the domain as which describes a ball of radius centered at the origin. (Note: The surface of the ball is not included in this domain.) Since the radicand cannot be negative, this implies and therefore that Since the denominator cannot be zero, or Which can be rewritten as which are the equations of two lines passing through the origin. Therefore, the domain of is Find the domain of the function Check for values that make radicands negative or denominators equal to zero. Functions of two variables have level curves, which are shown as curves in the However, when the function has three variables, the curves become surfaces, so we can define level surfaces for functions of three variables. Given a function and a number in the range of a level surface of a function of three variables is defined to be the set of points satisfying the equation Finding a Level Surface Find the level surface for the function corresponding to The level surface is defined by the equation This equation describes a hyperboloid of one sheet as shown in the following figure. A hyperboloid of one sheet with some of its level surfaces. Find the equation of the level surface of the function corresponding to and describe the surface, if possible. describes a sphere of radius centered at the point Set and complete the square. The graph of a function of two variables is a surface in and can be studied using level curves and vertical traces. A set of level curves is called a contour map. Vertical trace for or for Level surface of a function of three variables For the following exercises, evaluate each function at the indicated values. The volume of a right circular cylinder is calculated by a function of two variables, where is the radius of the right circular cylinder and represents the height of the cylinder. Evaluate and explain what this means. This is the volume when the radius is and the height is An oxygen tank is constructed of a right cylinder of height and radius with two hemispheres of radius mounted on the top and bottom of the cylinder. Express the volume of the cylinder as a function of two variables, find and explain what this means. For the following exercises, find the domain of the function. All points in the All real ordered pairs in the of the form Find the range of the functions. For the following exercises, find the level curves of each function at the indicated value of to visualize the given function. a hyperbola a line; line through the origin any constant The level curves are parabolas of the form For the following exercises, find the vertical traces of the functions at the indicated values of and y, and plot the traces. a curve in the with rulings parallel to the Find the domain of the following functions. All points in For the following exercises, plot a graph of the function. Use technology to graph Sketch the following by finding the level curves. Verify the graph using technology. 
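The "verify the graph using technology" steps above can be done with any plotting library. Here is a minimal sketch in Python (NumPy/Matplotlib); it uses the hemisphere $f(x,y)=\sqrt{9-x^{2}-y^{2}}$ from earlier in the chapter as an assumed example, and the grid size and contour levels are illustrative choices, not part of the exercises.

```python
import numpy as np
import matplotlib.pyplot as plt

# Assumed example: f(x, y) = sqrt(9 - x^2 - y^2), the hemisphere discussed earlier;
# its domain is the closed disk of radius 3 and its level curves are circles.
x = np.linspace(-3.5, 3.5, 400)
y = np.linspace(-3.5, 3.5, 400)
X, Y = np.meshgrid(x, y)
radicand = 9 - X**2 - Y**2
Z = np.where(radicand >= 0, np.sqrt(np.maximum(radicand, 0)), np.nan)  # undefined outside the disk

cs = plt.contour(X, Y, Z, levels=[0.5, 1.5, 2.5])  # three level curves f(x, y) = c
plt.clabel(cs)
plt.gca().set_aspect("equal")
plt.title("Level curves of f(x, y) = sqrt(9 - x**2 - y**2)")
plt.show()
```

Swapping in any of the functions from the exercises and adjusting the levels gives a quick check of a hand-drawn contour map.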
Describe the contour lines for several values of for The contour lines are circles. Find the level surface for the functions of three variables and describe it. a sphere of radius a hyperboloid of one sheet For the following exercises, find an equation of the level curve of that contains the point The strength of an electric field at point resulting from an infinitely long charged wire lying along the is given by where is a positive constant. For simplicity, let and find the equations of the level surfaces for A thin plate made of iron is located in the The temperature in degrees Celsius at a point is inversely proportional to the square of its distance from the origin. Express as a function of Refer to the preceding problem. Using the temperature function found there, determine the proportionality constant if the temperature at point Use this constant to determine the temperature at point Refer to the preceding problem. Find the level curves for and describe what the level curves represent. The level curves represent circles of radii and a plot of the various level curves of a given function function of two variables a function that maps each ordered pair in a subset of to a unique real number graph of a function of two variables a set of ordered triples that satisfies the equation plotted in three-dimensional Cartesian space level curve of a function of two variables the set of points satisfying the equation for some real number in the range of the graph of a function of two variables, the set of ordered triples that solves the equation for a given constant or the set of ordered triples that solves the equation for a given constant Previous: Introduction Next: Limits and Continuity Functions of Several Variables by OSCRiceUniversity is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.
On the self-dual Einstein-Maxwell-Higgs equation on compact surfaces Jongmin Han and Juhee Sohn Department of Mathematics, Kyung Hee University, Seoul, 130-701, Korea Received December 2017 Revised August 2018 Published November 2018 In this paper, we study the self-dual Einstein-Maxwell-Higgs equation on compact surfaces. The solution structure depends on the parameter $\varepsilon$ appearing in the equation. We find an upper bound $\varepsilon_c$ of $\varepsilon$ for the existence of solutions. By using the topological degree theory, we prove that there exist at least two solutions for $0<\varepsilon <\varepsilon_c$. We also study the asymptotic behavior of solutions as $\varepsilon \to 0$. Keywords: Self-dual Einstein-Maxwell-Higgs equation, Leray-Schauder degree, existence of multiple solutions. Mathematics Subject Classification: Primary: 35J61, 35Q75; Secondary: 81T13. Citation: Jongmin Han, Juhee Sohn. On the self-dual Einstein-Maxwell-Higgs equation on compact surfaces. Discrete & Continuous Dynamical Systems - A, 2019, 39 (2) : 819-839. doi: 10.3934/dcds.2019034
Cohomology 1 Vectors and covectors 2 The dual basis 3 The dual operators 4 The cochain complex 5 Social choice: higher order voting 6 Cohomology 7 Computing cohomology 8 Homology vs. cohomology maps 9 Computing cohomology maps 10 Functors Vectors and covectors What is the relation between $2$, the number, and "doubling", the function $f(x)=2\cdot x$? Linear algebra helps one appreciate this seemingly trivial relation. Indeed, the answer is given by a linear operator, $${\bf R} \to \mathcal{L}({\bf R},{\bf R}),$$ from the reals to the vector space of all linear functions on the reals. In fact, it's an isomorphism! More generally, suppose $R$ is a commutative ring, and $V$ is a finitely generated free module over $R$. Definition. Let the dual of $V$ be defined by $$V^* := \{ \alpha :V \to R, \alpha \text{ linear }\}.$$ If the elements of $V$ are called vectors, those of $V^*$ are called covectors. Example. An illustration of a vector in $v\in V={\bf R}^2$ and a covector in $u\in V^*$ is given below: Here, a vector is just a pair of numbers, while a covector is a match of each unit vector to a number. The linearity is visible. $\square$ Exercise. Explain the alternative way a covector can be visualized as shown below. Hint: It resembles an oil spill. Example. In the above example, it is easy to see a natural way of building a vector $w$ from this covector $u$. Indeed, let's pick $w$ such that the direction of $w$ is that of the one that gives the largest value of the covector $u$ (i.e., $2$), and the magnitude of $w$ is that value of $u$. So the result is $w=(2,2)$. Moreover, covector $u$ can be reconstructed from this vector $w$. Exercise. What does this construction have to do with the norm of a linear operator? Following the idea of this terminology, we add "co" to a word to indicate its dual. Such is the relation between chains and cochains (forms). In that sense, $2$ is a number and $2x$ is a "conumber", $2^*$. Exercise. What is a "comatrix"? The dual basis Proposition. The dual $V^*$ of module $V$ is also a module, with the operations for $\alpha, \beta \in V^*,\ r \in R$ given by: $$\begin{array}{llll} (\alpha + \beta)(v) := \alpha(v) + \beta(v),\ v \in V;\\ (r \alpha)(w) := r\alpha(w),\ w \in V. \end{array}$$ Exercise. Prove the proposition for $R={\bf R}$. Hint: Start with indicating what $0, -\alpha \in V^*$ are, and then refer to theorems of linear algebra. Below we assume that $V$ is finite-dimensional. Suppose also that we are given a basis $\{u_1,...,u_n\}$ of $V$. Definition. The dual $u_p^*\in V^*$ of $u_p\in V$ is defined by: $$u_p^*(u_i):=\delta _{ip} ,\ i = 1,...,n;$$ or $$u_p^*(r_1u_1+...+r_nu_n) = r_p.$$ Exercise. Prove that $u_p^* \in V^*$. Definition. The dual of the basis of $V$ is $\{u_1^*,...,u_n^*\}$. Example. The dual of the standard basis of $V={\bf R}^2$ is shown below: Let's prove that the dual of a basis is a basis. It takes two steps. Proposition. The set $\{u_1^*,...,u_n^*\}$ is linearly independent in $V^*$. Proof. Suppose $$s_1u_1^* + ... + s_nu_n^* = 0$$ for some $r_1,...,r_k \in R$. This means that $$s_1u_1^*(u)+...+s_nu_n^*(u)=0 ,$$ for all $u \in V$. For each $i=1,...,n$, we do the following. We choose $u:=u_i$ and substitute it into the above equation: $$s_1u_1^*(u_i)+...+s_nu_n^*(u_i)=0.$$ Then we use $u_j^*(u_i)=\delta_{ij}$ to reduce the equation to: $$s_10+...+s_i1+...+s_n0=0.$$ We conclude that $s_i=0$. The statement of the proposition follows. $\blacksquare$ Proposition. The set $\{u_1^*,...,u_n^*\}$ spans $V^*$. Proof. 
Given $u^* \in V^*$, let $r_i := u^*(u_i) \in R,\ i=1,...,n$. Now define $$v^* := r_1u_1^* + ... + r_nu_n^*.$$ Consider $$v^*(u_i) = r_1u_1^*(u_i) + ... + r_nu_n^*(u_i) = r_i.$$ So the values of $u^*$ and $v^*$ match for all elements of the basis of $V$. Thus $u^*=v^*$. $\blacksquare$ Exercise. Find the dual of ${\bf R}^2$ for two different choices of bases. Corollary. $$\dim V^* = \dim V = n.$$ Therefore, by the Classification Theorem of Vector Spaces, we have the following: Corollary. $$V^* \cong V.$$ Even though a module is isomorphic to its dual, the behaviors of the linear operators on these two spaces aren't "aligned", as we will show. Moreover, the isomorphism is dependent on the choice of basis. The relation between a module and its dual is revealed if we look at vectors as column-vectors (as always) and covectors as row-vectors: $$V = \left\{ x=\left[ \begin{array}{c} x_1 \\ \vdots \\ x_n \end{array} \right] \right\}, \quad V^* = \{y=[y_1,...,y_n]\}.$$ Then, we can multiply the two as matrices: $$y\cdot x=[y_1,...,y_n] \cdot \left[ \begin{array}{c} x_1 \\ \vdots \\ x_n \end{array} \right] =x_1y_1+...+x_ny_n.$$ Exercise. Prove the formula. As before, we utilize the similarity to the dot product and, for $x\in V,y\in V^*$, represent $y$ evaluated at $x$ as: $$\langle x,y \rangle:=y(x).$$ This isn't the dot product or an inner product, which is symmetric. It is called a pairing: $$\langle \ ,\ \rangle: V^*\times V \to R,$$ which is linear with respect to either of the components. Exercise. Show that the pairing is independent from a choice of basis. Exercise. Prove that, if the spaces are finite-dimensional, we have $$\dim \mathcal{L}(V,U) = \dim V \cdot \dim U.$$ Exercise. When $V$ is infinite-dimensional with a fixed basis $\gamma$, its dual is defined as the set of all linear functions $\alpha:V\to R$ that are equal to zero for all but a finite number of elements of $\gamma$. (a) Prove the infinite-dimensional analogs of the results above. (b) Show how they fail to hold if we use the original definition. The dual operators Next, we need to understand what happens to a linear operator $$A:V \to W$$ under duality. The answer is uncomplicated but also unexpected, as the corresponding dual operator goes in the opposite direction: $$A^*:V^* \leftarrow W^*.$$ This isn't just because of the way we chose to define it: $$A^*(f):=f A;$$ a dual counterpart of $A$ can't be defined in any other way! Consider this diagram: $$ \newcommand{\ra}[1]{\!\!\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\da}[1]{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} % \begin{array}{llllllllllll} & V & \ra{A} & W \\ & _{g \in V^*} & \searrow & \da{f \in W^*} \\ & & & R, \end{array} $$ where $R$ is our ring. If this is understood as a commutative diagram, the relation between $f$ and $g$ is given by the equation above. Therefore, we acquire (and have to) $g$ from $f$ by $g=fA$, and not vice versa. Furthermore, the diagram also suggests that the reversal of the arrows has nothing to do with linearity. The issue is "functorial". We restate the definition in our preferred notation. Definition. Given a linear operator $A:V \to W$, its dual operator $A^*:W^* \to V^*$, is given by: $$\langle A^*g,v \rangle=\langle g,Av \rangle ,$$ for every $g\in W^*,v\in V$. The matching behavior produced by the duality is non-trivial. Theorem. (a) If $A$ is one-to-one, then $A^*$ is onto. (b) If $A$ is onto, then $A^*$ is one-to-one. Proof. 
To prove part (a), observe that $\operatorname{Im}A$, just as any submodule in the finite-dimensional case, is a summand: $$W=\operatorname{Im}A\oplus N,$$ for some submodule $N$ of $W$. Consider some $f\in V^*$. Now, there is a unique representation of every element $w\in W$ as $w=w'+n$ for some $w'\in \operatorname{Im}A$ and $n\in N$. Therefore, there is a unique representation of every element $w\in W$ as $w=A(v)+n$ for some $v\in V$ and $n\in N$, since $A$ is one-to-one. Then, we can define $g \in W^*$ by $\langle g,w \rangle:= \langle f,v \rangle$. Finally, we have: $$\langle A^*g,v \rangle =\langle g,Av \rangle=\langle g,w-n \rangle=\langle g,w \rangle-\langle g,n \rangle=\langle f,v \rangle+0=\langle f,v \rangle.$$ Hence, $A^*g=f$. $\blacksquare$ Exercise. Prove part (b). Theorem. For finite-dimensional $V,W$, the matrix of $A^*$ is the transpose of that of $A$: $$A^*=A^T.$$ Exercise. Prove the theorem. The compositions are preserved under the duality but in reverse (just as with the inverses): Theorem. $$(BA)^*=A^*B^*.$$ Proof. Consider: $$ \newcommand{\ra}[1]{\!\!\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\da}[1]{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} % \begin{array}{llllllllllll} & V & \ra{A} & W & \ra{B} & U\\ & _{g \in V^*} & \searrow & \da{f \in W^*} & \swarrow & _{h \in U^*} \\ & & & R. \end{array} $$ $\blacksquare$ Exercise. Finish the proof. As you see, the dual $A^*$ behaves very much like, but is not to be confused with, the inverse $A^{-1}$. Exercise. When do we have $A^{-1}=A^T$? The isomorphism between $V$ and $V^*$ is very straight-forward. Definition. The duality isomorphism of the module $V$, $$D_V: V \to V^*,$$ is given by $$D_V(u_i):=u_i^*,$$ where $\{u_i\}$ is a basis of $V$ and $\{u^*_i\}$ is its dual. In addition to the compositions, as we saw above, this isomorphism preserves the identity. Theorem. $$(\operatorname{Id}_V)^*=\operatorname{Id_{V^*}}.$$ However, because of the reversed arrows, we can't say that this isomorphism "preserves linear operators". Therefore, the duality does not produce a functor but rather a different kind of functor discussed later in this section. Now, for $A:V \to U$ a linear operator, the diagram below isn't commutative: $$ \newcommand{\ra}[1]{\!\!\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\da}[1]{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} \newcommand{\la}[1]{\!\!\!\!\!\!\!\xleftarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\ua}[1]{\left\uparrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} % \begin{array}{lllllllll} & V & \ra{A} & U \\ & \da{D_V} & \ne & \da{D_U} \\ & V^* & \la{A^*} & U^*\\ \end{array} $$ Exercise. Why not? Give an example. Exercise. Show how a change of basis of $V$ affects differently the coordinate representation of vectors in $V$ and covectors in $V^*$. However, the isomorphism with the second dual $$V^{**}:= (V^*)^*$$ given by: $$D_{V^*}D_V:V\cong (V^*)^*$$ does preserve linear operators, in the following sense. Theorem. 
The following diagram is commutative: $$ \newcommand{\ra}[1]{\!\!\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\da}[1]{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} \newcommand{\la}[1]{\!\!\!\!\!\!\!\xleftarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\ua}[1]{\left\uparrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} % \begin{array}{llllllll} & V & \ra{A} & U \\ & \da{D_V} & & \da{D_U} \\ & V^* & & U^*\\ & \da{D_{V^*}} & & \da{D_{U^*}} \\ & V^{**} & \ra{A^{**}} & U^{**}\\ \end{array} $$ Exercise. (a) Prove the commutativity. (b) Demonstrate that the isomorphism is independent from the choice of basis of $V$. Our conclusion is that we can think of the second dual (but not the first dual) of a module as the module itself: $$V^{**}=V.$$ Same applies to the second duals of linear operators: $$A^{**}=A.$$ Note: When the dot product above is replaced with a particular choice of inner product (to be considered later), we have an identical duality theory. The cochain complex Recall that cohomology theory is the dual of homology theory and it starts with the concept of a $k$-cochain on a cell complex $K$. It is any linear function from the module of $k$-chains to $R$: $$s:C_k(K)\to R.$$ Then chains are the vectors and the cochains are the corresponding covectors. We use the duality theory we have developed to define the module of $k$-cochains as the dual of the module of the $k$-chains: $$C^k(K):=\big( C_k(K) \big)^*,$$ Further, the $k$th coboundary operator of $K$ is the dual of the $(k+1)$th boundary operator: $$\partial^k:=\left( \partial _k \right)^*:C^k\to C^{k+1}.$$ It is given by the Stokes formula: $$\langle \partial ^k Q,a \rangle := \langle Q, \partial_{k+1} a \rangle,$$ for any $(k+1)$-chain $a$ and any $k$-cochain $Q$ in $K$. Theorem. The matrix of the coboundary operator is the transpose of that of the boundary operator: $$\partial^k=\left( \partial_{k+1} \right)^T.$$ Definition. The elements in $Z^k:=\ker \partial ^* $ are called cocycles and the elements of $B^k:=\operatorname{Im} \partial^*$ are called coboundaries. The following is a crucial result. Theorem (Double Coboundary Identity). Every coboundary is a cocycle, i.e., for $k=0,1,...$, we have $$\partial^{k+1}\partial^k=0.$$ Proof. It follows from the fact that the coboundary operator is the dual of the boundary operator. Indeed, $$\partial ^* \partial ^* = (\partial \partial)^* = 0^*=0,$$ by the double boundary identity. $\blacksquare$ The cochain modules $C^k=C^k(K),\ k=0,1,...$, form the cochain complex $\{C^*,\partial ^*\}$: $$\newcommand{\la}[1]{\!\!\!\!\!\xleftarrow{\quad#1\quad}\!\!\!\!\!} \begin{array}{rrrrrrrrrrr} 0& \la{0} & C^N & \la{\partial^{N-1}} & ... & \la{\partial^{0}} & C^0 &\la{0} &0 . \end{array} $$ According to the theorem, a cochain complex is a chain complex, just indexed in the opposite order. Our illustration of a cochain complex is identical to that of the chain complex, but with the arrows reversed: Recall that a cell complex $K$ is called acyclic if its chain complex is an exact sequence: $$\operatorname{Im}\partial_k=\ker \partial_{k-1}.$$ Of course, if a cochain complex is exact as a chain complex, it is also called exact: Exercise. (a) State the definition of an exact cochain complex in terms of cocycles and coboundaries. (b) Prove that $\{C^k(K)\}$ is exact if and only if $\{C_k(K)\}$ is exact. 
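Since the coboundary operator is just the transpose of the boundary operator, both the Stokes formula and the double coboundary identity above can be checked numerically. Here is a minimal sketch in Python (NumPy) for an assumed toy complex, the solid triangle $\{A,B,C,AB,BC,CA,ABC\}$ with the usual orientations; the complex, the matrices, and the sample cochain/chain are illustrative choices, not taken from the text.

```python
import numpy as np

# Boundary matrices of the solid triangle {A, B, C, AB, BC, CA, ABC} (assumed example).
# Columns are cells, rows are cells of one dimension lower.
d1 = np.array([[-1,  0,  1],    # A
               [ 1, -1,  0],    # B
               [ 0,  1, -1]])   # C      columns: AB, BC, CA
d2 = np.array([[1],             # AB
               [1],             # BC
               [1]])            # CA     column: ABC

cb0, cb1 = d1.T, d2.T           # coboundary operators: transposes of the boundary operators

# Stokes formula: <cb0 Q, a> = <Q, d1 a> for a 0-cochain Q and a 1-chain a.
Q = np.array([2.0, -1.0, 5.0])
a = np.array([1.0, 1.0, 0.0])
print(np.dot(cb0 @ Q, a), np.dot(Q, d1 @ a))   # the two pairings agree

# Double (co)boundary identity: both compositions vanish.
print(d1 @ d2)      # zero column
print(cb1 @ cb0)    # zero row
```

Note that `cb1 @ cb0` is exactly `(d1 @ d2).T`, which is the one-line proof of the double coboundary identity in matrix form.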
To see the big picture, we align the chain complex and the cochain complex in one, non-commutative, diagram: $$ \newcommand{\ra}[1]{\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\la}[1]{\!\!\!\!\!\xleftarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\da}[1]{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} \begin{array}{ccccccccc} ...& \la{\partial^{k}} & C^k & \la{\partial^{k-1}} & C^{k-1} & \la{\partial^{k-1}} &... \\ ...& & \da{\cong} & \ne & \da{\cong} & &... \\ ...& \ra{\partial_{k+1}} & C_k & \ra{\partial_{k}} & C_{k-1} & \ra{\partial_{k}} & ... \end{array} $$ Exercise. What can you say about the complexes if this diagram is commutative? Social choice: higher order voting Recall that we previously considered $n$ alternatives/candidates, $\{A,B,C,...\},$ placed at the vertices of a simplicial complex $K$. Let's consider some examples. On the most basic level, voters evaluate the alternatives. For example, voter $a$ votes: candidate $A$ is worth $1$; translated: $a(A)=1.$ Here, a $0$-chain $A$ is evaluated by a $0$-cochain $a$. On the next level, voters compare the alternatives (i.e., the vertices of $K$) to each other. For example, voter $a$ votes: candidate $A$ is worse, by $1$, than candidate $B$; translated: $a(AB)=1.$ What do we do with comparison votes? A comparison vote, such as the one above, may have come from a rating; however, there are many possibilities: $$a(A)=0,a(B)=1\ \text{ or } a(A)=1,a(B)=2\ \text{ etc.}$$ In addition, this comparison vote might conflict with others; for example, the total vote may be circular: $$a(B)-a(A)=1,\ a(C)-a(B)=1,\ a(A)-a(C)=1.$$ We don't try to sort this out; instead, we create a single pairwise comparison: $b(AB):=1.$ Here, a $1$-chain $AB$ is evaluated by a $1$-cochain $b$. In the case of three candidates, there are three votes of degree $0$ and three votes of degree $1$: Exercise. Analyze in this fashion the game of rock-paper-scissors. the $0$-cochains are the evaluations of single candidates, while the $1$-cochains are the pairwise comparisons of the candidates. The voter may vote for chains as linear combinations of the alternatives; for example, "the average of candidates $A$ and $B$": $$a\left(\frac{A+B}{2}\right)=1.$$ Note: We can understand this chain as the 50-50 lottery between $A$ and $B$. Exercise. Represent the following vote as a cochain: the average of candidates $A$ and $B$ is worse, by $B$, than candidate $C$. On the next level, the voter may express himself in terms of comparisons of comparisons. For example, the voter may judge: the combination of the advantages of candidate $B$ over $A$, $C$ over $B$, and $C$ over $A$ is $1$; translated: $b(AB)+b(BC)+b(CA)=1$. We simply create a single triple-wise comparison: $c(ABC):=1.$ Here, a $2$-chain $ABC$ is evaluated by a $2$-cochain $c$. Exercise. Show that the triple vote above (a) may come from a several pairwise comparisons and (b) may be in conflict with other votes. Let's sum up. These are possible votes: Candidate $A_0$ is evaluated by a vote of degree $0$: $a^0\in C^0$ and $a^0(A_0)\in R$. Candidates $A_0$ and $A_1$ are evaluated by a vote of degree $1$: $a^1\in C^1$ and $a^1(A_0A_1)\in R$. Candidates $A_0$, $A_1$, and $A_2$ are evaluated by a vote of degree $2$: $a^2\in C^2$ and $a^2(A_0A_1A_2)\in R$. Candidates $A_0$, $A_1$, ... $A_k$ are evaluated by a vote of degree $k$: $a^k\in C^k$ and $a^k(A_0A_1...A_k)\in R$. Definition. 
A vote is any cochain in complex $K$: $$a=a^0+a^1+a^2+...,\ a^i\in C^i(K).$$ Now, how do we make sense of the outcome of such a vote? Who won? In the simplest case, we ignore the higher order votes, $a^1,a^2,...$, and choose the winner to be the one with the highest rating: $$winner:=\arg\max _{i\in K^{(0)}} a^0(i).$$ But do we really have to discard the information about pairwise comparisons? Not necessarily: if $a^1$ is a rating comparison vote (i.e., a coboundary), we can use it to create a new set of ratings: $$b^0:=(\partial^0)^{-1}(a^1).$$ We then find the candidate(s) with the highest value of $$c^0:=a^0+b^0$$ to become the winner. Example. Suppose $a^0=1$ and $$a^1(AB)=1,\ a^1(BC)=-1,\ a^1(CA)=0.$$ We choose: $$b^0(A)=0,\ b^0(B)=1,\ b^0(C)=0,$$ and observe that $$\partial^0 b^0=a^1.$$ Therefore, $B$ is the winner! Thus, the comparison vote helps to break the tie. Exercise. Show that such a winner (or winners) is well-defined. Exercise. Show that the definition is guaranteed to apply only when $K^{(1)}$ is a tree. Exercise. Give other examples of how $a^1$ helps determine the winner when $a^0=0$. The inability of the voter to assign a number to each single candidate to convey his perceived value (or quality or utility or rating) is what makes comparison of pairs necessary. By a similar logic, the inability of the voter to assign a number to each pair of candidates to convey their relative value is what could make comparison of triples necessary. However, we need to find a single winning candidate! Simply put, can $a^2\ne 0$ help determine the winner when $a^0=0, a^1=0$? Unfortunately, there is no $b^0\in C^0$ such that $\partial^1\partial^0b^0=a^2$. Definition. The $k$th cohomology group of $K$ is the quotient of the cocycles over the coboundaries: $$H^k=H^k(K):=Z^k / B^k = \frac {\ker \partial^{k+1}} {\operatorname{Im}\partial^k}.$$ It is then the homology of the "reversed" cochain complex. Most of the theorems about homology have corresponding theorems about cohomology. Often, the latter can be derived from the former via duality. Sometimes it is unnecessary. We next discuss cohomology in the context of connectedness and simple connectedness. Proposition. The constant functions are $0$-cocycles. Proof. If $\varphi \in C^0({\mathbb R})$, then $\partial^* \varphi([a,a+1]) = \varphi (a+1)-\varphi(a) = 0$. A similar argument applies to $C^0({\mathbb R}^n)$. $\blacksquare$ Proposition. The $0$-cocycles are constant on a path-connected complex. Proof. For ${\mathbb R}^1$, we have $\partial^* \varphi([a,a+1]) = 0$, or $\varphi(a+1) - \varphi(a) = 0$. Therefore, $\varphi(a+1) = \varphi(a)$, i.e., $\varphi$ doesn't change from a vertex to the next. $\blacksquare$ The general case is illustrated below: $$ \newcommand{\ra}[1]{\!\!\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\da}[1]{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} \newcommand{\la}[1]{\!\!\!\!\!\!\!\xleftarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\ua}[1]{\left\uparrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} % \begin{array}{llllllllllll} \bullet & \ra{} & \bullet & & \\ \ua{} & & \da{} & & \\ \bullet & & \bullet & \ra{} & \bullet & \ra{} & \bullet \end{array} $$ Exercise. Prove that a complex is path-connected if and only if any two vertices can be connected by a sequence of adjacent edges. Use this to finish the proof of the proposition. Corollary. $\dim \ker \partial^0 =$ number of path components of $|K|$. We summarize the analysis as follows. Theorem. 
$H^0(K) \cong R^m$, where $m$ is the number of path components of $|K|$. Now, simple-connectedness. Example (circle). Consider a cubical representation of the circle: $$ \newcommand{\ra}[1]{\!\!\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\da}[1]{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} \newcommand{\la}[1]{\!\!\!\!\!\!\!\xleftarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\ua}[1]{\left\uparrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} % \begin{array}{llllllllllll} \bullet& \ra{ } & \bullet \\ \ua{ } & & \ua{ } \\ \bullet& \ra{ } & \bullet \end{array} $$ Here, the arrows indicate the orientations of the edges -- along the coordinate axes. A $1$-cochain is just a combination of four numbers: $$ \newcommand{\ra}[1]{\!\!\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\da}[1]{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} \newcommand{\la}[1]{\!\!\!\!\!\!\!\xleftarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\ua}[1]{\left\uparrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} % \begin{array}{llllllllllll} \bullet& \ra{q} & \bullet \\ \ua{p} & & \ua{r} \\ \bullet& \ra{s} & \bullet \end{array} $$ First, which of these cochains are cocycles? According to our definition, they should have "horizontal difference - vertical difference" equal to $0$: $$(r-p)-(q-s)=0.$$ For example, we can choose them all equal to $1$. Second, what $1$-cochains are coboundaries? Here is a $0$-cochain and its coboundary: $$ \newcommand{\ra}[1]{\!\!\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\da}[1]{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} \newcommand{\la}[1]{\!\!\!\!\!\!\!\xleftarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\ua}[1]{\left\uparrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} % \begin{array}{llllllllllll} \bullet^1 & \ra{1} & \bullet^2 \\ \ua{1} & & \ua{1} \\ \bullet^0 & \ra{1} & \bullet^1 \end{array} $$ So, this $1$-form is a coboundary. But this one isn't: $$ \newcommand{\ra}[1]{\!\!\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\da}[1]{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} \newcommand{\la}[1]{\!\!\!\!\!\!\!\xleftarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\ua}[1]{\left\uparrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} % \begin{array}{llllllllllll} \bullet & \ra{1} & \bullet \\ \ua{1} & & \ua{-1} \\ \bullet & \ra{-1} & \bullet \end{array} $$ This fact is easy to prove by solving a little system of linear equations, or we can simply notice that the complete trip around the square adds up to $4$ not $0$. Therefore, $H^1\ne 0$. We accept the following without proof. Theorem. If $|K|$ is simply-connected, $H^1(K)=0$. These topological results for homology and cohomology match! Does it mean that homology and cohomology match in other dimensions? Consider first the fact that duality gives as the isomorphism: $$C_k\cong C^k.$$ Second, the quotient construction of cohomology is identical to the one that defined homology. However, this doesn't mean that the resulting quotients are also isomorphic; in general, we have: $$H_k\not\cong H^k.$$ For example, the quotient construction, over ${\bf Z}$, creates torsion components for homology and cohomology. Those components often don't match, as in the case of the Klein bottle considered in the next subsection. A more subtle difference is that the cohomology isn't just a module; it also has a graded ring structure provided by the wedge product. 
Example (sphere with bows). To see the difference this makes, consider these two spaces: the sphere with two bows attached and the torus. Their homology groups coincide in all dimensions. The cohomology groups also coincide, but only as vector spaces! The basis elements in dimension $1$ behave differently under the wedge product. For the sphere with bows, we have: $$[a ^*]\wedge [b ^*]=0,$$ because there is nowhere for this $2$-cochain to "reside". Meanwhile for the torus, we have: $$[a ^*]\wedge [b ^*]\ne 0.$$ $\square$ Computing cohomology In the last subsection, we used cochains to detect the hole in the circle. Now, we compute the cohomology without shortcuts -- the above theorems -- just as a computer would do. The starting point is: $\dim C^k(K) = \#$ of $k$-cells in $K$.
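Here is what "just as a computer would do" can look like in practice: a minimal NumPy sketch of the rank bookkeeping, using the coboundary matrix of the cubical circle that the example below derives by hand. The formula $\dim H^k=\dim\ker \partial^{k}-\operatorname{rank}\partial^{k-1}$ is assumed, which is valid over field coefficients.

```python
import numpy as np

# Coboundary matrix of the cubical circle K = {A, B, C, D, a, b, c, d},
# as derived in the example below (field coefficients assumed, so ranks suffice).
cb0 = np.array([[ 1,  0,  0, -1],
                [-1,  1,  0,  0],
                [ 0,  1, -1,  0],
                [ 0,  0,  1, -1]])      # C^0 -> C^1; there are no 2-cells, so cb1 = 0

r = np.linalg.matrix_rank(cb0)          # rank of the coboundary operator: 3
dim_H0 = cb0.shape[1] - r               # dim ker(cb0) - 0         = 4 - 3 = 1
dim_H1 = cb0.shape[0] - r               # dim ker(cb1) - rank(cb0) = 4 - 3 = 1
print(dim_H0, dim_H1)                   # 1 1: one component, one hole
```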
In fact, $$\operatorname{span} \{ \partial^0(A^*), \partial^0(B^*), \partial^0(C^*), \partial^0(D^*) \} = \operatorname{Im} \partial^0 .$$ Therefore, $$\dim H^1=\dim \left( C^1 / \operatorname{Im} \partial^0 \right) = 4-3=1.$$ We have a single hole! Another way to see that the columns aren't linearly independent is to compute $\det\partial^0 =0$. Exercise. Compute the cohomology of the T-shaped graph. Exercise. Compute the cohomology of the following complexes: (a) the square, (b) the mouse, and (c) the figure 8, shown below. Example (cylinder). We create the cylinder $C$ by an equivalence relation of cells of the cell complex: $$a \sim c;\ A \sim D,\ B \sim C.$$ The chain complex is known and the cochain complex is computed accordingly: $$\newcommand{\ra}[1]{\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\la}[1]{\!\!\!\!\!\xleftarrow{\quad#1\quad}\!\!\!\!\!} \begin{array}{lccccccccccc} & & C_{2} & \ra{\partial} & C_{1} & \ra{\partial} & C_0 \\ & & < \tau > & \ra{?} & < a,b,d > & \ra{?} & < A,B > \\ & & \tau & \mapsto & b - d \\ & & & & a & \mapsto & B-A \\ & & & & b & \mapsto & 0 \\ & & & & d & \mapsto & 0 \\ & & C^{2} & \la{\partial^*} & C^{1} & \la{\partial^*} & C^0 \\ & & <\tau^*> & \la{?} & < a^*,b^*,d^* > & \la{?} & < A^*,B^* > \\ & & 0 & \leftarrowtail& a^* & & \\ & & \tau^* & \leftarrowtail& b^* & & \\ & & -\tau^* & \leftarrowtail& d^* & & \\ & & & & -a^* & \leftarrowtail & A^* \\ & & & & a^* & \leftarrowtail & B^* \\ {\rm kernels:} & & Z^2=<\tau^*> && Z^{1}=<a^*, b^*-d^* > & & Z^{0}=< A^*+B^* > \\ {\rm images:} & & B^2=<\tau^*> && B^{1}=< a^* > & & B^{0} = < 0 > \\ {\rm quotients:}& & H^2=0 && H^{1}=< [b^*]-[d^*] >\cong {\bf Z}& & H^{0} = < [A^*]+[B^*] > \cong {\bf Z} \end{array}$$ So, the cohomology is identical to that of the circle. In these examples, the cohomology is isomorphic to the homology. This isn't always the case. Example (Klein bottle). The equivalence relation that gives ${\bf K}^2$ is: $$c \sim a,\ d \sim -b.$$ The chain complex and the cochain complex are: $$ \newcommand{\ra}[1]{\!\!\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\da}[1]{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} \newcommand{\la}[1]{\!\!\!\!\!\!\!\xleftarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\ua}[1]{\left\uparrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} \begin{array}{lccccccccccc} & & C_{2} & \ra{\partial} & C_{1} & \ra{\partial} & C_0 \\ & & < \tau > & \ra{?} & < a,b > & \ra{?} & < A > \\ & & \tau & \mapsto & 2a \\ & & & & a & \mapsto & 0 \\ & & & & b & \mapsto & 0 \\ & & C^{2} & \la{\partial^*} & C^{1} & \la{\partial^*} & C^0 \\ & & <\tau^*> & \la{?} & < a^*,b^* > & \la{?} & < A^* > \\ & & 2\tau^* & \leftarrowtail& a^* & & \\ & & 0 & \leftarrowtail& b^* & & \\ & & & & 0 & \leftarrowtail & A^* \\ {\rm kernels:} & & Z^2 = <\tau^*> && Z^1 = <b^*> & & Z^0 = < A > \\ {\rm images:} & & B^2 = <2\tau^*> && B^1 = 0 & & B^0 = 0 \\ {\rm quotients:}& & H^2 = \{ \tau^*|2\tau^*=0 \}\cong {\bf Z}_2 && H^1 = <b^*>\cong {\bf Z} & & H^0 = < [A^*] > \cong {\bf Z} \end{array}$$ Exercise. Show that $H^*({\bf K}^2;{\bf R})\cong H({\bf K}^2;{\bf R})$. Exercise. Compute $H^*({\bf K}^2;{\bf Z}_2)$. Exercise. Compute the cohomology of the rest of these surfaces: Homology vs. cohomology maps Our illustration of cochain maps is identical to that of chain maps, but with the arrows reversed: The constructions are very similar and so are the results. Homology and cohomology respect maps that respect boundaries. 
One can think of a function between two cell complexes $$f:K\to L$$ as one that preserves cells, i.e., the image of a cell is also a cell, possibly of a lower dimension: $$a\in K \Longrightarrow f(a) \in L.$$ If we expand this map by linearity, we have a map between (co)chain complexes: $$f_k:C_k(K) \to C_k(L)\ \text{ and } \ f^k:C^k(K) \leftarrow C^k(L).$$ What makes $f_k$ and $f^k$ continuous, in the algebraic sense, is that, in addition, they preserve (co)boundaries: $$f_k(\partial a)=\partial f_k(a)\ \text{ and } \ f^k(\partial^* s)=\partial^* f^k(s).$$ In other words, the (co)chain maps make these diagrams -- the linked (co)chain complexes -- commute: $$ \newcommand{\ra}[1]{\!\!\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\da}[1]{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} \newcommand{\la}[1]{\!\!\!\!\!\!\!\xleftarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\ua}[1]{\left\uparrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} % \begin{array}{cccccccccccccccc} \la{\partial} & C_k(K) & \la{\partial} & C_{k+1}(K) &\la{\partial} &\quad&\ra{\partial^*} & C ^k(K) & \ra{\partial^*} & C ^{k+1}(K) &\ra{\partial^*} \\ & \da{f_*} & & \da{f_*} &&\text{ and }&& \ua{f^*} & & \ua{f^*}\\ \la{\partial} & C_k(L) & \la{\partial} & C_{k+1}(L) & \la{\partial} &\quad& \ra{\partial^*} & C ^k(L) & \ra{\partial^*} & C ^{k+1}(L) & \ra{\partial^*} \end{array}. $$ They generate the (co)homology maps: $$f_* : H_k(K) \to H_k(L)\ \text{ and } \ f^* : H^k(K) \leftarrow H^k(L),$$ as the linear operators given by $$f_*([x]) := [f_k(x)]\ \text{ and } \ f^*([x]) := [f^k(x)],$$ where $[\cdot ]$ stands for the (co)homology class. Exercise. Prove that the latter is well-defined. The following is obvious. Theorem. The identity map induces the identity (co)homology map: $$(\operatorname{Id}_{|K|})_* = \operatorname{Id}_{H(K)}\ \text{ and } \ (\operatorname{Id}_{|K|})^* = \operatorname{Id}_{H^*(K)}.$$ This is what we derive from what we know about compositions of cell maps. Theorem. The (co)homology map of the composition is the composition of the (co)homology maps $$(gf)_* = g_*f_*\ \text{ and } \ (gf)^* = f^*g^*.$$ Notice the change of order in the latter case! This is the realm of category theory, explained later: $$ \newcommand{\ra}[1]{\!\!\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\da}[1]{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} \newcommand{\la}[1]{\!\!\!\!\!\!\!\xleftarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\ua}[1]{\left\uparrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} % \begin{array}{cccccccc} K & \ra{f} & L \\ & \searrow ^{gf} & \da{g} \\ & & M \end{array}\quad \leadsto \quad \begin{array}{cccccccc} H(K) & \ra{f_*} & H(L) & &\\ & \searrow ^{g_*f_*} & \da{g_*}\\ & & H(M) \end{array} \text{ and }\ \begin{array}{cccccccc} H^*(K) & \la{f^*} & H^*(L) & &\\ & \nwarrow ^{f^*g^*} & \ua{g^*}\\ & & H^*(M) \end{array}. $$ Theorem. Suppose $K$ and $L$ are cell complexes. If a map $$f : |K| \to |L|$$ is a cell map and a homeomorphism, and $$f^{-1}: |L| \to |K|$$ is a cell map too, then the (co)homology maps $$f_*: H_k(K) \to H_k(L)\ \text{ and } \ f^*: H^k(K) \leftarrow H^k(L),$$ are isomorphisms for all $k$. Corollary. Under the conditions of the theorem, we have: $$(f^{-1})_* = (f_*)^{-1}\ \text{ and } \ (f^{-1})^* = (f^*)^{-1}.$$ Theorem. 
If two maps are homotopic, they induce the same (co)homology maps: $$f \simeq g\ \Rightarrow \ f_* = g_* \ \text{ and } \ f^* = g^*.$$ In the face of the isomorphisms of the groups and the matching behavior of the maps, let's not forget who came first: $$ \newcommand{\ra}[1]{\!\!\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\da}[1]{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} \newcommand{\la}[1]{\!\!\!\!\!\!\!\xleftarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\ua}[1]{\left\uparrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} % \begin{array}{ccccccc} K & \to & C(K) & \stackrel{D}{\longrightarrow} & C^*(K)\\ & &\downarrow ^{\sim} & & \downarrow ^{\sim}\\ & &H(K) & \stackrel{?}{\longleftrightarrow} & H^*(K). \end{array}$$ Also, in spite of the fact that the cochain module is the dual of the chain module, we can't assume that the cohomology module is the dual of the homology module. In fact, we saw an example when they aren't isomorphic. However, the example was for the integer coefficients. What if all these modules are vector spaces? We state the following without proof (see Bredon, p. 282). Theorem. If $R$ is a field, the cohomology vector space is the dual of the homology vector space: $$H(K;R)^*=H^*(K;R);$$ therefore, they are isomorphic: $$H(K;R) \cong H^*(K;R).$$ Exercise. Define "cocohomology" and prove that it is isomorphic to homology. Computing cohomology maps We start with the most trivial examples and pretend that we don't know the answer... Example (inclusion). Suppose: $$G=\{A,B\},\ H=\{A,B,AB\},$$ and $$f(A)=A,\ f(B)=B.$$ This is the inclusion: Then, on the (co)chain level we have: $$\begin{array}{ll|ll} C_0(G)=< A,B >,& C_0(H)= < A,B >&\Rightarrow C^0(G)=< A^*,B^* >,& C^0(H)= < A^*,B^* >,\\ f_0(A)=A,& f_0(B)=B & \Rightarrow f^0=f_0^T=\operatorname{Id};\\ C_1(G)=0,& C_1(H)= < AB >&\Rightarrow C^1(G)=0,& C^1(H)= < AB^* >\\ &&\Rightarrow f^1=f_1^T=0. \end{array} $$ Meanwhile, $\partial ^0=0$ for $G$ and for $H$ we have: $$\partial ^0=\partial _1^T=[-1, 1].$$ Therefore, on the cohomology level we have: $$\begin{array}{llllll} H^0(G):= \frac{ Z^0(G) }{ B^0(G) }&=\frac{ C^0(G) }{ 0 } &=< [A^*],[B^*] > ,\\ H^0(H):= \frac{ Z^0(H) }{ B^0(H) }&=\frac{ \ker \partial ^0 }{ 0 } &=< [A^*+B^*] >,\\ H^1(G):= \frac{ Z^1(G) }{ B^1(G) }&=\frac{0 }{ 0 } &=0,\\ H^1(H):= \frac{ Z^1(H) }{ B^1(H) }&=\frac{0 }{ 0 } &=0. \end{array} $$ Then, for $f^k:H^k(H)\to H^k(G),\ k=0,1$, we compute from the definition, $$\begin{array}{lllllll} [f^0]([A^*]):=[f^0(A^*)]=[A^*], &[f^0]([B^*]):=[f^0(B^*)]=[B^*] &\Rightarrow [f^0]=[1,1]^T;\\ f^1=0&&\Rightarrow [f^1]=0. \end{array} $$ The former identity indicates that $A$ and $B$ are separated if we reverse the effect of $f$. Exercise. Modify the computation for the case when there is no $AB$. Exercise. Compute the cohomology maps for the following two two-edge simplicial complexes and these simplicial maps: $$K=\{A,B,C,AB,BC\},\ L=\{X,Y,Z,XY,YZ\};$$ $$f(A)=X,\ f(B)=Y,\ f(C)=Z,\ f(AB)=XY,\ f(BC)=YZ;$$ $$f(A)=X,\ f(B)=Y,\ f(C)=Y,\ f(AB)=XY,\ f(BC)=Y.$$ Example (rotation and collapse). Suppose we are given the following complexes and maps: $$G=H:=\{A,B,C,AB,BC,CA\},$$ $$f(A)=B,\ f(B)=C,\ f(C)=A.$$ This is a rotated triangle (left): The cohomology maps are computed as follows: $$\begin{array}{lllll} f^1(AB^*+BC^*+CA^*)&=f^1(AB^*)+f^1(BC^*)+f^1(CA^*)\\ &=CA^*+AB^*+BC^*\\ &=AB^*+BC^*+CA^*. \end{array} $$ Therefore, the cohomology map $[f^1]:H^1(H)\to H^1(G)$ is the identity. Conclusion: the hole is preserved. 
Also (right) we collapse the triangle onto one of its edges: $$f(A)=A,\ f(B)=B,\ f(C)=A.$$ Then, $$\begin{array}{llllll} f^1(AB^*+BC^*+CA^*)&=f^1(AB^*)+f^1(BC^*)+f^1(CA^*)\\ &=(AB^*+CB^*)+0+0\\ &=2\partial^0(B^*). \end{array}$$ Therefore, the cohomology map is zero. Conclusion: collapsing an edge causes the hole to collapse too. Exercise. Provide the details of the computations. Exercise. Modify the computation for the case of (a) a reflection and (b) a collapse to a vertex. Functors The duality reverses arrows but preserves everything else. This idea deserves a functorial interpretation. We generalize a familiar concept below. Definition. A functor ${\mathscr F}$ from category ${\mathscr C}$ to category ${\mathscr D}$ consists of two functions: $${\mathscr F}:\text{Obj}({\mathscr C}) \rightarrow \text{Obj} ({\mathscr D}),$$ and, if ${\mathscr F}(X)=U,{\mathscr F}(Y)=V$, we have, for a covariant functor: $${\mathscr F}={\mathscr F}_{X,Y}:\text{Hom}_{\mathscr C}(X,Y) \rightarrow \text{Hom}_{\mathscr D}(U,V),$$ and for a contravariant functor: $${\mathscr F}={\mathscr F}_{X,Y}:\text{Hom}_{\mathscr C}(X,Y) \rightarrow \text{Hom}_{\mathscr D}(V,U).$$ We assume that the functor preserves the identity: ${\mathscr F}(\operatorname{Id}_{X}) = \operatorname{Id}_{{\mathscr F}(X)}$; and the compositions: covariant, ${\mathscr F}(g f) = {\mathscr F}(g) {\mathscr F}(f)$, or contravariant, ${\mathscr F}(g f) = {\mathscr F}(f) {\mathscr F}(g)$. Exercise. Prove that duality is a contravariant functor. Hint: consider $\mathcal{L}(U,\cdot)$ and $\mathcal{L}(\cdot,V)$. Exercise. What happens to compositions of functors? We will continue to refer to covariant functors as simply "functors" when there is no ambiguity. The latter condition can be illustrated with these commutative diagrams: $$\begin{array}{ccc} X & \xrightarrow{\ f\ } & Y \\ & {\scriptstyle gf}\searrow & \downarrow{\scriptstyle g}\\ & & Z \end{array} \quad\leadsto\quad \begin{array}{ccc} {\mathscr F}(X) & \xrightarrow{{\mathscr F}(f)} & {\mathscr F}(Y)\\ & {\scriptstyle {\mathscr F}(gf)}\searrow & \downarrow{\scriptstyle {\mathscr F}(g)}\\ & & {\mathscr F}(Z) \end{array} \ \text{ and } \ \begin{array}{ccc} {\mathscr F}(X) & \xleftarrow{{\mathscr F}(f)} & {\mathscr F}(Y)\\ & {\scriptstyle {\mathscr F}(gf)}\nwarrow & \uparrow{\scriptstyle {\mathscr F}(g)}\\ & & {\mathscr F}(Z) \end{array}$$ Previously we proved the following: Homology is a covariant functor from cell complexes and maps to modules and homomorphisms. Cohomology is a contravariant functor from cell complexes and maps to modules and homomorphisms. Exercise. Outline the cohomology theory for relative cell complexes and cell maps of pairs. Hint: $C^*(K,K')=C^*(K) / C^*(K')$. Cohomology is just as good a functorial tool as homology. Below, we apply it to a problem previously solved with homology. Example. We consider the question, "Can a soap bubble contract to the ring without tearing?" and recast it as an example of the Extension Problem, "Can we extend the map of the circle onto itself to the whole disk?" Can we find a continuous $F$ to complete the first diagram below? With the cohomology functor, we answer: only if we can complete the last one.
$$ \newcommand{\ra}[1]{\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\da}[1]{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} \newcommand{\la}[1]{\!\!\!\!\!\xleftarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\ua}[1]{\left\uparrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} % \begin{array}{ccccccccccccccccccccc} {\bf S}^{n-1} & \hookrightarrow & {\bf B}^{n} && & H^{n-1}({\bf S}^{n-1}) & \leftarrow & H^{n-1}({\bf B}^{n}) & & {\bf Z} & \leftarrow & 0 \\ & \searrow ^{\operatorname{Id}}& \ \ \ua{F=?} && \leadsto & _{} & \nwarrow ^{\operatorname{Id}} & \ \ua{?} & = & & \nwarrow ^{\operatorname{Id}}& \ua{?}\\ & & {\bf S}^{n-1} && & & & H^{n-1}({\bf S}^{n-1}) & & & & {\bf Z} \end{array}$$ Just as with homology, we see that the last diagram is impossible to complete. Exercise. Provide such analysis for extension of the identity map of the circle to the circle to that of the torus. Exercise. List possible maps for each of these based on the possible cohomology maps: inclusions of the circle into the torus; self-maps of the figure eight; inclusions of the circle into the sphere.
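As a closing aside (not part of the original text), the two cohomology-map computations from the rotation-and-collapse example can also be checked numerically over the reals. The sketch below builds the edge-level chain maps for the triangle $\{A,B,C,AB,BC,CA\}$, takes the cocycle $AB^*+BC^*+CA^*$ as a representative of $H^1$, and tests whether its image under $f^1$ differs from it (or from zero) by a coboundary; the matrix conventions and helper names are ours.

```python
import numpy as np

# Triangle complex: vertices A, B, C; edges AB, BC, CA (oriented as written).
d1 = np.array([[-1,  0,  1],      # boundary matrix: rows A,B,C; columns AB,BC,CA
               [ 1, -1,  0],
               [ 0,  1, -1]], dtype=float)
d0 = d1.T                          # coboundary C^0 -> C^1

def edge_chain_map(vertex_image):
    """Matrix of f_1 on edges, given f on vertices (collapsed edges map to 0)."""
    verts = {'A': 0, 'B': 1, 'C': 2}
    edges = [('A', 'B'), ('B', 'C'), ('C', 'A')]
    M = np.zeros((3, 3))
    for j, (u, v) in enumerate(edges):
        iu, iv = verts[vertex_image[u]], verts[vertex_image[v]]
        if iu == iv:
            continue                                   # edge collapses to a vertex
        for k, (a, b) in enumerate(edges):             # find the image edge, +/- orientation
            if (iu, iv) == (verts[a], verts[b]):
                M[k, j] = 1.0
            elif (iu, iv) == (verts[b], verts[a]):
                M[k, j] = -1.0
    return M

def is_coboundary(c):
    """True if the 1-cochain c lies in the image of d0 (over R)."""
    x = np.linalg.lstsq(d0, c, rcond=None)[0]
    return np.allclose(d0 @ x, c)

z = np.array([1.0, 1.0, 1.0])                          # AB* + BC* + CA*, a generator of H^1

rotation = edge_chain_map({'A': 'B', 'B': 'C', 'C': 'A'})
collapse = edge_chain_map({'A': 'A', 'B': 'B', 'C': 'A'})

print(is_coboundary(rotation.T @ z - z))   # True: [f^1] acts as the identity on H^1
print(is_coboundary(collapse.T @ z))       # True: the collapse induces the zero map
```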
September 2014, 34(9): 3703-3745. doi: 10.3934/dcds.2014.34.3703 Spatially structured networks of pulse-coupled phase oscillators on metric spaces Stilianos Louca (1) and Fatihcan M. Atay (2) 1. Institute of Applied Mathematics, University of British Columbia, 121-1984 Mathematics Road, Vancouver, BC, V6T 1Z2, Canada 2. Max Planck Institute for Mathematics in the Sciences, Inselstraße 22, 04103 Leipzig, Germany Received November 2012 Revised January 2014 Published March 2014 The Winfree model describes finite networks of phase oscillators. Oscillators interact by broadcasting pulses that modulate the frequencies of connected oscillators. We study a generalization of the model and its fluid-dynamical limit for networks, where oscillators are distributed on some abstract $\sigma$-finite Borel measure space over a separable metric space. We give existence and uniqueness statements for solutions to the continuity equation for the oscillator phase densities. We further show that synchrony in networks of identical oscillators is locally asymptotically stable for finite, strictly positive measures and under suitable conditions on the oscillator response function and the coupling kernel of the network. The conditions on the latter are a generalization of the strong connectivity of finite graphs to abstract coupling kernels. Keywords: synchrony, phase response, existence-uniqueness of solutions, Winfree model, stability, coupled oscillators. Mathematics Subject Classification: 34C15, 70K20, 35A01, 35A02, 92C2. Citation: Stilianos Louca, Fatihcan M. Atay. Spatially structured networks of pulse-coupled phase oscillators on metric spaces. Discrete & Continuous Dynamical Systems, 2014, 34 (9) : 3703-3745. doi: 10.3934/dcds.2014.34.3703
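To make the model concrete for readers meeting it here for the first time: in its classical finite form the Winfree model evolves N phases according to $\dot\theta_i = \omega_i + \kappa\, R(\theta_i)\,\frac{1}{N}\sum_j P(\theta_j)$, where P is the broadcast pulse and R the phase response. The sketch below is only an illustrative toy of that finite, all-to-all case (it is not the generalized, measure-space model analyzed in the paper); the choices $P(\theta)=1+\cos\theta$ and $R(\theta)=-\sin\theta$ and all parameter values are common textbook assumptions of ours, not taken from the article.

```python
import numpy as np

def simulate_winfree(n=500, kappa=0.6, gamma=0.05, t_end=200.0, dt=0.01, seed=0):
    """Euler integration of the all-to-all Winfree model.

    dtheta_i/dt = omega_i + kappa * R(theta_i) * mean_j P(theta_j)
    with pulse P(x) = 1 + cos(x) and response R(x) = -sin(x).
    Natural frequencies are drawn around 1 with half-width gamma.
    Returns the final Kuramoto-style order parameter |<exp(i theta)>|.
    """
    rng = np.random.default_rng(seed)
    omega = 1.0 + gamma * (2.0 * rng.random(n) - 1.0)   # uniform in [1-gamma, 1+gamma]
    theta = 2.0 * np.pi * rng.random(n)
    for _ in range(int(t_end / dt)):
        pulse = np.mean(1.0 + np.cos(theta))            # mean broadcast signal
        theta += dt * (omega + kappa * (-np.sin(theta)) * pulse)
    return np.abs(np.mean(np.exp(1j * theta)))

print(simulate_winfree(kappa=0.6, gamma=0.05))   # strong coupling: order parameter near 1
print(simulate_winfree(kappa=0.0, gamma=0.05))   # no coupling: order parameter stays small
```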
Quantitative Marketing and Economics March 2008, Volume 6, Issue 1, pp 41–78 The discriminatory incentives to bundle in the cable television industry Gregory S. Crawford First Online: 26 September 2007 An influential theoretical literature supports a discriminatory explanation for product bundling: it reduces consumer heterogeneity, extracting surplus in a manner similar to second-degree price discrimination. This paper tests this theory and quantifies its importance in the cable television industry. The results provide qualified support for the theory. While bundling of general-interest cable networks is estimated to have no discriminatory effect, bundling an average top-15 special-interest cable network significantly increases the estimated elasticity of cable demand. Calibrating these results to a simple model of bundle demand with normally distributed tastes suggests that such bundling yields a heterogeneity reduction equal to a 4.7% increase in firm profits (and 4.0% reduction in consumers surplus). The results are robust to alternative explanations for bundling. Bundling Price discrimination Cable television We are grateful to the editor and two anonymous referees for their detailed comments on the paper. We would also like to thank Cathleen McHugh for her assistance inputting the data, Mike Riordan, Joe Harrington, Matt Shum, Steve Coate, Roger Noll, Bruce Owen, V. Kerry Smith, Mark Coppejans, Frank Wolak, Phillip Leslie, and seminar participants at Cornell University and the 1999 IDEI/NBER Econometrics of Price and Product Competition conference for helpful comments. JEL Classification L12 M31 L96 L40 L50 C31 Appendix 1: Proofs of propositions Proof of Proposition 1 Suppose there are n discrete products (components) supplied by a monopolist and consumers differ in their preferences (willingness-to-pay) for each of these products, given by a type vector, \(v_i = (v_{i1}, ..., v_{in})\). Let each \(v_{ic}\), \(c = 1, ..., n\), be independent with means \(\mu_c\) and variances \(\sigma^2_c\). Let \(x_{in} \equiv \frac{1}{n} \sum_{c=1}^{n} v_{ic}\) be the per-good valuation for consumer i of a bundle of n goods, let \(\mu_n\) be its mean, and let \(\sigma^2_n\) be its variance. Note that \(\mu_n\) and \(\sigma^2_n\) follow the well-known formulas for the mean and variance of an average of (independent) random variables: $$\mu_n = \frac{1}{n}\sum_{c=1}^n \mu_c = \overline{\mu_c}$$ $$\sigma^2_n = \frac{1}{n^2}\sum_{c=1}^n \sigma^2_c = \frac{1}{n}\left[\frac{1}{n}\sum_{c=1}^n \sigma^2_c\right] = \frac{1}{n}\overline{\sigma}^2_c$$ Because the sequences \(v_{ic}\) are uniformly bounded, \(\lim_{n \rightarrow \infty} \bar{\mu}_c\) and \(\lim_{n \rightarrow \infty} \bar{\sigma}^2_c\) exist. Let \(\lim_{n \rightarrow \infty} \bar{\mu}_c = \mu\) and \(\lim_{n \rightarrow \infty} \bar{\sigma}^2_c = \sigma^2\). Note this implies \(\lim_{n \rightarrow \infty} \mu_n = \mu\) and \(\lim_{n \rightarrow \infty} \sigma^2_n = \lim_{n \rightarrow \infty} \frac{\sigma^2}{n} = 0\). Let \(q_n(p) \equiv \int_{p}^{\infty} dF(x_{in})\) give the market share of a bundle of size n offered at per-good price p, where \(F(x_{in})\) is the CDF of \(x_{in}\). Note that \(q_n(\mu - \epsilon) = \mbox{Prob}(x_{in} > \mu - \epsilon) = \mbox{Prob}(x_{in} - \mu > - \epsilon)\).
Let \(\epsilon^{n}(p) \equiv - \frac{\partial q^n(p)}{\partial p} \frac{p}{q_n}\) be the (absolute value of the) elasticity of the per-good demand curve evaluated at per-good price p and let \(\tilde{\epsilon}^{n}(\tilde{p}) \equiv -\frac{\partial q^n(\tilde{p})}{\partial p} \frac{\tilde{p}}{q_n(\tilde{p})}\) be the corresponding (aggregate) elasticity of the bundle demand curve evaluated at total price \(\tilde{p}\). For a bundle of size n, \(\tilde{p} = np\). By the weak law of large numbers and symmetry, \(\mbox{Prob}(x_{in} - \mu < -\epsilon) \leq \frac{1}{2}\frac{\sigma^2}{\epsilon^2 n}\), implying \(q_n(\mu - \epsilon) \geq 1 - \frac{1}{2}\frac{\sigma^2}{\epsilon^2 n}\). By a similar argument, \(q_n(\mu + \epsilon) \leq \frac{1}{2}\frac{\sigma^2}{\epsilon^2 n}\). Case I Per-good elasticity. We first prove the proposition for the per-good elasticity, \(\epsilon^{n}\). Let \(\omega = \frac{\sigma^2}{\epsilon^2}\) and consider a change in price from \(\mu - \epsilon\) to \(\mu + \epsilon\) on the per-good demand for a bundle of size n. $$\frac{\partial q_n}{\partial p} \leq \frac{1}{2}\frac{\omega}{n} - \left(1 - \frac{1}{2}\frac{\omega}{n}\right) \leq \frac{\omega}{n} - 1$$ $$\epsilon^{n}(\mu - \epsilon) = - \frac{\partial q_n}{\partial p}\frac{\mu - \epsilon}{q_n} \geq -\left(\frac{\omega}{n} - 1\right)\frac{\mu - \epsilon}{1 - \frac{1}{2}\frac{\omega}{n}} \geq \frac{n - \omega}{n - \frac{\omega}{2}}\,(\mu - \epsilon) \tag{6}$$ Differentiating this with respect to the bundle size n yields $$\frac{\partial \epsilon^{n}(\mu - \epsilon)}{\partial n} \geq \frac{\omega}{2 \left[n - \frac{\omega}{2}\right]^2}\,(\mu - \epsilon) > 0 \tag{7}$$ Let \(\epsilon = \mu - p^*_n\) so that \(p^*_n = \mu - \epsilon\). Then increasing bundle size makes the per-good bundle demand curve more elastic when evaluated at the profit-maximizing price for a bundle of size n. Case II Aggregate bundle elasticity. When considering the impact of increases in n on the aggregate bundle elasticity, one has to accommodate that a given change in the per-good bundle price has a larger effect on the aggregate price for a larger bundle than for a smaller bundle. While this does not impact the elasticity of the size-n bundle demand curve evaluated at price \(p_n\), it does impact the elasticity of the size-(n + 1) bundle demand curve evaluated at price \(p_n\). In particular, $$\tilde{\epsilon}^{n+1}(\tilde{p}_n) = \epsilon^{n+1}(p_n)\,\frac{n}{n+1} = \epsilon^{n+1}(p_n)\,A(n) \geq \frac{n - \omega A(n)}{n - \frac{\omega A(n)}{2}}\,\tilde{p}_n \tag{8}$$ Under A4, it is easy to show that for the per-good elasticities, \(\epsilon^{n+1}(p_n) \geq \epsilon^{n}(p_n)\). For the aggregate size-(n + 1) bundle elasticity, however, we must scale \(\epsilon^{n+1}(p_n)\) by A(n) < 1. What impact does this have on the comparison? One can show that \(\tilde{\epsilon}^{n+1}(\tilde{p}_n) \geq \tilde{\epsilon}^{n}(\tilde{p}_n)\) whenever the right-hand side of the last inequality in Eq. 8 is greater than the right-hand side of the last inequality in Eq. 6.
This holds for all n. Proof of Proposition 2 Let preferences be as for Proposition 1 above except in allowing for correlation between consumer valuations, \(v_i = (v_{i1}, ..., v_{in})\). Let \(\rho_{c,d} = corr(v_{ic},v_{id})\). With correlation, the variance of the per-good valuation for a bundle of size n, \(x_{in}\), may be written as $$\sigma^2_n = \frac{1}{n^2}\left(\sum_{c=1}^n \sigma^2_c + 2\sum_{c=1}^{n-1}\sum_{d=c+1}^n \rho_{c,d}\sigma_c\sigma_d\right) = \frac{1}{n}\left[\frac{1}{n}\left(\sum_{c=1}^n \sigma^2_c + 2\sum_{c=1}^{n-1}\sum_{d=c+1}^n \rho_{c,d}\sigma_c\sigma_d\right)\right] = \frac{1}{n}\overline{\sigma}^2_c$$ The primary benefit of bundling is due to heterogeneity reduction as measured by the variance of per-good tastes for the bundle, \(\sigma^2_n\). Unlike for Proposition 1 above, once we allow for correlation in tastes, bundle size, n, is not a sufficient statistic for \(\sigma^2_n\). In particular, adding a new good to a bundle changes \(\sigma^2_n\) by both (1) changing \(\bar{\sigma}^2_c\) and (2) increasing n. Let \(\eta = \frac{\omega}{n} = \frac{\sigma^2}{\epsilon^2 n} = \frac{\lim_{n \rightarrow \infty} \bar{\sigma}^2_c}{\epsilon^2 n}\). Then we may re-write Eq. 6 above as $$\epsilon^{n}(\mu - \epsilon) = -\frac{\partial q_n}{\partial p}\frac{\mu - \epsilon}{q_n} \geq -(\eta - 1)\frac{\mu - \epsilon}{1 - \frac{1}{2}\eta} \geq \frac{1 - \eta}{1 - \frac{\eta}{2}}\,(\mu - \epsilon)$$ Differentiating this with respect to \(\eta\) yields $$\frac{\partial \epsilon^{n}(\mu - \epsilon)}{\partial \eta} = \frac{-1}{2 \left[1 - \frac{\eta}{2}\right]^2}\,(\mu - \epsilon) < 0 \tag{11}$$ This is a more general statement of Eq. 7 above. Reducing the (limiting) variance of the bundle (e.g. by increasing n or reducing \(\sigma^2\)) makes per-good demand for a bundle of size n more elastic. The result of the proposition follows from Eq. 11. To see this, suppose the bundle had only two goods (i.e. component 2 was the nth good). It is easy to see that \(\frac{\partial \sigma^2_n}{\partial \rho_{1,2}} > 0\), i.e. making the correlation between components 1 and 2 more negative reduces \(\sigma^2_n\). Since \(\frac{\partial \eta}{\partial \sigma^2} >0\) it follows that \(\frac{\partial \epsilon^{n}}{\partial \rho_{1,2}} < 0\): making correlations more negative makes the bundle demand curve more elastic. For the case of general n, simply note that the variance of a bundle of size n can be decomposed into the variance of a bundle of size (n − 1), the variance of component n, and twice the covariance between a bundle of size (n − 1) and component n. $$\overline{\sigma}^2_c = \frac{1}{n}\left(\sum_{c=1}^n \sigma^2_c + 2\sum_{c=1}^{n-1}\sum_{d=c+1}^n \rho_{c,d}\sigma_c\sigma_d\right) = \frac{1}{n}\left[\left(\sum_{c=1}^{n-1} \sigma^2_c + 2\sum_{c=1}^{n-2}\sum_{d=c+1}^{n-1} \rho_{c,d}\sigma_c\sigma_d\right) + \sigma^2_n + 2\sum_{c=1}^{n-1}\rho_{c,n}\sigma_c\sigma_n\right] = \frac{1}{n}\left[\sigma^2_{\widetilde{n-1}} + \sigma^2_n + 2\rho_{\widetilde{n-1},n}\,\sigma_{\widetilde{n-1}}\,\sigma_n\right]$$ Appendix 2: Instruments In this appendix, we present an analysis of the instruments used for prices and network carriage in the econometric analysis. Price instruments To assess the power of the price instruments, Table 7 presents results from reduced form regressions of prices on the instruments and exogenous variables. The results are organized in sets of three columns. For each set of three, the first column reports the point estimates from the regression of the price of basic service, \(p_b\), on the instruments and included exogenous variables. Similarly in the second and third columns for the price of Expanded basic services I and II, \(p_I\) and \(p_{II}\), if offered. Table 7 First-stage estimation, prices (rows: homes passed, MSO subscribers and its square, owner affiliation, and channel capacity, each interacted with basic, expanded I and expanded II service dummies; columns: \(p_b\), \(p_I\), \(p_{II}\) under the Cost and MSO-price instrument sets; coefficient estimates and R-squared values omitted). Reported are results from reduced form estimation of prices for basic service (b) and up to two expanded basic services (I, II), on the instruments and exogenous variables. Results are organized in sets of three columns. The first set report estimates using Cost shifters as instruments, defined as homes passed, number and square of subscribers served by same firm (MSO), owner affiliation with programming networks, and channel capacity, interacted with cable service dummy variables. Separate effects for each service are not identified for some parameters in the Expanded Service equations. The second set of columns report estimates using MSO Prices as instruments, defined for each service as the average price for that service at other systems owned by the same MSO. Reported p value in each column is for hypothesis test of joint insignificance of reported parameters. The first set of three columns report estimates using cost shifters as instruments for cable prices. As these shifters do not vary across services, we interact them with cable service dummy variables to allow their effects to differ by service. Reported are the estimated parameters for these interactions. Evidence in support of the cost instruments is mixed. While homes passed does not appear to be an important cost shifter in any equation, the remaining variables enter intermittently. Most influential are affiliation (negative and significant in the first and third columns) and MSO subscribers and its square (negative for large values and occasionally significant in the first and third columns). Channel capacity enters as expected only in the second column. That said, p values associated with the hypothesis test of joint insignificance for all parameters are trivially small in all but the expanded I equation. On balance, while supporting their use as instruments, lack of variation across services and an indirect connection to marginal costs suggests the cost shifters may be weak instruments.
The second set of three columns report estimates using prices of cable services of other systems within an MSO as instruments. The results are quite promising. Other-system prices within an MSO provide strong and significant effects for both basic and Expanded I equations, particularly for prices of the same service. Results for a second expanded service are poor, possibly due to relatively few observations. As expected, p values associated with the hypothesis of joint insignificance are soundly rejected for the basic and Expanded I equations. Network instruments To assess the power of the network instruments, Table 8 presents a synopsis of reduced form (probit) regressions of network carriage on the instruments and included exogenous variables. As above, the results are organized in sets of three columns. As we must predict the carriage of each of the top-15 cable networks (as well as the sum of other cable networks) on all the exogenous variables and instruments, the number of estimations performed was considerable. Rather than report the point estimates of the instruments for each specification, we simply report the p value from the hypothesis test of joint insignificance of the instrument set. As can be seen from the table, the instruments have considerable power, at least for the basic and first expanded basic equation. Coefficient estimates were as expected—particularly powerful predictors of the carriage of network q on service s was the corresponding likelihood it was carried on service s by other systems within its MSO. Table 8 First-stage estimation, network carriage (columns: \(\mbox{NET}_b\), \(\mbox{NET}_{I}\), \(\mbox{NET}_{II}\); visible row labels: WTBS, Other, Satellite; p values omitted). Reported are results of reduced form (probit) estimation of the carriage of each reported network on Basic Service (b) and up to two expanded basic services (I, II), on the instruments and exogenous variables. All specifications use MSO Networks as network instruments, defined for each network on each service as the proportion of other systems owned by the same MSO carrying that network on that service. Reported are p values for hypothesis test of joint insignificance of network instruments. Lack of carriage on Expanded Service II prevented identification of the impact of instruments for some networks.
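A small simulation (ours, not part of the paper's empirical strategy) may help with intuition for the mechanism behind Proposition 1 above: with i.i.d. normal tastes the per-good valuation of a bundle has standard deviation \(\sigma/\sqrt{n}\), so demand evaluated at a price just below the mean valuation becomes steeper — more elastic — as goods are added. All parameter values below are arbitrary illustrations.

```python
import numpy as np

def bundle_demand(per_good_price, n_goods, mu=10.0, sigma=4.0, n_consumers=200_000, seed=1):
    """Share of consumers whose per-good bundle valuation exceeds the per-good price.

    Tastes v_ic ~ N(mu, sigma^2), i.i.d. across goods and consumers, so the
    per-good bundle valuation x_in = mean_c v_ic has std sigma / sqrt(n_goods).
    """
    rng = np.random.default_rng(seed)
    x = rng.normal(mu, sigma, size=(n_consumers, n_goods)).mean(axis=1)
    return np.mean(x > per_good_price)

def elasticity(per_good_price, n_goods, dp=0.05):
    """Arc elasticity of per-good bundle demand around per_good_price."""
    q_lo = bundle_demand(per_good_price - dp, n_goods)
    q_hi = bundle_demand(per_good_price + dp, n_goods)
    q_mid = 0.5 * (q_lo + q_hi)
    return -(q_hi - q_lo) / (2 * dp) * per_good_price / q_mid

price = 9.0   # a per-good price just below the mean taste of 10
for n in (1, 2, 5, 15):
    print(n, round(elasticity(price, n), 2))
# Elasticity rises with bundle size: taste heterogeneity of the "average good"
# shrinks like sigma / sqrt(n), so demand near the mean valuation gets steeper.
```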
Manhattan Harvester and Cropper: a system for GWAS peak detection Toomas Haller ORCID: orcid.org/0000-0002-5069-65231, Tõnis Tasa2 & Andres Metspalu1 Selection of interesting regions from genome wide association studies (GWAS) is typically performed by eyeballing of Manhattan Plots. This is no longer possible with thousands of different phenotypes. There is a need for tools that can automatically detect genomic regions that correspond to what the experienced researcher perceives as peaks worthwhile of further study. We developed Manhattan Harvester, a tool designed for "peak extraction" from GWAS summary files and computation of parameters characterizing various aspects of individual peaks. We present the algorithms used and a model for creating a general quality score that evaluates peaks similarly to that of a human researcher. Our tool Cropper utilizes a graphical interface for inspecting, cropping and subsetting Manhattan Plot regions. Cropper is used to validate and visualize the regions detected by Manhattan Harvester. We conclude that our tools fill the current void in automatically screening large number of GWAS output files in batch mode. The interesting regions are detected and quantified by various parameters by Manhattan Harvester. Cropper offers graphical tools for in-depth inspection of the regions. The tools are open source and freely available. For over a decade the genome-wide association studies (GWAS) have been a powerful tool in the arsenal used for unraveling the information present in the genome [1]. Despite certain skepticism this approach is not showing signs of fatigue. Quite to the contrary, the number of GWAS carried out is increasing, returning useful information for understanding the genome and predicting and helping to cure disease [2]. All this paves the road for personalized medicine – bound to become the backbone of the medicine in the future. With the increasing number of genotyped and sequenced individuals as well as advances in high performance computing the GWAS projects undertaken have grown in size and technological complexity [3]. There are reports out that have boosted the number of individual phenotypes in some cases to tens of thousands or more [4]. It is not rare to combinatorially generate even more phenotypes (e.g. metabolite ratio phenotypes) and analyze in one go [5]. These results can no longer be individually evaluated by a researcher. Automatic screening of results is much needed for a quick summary of the findings and to rank them in the order of significance. Yet well documented specific tools for this purpose are still missing to the best of our knowledge. We present Manhattan Harvester (MH) that uses the GWAS output files and detects the signals (peaks) of potential interest from them by mimicking the eye of a researcher. The software computes a list of parameters for each peak and a quality score based on these. MH is supplemented by another original tool – Cropper. Cropper is a visual aid for viewing, zooming, cropping and subsetting GWAS results. It can be used in combination with MH when studying the findings of MH. Scripting and properties Both MH and Cropper are written in C++/Qt [6]. They are open source and can be downloaded from www.geenivaramu.ee/en/tools. It is possible to compile them for all major computational platforms. Both tools are fully documented and accompanied by instructions and examples. Manhattan harvester (MH) MH is a command line tool working on GWAS output files. 
It is able to analyze all chromosomes together or one at a time and can operate in single file or batch mode. MH provides the user with a table containing all physical position regions (peaks) detected in the GWAS output, the peak parameters and their general quality scores (see below). It utilizes original and efficient algorithms to handle the GWAS files. MH starts by reading rows with valid position numbers and p-values under a certain threshold (p-value < 0.001 as the default). Two copies of the data sets are handled in parallel – one remains unchanged (Reference Branch), the other one (Test Branch) is modified by various functions required for signal detection. Later the information from the two branches is merged to get the final annotations (Fig. 1). The modifications performed in the test branch standardize the input so that the peaks (nearby data points representing local regions of low p-values) can be separated from the background noise. The test branch data undergo the following modifications: Signal smoothing. The data are smoothed using a sliding window. Linear regression is performed sequentially for each data point by making use of 5 data points (2 before and 2 after). The middle data point is replaced with its prediction from linear regression (Fig. 2, a-b). Height-based compression. The spaces between data points are compressed based on the average -log(p-value) of their flanking data points multiplied by a constant (default value = 2). This step ensures that the points with small p-values (and hence more likely to belong to the peaks) are compressed closer to one another than the points corresponding to the intermediate space between the peaks (Fig. 2, c). As an example to illustrate this step: points A (-logPA = 4) and B (-logPB = 6) that are originally 1000 bp apart become 100 bp apart (\(1000\ \text{bp} / \left(2\times(4+6)/2\right)\)). Local-range re-distribution. From this step forward the p-values of the points are ignored as the algorithm continues only with a one dimensional projection of the compressed (see step b) physical position values. In this step all points are evenly distributed between their neighboring data points. This is done in two stages so that each point is slid along the position axis relative to two secure anchor points. This means that every other point is relocated using its neighbors as anchors, then the anchors themselves are relocated using their own flanking points as anchors. For example, consider sequential points 1, 2, 3, 4, 5 that have variable distances between them. In the first stage point 2 is set to equal distance from 1 and 3 and point 4 is set to equal distance from 3 and 5. In the second stage point 3 is set to equal distance from 2 and 4. This re-distribution ensures that the distances between points are more evenly distributed - a prerequisite for the next step. The points in the regions falling between the peaks relocate much more than those in the peak regions because the latter are locked tight between their neighbors and they have less space to relocate (Fig. 2, d). The order of points is never altered. As a result the difference between the largest gap found inside the peaks and the smallest gap found in the inter-peak region is widened; essentially the peak points become more distinguishable from the background. This is relevant because the peak regions are now differentiated from the inter-peak regions only by the data point density in the one-dimensional array.
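As a rough illustration of the three test-branch transformations described above, here is a sketch in Python (the published tool itself is written in C++/Qt; the function names, the handling of the first and last two points in the smoothing window, and the toy data below are our assumptions rather than the authors' implementation):

```python
import numpy as np

def smooth(pos, neglogp):
    """5-point sliding-window smoothing: fit a line to each point and its two
    neighbours on either side, and replace the point by the fitted value."""
    out = neglogp.astype(float).copy()
    for i in range(2, len(pos) - 2):
        x, y = pos[i - 2:i + 3], neglogp[i - 2:i + 3]
        slope, intercept = np.polyfit(x, y, 1)
        out[i] = slope * pos[i] + intercept
    return out

def height_compress(pos, neglogp, const=2.0):
    """Shrink each inter-point gap by const * (average -log p of its two flanks)."""
    gaps = np.diff(pos) / (const * (neglogp[:-1] + neglogp[1:]) / 2.0)
    return np.concatenate(([pos[0]], pos[0] + np.cumsum(gaps)))

def redistribute(pos):
    """Two-stage local re-distribution: slide every other point to the midpoint
    of its neighbours, then do the same for the remaining (anchor) points."""
    out = pos.astype(float).copy()
    for start in (1, 2):                       # odd-indexed points first, then even
        for i in range(start, len(out) - 1, 2):
            out[i] = 0.5 * (out[i - 1] + out[i + 1])
    return out

# Toy chromosome: a dense cluster of low p-values between sparse background points.
pos = np.array([1e5, 4e5, 7e5, 7.2e5, 7.4e5, 7.6e5, 7.8e5, 11e5, 14e5])
neglogp = np.array([3.2, 3.5, 6.0, 7.5, 8.0, 7.0, 6.5, 3.1, 3.3])

standardized = redistribute(height_compress(pos, smooth(pos, neglogp)))
print(np.diff(standardized))   # gaps inside the cluster end up much smaller than outside
```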
Vector fragmentation. We modified the framework of univariate clustering [7] for our specific needs. Our vector fragmentation procedure is searching for the optimal clusters within the physical position values space of the chromosome. It is carried out on the standardized input (step c) and the outcome is the genome regions that constitute Manhattan Plot peaks. These regions are separated from the flanking regions by sequential fragmentation of the position values array. The chunks are created by iteratively breaking the vector where the distances between the points are the largest, gradually moving to the smallest. Always the chunk with more data points is carried over to the next round of fragmentation (Fig. 3). During such fragmentation there is a termination point that optimally corresponds to the peak with the densest point distribution. To pinpoint the best stopping point the mean inter-point gap size (meanG) and the maximal inter-point gap size (maxG) are recorded for each fragmentation step. Two parameters are computed for each chunk: a) \(stop1_i=\frac{\max G_i}{\operatorname{mean}G_i}\), b) \(stop2_i=\frac{\max G_i}{\max G_{i+1}}\), where i is the fragmentation step index. The optimal chunk was found to correspond to the index i of \(\max_{i} stop1_i\), or else of \(\max_{i} stop2_i\) if \(stop1_i - stop2_i > 2\). This empirical solution to choose the best fragmentation stopping point eliminated the need for more complicated decision making structures and proved fully adequate for analyzing real data. Larger stop1 and stop2 values generally correspond to the inter-peak regions whereas small values are indicative of fragmentation cuts in the middle of the peaks. Hence the borders where these values turn from large to small align with the peak borders. In addition to this detection system MH also applies several "sanity check" filters such as the maximal height to width ratio, chunk size etc. to narrow down the options space for the stop1/stop2 fragmentation termination system. The last filter in the algorithm is a function that tests the left and right p-values of the newly detected peak candidates to decide whether the next smallest chunk size has more fitting left and right peak termini in terms of p-value (as decided relative to the smallest peak height and baseline p-values); in which case the next smallest chunk is selected instead. MH comes optimized with regard to the analytical parameters as the default values. However, all key parameters can be changed by the user via command line flags as the need arises (see MH manual). Peak characterization and re-looping. Once the peak borders are identified the peak is characterized by a number of parameters. This includes for example General Quality Score (GQS, see below), maximal slope, height to width ratio and more (see Table 1 of Additional file 1). These parameters can be used to let the user filter and prioritize the findings. After this step the data points corresponding to the peak are removed from the data and the algorithm loops back to step d for the next round of vector fragmentation and the identification of the peak with the second highest point density (Fig. 1). The cycle between vector fragmentation, characterization of the created fragments and removal of the characterized fragments continues until data points are depleted. Fig. 1 Workflow of MH
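The fragmentation loop and the stop1/stop2 stopping rule can likewise be sketched in a few lines. This is our reading of the description above, not the authors' code; in particular, how the final comparison between stop1 and stop2 is resolved, and that the selected cut (rather than the chunk before it) yields the reported peak, are assumptions, and the "sanity check" filters are omitted.

```python
import numpy as np

def fragment_candidates(pos):
    """Iteratively split the sorted position vector at its largest gap, keeping
    the larger piece, and record (chunk, maxG, meanG) for every step."""
    chunk = np.sort(np.asarray(pos, dtype=float))
    steps = []
    while len(chunk) > 2:
        gaps = np.diff(chunk)
        steps.append((chunk, gaps.max(), gaps.mean()))
        k = int(np.argmax(gaps))                       # break at the widest gap
        left, right = chunk[:k + 1], chunk[k + 1:]
        chunk = left if len(left) >= len(right) else right
    return steps

def densest_chunk(pos):
    """Pick a peak chunk using the stop1/stop2 rule (our reading of it)."""
    steps = fragment_candidates(pos)            # assumes enough points for >= 2 steps
    max_g = np.array([s[1] for s in steps])
    mean_g = np.array([s[2] for s in steps])
    stop1 = max_g / mean_g
    stop2 = max_g[:-1] / max_g[1:]               # compares each step's maxG with the next
    i1 = int(np.argmax(stop1[:-1]))
    i2 = int(np.argmax(stop2))
    best = i2 if (stop1[i1] - stop2[i1]) > 2 else i1
    return steps[best + 1][0]                    # the chunk isolated by the chosen cut

positions = np.concatenate([np.arange(0, 3e6, 2.5e5),        # sparse background
                            7.00e6 + np.arange(25) * 2e3,    # a dense "peak"
                            np.arange(8e6, 11e6, 2.5e5)])
print(densest_chunk(positions))   # the dense block of positions around 7.00-7.05 Mb
```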
Fig. 2 The key steps of data processing in the Test Branch of MH. a: original (raw) data, b: smoothing, c: height-based compression, d: local range re-distribution. The Y axis is -log(p-value), the X axis is physical position. The absolute position values can be different between different panels of the graph; they were scaled based on the first and last data point positions. Fig. 3 The order of chunk creation during vector fragmentation by MH. The numbers indicate the order of gaps by size. The first fragmentation round (1) yields 8 points, (2) yields 6 points, (3) is not executed because the corresponding area was lost after step (1), and (4) yields 5 points – the densest area of the plot. Table 1 Execution speed of MH with various input file sizes, number of detected peaks and computational systems Cropper is a GUI tool using standard data visualization logic and patterns. It is specifically designed for handling Manhattan Plots. Cropper was developed in synchronization with the demands that originated during MH production, validation and usage. The user can zoom, crop and output parts of the Manhattan Plot in both graphical and numerical format. Cropper also allows the user to sequentially remove peaks from the Manhattan Plot so that they can continue working with the leftover data set after cropping out peaks. Cropper offers two views: a) global view showing all chromosomes, b) local view showing the selected chromosome (Fig. 4). Chromosomes are chosen from the global view while all the selections and manipulations are done in the local view by using the mouse (see Fig. 1 in Additional file 1). It is easy to visualize the regions picked out by MH by copying their ranges directly from the MH output file to the range data field of Cropper. Fig. 4 A summary of how Cropper works. The two views (global and local) make different control options available to the user. In this work we used the NMR metabolite GWAS meta-analysis data set from the MAGNETIC consortium which is freely available [8, 9]. The files had a GWAMA format [10]. The data files were randomly divided into two non-overlapping subsets: method development set (MDS) and the method validation set (MVS). General quality score (GQS) MH computes 16 parameters for each peak. Each parameter describes a certain aspect of the peak region and can be used for subjective ranking (see Table 1 in Additional file 1). We built a model to predict the "goodness" of a GWAS peak based on these values to generate a GQS for each peak. The more comprehensive GQS score was invented to provide a more global quality assignment for each peak that could be used as the main parameter for peak assessment. The peak quality score model was created using the quality scores assigned by the volunteer knowledgeable human evaluators (KHEs, the scientists from the Institute of Genomics, University of Tartu, knowledgeable in GWAS) as dependent variables. We collected the benchmark data set by asking 20 KHEs to evaluate 277 Manhattan Plot peaks (extracted with Cropper from the MDS) on a 5 point scale. These peaks were shown to the KHEs together with flanking areas (1/3 before and 1/3 after the peak region). Each KHE could also see the maximal –log(p-value) for the top of the peak and the width of the peak at p-value = 0.01. The evaluation marks generally agreed well between the KHEs as witnessed by the grading correlation structure (Fig. 5). The correlation structure within the set of parameters used for modeling was investigated (see Fig. 2 in Additional file 1).
Many parameters showed either a strong positive or a strong negative correlation with the marks given by the KHEs, and in most cases the variation was low (see Fig. 3 and Fig. 4 in Additional file 1).
Figure caption: A heat-map with hierarchical clustering showing the Manhattan Plot peak assignment groupings between KHEs.
We used the KHE-generated peak scores together with the 16 parameters generated by MH as attributes to model the outcome grade variables with a mixed-effects proportional odds model [11]. The attribute "peak repetition" was transformed by taking the square root, the attributes "skewness" and "peak balance" were raised to the power of 0.25, and all other attributes except "Kolmogorov normality test" were log-transformed (see Table 1 in Additional file 1). All variables were additionally mean-scaled and standardized. The model used was a mixed-effects proportional odds model with a cumulative link, which assumes an ordinal response for the peak scores; these scores are assumed to be subject to expert-specific effects. We used a step-wise model development approach, starting from the null model, that minimized the average mean square error \( \frac{1}{n}\sum_{i=1}^{n}\left(Pr_i - Tr_i\right)^2 \) (where \( Pr_i \) is the predicted grade of the i-th graded peak, \( Tr_i \) is the true grade given to the peak, and n is the number of graded instances) of the predictions using five-fold out-of-sample cross-validation. The initial dataset was split into five folds and the model was trained five times, using 4 of the 5 folds for training and the left-out fold as test data. We used the R (version 3.3.3) package ordinal (version 2018.4–19) for model development [12]. Of all the parameters, "log max p-value" and "bestslope" were incorporated into the final model. This was the optimal solution, resulting in a minimum five-fold cross-validation MSE of 0.92. The final model parameters were obtained by refitting the final model on all data. This resulted in a Pearson correlation coefficient of r = 0.88 between the expected value of the predicted scores, \( E(score) = \sum_{i=1}^{5} P(score=i)\cdot i \), and the mean estimate from the 20 KHE GQSs (Fig. 6). The R code used for modeling is presented in Additional file 2. The model was implemented in MH.
Figure caption: Validation of the statistical model to estimate GQS – expected scores (based on KHE marks) vs. observed (computed by MH).
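The model-selection criterion and the expected-score summary above can be written compactly. This is not the authors' R code (that is provided in Additional file 2); it is a hypothetical Python illustration in which `fit_ordinal` stands in for the mixed-effects proportional odds fit and the expected score is used as the predicted grade.

```python
import numpy as np
from sklearn.model_selection import KFold

def expected_score(probs):
    """E(score) = sum_{i=1..5} P(score = i) * i; probs has shape (n_peaks, 5)."""
    return probs @ np.arange(1, 6)

def five_fold_mse(X, y, fit_ordinal, seed=0):
    """Average out-of-sample (Pr_i - Tr_i)^2 over a 5-fold split."""
    errors = []
    for train, test in KFold(n_splits=5, shuffle=True, random_state=seed).split(X):
        model = fit_ordinal(X[train], y[train])   # stand-in for the R ordinal-package fit
        probs = model.predict_proba(X[test])      # P(score = 1..5) for each held-out peak
        pred = expected_score(probs)              # predicted grade Pr_i
        errors.append(np.mean((pred - y[test]) ** 2))
    return float(np.mean(errors))
```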
Manhattan Harvester peak detection accuracy
It is important to evaluate the implemented algorithms of MH on test data to show how well they detect Manhattan Plot peak regions and estimate peak quality. The adequacy of the MH algorithm can be evaluated relative to the KHE estimations. It is not preferable to use the known (published) biological significance of the detected regions as a quality measure, because the quality of a peak is not linked to biological significance. Instead, it is the peak height, width and shape that typically mark a peak as interesting for the next steps of a study. We assessed the MH output by comparing the MH-detected regions with the opinion of a KHE using the MVS. Cropper was used by the KHE to conduct the comparison between the Manhattan Plot and the regions picked out as peaks by MH. The KHE inspected 100 randomly chosen peaks ranging from 5.1 to 81.7 on the –log(p-value) scale. Of those, 97 coincided very closely with the KHE opinion; only one peak (max(−log(p-value)) = 39.1) was detected as having a base range two times wider (extended equally to the left and right of the highest point) than marked by the KHE. Massive peaks were fragmented into 2 or 3 sub-peaks on two occasions (separating the center of the peak region from the shoulders). It remained ambiguous, however, whether this solution was better or worse than reporting those peaks (3 Mb wide at a height of –log(p-value) = 3) as single clumps. No existing peaks went unreported, and the reported peak ranges never overlapped, as judged by the KHE. A KHE also visually validated the GQS on the data used to validate the results above. The GQS values computed by MH were always within one unit of the score visually assigned by the KHE using Cropper.
Computational speed
The computational speed of MH was measured by analyzing MDS files on two systems: a) a Hewlett Packard EliteBook 2540p (PC) running Ubuntu 14.04.5 LTS (32-bit) with a 2 GHz processor, and b) the High Performance Computing (HPC) cluster of the University of Tartu running CentOS Linux release 7.4 (64-bit) with a 2.2 GHz processor. All measurements were carried out 5 times on a single processor using the Unix command "time". We used a minimal (filtered) file with p-value < 0.01 (< 1 MB) as well as an unmodified GWAMA meta-analysis file (85 MB) (Table 1). We show that an 85 MB GWAMA-format file with 19 columns, 560,646 rows and 6 genomic regions detected by MH was analyzed in just over 3 s (see Fig. 5 in Additional file 1 for the Manhattan Plot). The analysis took 0.031–0.056 s when file reading time was minimized. We did not detect an association between execution speed and the number of peaks detected. Most of the computational time was spent on file reading, and the overall analysis speed was sufficiently fast for practical purposes. The execution speed of Cropper was not quantified; it was qualitatively judged by the users (KHEs) as sufficiently fast.
Cropper evaluation
Cropper was initially created to facilitate the development process of MH. It gained a role as a tandem tool in the MH workflow, used for visual validation of MH assignments. Other graphical tools that could be adapted for viewing or cropping Manhattan Plots were not convenient for a quick visual assessment of the findings from MH. Cropper proved uniquely valuable for quickly conducting the comparison between the peak assignments by MH and the KHE for this project, both in terms of ease of use and speed. The capability of Cropper to visualize regions from the MH output file in one copy-and-paste step proved most useful. The KHEs specifically pointed out that Cropper had no easy-to-use alternatives for fast visualization of selected Manhattan Plot regions. To this day, GWAS results are typically evaluated visually one at a time. This approach works well only with a small number of files; with many files, automated help is needed. Manhattan Plot peaks are difficult to model with a simple mathematical function because they reflect genomic structure and underlying biological aspects in addition to a certain expected peak geometry. We tackled this issue by creating a) MH, which uses the GQS and other parameters to mimic the opinion of an experienced researcher when picking out genome regions with features calling for further attention, and b) Cropper, which accompanies MH whenever there is a need to study the detected regions up close. We demonstrate that our tools are fully adequate both in terms of accuracy and speed. We have developed MH so that it is certain to detect all peaks, and it is up to the user to draw the line between the interesting and the uninteresting. The MH algorithms can automatically scale to adapt to various peak identities.
The peaks are not initially identified based on p-value but rather on a collection of peak qualities, via specialized data pre-processing and vector fragmentation techniques. Data points with very low p-values are ignored if they are not found to belong to high-certainty peaks. MH outperforms simple GWAS output screening based on the magnitude of the p-value: the low p-value points are combined into regions, and the regions, not the individual low p-values, are reported as peaks. This converts the findings into a manageable set that can be further ranked according to the user's needs. MH and Cropper are meant to integrate into the GWAS result analysis workflow. The workflow starts with the generation of GWAS summary statistics using GWAS software. All resulting files are then analyzed by MH in batch mode. The regions of interest are ranked by filtering and sorting the MH output, and only a small number of hits are brought to the researcher's attention. The researcher can then focus on the short list by using Cropper. The utility of Cropper was assessed by asking the University of Tartu scientists who used it; the feedback was fully positive, confirming that the tool was needed. MH does not deal with issues of biological significance. The peaks are evaluated based solely on visually observable characteristics. This was the goal, because this is also how a scientist evaluates the peaks. The same phenotype can result in Manhattan Plots of different visual quality depending on the number of subjects and other study characteristics. MH is not devoid of limitations. It does not currently use MAF or imputation quality information and thus relies on quality pre-filtered GWAS files. It also does not attempt to validate the peaks using non-mathematical methods, such as database searches or comparisons against known genome-region associations, to find gene identities. Although MH does not provide graphical output, this functionality is covered by Cropper. Our future plans include updating the tools based on user feedback. We created a system for quickly detecting interesting genomic regions from GWAS output files. To the best of our knowledge there are no other tools that help extend the human eye to a large number of GWAS outputs the way MH/Cropper do; however, such tools are needed. The aim of MH is to perform a fast initial screening of data volumes not manageable by eye. MH and Cropper together constitute a system that allows the user to qualitatively study a large number of GWAS results.
Project name: Manhattan Harvester.
Project home page: www.geenivaramu.ee/en/tools
Operating system(s): cross-platform.
Programming language: C++/Qt.
Other requirements: Qt 4.3 or higher (free from www.qt.io).
License: GNU GPL.
Abbreviations – GHz: Gigahertz; GQS: General Quality Score; GUI: Graphical User Interface; GWAS: Genome-Wide Association Study; KHE(s): Knowledgeable Human Evaluator(s); MAF: Minor Allele Frequency; MAGNETIC: Metabolites And GeNETIcs Consortium; MB: Megabyte; MDS: Method Development Set (subset of the MAGNETIC data set); MH: Manhattan Harvester; MVS: Method Validation Set (subset of the MAGNETIC data set); NMR: Nuclear Magnetic Resonance.
References
1. Gibson G. Population genetics and GWAS: a primer. PLoS Biol. 2018;16(3):e2005485.
2. Visscher PM, Wray NR, Zhang Q, Sklar P, McCarthy MI, Brown MA, et al. 10 years of GWAS discovery: biology, function, and translation. Am J Hum Genet. 2017;101(1):5–22.
3. Ganna A, Genovese G, Howrigan DP, Byrnes A, Kurki M, Zekavat SM, et al. Ultra-rare disruptive and damaging mutations influence educational attainment in the general population. Nat Neurosci. 2016;19(12):1563–5.
4. Neale Lab.
http://www.nealelab.is/blog/2017/7/19/rapid-gwas-of-thousands-of-phenotypes-for-337000-samples-in-the-uk-biobank. Accessed 27 June 2018.
5. Haller T, Kals M, Esko T, Mägi R, Fischer K. RegScan: a GWAS tool for quick estimation of allele effects on continuous traits and their combinations. Brief Bioinform. 2015;16(1):39–44.
6. Qt. https://www.qt.io/. Accessed 27 June 2018.
7. Song J, Wang H. Tutorial: Optimal univariate clustering. 2017. https://cran.r-project.org/web/packages/Ckmeans.1d.dp/vignettes/Ckmeans.1d.dp.html.
8. Kettunen J, Demirkan A, Würtz P, Draisma HH, Haller T, Rawal R, et al. Genome-wide study for circulating metabolites identifies 62 loci and reveals novel systemic effects of LPA. Nat Commun. 2016;7:11122.
9. Computational Medicine, MAGNETIC NMR-GWAS summary statistics. http://www.computationalmedicine.fi/data#NMR_GWAS. Accessed 27 June 2018.
10. Mägi R, Morris AP. GWAMA: software for genome-wide association meta-analysis. BMC Bioinformatics. 2010;11:288. https://doi.org/10.1186/1471-2105-11-288.
11. Hedeker D, Mermelstein RJ, Demirtas H, Berbaum ML. A mixed-effects location-scale model for ordinal questionnaire data. Health Serv Outcomes Res Methodol. 2016;16(3):117–31.
12. R Core Team. 2017. R: a language and environment for statistical computing. R Foundation for Statistical Computing, Vienna. https://www.r-project.org/. Accessed 27 June 2018.
We thank the 20 scientists (KHEs) from the Institute of Genomics, University of Tartu, for evaluating peaks for MH development. We also acknowledge the High Performance Computing Centre of the University of Tartu for their technical support. We received support from EU H2020 grant ePerMed (#692145), EU H2020 grant 633589, targeted financing from the Estonian Government (IUT20–60), and the Estonian Center of Genomics/Roadmap II (project No. 2014–2020.4.01.16–0125). This research was also supported by the European Union through the European Regional Development Fund (Project No. 2014–2020.4.01.15–0012) and by the US National Institutes of Health [R01DK075787]. Manhattan Harvester and Cropper, together with their source code and supporting materials, can be downloaded at www.geenivaramu.ee/en/tools.
Estonian Genome Center, Institute of Genomics, University of Tartu, 23b Riia Street, 51010, Tartu, Estonia – Toomas Haller & Andres Metspalu. Institute of Computer Science, University of Tartu, Juhan Liivi 2, 50409, Tartu, Estonia – Tõnis Tasa.
TH initiated the project, developed the algorithms, implemented the algorithms in C++, generated part of the figures and wrote the manuscript. TT performed the statistical modeling, provided R programming knowledge, generated part of the figures and critically reviewed the manuscript. AM coordinated the project and provided funding. All authors read and approved the final manuscript. Correspondence to Toomas Haller.
Additional file 1: Supplementary table and figures that show the parameters computed by MH and the Cropper interface. (PDF 679 kb) Additional file 2: R files showing the computer code used for modeling. (ZIP 2 kb)
Haller, T., Tasa, T. & Metspalu, A. Manhattan Harvester and Cropper: a system for GWAS peak detection. BMC Bioinformatics 20, 22 (2019). doi:10.1186/s12859-019-2600-4. Accepted: 03 January 2019. Keywords: GWAS; Manhattan plots; Peak detection; Peak quality score.
The Decentralization Foundation Funding technical classes d24n.org is looking for academic courses focused on decentralizing technology to fund with significant grants. We are especially interested in funding such courses in Latin America and in other locations where such grants would go farther and where a course in the local language would allow more people to learn about blockchain technology. We require that video and other course material be made available for free online. Education has been part of our mission from the start; quoting our first blog post "We want to educate this community and others on the underlying technology, including its potential and limitations." We are open to courses that are not focused on the technology, such as those focused on cryptocurrency economics and sociology, though such will face more scrutiny during the review process. We believe that high-quality technical courses (such as those below) at universities are a great aid in building local communities that are knowledgeable about decentralizing technology. Here are examples of technical courses that we see as excellent models: "Internet-Scale Consensus in the Blockchain Era" taught by David Tse at Stanford in 2021. The focus of this course is on blockchain consensus such as proof-of-work and proof-of-stake, which are analyzed in terms of throughput, efficiency, and security. "Bitcoin and Cryptocurrencies" was taught by Dan Boneh at Stanford in 2020. This course covered blockchain basics, consensus protocols, and blockchain scaling. This has a broader focus than Tse. "Smart Contracts and Blockchain Security" by Andrew Miller at UIUC is a similar broadly-focused course. A technical cryptography course that focuses on other areas (one-way functions, public-key cryptography, zero-knowledge proofs) yet includes significant content on the basics of how a blockchain works is acceptable. In all cases we require that the course be taught in an academic setting in a language other than English, that video, syllabi, and homework be made available online, and that a connection with the community outside the academic setting be made (say local meetups). We require that such courses cover potential problems with the technology (e.g. energy efficiency and scalability). We are mainly interested in funding such courses in areas where English proficiency is not as high and where online material for a course in the local language would facilitate better understanding of blockchain technology. We believe that these technical courses would provide critical context for local cryptocurrency, IPFS, or similar communities in which a strong technical background is not common. Please go to d24n.org and fill out an application! Lighthouse Research Report We are happy to share a report resulting from a grant made last year by the foundation to Sigma Prime for scientific research and development related to Ethereum 2 (Eth2). The report outlines challenges encountered and solutions found while implementing the Lighthouse Eth2 client. 
To illustrate the work, we list here a few items described in the report: A validator shuffling optimization which resulted in a 200x speedup (see page 5), An epoch processing optimization (page 6) The Lighthouse slasher (page 6) The Rust implementation of Gossipsub including an online scoring system (pages 7,8) Lighthouse client fuzzing (pages 8,9) Weak Subjectivity Sync (page 8) The report has a wonderful set of links to further material that we encourage you to investigate for further info on these topics. The end of the report highlights the "The Merge" in which Ethereum (and Lighthouse) will transition from proof-of-work to proof-of-stake: In the coming year, we hope to be bringing The Merge into its final stages and hopefully production. The specification for The Merge is available here (inspired by this research post and still going through significant changes). The Merge is a very broad initiative involving the entire community, so it's difficult for us to define a timeline, but we're optimistic for a merge in late 2021. Additional info can be found in the Lighthouse book, by following the Sigma Prime blog, and by reading the report. We are grateful to Sigma Prime for their scientific research, their excellent work on the Lighthouse client, and for this research report. An Economic Analysis of EIP 1559 We are happy to share the report resulting from Professor Tim Roughgarden's work on the Ethereum transaction price mechanism, including EIP 1559. The report is detailed yet accessible and educational summary of the proposal (variable-size blocks, the burned base fee) as well as an analysis of its strengths and weaknesses compared to the current mechanism and related proposals. The report is positive about EIP 1559, suggesting it meets the goals of easy fee estimation, lower variance on transaction fees, robustness to off-chain agreements by miners, and reduced inflation. Even so, there are some "Key Takeaways" which may be unexpected (see Section 1.2): 7. The seemingly orthogonal goals of easy fee estimation and fee burning are inextricably linked, with the threat of off-chain agreements precluding one without the other. (Sections 8.1–8.2) 8. EIP-1559's base fee update rule is somewhat arbitrary and should be adjusted over time. (Section 8.4) 9 Variable-size blocks enable a new (but expensive) attack vector: overwhelm the network with a sequence of maximum-size blocks. (Sections 8.4.5–8.4.6) Tim is Professor of Computer Science at Columbia University, and is a leading algorithmic game theorist. He has been awarded the ACM Grace Murray Hopper Award, the Gödel Prize, and many other awards. We are very grateful that he was willing to work on the Ethereum transaction price mechanism. We anticipate that his accessible writeup will act as an introduction and guide to Ethereum price mechanisms and bring attention to the academic world of the interesting and challenging problems found in Ethereum and other blockchains. We are very pleased with the report. Thank you Tim! Thank you James Fickel! The Decentralization Foundation is a 501(c)(3) nonprofit and relies entirely on donations to operate and continue our vision of a decentralized future. Everyone at D24N.org is grateful for a very generous donation by James. This donation reinforces our belief of a decentralized future as well as a trust given to everyone at the Foundation to realize these goals; we at D24N.org will work hard to ensure this trust is well earned. 
James' donation will help the Foundation fulfill its mission of promoting decentralizing technology, educating the world on its transformative potential, and funding research to improve it. Thank you Gustav Simonsson! Gustav made a large donation that funded much of our work through 2019. Donations to 501(c)(3) organizations can be restricted to a specific area of focus, or be unrestricted. As an example of restricted donations, the foundation until recently allowed donations specifically for Ethereum 2 Research and Development. Unrestricted donations may be used for any of the non-profit's charitable purposes, and are much easier for the non-profit to use. James' and Gustav's donations were unrestricted. Thank you James and Gustav! And many thanks to our smaller donors and supporters. We are, for obvious reasons, very grateful to our larger supporters. But it's our small donations that keep us decentralized, in harmony with our mission. Thanks to each and every one. Grant to Sigma Prime for Ethereum 2 Research The Foundation has made a significant grant to Sigma Prime for continued scientific research and development related to Ethereum 2. This will fund critical efforts, with all findings open to the public in the form of a research paper to be published later. We are committed to a decentralized future, including pushing the envelope of necessary core technologies such as Ethereum 2 forward through support of innovative efforts and teams. Ethereum 2 is a long-planned upgrade focusing on scalability and proof-of-stake. Phase 0 is implementation of the beacon chain which lays the ground for shard chains and staked ETH. The Foundation believes that continued research on Ethereum 2 and development of associated open-source clients will benefit the Ethereum community as well as other blockchains. Sigma Prime has implemented Phase 0 of Ethereum 2 in the Lighthouse client. This is one of the leading implementations; we are very happy to support Sigma Primes' work. Here is a quote from our grant agreement (with some legalese removed): Sigma Prime will perform scientific research geared towards a) analysis of the appropriate mechanism for implementing an environmentally sustainable proof-of-stake consensus mechanism in an Ethereum 2 client; b) preliminary implementation of sharding to allow transaction scalability. … Sigma Prime will engage with the Ethereum community in order to educate and disseminate the fruits of the supported research. Sigma Prime will deliver a scientific report that will include a) an executive summary aimed at a non-technical audience, b) an introduction to Ethereum 2, c) a summary of Ethereum 2 research and engineering challenges, d) a Lighthouse architecture overview, e) a review of the Lighthouse development process, f) key takeaways, and g) a roadmap for further scientific research and education. We look forward to the report from Sigma Prime! The Decentralization Foundation's mission is to promote decentralizing technology, educate on its potential, and fund research to improve it. We are a 501(c)(3) non-profit and are funded by tax-deductible donations. Tim Roughgarden will work on EIP 1559 The Decentralization Foundation has entered into an agreement with Tim Roughgarden for 225 hours of work by him on the Ethereum transaction price mechanism, including EIP-1559. EIP 1559 proposes a new mechanism in which the base fee is adjusted by the protocol to target an average gas usage per block instead of an absolute gas usage per block. 
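To make the mechanism concrete, the base fee update can be sketched in a few lines. This is an illustrative simplification added here, not part of Tim's analysis; the target size, the 1/8 per-block adjustment bound, and the integer arithmetic follow the commonly cited form of the proposal and should be read as assumptions rather than the final specification.

```python
# Simplified sketch of the EIP-1559 base fee update rule (assumed parameters).
BASE_FEE_MAX_CHANGE_DENOMINATOR = 8   # base fee moves by at most about 1/8 per block
ELASTICITY_MULTIPLIER = 2             # a block may use up to twice the target gas

def next_base_fee(base_fee: int, gas_used: int, gas_limit: int) -> int:
    gas_target = gas_limit // ELASTICITY_MULTIPLIER
    if gas_used == gas_target:
        return base_fee
    if gas_used > gas_target:
        delta = base_fee * (gas_used - gas_target) // gas_target
        return base_fee + max(delta // BASE_FEE_MAX_CHANGE_DENOMINATOR, 1)
    delta = base_fee * (gas_target - gas_used) // gas_target
    return base_fee - delta // BASE_FEE_MAX_CHANGE_DENOMINATOR

# A completely full block raises the base fee by roughly one-eighth; an empty block lowers it.
print(next_base_fee(100, gas_used=30_000_000, gas_limit=30_000_000))  # -> 112
```

The point of the rule is that persistent demand above the target steadily raises the burned base fee until blocks return to the target size, which is what makes fee estimation easier for users.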
See the EIP-1559 page for more details and EIP-2593 for another proposal. The foundation believes that the Ethereum transaction price mechanism is an important issue facing the Ethereum community, including in ETH2. The technique chosen will be analyzed by users, miners, attackers, and other blockchains. Tim is Professor of Computer Science at Columbia University, and is a leading algorithmic game theorist. He has been awarded the ACM Grace Murray Hopper Award, the Gödel Prize, and many other awards. We are very grateful that he is willing to work on the Ethereum transaction price mechanism. Here is a quote from Tim: The proposed research will provide an analysis of proposed changes to the transaction fee mechanism in the Ethereum protocol, including: (i) a formal definition of the design space, including the proposals in EIP-1559 and EIP-2593; (ii) a formal definition of the objectives and constraints; (iii) a formal model of transactions; (iv) a formal game-theoretic model for miners; (v) the identification of formal trade-offs between the competing objectives; (vi) if possible, the identification of an "optimal" mechanism. Tim will produce a progress report halfway through the work and a final report that will be made publicly available on arXiv. Any software produced will be made publicly available. The agreement includes 45 hours of communication with the Ethereum community, including relevant chat and discussion forums. The foundation is excited for this work, and can't wait to see what results! The Foundation funded eleven technical events and funded two grants in 2019. Donations worth $13,134 (at the time of donation) were received. At the end of 2019 the foundation had on- and off-chain assets worth about $7,701. We say "about" because the value of on-chain assets are not stable. The foundation funded Simon Castano 0.20281 BTC (about $1920) for work on Bitcoin.jl and related software in the Brane project. The foundation also funded Matt Quinn 0.04805 BTC (about $455) for bootstrapping the "Distributed Saturdays" in the Blockchain Free school. Neither of these projects have submitted a report. Food for eleven Silicon Valley Ethereum Meetup events were paid for by the foundation. Examples include an event on March 17 in which Origin presented their protocol for building peer-to-peer marketplaces, Vivek Bagaria presenting on operating blockchains at physical limits on May 26, a proof-of-stake introduction on June 30, and Steve Waldman presenting on sbt-ethereum on Oct 13. The meetup.com membership expenses for the SV Eth meetup were also paid. The foundation paid $750 for 5 months of membership at Hacker Dojo, though we have found venues at which events can be held free and stopped the Hacker Dojo membership. Other expenses include $394 for domains and website hosting and $195 for to a law firm (they charged us for their response to our email telling them they were too expensive). For more details, see our 2019 financial report. You may also want to compare with the 2018 report. Pro-Enjoyment, Anti-Harassment Policy The Decentralization Foundation supports several community groups, and looks forward to supporting others. Our main goal in supporting them is to to provide technical content and education about decentralizing technology in an enjoyable way. With this goal in mind, here is the foundation's pro-enjoyment pro-enjoyment, anti-harassment policy that the foundation applies to itself, and requires groups that it supports to follow. 
Specifically, the foundation is dedicated to providing a enjoyable and harassment-free experience for everyone, regardless of gender, gender identity and expression, sexual orientation, disability, physical appearance, body size, race, age, religion, or whether you favor one crypto-currency or another. We do not tolerate harassment of event or group participants in any form. Sexual language and imagery is not appropriate for any event, including talks. Participants violating these rules may be expelled from an event or from a group at the discretion of the organizers. Harassment includes, but is not limited to: Sexual images in public spaces Deliberate intimidation, stalking, or following Harassing photography or recording Sustained disruption of talks or other events Inappropriate physical contact Unwelcome sexual attention Advocating for, or encouraging, any of the above behavior Participants asked to stop any harassing behavior are expected to comply immediately. If a participant engages in harassing behavior, event organizers retain the right to take any actions to keep the event a welcoming environment for all participants. This includes warning the offender or expulsion from the event and group. If someone makes you or anyone else feel unsafe or unwelcome, please report it as soon as possible to an organizer or event host in person, or to [email protected]. Harassment and other code of conduct violations reduce the value of events for everyone. We want you to be happy and enjoy events we sponsor. People like you make our event a better place. Ethereum Yellow Paper Improvements, Kestrel Institute We are very happy to share a report from the recipients of our first grant, the Kestrel Institute; the grant of $3,000 was for "Ethereum Yellow Paper Improvements." Kestrel is known for (among other projects) their work on formal specification and implementation of a full Ethereum client using the ACL2 theorem prover and APT toolkit. In the course of this client development and related work, they found a few ways in which the Ethereum Yellow Paper could be improved, which was the focus of our small grant to them. Alessandro Coglio and Eric McCarthy implemented the changes and also wrote the report below summarizing how the grant was used. Final Report, 2019-05-31 by Alessandro Coglio and Eric McCarthy What is the Yellow Paper, and how is it related to Decentralization? The Yellow Paper (YP) is the primary specification for the Ethereum blockchain. Ethereum is the most widely used blockchain smart contract platform, and therefore it is the most widely used platform for decentralized applications. Improving the security of Ethereum is important for all applications running on the platform. Improving the precision and clarity of the YP can help implementors of Ethereum avoid bugs, so improvements to the YP are contributions to the security of Ethereum and therefore to the security of many decentralized applications. Work done under this grant Prior to this grant, we were working on formalizing Ethereum in the ACL2 theorem prover. The Yellow Paper (YP) is the primary specification for Ethereum, so it was our primary reference. As a specification, it should be precise and understandable by someone who recognizes notations used by mathematics and logic. Whenever we found issues with the YP, we recorded the issues and our proposed solutions. On this grant, we worked though a large part of our list of problems with the YP. 
This included things that seemed wrong, inconsistent, underspecified, or unclear when we attempted to formalize the YP. We studied each issue and proposed suggested improvements. We reached consensus within Kestrel and then created pull requests and participated in subsequent community discussions on Github to reach consensus with the community, in some cases modifying the suggested changes accordingly. In some cases, we studied implementations. When community consensus was reached, we pushed our changes, thus improving the YP for all current and future users. We made the following changes to the Yellow Paper (YP) under this grant. This list does not include pull requests done prior to the start of this grant. The changes vary in complexity and importance. The most interesting change from a technical point of view is: Explicate non-RLP-encodable structures https://github.com/ethereum/yellowpaper/pull/736 However, the cleanup tasks are also important, since they affect the general understandability (and sometimes the correctness) of the YP. Here are the other commits we did under this grant: Change arg max to max in Appendix D. Fix typo. The \mathrm{n} should be just n here. It is the index that the sum ranges over. Add missing comma. Clarify the definition of RLP trees. Consistently use 'byte array'. Fix font and improve wording. Fix notation for sequence elements. Improve summation notation in BE definition. Make all quantified variables non-subscript. Give a general definition of sequence concatenation. Improve sentence clarity and explicitness. Improve appearance of true and false. Pass a bit as the second argument of HP. Fix two obvious typos. (Merged without pull request.) https://github.com/ethereum/yellowpaper/commit/3e367728980573f95fb4a7e6785314500bf4a1b6 We also cleaned up some older, related issues and pull requests by adding explanatory comments, some of which were subsequently closed. From a technical point of view, these can be quite interesting: https://github.com/ethereum/yellowpaper/issues/116 We added this issue, which has not yet been resolved: Note, we did a variety of adjunct work without charging to the grant, such as recording and discussing our proposed changes and discussions and meetings about maintainership. Our experience attempting to improve the Yellow Paper When we started working on the Yellow Paper, we thought it would be reasonably straightforward to submit pull requests with our proposed improvements. However, after Yoichi Hirai departed, his role as an active maintainer of the YP was not specifically reassigned to anyone else at the Ethereum Foundation. We would like to thank Nick Savers for stepping in to maintain the YP and to review and merge and approve many pull requests, Kenneth Ng for connecting us with Jamie Pitts, and Jamie Pitts for attempting to build a Yellow Paper community. Jamie started the Ethereum Specification mailing list ([email protected]). Jamie also set up a Doodle signup for an Ethereum Specification group call, but the only people who signed up were from Kestrel Institute: Alessandro Coglio and Eric McCarthy, and the call never happened. Then Jamie gave us push access to the Yellow Paper Github repo, so we are part of a small group of YP maintainers. More Information about the Yellow Paper There seem to be many people who find the YP valuable, and many who do not. 
There are people who want the YP to be "The Canonical Ethereum Specification", those who want a machine-readable canonical specification (such as expanding the Jello (K-framework) specification to cover more than the EVM), and those who think it is fine not to designate any specification as "canonical" and who support a diversity of specifications. Rather than take any sides in these discussions, we mainly focused on improving what we find useful: the Yellow Paper. Currently, the main forums for discussing the Yellow Paper are: the issues and pull requests of the Github repo at https://github.com/ethereum/yellowpaper Gitter: https://gitter.im/ethereum/yellowpaper There are also occasional discussions at Ethereum Magicians. https://ethereum-magicians.org Example: https://ethereum-magicians.org/t/yellow-paper-maintainership/819 Nick Savers is the primary Maintainer: https://en.ethereum.wiki/maintainers On this grant we accomplished our goal to improve the YP in quite a few ways. Also, this project has been helpful for us to become better integrated contributors to the Ethereum Ecosystem. We have met others in the intersection of Ethereum and Formal Methods. We continue to see ways the Yellow Paper can be improved, and we hope to continue contributing to the YP in the future. We thank the Decentralization Foundation for helping us get to this point. The following much-delayed summary gives details about significant activities of the Foundation in 2018, including donations, expenses, community events that were funded, and a grant. The foundation started in Aug 2018 using donations from anonymous sponsors totaling $20,000. We are very grateful for these generous donations! In October, the foundation gave a grant of $3,000 to the Kestrel Institute for work on the Ethereum yellow paper. The results of this grant are detailed in a separate blog post. We are very happy with Kestrel's work. We funded a Silicon Valley Ethereum Meetup event with Yannis Smaragdakis and another with Mooly Sagiv and Eric Smith. These sort of events will be a priority in the future for us. Unfortunately we paid $12,918 for legal expenses associated with setting up the foundation. Of course this is too much for our tiny foundation and we have switched to doing all legal work ourselves, including filing taxes. The other largest expense was $900 for rent; $300 per month for October, November, and December 2018. As with the legal expenses, this is too much for our non-profit and in collaboration with Mindrome and another Mindrome tenant, we found a way to keep the Mindrome address without significant expense. For more details, see our 2018 financial report. What is it good for? This organization is called the "Decentralization Foundation" which, in its way, invites a certain kind of mockery. "Decentralized" is such a buzzword, and like most buzzwords, it has become associated more with grifting and disappointment than with the putatively good things the word was once meant to connote. The tech that often brands itself as "decentralized" — open blockchains like Bitcoin, the "dApps" built on Ethereum and similar platforms — are arguably very decentralized, and arguably not very decentralized at all. Decentralization optimists emphasize that "the applications are autonomous, outside of the control even of their creators", or that "anyone can participate on equal terms with anyone else, just by bringing some hardware to the table." 
Decentralization pessimists point out how dominated "mining" of Bitcoin and Ethereum are by just a few "pools", how concentrated ownership of the assets that signify economic value is on these platforms, how in practice people with capital or socially connected to core groups of coders and evangelists have disproportionate influence. My own view is that "decentralization" is a completely uninteresting question to debate. Who gives a flop? Decentralization is valuable as a buzzword because it signifies other things, other virtues. First and foremost in my mind, the "decentralized technology" movement aspires (however much it fails), to react against the leeching away of human agency that is the signal social fact of an increasingly large scale, technologically mediated world. "Decentralized" systems claim to be "open" and "permissionless". What that means, or ought to mean in my view, is that human beings — as generally as possible, not just some special technosophisticate caste — should be able to use this technology to act in ways that are socially and economically meaningful for themselves and their own communities, and that are not restricted to patterns and templates sketched out by distant "tech entrepreneurs" or by anyone else. We are very, very far from that world. A whole industry of "blockchain skeptics" has emerged, quite reasonably, to point that out. And yet this problem, that we are building a world in which, however paradoxically, the great power unlocked by advancing technology and large scale specialization and trade leaves most of us feeling ever less powerful, ever more at the mercy of distant and inchoate forces with respect to the circumstances of our own lives and families, is dire. You come to the counterrevolution with the technologies that you have, not those that you might wish to have. The current generation of overhyped, overspeculated, underdeveloped "decentralized tech" is close to the only game in town, from a human agency perspective. We've watched the internet itself, which was supposed to be a great equalizer, become a space more rapidly and efficiently consolidated, more disempowering from an economic perspective, than almost any sphere in the predigital world. It seems unlikely that we will undo the internet's great achievement of stitching a naturally pluralistic world into a single gigantic economy ripe for domination. To restore some hope for human agency, we'll need tools that let humans create and defend their own spaces, which must be economic as well as creative if they are to be sustainable. It is my hope, however well or poorly founded, that from today's often disappointing "decentralized technology", such tools will emerge. So, the "Decentralization Foundation". Chris Peel, our fearless leader, is a remarkably adept builder of intellectual community around technology. His adroitness at that is what brought me into this hall of mirrors. I'm honored that, as he beats a path to turn that mission into a more formal organization, he's chosen to invite me along for the journey. Whether as a builder or critic, grantee or donor, I hope that you'll come along too. Let's see where we all take ourselves. Apr 11 2019 September 10, 2021 Introducing the Decentralization Foundation Update on 8 Apr 2019: We received our 501(c)(3) approval letter today! This allows tax-deductible donations from US taxpayers. It is an important step for the foundation. 
Original post: We are pleased to announce the launch of the Decentralization Foundation, with a mission of promoting decentralizing technology, educating the world on its potential, and funding relevant scientific research. The foundation will raise donations and fund projects and communities that are consistent with the mission. Why create the Decentralization Foundation? We want to build a community that is passionate about decentralization and is also willing to learn from others. We want to educate this community and others on the underlying technology, including its potential and its limitations. We want to fund scientific projects to improve the technology and society that would otherwise not be funded. What will the foundation do? The first focus of the foundation is to support community organizations and events by funding venues, meetup.com fees, and food for events. We will support groups focused on basic decentralized technology, as well as organizations and events focused on specific systems such as cryptocurrencies. We also plan to fund documentation of decentralized systems with the goal of making the technology more accessible to beginners. We are also happy to fund community events which include criticism of the decentralized technologies above. We will promote discussion of decentralization, economics, and social stability. We have already funded a small number of events, including an event on decompiling smart contracts, and another on formal verification of smart contracts. One barrier to adoption of decentralized technologies is education. The second focus of the foundation is to develop and fund courses on important decentralized technologies such as hashes, Merkle trees, basic cryptography, and distributed consensus. We will also fund classes on systems such as Ethereum, Bitcoin, IPFS, and Circles, and finally courses on economics. We will promote a culture of critical thinking as a part of all curriculum. The final focus of the foundation will be on funding scientific research. We have already given a small grant to the Kestrel Institute which we will describe in a subsequent blog post. Who are you? What are your goals? I am Chris Peel, the president of the foundation. Steve Waldman is secretary and treasurer. Elaine Ou is a member of the board; Steve and I are also on the board. Jeff Flowers is vice president. See the foundation website for longer bios. These are smart, wonderful people; it is a privilege to work with and be educated by them. Our long-term goal with the foundation is to grow the decentralization-focused community. We are happy to work with big, well-established meetups, conferences, etc…; even more than this we would like to build communities and provide education in areas where there is none available at present. We acknowledge a long history of thinkers who have emphasized the value of decentralization; for example Tocqueville said: Decentralization has, not only an administrative value, but also a civic dimension, since it increases the opportunities for citizens to take interest in public affairs; it makes them get accustomed to using freedom. "Democracy in America", Alexis de Tocqueville, 1840, Saunders and Otley (London) We recognize that we are building on Tocqueville and many others who contributed to the discussion of decentralization in government, finance, ideology and more. 
Despite the breadth of work on decentralization, the foundation is focused on technology, specifically the use of cryptography, blockchains, and incentives to build decentralized organizations. Steve will talk more about the word "decentralization" in a subsequent post. We have applied to the IRS for 501(c)(3) non-profit status in the US; if this is approved we will be able to accept tax-deductible donations. We are optimistic that our application will be approved. Finally, we are happy to announce a new round of grants; we are accepting applications now for small ($300-$3000) grants with a focus on community groups and meetups. Applications will close April 28. Please apply or donate! Decentralization Foundation 4701 Patrick Henry Dr Santa Clara CA, 95054 [email protected] Connect on meetup.com Follow us on Twitter: @d24nOrg
Taking a derivative with respect to a matrix
I'm studying the EM algorithm, and at one point in my reference the author takes the derivative of a function with respect to a matrix. Could someone explain how one takes the derivative of a function with respect to a matrix? I don't understand the idea. For example, let's say we have a multidimensional Gaussian function: $$f(\textbf{x}, \Sigma, \boldsymbol \mu) = \frac{1}{\sqrt{(2\pi)^k |\Sigma|}}\exp\left( -\frac{1}{2}(\textbf{x}-\boldsymbol \mu)^T\Sigma^{-1}(\textbf{x}-\boldsymbol \mu)\right),$$ where $\textbf{x} = (x_1, ..., x_n)$, $\;\;x_i \in \mathbb R$, $\;\;\boldsymbol \mu = (\mu_1, ..., \mu_n)$, $\;\;\mu_i \in \mathbb R$ and $\Sigma$ is the $n\times n$ covariance matrix. How would one calculate $\displaystyle \frac{\partial f}{\partial \Sigma}$? What about $\displaystyle \frac{\partial f}{\partial \boldsymbol \mu}$ or $\displaystyle \frac{\partial f}{\partial \textbf{x}}$ (aren't these two actually just special cases of the first one)? Thanks for any help. If you're wondering where this question came from, I got it from reading this reference (page 14): http://ptgmedia.pearsoncmg.com/images/0131478249/samplechapter/0131478249_ch03.pdf I added the particular part from my reference here in case someone is interested :) I highlighted the parts where I got confused, namely where the author takes the derivative with respect to a matrix (the sigma in the picture is also a covariance matrix; the author is estimating the optimal parameters for a Gaussian mixture model using the EM algorithm): $Q(\theta|\theta_n)\equiv E_Z\{\log p(Z,X|\theta)|X,\theta_n\}$
Tags: calculus, derivatives, normal-distribution, matrix-calculus
jjepsuomi
Possibly helpful: math.stackexchange.com/questions/94562/matrix-vector-derivative – dreamer Dec 30 '13 at 12:18
How is the function $Q(\theta|\theta_n)$ defined in the screenshot? – dreamer Dec 30 '13 at 12:24
I'll add it to the post. One sec; you can also see it in the reference, pages 7-8. – jjepsuomi Dec 30 '13 at 12:25
I added the definition for $Q(\theta|\theta_n)$. However, I only need to understand how the author is doing the calculations in the reference. A simple example using a multidimensional Gaussian is enough :) I can then calculate it myself for my specific problem (the EM algorithm). – jjepsuomi Dec 30 '13 at 12:29
It's not the derivative with respect to a matrix, really. It's the derivative of $f$ with respect to each element of a matrix, and the result is a matrix. Although the calculations are different, it is the same idea as a Jacobian matrix. Each entry is a derivative with respect to a different variable. The same goes for $\frac{\partial f}{\partial \mu}$: it is a vector made of derivatives with respect to each element of $\mu$. You could think of them as $$\bigg[\frac{\partial f}{\partial \Sigma}\bigg]_{i,j} = \frac{\partial f}{\partial \sigma^2_{i,j}} \qquad \text{and}\qquad \bigg[\frac{\partial f}{\partial \mu}\bigg]_i = \frac{\partial f}{\partial \mu_i}$$ where $\sigma^2_{i,j}$ is the $(i,j)$th covariance in $\Sigma$ and $\mu_i$ is the $i$th element of the mean vector $\mu$.
+1 Ahaa :) that solves it then. Thank you, nothing further ;) – jjepsuomi Dec 30 '13 at 12:24
What you're saying is right; however, this doesn't quite explain how to do the calculations (it's not so easy to do this elementwise).
– dreamer Dec 30 '13 at 12:25
I'd be glad to see some explanations of the calculations if someone wants to show them :) – jjepsuomi Dec 30 '13 at 12:28
You can view this in the same way you would view a function of any vector. A matrix is just a vector in a normed space where the norm can be represented in any number of ways. One possible norm would be the root-mean-square of the coefficients; another would be the sum of the absolute values of the matrix coefficients. Another is the norm of the matrix as a linear operator on a vector space with its own norm. What is significant is that the invertible matrices are an open set, so a derivative can make sense. What you have to do is find a way to approximate $$ f(x,\Sigma + \Delta\Sigma,\mu)-f(x,\Sigma,\mu)$$ as a linear function of $\Delta\Sigma$. I would use a power series to find a linear approximation. For example, $$ (\Sigma+\Delta\Sigma)^{-1}=\Sigma^{-1}(I+(\Delta\Sigma) \Sigma^{-1})^{-1} =\Sigma^{-1} \sum_{n=0}^{\infty}(-1)^{n}\{ (\Delta\Sigma)\Sigma^{-1}\}^{n} \approx \Sigma^{-1}(I-(\Delta\Sigma)\Sigma^{-1})$$ Such a series converges for $\|\Delta\Sigma\|$ small enough (using whatever norm you choose). And, in the language of derivatives, $$ \left(\frac{d}{d\Sigma} \Sigma^{-1}\right)\Delta\Sigma = -\Sigma^{-1}(\Delta\Sigma)\Sigma^{-1} $$ Remember that the derivative is a linear operator on $\Delta\Sigma$; if you squint you can almost see the classical term $\frac{d}{dx}x^{-1} =-x^{-2}$. Chain rules for derivatives apply. So that's how you can handle the exponential composed with matrix inversion. – DisintegratingByParts
"using whatever norm you choose." – Is this because all norms are equivalent in $\mathbb{R}^n$ and the space of matrices is "kind of" one of these spaces? – Soap Apr 4 '17 at 9:43
@Simoes Yes, all norms on the same finite-dimensional linear space are equivalent. – DisintegratingByParts Apr 4 '17 at 14:13
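For completeness — this is an added worked example, not part of either answer — the element-wise recipe gives closed forms for this particular $f$. Treating the entries of $\Sigma$ as unconstrained (the symmetry of $\Sigma$ adds a known correction to the off-diagonal terms that EM derivations usually gloss over) and writing $\log f = -\frac{k}{2}\log(2\pi) - \frac{1}{2}\log|\Sigma| - \frac{1}{2}(\textbf{x}-\boldsymbol\mu)^T\Sigma^{-1}(\textbf{x}-\boldsymbol\mu)$, the standard identities $\frac{\partial \log|\Sigma|}{\partial \Sigma} = \Sigma^{-1}$ and $\frac{\partial\, a^T\Sigma^{-1}a}{\partial \Sigma} = -\Sigma^{-1}aa^T\Sigma^{-1}$ give
$$\frac{\partial \log f}{\partial \boldsymbol\mu} = \Sigma^{-1}(\textbf{x}-\boldsymbol\mu), \qquad \frac{\partial \log f}{\partial \textbf{x}} = -\Sigma^{-1}(\textbf{x}-\boldsymbol\mu),$$
$$\frac{\partial \log f}{\partial \Sigma} = -\frac{1}{2}\Sigma^{-1} + \frac{1}{2}\Sigma^{-1}(\textbf{x}-\boldsymbol\mu)(\textbf{x}-\boldsymbol\mu)^T\Sigma^{-1},$$
and, by the chain rule, $\frac{\partial f}{\partial \Sigma} = f\,\frac{\partial \log f}{\partial \Sigma}$ (and similarly for $\boldsymbol\mu$ and $\textbf{x}$). These are exactly the matrices whose $(i,j)$ entries are the element-wise derivatives described in the first answer.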
Operating Margin Calculator
Nikola Vlacic – An EE student eager to learn and develop skills. Particularly interested in physics, philosophy, mathematics, and computer science.
Contents: What is the operating margin? How to calculate operating margin? Operating profit margin formula. The limitations of the operating profit margin. Gross margin vs. operating margin. Operating margin vs. profit margin. Operating margin ratio.
The operating margin (operating profit margin) calculator enables you to assess your operating margin. Have you ever wondered how to calculate your firm's profit per product sold, the profitability of each sold product, or net income? With this calculator, you will be able to follow the whole process. Learning the process is quite easy and useful for any business owner. This tool will help you understand and calculate your business's operating margin.
What is the operating margin?
Operating margin measures the profit a company makes after paying variable production costs but before paying taxes or interest. In other words, the operating margin helps us gauge how effectively a company manufactures a particular product, before any taxes. The costs of producing a specific product include labor, energy, raw materials, and the delivery of such materials. For example, if a company had $1000 in revenue from a particular product, the cost of goods sold was $250, and other administrative expenses were $250, it would have an operating margin of 50 percent. If the company then got a better deal on raw materials, it would lower its net cost of production and thereby raise its operating margin.
How to calculate operating margin?
This operating margin calculator enables you to calculate a company's operating margin. Analysts use the operating margin to determine the profitability of a company. To calculate the operating margin, first calculate operating earnings. Operating earnings are the profits made after subtracting from revenue all expenses that are directly associated with the operation of the business; in short, operating earnings are a firm's earnings before interest and taxes (EBIT). We then divide operating earnings (EBIT) by revenue. First, calculate EBIT (earnings before interest and taxes), i.e. operating earnings: from total revenue, subtract the cost of goods sold (COGS) and other administrative costs.
\text{Operating earnings} = \text{Revenue} - (\text{COGS} + \text{Administrative costs})
The operating margin is then computed by dividing operating earnings (EBIT) by the total revenue the company made.
\text{Operating margin} = \frac{\text{Operating earnings}}{\text{Revenue}}
The limitations of the operating profit margin
The most significant limitation of the operating margin is the difficulty of comparing companies with different business models. For an adequate operating margin comparison, we would need two companies with a similar, or ideally the same, business model. If the comparison is not an apples-to-apples comparison, then the contrast between two companies' operating margins is pointless. When it comes to comparing firms, analysts generally avoid relying on operating margins alone; a better way to compare would be using EBITDA (earnings before interest, taxes, depreciation, and amortization). EBITDA is often used as a proxy to minimize the limitations of the operating profit margin because it adjusts for non-cash expenses (e.g., depreciation). Other limitations stem from the possibility of misunderstanding or misusing the information we get from the operating profit margin.
One of the most common misuses of such information is disregarding interest, taxes, depreciation, and amortization. In such cases, a company might disregard those charges and believe that its profit equals its operating margin; a business with, say, a 10 percent operating margin may not be sustainable once interest and taxes are subtracted.
Gross margin vs. operating margin
Gross profit margin (or just gross margin) is net sales minus COGS (cost of goods sold), expressed as a share of net sales. In most cases, COGS includes all the expenses directly related to the production of a product: labor costs, raw materials, and the delivery of such materials. Often these are called "costs of goods sold," "costs of products sold," or in some cases "cost of sales." The gross profit margin is calculated on a per-product basis and is most helpful in analyzing a product suite. The formula for gross profit margin:
\text{Gross profit margin} = \frac{\text{Net sales} - \text{COGS}}{\text{Net sales}}
Net sales are the revenues received from selling a certain product or service. For the operating profit margin (or just operating margin), by contrast, both administrative costs and COGS are subtracted from total revenue. Bankers use operating profit margins to determine the profitability of a firm. The formula for operating margin is the one given above:
\text{Operating margin} = \frac{\text{Revenue} - (\text{COGS} + \text{Administrative costs})}{\text{Revenue}}
Operating margin vs. profit margin
The most notable distinction between the operating margin and the profit margin is that the operating margin accounts for all production expenses (e.g., overhead, labor, raw materials), whereas the profit margin, as used here, includes only the costs directly involved in production (e.g., labor and raw materials, but not overhead). For example, if a company makes $1000 and the total cost of overhead, labor, and raw materials is $500, the operating margin is 50 percent. If the costs of labor and raw materials, without overhead, are $400, then the profit margin in this sense is 60 percent. Because of the errors created by disregarding overhead costs, it is more common to use the operating margin.
Operating margin ratio
The operating margin ratio is a profitability ratio that measures operating income as a percentage of total revenue. It shows how much revenue is left after covering all expenses related to the product; these leftover funds must cover interest and taxes. The formula for the operating margin ratio:
\text{Operating margin ratio} = \frac{\text{Operating income}}{\text{Net sales}}
The operating margin ratio is a useful indicator for investors of whether a company's operations support themselves. For instance, if the company needs income from multiple operational and non-operational sources to keep running, the operation is not sustainable. If a company has an operating margin ratio of 20 percent, it means that after paying the cost of goods sold (COGS) and other operating expenses, only 20 percent of revenue is left to pay interest and taxes. In general, for every dollar of product sold, after spending 80 cents on labor, raw materials, and overhead, only 20 cents are left for interest and taxes. For most businesses, especially small ones, a good operating margin ratio is 15 percent or higher.
What is a good operating margin? The higher the operating margin, the better, as it indicates the company's profitability. A good operating margin is 15 percent or higher.
Can operating margin be negative? Yes, operating margins can be negative. If, for example, administrative costs and COGS are higher than revenues, then the company is losing money on that product and has to find a new way to generate income.
How can a company improve its operating margin?
There are multiple ways to improve operating margins: for example, increasing prices, increasing sales, or negotiating a better deal with a supplier for raw materials. Saving even a small percentage of raw-material costs can make a significant difference.

What does operating margin measure? The operating margin is the profit a company makes on a dollar of sales after accounting for administrative costs and all the costs directly involved in production, but before interest and taxes.

What is the operating income margin? The operating income margin (operating margin) measures, in percentage terms, how much profit a company has left after paying the expenses that are directly related to the product. This profit is the total gain before any taxes or interest.
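The following is a minimal Python sketch of the calculation described above, assuming the $1000 revenue, $250 COGS, and $250 administrative-cost figures from the worked example; the function names are illustrative only and not part of any calculator's API.

```python
# A minimal sketch of the operating-margin arithmetic described in the article.
# The figures ($1000 revenue, $250 COGS, $250 administrative costs) come from
# the worked example; the function names are illustrative assumptions.

def operating_earnings(revenue, cogs, administrative_costs):
    """EBIT: revenue minus all expenses directly tied to operating the business."""
    return revenue - (cogs + administrative_costs)

def operating_margin(revenue, cogs, administrative_costs):
    """Operating earnings expressed as a fraction of revenue."""
    return operating_earnings(revenue, cogs, administrative_costs) / revenue

if __name__ == "__main__":
    revenue, cogs, admin = 1000.0, 250.0, 250.0
    print(f"Operating earnings (EBIT): ${operating_earnings(revenue, cogs, admin):.2f}")
    print(f"Operating margin: {operating_margin(revenue, cogs, admin):.0%}")
```

Running it reproduces the $500 of operating earnings and the 50 percent operating margin of the worked example.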
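A second sketch, under the same caveats, contrasts the other ratios discussed above (gross margin, the article's narrower "profit margin" with overhead excluded, and the operating margin ratio) using the $1000 / $500 / $400 figures from the comparison section; all names beyond those figures are assumptions made for illustration.

```python
# Contrasts the ratios discussed in the article: gross margin, the article's
# "profit margin" (overhead excluded), and the operating margin ratio.
# The $1000 / $500 / $400 figures come from the text; names are illustrative.

def gross_margin(net_sales, cogs):
    """Gross profit margin: (net sales - COGS) / net sales."""
    return (net_sales - cogs) / net_sales

def operating_margin_ratio(operating_income, net_sales):
    """Operating income as a fraction of net sales."""
    return operating_income / net_sales

if __name__ == "__main__":
    revenue = 1000.0
    production_costs = 500.0   # overhead + labour + raw materials
    direct_costs = 400.0       # labour + raw materials, overhead left out

    # Operating margin counts every production expense, including overhead.
    print(f"Operating margin: {(revenue - production_costs) / revenue:.0%}")          # 50%
    # The article's "profit margin" drops the overhead from the cost side.
    print(f"Profit margin (no overhead): {(revenue - direct_costs) / revenue:.0%}")   # 60%
    # If the $400 of direct costs is taken as COGS, the gross margin coincides
    # with that profit margin in this toy example.
    print(f"Gross margin: {gross_margin(revenue, direct_costs):.0%}")                 # 60%

    # A 20 per cent operating margin ratio leaves 20 cents of every sales
    # dollar to cover interest and taxes, as in the article's example.
    ratio = operating_margin_ratio(operating_income=200.0, net_sales=revenue)
    print(f"Operating margin ratio: {ratio:.0%}")
```

Keeping each ratio as its own small function mirrors how the article introduces them one at a time and makes explicit what is, and is not, subtracted in each case.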
A Volume-limited Sample of Cataclysmic Variables from Gaia DR2: Space Density and Population Properties

Authors: Pala, A. F.; Gänsicke, B. T.; Breedt, E.; Knigge, C.; Hermes, J. J.; Gentile Fusillo, N. P.; Hollands, M. A.; Naylor, T.; Pelisoli, I.; Schreiber, M. R.; Toonen, S.; Aungwerojwit, A.; Cukanovaite, E.; Dennihy, E.; Manser, C. J.; Pretorius, M. L.; Scaringi, S.; Toloza, O.

We present the first volume-limited sample of cataclysmic variables (CVs), selected using the accurate parallaxes provided by the second data release (DR2) of the European Space Agency Gaia space mission. The sample is composed of 42 CVs within 150 pc, including two new systems discovered using the Gaia data, and is $(77 \pm 10)$ per cent complete. We use this sample to study the intrinsic properties of the Galactic CV population. In particular, the CV space density we derive, $\rho = (4.8^{+0.6}_{-0.8}) \times 10^{-6}\,\mathrm{pc^{-3}}$, is lower than that predicted by most binary population synthesis studies. We also find a low fraction of period bounce CVs, seven per cent, and an average white dwarf mass of $\langle M_\mathrm{WD} \rangle = (0.83 \pm 0.17)\,\mathrm{M}_\odot$. Both findings confirm previous results, ruling out the presence of observational biases affecting these measurements, as has been suggested in the past. The observed fraction of period bounce CVs falls well below theoretical predictions, by at least a factor of five, and remains one of the open problems in the current understanding of CV evolution. Conversely, the average white dwarf mass supports the presence of additional mechanisms of angular momentum loss that have been accounted for in the latest evolutionary models. The fraction of magnetic CVs in the 150 pc sample is remarkably high at 36 per cent. This is in striking contrast with the absence of magnetic white dwarfs in the detached population of CV progenitors, and underlines that the evolution of magnetic systems has to be included in the next generation of population models.

Journal: Monthly Notices of the Royal Astronomical Society. DOI: 10.1093/mnras/staa764. Bibcode: 2020MNRAS.494.3799P.
Keywords: stars: evolution; Hertzsprung-Russell and colour-magnitude diagrams; novae; cataclysmic variables; stars: statistics.
Comments: Published in MNRAS. Definitive version. A typo in Table 6 (space density of novalikes) has been corrected.
2015 CAP Congress / Congrès de l'ACP 2015

925. Welcome & Introduction Garth Huber (University of Regina) CINP Town Hall (Sat) / Consultation publique du ICPN 918. ALPHA Antihydrogen Symmetry Test (20+5) 919. MOLLER Parity Violation Experiment at JLab (20+5) 920. Cold Neutrons at SNS (10+2) 921. UltraCold Neutrons at TRIUMF (20+5) 922. Compton Scattering Measurements at MAMI (15+3) Prof. David Hornidge (Mount Allison University) 923. Nucleon electromagnetic form factor measurements at JLab (15+3) Adam Sarty (Saint Mary's University) 927. The GlueX and Exclusive Meson Production Programs at JLab (20+5) 928. Extreme QCD: Characterizing the Quark-Gluon Plasma (15+3) Prof. Charles Gale (McGill University) 929. Discussion on Hadron Structure/QCD (21) 926. Discussion on HQP Issues (40) Juliette Mammei 951. Introduction-IPP Director's Report Michael Roney (University of Victoria) IPP Town Hall - AGM / Consultation publique et AGA de l'IPP 930. Introductory Comments CINP Town Hall (Sun) / Consultation publique du ICPN 932. Nuclear Structure aspects of the Gamma-Ray program (20+5) 952. Particle Astrophysics CFREF Prof. Tony Noble (Queen's University) 953. DEAP Mark Boulay (Q) 933. Nuclear Astrophysics aspects of the Gamma-Ray program (20+5) Iris Dillmann 954. SNO+ Christine Kraus 934. Fundamental Symmetries aspects of the Gamma-Ray program (20+5) Carl Svensson 955. SuperCDMS Prof. Gilles Gerbier (Queen's University) 936. Nuclear Astrophysics with DRAGON/TUDA/EMMA (15+3) Chris Ruiz (TRIUMF) 956. PICO Tony Noble (Queen's University) 935. TITAN Ion Trap Program at ISAC (20+5) 957. EXO Prof. David Sinclair (Carleton University) 958. NEWS Experiment Prof. Gilles Gerbier (Queen's University) 937. Canadian Penning Trap & Related Ion-Trap Expts @ ANL (15+3) 959. IceCube Prof. Darren Grant (University of Alberta) 938. Reaction spectroscopy of rare isotopes with low and high-energy beams (15+3) Rituparna Kanungo (TRIUMF) 960. Theory review-Higgs, EW, BSM Dr David Morrissey (TRIUMF), Prof. Heather Logan (Carleton University) 939. Electroweak measurements of nuclear neutron densities via PREX and CREX at JLab (15+3) Juliette Mammei (University of Manitoba) 940. Ab initio nuclear theory for structure and reactions (20+5) 961. Moller Experiment at JLAB Michael Gericke (University of Manitoba) 941. From nuclear forces to structure and astrophysics (10+2) Alexandros Gezerlis 962. ALPHA Makoto Fujiwara (TRIUMF (CA)) 965. UCN Prof. Jeffery Martin (University of Winnipeg) 942. Discussion on Nuclear Astrophysics (19) 964. Belle II Dr Christopher Hearty (IPP / UBC) 943. Discussion on Nuclear Structure (20) Adam Garnsworthy 963. HEPNET/Computing in HEP Randy Sobie (University of Victoria (CA)) 944. TRIUMF's Neutral Atom Trap for Beta Decay (15+3) 966. MRS - Alberta/Toronto James Pinfold (University of Alberta (CA)) 967. MRS - Carleton/Victoria/Queens Prof. Kevin Graham (Carleton University) 945. Francium Trap Project (15+3) 968. NA62 Dr Toshio Numao 946. Neutrinoless Double Beta Decay (25+5) 969. Halo+upgrade at LNGS Prof. Clarence Virtue (Laurentian University) 970. VERITAS Prof.
David Hanna (McGill University) 947. Electroweak Physics (15+3) 971. g-2 at JPARC Dr Glen Marshall (TRIUMF) 972. Theory Review - QCD Prof. Randy Lewis (York University) 948. Discussion on Fundamental Symmetries (20) 973. ATLAS Prof. Alison Lister (UBC) 949. Science Opportunities of ARIEL (20+5) 974. Technical support for experiment development and construction Dr Fabrice Retiere (TRIUMF) 950. Resources for Detector Development in the Canadian Subatomic Physics Community (10+2) 931. General Discussion 841. Report from NSERC SAP ES John Martin (York University (CA)) Joint CINP-IPP Meeting / Réunion conjointe de l'ICPN et de l'IPP (DPN-PPD) 845. Canada Foundation for Innovation and Subatomic Physics Olivier Gagnon (Fondation canadienne pour l'innovation) 842. Report from TRIUMF Director Jonathan Bagger (Johns Hopkins University) 843. Report from SNOLAB Director Nigel Smith (SNOLab) Other Sessions or Meetings / Autres séances ou réunions 844. Report from Subatomic Physics Long Range Plan Committee Chair Dean Karlen (University of Victoria (CA)) 975. Status and future plan of KEK and J-PARC Yasuhiro Okada (KEK) 976. T2K+HyperK Hirohisa A. Tanaka (University of British Columbia) 977. ILC Alain Bellerive (Carleton University (CA)) 978. Long Range Plan: Next Steps for IPP 883. Exoplanets and the Search for Habitable Worlds Prof. Sara Seager (Massachusetts Institute of Technology) Plenary Speaker / Conférencier plénier M-PLEN Plenary Session - Start of Conference - Sara Seager, MIT / Session plénière - Ouverture du Congrès - Sara Seager, MIT Thousands of exoplanets are known to orbit nearby stars with the statistical inference that every star in our Milky Way Galaxy should have at least one planet. Beyond their discovery, a new era of "exoplanet characterization" is underway with an astonishing diversity of exoplanets driving the fields of planet formation and evolution, interior structure, atmospheric science, and orbital... 582. Ab initio calculations of nuclear structure and reactions Petr Navratil (TRIUMF) Nuclear Physics / Physique nucléaire (DNP-DPN) Invited Speaker / Conférencier invité M1-7 Advances in Nuclear Physics and Particle Physics Theory (DNP-PPD-DTP) / Progrès en physique nucléaire et en physique des particules théoriques (DPN-PPD-DPT) The description of nuclei starting from the constituent nucleons and the realistic interactions among them has been a long-standing goal in nuclear physics. In recent years, a significant progress has been made in developing ab initio many-body approaches capable of describing both bound and scattering states in light and medium mass nuclei based on input from QCD employing Hamiltonians... 620. Evaluation of SiPM Arrays and Use for Radioactivity Detection and Monitoring Andrei Semenov M1-5 Nuclear Techniques in Medicine and Safety (DNP-DIAP) / Techniques nucléaires en médecine et en sécurité (DPN-DPIA) Silicon photomultipliers (SiPMs) are novel photo sensorss that are needed for many applications in a broad range of fields. The advantages of such detectors are that they feature low bias ($<$100V) operation, high gain (10$^5$ to 10$^6$), insensitivity to magnetic fields, excellent photon detection efficiency (PDE), and the ability to operate in field conditions over a range of... 458. Examples of exact solutions of charged particle motion in magnetic fields and their applications. 
Konstantin Kabin (RMC) Atmospheric and Space Physics / Physique atmosphérique et de l'espace (DASP-DPAE) M1-3 Theory, modelling and space weather I (DASP) / Théorie, modélisation et climat spatial I (DPAE) There are very few exact solutions for the motion of a charged particle in specified magnetic field. These solutions have considerable theoretical as well as pedagogical value. In this talk I will briefly describe several known analytical solutions, such as motion in the equatorial plane of a dipole and in a constant gradient field. Particular attention will be given to a relatively unknown... 792. Following an Auger Decay by Attosecond Pump-Probe Measurements Prof. Julien Beaudoin Bertrand (Université Laval) Division of Atomic, Molecular and Optical Physics, Canada / Division de la physique atomique, moléculaire et photonique, Canada (DAMOPC-DPAMPC) M1-9 Ultrafast and Time-resolved Processes (DAMOPC) / (DPAMPC) Attosecond Physics is an emerging field at the international level which now provides tabletop attosecond (as=10^-18 s.) light sources extending from the extreme ultraviolet (XUV, 10-100 eV) to X-rays (keV) [1]. This feat opens new avenues in atomic and molecular spectroscopies [2], especially, to perform time-resolved experiments of ultrafast electron dynamics on the unexplored attosecond... 516. Nematic and non-Fermi liquid phases of systems with quadratic band crossing Prof. Igor Herbut (Simon Fraser University) Condensed Matter and Materials Physics / Physique de la matière condensée et matériaux (DCMMP-DPMCM) M1-1 Topological States of Matter (DCMMP) / États topologiques de la matière (DPMCM) I will review the recent work on the phases and quantum phase transitions in the electronic systems that feature the parabolic band touching at the Fermi level, the celebrated and well-studied example of which is the bilayer graphene. In particular, it will be argued that such three-dimensional systems are in principle unstable towards the spontaneous formation of the (topological) Mott... 503. Neutrino in the Standard Model and beyond Prof. Samoil Bilenky (JINR (Dubna)) Particle Physics / Physique des particules (PPD) M1-6 Neutrinoless Double-beta Decay I (PPD-DNP) / Double désintégration beta sans neutrino I (PPD-DPN) The Standard Model teaches us that in the framework of such general principles as local gauge symmetry, unication of weak and electromag- netic interactions and Brout-Englert-Higgs spontaneous breaking of the elec- troweak symmetry nature chooses the simplest possibilities. Two-component left-handed massless neutrino elds play crucial role in the determination of the charged current... 912. NSERC's Partnership Program: Panel Discussion and Q&A / Programme de partenariats du CRSNG : Table ronde et Q&R Oral (Non-Student) / orale (non-étudiant) M1-10 NSERC's Partnership Program: Panel Discussion and Q&A / Programmes de partenariats du CRSNG : Table ronde et Q&R This session is a panel discussion and Q&A on researcher-industry collaborations and NSERC funding opportunities. Panelists include: Irene Mikawoz (NSERC Prairies Regional Office), Donna Strickland (U. of Waterloo), Wayne Hocking (Western U.), Kristin Poduska (Memorial U.), Chijin Xiao (U. of Saskatchewan) and Andranik Sarkissian (Plasmionique Inc). -- Cette séance consistera en un débat... 573. 
Principles and methods enabling atom scale electronic circuitry Robert Wolkow (University of Alberta) M1-2 Organic and Molecular Electronics (DCMMP-DMBP-DSS) / Électronique organique et moléculaire (DPMCM-DPMB-DSS) Quantum dots are small entities, typically consisting of just a few thousands atoms, that in some ways act like a single atom. The constituent atoms in a dot coalesce their electronic properties to exhibit fairly simple and potentially very useful properties. It turns out that collectives of dots exhibit joint electronic properties of yet more interest. Unfortunately, though extremely... 642. Probing Physics with Observations of Neutron Stars and White Dwarfs Jeremy Heyl (UBC) Theoretical Physics / Physique théorique (DTP-DPT) M1-4 Theoretical Astrophysics (DTP) / Astrophysique théorique (DPT) White dwarfs and neutron stars are two of the densest objects in the Universe. Discovered 105 and 45 years ago, these objects are two of the best astrophysical laboratories of fundamental physics. The simple existence of white dwarfs is a stellar-size manifestation of quantum physics. I will describe how we use these objects today to study quantum-chromodynamics, quantum-electrodynamics,... 648. Summary of ATLAS Standard Model measurements (including top quark) Alison Lister (University of British Columbia (CA)) M1-8 Energy frontier: Standard Model and Higgs Boson I (PPD) / Frontière d'énergie: modèle standard et boson de Higgs I (PPD) While the LHC is ramping up for it's second run at yet higher centre-of-mass energy, the experimental Collaborations are not only preparing for this run, but also ensuring that the maximum amount of information is extracted from the data taken in 2011 and 2012 at a centre-of-mass energy of 7 and 8 TeV, respectively. Many precision standard model measurements have been carried out, spanning... 608. Collective modes and interacting Majorana fermions in topological superfluids Joseph Maciejko (University of Alberta) Topological phases of matter are characterized by the absence of low-energy bulk excitations and the presence of robust gapless surface states. A prime example is the three-dimensional (3D) topological band insulator, which exhibits a bulk insulating gap but supports gapless 2D Dirac fermions on its surface. This physics is ultimately a consequence of spin-orbit coupling, a single-particle... 799. Energetic Electron Precipitation Model Alexei Kouznetsov (University of Calgary) Energetic electron precipitations cause atmospheric ionization - a complicated process which depends on many parameters. We present our energetic particles precipitation model which consists of three main parts: Energetic electron sources; Coupled electron/photon transport in the earth atmosphere; RIOMETERs and Very Low Frequency (VLF) receivers response to energetic electron precipitations.... 653. Measurement of the Higgs-boson properties with the ATLAS detector at the LHC Manuela Venturi (University of Victoria (CA)) A detailed review on the properties of the Higgs boson, as measured with the ATLAS experiment at the LHC, will be given. The results shown here use approximately 25 fb-1 of pp collision data, collected at 7 TeV and 8 TeV in 2011 and 2012. The measurements of the mass, couplings properties and main quantum numbers will be presented. Prospects for the upcoming Run2, starting in May 2015, will be... 524. 
Neutron Generator Facility at SFU - GEANT4 Dose Prediction and Verification Mr Jonathan Williams (Simon Fraser University) Oral (Student, In Competition) / Orale (Étudiant(e), inscrit à la compétition) A neutron generator facility under development at Simon Fraser University (SFU) utilizes a commercial deuterium- tritium neutron generator (Thermo Scientific P 385) to produce 14.2 MeV neutrons at a nominal rate of $3\times10^8$ neutrons/s. The facility will be used to produce radioisotopes to support a research program including nuclear structure studies and neutron activation... 594. New horizons for MCAS: heavier masses and alpha-particle scattering. Dr Juris P. Svenne (University of Manitoba, Dept. of Physics and Astronomy) The Multi-Channel Algebraic-Scattering (MCAS) method, developed in 2003 for the analysis of low-energy nuclear spectra and of resonant scattering, continues to be effectively used for nuclear-structure studies. The MCAS approach allows the construction of the nucleon-core model Hamiltonian which can be defined in detail (coupling to the collective modes, rotational or vibrational, diverse... 472. Observations and Theory of Supernova Explosions and their Remnants Denis Leahy A supernova explosion occurs to end the life of a massive star (with mass of more than 8-10 times that of the sun). These explosions create and eject the elements that make up everything around us, including the earth. The life of a massive star will be outlined, and its sudden death in a supernova event. Following the explosion, the ejected material and energy interacts with the surrounding... 803. Polarization induced energy level shifts at organic semiconductor interfaces probed on the molecular scale by scanning tunnelling microscopy Sarah Burke (University of British Columbia) The inter- and intra- molecular energy transfer that underlies transport, charge separation for photovoltaics, and catalysis are influenced by both the spatial distribution of electronic states and their energy level alignment at interfaces. In organic materials, the relevant length scales are often on the order of a single molecular unit. Scanning tunneling microscopy (STM) and spectroscopy... 647. Status of the SNO+ Experiment Prof. Aksel Hallin (University of Alberta) The SNO+ experiment, at the SNOLAB underground laboratory, consists of 780 Mg of linear alkylbenzene scintillator contained in the 12 m diameter SNO acrylic sphere and and observed by the SNO photomultiplier tubes. SNO+ will be loaded with tellurium, at approximately the 0.3% level to enable a sensitive search for neutrinoless double beta decay. This talk will detail the experiment, the... 806. Ultrafast imaging of nonlinear terahertz pulse transmission in semiconductors Haille Sharum (University of Alberta) Oral (Student, Not in Competition) / Orale (Étudiant(e), pas dans la compétition) Terahertz pulse spectroscopy has been widely used for probing the optical properties and ultrafast carrier dynamics of materials in the far-infrared region of the spectrum. Recently, sources of intense terahertz (THz) pulses with peak fields higher than 100 kV/cm have allowed researchers to explore ultrafast nonlinear THz dynamics in materials, such as THz-pulse-induced intervalley scattering... 549. 
Energy transfer dynamics in blue emitting functionalized silicon nanocrystals Glenda De los Reyes (Physics Department, University of Alberta) We use time-resolved photoluminescence (TRPL) spectroscopy to study the effects of surface passivation and nanocrystal (NC) size on the ultrafast PL dynamics of colloidal SiNCs. The SiNCs were passivated by dodecylamine and ammonia, and exhibit blue emission centered at ~473 nm and ~495 nm, respectively. For both functionalizations, increasing the size of the NCs from ~3 nm to ~6 nm did not... 711. Properties of the lunar wake inferred from hybrid-kinetic simulations and an analytic model Hossna Gharaee (university of Alberta, Departement of Physics) There is renewed interest in the Moon as a potential base for scientific experiments and space exploration. Earth's nearest neighbour is exposed directly to the solar wind and solar radiation, both of which present hazards to successful operations on the lunar surface. In this paper we present lunar wake simulation and analytic results and discuss them in the context of observations from the... 862. Rapid Elemental Analysis of Human Finger Nails Using Laser-Induced Breakdown Spectroscopy Ms Vlora Riberdy (University of Windsor) Zinc is a crucial element needed for many processes in the human body. It is essential for enzymatic activity and many cellular processes, such as cell division. A zinc deficiency can lead to problems with the immune system, birth defects, and blindness. This problem is especially important to address in developing countries where nutrition is limited. Supplements can be taken to increase the... 804. The coefficient of restitution of inflatable balls Mr Gaëtan Landry (Dalhousie University) Industrial and Applied Physics / Physique industrielle et appliquée (DIAP-DPIA) The bouncing of sports balls is often characterized in terms of the coefficient of restitution, which represents the ratio of the after-impact velocity to the before-impact velocity. While the behaviour of the coefficient of restitution as a function of the internal pressure of the ball has been studied, no theoretical justification has been given for any parametric curve fitted to the data.... 558. Dilute limit of an interacting spin-orbit coupled two-dimensional electron gas Mr Joel Hutchinson (University of Alberta) The combination of many-body interactions and Rashba spin-orbit coupling in a two-dimensional fermion system gives rise to an exotic array of phases in the ground state. In previous analyses, it has been found that in the low fermion density limit, these are nematic, ferromagnetic nematic, and spin-density wave phases. At ultra-low densities, the ground state favours the ferromagnetic nematic... 541. Explaining the Newly Discovered Third Radiation Belt Dr Louis Ozeke (University of Alberta) Accurate specification of the global distribution of ultra-low frequency (ULF) wave power in space is critical for determining the dynamics and acceleration of outer radiation belt electrons. Current radiation belt models use ULF wave radial diffusion coefficients which are analytic functions of Kp based on ULF wave statistics. In this presentation we show that these statistical based analytic... 693. Extraction of optical parameters in SNO+ with an in-situ optical calibration system Dr Kalpana Singh Singh (Department of Physics, University of Alberta) SNO+ is a multi-purpose neutrino physics experiment investigating neutrinoless double beta decay and neutrino oscillations. 
The SNO+ detector consists of a 12m diameter acrylic vessel (AV), surrounded by ultra-pure water and approximately 9500 photomultiplier tubes (PMTs) which are positioned on a stainless steel PMT support structure (PSUP). The acrylic vessel will be filled with liquid... 683. Measurement of the yy -> WW cross-section and searches for anomalous quartic gauge couplings WWAA at the ATLAS experiment Chav Chhiv Chau (University of Toronto (CA)) Searches for the anomalous quartic gauge coupling of two photons to two W bosons (WWAA) were made at LEP and Tevatron. More recently many searches have been performed by the CMS and ATLAS collaborations at the Large Hadron Collider (LHC). Among the processes sensitive to these couplings are the Wy and yy -> WW production. In hadron colliders, yy -> WW events where the W bosons decay into... 757. Molecular SuperRotors: Control and properties of molecules in extreme rotational states Valery Milner (UBC) Extremely fast rotating molecules, known as "super-rotors", may exhibit a number of unique properties, from rotation-induced nano-scale magnetism to formation of macroscopic gas vortices. Orchestrating molecular spinning in a broad range of angular frequencies is appealing from the perspectives of controlling molecular dynamics. Yet in sharp contrast to an optical excitation of molecular... 473. No "End of Greatness": Superlarge Structures and the Dawn of Brane Astronomy Dr Rainer Dick (University of Saskatchewan) Several groups have recently reported observation of large scale structures which exceed the size limits expected from standard structure formation in a 13.8 billion years old LambdaCDM universe. On the other hand, the concept of crosstalk between overlapping 3-branes carrying gauge theories was recently introduced in arXiv:1502.03754[hep-th]. Crosstalk impacts the redshift of signals from... 563. On the Road to Low Power Circuitry: Analysis of Si Dangling Bond Charging Dynamics Roshan Achal (University of Alberta) Undesired circuit heating results from the billions of electrons flowing through our devices every second. Heating wastes energy (leading to shorter battery life), and also puts a limit on computational speeds. The solution to excess heat generation is of huge commercial interest and has led to a large push towards nanoscale electronics which are smaller and more energy efficient. Proposed... 897. The 2018 Shutdown of the NRU Reactor Dr John Root (Canadian Neutron Beam Centre) The federal government recently announced its decision to shut down the NRU reactor in 2018. The National Research Universal (NRU) reactor commenced operation in 1957, to provide neutrons for several missions simultaneously, including the production of neutron beams to support fundamental experimental research on solids and liquids, advancing knowledge of condensed matter physics. Today, the... 731. Andreev and Josephson transport in InAs nanowire-based quantum dots Kaveh Gharavi (University of Waterloo) Superconducting proximity effects are of fundamental interest and underlie recent proposals for experimental realization of topological states. Here we study superconductor-quantum dot- superconductor (S-QD-S) junctions formed by contacting short- channel InAs nanowire transistors with Nb leads. When the carrier density is low, one or more quantum dots form in the nanowire... 794. 
Double-beta decay half-life of 96Zr – nuclear physics meets geochemistry Adam Mayer (University of Calgary) Double-beta (\beta\beta) decay measurements are a class of nuclear studies with the objective of detecting the neutrinoless (0\nu) decay variants. Detection of a 0\nu\beta\beta decay would prove the neutrino to be massive and to be its own anti-particle (i.e., a Majorana particle). A key parameter in the detection of the 0\nu\beta\beta decay is the energy, or Q-value, of the decay. ^{96}Zr... 685. Fast damping of Alfven waves: Observations and modeling Chengrui Wang (University of Alberta) Results of analysis of Cluster spacecraft data will be presented that show that intense ultra-low frequency (ULF) waves in the inner magnetosphere can be excited by the impact of interplanetary shocks and solar wind dynamic pressure variations. The observations reveal that such waves can be damped away rapidly in a few tens of minutes. We examine mechanisms of ULF wave damping for two... 784. Set Point Effects in Fourier Transform Scanning Tunneling Spectroscopy Mr Andrew Macdonald (University of British Columbia) Fourier Transform Scanning Tunneling Spectroscopy (FT-STS) has become an important experimental tool for the study of electronic structure. By combining the local real space picture of the electronic density of states provided by scanning tunneling microscopy with the energy and momentum resolution of FT-STS one can extract information about the band structure and dispersion. This has been... 694. Asymmetric Wavefunctions from Tiny Perturbations Tyler Dauphinee Physics Education / Enseignement de la physique (DPE-DEP) M2-8 Teaching Physics to a Wider Audience (DPE) / Enseigner la physique à un auditoire plus vaste (DEP) We present an undergraduate-accessible analysis of a single quantum particle within a simple double well potential through matrix mechanics techniques. First exploring the behavior in a symmetric double well (and its peculiar wavefunctions), we then examine the effect that varying well asymmetry has on the probability density. We do this by embedding the potential within a larger infinite... 857. CLS 2.0: The Next 10 Years Dr Dean Chapman (Canadian Light Source Inc.) Instrumentation and Measurement Physics / Physique des instruments et mesures (DIMP-DPIM) M2-9 Advanced Instrumentation at Major Science Facilities: Accelerators (DIMP) / Instrumentation avancée dans des installations scientifiques majeures: accélérateurs (DPIM) The Canadian Light Source (CLS) is Canada's premier source of intense light for research, spanning from the far infrared to hard x-rays. The facility has been in operations for 10 years and in that time has hosted over 2,000 researchers from academic institutions, government, and industry, from 10 provinces and 2 territories, and provided a scientific service critical in over 1,000 scientific... 768. Development of Comprehensive Model of Earth Ionosphere and its Application for Studies of MI-coupling Dr Dmytro Sydorenko (University of Alberta) M2-3 Theory, modelling and space weather II (DASP) / Théorie, modélisation et climat spatial II (DPAE) A comprehensive model of the Earth ionosphere has been developed [Sydorenko and Rankin, 2012 and 2013]. The model is two-dimensional, it resolves the meridional direction and the direction along the geomagnetic field. The dipole coordinates are used, the azimuthal symmetry is assumed. The model considers torsional Alfven waves and includes the meridional convection electric field. The electric... 498. 
Extrinsic Spin Hall Effect in Graphene Tatiana Rappoport (Federal University of Rio de Janeiro) M2-1 Computational methods in condensed matter physics (DCMMP) / Méthodes numériques en physique de la matière condensée (DPMCM) The intrinsic spin-orbit coupling in graphene is extremely weak, making it a promising spin conductor for spintronic devices. However, for many applications it is desirable to also be able to generate spin currents.Theoretical predictions and recent experimental results suggest one can engineer the spin Hall effect in graphene by greatly enhancing the spin-orbit coupling in the vicinity of an... 485. Field-tuned quantum criticality of heavy fermion systems Prof. Eundeok Mun (Simon Fraser University) M2-2 Material growth and processing (DCMMP) / Croissance et traitement des matériaux (DPMCM) Intensive study of strongly correlated electronic systems has revealed the existence of quantum phase transitions from ordered states to disordered states driven by non-thermal control parameters such as chemical doping, pressure, and magnetic field. In this presentation I will discuss a recent progress of magnetic field-tuned quantum criticality with particular emphasis on the Fermi liquid... 555. Medical linear accelerator mounted mini-beam collimator: transferability study Mr William Davis (Department of Physics and Engineering Physics, University of Saskatchewan) Medical and Biological Physics / Physique médicale et biologique (DMBP-DPMB) M2-6 Radiation Therapy (DMBP-DNP) / Thérapie par rayonnement (DPMB-DPN) Background: In place of the uniform dose distributions used in conventional radiotherapy, spatially-fractionated radiotherapy techniques employ a planar array of parallel high dose 'peaks' and low dose 'valleys' across the treatment area. A group at the Saskatchewan Cancer Agency have developed a mini-beam collimator for use with a medical linear accelerator operated at a nominal energy of... 832. Probing the Nature of Inflation M2-4 Cosmic Frontier: Cosmology I (DTP-PPD-DIMP) / Frontière cosmique: cosmologie I (DPT-PPD-DPIM) The idea that the early universe included an era of accelerated expansion (Inflation) was proposed to explain very qualitative features of the first cosmological observations. Since then, our observations have improved dramatically and have lead to high precision agreement with the predictions of the first models of inflation, slow-roll inflation. At the same time, there has been significant... 777. SPECTROSCOPIC LINE-SHAPE STUDIES FOR ENVIRONMENTAL AND METROLOGIC APPLICATIONS Li-Hong Xu (University of New Brunswick), Ronald Lees (University of New Brunswick) M2-10 Atomic and Molecular Spectroscopy: microwave to X-ray (DAMOPC) / Spectroscopie atomique et moléculaire: des micro-ondes aux rayons X (DPAMPC) Our research group has investigated the spectra of several gases of environmental importance using our 3-channel laser spectrometer or the experimental facility at the far-infrared beamline at the Canadian Light Source. Our results have been used by others through our contributions to the HITRAN and GEISA databases used by the atmospheric community and we have made our own contributions to the... 774. 
Status of Dark Matter Theories Yanou Cui (Perimeter Institute) M2-7 Cosmic frontier: Dark matter I (PPD-DTP) / Frontière cosmique: matière sombre I (PPD-DPT) The existence of dark matter is a prominent puzzle in model physics, and it strongly motivates new particle physics beyond the standard model.I will review theoretical candidates for dark matter as proposed in the literature, and their status in light of recent experimental searches. I will also discuss new possibilities of dark matter theories and related research avenues. 835. The turbulent hydrodynamics and nuclear astrophysics of anomalous stars from the early universe Falk Herwig (University of Victoria) M2-5 Nuclear Astrophysics (DNP) / Astrophysique nucléaire (DPN) The anomalous abundances that can be found in the most metal-poor stars reflect the evidently large diversity of nuclear production sites in stars and stellar explosions, as well as the cosmological conditions for the formation and evolution of the first generations of stars. Significant progress in our predictive understanding of nuclear production in the early universe comes now within reach... 455. An online resource for teaching about energy Prof. Jason Donev (University of Calgary) Energy issues are important to Canada, and a logical topic for Canadians to teach. The Energy Education group at the University of Calgary has built a free on-line resource suitable for teaching an 'energy for everyone' course from a physics department. This resource includes interactive data visualizations and real world simulations to help students understand the role of energy in modern society 827. Cancer cell targeting gold nanoparticles for therapeutics Charmainne Cruje (Ryerson University) Polyethylene glycol (PEG) has promoted the prospective cancer treatment applications of gold nanoparticles (GNPs). *In vivo* stealth of GNPs coated with PEG (PEG-GNPs) takes advantage of the enhanced permeability and retention effect in tumor environments, making them suitable for targeted treatment. Because PEG minimizes gold surface exposure, PEG-GNP interaction with ligands that mediate... 586. Acquaman: Scientific Software as the Beamline Interface David Chevrier (Canadian Light Source) The Acquaman project (Acquisition and Data Management) was started in early 2010 at the Canadian Light Source. Over the past four years, the project has grown to support five beamlines by providing beamline control, data visualization, workflow, data organization, and analysis tools. Taking advantage of modular design and common components across beamlines, the Acquaman team has demonstrated... 548. CLS Synchrotron FIR Spectroscopy of High Torsional Levels of CD3OH: The Tau of Methanol Dr Ronald Lees (Centre for Laser, Atomic and Molecular Sciences, Department of Physics, University of NB) Structure from high torsional levels of the CD$_3$OH isotopologue of methanol has been analyzed in Fourier transform spectra recorded at the Far-Infrared beamline of the Canadian Light Source synchrotron in Saskatoon. Energy term values for $A$ and $E$ torsional species of the third excited torsional state, v$_t$ = 3, are now almost complete up to rotational levels $K$ = 15, and thirteen... 520. Determining Power Spectra of High Energy Cosmics Sheldon Campbell (The Ohio State University) The angular power spectrum is a powerful observable for characterizing angular distributions, popularized by measurements of the cosmic microwave background (CMB). 
The power spectra of high energy cosmics ($\gamma$-rays, protons, neutrinos, etc.) contains information about their sources. Since these cosmics are observed on an event-by-event basis, the nature of the power spectrum measurement... 459. Development and Imaging of the World's first Whole-Body Linac-MRI Hybrid System Prof. B. Gino Fallone (University of Alberta) **Purpose:** We designed and first whole-body clinical linac-MRI hybrid (linac-MR) system to provide real-time MR guided radiotherapy with current imaging and treatment. Installation began in our clinic in 2013, and the world-first images from a linac-MR on a human volunteer were obtained in July 2014. **Methods:** The linac-MR consists of an isocentrically mounted 6 MV linac that... 847. Essential Psychology in Physics - MBTI and You Dr Jo-Anne Brown (University of Calgary) According to the wikipedia entry, psychology is an academic and applied discipline that involves the scientific study of mental functions and behaviours. Since learning involves mental functions, it only makes sense that psychology has a role in the classroom - including a post-secondary physics class. The Myers-Briggs Type Indicator (MBTI) is one model that provides a framework for... 606. Ge:Mn Dilute Magnetic Semiconductor Laila Obied (Brock University) This work aims to develop Ge:Mn dilute magnetic semiconductor and study the fundamental origin of ferromagnetism in this system. Using ion implantation at $77$ K, a single crystal Ge wafer was doped with magnetic Mn ions. The implantation was done at ion energy of $4.76$ MeV with a fluence of 2 x 10$^{16}$ ion/cm$^2$. X-ray diffraction (XRD) of the as-implanted sample showed that the implanted... 815. Klein Tunnelling in Graphene Mr Kameron Palmer (University of Alberta) In 1929 Oskar Klein solved the Dirac equation for electrons scattering off of a barrier. He found that the transmission probability increased with potential height unlike the non-relativistic case where it decreases exponentially. This phenomenon can also been in a graphene lattice where the energy bands form a structure known as a Dirac cone around the points where they touch. In this project... 454. Quark-Novae : Implications to High-Energy and Nuclear Astrophysics Prof. Rachid Ouyed (University of Calgary) After a brief account of the physics of the Quark-Nova (explosive transition of a neutron star to a quark star), I will discuss its implications and applications to High Energy and Nuclear Astrophysics. The talk will focus on Quark-Novae in the context of Super-Luminous Supernovae and in the context of the origin of heavy elements (r-process nucleosynthesis). The Quark-Nova has the... 734. Solar wind modelling for operational forecasting Ljubomir Nikolic (Natural Resources Canada) Dark regions seen in extreme ultraviolet and X-ray images of the solar corona, called coronal holes (COHO), are known to be sources of fast solar wind streams. These streams often impact the Earth's magnetosphere and produce geomagnetic storms to which Canada is susceptible. COHO are associated with open coronal magnetic field lines along which fast solar wind streams emanate from the Sun.... 725. The DEAP-3600 Dark Matter Experiment -- Updates and First Commissioning Data Results Dr Bei Cai (Queen's University) The DEAP-3600 experiment uses 3.6 tons of liquid argon for a sensitive dark matter search, with a target sensitivity to the spin-independent WIMP-nucleon cross-section of 10^{-46} cm^2 at 100 GeV WIMP mass. 
This high sensitivity is achievable due to the large target mass and the very low backgrounds in the spherical acrylic detector design as well as at the unique SNOLAB facility.... 655. A Phase Space Beam Position Monitor for Synchrotron Radiation Nazanin Samadi (University of Saskatchewan) Synchrotron radiation experiments critically depend on the stability of the photon beam position. The position of the photon beam at the experiment or optical element location is set by the electron beam source position and angle as it traverses the magnetic field of the bend magnet or insertion device. An ideal photon beam monitor would be able to measure the photon beam's position and... 668. Analysis of Quantum Defects in high energy Helium P states Ryan Peck Quantum defects are useful in interpreting high energy atomic states in terms of simple Hydrogenic energy levels. We will find the energy levels for 1snp singlet and triplet P state Helium from $n = 2$ to $n = 12$ with some of the most accurate helium atom calculations to date using the exact non-relativistic Hamiltonian with wave functions expanded in a basis set of Hylleraas... 848. Essential Psychology in the Physics Classroom - Five Steps to Improve Classroom Effectiveness Jo-Anne Brown (University of Calgary) Teaching large physics classes - especially to non-physics majors that may have developed an extraordinary aversion to anything math-related - can be a challenge, even for the best instructors. However, there are a few techniques, drawn from psychology, that can help improve the experience for both the instructor and the students. In this talk, I will present a 'five-step program' I... 670. Extensions of Kinetic Monte Carlo simulations to study thermally activated grain reversal in dual-layer Exchange Coupled Composite recording media. Dr Ahmad Almudallal (Memorial University of Newfoundland) Thermal activation processes represent the biggest challenge to maintain data on magnetic recording media, which is composed of uniformly magnetized nano-meter grains. These processes occur over long time scales, years or decades, and result in reversing magnetization of the media grains by rare events. Typically, rare events present a challenge if modelled by conventional micromagnetic... 590. Searching for the echoes of inflation from a balloon - The first SPIDER flight Ivan Padilla (University of Toronto) SPIDER is a balloon-borne polarimeter designed to detect B-modes in the CMB at degree angular scales. Such a signal is a characteristic of early universe gravitational waves, a cornerstone prediction of inflationary theory. Hanging from a balloon at an altitude of 36 km allows the instrument to bypass 99% of the atmosphere and get an unobstructed view of the sky at 90 and 150 GHz. The... 522. The nanostructure of (Ybx, Y1-x)2O3 thin films obtained by reactive crossed-beam laser ablation using bright-field and high-angle annular dark-field STEM imaging. Prof. Jean-François Bisson (Université de Moncton) Ytterbium-doped yttrium oxide thin films were obtained with a variant of pulsed laser deposition, called reactive crossed-beam laser deposition, wherein a cross-flow of oxygen, synchronized with the laser pulses, is used for oxidizing and entraining the ablation products of a Yb/Y alloy target towards a substrate placed inside a vacuum chamber [1]. The nanostructure of the films is examined... 723. 
A SYSTEMATIC APPROACH TO STANDARDIZING SMALL FIELD DOSIMETRY IN RADIOTHERAPY APPLICATIONS Dr Gavin Cranmer-Sargison (Department of Medical Physics, Saskatchewan Cancer Agency) Small field dosimetry is difficult, yet consistent data is necessary for the clinical implementation of advanced radiotherapy techniques. In this work we present improved experimental approaches required for standardizing measurement, Monte Carlo (MC) simulation based detector correction factors as well as methods for reporting experimental data. A range of measurements and MC modelling... 532. Direct Detection Prospects for Higgs-portal Singlet Dark Matter Fred Sage (University of Saskatchewan) There has recently been a renewed interest in minimal Higgs-portal dark matter models, which are some of the simplest and most phenomenologically interesting particle physics explanations of the observed dark matter abundance. In this talk, we present a brief overview of scalar and vector Higgs-portal singlet dark matter, and discuss the nuclear recoil cross sections of the models. We show... 712. Hadronic-to-Quark-Matter Phase Transition: Effects of Strange Quark Seeding. Mr Luis Welbanks (University of Calgary) When a massive star depletes its fuel it may undergo a spectacular explosion; the supernova. If the star is massive enough, it can undergo a second explosion; the Quark nova. The origin for this second explosion has been argued to be the transition from Hadronic-to-Quark-Matter (Ouyed et al. 2013). Hadronic-to-Quark-Matter phase transition occurs when hadronic (nucleated) matter under high... 783. Investigation of the effect of growth condition on defects in MBE grown GaAs1-xBix Vahid Bahrami Yekta (University of Victoria) Incorporation of Bismuth into GaAs causes an anomalous bandgap reduction (88 meV/% for dilute alloys) with rather small lattice mismatch compared to ternary In or Sb alloys. The bandgap can be adjusted over a wide range of infrared wavelengths up to 2.5 μm by controlling the Bi content of the alloy which is useful for laser, detector and solar cell applications. Semiconductor lasers are... 849. Observation of Wakefields in Coherent Synchrotron Radiation at the Canadian Light Source Ward Wurtz (Canadian Light Source Inc.) Synchrotron light sources routinely produce brilliant beams of light from the infrared to hard X-ray. Typically, the length of the electron bunch is much longer than the wavelength of the produced radiation, causing the electrons to radiate incoherently. Many synchrotron light sources, including the Canadian Light Source (CLS), can operate in special modes where the electron bunch, or... 446. Precision Measurement of Lithium Hyperfine and Fine Structure Intervals Prof. William van Wijngaarden (Physics Department, York University) A number of experiments have precisely measured fine and hyperfine structure splittings as well as isotope shifts for several transitions at optical frequencies for 6,7Li [1]. These data offer an important test of theoretical techniques developed by two groups to accurately calculate effects due to QED and the finite nuclear size in 2 and 3 electron atoms. The work by multiple groups... 538. 
The Kronig-Penney model extended to arbitrary potentials via numerical matrix mechanics Mr Pavelich Robert (University of Alberta) We present a general method using matrix mechanics to calculate the bandstructure for 1D periodic potential arrays, filling in a pedagogical gap between the analytic solutions to the Kronig-Penney model and more complicated methods like tight-binding. By embedding the potential for a unit cell of the array in a region with periodic boundary conditions, we can expand in complex exponential... 560. Using an information theory-based method for statistical detection of high-frequency climate signals in northern-hemisphere water supply variations Dr Sean W. Fleming (Environment Canada, MSC Science Division) Water scarcity is an acute global concern under population and economic growth, and understanding hydroclimatic variation is becoming commensurately more important for resource management. Climatic drivers of water availability vary complexly on many time- and space-scales, but serendipitously, the climate system tends to self-organize into coherent dynamical modes. Two of these are El... 776. A Multiorbital DMFT Analysis of Electron-Hole Asymmetry in the Dynamic Hubbard Model Christopher Polachic The dynamic Hubbard model (DHM) improves on the description of strongly correlated electron systems provided by the conventional single-band Hubbard model through additional electronic degrees of freedom, namely a second, higher energy orbital and associated hybridization parameters for interorbital transitions. The additional orbital in the DHM provides a more realistic modeling of... 612. Atomic Force Microscopy Characterization of Hydrogen Terminated Silicon (100) 2x1 Reconstruction Ms Taleana Huff (University of Alberta) Hydrogen terminated silicon (100) $2 \times 1$ (H:Si(100)) is examined using a novel non-contact atomic force microscopy (NC-AFM) approach. NC-AFM gives access to unique information on the surface such as unperturbed surface charge distributions, chemical bonding, and surface forces. H:Si(100) is an attractive surface for examination due to its potential for nano-electronics. Dangling... 675. Dual Co-Magnetometer using Xe129 for Measurement of the Neutron's Electron Dipole Moment Joshua Wienands (University of British Columbia) A new high-density ultra cold neutron source is being constructed and developed at TRIUMF in Vancouver, BC with collaborators from Japan and several Canada research groups. One of the first goals of this collaboration is to measure the electric dipole moment (EDM) of the neutron to an uncertainty of <10$^{-27}$ e-cm. To measure the nEDM, a magnetic resonance (MR) experiment on polarized... 480. Multifunctional perfluorocarbon nanoemulsions for cancer therapy and imaging Mr Donald A. Fernandes (Ryerson University) There is interest for the use of nanoemulsions as therapeutic agents, particularly Perfluorocarbon (PFC) droplets, whose amphiphilic shell protects drugs against physico-chemical and enzymatic degradation. When delivered to their target sites, these PFC droplets can vaporize upon laser excitation, efficiently releasing their drug payload and/or imaging tracers. Due to the optical properties of... 622. Status of the PICO-60 Dark Matter Search Experiment Pitam Mitra (University of Alberta) The PICO collaboration (formerly PICASSO and COUPP) uses bubble chambers for the search for Weakly Interacting Massive Particle (WIMP) dark matter. 
Such bubble chambers are scalable, can have large target masses and can be operated at regimes where they are insensitive to backgrounds such as beta and gamma radiation. The PICO-60 experiment is a bubble chamber that has been developed and... 884. Faster than the Speed of Light Prof. Miguel Alcubierre (National University of Mexico) Herzberg Memorial Public Lecture - Miguel Alcubierre, National Univ. of Mexico / Conférence commémorative publique Herzberg - Miguel Alcubierre, National Univ. of Mexico In this talk I will give a short introduction to some of the basic concepts of Einstein's special theory of relativity, which is at the basis of all of modern physics. In particular, I will concentrate on the concept of causality, and why causality implies that nothing can travel faster than the speed of light in vacuum. I will later discuss some of the basic ideas behind Einstein's other... 898. Opening and Welcome, Calvin Kalman from CAP Teachers' Day - Session I / Journée des enseignants - Atelier I 585. **WITHDRAWN** Electrical and optical properties of electrochromic Tungsten trioxide (WO3) thin films at temperature range 300 to 500K Bassel Abdel Samad (Moncton University) T1-9 Nanostructured Surfaces and Thin Films (DSS-DCMMP) / Surfaces et couches minces nanostructurées (DSS-DPMCM) During the past decade a great interest has been shown in the study of transition tungsten trioxide (WO3) thin films. The reason is that this transition presents a number of interesting optical and electrical properties. While their optical properties are very well studied in view of their application in smart windows, not much study is focussed on their electrical properties as a function of... 899. Changing student's approach to learning physics, Calvin Kalman, Chair of CAP Division of Physics Education 452. Exoplanet Atmospheres: Triumphs and Tribulations Prof. Sara Seager (Massachusett institute of technology) T1-3 Ground-based / in situ observations and studies of space environment I (DASP) / Observations et études de l'environnement spatial, sur terre et in situ I (DPAE) From the first tentative discoveries to veritable spectra, the last 15 years has seen a triumphant success in observation and theory of exoplanet atmospheres. Yet the excitement of discovery has been mitigated by lessons learned from the dozens of exoplanet atmospheres studied, namely the difficulty in robustly identifying molecules, the possible interference of clouds, and the permanent... 621. Geometrization of N-Extended 1-Dimensional Supersymmetry Algebras Charles Doran (University of Alberta) T1-4 Mathematical Physics (DTP) / Physique mathématique (DPT) The problem of classifying off-shell representations of the N-extended one-dimensional super Poincare algebra is closely related to the study of a class of decorated graphs known as Adinkras. We show that these combinatorial objects possess a form of emergent supergeometry: Adinkras are equivalent to very special super Riemann surfaces with divisors. The method of proof critically involves... 771. Light Exotic Nuclei Studied via Resonance Scattering Grigory Rogachev (Texas A&M University) T1-7 Nuclear Structure I (DNP) / Structure nucléaire I (DPN) Remarkable advances have been made toward achieving the long-sought-after dream of describing properties of nuclei starting from realistic nucleon-nucleon interactions in the last two decades. 
The ab initio models were very successful in pushing the limits of their applicability toward nuclear systems with ever more nucleons and exotic neutron to proton ratios. Predictions of these models... 736. Natural and unnatural SUSY Thomas Gregoire (Carleton University) T1-5 Energy Frontier: Susy & Exotics I (PPD-DTP) / Frontière d'énergie: supersymétrie et particules exotiques I (PPD-DPT) After the first run of LHC, the parameter space of supersymmetric theories is under serious pressure. In this talk I will present attempts at natural SUSY model building and also discuss the consequences of relaxing the naturalness assumption of supersymmetric theories. 514. New results from Planck Douglas Scott (UBC) T1-6 Cosmic Frontier: Cosmology II (PPD-DTP-DIMP) / Frontière cosmique: cosmologie II (PPD-DPT-DPIM) The Planck satellite has completed its mission to map the entire microwave sky at nine separate frequencies. A new data release was made in February 2015, based on the full mission, and including some polarization data for the first time. The Planck team has already produced more than 100 papers, covering many different aspects of the cosmic microwave background (CMB). We have been able to... 518. Overview of the Recent J-TEXT Results Ge Zhuang (Huazhong University of Science and Technology) Plasma Physics / Physique des plasmas (DPP) T1-8 Special session to honor Dr. Akira Hirose I (DPP) / Session speciale en l'honneur de Dr. Akira Hirose I (DPP) The experimental research in recent years on the J-TEXT tokamak are summarized, the most significant results including observation of core magnetic and density perturbations associated with sawtooth events and tearing instabilities by a high-performance polarimeter-interferometer (POLARIS), investigation of a rotating helical magnetic field perturbation on tearing modes, studies of resonant... 916. Pearson Education's digital resources for supporting Physics teaching: Mastering Physics Adam Sarty (Saint Mary's University), Mrs Claire Varley (Customer Experience Manager – Higher Education, Pearson Canada) T-PUB Commercial Publishers' Session: Resources to Enhance University Physics Teaching (DPE) / Session des éditeurs commerciaux : Ressources visant à améliorer l'enseignement de la physique à l'Université (DEP) This presentation will provide an overview of the online resources which Pearson Education can provide to help support your university physics teaching. We will begin with an overview of how one faculty member has implemented and used Pearson resources in his first-year physics course sequence, and the plans in place for including further tools in the coming year. The presentation will then... 823. Scanning Tunneling Spectroscopy of LiFeAs Prof. D,A. Bonn (University of British Columbia) T1-1 Superconductivity (DCMMP) / Supraconductivité (DPMCM) LiFeAs is one of several pnictide and chalcogenide superconductors that can be grown in single-crystal form with relatively few defects. Spectroscopy away from any native defects reveals a spatially uniform superconducting gap, with two distinct gap edges. Quasiparticle interference over the gap energy range provides evidence for an S+- pairing state. We further explore the spectroscopy of... 750. The Rate of Reduction of Defocus in the Chick Eye is Proportional to Retinal Blur Prof. Melanie Campbell (University of Waterloo) T1-11 Medical Imaging (DMBP) / Imagerie médicale (DPMB) PURPOSE. 
Calculations of retinal blur and eye power are developed and used to study blur on the retina of the growing chick eye. The decrease in defocus and optical blur during growth is known to be an active process. Here we show that the rate of defocus reduction is proportional to the amount of blur on the retina. METHODS. From literature values of chick eye parameters, the amounts of... 714. Ultrafast dynamics of mobile charges and excitons in hybrid lead halide perovskites David Cooke (McGill University) T1-10 THz science and applications (DAMOPC) / Sciences et applications des THz (DPAMPC) In this talk we discuss recent experiments using ultra-broadband time-resolved THz spectroscopy (uTRTS) studying charge and excitonic degrees of freedom in the novel photovoltaic material CH3NH3PbI3. This technique uses near single-cycle and phase stable bursts of light with an ultra-broad bandwidth spanning 1 - 125 meV to take snapshots of a material's dielectric function or optical... 526. Universal features of quantum dynamics: quantum catastrophes Duncan O'Dell (McMaster University) T1-2 Many body physics & Quantum Simulation (DAMOPC-DCMMP) / Physique des N corps et simulation quantique (DPAMPC-DPMCM) Tracking the quantum dynamics following a quench of a range of simple many-body systems (e.g. the two and three site Bose-Hubbard models, particles on a ring), we find certain common structures with characteristic geometric shapes that occur in all the wave functions over time. What are these structures and why do they appear again and again? I will argue that they are quantum versions of the... 561. Characterization of the 2D percolation transition in ultrathin Fe/W(110) films using the magnetic susceptibility Randy Belanger (McMaster University) Surface Science / Science des surfaces (DSS) The growth of the first atomic layer of an ultrathin film begins with the deposition of isolated islands. Upon further deposition, the islands increase in size until, at some critical deposition, the merging of the islands creates at least one connected region of diverging size. This universal phenomenon describing connectivity is termed "percolation" and occurs at a "percolation transition"... 743. Image Analysis and Quantification for PET Imaging Dr Esmat Elhami (University of Winnipeg) Introduction: Positron emission tomography (PET) is a highly sensitive, quantitative and non-invasive detection method that provides 3D information on biological functions inside the body. There are several factors affecting the image data, including normalization, scattering, and attenuation. In this study we have quantified the effect of scattering and attenuation corrections on the PET... 837. **WITHDRAWN** Planck, gravity waves, and cosmology in the 21st century Kendrick Smith (Perimeter Institute for Theoretical Physics) In this talk I'll survey the current observational status in cosmology, highlighting recent developments such as results from the Planck satellite, and speculate on what we might achieve in the future. In the near future some important milestones will be exploration of the neutrino sector, and much better constraints on the physics of the early universe via B-mode polarization. In the... 583. Carrier dynamics in semiconductor nanowires studied using optical-pump terahertz-probe spectroscopy Prof. 
Denis Morris (Département de physique, Université de Sherbrooke) The advance of non-contact measurements involving pulsed terahertz radiation is of great interest for characterizing the electrical properties of a large ensemble of nanowires. In this work, InP and Si nanowires grown by molecular beam epitaxy or by chemical vapor deposition on silicon substrates were characterized using optical-pump terahertz-probe (OPTP) transmission experiments. The... 530. Hot and Cold Dynamics of Trapped Ion Crystals near a Structural Phase Transition Paul C Haljan (Simon Fraser University) Small arrays of laser-cooled trapped ions are widely used for quantum information research, but they are also a versatile mesoscopic system to investigate physics with a flavor reminiscent of familiar models in condensed matter. For example, in a linear rf Paul trap, laser-cooled trapped ions will organize into a linear array when the transverse confinement of the trap is strong enough;... 461. Hunt for Supersymmetry with the ATLAS detector at LHC Zoltan Gecse (University of British Columbia (CA)) Supersymmetry is one of the most motivated theories beyond the Standard Model of particle physics. It explains the mass of the observed Higgs boson and provides a Dark Matter candidate among other attractive features. A striking prediction of Supersymmetry is the existence of a new particle for each Standard Model one. I will highlight results of the extensive program of the ATLAS... 820. Interplay of charge density waves and superconductivity Kaori Tanaka (University of Saskatchewan) We examine possible coexistence or competition between charge density waves (CDW) and superconductivity (SC) in terms of the extended Hubbard model. The effects of band structure, filling factor, and electron-phonon interactions on CDW are studied in detail. In particular, we show that van Hove singularities per se can lead to the formation of CDW, due to a substantial energy gain by... 451. Low-energy, precision experiments with ion traps: mass measurements and decay spectroscopy Anna Kwiatkowski (TRIUMF) The atomic mass is a unique identifier of each nuclide, akin to a human fingerprint, and manifests the sum of all interactions among its constituent particles. Hence it provides invaluable insights into many disciplines from forensics to metrology. At TRIUMF's Ion Trap for Atomic and Nuclear science (TITAN), Penning trap mass spectrometry is performed on radioactive nuclides, particularly... 618. Magnetic Fluctuations Measurements in Magnetized Confinement Plasmas Dr Weixing Ding (UCLA) Both the magnetic fluctuations and electron density fluctuations are important parameters for fusion-oriented plasma research since fluctuation-driven transport dominates in high temperature magnetic confinement devices. Far-infrared laser systems are employed to measure both the Faraday rotation and electron density simultaneously with time response up to a few microseconds in reversed... 539. Magnetic Susceptibility Mapping in Human Brain using High Field MRI Dr Alan Wilman (University of Alberta) Magnetic Resonance Imaging (MRI) is a powerful imaging method for examining hydrogen protons and their local environment. Inferences can be made about the local environment from the signal relaxation (decay or recovery) or phase evolution. For many years, phase images were largely discarded in favor of magnitude images only, which dominate clinical MRI. Although the sensitive nature of... 900.
Metamaterials: Controlling light, heat, sound and electrons at the nanoscale, Zubin Jacob, Electrical and Computer Engineering, UofA 469. Some discrete-flavoured approaches to Dyson-Schwinger equations Karen Yeats (Simon Fraser University) I will discuss two recent ideas on how to better understand the underlying structure of Dyson-Schwinger equations in quantum field theory. These approaches use primarily combinatorial tools; classes of rooted trees in the first case and chord diagrams in the second case. The mathematics is explicit and approachable. 772. The Far-Infrared Universe: from the Universe's oldest light to the birth of its youngest stars Jeremy Scott (University of Lethbridge) Over half of the energy emitted by the Universe appears in the relatively unexplored Far-Infrared (FIR) spectral region, which is virtually opaque from the ground and must be observed by space-borne instrumentation. The European Space Agency (ESA) Planck and Herschel Space Observatories, launched together on 14 May 2009, have both provided pioneering observations in this spectral range from... 643. You don't know what you've got 'till it's gone: ambient surface degradation of ZnO powders Kristin Poduska (Memorial University of Newfoundland) ZnO has rich electronic and optical properties that are influenced by surface structure and composition, which in turn are strongly affected by interactions with water and carbon dioxide. We correlated the effects of particle size, surface area, and crystal habit with data from X-ray photoelectron spectroscopy and zeta potential measurements to compare the degradation of ZnO powders prepared... 652. **WITHDRAWN** Density Functional Theory Study of Hydrogen on Metal Oxide and Insulator Surfaces Prof. Abdulwahab Sallabi (Physics Department, Misurata University, Misurata, Libya) The hydrogen molecule is being promoted as an environmentally clean energy source of the future. In order to use hydrogen as a source of energy, infrastructure has to be built: efficient processes for hydrogen extraction, and efficient processes and materials for hydrogen storage. The major problem facing the use of hydrogen as a clean source of energy is the storage... 917. A panel discussion of PER and Enhanced WebAssign in teaching physics Ernie McFarlane (University of Guelph), Marina Milner-Bolotin (The University of British Columbia), Martin Williams (University of Guelph) Join Nelson Education and some of Canada's leading physics educators for a demonstration of enhanced WebAssign and a discussion of physics education research in practice, including the use of digital learning tools to promote better learning outcomes. 797. Neutron Monitor Atmospheric Pressure Correction Method Based on Galactic Cosmic Rays tracing and MCNP Simulations of Cascade Showers Nuclear spallations accompanying Galactic Cosmic Ray (GCR) propagation through the atmosphere form a so-called "cascade shower" by means of production of secondary protons, photons, neutrons, muons, pions and other energetic particles. A world-wide Neutron Monitor (NM) network has been deployed for ground-based monitoring of energetic proton and neutron precipitation. Real time data... 607. Simulating Anderson localization via a quantum walk on a one-dimensional lattice of superconducting qubits. Joydip Ghosh (University of Calgary) A quantum walk (QW) on a disordered lattice leads to a multitude of interesting phenomena, such as Anderson localization.
While QW has been realized in various optical and atomic systems, its implementation with superconducting qubits remains outstanding. The major challenge in simulating QW with superconducting qubits emerges from the fact that on-chip superconducting qubits cannot hop... 867. A search for heavy gluon and vector-like quark in the 4b final state in pp collisions at 8 TeV Frederick Dallaire (Universite de Montreal (CA)) Searches for vector-like quarks are motivated by Composite Higgs models, which assume a new strong sector and predict the existence of new heavy resonances. A search for single production of vector-like quarks through the exchange of a heavy gluon is performed in the $p p \to G^* \to B\bar b/\bar B b \to H b \bar b \to b~\bar b~b~\bar b$ process, where $G^*$ is a heavy color octet vector... 649. Cold Atom Metrology: Progress towards a New Absolute Pressure Standard Dr James Booth (British Columbia Institute of Technology) Laser cooling and trapping of atoms has created a revolution in physics and technology. For example, cold atoms are now the standard for timekeeping, which underpins the GPS network used for global navigation. In this talk, I will describe a research collaboration between BCIT, UBC (Kirk Madison) and NIST (Jim Fedchak - Sensor Science Division) with the goal of creating a cold atom (CA) based... 490. Correlating quantitative MR changes with pathological changes in the white matter of the cuprizone mouse model of demyelination Prof. Melanie Martin (Physics, University of Winnipeg, Radiology, University of Manitoba) Mouse brain white matter (WM) damage following the administration of cuprizone was studied weekly using diffusion tensor imaging, quantitative magnetization transfer imaging, T2-weighted MRI (T2w), and electron microscopy (EM). A previous study examined correlations between MR metrics and EM measures after 6 weeks of feeding. The addition of weekly *ex vivo* tissue analysis allows for a more... 552. Electric Monopole Transition Strengths in $^{62}$Ni Mr Lee J. Evitts (TRIUMF) Excited states in $^{62}$Ni were populated with a (p, p') reaction using the 14UD Pelletron accelerator at the Australian National University. The proton beam had an energy of 5 MeV and was incident upon a self-supporting $^{62}$Ni target of 1.2 mg/cm$^2$. Electric monopole transition strengths were measured from simultaneous detections of the internal conversion electrons and $\gamma$-rays... 510. Ionospheric Sounding Opportunities Using Signal Data From Pre-existing Amateur Radio And Operational Networks Alex Cushley Amateur radio and other signals used for dedicated purposes, such as the Automatic Position Reporting System (APRS) and Automatic Dependent Surveillance Broadcast (ADS-B), are signals that exist for other purposes but can be used for ionospheric sounding. Whether mandated and government funded or voluntarily constructed and operated, these networks provide data that can be used for scientific... 678. Novel Charges in CFT's Dr Pablo Diaz Benito (University of Lethbridge) In this talk we construct two infinite sets of self-adjoint commuting charges for a quite general CFT. They come out naturally by considering an infinite embedding chain of Lie algebras, an underlying structure shared by all theories with gauge groups U(N), SO(N) and Sp(N). The generality of the construction allows us to carry all gauge groups at the same time in a unified framework, and so... 814. Plasma Ion Implantation for Photonic and Electronic Device Applications Prof.
Michael Bradley (Physics & Engineering Physics, University of Saskatchewan) Plasma Ion Implantation (PII) is a versatile ion implantation technique which allows very high fluence ion implantation into a range of targets. The technique is conformal to the surface of the implanted object, which makes it suitable for a wide range of applications. The ease with which high ion fluences can be delivered means that the technique can be used to change the stoichiometry (e.g.... 745. Quantum oscillation studies of quantum criticality in PrOs$_4$Sb$_{12}$ Dr Stephen Julian (University of Toronto) PrOs$_4$Sb$_{12}$ is a cubic metal with an exotic superconducting ground state below 1.8 K. The crystal fields around the Pr site are such that it has a singlet ground state and a magnetic triplet just 8 K above the ground state. Under an applied magnetic field, the triplet splits, and the S$_z = +1$ state crosses the singlet state at easily accessible magnetic fields. In the region of the... 790. The CHIME Dark Energy Project The Canadian Hydrogen Intensity Mapping Experiment (CHIME) is a novel radio telescope currently under construction at the Dominion Radio Astrophysical Observatory in Penticton, BC. Comprising four 20-m by 100-m parabolic cylinders, each equipped with 256 antennas along its focal line, CHIME is a `software telescope' with no moving parts. It will measure the 21-cm emission from neutral... 761. Towards quantum repeaters using frequency multiplexed entanglement Mr Pascal Lefebvre (University of Calgary) Quantum communication is based on the possibility of transferring quantum states, generally encoded into so-called qubits, over long distances. Typically, qubits are realized using polarization or temporal modes of photons, which are sent through optical fibers. However, photons are subject to loss as they travel through optical fibers or free space, which sets a distance barrier of around 100... 624. **WITHDRAWN** Fast-timing measurements in neutron-rich $^{65}$Co Bruno Olaizola Mampaso (Nuclear Physics Group - University of Guelph) The region below $^{68}$Ni has recently attracted great attention, from both experimental and theoretical studies, due to the observation of a sub-shell closure at N=40 and Z=28. The collectivity in the region is revealed in the even-even Fe and Cr isotopes by the low energy of the first 2$^+$ states and the enhanced $B(E2;2^+\rightarrow0^+)$ reduced transition probabilities, which peak at... 474. A Variational Wave Function for Electrons coupled to Acoustic Phonons Carl Chandler (University of Alberta) We briefly survey the electron-phonon interactions in metals with an emphasis on applications in electron-phonon mediated superconductivity. While BCS theory and Eliashberg theory have significant predictive power, the microscopic Hamiltonians for the processes they describe are still an open area of study. We will examine the hitherto unsolved BLF-SSH model of electrons interacting with... 544. Electroweak Baryogenesis and the LHC David Morrissey (TRIUMF) It is not known how to explain the excess of matter over antimatter with the Standard Model. This matter asymmetry can be accounted for in certain extensions of the Standard Model through the mechanism of electroweak baryogenesis (EWBG), in which the extra baryons are created in the early Universe during the electroweak phase transition. In this talk I will review EWBG, connect it to... 692.
Interstitial point radiance spectroscopy in turbid media Dr Bill Whelan (Dept of Physics, University of Prince Edward Island) Optical spectroscopy has become a valuable tool in biomedical diagnostics because of its ability to provide biochemical information on endogenous and exogenous chromophores present in tissues. In this work, point radiance spectroscopy using a white light source is investigated 1) to measure the optical properties of bulk tissues and 2) to detect localized gold nanoparticles in tissue mimicking... 778. SELF- AND AIR-BROADENED LINE SHAPE PARAMETERS OF METHANE IN THE 2.3 MICRONS RANGE Adriana Predoi-Cross (University of Lethbridge) Methane is an important greenhouse gas in the terrestrial atmosphere and a trace gas constituent in planetary atmospheres. We report measurements of the self- and air-broadened Lorentz widths, shifts and line mixing coefficients along with their temperature dependences for methane absorption lines in the 2.22 to 2.44 microns spectral range. This set of highly accurate spectral line shape... 764. True Random Number Generation based on Interference between Two Independent Lasers Caleb John (University of Calgary) Reliable true random number generation is essential for information-theoretic security in a quantum cryptographic system based on quantum key distribution (QKD) and one-time pad encryption [1]. Various random number generation methods have already been proposed and demonstrated, such as schemes based on the detection of single photons [2], whose rate is limited by the dead time of single... 780. Yang-Mills Flow in the Abelian Higgs Model Paul Mikula (University of Manitoba) The Yang-Mills flow equations are a parabolic system of partial differential equations determined by the gradient of the Yang-Mills functional, whose stationary points are given by solutions to the equations of motion (written out in the sketch below). We consider the flow equations for a Yang-Mills-Higgs system, where the gauge field is coupled with a scalar field. In particular we consider the Abelian case with axial... 896. Generating Ideas for Active and Experiential Learning in Physics Chitra Rangan (University of Windsor) T-MEDAL CAP Medal Talk - Chitra Rangan, U. Windsor (Teaching Undergraduate Physics / Enseignement de la physique au 1er cycle) The Physics community has known the importance of Active Learning (AL) for the last twenty years (see [1,2]). A recent analysis of 225 studies on AL [3] has demonstrated that "active learning appears effective across all class sizes --- although the greatest effects are in small (n <= 50) classes." Physicists have innovated both technologies and techniques for AL [4,5]. Yet, most classes,... 901. Quantum superposition and the uncertainty principle in the classroom; a hands-on experience, Martin Laforest, Senior Manager, Scientific Outreach, Institute for Quantum Computing, University of Waterloo Teachers' Day - Session II / Journée des enseignants - Atelier II 902. Afternoon workshops: Teachers' Day - Lunch / Journée des enseignants - Diner List of proposed workshops; teachers will be asked to sign up for three workshops at most. A separate registration form for workshops will be sent to the teachers by the teachers' local committee. - Cavendish experiment (measuring G) - Millikan oil drop (obtaining the basic electron charge). - e/m for electrons. - Electron diffraction (verifying wave-particle duality). - Video analysis of...
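A note on entry 780 (Yang-Mills Flow in the Abelian Higgs Model): the gradient flow referred to there can be written compactly. In the usual Euclidean conventions (the abstract itself does not fix signs or metric conventions), the Yang-Mills functional and its gradient flow are

$$S_{\mathrm{YM}}[A] = \frac{1}{4}\int d^4x\, F^{a}_{\mu\nu}F^{a\,\mu\nu}, \qquad \partial_t A^{a}_{\nu} = -\frac{\delta S_{\mathrm{YM}}}{\delta A^{a\,\nu}} = \left(D^{\mu}F_{\mu\nu}\right)^{a},$$

so a fixed point of the flow, $\partial_t A = 0$, is exactly a solution of the classical equation of motion $D^{\mu}F_{\mu\nu}=0$; in the Abelian Higgs case considered in the talk, a scalar-field current term is added to the right-hand side.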
630. Accelerator-Based Medical Isotope Production at TRIUMF Paul Schaffer (TRIUMF) T2-6 Nuclear Physics in Medicine (DNP-DMBP-DIAP) / Physique nucléaire en médecine (DPN-DPMB-DPIA) TRIUMF operates a suite of [H-] cyclotrons (13, 2 x 30, 42 and 500 MeV) which, in addition to supplying our basic science program, are used to produce a variety of medical isotopes. Within the next few years TRIUMF will also begin isotope production in our new Advanced Rare IsotopE Laboratory (ARIEL) – a 50 MeV, 10 mA continuous-wave electron linac. The breadth and power of our infrastructure... 554. Many-body localization and potential realizations in cold atomic gases Jesko Sirker (U Manitoba) T2-2 Condensed Matter Theory (DCMMP-DTP) / Théorie de la matière condensée (DPMCM-DPT) Disorder in a non-interacting quantum system can lead to Anderson localization where single-particle wave functions become localized in some region of space. Recently, the study of interaction effects in systems which do exhibit Anderson localization has attracted renewed interest. In my talk I will present recent theoretical progress in understanding localization in many-body systems. I... 457. Model-Based Reasoning in Upper-division Lab Courses Heather Lewandowski (University of Colorado) T2-9 Gender and Arts in Physics Teaching (CEWIP-DPE) / Genre et arts dans l'enseignement de la physique (CEFEP-DEP) Modeling, which includes developing, testing, and refining models, is a central activity in physics. Well-known examples include everything from the Bohr model of the hydrogen atom to the Standard Model of particle physics. Modeling, while typically considered a theoretical activity, is most fully represented in the laboratory where measurements of real phenomena intersect with... 615. Modification of graphene films in the flowing afterglow of microwave plasmas at reduced pressure Luc Stafford (U.Montréal) T2-11 Laser, Laser-matter interactions, and plasma based applications (DPP) / Lasers, interactions laser-matière et applications basées sur les plasmas (DPP) Graphene films were exposed to the late afterglow of a reduced-pressure N2 plasma sustained by microwave electromagnetic fields. X-ray photoelectron spectroscopy (XPS) shows that plasma-generated N atoms are incorporated into both pyridinic and pyrrolic groups, without excessive reduction of sp2 bonding. Nitrogen incorporation was found to be preceded by N adsorption, where N adatom density... 463. New View of Aurora from Space using the e-POP Fast Auroral Imager Prof. Leroy Cogger (University of Calgary) T2-3 Ground-based / in situ observations and studies of space environment II (DASP) / Observations et études de l'environnement spatial, sur terre et in situ II (DPAE) The Fast Auroral Imager (FAI) on the CASSIOPE Enhanced Outflow Probe (e-POP) consists of two CCD cameras, which measure the atomic oxygen emission at 630 nm and prompt auroral emissions in the 650 to 1100 nm range, respectively, using a fast lens system and high quantum-efficiency CCDs to achieve high sensitivity, and a common 26 degree field-of-view to provide nighttime images of about 650 km... 703. Project ALPHA: Applying AMO Physics to Antimatter and Using Antimatter to Study AMO Physics Prof.
Robert Thompson (University of Calgary) T2-10 Cold and trapped atoms, molecules and ions (DAMOPC) / Atomes, molécules et ions froids et piégés (DPAMPC) In 2010, the ALPHA Collaboration working at the AD Facility at CERN achieved the first capture and storage of atomic antimatter with our confinement of low-temperature antihydrogen in an Ioffe-type magnetic minimum atom trap. [1] This achievement was only reached through the application of a range of tools and techniques from an interdisciplinary spectrum of fields, including AMO Physics. ... 519. Scale and Conformal Invariance in Quantum Field Theory Prof. Jean-Francois Fortin (Laval University) T2-4 Fields and Strings (DTP) / Champs et cordes (DPT) The behavior of coupling constants in quantum field theory under a change of energy scale is encoded in the renormalization group. At fixed points of the renormalization group flow, quantum field theories exhibit conformal invariance and are described as conformal field theories. The larger spacetime symmetry of conformal field theory is not the smallest possible extension of Poincare... 491. Searches for Exotic Physics at ATLAS Wendy Taylor (York University (CA)) T2-7 Energy Frontier: Susy & Exotics II (PPD) / Frontière d'énergie: supersymétrie et particules exotiques II (PPD) The most exciting discovery to come from the LHC would be that of something completely unexpected. To that end, the ATLAS experiment has been enthusiastically analyzing the 2012 LHC data recorded at a centre-of-mass energy of 8 TeV looking for any possible evidence of new physics. A variety of signatures has been considered, including heavy resonances, excesses above the Standard Model... 834. Single particle structure in neutron-rich Sr isotopes approaching $N=60$ Dr Peter Bender (TRIUMF) T2-5 Nuclear Structure II (DNP) / Structure nucléaire II (DPN) The shape coexistence and shape transition at $N=60$ in the Sr, Zr region are the subject of substantial current experimental and theoretical effort. An important aspect in this context is the evolution of single particle structure for $N<60$ leading up to the shape transition region, which can be calculated with modern large scale shell model calculations using a $^{78}$Ni core or Beyond... 494. Status of the PICASSO and PICO experiments Dr Guillaume Giroux (Queen's University) T2-8 Cosmic frontier: Dark matter II (PPD) / Frontière cosmique: matière sombre II (PPD) The PICO collaboration, a merger of the COUPP and PICASSO experiments, searches for dark matter particles using superheated fluid detectors. These detectors can be operated within a set of conditions where they become insensitive to the typically dominant electron recoil background. Additionally, the acoustic measurement of the bubble nucleation makes possible the rejection of additional... 673. Ultrafast Transmission Electron Microscopy and its Nanoplasmonic Applications Prof. Aycan Yurtsever (INRS-EMT) T2-1 Materials characterization: microscopy, imaging, spectroscopy (DCMMP) / Caractérisation des matériaux: microscopie, imagerie, spectroscopie (DPMCM) Understanding matter at the dynamic and microscopic levels is fundamental for our ability to predict, control and ultimately design new functional properties for emerging technologies. Reaching such an understanding, however, has traditionally been difficult due to limited experimental methodologies that can simultaneously image both in space and time. Ultrafast transmission electron... 869.
A Search for Magnetic Monopoles and Exotic Long-lived Particles with Large Electric Charge at ATLAS Mr Gabriel David Palacino Caviedes (York University (CA)) A search for highly ionizing particles produced in 8 TeV proton-proton collisions at the LHC is performed with the ATLAS detector. A dedicated trigger significantly increases the sensitivity to signal candidates stopping in the electromagnetic calorimeter and makes it possible to probe particles with higher charges and lower energies. Production cross section limits are obtained for stable particles in... 513. CASSIOPE e-POP and coordinated ground-based studies of polar ion outflow, auroral dynamics, wave-particle interactions, and radio propagation Andrew Yau (University of Calgary) The Enhanced Polar Outflow Probe (e-POP) is an 8-instrument scientific payload on the Canadian CASSIOPE small satellite, comprised of plasma, magnetic field, radio, and optical instruments designed for in-situ observations in the topside polar ionosphere at the highest-possible resolution. Its science objectives are to quantify the micro-scale characteristics of plasma outflow in the polar... 710. DEAP-3600 trigger - the needle in the haystack Ben Smith (TRIUMF) DEAP-3600 is a dark matter experiment based at SNOLAB. It uses 3600 kg of liquid argon as a target, and searches for scintillation light from argon nuclei struck by weakly interacting massive particles (WIMPs). Argon-39 atoms also undergo beta decay, and the recoiling electrons also produce scintillation light. Beta decays are expected to occur at least $10^8$ times as frequently as WIMP... 523. Doppler shift lifetime measurements using the TIGRESS Integrated Plunger Mr Aaron Chester (Simon Fraser University Department of Chemistry) Along the $N=Z$ line, shell gaps open simultaneously for prolate and oblate deformations; the stability of these prolate and oblate configurations is enhanced by the coherent behaviour of protons and neutrons in $N=Z$ nuclei. Additionally, amplification of proton-neutron interactions along the $N=Z$ line may yield information on the isoscalar pairing interactions which have been predicted in... 535. Dynamics of Gravitational Collapse in AdS Space-Time Andrew Frey (University of Winnipeg) Gravitational collapse in asymptotically anti-de Sitter spacetime is dual to thermalization of energy injected into the ground state of a strongly coupled gauge theory. Following work by Bizon and Rostworowski, numerical studies of massless scalar fields in Einstein gravity indicate that generic initial states thermalize, given time, even for arbitrarily small energies. From the gravitational... 556. Evaporative Cooling in Electromagnetic Radio Frequency Ion Traps Lohrasp Seify (University of Calgary) In 2011, the ALPHA collaboration created and trapped neutral anti-hydrogen particles for the first time in history [1]. Key to this achievement was the demonstration of evaporative cooling of charged particles in a Penning Trap, a cooling method that had not previously been achieved with trapped low-temperature ions [2]. Work is currently underway at the University of Calgary to... 826.
Fabulous Physicists from Around the World: The tale of ICWIP 2014 Shohini Ghose (Wilfrid Laurier University) Women in Physics / Femmes en physique (CEWIP-CEFEP) A century ago it was only Marie Curie and a few other women who were part of the physics community, but in 2014, CAP and IUPAP (International Union of Pure and Applied Physics) brought over 200 women physicists from 52 countries to Waterloo for the 5th IUPAP International Conference on Women in Physics (ICWIP). ICWIP 2014 was held at Wilfrid Laurier University from August 5 to 8, 2014. This... 811. Light-Trapping Architecture for Room Temperature Bose-Einstein Condensation of Exciton-Polaritons near Telecommunication Frequencies Mr Pranai Vasudev (University of Toronto) While normally quantum mechanical effects are observable at cryogenic temperatures and at very small length scales, our work brings these quantum phenomena to the macroscopic length scale and to room temperature. Our work focuses on the possibility of room-temperature thermal equilibrium Bose-Einstein condensation (BEC) of quantum well exciton-polaritons in micrometer scale cavities... 501. Means of mitigating the limits to characterization of radiation-sensitive samples in an electron microscope. Marek Malac The scattering of the fast electrons by a sample in the transmission electron microscope (TEM) results in a measurable signal and also leads to sample damage. In an extreme case, the damage can be severe and can proceed faster than data can be collected. The fundamental limit on whether a measurement can be performed is set by the interaction cross section and collection efficiency for the... 838. Producing Medical Isotopes with Electron Linacs Mark de Jong (Canadian Light Source Inc.) The Canadian Light Source (CLS) has been working on a project to develop a facility that uses a 35 MeV high power (40 kW) electron linac to produce medical isotopes. This project was funded by Natural Resources Canada's Non-reactor-based Isotope Supply Program which was initiated following the lengthy shutdowns of the NRU reactor at Chalk River that caused significant shortages of... 829. Pump-probe Studies of Warm Dense Matter Prof. Ying Tsui (University of Alberta) Warm Dense Matter (WDM) is a material under extreme conditions which has near solid density but has a temperature of several electron volts. It is a state that lies between the condensed matter and plasma states. The study of materials under extreme conditions is currently a forefront area of study in materials science and has generated enormous scientific interest. The understanding of WDM is... 447. Demonstration of a Microtrap Array and manipulation of Array Elements Dr Bin Jian (National Research Council) A novel magnetic microtrap has been demonstrated for ultracold neutral atoms [1]. It consists of two concentric current loops having radii r1 and r2. A magnetic field minimum is generated along the axis of the loops if oppositely oriented currents flow through the loops. Selecting r2/r1 = 2.2 maximizes the restoring force to the trap center. The strength and position of the microtrap...
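An illustrative sketch of the two-loop geometry described in entry 447 above. This is not the authors' code: it simply evaluates the textbook on-axis field of two concentric, coplanar loops of radii r1 and r2 = 2.2 r1 carrying equal but opposite currents, and locates the resulting on-axis field minimum numerically; the loop radius and current values are placeholders.

import numpy as np

# On-axis field of the two opposed loops described in entry 447 (sketch only).
mu0 = 4e-7 * np.pi       # vacuum permeability, T*m/A
r1, r2 = 1.0e-3, 2.2e-3  # loop radii in metres (illustrative values)
current = 1.0            # current magnitude in amperes, equal in both loops

def b_axis(z):
    """Net axial field at height z above the common plane of the loops."""
    b_inner = mu0 * current * r1**2 / (2.0 * (r1**2 + z**2) ** 1.5)
    b_outer = mu0 * current * r2**2 / (2.0 * (r2**2 + z**2) ** 1.5)
    return b_inner - b_outer  # minus sign: oppositely oriented currents

z = np.linspace(0.0, 5.0 * r1, 2001)
b = np.abs(b_axis(z))
z_min = z[np.argmin(b)]      # on-axis field minimum, i.e. the trap centre
print(f"field minimum at z = {z_min / r1:.2f} * r1, |B| = {b.min():.2e} T")

Scanning the ratio r2/r1 in the same way and maximizing the field gradient at the minimum is one simple route to the kind of optimization quoted in the abstract, within these on-axis assumptions.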
722. Early studies of detector optical calibrations for DEAP-3600 Dr Berta Beltran (University of Alberta) The DEAP-3600 experiment is looking for dark matter WIMPs by detecting the scintillation light produced by a recoiling liquid argon nucleus. Using a 1 tonne fiducial volume, a WIMP-nucleon cross section sensitivity of 10^{-46} cm^2 is expected for 3 years of data taking for a 100 GeV WIMP. DEAP-3600 has been designed for a target background of 0.6 events in the WIMP region of interest in 3... 591. Isomeric decay spectroscopy of 96Cd Jason Park (University of British Columbia/TRIUMF) Self-conjugate nuclei, where $N=Z$, exhibit a strong $pn$ interaction due to the large overlap of wavefunctions in identical orbitals. The heaviest $N=Z$ nucleus studied so far is $^{92}$Pd, and it has demonstrated strong binding in the $T = 0$ interaction [1]. As the mass number increases, the nucleus approaches the doubly-magic $^{100}$Sn. To investigate the evolution of the $pn$... 654. Molecular-dynamics simulations of two-dimensional Si nanostructures Dr Ralf Meyer (Laurentian University) Nanostructured materials make it possible to tailor the vibrational properties of a system for specific uses like thermoelectric applications or phononic waveguides. In this work, the vibrational properties of two-dimensional silicon nanostructures are studied. The nanostructures are built from arrays of nanowires that are arranged in such a manner that they form a periodic lattice. The method... 495. The MoEDAL Experiment at the LHC - a New Light on the High Energy Frontier In 2010 the Canadian-led MoEDAL experiment at the Large Hadron Collider (LHC) was unanimously approved by CERN's Research Board to start data taking in 2015. MoEDAL is a pioneering experiment designed to search for highly ionizing avatars of new physics such as magnetic monopoles or massive (pseudo-)stable charged particles. Its groundbreaking physics program defines over 30 scenarios that... 531. The nature of GPS receiver bias variabilities: An examination in the Polar Cap region and comparison to Incoherent Scatter Radar Mr David Themens (University of New Brunswick) The problem of receiver Differential Code Biases (DCBs) in the use of GPS measurements of ionospheric Total Electron Content (TEC) has been a constant concern amongst network operators and data users since the advent of the use of GPS measurements for ionospheric monitoring. While modern methods have become highly refined, they still demonstrate unphysical bias behavior, namely notable solar... 579. Calculation of isotope yields for radioactive beam production Ms Fatima Garcia (Simon Fraser University and TRIUMF) Access to new and rare radioactive isotopes is key to their application in nuclear science. Radioactive ion beam (RIB) facilities around the world, such as TRIUMF (Canada's National Laboratory for Particle and Nuclear Physics, 4004 Westbrook Mall, Vancouver, BC, V6T 2A3), work to develop target materials that generate ion beams used in nuclear medicine, astrophysics and fundamental physics... 617. Deposition of functional coatings on glass substrates using a recently-developed atmospheric-pressure microwave plasma jet Mr Antoine Durocher-Jean (U. Montreal) In recent years, atmospheric-pressure plasmas have gained a lot of interest in view of their potential for fast treatment of materials over large-area wafers. While such plasmas are typically based on corona or dielectric barrier discharges (DBDs) for processing of thin samples (for example roll-to-roll systems), a number of applications require the treatment of thicker samples and thus the use... 646. Engineered spin-orbit coupling in ultracold quantum gases Lindsay LeBlanc (University of Alberta) Ultracold quantum gases are an ideal medium with which to explore the many-body behaviour of quantum systems.
With a century of research in atomic physics at the foundation, a wide variety of techniques are available for manipulating the parameters that govern the behaviour of these systems, including tuning the interactions between particles and manipulating their potential energy... 529. Gender gaps in a first-year physics lab Dr James Day (University of British Columbia) It has been established that male students outperform female students on almost all commonly-used physics concept inventories. However, there is significant variation in the factors that contribute to this gender gap, as well as the direction in which they influence it. It is presently unknown if such a gender gap exists on the relatively new Concise Data Processing Assessment (CDPA). To get... 674. Inverse melting in a simple 2D liquid Ahmad Almudallal (Memorial University of Newfoundland) We employ several computer simulation techniques to study the phase behaviour of a simple, two dimensional monodisperse system of particles interacting through a core-softened potential comprising a repulsive shoulder and an attractive square well. This model was previously constructed and used to explore anomalous liquid behaviour in 2D and 3D, including liquid-liquid phase separation [1].... 708. Single photon counting for the DEAP dark matter detector Thomas McElroy (University of Alberta) DEAP-3600, comprised of a 1 tonne fiducial mass of ultra-pure liquid argon, is designed to achieve world-leading sensitivity for spin-independent dark matter interactions. DEAP-3600 measures the time distribution of scintillation light from the de-excitation of argon dimers to select events. This measurement allows background events from Ar39 decays to be rejected at a high level. The... 659. Terahertz Scanning Tunneling Microscopy in Ultrahigh Vacuum Mr Vedran Jelic (University of Alberta) The terahertz scanning tunneling microscope (THz-STM) is a new imaging and spectroscopy tool that is capable of measuring picosecond electron dynamics at the nanoscale. Free-space THz pulses are commonly used for non-contact conductivity measurements, but they are diffraction limited to millimeter length scales. We can overcome this limit by coupling THz pulses to a sharp metal tip through... 580. The Electromagnetic Mass Analyser EMMA Barry Davids (TRIUMF) The Electromagnetic Mass Analyser EMMA is a recoil mass spectrometer for TRIUMF's ISAC-II facility designed to separate the recoils of nuclear reactions from the heavy ion beams that produce them and to disperse the recoils according to their mass/charge ratio. In this talk I will present an update on the construction and commissioning of the spectrometer and its components. 632. Thermodynamic and Transport Properties of a Holographic Quantum Hall System Joel Hutchinson (University of Alberta) We apply the AdS/CFT correspondence to study a quantum Hall system at strong coupling. Fermions at finite density in an external magnetic field are put in via gauge fields living on a stack of D5 branes in Anti-deSitter space. Under the appropriate conditions, the D5 branes blow up to form a D7 brane which is capable of forming a charge-gapped state. We add finite temperature by including a... 566. 
Transmission of Waves from a High-Frequency Ionospheric Heater to the Topside Ionosphere Dr Gordon James (University of Calgary) In the first year of operation of the ePOP instruments on the Canadian small satellite CASSIOPE, a number of passes were recorded during which the Radio Receiver Instrument (RRI) measured radiation from powerful high-frequency ground transmitters that act as ionospheric heaters. In the case of measurements of transionospheric propagation from the Sura heating facility in Russia, located at... 557. Cellular Automaton with nonlinear Viscoelastic Stress Transfer to Model Earthquake Dynamics Xiaoming Zhang (University of Western Ontario) Earthquakes may be seen as an example of self-organized criticality. When the Gutenberg-Richter law for earthquake magnitude is rewritten in terms of the seismic moment, a measure of the energy released, it yields a power-law distribution indicating a self-similar pattern (made explicit in the sketch below). The earthquake dynamics can be modelled by employing the spring-block system, which features a slow driving force, failure threshold... 521. Coincidence Measurements using the SensL MatrixSM-9 Silicon-photomultiplier Array Dr Jamie Sanchez-Fortun Stoker (University of Regina) The silicon photomultiplier (SiPM) has emerged as a rival device to traditional photodetectors such as the photomultiplier tube (PMT). Over the past decade, SiPMs - also known as Multi-pixel photon counters (MPPCs) and Single-photon avalanche diodes (SPADs) - have found applications in fields ranging from, for example, high-energy physics and atmospheric lidar, to homeland security,... 634. Constraints and Bulk Physics in the AdS/MERA Correspondence ChunJun Cao (Caltech) It has been proposed that the Multi-scale Entanglement Renormalization Ansatz (MERA), which is efficient at reproducing CFT ground states, also captures certain aspects of the AdS/CFT correspondence. In particular, MERA reproduces the Ryu-Takayanagi-type formula and the network structure is similar to a discretized AdS space where the renormalization direction gives rise to the additional bulk... 578. Dawn-dusk asymmetry in the intensity of polar cap flows as seen by SuperDARN Alexandre Koustov (U) Polar cap flow pattern and intensity depend on the IMF Bz and By components. For IMF Bz<0, the pattern is consistently two-celled, and previous studies indicate that flows are fastest near noon and midnight for By<0 and during afternoon-dusk hours for By>0. In this study, we investigate the polar cap flow intensity in two ways. First we consider highly-averaged (over each month of... 575. Identifying differences in long-range structural disorder in solids using mid-infrared spectroscopy Ben Xu (Memorial University) Structural disorder in calcium carbonate materials is a topic of intense current interest in the fields of biomineralization, archaeological science, and geochemistry. In these fields, Fourier transform infrared (FTIR) spectroscopy is a standard material characterization tool because it can clearly distinguish between amorphous calcium carbonate and calcite. Earlier theoretical work based on...
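The transformation gestured at in entry 557 (Cellular Automaton ... to Model Earthquake Dynamics) can be made explicit. Assuming the standard Gutenberg-Richter form and the usual moment-magnitude relation (neither is spelled out in the abstract),

$$\log_{10} N(>m) = a - b\,m, \qquad m = \tfrac{2}{3}\left(\log_{10} M_0 - 9.1\right) \;\;\Rightarrow\;\; N(>M_0) \propto M_0^{-2b/3},$$

i.e. the cumulative distribution of seismic moment (the energy released) is a pure power law with no characteristic scale, which is the self-similar pattern invoked in the self-organized-criticality picture.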
682. Measurements of Ionization States in Warm Dense Aluminum with Femtosecond Betatron Radiation from a Laser Wakefield Accelerator Mianzhen Mo (University of Alberta) Study of the ionization state of material in the warm dense matter regime is a significant challenge at present. Recently, we have demonstrated that the femtosecond-duration Betatron x-ray radiation from the laser wakefield acceleration of electrons is capable of being employed as a probe to directly measure the ionization states of warm dense aluminum via K-shell line absorption spectroscopy... 746. New decay modes of the high-spin isomer of $^{124}$Cs Allison Radich (University of Guelph) As part of a broader program to study the evolution of collectivity in the even-even nuclei above tin, a series of $\beta$-decay measurements of the odd-odd Cs isotopes into the even-even Xe isotopes, specifically $^{122,124,126}$Xe, have been made utilizing the 8$\pi$ spectrometer at TRIUMF-ISAC. The 8$\pi$ spectrometer consisted of 20 Compton-suppressed high-purity germanium (HPGe) detectors... 589. A pump-probe technique to measure the Curie temperature distribution of exchange-decoupled nanoscale ferromagnet ensembles Prof. Simone Pisana (York University) T3-1 Materials characterization: electrical, optical, thermal (DCMMP) / Caractérisation des matériaux: électrique, optique, thermique (DPMCM) Heat-assisted magnetic recording (HAMR) has been recognized as a leading technology to increase the data storage density of hard disk drives [1]. Dispersions in the properties of the grains comprising the magnetic medium can lead to grain-to-grain Curie temperature variations, which drastically affect noise in the recorded magnetic transitions, limiting the data storage density capabilities in... 468. Advanced Instrumentation at TRIUMF Prof. Reiner Kruecken (TRIUMF) T3-8 Advanced Instrumentation at Major Science Facilities: Detectors I (DIMP) / Instrumentation avancée dans des installations scientifiques majeures: détecteurs I (DPIM) TRIUMF operates the Isotope Separator and Accelerator (ISAC) rare isotope facility as well as the Centre for Molecular and Materials Science (CMMS), which uses muons and isotopes. The ISAC facility comprises 18 state-of-the-art experiments for experimental programs in nuclear structure, nuclear astrophysics, and fundamental symmetries. CMMS features several state-of-the-art set-ups for muon... 666. Anisotropic ion temperatures and ion flows adjacent to auroral precipitating electrons William Archer (University of Calgary) T3-3 Ground-based / in situ observations and studies of space environment III (DASP) / Observations et études de l'environnement spatial, sur terre et in situ III (DPAE) Large ion temperature anisotropies (temperature perpendicular to magnetic field larger than parallel to magnetic field) in narrow regions of enhanced ion flow have been identified by the Electric Field Instruments on board the Swarm satellites as a persistent feature of the high-latitude midnight-sector auroral zone. These flow channels typically span less than 100 km latitudinally with ion... 669. Beta-decay from $^{47}$K to $^{47}$Ca with GRIFFIN Dr Jenna Smith (TRIUMF) T3-6 Nuclear Structure III (DNP) / Structures nucléaires III (DPN) Recent developments in many-body calculation methods have extended the application of *ab initio* interactions to medium-mass nuclei near closed shells. Detailed nuclear data from these isotopes are necessary to evaluate the many-body calculation methods and to test the predictive capacity of the interactions. $^{47}$Ca and $^{47}$K are each one nucleon removed from the doubly-magic nucleus... 568. From Plasma to Complex Plasma Prof. Osamu Ishihara (Chubu University/Yokohama National University) T3-10 Special session to honour Dr.
Akira Hirose II (DPP) / Session spéciale en l'honneur du Dr Akira Hirose II (DPP) Earlier research on plasma turbulence and later research developments on complex plasmas are discussed. Study of nonlinear evolution of instabilities in a collisionless plasma, especially ion acoustic instability and Buneman instability, revealed the role of plasma collective modes in the heating of plasma particles. It was essential for plasma waves to grow in time, resulting in the heating... 688. Improving Physical Models of Qubit Decoherence and Readout Prof. Bill Coish (McGill University) T3-2 Quantum Computation and Communication (DTP-DCMMP-DAMOPC) / Communication et calcul quantique (DPT-DPMCM-DPAMPC) Qubit coherence measurements are now sufficiently accurate that they can be used to perform 'spectroscopy' of noise due to a complex environment. Measuring not only the decay time, but also the form of decay as a function of some external parameter (e.g. temperature) can determine the nature of the dominant decoherence source. I will describe how temperature-dependent measurements of qubit... 872. Panel Discussion on Women in Physics T3-7 Panel on Women in Physics (CEWIP) / Table ronde sur les femmes en physique (CEFEP) Moderator: Shohini Ghose, Wilfrid Laurier University Panellists: Sara Seager (MIT), Adriana Predoi-Cross (University of Lethbridge) and Wendy Taylor (York University) 572. Status of Long-Baseline Neutrino Experiments Dr Nicholas Hastings (University of Regina) T3-5 Study of Neutrino Oscillations (PPD-DTP-DNP) / Études des oscillations de neutrinos (PPD-DPT-DPN) The current generation of long-baseline neutrino oscillation experiments employs an off-axis $\nu_\mu$ (or $\bar{\nu}_\mu$) beam produced by the decay of pions created when a proton beam strikes a target. The beam is monitored at detector facilities near the production point before travelling hundreds of kilometres to a far detector. Aiming the beam centre slightly away from the far... 698. Status of the SuperCDMS and European Cryogenic Dark Matter experiments Dr Gilles Gerbier (Queen's University) T3-4 Cosmic Frontier: Dark Matter III (PPD)/ Frontière cosmique: matière sombre III (PPD) The SuperCDMS collaboration operates cryogenic germanium detectors to search for particle dark matter (WIMPs), so far at Soudan Underground Laboratory in Minnesota, US. The EURECA collaboration gathers EDELWEISS, a European collaboration also operating cryogenic germanium detectors, at the Laboratoire Souterrain de Modane, and CRESST, which operates cryogenic scintillating detectors (CaWO4) at the... 614. Watching proteins fold: capturing the conformational diffusion of single molecules during structural self-assembly Michael Woodside (University of Alberta) T3-9 Molecular Biophysics (DMBP) / Biophysique moléculaire (DPMB) The self-assembly of intricate structures by proteins is a complex process involving myriad degrees of freedom. Such "folding" is usually described in terms of a diffusive search over a multi-dimensional energy landscape in conformational space for the lowest-energy structure. The diffusion coefficient, *D*, encodes the rates at which microscopic motions occur during folding and is thus of... 584. Investigating the Structure of $^{46}$Ca through the $\beta^{-}$ Decay of $^{46}$K Utilizing the New GRIFFIN Spectrometer Ms Jennifer Pore (Simon Fraser University) Due to its very low natural abundance of 0.004%, the structure of the magic nucleus $^{46}$Ca has not been studied in great detail compared to its even-even Ca neighbors.
The calcium region is currently a new frontier for modern shell model calculations based on NN and 3N forces [1,2], so detailed experimental data from these nuclei are necessary for a comprehensive understanding of the region.... 810. Coarse-Grained computer simulations of Alzheimer's beta-amyloid peptides, using the Mercedes-Benz Hydrogen Bond Potential Dr Apichart Linhananta (Lakehead University) Protein aggregation is a medically relevant phenomenon that can lead to protein-folding diseases such as Alzheimer's and prion diseases. The aggregation process is largely determined by hydrogen bonds (HB), involving hundreds of peptides, over a period of days or even months. This rules out molecular dynamics (MD) simulations of all-atom protein models in explicit solvents. However, coarse-grained... 690. Development and simulations of the CANREB RFQ buncher and cooler at the TRIUMF facility Mr Jeff Bale (TRIUMF) TRIUMF's new Advanced Rare IsotopE Laboratory (ARIEL) is the up-and-coming producer of rare isotope beams for nuclear science in Canada. It will triple the number of beamlines available to both of the (rare) Isotope Separator and Accelerator (ISAC) facilities, and will expand their range of available isotopes. An overview of ARIEL and the CANadian Rare-isotope facility with Electron-Beam ion... 525. Electron Neutrino Cross Section Measurements at the T2K Off-Axis Near Detector Fady Shaker (University of Winnipeg) T2K is a long-baseline neutrino oscillation experiment in Japan that targets the measurement of the mixing angle between the first and the third neutrino mass eigenstates ($\theta_{13}$) by looking for the appearance of electron neutrinos ($\nu_e$) in a beam of muon neutrinos ($\nu_\mu$), as well as a precision measurement for the mass difference between the second and the third... 672. Fluctuations and Transport in Hall devices with ExB drift Dr Andrei Smolyakov (University of Saskatchewan) Devices with a stationary, externally applied electric field perpendicular to a moderate-amplitude magnetic field B₀ are now a common example of magnetically controlled plasmas. High-interest applications include Penning-type plasma sources, magnetrons for plasma processing, magnetic filters for ion separation, and electric space propulsion devices such as Hall thrusters. One common... 569. Generation, dynamics, and decay of a polar cap patch Dr Thayyil Jayachandran (University of New Brunswick) The polar cap ionosphere, an important part of the solar wind-magnetosphere-ionosphere system, is formed by ionization of the neutral atmosphere by solar radiation and particle precipitation under internal transportation and chemical processes. The polar ionosphere is primarily driven by magnetospheric convection and neutral circulation, and undergoes structuring over a wide range of temporal... 766. NanoQEY Quantum Key Distribution Satellite Christopher Pugh (University of Waterloo) NanoQEY (Nano Quantum EncrYption satellite) is a demonstration satellite which will show the feasibility of implementing Quantum Key Distribution (QKD) between two ground stations on Earth using a satellite trusted node approach. One of the main objectives of NanoQEY is to eliminate the necessity for a fine pointing system which will reduce cost and planning time for a satellite. The system... 789. New Pulse Processing Algorithm for SuperCDMS Mr Ryan Underwood (Queen's University) SuperCDMS searches for dark matter in the form of Weakly Interacting Massive Particles (WIMPs) with cryogenic germanium detectors.
WIMPs interacting with atomic nuclei deposit energy in the form of lattice vibrations (phonons) which propagate through the cylindrical Ge single crystal (75 mm diameter, 25 mm high) until they are absorbed by the phonon sensors covering part of the flat surfaces of... 587. Optical properties and Fermiology near field-tuned quantum critical points David Broun (Simon Fraser University) In the so-called "heavy-fermion" metals, the hybridization of the conduction band with electrons localized in partially filled $f$ orbitals leads to the formation of heavy quasiparticles, for which the effective mass can be renormalized by a factor of 100 or more. However, the itinerant nature of these quasiparticles competes with a tendency to form more conventional, magnetically ordered... 671. The first GRIFFIN Experiment: An investigation of the $s$-process yields for $^{116}$Cd Mr Ryan Dunlop (University of Guelph) In adopted models for the $s$-process, it is assumed that helium shell flashes give rise to two neutron bursts at two different thermal energies $(kT\sim10$ keV and $kT\sim25$ keV). The contribution to the isotopic abundance of $^{116}$Cd from the higher temperature neutron bursts is calculated assuming thermal equilibrium between the ground state and the long-lived isomeric state of... 821. Alpha particle backgrounds from the neck of the DEAP-3600 dark matter detector Dr James Bueno (University of Alberta) The DEAP-3600 dark matter detector at SNOLAB will search for scattering of weakly interacting massive particles from a 3600 kg liquid argon target. The liquid argon is held in a spherical vessel made from acrylic, with the highest standards of purity for both bulk acrylic and removal of surface activities. At the top of the vessel there is a neck opening to the cooling system, and alpha... 718. Constraining Oscillation Analysis Inputs at the T2K Near Detector Christine Nielsen (University of British Columbia) The T2K long-baseline neutrino oscillation experiment is composed of a near detector at 280 m and a far detector at Super-Kamiokande located 295 km from the neutrino beam in Tokai. The main oscillation analyses are performed using fits to the data collected at the far detector. These analyses depend on our ability to predict the event rates and energy spectra at the far detector, which... 680. Gamma-Gamma Angular Correlation Measurements With GRIFFIN Mr Andrew MacLean (University of Guelph) When an excited nuclear state emits successive $\gamma$-rays in a $\gamma-\gamma$ cascade, $X^{**} \rightarrow X^{*} + \gamma_{1} \rightarrow X + \gamma_{2}$, an anisotropy is found in the spatial distribution of $\gamma_{2}$ with respect to $\gamma_{1}$ (quantified by the correlation function written out below). By defining the direction of $\gamma_{1}$ to be the z-axis, the intermediate level, $X^{*}$, in general will have an uneven distribution of... 786. Photodetection with SiPM in particle physics and materials science Fabrice Retiere (TRIUMF) Pixelated Geiger-mode avalanche photo-diodes, also called silicon photo-multipliers (SiPMs), are replacing traditional vacuum photo-multiplier tubes (PMTs) in numerous applications in subatomic physics, medical imaging and condensed matter. They have several advantages: they are insensitive to magnetic fields, have higher efficiency and more uniform gain, and are more compact. However, they still...
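The anisotropy described in entry 680 (Gamma-Gamma Angular Correlation Measurements With GRIFFIN) is conventionally quantified by the directional correlation function; in the standard notation (not reproduced in the abstract),

$$W(\theta) = \sum_{k\,\mathrm{even}} a_{k}\,P_{k}(\cos\theta) \simeq 1 + a_{2}P_{2}(\cos\theta) + a_{4}P_{4}(\cos\theta),$$

where $\theta$ is the angle between $\gamma_{1}$ and $\gamma_{2}$, the $P_{k}$ are Legendre polynomials, and the coefficients $a_{2}$ and $a_{4}$ depend on the spins of the three levels and the multipolarities of the two transitions, which is why measuring them constrains spin assignments.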
Temporal and Spatial Evolution of Poynting Flux Measured with Swarm Mr Matthew Patrick (University of Calgary) We present case studies of ionospheric Poynting flux using the instruments onboard the three ESA Swarm spacecraft. The three Swarm satellites each carry an Electric Field Instrument (EFI) that can be used to measure ion drift velocities. During the first months of the mission the satellites were in nearly circular, polar orbits at... 609. The Formation of Alzheimer's Plaques in Synthetic Membranes Jennifer Tang (McMaster University) Alzheimer's disease is a type of dementia that affects memory, thinking, and behaviour. One of the hallmarks of the disease is the formation of neurotoxic senile plaques, primarily consisting of amyloid-β peptides. Despite their importance for the pathogenesis of the disease, little is known about the properties of these plaques and the process by which they form. We developed a model... 788. Towards a Quantum Non-Demolition Measurement for Photonic Qubits Chetan Deshmukh (University of Calgary) Many applications of quantum information processing benefit from, or even require, the possibility to detect the number of photons in a given signal pulse without destroying either the photons or the encoded quantum state. We propose and show first steps towards the implementation of such a Quantum Non-Demolition (QND) measurement for time-bin qubits. To implement this measurement, we first store a... 830. Adaptive Matrix Transpose Algorithms for Distributed Multicore Processors John Bowman (University of Alberta) The matrix transpose is an essential primitive of high-performance parallel computing. In plasma physics and fluid dynamics, a matrix transpose is used to localize the computation of the multidimensional Fast Fourier transform, the engine that powers the pseudospectral collocation method. An adaptive parallel matrix transpose algorithm optimized for distributed multicore architectures... (see the sketch at the end of this block) 592. Alzheimer's Disease Amyloid Beta(25-35): An Oligomeric Aggregation Simulation From Monomers to Ordered Filaments Robert Girardin (Lakehead University) Amyloid Beta (A$\beta$) plaques have long been correlated with Alzheimer's disease; however, efforts made to link the plaques to pathogenic effects or utilize them for diagnostic purposes have not been very fruitful. Modern investigations point to peptide intermediate structures or their oligomers as likely candidates for the toxic agents, yet much remains unknown about the aggregation life... 640. Deep Core and PINGU - Studying Neutrinos in the Ice Ken Clark (University of Toronto) IceCube and its low energy extension DeepCore have been deployed at the South Pole and have been taking data since early 2010. Originally designed to search for high energy (on the order of PeV) events, IceCube has recently published the detection of the highest energy events ever recorded. At the same time, enhancements to the detector have been installed to focus on lower energy events. With a... 443. Evanescent Waveguide Microscopies for Bio-Application Prof. Silvia Mittler (University of Western Ontario) Two new evanescent field microscopy technologies based on glass slab waveguides with permanent coupling gratings are introduced: waveguide evanescent field fluorescence (WEFF) microscopy and waveguide evanescent field scattering (WEFS) microscopy. The technologies are briefly described and the experimental setup based on a conventional inverted microscope is introduced and compared to...
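As a point of reference for the transpose-based FFT decomposition described in abstract 830 above, here is a minimal serial sketch; all names are illustrative, and in a distributed-memory code the local transpose would be replaced by an all-to-all exchange of blocks between processes, which is precisely the communication step the adaptive algorithm optimizes.

```python
import numpy as np

# Serial analogue of the transpose-based 2D FFT used in pseudospectral codes.
# In a distributed implementation the rows of `f` are spread across processes,
# so the transpose below becomes an all-to-all communication (e.g. MPI_Alltoall);
# here everything is local and np.transpose stands in for that step.

def fft2_via_transpose(f):
    """Compute a 2D FFT as two sets of 1D FFTs separated by a transpose."""
    g = np.fft.fft(f, axis=1)   # 1D FFTs along the locally contiguous rows
    g = g.T.copy()              # transpose so the other dimension becomes local
    g = np.fft.fft(g, axis=1)   # 1D FFTs along what were originally the columns
    return g.T                  # transpose back to the original layout

f = np.random.rand(8, 8)
assert np.allclose(fft2_via_transpose(f), np.fft.fft2(f))
```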
702. High-Precision Half-Life Measurements for the Superallowed $\beta^+$ emitter $^{10}$C Michelle Dunlop (University of Guelph) High precision measurements of superallowed Fermi beta transitions between 0$^+$ isobaric analogue states allow for stringent tests of the electroweak interaction described by the Standard Model. In particular, these transitions provide an experimental probe of the unitarity of the Cabibbo-Kobayashi-Maskawa (CKM) matrix, the Conserved-Vector-Current (CVC) hypothesis, as well as set limits on... 727. Optimizing the wavelength-shifter thickness for alpha suppression in the DEAP-3600 detector Derek Cranshaw (Queen's University) The DEAP-3600 experiment is a spherical dark matter detector searching for WIMPs by detecting scintillation light in a 3600 kg mass of liquid argon. Before the ultraviolet scintillation light passes through the optically clear acrylic vessel and light guides to the surrounding photomultiplier tubes, it must pass through a wavelength-shifting layer of tetraphenyl butadiene (TPB). Trace amounts... 755. Protein Biosensing with Fluorescent-Core Microcapillaries Mr Stephen Lane (University of Alberta) Whispering gallery modes (WGMs) are the electromagnetic resonances of dielectric spheres, cylinders, or rings. The WGM wavelengths can shift when the resonant field interacts with a local analyte fluid. This work demonstrates a fluorescent core microcapillary that utilizes WGMs for biosensing applications. This device consists of a glass microcapillary with a 50-μm-diameter inner channel. The... 738. Small Scale Structuring in Electron Precipitation as seen by the ePOP Suprathermal Electron Imager Taylor Cameron (University of Calgary) Auroral arcs are known to be caused by electrons with keV energies interacting with the neutral atmosphere. However, there is much more to the aurora than auroral arcs. There is a wide range of phenomena that are grouped together as "diffuse aurora". Suprathermal electron precipitation (having energies between 1 eV and a keV) often contributes to the diffuse aurora. Much less is known about... 719. The Electrodynamometer of the McPherson Collection of Scientific Instruments Jean Barrette (McGill University) History of Physics / Histoire de la physique (DHP) The Weber electrodynamometer is an instrument used to measure the absolute value of electric power (or current if one knows the voltage). It works by measuring the mechanical torque between two pairs of coils induced by the current in the coils. This is one of the first research instruments bought in 1895 by the newly created McGill Physics Department, for £177. McGill College apparatus... 756. Cusp Ion Upflows Observed by e-POP SEI and RISR-N: Initial Results Yangyang Shen (University of Calgary) Low-energy ion upflows associated with ion heating processes in the cusp/cleft and polar cap regions are investigated using conjunctions of the Enhanced Polar Outflow Probe (e-POP) satellite and the Resolute Bay Incoherent Scatter Radar (RISR-N) in June 2014 and February 2015. e-POP encountered the cusp/cleft ion fountain at 10-14 MLT and around 1000 km altitude during these conjunction... 762. Dense Plasma Focus for Short-Lived Isotope Activation Mr R. A. Behbahani (University of Saskatchewan) Short-lived radioisotopes (SLRs) are used for medical applications including positron emission tomography (PET). The required activity of N-13 for PET is about 4 GBq for a myocardial blood perfusion assessment.
Dense plasma focus (DPF) has been considered as a low-cost method for producing SLRs as an alternative to conventional cyclotron facilities. A low energy dense plasma focus has been... 477. Experimental test of the unitarity of the leptonic mixing (PMNS) matrix Dr Akira Konaka (TRIUMF) In the past decade, remarkable progress has been made in neutrino oscillation measurements, determining the lepton mixing (PMNS) angles, except for the CP-violating phase $\delta_{CP}$. The next step is to determine this remaining phase and then over-constrain the PMNS matrix to test its unitarity. Testing the unitarity is an effective way to search for physics beyond the standard model, as is... 787. Kinetics of Chain Motions within the disordered eIF4E-BP2 protein Zhenfu Zhang (University of Toronto) Intrinsically disordered proteins (IDPs) play critical roles in regulatory protein interactions. Cap-dependent translation initiation is regulated by the interaction of eukaryotic initiation factor 4E (eIF4E) with disordered eIF4E binding proteins (4E-BPs) in a phosphorylation-dependent manner. Fluorescence correlation and time-resolved anisotropy spectroscopies were used to detect and assess... 700. The First Radioactive Beam at GRIFFIN: $^{26}$Na for Decay Spectroscopy Nikita Bernier (TRIUMF) GRIFFIN (Gamma-Ray Infrastructure For Fundamental Investigations of Nuclei) was installed at the TRIUMF-ISAC-I facility during 2014 and represents a major upgrade to the 8$\pi$ spectrometer which operated at ISAC for the past decade. With an array of 16 large-volume hyper-pure germanium (HPGe) clover detectors, instrumented with a state-of-the-art digital data acquisition system, the array... 704. Ultrafast modulation of photoluminescence in semiconductors by intense terahertz pulses Mr David Purschke (University of Alberta) Terahertz (THz) pulse science is a rapidly developing field, and has been applied extensively in the characterization of ultrafast dynamics in semiconductors and nanostructures. The recent development of intense THz pulse sources in lithium niobate (LN), however, allows the dynamics of transient states to be directly manipulated by the large electric field of the THz pulse itself. We have used... John Root (Canadian Nuclear Laboratories) Science Policy: Where do you fit? / Politique scientifique : où vous situez-vous? The Canadian federal government recently announced its decision to shut down the NRU reactor in 2018. The National Research Universal (NRU) reactor commenced operation in 1957, to provide neutrons for several missions simultaneously, including the production of neutron beams to support fundamental experimental research on solids and liquids. Today, the Canadian Neutron Beam Centre manages... 906. How physicists can have a stronger voice in Ottawa Ted Hsu (MP, Kingston and the Islands) How can physicists have a stronger voice in government? We should be regularly posing this question before government budget decisions, before funding deadlines for scientific projects or infrastructure loom. Scientists in general, not to mention physicists, are not represented in large numbers in the country's legislatures, nor are they commonly sighted in the halls and offices of... 907.
Inelastic Collisions Outside Academe: Alternate Careers for Physicists Dr Gary Albach (Former President and CEO, Alberta Innovates – Technology Futures) Physics Career Paths: Stories of where a Physics Education can lead / Cheminements de carrière en physique: où des études en physique pourraient vous mener Many physicists find rewarding careers outside universities, in industry and government service. Gary Albach has collected paycheques from all three sectors and shares job-seeking tips plus the terrors and challenges of the inelastic collisions within his career as a "lapsed physicist". 908. Intro: 2 other stories of where physics training led (and how) - Shalon MacFarlane (Technology Transfer Manager, TEC Edmonton) - Ted Hsu (Member of Parliament, Kingston and the Islands) 909. Panel / Round-Table Discussion (Albach, MacFarlane, Hsu) 793. Advanced Instrumentation Techniques developed for SNOLAB science programmes. W1-9 Advanced Instrumentation at Major Science Facilities: Detectors II (DIMP) / Instrumentation avancée dans des installations scientifiques majeures: détecteurs II (DPIM) SNOLAB is a deep underground research facility, based at a depth of 2km in the Vale Creighton mine, near Sudbury, Ontario. The SNOLAB research programme is primarily based around particle and astroparticle physics projects studying the Galactic dark matter, neutrinoless double beta decay and natural sources of neutrinos. Several leading edge technologies have been developed to study these... 465. Black hole chemistry: thermodynamics with Lambda David Kubiznak (Perimeter Institute) W1-4 Gravity I (DTP) / Gravité I (DPT) The mass of a black hole has traditionally been identified with its energy. We describe a new perspective on black hole thermodynamics, one that identifies the mass of a black hole with chemical enthalpy, and the cosmological constant as thermodynamic pressure. This leads to an understanding of black holes from the viewpoint of chemistry, in terms of concepts such as Van der Waals fluids,... 527. Chiral symmetry breaking and the quantum Hall effect in monolayer graphene Prof. Malcolm Kennett (Simon Fraser University) W1-1 Carbon-based materials (DCMMP) / Matériaux à base de carbone (DPMCM) When graphene is placed in a strong magnetic field it exhibits quantum Hall states at fillings that can be predicted by taking into account the relativistic form of the low-energy electronic excitations and ignoring interactions between electrons. However, additional quantum Hall states are observed at filling fractions $\nu$ = 0 and $\nu$ = $\pm$1 that are not explained within a picture of... 456. High-fidelity single-shot Toffoli gate via quantum control Barry Sanders (University of Calgary) W1-10 Quantum Optics and Cavity QED (DAMOPC) / Optique quantique et ÉDQ en cavité (DPAMPC) A single-shot Toffoli, or controlled-controlled-NOT, gate is desirable for classical and for quantum information processing. The Toffoli gate alone is universal for reversible computing and, accompanied by the Hadamard gate, are universal for quantum computing. The Toffoli gate is a key ingredient for (non-topological) quantum error correction. Currently Toffoli gates are achieved by... 679. 
Learning from our neighbours about "Building a Thriving Undergraduate Physics Program" Daria Ahrensmeier (Simon Fraser University) W1-8 Strengthening Physics Departments (DPE) / Renforcer les départements de physique (DEP) Many Physics Departments in Canada and around the world face similar challenges: low enrollment numbers for physics majors, long graduation times, retention problems, and the desire – or pressure – to improve the student experience. In February 2015, a team from the Physics Department at SFU attended the "Building a Thriving Undergraduate Physics Program" workshop organized by the APS and... 800. Optical measurement of spin-orbital torque in magnetic bilayers John Q Xiao (University of Delaware) W1-6 Devices (DCMMP) / Dispositifs (DPMCM) Spin-orbital coupling driven toques have been observed in magnetic bilayers consisting of a ferromagnet (FM) and heavy metal (HM) or topological insulator (TI). It has been demonstrated that the spin-orbit torques driven by an in-plane current can switch magnetization, manipulate magnetic domains and excite magnetization auto-oscillation. However, the microscopic mechanism for the spin-orbit... 570. Optimal control of microscopic nonequilibrium systems David Sivak (Simon Fraser University) W1-2 Soft Condensed Matter and Soft Interfaces (DMBP-DCMMP) / Matière condensée molle et interfaces molles (DPMB-DPMCM) Molecular machines are protein complexes that convert between different forms of energy, and they feature prominently in essentially any major cell biological process. A plausible hypothesis holds that evolution has sculpted these machines to efficiently transmit energy and information in their natural contexts, where energetic fluctuations are large and nonequilibrium driving forces are... 502. Summary of ATLAS-Canada Upgrades William Axel Leight (Carleton University (CA)) W1-5 Energy frontier: Future developments (PPD) / Frontière d'énergie: développements futurs (PPD) Planned upgrades to the LHC will significantly increase its luminosity, leading to large increases in particle flux which will pose a challenge for the ATLAS detector. Phase-1 upgrades, to be installed during the upcoming Long Shutdown 2 during 2018 and 2019, will both enhance the ATLAS trigger system to handle these higher luminosities without damaging physics performance as well as... 795. The EXO Search for Neutrinoless Double Beta Decay W1-7 Neutrinoless Double-beta Decay II (PPD-DNP) / Double désintégration beta sans neutrino II (PPD-DPN) The Enriched Xenon Observatory (EXO) effort continues to develop techniques and technology towards the search for neutrinoless double beta decay. Discovery of this process would reveal new properties of neutrinos including first measurement of the neutrino mass scale, evidence that neutrinos are Majorana particles, and first measurement of a lepton number violating process. Searching for... 729. Up, out, and away: Probing the initial stages of ion outflow with Swarm Johnathan Burchill (University of Calgary) W1-3 Special session to honor Dr. Akira Hirose III (DASP-DPP) / Session spéciale en l'honneur du Dr Akira Hirose III (DPAE-DPP) Earth's ionized upper atmosphere is ablated into space at a rate of 1 kilogram per second. This outflowing flux is comprised of significant amounts of heavy ions, such as oxygen, whose source populations in the F region ionosphere are sufficiently cool (~1000 K) to be strongly gravitationally bound to the Earth. Here lies a mystery: What processes operate on the cool, dense, low-altitude (*F*... 
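As a rough order-of-magnitude check of the "strongly gravitationally bound" statement in abstract 729 above, consider singly charged atomic oxygen ions at roughly 1000 K near 1000 km altitude (illustrative values only):

$$v_{\rm th}=\sqrt{\frac{2k_BT}{m_{\rm O^+}}}\approx\sqrt{\frac{2\,(1.38\times10^{-23}\ {\rm J/K})(1000\ {\rm K})}{2.7\times10^{-26}\ {\rm kg}}}\approx 1\ {\rm km/s},\qquad v_{\rm esc}=\sqrt{\frac{2GM_\oplus}{R_\oplus+1000\ {\rm km}}}\approx 10\ {\rm km/s},$$

so thermal O$^+$ at F-region temperatures falls an order of magnitude short of escape speed, which is why additional energization is needed to drive the observed outflow.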
876. Building Thriving Undergraduate Physics Programs Dr Theodore Hodapp (American Physical Society) There are about 760 colleges and universities that offer an undergraduate degree in physics in the United States. Some are small and educate only a few physics students each year, while others attract, educate, and graduate dozens. The American Physical Society and the American Association of Physics Teachers have organized a number of workshops to help departments build and improve their... 481. Cavity-induced spin-orbit coupled Bose-Einstein condensation: A new approach for exploring cold atoms Farokh Mivehvar (University of Calgary) The atom-photon interaction is significantly amplified when the radiation field is confined inside a high-finesse cavity, resulting in a complex, coupled dynamics of the quantized matter and radiation fields. This coupled dynamics in turn mediates long-ranged interactions between atoms. Here we demonstrate how to simultaneously induce spin-orbit (SO) coupling and long-ranged interactions in a... 625. Interfacing organic and carbon-based nanomaterials towards their applications in sustainable energy Prof. Giovanni Fanchini (The University of Western Ontario) From the 1996 Nobel Prize for Chemistry, recognizing the synthesis of fullerene, to the 2010 Nobel Prize for Physics, awarded for the discovery of graphene, carbon-based nanomaterials have evolved into one of the hottest areas in materials science. Utilization of large-area graphene thin films as transparent conductors in solar cells and energy-efficient light emitting devices may take... 493. Kinetic modelling of Solar Orbiter space environment Prof. Richard Marchand (University of Alberta) Solar Orbiter is a European Space Agency (ESA) mission scheduled for launch early in 2017, with the objective of providing observations of unprecedented quality of the sun and solar wind at locations ranging from 0.28 to 0.9 astronomical units (AU). During its journey, Solar Orbiter will experience a vast range of space-environment conditions, including intense radiation and solar wind plasma... 747. Modeling the Leaching of 222Rn Daughters into the SNO+ Detector Pouya Khaghani SNO+ is a multi-purpose neutrino experiment which is located at SNOLAB in Sudbury, Ontario. Using 780 tonnes of organic liquid scintillator, SNO+ will search for neutrinoless double beta decay of 130Te. In addition, measurements of low-energy solar neutrinos are planned for the second phase. Looking for rare events requires very stringent background limits. One of the sources originates from... 661. Nonlinear Optomechanics for Quantum Nondemolition Measurements Mr Bradley Hauer (University of Alberta) Since its inception in the early 1900s, the theory of quantum mechanics has provided an excellent model for very small objects, such as single atoms or molecules. However, as we move to larger and larger systems, we eventually return to the classical realm, where Newtonian mechanics takes over. In order to probe this quantum-to-classical crossover, we propose an experiment by which we can... 444. OMCVD AuNP Grown on Polymer Substrates: An approach Towards Mass Fabrication of Biosensors Organometallic chemical vapour deposited (OMCVD) gold nanoparticles (AuNPs) can be successfully used for biosensing. For the sensing mechanism, an absorption feature, the localized surface plasmon resonance (LSPR), is implemented.
Typically in this bio-sensing approach is monitoring the changes in the peak position of the LSPR via absorption spectroscopy during a highly specific binding... 471. Restricted Weyl Invariance in Four-Dimensional Curved Spacetime Prof. Ariel Edery (Bishop's) We discuss the physics of *restricted Weyl invariance*, a symmetry of dimensionless actions in four dimensional curved space time. When we study a scalar field nonminimally coupled to gravity with Weyl(conformal) weight of $-1$ (i.e. scalar field with the usual two-derivative kinetic term), we find that dimensionless terms are either fully Weyl invariant or are Weyl invariant if the conformal... 749. Test beam performance measurements of novel Thin Gap Detectors for the ATLAS experiment upgrade. Sebastien Rettie (University of British Columbia (CA)) The planned luminosity increase of the LHC will allow the precise measurement of Higgs boson properties and extend the search for new physics phenomena beyond the standard model. To maintain excellent detection and background rejection capability in the forward region of the ATLAS detector, part of the muon detection system is scheduled to be upgraded during the LHC long shutdown period of... 567. Ultra-Low Background Counting and Assay Studies At SNOLAB Dr Ian Lawson (SNOLAB) Experiments currently searching for dark matter, studying properties of neutrinos or searching for neutrinoless double-beta decay require very low levels of radioactive backgrounds both in their own construction materials and in the surrounding environment. These low background levels are required so that the experiments can achieve the required sensitivities for their searches. SNOLAB has... 565. In vivo manipulations of single cells using an all-optical platform Dr Raphael Turcotte (Advanced Microscopy Program, Wellman Center for Photomedicine and Center for Systems Biology, Massachusetts General Hospital, Harvard Medical School, MA) The hematopoietic stem cell niche is a specialized bone marrow microenvironment where blood-forming cells reside. Interactions between these rare cells and their niche need to be studied at the single-cell level. While live animal cell tracking with optical microscopy has proven useful for this purpose, a more thorough characterization requires novel approaches. This can be accomplished by... 588. Maintaining Clean Rooms at the SNOLAB Underground Laboratory Chris Jillings (SNOLAB) The SNOLAB underground laboratory has 50 thousand square feet of floor space all kept as a class 2000 clean room. This suppresses backgrounds from Uranium and Thorium which comprise approximately 1ppm of mine dust and a few ppm of concrete. The systems used to maintain and monitor the cleanliness will be discussed as well as results from cleanliness audits. Air in mines typically has much... 636. Optomechanics in a dilution refrigerator Allison MacDonald (University of Alberta) Recent developments in fabrication techniques have allowed reductions in the size of mechanical resonators to the micro- and nano-scale facilitating their use as exquisite sensors of a variety of phenomena. When coupled to optical cavities, these resonators become an attractive means for investigating the limits of quantum mechanics, as well as powerful tools for building hybrid quantum... 877. Panel 753. Performance of the first Canadian-made muon chamber prototype for the ATLAS experiment upgrade. Benoit Lefebvre (McGill University (CA)) 664. 
PLASMA TECHNOLOGY: ITS CURRENT and FUTURE IMPACT ON VARIOUS INDUSTRIES Andranik Sarkissian (PLASMIONIQUE Inc) Plasma, also known as the fourth state of matter, constitutes over 99.9% of our universe. However, our planet appears to be an exception to the rule, where plasma, in its natural state, exists only rarely. The incentive for humans to generate and study the plasma state started as scientific curiosity, but it has grown into an important enabling industrial tool, with applications in a variety of... 571. Revisiting the Solar System and Beyond Prof. Réjean Plamondon (École Polytechnique de Montréal) In a recent paper [1] on the modeling of emergent trends in complex biological and physical systems, an overview of the Bayesian approach [2] that we have used to describe the space-time curvature of a static spherically symmetric massive system was presented. The first part of this talk summarizes the main steps of this development. Reinterpreting Einstein's gravitation equation in the... 547. The effect of off-resonant excitation on intensity-intensity correlation spectra in three-level, lambda systems Christopher DiLoreto (University of Windsor) Developing methods for the detection of single molecules interacting with the environment has been a large area of research. These methods are quite varied in their execution and include antigen binding, surface plasmon resonance, and fluorescence, among many others. These methods all take advantage of the fact that molecular processes often change how a substrate interacts with light when a... 728. Using 60Co as a high precision calibration device for the SNO+ detector Logan Sibley (University of Alberta) Is the neutrino its own anti-particle? The SNO+ detector at SNOLAB is set to join the international competition of experiments seeking to answer this fundamental question. Its sheer size – a 12 m diameter acrylic sphere holding 780 t of liquid scintillator containing more than 2 t of dissolved tellurium and observed by nearly 9500 photomultiplier tubes – puts SNO+ among those experiments with... 808. Entangled photon triplets: a new quantum light source and a test of nonlocality Prof. Kevin Resch (University of Waterloo) Entanglement is required for most applications in quantum information science. In optics, the most widespread source of entangled photon pairs is the nonlinear optical process of parametric down-conversion. Through this process, single high energy pump photons are converted into pairs of lower energy photons. In the first part of my talk I will describe our realization of cascaded... 499. Linear perturbations of type IIB SUGRA in flux compactifications Bradley Cownden (University of Manitoba) We consider linear perturbations of the background type IIB SUGRA solutions and find the equations of motion for the moduli. In particular, we allow for spacetime fluctuations of the positions of the D3-branes in the compact dimensions. We postulate an ansatz for the 5-form flux due to the motion of the D3-branes, and a corresponding first-order part of the metric. The movement of the... 542. Silicon Nitride microdisk resonators as refractometric sensors in water Callum Doolin (University of Alberta) Optical microdisks are intriguing devices due to their small size, compatibility with standard planar micro/nano-fabrication techniques, and ability to support high-Q whispering gallery modes.
We fabricate and explore the use of thin (130 nm) Si$_3$N$_4$ disks with diameters of 15 - 30 μm as refractive index sensors, using a dimpled-tapered fiber to couple with the optical modes under water.... 576. Status of the Future Linear Collider Prof. Dean Karlen (University of Victoria (CA)) The physics potential for an electron-positron linear collider operating at energies above LEP-II has been established for more than a decade. The case is even stronger now that we know the Higgs boson has a mass that allows a linear collider to make precision studies of its properties. There is renewed excitement in this project with the Japanese Government considering hosting the facility.... 631. The oligomeric composition of the M2 muscarinic receptor and the G protein signalling complex: a single molecule study Mr Dennis D. Fernandes (Department of Physics, University of Toronto) The role of oligomers in signalling via G protein-coupled receptors (GPCRs) has been under debate. The oligomeric size of several GPCRs has been studied using fluorescence-based techniques, which have provided inconsistent results, while their attendant G proteins have received little attention. In this study, the oligomeric nature of the GPCR signalling complex comprising the M2 muscarinic... 623. Third Harmonic Terahertz Response Optimization of Doped Monolayer Graphene Prof. Marc Dignam (Queen's University) Due to the linear dispersion of graphene, it has recently been predicted that third harmonic generation should be observed in monolayer graphene [1]. Although recent experiments have demonstrated third harmonic generation in a 45-layer thick sample [2], there has been no indication of harmonic generation in monolayer graphene [3]. To understand why this is the case and to try to maximize the... 651. Effect of lipid composition on peptide-induced coalescence in bicellar mixtures Chris Miranda (Memorial University of Newfoundland) Transfer of lipid material between multilamellar reservoirs and the surface active layer is required to maintain the functional surfactant layer in alveoli. Surfactant Protein B (SP-B) is believed to facilitate the interlayer contact implicit in such activity. The way in which SP-B might promote trafficking of surfactant material was investigated using mixtures of bilayered micelles containing long-... 809. Gravitational-wave searches for Binary Black Holes: waveform models Dr Prayush Kumar (Canadian Institute for Theoretical Astrophysics) The direct detection of gravitational waves is imminent with the network of Advanced LIGO, Virgo, GEO, KAGRA detectors coming online. Coalescing binaries of stellar-mass black holes are one of the flagship sources for these terrestrial detectors. Planned detection searches that will be performed over the instrument data rely on models of inspiraling compact binaries. In this talk I would... 550. Investigation of the interaction between the single walled carbon nanotube and conjugated oligomers using various dispersion correction DFT methods Mr Mohammad Zahidul Hossain Khan (Memorial University) The area of carbon nanotube (CNT)-polymer composites has been progressing rapidly in recent years. Pure CNT and CNT-polymer composites have many useful (industry-related) properties ranging from good electrical conductivity to superior strength. However, the full potential of using pure CNTs has been severely limited because of complications associated with the dispersion of CNTs. CNTs tend...
Three-dimensional scanning near field optical microscopy imaging of random arrays of copper nanoparticles and their use for plasmonic solar cell enhancement Mr Sabastine Ezugwu (The University of Western Ontario) In order to investigate the suitability of random arrays of nanoparticles for plasmonic enhancement in the visible-near infrared range, we introduced three-dimensional scanning near-field optical microscopy (3D-SNOM) imaging as a useful technique to probe the intensity of near-field radiation scattered by random systems of nanoparticles at heights up to several hundred nm from their surface... 852. ZnO Thin Film Samples Produced by Pulsed Laser Deposition for Ultrafast Laser High Harmonics Generation Studies Mr Zachary Tchir (University of Alberta) Ultrafast laser-generated high harmonics are currently a hot research topic. Higher-order harmonic generation in a 0.5 mm thick ZnO bulk crystal irradiated by ultrashort pulses from a mid-IR laser has been reported recently [1]. Because of the long interaction length in the bulk media the effects from self-phase modulation become significant and the consequences are deformation of the ... 886. Neutrino Physics: On Earth and in the Heavens Prof. Wick Haxton (University of Washington / University of California) W-PLEN Plenary Session - Wick Haxton, Univ. of Washington and Univ. of California / Session plénière - Wick Haxton, Univ. de Washington et Univ. de Californie The discovery 15 years ago that neutrinos have mass and can spontaneously change their flavors has led to intense activity in nuclear and particle physics, including plans for powerful neutrino beams for long-baseline oscillation experiments and for ton-scale ultraclean underground detectors for double beta decay studies. Our improved knowledge of neutrinos has also enabled us to understand... 880. Exploring wave phenomena in complex, strongly scattering materials using ultrasound Dr John Page (University of Manitoba) W-MEDAL1 CAP Medal Talk - John Page, U. Manitoba (Brockhouse Medal Recipient/Récipiendaire de la médaille Brockhouse) Waves in complex media are often strongly scattered due to mesoscopic heterogeneities, leading to unusual phenomena which continue to fascinate us and enrich our basic understanding of the wave physics of condensed matter. Examples range from strikingly large variations in wave speeds to unusual refraction and tunneling effects, and even to the complete inhibition of wave propagation, due to... 881. QCD under extreme conditions: Hot, shiny fluids and sticky business Charles Gale (McGill University) W-MEDAL2 CAP Medal Talk - Charles Gale, McGill U. (CAP-CRM Prize in Theoretical and Mathematical Physics Recipient / Récipiendaire Prix ACP-CRM en physique théorique et mathématique) The phase diagram of Quantum Chromodynamics (QCD, the theory of the strong interaction) is only poorly known. There is currently a vibrant experimental and theoretical program that concentrates on the study and the characterization of the quark-gluon plasma, a fundamental state of matter that existed a few microseconds after the Big Bang. I will describe the theoretical aspects of this... 537.
A New format for Reporting in a First Year Physics Laboratory Dr Marzena Kastyak-Ibrahim (University of Manitoba) W2-8 Labs and/or undergraduate research experiences (DPE) / Expériences de recherche en laboratoire et/ou au premier cycle (DEP) A new approach for the preparation and submission of laboratory reports for a first year Physics course has been developed at the University of Manitoba. Students are able to complete the data collection and analysis as well as the preparation and submission of the lab report during the three hour lab period. The students are provided with a template prepared in Microsoft Excel to help analyze... 866. A review of imaging methods in analysis of works of art. Thermographic imaging method in art analysis. Dr Dmitry Gavrilov (University of Windsor, Windsor, Canada) W2-9 General Instrumentation and Measurement Physics (DIMP) / Physique générale des instruments et mesures (DPIM) Methods of non-destructive analysis and quality control are an inherent part of progress in development of modern materials for science and industry. Many of these methods and techniques find new, sometimes unexpected, applications in other fields as well. This presentation is devoted to an outline of those imaging methods which made their paths from science and industry to the delicate... 533. Casting Light on Antimatter: Fundamental Physics with the ALPHA Antihydrogen Trap Dr Makoto Fujiwara (TRIUMF) W2-4 Testing Fundamental Symmetries I (DNP-PPD) / Tests de symétries fondamentales I (DPN-PPD) ALPHA is an international project at CERN, with substantial Canadian involvement, whose ultimate goal is to test symmetry between matter and antimatter via comparisons of the properties of atomic hydrogen with its antimatter counter-part, antihydrogen. After several years of development, we have recently achieved significant milestones, including the first stable confinement of... 512. Gravitational Waves Probes of Extreme Gravity Physics Prof. Nicolas Yunes (Montana State University) W2-3 Gravity II (DTP) / Gravité II (DPT) Einstein's theory has passed all tests to date in the quasi-stationary weak-field, where gravitational dynamics are weak and quadrupolar, while velocities are small relative to the speed of light. The highly non-linear and dynamical regime of the gravitational interaction, however, remains mostly unexplored. The imminent detection of gravitational waves will open a window into this regime that... 475. Measurement of Mesospheric Ozone Using Meteor Decay Times Prof. Wayne Hocking (University of Western Ontario) W2-5 Atmospheric physics (DASP) / Physique atmosphérique (DPAE) Mesospheric Ozone has significant chemical impact in the upper atmosphere, and its seasonal and annual variability needs to be better understood. Many large and expensive satellites have been flown over the last 20 years in search of such measurements. By carefully investigating the role of ozone in the high-temperature, hypersonic environment of overdense meteors, we have been able to... 765. Measuring flow and yielding with coherent x-rays Michael Rogers (University of Ottawa) W2-11 Microfluidics and Driven Motion (DMBP) / Microfluidique et mouvement forcé (DPMB) When coherent radiation is scattered by particles, its scattering pattern is modulated by *speckle*. If the scattering particles move, the speckle will change accordingly. This principle forms the basis of XPCS (X-ray photon correlation spectroscopy), which utilizes bright, coherent X-rays to probe nanoscale particle motion. 
We will discuss our recent efforts to extend XPCS in two different... 645. Recent Results from IceCube Claudio Kopper (University of Alberta) W2-7 Cosmic Frontier: Astrophysics and Neutrinos (PPD) / Frontière cosmique : astrophysique et neutrinos (PPD) The spectrum of cosmic rays includes the most energetic particles ever observed. The mechanism of their acceleration and their sources are, however, still mostly unknown. Observing astrophysical neutrinos can help solve this problem. Because neutrinos are produced in hadronic interactions and are neither absorbed nor deflected, they point directly back to their source. Neutrinos may also be... 695. Solenoid-free Plasma Start-up in NSTX using Transient Coaxial Helicity Injection (CHI) Dr Roger Raman (University of Washington) W2-6 Special session to honor Dr. Akira Hirose IV (DPP) / Session spéciale en l'honneur du Dr Akira Hirose IV (DPP) Transient Coaxial Helicity Injection (CHI) in the National Spherical Torus Experiment (NSTX) has generated toroidal current on closed flux surfaces without the use of the central solenoid. When induction from the solenoid was added, CHI initiated discharges in NSTX achieved 1 MA of plasma current using 65% of the solenoid flux of standard induction-only discharges. In addition, the... 657. Spin Currents in Magnetic Insulator/Normal Metal Heterostructures Dr Sebastian Goennenwein (Walther-Meissner-Institut, Bayerische Akademie der Wissenschaften, Garching, Germany) W2-1 Spintronics and spintronic devices (DCMMP) / Spintronique et technologies spintroniques (DPMCM) The generation and the detection of pure spin currents are fascinating challenges in modern solid state physics. In ferromagnet/normal metal thin film heterostructures, pure spin currents can be generated, e.g., by means of spin pumping [1], or via the application of thermal gradients in the so-called spin Seebeck effect [2]. An elegant scheme for detecting spin currents relies on the inverse... 478. SPIN-ROTATION HYPERFINE SPLITTINGS AT MODERATE TO HIGH J VALUES IN METHANOL Li-Hong Xu (University of New Brunswick) W2-10 Spectroscopy and Optics (DAMOPC) / Spectroscopie et optique (DPAMPC) In this talk we present a possible explanation, based on torsionally mediated proton-spin-overall-rotation interaction operators, for the surprising observation in Nizhny Novgorod several years ago[1] of doublets in some Lamb-dip sub-millimeter-wave transitions between torsion-rotation states of E symmetry in methanol. These observed doublet splittings, some as large as 70 kHz, were later... 604. Wigner function negativity and contextuality in quantum computation on rebits Robert Raussendorf (UBC) W2-2 Quantum Information and Quantum Computation (DTP-DCMMP) / Information et calcul quantique (DPT-DPMCM) We describe a universal scheme of quantum computation by state injection on rebits (states with real density matrices). For this scheme, we establish contextuality and Wigner function negativity as computational resources, extending results of [M. Howard et al., Nature 510, 351–355 (2014)] to two-level systems. For this purpose, we define a Wigner function suited to systems of n rebits, and... 737. Atomic Recoil Processes following He-6 Beta Decay Dr Gordon Drake (University of Windsor) There are currently several experiments in progress to search for new physics beyond the Standard Model by high precision studies of angular correlations in the $\beta$ decay of the helium isotope $^6{\rm He} \rightarrow\, ^6{\rm Li} + e^- + \bar{\nu}_e$. 
An essential part of the analysis is to understand the energy distribution and spectra of the recoil ions. After the $\beta$ decay... 616. Paperless physics laboratory course using the Blackboard resources. Natalia Krasnopolskaia (University of Toronto) Creating a paperless laboratory course does not only mean "going green" with an environmentally friendly attitude, but also opens up many new opportunities for achieving additional learning outcomes. Among the other important advantages of the paperless course are reducing the lab report preparation time for students and optimizing the grading process for TAs/Lab Demonstrators. The paperless... 628. **WITHDRAWN** Upper Level Undergraduate Physics Laboratories Competencies Dr Tetyana Antimirova (Ryerson University) The undergraduate laboratory is an integral part of the physics curriculum because of the experimental nature of physics as a discipline. The importance of the laboratory component in physics education cannot be overemphasized. Considerable efforts were made and resources invested in revitalizing the introductory undergraduate laboratories and integrating these experiences... 801. Controlling magnetization dynamics with artificial pinning sites in thin magnetic disks Fatemeh Fani Sani (UofA) Spatial inhomogeneity on the nanoscale is intrinsic to most thin film materials used in nanomagnetic technology, and can affect both static and dynamic magnetic responses [1]. Understanding the detailed effects of local inhomogeneity becomes increasingly important as device dimensions shrink. Focused ion beam irradiation is an elegant way to carry out local magnetic patterning to create highly... 709. High Precision Weak Charge Measurements using Parity Violating Electron Scattering: Looking for Signatures of New Physics at the Precision Frontier Measurements of the parity violating electron-proton and electron-electron asymmetries in the number of scattered electrons can be extremely sensitive probes for signatures of new physics beyond the standard model up to mass scales as high as $\Lambda/g \simeq 7.5~\mathrm{TeV}$. The basic reason for this is that the measured asymmetry has a simple and, within the Standard Model (SM), precisely... 492. Impacts of STOR tokamaks on fusion research Prof. Osamu Mitarai (Tokai University) The STOR-1M and STOR-M tokamaks have had an impact on fusion research in two areas: [1] alternating current (AC) operation, and [2] central solenoid (CS)-less plasma current start-up in a tokamak. I would like to look back on this research and talk about its possible future impacts. [1] AC operation in STOR-1M and STOR-M: In the design of the STOR-1M and -M tokamaks, the iron core image field... 707. Long-Term Stability of Backgrounds in the IceCube Neutrino Observatory Benedikt Riedel (University of Alberta) The IceCube Neutrino Observatory is a cubic-kilometre-scale neutrino telescope completed in the Austral summer of 2010/2011. The detector forms a lattice of 5,160 photomultiplier tubes (PMTs) installed in the South Polar ice cap at depths from 1450 to 2450 m. IceCube is designed to detect astrophysical neutrinos upward of 100 GeV and to study neutrino oscillations with atmospheric neutrinos... 781.
Making and using atomically defined nanotips Radovan Urban (University of Alberta) Atomically defined tips have gained significant attention over the past decades for use as gas field ion sources (GFISs) for helium ion microscopy (HIM) and non-staining ion beam writing applications, as electron sources for high resolution SEM, TEM and electron holography, as well as in scanning probe microscopy, namely STM and AFM. Single atom tips (SATs) represent a unique subgroup of atomically... 500. Orbital Interference Effects in Nanowire Josephson Junctions for Exploring Majorana Physics Jonathan Baugh (University of Waterloo) The Josephson effect in a nanowire-based superconductor-normal-superconductor (SNS) junction is studied theoretically and experimentally, focusing on the effects of nanoscale confinement on the current-phase relationship of the junction. We identify a new type of Josephson interference based on the coupling of an applied axial magnetic flux to N-section Andreev quasiparticles (bound and... 497. Searching for Nanohertz Gravitational Waves Using Pulsars Ingrid Stairs (UBC) Millisecond radio pulsars (MSPs) exhibit tremendous rotational stability and therefore provide a means to detect gravitational waves passing near the Earth. Three collaborations are using the world's largest radio telescopes to observe arrays of MSPs with the goal of making such a detection. This talk will review the expected gravitational-wave sources, the methodology and the... 574. Structure Determination from Single-Particle X-ray Imaging Dr Ahmad Hosseinizadeh (Univ. of Wisconsin, USA) X-ray free electron lasers, such as the Linac Coherent Light Source at SLAC, can be used to generate diffraction patterns of non-crystalline biological objects, like viruses. The main goal is then to extract the high-resolution 3D structure of such particles by assembling 2D diffraction data with random orientations. However, structure determination still remains a major challenge, and powerful... 785. The spectrum of $^{15}$NH$_3$ in the 66-2000 cm$^{-1}$ region Ammonia is indeed a ubiquitous molecule; it can be found in various astrophysical objects such as planetary atmospheres, comets and the interstellar medium, and its presence in exoplanets and in the atmospheres of cold stars must be taken into serious consideration. The $^{15}$N isotopic variety could be very important since it gives access to the $^{14}$N/$^{15}$N ratio in the universe. Although lists of... 684. Wind and Gravity Wave Observations with ERWIN-II Samuel Kristoffersen (University of New Brunswick) The ERWIN-II (improved E-Region Wind Interferometer) is a Michelson interferometer located at the Polar Environment Atmospheric Research Laboratory (PEARL) in Eureka, Nunavut. It measures the airglow irradiance and winds via Doppler shifts in the airglow emissions – green line (557.7 nm) at a height of ~97 km, O2 (560 nm) at ~94 km, and OH (543 nm) at ~87 km. These measurements are made at a very... 716. Direct reconstruction - a new event reconstruction algorithm for the IceCube Neutrino Observatory Sarah Nowicki (University of Alberta) The IceCube detector is designed to detect very high-energy neutrino events (exceeding 1 PeV) from astrophysical sources. DeepCore, a low-energy array, was designed to extend the reach of IceCube down to ~10 GeV. Data analyses at the low energies have unique challenges compared to their high-energy counterparts, including achieving robust reconstructions of the energy and angular properties... 600.
Generalized Waveguide Theory including Electromagnetic Duality Mrs Nafiseh Sang-Nourpour (Institute for Quantum Science and Technology, University of Calgary, Alberta T2N 1N4, Canada ; Photonics Group, Research Institute for Applied Physics and Astronomy, University of Tabriz, Tabriz 51665-163, Iran) We introduce a general theory for describing modes and characteristics of linear, homogeneous, isotropic waveguide materials and metamaterials with slab and cylindrical geometries. Our theory accommodates exotic media such as double-negative-index and near-zero-index metamaterials as special cases, and we demonstrate that our general theory exhibits electromagnetic duality that would... 798. Imaging mesospheric winds using the Michelson Interferometer for Airglow Dynamics Imaging (MIADI) Mr Jeffery Langille (UNB) The Michelson Interferometer for Airglow Dynamics Imaging (MIADI) is a ground based optical instrument designed to obtain two dimensional images of the line of sight Doppler wind and irradiance field in the mesosphere. The intention of the instrument is to measure the perturbations in the airglow due to the presence of gravity waves. In its current configuration, the instrument observes a ~80... 644. Investigating the correlation between molecular structure and mechanical properties of collagen using optical tweezers Dr Marjan Shayegan (Postdoctoral Researcher) Collagens represent a prominent family of fibrous structural proteins present in the majority of connective tissues in mammals that contribute to their mechanical behaviors. Collagen self-assembles into well-defined structures including fibrils, which makes it an excellent example of a hierarchical biological system with a broad range of functions. It is known that fibril formation kinetics... 553. Performance Assessment of a Silicon-Based Piezoresistive MEMS Strain Sensor Mr Ronald Delos Reyes (University of Alberta, Department of Chemical and Materials Engineering) The performance of a silicon-based MEMS strain sensor was assessed using static and dynamic strain. Static strain response of the sensor was tested under static bending and uniaxial loading conditions. Dynamic strain response was evaluated at frequencies of 10 Hz, 63 Hz, and 175 Hz using an aluminum cantilever beam mounted on an electrodynamic shaker. The static strain response showed that the... 691. Spin pumping in electrodynamically coupled magnon-photon systems Dr Lihui Bai (University of Manitoba) Lihui Bai$^{1}$, M. Harder$^{1}$, Y. P. Chen$^{2}$, X. Fan$^{2}$, J. Q. Xiao$^{2}$, and C.-M. Hu$^{1}$ $^{1}$Department of Physics and Astronomy, University of Manitoba, Winnipeg, Canada R3T2N2 and $^{2}$Department of Physics and Astronomy, University of Delaware, Newark, Delaware 19716, USA We use electrical detection, in combination with microwave transmission, to investigate both... 742. The University of Calgary Instructor of Record mentorship program: helping graduate students make the transition to becoming effective instructors Michael Wieser (University of Calgary) For most graduate students, the only exposure to undergraduate teaching comes from interacting with students in a junior teaching laboratory or tutorial, or perhaps giving an individual lecture to cover the absence of a faculty member. When graduate students eventually become instructors, they are given the challenging task of teaching a lecture section without much experience and with... 507. 
**WITHDRAWN** Observing the effects of Time Ordering in Single Photon Frequency Conversion Nicolas Quesada (University of Toronto) Frequency conversion (FC) is one of the most common nonlinear processes used in quantum optics. This process has the property that the Hamiltonian that governs it does not commute with itself at different times, and hence time ordering becomes an important aspect in the description of its dynamics. Recently, it has been shown that the Magnus expansion provides an appropriate description of... 751. CAFTON and PEARL: Using Ground-based FTIR Spectroscopy to Probe Atmospheric Composition over Canada Kimberly Strong (University of Toronto) Fourier transform infrared (FTIR) spectroscopy provides a powerful tool for probing the atmosphere. Solar absorption spectroscopy can be used to measure atmospheric abundances of tropospheric and stratospheric trace gases, while emission spectroscopy also provides information about clouds and the radiation budget. High-quality time series of composition measurements, along with critical... 611. Compact Torus Injection for Fuelling Chijin Xiao (Univ. of Saskatchewan) Current fueling technologies, such as gas puffing or pellet injection, are unable to send fuels directly to the reactor core due to premature evaporation and ionization at the edge of the reactor. Research on compact torus (CT) injection as a means of fuelling a magnetic-confinement fusion reactor started at the University of Saskatchewan in the early 1990s. Compact torus formed in a... 489. Interferometric second harmonic generation imaging of biological tissues Prof. François Légaré (INRS-EMT) In recent years, Second Harmonic Generation (SHG) microscopy has emerged as a powerful technique to image in situ non-centrosymmetric structures in biological tissues, such as collagen, a major structural protein of vertebrates. However, due to the coherence of the SHG signal, intensity depends not only on the density and the overall organization of harmonophores, but also on their relative... 559. Long-Term Supernova Monitoring with HALO Mr Colin Bruulsema (Laurentian University) Supernovae are the favoured location in the universe for certain processes necessary for the formation of heavy elements, and the only location where the effects of neutrino-neutrino scattering could plausibly be observed. This makes supernovae relevant to the fields of both nuclear astrophysics and particle physics. A core collapse supernova can be detected by the immense burst of... 825. Making Comparisons - A Strategy for Teaching Scientific Reasoning in a First Year Lab Prof. D.A. Bonn (University of British Columbia) Recent work on first year labs at UBC has focused on teaching students widely-applicable data handling skills: especially, understanding uncertainty, statistical tools, and graphical techniques. In the past year we have also targeted students' critical thinking, with a relatively simple framework that asks students to make quantitative comparisons, reflect on those comparisons, and... 819. Multiqubit entangled channels for quantum communication in networks Multiqubit entangled states are an important resource for networked information processing and communication. Here, we explore the use of entangled channels for various controlled teleportation schemes. In controlled teleportation (CT), the teleportation can proceed only with the permission and participation of one or more controllers. Thus, the controller's role is of key importance in CT... 551.
Muon g-2/edm at J-PARC Glen Marshall (TRIUMF) The anomalous magnetic moment of the muon, a_mu = (g_mu - 2)/2, has been measured to 0.54 ppm, and when compared to the Standard Model (SM) calculation of similar precision, a discrepancy of 3.6 sigma remains unexplained. This is perhaps a hint of new interactions beyond the SM, stimulating much theoretical interest and speculation. The muon g-2 experiment responsible for... 851. Searching for gravitational waves from compact binary coalescences with Advanced LIGO Michael Landry (LIGO Hanford Observatory/Caltech) Gravitational waves are distortions in the metric of space-time, the detection of which would provide key information on strong gravity and the astrophysical systems that produce them: supernovae, spinning compact stars, and the coalescence of compact binary systems (CBCs). LIGO is a gravitational wave observatory composed to two 4km interferometric detectors separated by 3000km, in Hanford... 462. The Saskatchewan Centre for Cyclotron Sciences: Developing a multi-use cyclotron facility Sara Ho (Saskatchewan Centre for Cyclotron Sciences) The Sylvia Fedoruk Canadian Centre for Nuclear Innovation (Fedoruk Centre) is in the process of commissioning the Saskatchewan Centre for Cyclotron Sciences as a multidisciplinary research and commercial radiopharmaceutical production facility. At the heart of the facility is a multipurpose 24 MeV cyclotron capable of producing a number of radioisotopes for use in research in humans, animals... 816. Towards Microwave-Frequency Spin Mechanics Dylan Grandmont (Department of Physics, University of Alberta, Edmonton, Alberta, Canada T6G 2G7) Torque magnetometry using torsional resonators provides a highly sensitive platform for resolving magnetic signatures in single meso-scale elements. In the static case, sample magnetizations are biased using a DC field while an AC excitation is applied at a frequency equal to the mechanical resonance of the torsional device. The resulting torque is then measured using an interferometric... 528. Accurate and Precise Characterization of Linear Optical Interferometers Mr Ish Dhand (University of Calgary) We combine single- and two-photon interference procedures for characterizing any multichannel passive linear optical interferometer accurately and precisely. Accuracy is achieved by accounting for systematic errors due to spatiotemporal and polarization mode mismatch and estimating those mismatch parameters through calibrating on one known beam splitter. Enhanced precision is achieved by curve... 610. Atmospheric Neutrino Measurement with IceCube Neutrino Observatory Tania Wood (University of Alberta) IceCube, the world's largest neutrino detector, is designed to measure the highest energy neutrinos produced in astrophysical events. Augmented with a low-energy array, called DeepCore, IceCube has the ability to perform precision measurements of the high flux of atmospheric neutrinos for energies ranging from approximately 10 GeV to a few 100 TeV. When combined with the measurements by... 741. Explosive molecule detection by nanomechanical resonator and photothermal spectroscopy Tushar Biswas (Department of Physics, University of Alberta) Nanomechanical resonators are good sensors because of their small mass, high frequency and high quality factor. In particular for mass sensing, the adsorbent mass is estimated from the shift in mechanical resonance frequency. Fantastic progress has been made in mass sensitivity, reaching the single protein and even yoctogram mass level. 
Yet this technique alone is not sufficient to identify... 720. Investigation of the E2 and E3 matrix elements in $^{200}$Hg using direct nuclear reactions Evan Rand (University of Guelph) A nuclear-structure campaign has been initiated to investigate the isotopes of Hg around mass 199. To date, $^{199}$Hg provides the most stringent limit on an atomic electric dipole moment (EDM)$~$[1]. The observation of a permanent EDM would represent a clear signal of CP violation from new physics beyond the Standard Model. Theoretical nuclear-structure calculations for $^{199}$Hg are... 911. MR Imaging with Radiofrequency Phase Gradients Dr Jonathan Sharp (University of Alberta) Although MRI offers highly diagnostic medical imagery, patient access to this modality worldwide is very limited when compared with X-ray or ultrasound. One reason for this is the expense and complexity of the equipment used to generate the switched magnetic fields necessary for MRI encoding. These field gradients are also responsible for intense acoustic noise and have the potential to induce... 856. Off with the training wheels: Student centered approach to lab work. Mr Benoit Blanchet (Cégep de Rivière-du-Loup) Do you remember those labs in college when you just followed your teacher's instructions? You probably barely do right? In order to increase students' involvement in the scientific process (and to increase motivation), I tried a different approach. I trained them to become autonomous researchers. At first, I gave them detailed instructions and complete theoretical framework. Then... 874. A new strategy to build ultrafast lasers; Frequency domain Optical Parametric Amplification Francois Legare (INRS-EMT) W-MEDAL1 CAP Medal Talk François Légaré, EMT-INRS (CAP Herzberg Medal Recipient/Récipiendaire de la médaille Herzberg de l'ACP) High power laser amplification of octave spanning or even octave exceeding spectra is a formidable technological challenge since the universal dilemma of gain narrowing is only one of many problems on this path. The shortcoming of all present femtosecond (1fs = 0.000000000000001s) laser amplification schemes suffer from opposing conditions for either high amplification level (=> high peak... 873. From Quarks to Neutrinos, Adventures in Particle Physics John Martin (Department of Physics and Astronomy) W-MEDAL2 CAP Medal Talk - John F. Martin, IPP / U. Toronto (Achievement Medal Recipient / Récipiendaire de la médaille pour contributions exceptionnelles) I have been fortunate to participate in the evolution of particle physics in Canada for a large fraction of its history. I will discuss developments from 1972 to its current lively state, illustrating with a few examples from the experiments on which I have collaborated. With the recent discovery of the Higgs boson the Standard Model is now complete, but is known to be an incomplete... 914. 37 Years of High School Physics Olympics Competitions at the University of British Columbia Janis McKenna (University of British Columbia), Marina Milner-Bolotin (The University of British Columbia) Poster (Non-Student) / affiche (non-étudiant) DPE Poster Session with beer / Session d'affiches avec bière DPE The Physics Olympics is an annual high school physics competition held at the University of British Columbia in Vancouver. This year, more than 400 students on 56 teams, with at least one teacher/coach per team, came from all over British Columbia to participate in the 37th Annual Physics Olympics. 
The competition consists of six hands-on events, of which two are pre-built by the students in... 665. Algorithms for Boson Realizations of SU(n) Ish Dhand (University of Calgary) Poster (Student, In Competition) / Affiche (Étudiant(e), inscrit à la compétition) DTP Poster Session with beer / Session d'affiches, avec bière DPT We devise a procedure for calculating boson realizations of canonical basis-states of SU$(n)$ for arbitrary $n$. We employ our boson realization to calculate Wigner $\mathcal{D}$-functions of SU$(n)$ in the canonical Gelfand-Tsetlin basis and connect these functions to outputs from multi-photon interferometry. 717. Calibration of the DEAP-3600 photomultiplier tubes Dr Marcin Kuźniak (Queen's University), Dr Tina Pollmann (Laurentian University) PPD Poster Session with beer / Session d'affiches, avec bière PPD The DEAP detector uses 255 photo multiplier tubes (PMTs) to detect the faint scintillation light from possible WIMP interactions in liquid argon. A photoelectron released when a scintillation photon strikes the front face of the PMT is amplified inside the PMT, and the resulting single-photoelectron (SPE) charge signal recorded. Due to random fluctuations in the amplification process, the... 445. Changes in Global Precipitation from 1850 to the Present Ms Atifa Syed (Physics Department, York University) Poster (Student, Not in Competition) / Affiche (Étudiant(e), pas dans la compétition) DASP Poster Session with beer / Session d'affiches avec bière DPAE Reports of the Intergovernmental Panel on Climate Change have stated that precipitation has changed by up to 1% for each decade in the 20th century [1]. Indeed, the Clausius Clapeyron equation predicts absolute humidity increases by about 7% for each degree of warming assuming no change in the relative humidity. This study analyzed monthly precipitation observations recorded at over 1,000... 605. Evolution of Single Particle Structure in Exotic Strontium Isotopes Steffen Cruz DNP Poster Session with beer / Session d'affiches, avec bière DPN Nuclei near the magic numbers of protons and neutrons are observed to have a spherical shape for the low lying states. Nuclei between magic numbers, where the binding energy tends to be lower, are often observed to show deformation in low lying states. These deformations are perceived to have either a prolate or oblate nature. States within a nucleus that have different shapes that are close... 504. Focus study to measure phase effects of a bent Laue beam expander Mercedes Martinson (University of Saskatchewan) DIMP Poster Session with beer / Session d'affiches, avec bière DPIM At the Canadian Light Source (CLS) in Canada, the BioMedical Imaging and Therapy (BMIT) bend magnet (BMIT-BM) beamlines and insertion device (BMIT-ID) have been very successful in their mission to image biological tissue and conduct live animal imaging studies. However, since their inception, they've been limited by the vertical beam size. This poses limitations for imaging modalities such as... 890. Massively parallel genomic analysis using tunable nanoscale confinement Marjan Shayegan DMBP Poster session, with beer / Session d'affiches DPMB, avec bière Linearly extending long DNA molecules in sub-50 nm nanochannels for genomic analysis, while retaining their structural integrity, is a major technological challenge. 
We employ ``Convex Lens-induced Confinement'' (CLiC) microscopy to gently load DNA into nanogrooves from above, overcoming the limitations of side-loading techniques used in direct-bonded nanofluidic devices. In the CLiC... 635. Measurement of High Magnetic Fields in Laser Produced Plasmas Ms Fatema Liza (University fo Alberta) DPP Poster Session with beer / Session d'affiches, avec bière DPP Many high intensity laser applications can generate large magnetic fields up to the level of 100's of Tesla. In particular, the application of circularly polarized [1] or orbital angular mode (OAM) laser beams [2] can be used to generate such large fields using the inverse Faraday Effect (IFE). These fields can play an important role in the generation and guiding of electrons in laser plasma... 744. Optomechanics with a Twist Paul Kim (University of Alberta) DCMMP Poster Session with beer / Session d'affiches, avec bière DPMCM Torsional resonators are an effective platform to study various material and optical properties. Current advances in nanofabrication techniques allow miniaturization of these torsional oscillators down to micron scale devices but they are often limited by their detection methods. Using a high quality optical resonator ($Q > 10^5$) coupled with a mechanical oscillator, which are the basis of... 791. Two-dimensional accelerating beams along arbitrary trajectories and enhancement of their peak intensities Luca La Volpe (INRS) DAMOPC Poster Session with beer / Session d'affiches avec bière DPAMPC In the last few years, accelerating optical beams propagating along a curved trajectory have attracted a lot of attentions. Since Airy beams were the first to be introduced into optics, they have been employed in a variety of applications, such as the generation of curved plasma channels, optical trapping and manipulation, and micro-fabrication. It has been shown that accelerating beams can be... 689. Water and Defect Detection Beneath Rubber Using Terahertz Reflection Tomography Mr Patrick Kilcullen (University of Northern British Columbia) DIAP Poster Session with beer / Session d'affiches, avec bière DPIA Royal Canadian Navy VICTORIA Class submarines must operate with enhanced stealth using rubber acoustic tiles over their steel hulls to provide SONAR cloaking and absorb emanating noise. These tiles, as well as a connective grouting compound, form an opaque covering of virtually the entire hull surface. In evaluating the integrity of the hull as a pressure vessel it is of paramount importance... 813. Development of a Vibrating Wire Rheometer Prof. John de Bruyn (The University of Western Ontario) Vibrating wire devices have been used in the past to determine the viscosity of Newtonian fluids. We are investigating the use of a vibrating wire device to measure the viscous and elastic moduli of non-Newtonian fluids. Our device consists of a small diameter tungsten wire under tension and immersed in a fluid. When a magnetic field is applied and an alternating current is passed through... 773. Intense, double pulse irradiation of targets for MeV proton acceleration Shaun Kerr (University of Alberta) The efficient generation of MeV proton beams using lasers is an area of interest due to its potential applications, ranging from radiotherapy to the fast ignition concept for inertial confinement fusion. Various efficiency-enhancing schemes have been put forward, including using ultra-clean pulses with nm thick foils, and structured targets such as hemispheres to enhance fields. One method,... 
875. Low-Light Photosensor Applications in Plant Imaging & Personal Radiation Detection Jamie Sanchez-Fortun Stoker (University of Regina) Silicon photomultipliers - also known as Multi-pixel Photon Counters (MPPCs) - are a type of photodetector that have shown great potential for many applications such as nuclear and particle physics, nuclear medicine, biophotonics, outer space, military, atmospheric or automotive distance control lidar, radioactivity detection and monitoring, and nuclear hazard/threat detection. Out group has... 697. Photonic actuation and detection of higher order modes in nanomechanical resonators J. N. Westwood-Bachman (University of Alberta, National Institute for Nanotechnology) All-optical actuation and detection of nanomechanical devices has emerged as a promising transduction technique with high displacement sensitivity1-3. However, symmetric mechanical modes are difficult to detect with integrated photonics because the symmetry causes a zero effective index shift. Detection of higher order modes, including symmetric modes, is desirable for sensing... 641. Possibility of determining predominant SO2 oxidation pathways by isotope fractionation or source apportionment Mrs Neda Amiri (Department of Physics and Astronomy, University of Calgary) Sulfur dioxide oxidation and the effect of oxidation products in formation and growth of aerosols have been studied widely. Despite this, significant gaps still exist in understanding the SO2 oxidation pathways in various locations. A study of SO2 and aerosol sulphate downwind of the oil sands region was conducted as part of the FOSSILIS campaign in the summer 2013. Size segregated aerosols... 677. Pulsed Laser Spectroscopy of Xe129 for a Co-magnetometer in the TRIUMF Neutron Electron Dipole Moment Experiment Eric R Miller (The University of British Columbia) Construction is underway at TRIUMF by an international collaboration on a high-density ultra-cold neutron source. Its primary experiment will be a measurement of the neutron electric dipole moment (nEDM). The experiment uses an NMR technique known as Ramsey resonance to detect electric-field correlated shifts in the precession frequency of ultra-cold neutrons. Previous-generation nEDM... 676. Quantum corrected numerical relativity Anindita Dutta (University of Lethbridge) We introduce a non-spherical metric motivated from quantum gravity. We study the time evolution of scalar field induced gravity using numerical methods. 715. Simulation study of the use of internal Ar-39 beta decays for energy calibration of the DEAP-3600 detector. The DEAP-3600 detector uses natural liquid argon as target material for WIMP interactions. Natural argon contains the beta emitter Ar-39, which will create about 3600 beta events per second, uniformly distributed across the active detector volume. By fitting the shape of the beta spectrum from these events as a function of event position, a position-dependent energy calibration of the whole... 892. Single-Molecule Microscopy System for Tunable Nanoscale Confinement Mr Adriel Arsenault (McGill University) We present the design and construction of a versatile, open-frame inverted microscope system for wide-field fluorescence and single molecule imaging. The microscope chassis and modular design allow for customization, expansion, and experimental flexibility. We present two components that are included with the microscope which extend its basic capabilities and together create a powerful... 488. 
Bohmian trajectories for harmonic oscillator and Coulomb potentials Mr Benjamin Dupuis (Université du Québec à Trois-Rivières) In Bohmian mechanics, quantum particles follow causal deterministic trajectories. These trajectories obey an equation of motion much like Newton's, with the addition of
The effect? 3 or 4 weeks later, I'm not sure. When I began putting all of my nootropic powders into pill-form, I put half a lithium pill in each, and nevertheless ran out of lithium fairly quickly (3kg of piracetam makes for >4000 OO-size pills); those capsules were buried at the bottom of the bucket under lithium-less pills. So I suddenly went cold-turkey on lithium. Reflecting on the past 2 weeks, I seem to have been less optimistic and productive, with items now lingering on my To-Do list which I didn't expect to. An effect? Possibly. The above information relates to studies of specific individual essential oil ingredients, some of which are used in the essential oil blends for various MONQ diffusers. Please note, however, that while individual ingredients may have been shown to exhibit certain independent effects when used alone, the specific blends of ingredients contained in MONQ diffusers have not been tested. No specific claims are being made that use of any MONQ diffusers will lead to any of the effects discussed above. Additionally, please note that MONQ diffusers have not been reviewed or approved by the U.S. Food and Drug Administration. MONQ diffusers are not intended to be used in the diagnosis, cure, mitigation, prevention, or treatment of any disease or medical condition. If you have a health condition or concern, please consult a physician or your alternative health care provider prior to using MONQ diffusers. As discussed in my iodine essay (FDA adverse events), iodine is a powerful health intervention as it eliminates cretinism and improves average IQ by a shocking magnitude. If this effect were possible for non-fetuses in general, it would be the best nootropic ever discovered, and so I looked at it very closely. Unfortunately, after going through ~20 experiments looking for ones which intervened with iodine post-birth and took measures of cognitive function, my meta-analysis concludes that: the effect is small and driven mostly by one outlier study. Once you are born, it's too late. But the results could be wrong, and iodine might be cheap enough to take anyway, or take for non-IQ reasons. (This possibility was further weakened for me by an August 2013 blood test of TSH which put me at 3.71 uIU/ml, comfortably within the reference range of 0.27-4.20.) Please browse our website to learn more about how to enhance your memory. Our blog contains informative articles about the science behind nootropic supplements, specific ingredients, and effective methods for improving memory. Browse through our blog articles and read and compare reviews of the top rated natural supplements and smart pills to find everything you need to make an informed decision. My first time was relatively short: 10 minutes around the F3/F4 points, with another 5 minutes to the forehead. Awkward holding it up against one's head, and I see why people talk of LED helmets, it's boring waiting. No initial impressions except maybe feeling a bit mentally cloudy, but that goes away within 20 minutes of finishing when I took a nap outside in the sunlight. Lostfalco says Expectations: You will be tired after the first time for 2 to 24 hours. It's perfectly normal., but I'm not sure - my dog woke me up very early and disturbed my sleep, so maybe that's why I felt suddenly tired. On the second day, I escalated to 30 minutes on the forehead, and tried an hour on my finger joints. No particular observations except less tiredness than before and perhaps less joint ache. 
Third day: skipped forehead stimulation, exclusively knee & ankle. Fourth day: forehead at various spots for 30 minutes; tiredness 5/6/7/8th day (11/12/13/4): skipped. Ninth: forehead, 20 minutes. No noticeable effects. A synthetic derivative of Piracetam, aniracetam is believed to be the second most widely used nootropic in the Racetam family, popular for its stimulatory effects because it enters the bloodstream quickly. Initially developed for memory and learning, many anecdotal reports also claim that it increases creativity. However, clinical studies show no effect on the cognitive functioning of healthy adult mice. In 3, you're considering adding a new supplement, not stopping a supplement you already use. The I don't try Adderall case has value $0, the Adderall fails case is worth -$40 (assuming you only bought 10 pills, and this number should be increased by your analysis time and a weighted cost for potential permanent side effects), and the Adderall succeeds case is worth $X-40-4099, where $X is the discounted lifetime value of the increased productivity due to Adderall, minus any discounted long-term side effect costs. If you estimate Adderall will work with p=.5, then you should try out Adderall if you estimate that 0.5 \times (X-4179) > 0 ~> $X>4179$. (Adderall working or not isn't binary, and so you might be more comfortable breaking down the various how effective Adderall is cases when eliciting X, by coming up with different levels it could work at, their values, and then using a weighted sum to get X. This can also give you a better target with your experiment- this needs to show a benefit of at least Y from Adderall for it to be worth the cost, and I've designed it so it has a reasonable chance of showing that.) These are the most popular nootropics available at the moment. Most of them are the tried-and-tested and the benefits you derive from them are notable (e.g. Guarana). Others are still being researched and there haven't been many human studies on these components (e.g. Piracetam). As always, it's about what works for you and everyone has a unique way of responding to different nootropics. Federal law classifies most nootropics as dietary supplements, which means that the Food and Drug Administration does not regulate manufacturers' statements about their benefits (as the giant "This product is not intended to diagnose, treat, cure, or prevent any disease" disclaimer on the label indicates). And the types of claims that the feds do allow supplement companies to make are often vague and/or supported by less-than-compelling scientific evidence. "If you find a study that says that an ingredient caused neurons to fire on rat brain cells in a petri dish," says Pieter Cohen, an assistant professor at Harvard Medical School, "you can probably get away with saying that it 'enhances memory' or 'promotes brain health.'" A provisional conclusion about the effects of stimulants on learning is that they do help with the consolidation of declarative learning, with effect sizes varying widely from small to large depending on the task and individual study. Indeed, as a practical matter, stimulants may be more helpful than many of the laboratory tasks indicate, given the apparent dependence of enhancement on length of delay before testing. Although, as a matter of convenience, experimenters tend to test memory for learned material soon after the learning, this method has not generally demonstrated stimulant-enhanced learning. 
However, when longer periods intervene between learning and test, a more robust enhancement effect can be seen. Note that the persistence of the enhancement effect well past the time of drug action implies that state-dependent learning is not responsible. In general, long-term effects on learning are of greater practical value to people. Even students cramming for exams need to retain information for more than an hour or two. We therefore conclude that stimulant medication does enhance learning in ways that may be useful in the real world. It is often associated with Ritalin and Adderall because they are all CNS stimulants and are prescribed for the treatment of similar brain-related conditions. In the past, ADHD patients reported prolonged attention while studying upon Dexedrine consumption, which is why this smart pill is further studied for its concentration and motivation-boosting properties. AMP and MPH increase catecholamine activity in different ways. MPH primarily inhibits the reuptake of dopamine by pre-synaptic neurons, thus leaving more dopamine in the synapse and available for interacting with the receptors of the postsynaptic neuron. AMP also affects reuptake, as well as increasing the rate at which neurotransmitter is released from presynaptic neurons (Wilens, 2006). These effects are manifest in the attention systems of the brain, as already mentioned, and in a variety of other systems that depend on catecholaminergic transmission as well, giving rise to other physical and psychological effects. Physical effects include activation of the sympathetic nervous system (i.e., a fight-or-flight response), producing increased heart rate and blood pressure. Psychological effects are mediated by activation of the nucleus accumbens, ventral striatum, and other parts of the brain's reward system, producing feelings of pleasure and the potential for dependence. Factor analysis. The strategy: read in the data, drop unnecessary data, impute missing variables (data is too heterogeneous and collected starting at varying intervals to be clean), estimate how many factors would fit best, factor analyze, pick the ones which look like they match best my ideas of what productive is, extract per-day estimates, and finally regress LLLT usage on the selected factors to look for increases. In addition, large national surveys, including the NSDUH, have generally classified prescription stimulants with other stimulants including street drugs such as methamphetamine. For example, since 1975, the National Institute on Drug Abuse–sponsored Monitoring the Future (MTF) survey has gathered data on drug use by young people in the United States (Johnston, O'Malley, Bachman, & Schulenberg, 2009a, 2009b). Originally, MTF grouped prescription stimulants under a broader class of stimulants so that respondents were asked specifically about MPH only after they had indicated use of some drug in the category of AMPs. As rates of MPH prescriptions increased and anecdotal reports of nonmedical use grew, the 2001 version of the survey was changed to include a separate standalone question about MPH use. This resulted in more than a doubling of estimated annual use among 12th graders, from 2.4% to 5.1%. More recent data from the MTF suggests Ritalin use has declined (3.4% in 2008). However, this may still underestimate use of MPH, as the question refers specifically to Ritalin and does not include other brand names such as Concerta (an extended release formulation of MPH). 
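The factor-analysis strategy described a few paragraphs above (read in the data, impute missing values, choose a factor count, extract per-day factor scores, and then regress against LLLT usage) can be sketched in code. This is only a minimal illustration under assumed inputs: the CSV file and column names (daily_log.csv, llt_minutes, and the numeric self-rating columns) are hypothetical, and it uses scikit-learn and statsmodels rather than whatever R routines were actually used for the original analysis.

```python
# Hypothetical sketch of the "impute -> pick factor count -> factor-analyze ->
# regress on treatment" pipeline described above. File and column names are invented.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.decomposition import FactorAnalysis
from sklearn.impute import SimpleImputer
from sklearn.model_selection import cross_val_score

df = pd.read_csv("daily_log.csv")           # hypothetical daily self-tracking data
treatment = df.pop("llt_minutes")           # minutes of LLLT use that day (hypothetical column)
X = df.select_dtypes(include=[np.number])   # keep only the numeric self-ratings / measures

X_imp = SimpleImputer(strategy="median").fit_transform(X)   # crude single imputation

# Pick the number of factors by cross-validated average log-likelihood.
scores = {k: cross_val_score(FactorAnalysis(n_components=k), X_imp, cv=5).mean()
          for k in range(1, 8)}             # candidate factor counts
best_k = max(scores, key=scores.get)

fa = FactorAnalysis(n_components=best_k).fit(X_imp)
daily_scores = fa.transform(X_imp)          # per-day factor scores

# Regress the factor that looks most like "productivity" on LLLT usage.
productivity = daily_scores[:, 0]           # assume factor 0 is the one of interest
model = sm.OLS(productivity, sm.add_constant(treatment)).fit()
print(model.summary())
```

Which factor corresponds to "productivity" is a judgment call in practice; the interesting output is simply whether the LLLT coefficient is positive and distinguishable from zero.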
Amphetamines have a long track record as smart drugs, from the workaholic mathematician Paul Erdös, who relied on them to get through 19-hour maths binges, to the writer Graham Greene, who used them to write two books at once. More recently, there are plenty of anecdotal accounts in magazines about their widespread use in certain industries, such as journalism, the arts and finance. Price discrimination is aided by barriers such as ignorance and oligopolies. An example of the former would be when I went to a Food Lion grocery store in search of spices, and noticed that there was a second selection of spices in the Hispanic/Latino ethnic food aisle, with unit prices perhaps a fourth of the regular McCormick-brand spices; I rather doubt that regular cinnamon varies that much in quality. An example of the latter would be using veterinary drugs on humans - any doctor to do so would probably be guilty of medical malpractice even if the drugs were manufactured in the same factories (as well they might be, considering economies of scale). Similarly, we can predict that whenever there is a veterinary drug which is chemically identical to a human drug, the veterinary drug will be much cheaper, regardless of actual manufacturing cost, than the human drug because pet owners do not value their pets more than themselves. Human drugs are ostensibly held to a higher standard than veterinary drugs; so if veterinary prices are higher, then there will be an arbitrage incentive to simply buy the cheaper human version and downgrade them to veterinary drugs. Brain-imaging studies are consistent with the existence of small effects that are not reliably captured by the behavioral paradigms of the literature reviewed here. Typically with executive function tasks, reduced activation of task-relevant areas is associated with better performance and is interpreted as an indication of higher neural efficiency (e.g., Haier, Siegel, Tang, Abel, & Buchsbaum, 1992). Several imaging studies showed effects of stimulants on task-related activation while failing to find effects on cognitive performance. Although changes in brain activation do not necessarily imply functional cognitive changes, they are certainly suggestive and may well be more sensitive than behavioral measures. Evidence of this comes from a study of COMT variation and executive function. Egan and colleagues (2001) found a genetic effect on executive function in an fMRI study with sample sizes as small as 11 but did not find behavioral effects in these samples. The genetic effect on behavior was demonstrated in a separate study with over a hundred participants. In sum, d-AMP and MPH measurably affect the activation of task-relevant brain regions when participants' task performance does not differ. This is consistent with the hypothesis (although by no means positive proof) that stimulants exert a true cognitive-enhancing effect that is simply too small to be detected in many studies. We reached out to several raw material manufacturers and learned that Phosphatidylserine and Huperzine A are in short supply. We also learned that these ingredients can be pricey, incentivizing many companies to cut corners. A company has to have the correct ingredients in the correct proportions in order for a brain health formula to be effective. We learned that not just having the two critical ingredients was important – but, also that having the correct supporting ingredients was essential in order to be effective. 
NGF may sound intriguing, but the price is a dealbreaker: at suggested doses of 1-100μg (NGF dosing in humans for benefits is, shall we say, not an exact science), and a cost from sketchy suppliers of $1210/100μg/$470/500μg/$750/1000μg/$1000/1000μg/$1030/1000μg/$235/20μg. (Levi-Montalcini was presumably able to divert some of her lab's production.) A year's supply then would be comically expensive: at the lowest doses of 1-10μg using the cheapest sellers (for something one is dumping into one's eyes?), it could cost anywhere up to $10,000. Even party drugs are going to work: Biohackers are taking recreational drugs like LSD, psilocybin mushrooms, and mescaline in microdoses—about a tenth of what constitutes a typical dose—with the goal of becoming more focused and creative. Many who've tried it report positive results, but real research on the practice—and its safety—is a long way off. "Whether microdosing with LSD improves creativity and cognition remains to be determined in an objective experiment using double-blind, placebo-controlled methodology," Sahakian says. Sometimes called smart drugs, brain boosters, or memory-enhancing drugs, the term "nootropics" was coined by scientist Dr. Corneliu E. Giurgea, who developed the compound piracetam as a brain enhancer, according to The Atlantic. The word is derived from the Greek noo, meaning mind, and trope, which means "change" in French. In essence, all nootropics aim to change your mind by enhancing functions like memory or attention. Exercise is also important, says Lebowitz. Studies have shown it sharpens focus, elevates your mood and improves concentration. Likewise, maintaining a healthy social life and getting enough sleep are vital, too. Studies have consistently shown that regularly skipping out on the recommended eight hours can drastically impair critical thinking skills and attention. 2 commenters point out that my possible lack of result is due to my mistaken assumption that if nicotine is absorbable through skin, mouth, and lungs it ought to be perfectly fine to absorb it through my stomach by drinking it (rather than vaporizing it and breathing it with an e-cigarette machine) - it's apparently known that absorption differs in the stomach. Flaxseed oil is, ounce for ounce, about as expensive as fish oil, and also must be refrigerated and goes bad within months anyway. Flax seeds on the other hand, do not go bad within months, and cost dollars per pound. Various resources I found online estimated that the ALA component of human-edible flaxseed to be around 20% So Amazon's 6lbs for $14 is ~1.2lbs of ALA, compared to 16fl-oz of fish oil weighing ~1lb and costing ~$17, while also keeping better and being a calorically useful part of my diet. The flaxseeds can be ground in an ordinary food processor or coffee grinder. It's not a hugely impressive cost-savings, but I think it's worth trying when I run out of fish oil. Rogers RD, Blackshaw AJ, Middleton HC, Matthews K, Hawtin K, Crowley C, Robbins TW. Tryptophan depletion impairs stimulus-reward learning while methylphenidate disrupts attentional control in healthy young adults: Implications for the monoaminergic basis of impulsive behaviour. Psychopharmacology. 1999;146:482–491. doi: 10.1007/PL00005494. [PubMed] [CrossRef] Herbal supplements have been used for centuries to treat a wide range of medical conditions. Studies have shown that certain herbs may improve memory and cognition, and they can be used to help fight the effects of dementia and Alzheimer's disease. 
These herbs are considered safe when taken in normal doses, but care should be taken as they may interfere with other medications. Government restrictions and difficulty getting approval for various medical devices is expected to impede market growth. The stringency of approval by regulatory authorities is accompanied by the high cost of smart pills to challenge the growth of the smart pills market. However, the demand for speedy diagnosis, and improving reimbursement policies are likely to reveal market opportunities. While these two compounds may not be as exciting as a super pill that instantly unlocks the full potential of your brain, they currently have the most science to back them up. And, as Patel explains, they're both relatively safe for healthy individuals of most ages. Patel explains that a combination of caffeine and L-theanine is the most basic supplement stack (or combined dose) because the L-theanine can help blunt the anxiety and "shakiness" that can come with ingesting too much caffeine. Fish oil (Examine.com, buyer's guide) provides benefits relating to general mood (eg. inflammation & anxiety; see later on anxiety) and anti-schizophrenia; it is one of the better supplements one can take. (The known risks are a higher rate of prostate cancer and internal bleeding, but are outweighed by the cardiac benefits - assuming those benefits exist, anyway, which may not be true.) The benefits of omega acids are well-researched. Another factor to consider is whether the nootropic is natural or synthetic. Natural nootropics generally have effects which are a bit more subtle, while synthetic nootropics can have more pronounced effects. It's also important to note that there are natural and synthetic nootropics. Some natural nootropics include Ginkgo biloba and ginseng. One benefit to using natural nootropics is they boost brain function and support brain health. They do this by increasing blood flow and oxygen delivery to the arteries and veins in the brain. Moreover, some nootropics contain Rhodiola rosea, panxax ginseng, and more. Not included in the list below are prescription psychostimulants such as Adderall and Ritalin. Non-medical, illicit use of these drugs for the purpose of cognitive enhancement in healthy individuals comes with a high cost, including addiction and other adverse effects. Although these drugs are prescribed for those with attention deficit hyperactivity disorder (ADHD) to help with focus, attention and other cognitive functions, they have been shown to in fact impair these same functions when used for non-medical purposes. More alarming, when taken in high doses, they have the potential to induce psychosis. 
Please note: Smart Pills, Smart Drugs or Brain Food Supplements are also known as: Brain Smart Vitamins, Brain Tablets, Brain Vitamins, Brain Booster Supplements, Brain Enhancing Supplements, Cognitive Enhancers, Focus Enhancers, Concentration Supplements, Mental Focus Supplements, Mind Supplements, Neuro Enhancers, Neuro Focusers, Vitamins for Brain Function,Vitamins for Brain Health, Smart Brain Supplements, Nootropics, or "Natural Nootropics" A fancier method of imputation would be multiple imputation using, for example, the R library mice (Multivariate Imputation by Chained Equations) (guide), which will try to impute all missing values in a way which mimicks the internal structure of the data and provide several possible datasets to give us an idea of what the underlying data might have looked like, so we can see how our estimates improve with no missingness & how much of the estimate is now due to the imputation: Modafinil, sold under the name Provigil, is a stimulant that some have dubbed the "genius pill." It is a wakefulness-promoting agent (modafinil) and glutamate activators (ampakine). Originally developed as a treatment for narcolepsy and other sleep disorders, physicians are now prescribing it "off-label" to cellists, judges, airline pilots, and scientists to enhance attention, memory and learning. According to Scientific American, "scientific efforts over the past century [to boost intelligence] have revealed a few promising chemicals, but only modafinil has passed rigorous tests of cognitive enhancement." A stimulant, it is a controlled substance with limited availability in the U.S. How exactly – and if – nootropics work varies widely. Some may work, for example, by strengthening certain brain pathways for neurotransmitters like dopamine, which is involved in motivation, Barbour says. Others aim to boost blood flow – and therefore funnel nutrients – to the brain to support cell growth and regeneration. Others protect brain cells and connections from inflammation, which is believed to be a factor in conditions like Alzheimer's, Barbour explains. Still others boost metabolism or pack in vitamins that may help protect the brain and the rest of the nervous system, explains Dr. Anna Hohler, an associate professor of neurology at Boston University School of Medicine and a fellow of the American Academy of Neurology. Finally, two tasks measuring subjects' ability to control their responses to monetary rewards were used by de Wit et al. (2002) to assess the effects of d-AMP. When subjects were offered the choice between waiting 10 s between button presses for high-probability rewards, which would ultimately result in more money, and pressing a button immediately for lower probability rewards, d-AMP did not affect performance. However, when subjects were offered choices between smaller rewards delivered immediately and larger rewards to be delivered at later times, the normal preference for immediate rewards was weakened by d-AMP. That is, subjects were more able to resist the impulse to choose the immediate reward in favor of the larger reward. Core body temperature, local pH and internal pressure are important indicators of patient well-being. While a thermometer can give an accurate reading during regular checkups, the monitoring of professionals in high-intensity situations requires a more accurate inner body temperature sensor. An ingestible chemical sensor can record acidity and pH levels along the gastrointestinal tract to screen for ulcers or tumors. 
Sensors also can be built into medications to track compliance. Overall, the studies listed in Table 1 vary in ways that make it difficult to draw precise quantitative conclusions from them, including their definitions of nonmedical use, methods of sampling, and demographic characteristics of the samples. For example, some studies defined nonmedical use in a way that excluded anyone for whom a drug was prescribed, regardless of how and why they used it (Carroll et al., 2006; DeSantis et al., 2008, 2009; Kaloyanides et al., 2007; Low & Gendaszek, 2002; McCabe & Boyd, 2005; McCabe et al., 2004; Rabiner et al., 2009; Shillington et al., 2006; Teter et al., 2003, 2006; Weyandt et al., 2009), whereas others focused on the intent of the user and counted any use for nonmedical purposes as nonmedical use, even if the user had a prescription (Arria et al., 2008; Babcock & Byrne, 2000; Boyd et al., 2006; Hall et al., 2005; Herman-Stahl et al., 2007; Poulin, 2001, 2007; White et al., 2006), and one did not specify its definition (Barrett, Darredeau, Bordy, & Pihl, 2005). Some studies sampled multiple institutions (DuPont et al., 2008; McCabe & Boyd, 2005; Poulin, 2001, 2007), some sampled only one (Babcock & Byrne, 2000; Barrett et al., 2005; Boyd et al., 2006; Carroll et al., 2006; Hall et al., 2005; Kaloyanides et al., 2007; McCabe & Boyd, 2005; McCabe et al., 2004; Shillington et al., 2006; Teter et al., 2003, 2006; White et al., 2006), and some drew their subjects primarily from classes in a single department at a single institution (DeSantis et al., 2008, 2009; Low & Gendaszek, 2002). With few exceptions, the samples were all drawn from restricted geographical areas. Some had relatively high rates of response (e.g., 93.8%; Low & Gendaszek 2002) and some had low rates (e.g., 10%; Judson & Langdon, 2009), the latter raising questions about sample representativeness for even the specific population of students from a given region or institution. One reason I like modafinil is that it enhances dopamine release, but it binds to your dopamine receptors differently than addictive substances like cocaine and amphetamines do, which may be part of the reason modafinil shares many of the benefits of other stimulants but doesn't cause addiction or withdrawal symptoms. [3] [4] It does increase focus, problem-solving abilities, and wakefulness, but it is not in the same class of drugs as Adderall, and it is not a classical stimulant. Modafinil is off of patent, so you can get it generically, or order it from India. It's a prescription drug, so you need to talk to a physician.
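An earlier paragraph mentions multiple imputation with the R package mice as a fancier alternative to a single imputation pass. A rough Python analogue of that idea (generate several plausible imputed datasets, fit the same model to each, and look at how much the estimate varies across imputations) can be sketched with scikit-learn's IterativeImputer; this is not the mice algorithm itself, and the file and column names below are hypothetical.

```python
# Rough analogue of multiple imputation: draw several imputed datasets by running
# a stochastic iterative imputer with different seeds, fit the same model to each,
# and report how much the estimate varies. Data file and column names are invented.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

df = pd.read_csv("daily_log.csv")
cols = ["mood", "productivity", "sleep_score", "llt_minutes"]
X = df[cols]

estimates = []
for seed in range(10):                       # 10 imputed datasets
    imputer = IterativeImputer(sample_posterior=True, random_state=seed)
    filled = pd.DataFrame(imputer.fit_transform(X), columns=cols)
    fit = sm.OLS(filled["productivity"],
                 sm.add_constant(filled["llt_minutes"])).fit()
    estimates.append(fit.params["llt_minutes"])

print("mean effect:", np.mean(estimates),
      "across-imputation SD:", np.std(estimates))
```

The across-imputation spread gives a sense of how much of the final estimate is being driven by the imputation rather than by the observed data.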
Weyandt et al. (2009): large public university undergraduates (N = 390); prevalence 7.5% (past 30 days); highest rated reasons were to perform better on schoolwork, perform better on tests, and focus better in class; 21.2% had occasionally been offered by other students; 9.8% occasionally or frequently had purchased from other students; 1.4% had sold to other students.

Power times prior times benefit minus cost of experimentation: (0.20 × 0.30 × 540) − 41 = −9. So the VoI is negative: because my default is that fish oil works and I am taking it, weak information that it doesn't work isn't enough. If the power calculation were giving us 40% reliable information, then the chance of learning I should drop fish oil is improved enough to make the experiment worthwhile (going from 20% to 40% switches the value from -$9 to +$23.8).

Serotonin, or 5-hydroxytryptamine (5-HT), is another primary neurotransmitter and controls major features of the mental landscape including mood, sleep and appetite. Serotonin is produced within the body by exposure to sunlight, which is one reason that the folk-remedy of "getting some sun" to fight depression is scientifically credible. Many foods contain natural serotonergic (serotonin-promoting or releasing) compounds, including the well-known chemical L-Tryptophan found in turkey, which can promote sleep after big Thanksgiving dinners.

On the other end of the spectrum is the nootropic stack, a practice where individuals create a cocktail or mixture of different smart drugs for daily intake. The mixture and its variety actually depend on the goals of the user. Many users have said that nootropic stacking is more effective for delivering improved cognitive function in comparison to single nootropics.

The data from 2-back and 3-back tasks are more complex. Three studies examined performance in these more challenging tasks and found no effect of d-AMP on average performance (Mattay et al., 2000, 2003; Mintzer & Griffiths, 2007). However, in at least two of the studies, the overall null result reflected a mixture of reliably enhancing and impairing effects. Mattay et al. (2000) examined the performance of subjects with better and worse working memory capacity separately and found that subjects whose performance on placebo was low performed better on d-AMP, whereas subjects whose performance on placebo was high were unaffected by d-AMP on the 2-back and impaired on the 3-back tasks. Mattay et al. (2003) replicated this general pattern of data with subjects divided according to genotype. The specific gene of interest codes for the production of Catechol-O-methyltransferase (COMT), an enzyme that breaks down dopamine and norepinephrine. A common polymorphism determines the activity of the enzyme, with a substitution of methionine for valine at Codon 158 resulting in a less active form of COMT. The met allele is thus associated with less breakdown of dopamine and hence higher levels of synaptic dopamine than the val allele. Mattay et al. (2003) found that subjects who were homozygous for the val allele were able to perform the n-back faster with d-AMP; those homozygous for met were not helped by the drug and became significantly less accurate in the 3-back condition with d-AMP. In the case of the third study finding no overall effect, analyses of individual differences were not reported (Mintzer & Griffiths, 2007).
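The power × prior × benefit arithmetic in the fish-oil paragraph above is easy to verify directly. A minimal check using the numbers quoted there (power of 20% vs. 40%, a 30% prior, a $540 benefit, and a $41 cost of experimentation):

```python
# Value-of-information check for the fish-oil experiment described above:
# VoI = power * prior * benefit - cost, using the numbers quoted in the text.
def value_of_information(power, prior, benefit, cost):
    return power * prior * benefit - cost

for power in (0.20, 0.40):
    voi = value_of_information(power, prior=0.30, benefit=540, cost=41)
    print(f"power={power:.2f}: VoI = ${voi:+.1f}")

# power=0.20: VoI = $-8.6   (the text rounds this to -$9)
# power=0.40: VoI = $+23.8
```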
The goal of this article has been to synthesize what is known about the use of prescription stimulants for cognitive enhancement and what is known about the cognitive effects of these drugs. We have eschewed discussion of ethical issues in favor of simply trying to get the facts straight. Although ethical issues cannot be decided on the basis of facts alone, neither can they be decided without relevant facts. Personal and societal values will dictate whether success through sheer effort is as good as success with pharmacologic help, whether the freedom to alter one's own brain chemistry is more important than the right to compete on a level playing field at school and work, and how much risk of dependence is too much risk. Yet these positions cannot be translated into ethical decisions in the real world without considerable empirical knowledge. Do the drugs actually improve cognition? Under what circumstances and for whom? Who will be using them and for what purposes? What are the mental and physical health risks for frequent cognitive-enhancement users? For occasional users? It arrived as described, a little bottle around the volume of a soda can. I had handy a plastic syringe with milliliter units which I used to measure out the nicotine-water into my tea. I began with half a ml the first day, 1ml the second day, and 2ml the third day. (My Zeo sleep scores were 85/103/86 (▁▇▁), and the latter had a feline explanation; these values are within normal variation for me, so if nicotine affects my sleep, it does so to a lesser extent than Adderall.) Subjectively, it's hard to describe. At half a ml, I didn't really notice anything; at 1 and 2ml, I thought I began to notice it - sort of a cleaner caffeine. It's nice so far. It's not as strong as I expected. I looked into whether the boiling water might be breaking it down, but the answer seems to be no - boiling tobacco is a standard way to extract nicotine, actually, and nicotine's own boiling point is much higher than water; nor do I notice a drastic difference when I take it in ordinary water. And according to various e-cigarette sources, the liquid should be good for at least a year. Began double-blind trial. Today I took one pill blindly at 1:53 PM. at the end of the day when I have written down my impressions and guess whether it was one of the Adderall pills, then I can look in the baggy and count and see whether it was. there are many other procedures one can take to blind oneself (have an accomplice mix up a sequence of pills and record what the sequence was; don't count & see but blindly take a photograph of the pill each day, etc.) Around 3, I begin to wonder whether it was Adderall because I am arguing more than usual on IRC and my heart rate seems a bit high just sitting down. 6 PM: I've started to think it was a placebo. My heart rate is back to normal, I am having difficulty concentrating on long text, and my appetite has shown up for dinner (although I didn't have lunch, I don't think I had lunch yesterday and yesterday the hunger didn't show up until past 7). Productivity wise, it has been a normal day. All in all, I'm not too sure, but I think I'd guess it was Adderall with 40% confidence (another way of saying placebo with 60% confidence). When I go to examine the baggie at 8:20 PM, I find out… it was an Adderall pill after all. Oh dear. One little strike against Adderall that I guessed wrong. It may be that the problem is that I am intrinsically a little worse today (normal variation? come down from Adderall?). 
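Once a blinded series like the one described above is finished, one simple way to summarize it is a binomial test of whether the daily Adderall-vs-placebo guesses beat the 50% accuracy expected by chance. This is a generic sketch rather than the author's own scoring procedure, and the counts below are invented:

```python
# Hypothetical scoring of a blinded self-experiment: did the per-day
# "Adderall or placebo?" guesses beat the 50% accuracy expected by chance?
from scipy.stats import binomtest

correct_guesses = 11      # made-up example counts
total_days = 16
result = binomtest(correct_guesses, total_days, p=0.5, alternative="greater")
print(f"{correct_guesses}/{total_days} correct, one-sided p = {result.pvalue:.3f}")
```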
The stimulant now most popular in news articles as a legitimate "smart drug" is Modafinil, which came to market as an anti-narcolepsy drug, but gained a following within the military, doctors on long shifts, and college students pulling all-nighters who needed a drug to improve alertness without the "wired" feeling associated with caffeine. Modafinil is a relatively new smart drug, having gained widespread use only in the past 15 years. More research is needed before scientists understand this drug's function within the brain – but the increase in alertness it provides is uncontested. The nonmedical use of substances—often dubbed smart drugs—to increase memory or concentration is known as pharmacological cognitive enhancement (PCE), and it rose in all 15 nations included in the survey. The study looked at prescription medications such as Adderall and Ritalin—prescribed medically to treat attention deficit hyperactivity disorder (ADHD)—as well as the sleep-disorder medication modafinil and illegal stimulants such as cocaine. If you want to try a nootropic in supplement form, check the label to weed out products you may be allergic to and vet the company as best you can by scouring its website and research basis, and talking to other customers, Kerl recommends. "Find one that isn't just giving you some temporary mental boost or some quick fix – that's not what a nootropic is intended to do," Cyr says. Nootropics. You might have heard of them. The "limitless pill" that keeps Billionaires rich. The 'smart drugs' that students are taking to help boost their hyperfocus. The cognitive enhancers that give corporate executives an advantage. All very exciting. But as always, the media are way behind the curve. Yes, for the past few decades, cognitive enhancers were largely sketchy substances that people used to grasp at a short term edge at the expense of their health and well being. But the days of taking prescription pills to pull an all-nighter are so 2010. The better, safer path isn't with these stimulants but with nootropics. Nootropics consist of dietary supplements and substances which enhance your cognition, in particular when it comes to motivation, creativity, memory, and other executive functions. They play an important role in supporting memory and promoting optimal brain function. Smart drugs offer significant memory enhancing benefits. Clinical studies of the best memory pills have shown gains to focus and memory. Individuals seek the best quality supplements to perform better for higher grades in college courses or become more efficient, productive, and focused at work for career advancement. It is important to choose a high quality supplement to get the results you want. At this point I began to get bored with it and the lack of apparent effects, so I began a pilot trial: I'd use the LED set for 10 minutes every few days before 2PM, record, and in a few months look for a correlation with my daily self-ratings of mood/productivity (for 2.5 years I've asked myself at the end of each day whether I did more, the usual, or less work done that day than average, so 2=below-average, 3=average, 4=above-average; it's ad hoc, but in some factor analyses I've been playing with, it seems to load on a lot of other variables I've measured, so I think it's meaningful). Instead of buying expensive supplements, Lebowitz recommends eating heart-healthy foods, like those found in the MIND diet. 
Created by researchers at Rush University, MIND combines the Mediterranean and DASH eating plans, which have been shown to reduce the risk of heart problems. Fish, nuts, berries, green leafy vegetables and whole grains are MIND diet staples. Lebowitz says these foods likely improve your cognitive health by keeping your heart healthy. Speaking of addictive substances, some people might have considered cocaine a nootropic (think: the finance industry in Wall Street in the 1980s). The incredible damage this drug can do is clear, but the plant from which it comes has been used to make people feel more energetic and less hungry, and to counteract altitude sickness in Andean South American cultures for 5,000 years, according to an opinion piece that Bolivia's president, Evo Morales Ayma, wrote for the New York Times. Piracetam boosts acetylcholine function, a neurotransmitter responsible for memory consolidation. Consequently, it improves memory in people who suffer from age-related dementia, which is why it is commonly prescribed to Alzheimer's patients and people struggling with pre-dementia symptoms. When it comes to healthy adults, it is believed to improve focus and memory, enhancing the learning process altogether. Many of the positive effects of cognitive enhancers have been seen in experiments using rats. For example, scientists can train rats on a specific test, such as maze running, and then see if the "smart drug" can improve the rats' performance. It is difficult to see how many of these data can be applied to human learning and memory. For example, what if the "smart drug" made the rat hungry? Wouldn't a hungry rat run faster in the maze to receive a food reward than a non-hungry rat? Maybe the rat did not get any "smarter" and did not have any improved memory. Perhaps the rat ran faster simply because it was hungrier. Therefore, it was the rat's motivation to run the maze, not its increased cognitive ability that affected the performance. Thus, it is important to be very careful when interpreting changes observed in these types of animal learning and memory experiments. Exercise is also important, says Lebowitz. Studies have shown it sharpens focus, elevates your mood and improves concentration. Likewise, maintaining a healthy social life and getting enough sleep are vital, too. Studies have consistently shown that regularly skipping out on the recommended eight hours can drastically impair critical thinking skills and attention. Because these drugs modulate important neurotransmitter systems such as dopamine and noradrenaline, users take significant risks with unregulated use. There has not yet been any definitive research into modafinil's addictive potential, how its effects might change with prolonged sleep deprivation, or what side effects are likely at doses outside the prescribed range. When you drink tea, you're getting some caffeine (less than the amount in coffee), plus an amino acid called L-theanine that has been shown in studies to increase activity in the brain's alpha frequency band, which can lead to relaxation without drowsiness. These calming-but-stimulating effects might contribute to tea's status as the most popular beverage aside from water. People have been drinking it for more than 4,000 years, after all, but modern brain hackers try to distill and enhance the benefits by taking just L-theanine as a nootropic supplement. Unfortunately, that means they're missing out on the other health effects that tea offers. 
It's packed with flavonoids, which are associated with longevity, reduced inflammation, weight loss, cardiovascular health, and cancer prevention. Dopaminergics are smart drug substances that affect levels of dopamine within the brain. Dopamine is a major neurotransmitter, responsible for the good feelings and biochemical positive feedback from behaviors for which our biology naturally rewards us: tasty food, sex, positive social relationships, etc. Use of dopaminergic smart drugs promotes attention and alertness by either increasing the efficacy of dopamine within the brain, or inhibiting the enzymes that break dopamine down. Examples of popular dopaminergic smart drug drugs include Yohimbe, selegiline and L-Tyrosine. Eugeroics (armodafinil and modafinil) – are classified as "wakefulness promoting" agents; modafinil increased alertness, particularly in sleep deprived individuals, and was noted to facilitate reasoning and problem solving in non-ADHD youth.[23] In a systematic review of small, preliminary studies where the effects of modafinil were examined, when simple psychometric assessments were considered, modafinil intake appeared to enhance executive function.[27] Modafinil does not produce improvements in mood or motivation in sleep deprived or non-sleep deprived individuals.[28]
While it may be true that Evolutionary Anthropologists consider themselves scientists…
Ed Hagen
https://anthro.vancouver.wsu.edu/people/hagen

Casey Roulette, a former PhD student of mine who is now an assistant professor at San Diego State University, recently received an email from a member of the Biology Department who was irate that Casey's evolutionary anthropology course, Evolution of Human Nature, was being considered to fulfill "Natural Sciences" GE reqs. She informed him that she had complained to her Chair, who in turn complained to the Dean of the College of Sciences, and that "Both will be preparing a letter to be sent to the GE committee indicating that the College of Sciences does not support this course as a GE course." The email concluded:

While it may be true that Evolutionary Anthropologists consider themselves scientists and use the terms evolution and evolutionary, the "Evolutionary Biology" represented in this course does not reflect or represent modern Evolutionary Biology as defined by Evolutionary Biologists. In our meeting with the Instructor, who I know is an Assistant Professor, I thought it was not collegial to point out that his views on "evolutionary anthropology" are not considered accurate by those of us trained in Evolutionary Biology. It is deeply concerning that students could leave this campus with such an erroneous understanding of such a fundamental scientific process.

She attached an evaluation from the SDSU Department of Biology. The first part argued that, on programmatic grounds, the course did a good job fulfilling the Social Science reqs. I agree. The second part, however, echoed the email, arguing on scientific grounds that the course did not qualify as a GE course in the Natural Sciences because it presented a view of biology that was incomplete, biased and incorrect. That's odd. If true, Casey's course shouldn't qualify for GE in either the Social or Natural Sciences. In fact, his course shouldn't be taught at all.

SDSU biologists' first concern was that Casey presents evolution as synonymous with adaptation:

The "Evolutionary" approach presented in this class seems to present biological evolution as synonymous with adaptation via natural selection. Natural selection is but one of the five evolutionary forces that determine patterns of variation within and among species. For example, no information is provided on the very important role of Random Genetic Drift in shaping genetic variation. The practice of interpreting biological patterns only through the lens of adaptation has been labeled the "Adaptionist program". For four decades, biologists have recognized the folly of this approach. The phrase "The Fallacy of Intuitive Evolutionary Thinking" has also been used to describe the assumption that every biological feature has been optimized by selection. Evolutionary biologists accept the power of Darwinian natural selection but we do so understanding that natural selection is a complex process.
Their second concern was that Casey does not discuss genetic drift, and its role in human evolution: There is no discussion of the Neutral and Nearly Neutral models of Evolution, which have been the dominant models of molecular evolution for 50 years. It is also well established that Homo sapiens has an effective population size (Ne) of approximately 10,000. With such a small effective population size it is widely understood (among evolutionary biologists) that selection is inefficient in H. sapiens. Because of this evolutionary biologists would expect to see high levels of both fixed and segregating neutral and nearly neutral (slightly deleterious) alleles within and between human populations. Our most current understanding of genetic variation in does not fit in to the pan selectionist approach presented in this class. By "the pan selectionist approach" I suppose they are referring to the focus of Casey's course, Human Behavioral Ecology (HBE). The debate over adaptationism, however, is an ongoing debate, one that began more than a century ago. Each side has involved towering figures in evolutionary biology like Fisher, Wright, Williams, Hamilton, Maynard Smith, Gould, Lewontin, and Kimura. The neutralist–selectionist debate — are patterns of genetic variation primarily explained by random genetic drift or natural selection? — is, again, a debate. Does Casey's syllabus present both sides of these debates? The assigned reading in Week 2 of the course is Stephen Jay Gould's Sociobiology: the art of storytelling. It was Gould, of course, who with co-author Richard Lewontin, introduced the phrase "Adaptationist Programme" in their hugely influential article The spandrels of San Marco and the Panglossian paradigm: a critique of the adaptationist programme. One of Gould's key points is the important role of genetic drift. More importantly, one of two required books for the course is Laland and Brown's Sense and nonsense: Evolutionary perspectives on human behaviour. I would count Laland as a consistent critic of the adaptationist program (in favor of his alternative, niche construction, developed in collaboration with eminent population geneticist Marcus Feldman) whose book is a good-faith effort to describe the controversies as they apply to human evolution. Laland and Brown do discuss neutral theory, drift and molecular evolution, especially their intriguing parallels with cultural evolution. Casey also assigned chapter 7 of Evolution of Human Behavior, by Agustin Fuentes, who I would also count as a critic of adaptationism. Although Casey's course focuses on the evolution of the human behavioral phenotype and not molecular evolution, I see no evidence that the course, which also assigns Wrangham's Demonic males, is giving students a biased overview of the debates over adaptationism and neutralism vs. selectionism. And it's not quite true that neutral models "have been the dominant models of molecular evolution for 50 years." Instead, the intense debate has perhaps resulted in a consensus. According to evolutionary geneticists Charlesworth and Charlesworth, "From the late 1980s, the neutral theory came increasingly to be used as a null hypothesis, against which alternative hypotheses could be tested, including the models of the effects of selection on neutral or nearly neutral variability at linked sites….", a point also made by Masatoshi Nei and Kimura himself. 
At this point I should admit that I'm an unreconstructed ultra-Darwinian Fundamentalist who, each night, reads his daughters passages from the Selfish Gene. Depression is an adaptation! Drug use is an adaptation! This was my logo for the first iteration of this blog: Pangloss Casey, what's with the balanced overview? Did I teach you nothing? In my view, critics of adaptationism have things backwards. For adaptationists, the question is not, do constraints and noise play an important role in organism structure? The question is, why don't constraints and noise dominate organism structure? Constraints and noise permeate physical processes, yet organisms — intricate machines that surpass all human technology — somehow manage to make precise copies of themselves in hostile environments. It is exactly this problem, as I discuss in a bit more detail here, that adaptationists are trying to solve. The population genetics folks have it right: random genetic variation, and more generally, noise, by-products, constraints, and thermodynamically favored physics and chemistry, should always be the null hypotheses that an adaptationist hypothesis needs to beat. Course approval at SDSU is a pretty trivial topic, but the evidence for noise vs. selection in the human genome is not. I therefore thought I would use this post as an opportunity to learn a bit more about it. It's not meant to be a comprehensive review, just a taste of some major issues. I'm not a genetics guy — far from it — so if I make any mistakes, let me know in the comments. Population genetics 101 The SDSU biologists' argument that, due to a small effective population size (\(N_e\)), selection is "inefficient" in H. sapiens rests on a classic result from population genetics. Each generation is a sample of alleles from the previous generation. A particular allele might decrease the probability that individuals with the allele reproduce (negative selection), increase the probability (positive selection), or have no effect (neutral). This effect is quantified by the selection coefficient, \(s\), which is the difference in fitness, \(W\), between two alleles, the wild type, A, and a mutant type, B: \[ s = W_B - W_A \] Even when \(s=0\), the frequencies of A and B will change at least a little bit from generation to generation due to random differences in the survival and reproduction of individuals with A vs. B. On average, however, the frequency of a neutral mutant allele in a new generation is, under some strong simplifying assumptions, simply the frequency of the allele in the original generation. If the population is small, however, then the "sample size" is small. Just as we learn in statistics, the smaller the sample size, the larger the variance in the "sample estimate." Thus, in small populations, the frequency of a neutral allele will bounce around from generation to generation more than it would in large populations, eventually going to either 0 (lost) or 1 (fixed). In particular, the frequency of a new mutation is small, so there's a good chance that it won't be "sampled," and will thus be lost. But what if the allele isn't neutral? Naively, if an allele has a negative effect on fitness, \(s<0\), then its frequency should go to 0, and if it has a positive effect on fitness, \(s>0\), then its frequency should go to 1. Again, though, allele frequencies fluctuate randomly. If these fluctuations are often larger than the systematic effects of selection, then a deleterious allele can become fixed, and a beneficial allele can be lost. 
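To make the interplay of drift and selection concrete, here is a minimal Wright-Fisher-style simulation (my own sketch in Python, not from the post; the population sizes, selection coefficients, and replicate counts are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

def fate_of_mutant(N, s, max_gen=200_000):
    """Follow one new mutant allele (initial frequency 1/2N) until it is lost or fixed.

    Each generation: deterministic selection on the frequency, then binomial
    drift among the 2N gene copies.  Returns True if the mutant fixes.
    """
    p = 1.0 / (2 * N)
    for _ in range(max_gen):
        p = p * (1 + s) / (1 + p * s)           # selection
        p = rng.binomial(2 * N, p) / (2 * N)    # drift (sampling of 2N copies)
        if p == 0.0:
            return False
        if p == 1.0:
            return True
    return False  # unresolved within max_gen (rare); count as lost

def fixation_fraction(N, s, reps=5_000):
    return sum(fate_of_mutant(N, s) for _ in range(reps)) / reps

for N in (100, 10_000):
    for s in (+0.001, 0.0, -0.001):
        print(f"N={N:>6}, s={s:+.3f}: fixed in {fixation_fraction(N, s):.4f} of runs")
```

With N = 100 the beneficial, neutral, and deleterious alleles all fix at roughly the nearly neutral rate of about 1/(2N), whereas with N = 10,000 the beneficial allele fixes far more often than a neutral one and the deleterious allele essentially never does.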
This is much more likely to happen when a population has a small effective population size (for review of the \(N_e\) concept, see Charlesworth 2009). Generally, only when the magnitude of \(s\) is greater than the reciprocal of twice the effective population size will the fate of an allele be dominated by its fitness effects: \[ |s| > \frac{1}{2N_e} \] Otherwise, its fate will be dominated by drift. This is what SDSU biologists were referring to: as \(N_e\) decreases, the fate of alleles will increasingly be dominated by drift. The importance of selection also depends on \(s\), however, the fitness of a new allele relative to an existing allele, a fact that the SDSU biologists failed to mention. Figure 1, from Lanfear et al. 2014, illustrates the relationship between \(N_e\), \(s\), and the rate of evolution — the substitution rate vs. the mutation rate — conveniently using parameters that approximate those for ancestral Homo. The top panels illustrate positive selection, and the bottom panels negative selection. The left-hand panels show that the substitution rate increases not only as \(N_e\) increases, but also as \(s\) increases. The right-hand panels depict the relationship between the ratio of substitution rate to the mutation rate (on the y-axis) and \(N_es\). For \(N_es < 1\) (to the left of the dotted line), the substitution rate is approximately the mutation rate (drift). For \(N_es > 1\) (to the right of the dotted line), the substitution rate either begins to skyrocket beyond the mutation rate (positive selection) or drops to 0 (negative selection). Figure 1: The relationship between substitution rate (in substitutions per site per year) and effective population size (\(N_e\)) under genetic drift and natural selection (the \(N_eRR\)) [8]. These relationships were calculated assuming a mutation rate of \(1 \times 10^{-9}\) mutations per site per year, approximately that found in humans. (A,B) show the substitution rate of mutations for a range of positive (A) and negative (B) selection coefficients (denoted 's'). (C,D) show the same data, but in this case the y-axis shows the substitution rate relative to the mutation rate, and the x-axis shows the product of \(N_e\) and the selection coefficient for positive (C) and negative (D) mutations respectively. A dashed line highlights where \(N_es = 1\), below which mutations are often considered 'effectively neutral'. Note that genetic drift predicts a flat \(N_eRR\) for neutral mutations, where \(s = 0.00\) in (A,B). In (C,D), this is reflected by the substitution rate equaling the mutation rate, giving a value of 1 on the y-axis, when \(N_es = 0\). Figure and Caption from Lanfear et al. 2014. So, for humans, we need two pieces of evidence, (1) \(N_e\), and (2) the plausible range of values that \(s\) might take, referred to as the distribution of fitness effects (DFE) for new alleles.1 Empirical estimates of effective population size (\(N_e\)) Like all other lineages, the human lineage extends back to the origin of life, \(>3\) billion years ago. It is therefore informative to consider the \(N_e\) of our lineage over different periods of our evolution. Here is a relatively recent estimate of divergence times and \(N_e\)'s for the great apes, including humans, from Prado-Martinez et al. (2013): Figure 2: Population splits and effective population sizes (\(N_e\)) during great ape evolution.
Split times (dark brown) and divergence times (light brown) are plotted as a function of divergence (d) on the bottom and time on top. Time is estimated using a single mutation rate (μ) of \(1 \times 10^{-9}\) \(mut\) \(bp^{−1}\) \(year^{−1}\). The ancestral and current effective population sizes are also estimated using this mutation rate. The results from several methods used to estimate \(N_e\) (COALHMM, ILS COALHMM, PSMC and ABC) are coloured in orange, purple, blue and green, respectively. The chimpanzee split times are estimated using the ABC method. The x axis is rescaled for divergences larger than \(2 \times 10^{-3}\) to provide more resolution in recent splits. All the values used in this figure can be found in Supplementary Table 5. The terminal \(N_e\) correspond to the effective population size after the last split event. Figure and caption from Prado-Martinez et al. (2013). And here is a recent estimate of \(N_e\) for modern H. sapiens from about 200,000 years ago to the present, from Schiffels and Durbin (2014). Figure 3: Estimates of \(N_e\) for humans. The blue colors are African populations, and the other colors are non-African populations. The top panel used different data that included Native American haplotypes, with better resolution for older population changes than the bottom panel, but less resolution for more recent changes. Figure from Schiffels and Durbin (2014). Thus, based on estimated \(N_e\), and assuming a fixed DFE, the rate of evolution should have been relatively high when the great apes diverged from other apes about 20 million years ago, slowed when hominins diverged from chimpanzees at the end of the Miocene, slowed again, perhaps with the appearance of Homo around the beginning of the Pleistocene, and then very recently accelerated in the late Pleistocene and Holocene. The distribution of fitness effects (DFE) The DFE of new mutations is not necessarily fixed, however, but could have changed at different points in human evolution. Generally, deleterious mutations are expected to greatly outnumber beneficial ones because there are more ways to break things than improve them, especially if a population is well-adapted to its environmental niche. That means the DFE will generally be shifted toward negative values. If a population is moving into a new environmental niche, however (point A in Figure 4), the fitness of existing alleles might drop, pushing the distribution of fitness effects of new alleles to higher, often positive, values (distributions on the left). As the population adapts and reaches a fitness maximum (point B), new mutations will again almost always be detrimental (distributions on the right). Figure 4: Fitness landscapes and the distribution of fitness effects. Hypothetical distributions of fitness effects for populations of different sizes at different positions on a simple fitness landscape. A population far from the optimum (A) has a certain proportion of mutations that confer increases in fitness. However, a population at the hypothetical optimum of the landscape (B) cannot increase it fitness, so all mutations are deleterious. In both cases, the proportion of mutations that fall into different categories (Box 2, main text) changes depending on the effective population size. Note that, for simplicity, we have drawn a fitness landscape that varies along a single dimension, but the distributions we have drawn are more similar to those that would come from higher-dimensional fitness landscapes. 
Furthermore, it is unlikely that any natural population sits at the precise optimum of any fitness landscape. The selection coefficients are shown on natural scales, not log-transformed scales. Figure and caption from Lanfear et al. 2014. There are good reasons to believe that our lineage has undergone multiple niche changes, e.g., when hominins diverged from the chimpanzee lineage toward the end of the Miocene, when Homo diverged from other hominins around the beginning of the Pleistocene, when modern humans left Africa c. 50-100,000 years ago, and when transitioning to agriculture at the end of the Pleistocene and beginning of the Holocene c. 10,000 years ago. During these changes, the DFE could have been shifted to higher values, resulting in a higher rate of evolution despite small \(N_e\). Several other mechanisms have been proposed that might either change or stabilize the DFE in different species with different \(N_e\) and different levels of organism complexity (Figure 5). Figure 5: Overview of the main predictions of five theoretical models regarding DFE differences between two species. Here, E[s] is the average selection coefficient of a new mutation, and \(N_e\) is the effective population size. Figure and caption from Huber et al. 2017. Huber et al. 2017, for example, found that polymorphism data from humans, Drosophila, mice, and yeast best supported Fisher's Geometrical Model, which represents phenotypes as points in a multidimensional phenotype space, whose dimensionality is termed "complexity." Fitness is a decreasing function of the distance from the optimal phenotype. Because mutations in complex organisms are more likely to disrupt functionality, the average selection coefficient should be more negative in complex vs. simple organisms, a prediction supported by their data (Figure 6, panel A): Figure 6: Empirical support for FGM. (A) Both under the gamma DFE and the Lourenço et al. DFE, estimated average deleteriousness of mutations increases as a function of organismal complexity. (B) The shape parameter of the gamma DFE depends on the breadth of gene expression. Tissue-specific genes have a smaller shape parameter (α) than broadly expressed genes, supporting FGM. This pattern is consistent across overall expression levels. (C and D) By fitting the DFE of Lourenço et al., we can model slightly beneficial mutations in the DFE (green) that are thought to compensate for fixed deleterious mutations in species with small population size. We find support for a larger proportion of slightly beneficial mutations in the DFE of (C) humans than in (D) Drosophila. Figure and caption from Huber et al. 2017. Racimo and Schraiber (2014) criticize empirical studies of DFE that rely on fitting the data to a single probability distribution, however, such as normal or gamma. They found, instead, that in humans the DFE (for deleterious mutations only) had a bimodal distribution, with a large peak centered on \(s \sim 0\) (neutrality), and smaller peak between \(-10^{-5}\) and \(-10^{-4}\): Figure 7: Distribution of fitness effects among YRI polymorphisms in the Complete Genomics dataset, partitioned by the genomic consequence of the mutated site. The right panels show a zoomed-in version of the distributions in the left panels, after removing neutral polymorphisms and log-scaling the x-axis. A) DFE obtained from the genome-wide mapping. B) Zoomed-in version of panel A. C) DFE obtained from the exome-wide mapping. D) Zoomed-in version of panel C. 
E) DFEs for exonic sites (nonsynonymous, synonymous, splice sites) obtained from the exome-wide mapping and DFEs for non-exonic sites (intergenic, UTR, regulatory) obtained from the genome-wide mapping. F) Zoomed-in version of panel E. Consequences were determined using the Ensembl Variant Effect Predictor (v.2.5). Codon and degeneracy information was obtained from snpEff. If more than one consequence existed for a given SNP, that SNP was assigned to the most severe of the predicted categories, following the VEP's hierarchy of consequences. NonSyn = nonsynonymous. Syn = synonymous. Syn to unpref. codon = synonymous change from a preferred to an unpreferred codon. Syn to pref. codon = synonymous change from an unpreferred to a preferred codon. Syn no pref. = synonymous change from an unpreferred codon to a codon that is also unpreferred. Splice = splice site. Figure and caption from Racimo and Schraiber (2014). https://doi.org/10.1371/journal.pgen.1004697.g002 Racimo and Schraiber (2014) speculate that the absence of mutations with \(s < -10^{-4}\) might indicate a cutoff between weakly deleterious mutations that segregate in human populations and highly deleterious mutations that are quickly eliminated by negative selection. In summary, the rate of evolution depends on both the DFE and \(N_e\), and involves both negative (purifying) and positive selection; the DFE likely depends on changes to the environment, organism complexity, and perhaps other factors like robustness and back mutations. The human DFE is an active area of research. Standing variation and soft sweeps An important further consideration is that the stochastic effects of drift on the fate of beneficial alleles apply mainly to new mutations, because they are initially at very low frequency. Species harbor a large number of neutral, or nearly neutral, alleles, however, termed standing variation, which, because they are already at high frequency, are much less likely to be lost via drift. Moreover, these alleles already exist, so there is no waiting time for new beneficial mutations to appear. When a population moves into a new niche, these formerly neutral alleles can become either deleterious or beneficial and will respond quickly to selection, resulting in a soft sweep (Hermisson and Pennings 2005). Figure 8, from Barrett and Schluter 2008, illustrates that, for smaller values of \(\alpha_b = 2N_es_b\) (where the \(b\) subscript indicates "beneficial"), beneficial standing variants (solid line) have a very high fixation probability relative to new mutations (dashed line): Figure 8: The probability of fixation of a single new mutation (dashed curve) compared with that of a polymorphic allele that arose in a single mutational event (solid curve). \(\alpha_b = 2N_es_b\), where \(N_e\) is the effective population size and \(s_b\) is the homozygous fitness advantage. The form of the curve for standing variation in this example assumes that \(N = N_e = 25,000\), the dominance coefficient (h) = 0.5 and that beneficial alleles were previously neutral. \(\alpha_b\) is plotted on a logarithmic scale. Figure and caption from Barrett and Schluter 2008. In a study that used a novel machine learning technique to identify hard sweeps (new mutations that go to fixation under positive selection) and soft sweeps across the human genome, Schrider and Kern (2017) found that the vast majority, over 90%, were soft sweeps.
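To put rough numbers on Figure 8's message, here is a small sketch (mine, not from Barrett and Schluter) using Kimura's diffusion approximation for the fixation probability; additive selection (h = 0.5) is assumed so that \(\alpha_b = 2N_es_b\), and the 5% starting frequency for the standing variant and the grid of \(\alpha_b\) values are illustrative assumptions:

```python
import math

def fixation_prob(p0, alpha):
    """Kimura's diffusion approximation for the probability that an allele at
    initial frequency p0 eventually fixes, with alpha = 2 * Ne * s_b (additive
    selection, h = 0.5, s_b the homozygous advantage).  alpha = 0 reduces to p0."""
    if alpha == 0:
        return p0
    return (1 - math.exp(-alpha * p0)) / (1 - math.exp(-alpha))

N = 25_000            # population size, as in Barrett & Schluter's example
p_new = 1 / (2 * N)   # a single new mutant copy
p_standing = 0.05     # an illustrative standing variant (assumed frequency)

print(f"{'alpha':>8} {'new mutation':>14} {'standing (5%)':>14} {'ratio':>8}")
for alpha in (0.1, 1, 10, 100, 1000):
    u_new = fixation_prob(p_new, alpha)
    u_std = fixation_prob(p_standing, alpha)
    print(f"{alpha:>8} {u_new:>14.6f} {u_std:>14.6f} {u_std / u_new:>8.1f}")
```

Even a variant segregating at only 5% is orders of magnitude more likely to fix than a single new copy of the same allele, which is why adaptation from standing variation is much less hostage to a small \(N_e\).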
Schrider and Kern also found that patterns of variation in perhaps half the genome have been affected by a nearby sweep: Figure 9: The number of windows assigned to each class by S/HIC in each population. This is a new technique based on simulated training data. Schrider and Kern acknowledge that their results may be surprising given the apparently small effective population size and low nucleotide diversity levels in humans. However, if the mutational target for the trait to be selected on is fairly large, then the probability of a population harboring a mutation affecting that trait may be appreciable. Variance in \(N_e\) can also be important in determining the relative importance of hard vs. soft sweeps (Messer and Petrov 2013). \(N_e\) estimated from sequence data can be dominated by short phases during which \(N_e\) was small, even though \(N_e\) was large for long periods during which adaptation by positive selection was much more likely. In summary, human adaptation by natural selection was often via soft sweeps, in which \(N_e\) plays a much smaller role. The effects of population structure on the substitution rate Another consideration is that the population genetics models of the effects of \(N_e\) and \(s\) on substitution rates make simplifying assumptions, such as random mating (i.e., a lack of population structure), that might easily be violated in real populations. Recent theoretical work has found that population structure can either suppress or amplify the effects of selection (e.g., Frean et al. 2013). Phenotypic evidence for human evolution The final piece of background information to keep in mind is the physiological, morphological, and behavioral evidence for human evolution that has informed the studies of evolutionary biologists from Charles Darwin in the 19th century to entire departments of evolutionary biologists in the 21st. Perhaps the two most dramatic phenotypic examples of human-specific adaptations are (1) bipedalism, and (2) the substantial increase in brain size in humans relative to hominin ancestors and extant apes, most of which occurred since the first appearance of Homo over 2 million years ago (Figure 10): Figure 10: The evolution of primate cranial capacity. Each dot is a fossil specimen. The x-axis is on a log scale. From Schoenemann (2013) Two recent studies of variation in cranial dimensions in apes and humans (Weaver and Stringer 2015; Schroeder and von Cramon-Taubadel 2017) found that, whereas great ape cranial evolution was largely characterized by strong stabilizing selection, the divergence of Homo from its last common ancestor with chimpanzees was explained by strong directional selection. The ability to learn language is a widely accepted cognitive difference between humans and chimpanzees that is probably rooted in human encephalization, as might be the cognitive abilities underlying cumulative culture, which is arguably a uniquely human trait (for a review of the evidence for cumulative culture in humans and other animals, see Dean et al. 2013). The key point here is that there is overwhelming phenotypic evidence that human psychology did evolve under positive natural selection. Detailed comparisons of other important aspects of human and great ape phenotypes are available in the Matrix of Comparative Anthropogeny (MOCA) of the Center for Academic Training and Research in Anthropogeny (CARTA).
Empirical evidence that links genetic evolution with phenotype evolution We are only at the beginning of a very long quest to link phenotypes, including the evolved behavioral and cultural phenotypes that are the topic of Casey's course, to the genome, the focus of the SDSU biologists' letter. Nevertheless, we know a lot more today than we knew 10 or even 5 years ago. Functional vs. junk DNA Genomes can be divided into two parts: functional DNA, which plays a profound role in the phenotype, and "junk" DNA, which plays no role in the phenotype. Functional DNA comprises protein coding regions and non-coding regulatory regions. We now have a pretty good understanding of which DNA sequences code for protein. It is much more difficult, however, to distinguish regulatory DNA from "junk" DNA, and hence to determine the fraction of the genome that is functional, and therefore subject to evolution by natural selection. Most attempts to distinguish functional from junk DNA involve identifying sequences that are conserved across species, and are therefore presumed to be under purifying selection (constrained sequences) and thus functional, vs. those that are not conserved across species and so presumably have not been under purifying selection and are therefore likely non-functional "junk" (cf. ENCODE). A recent estimate (Rands et al. 2014) is that 8.2% (7.1–9.2%) of the human genome is functional (constrained). Protein coding sequences comprise about 1% of the genome, and are highly conserved across the mammals (Figure 11, red). Non-coding regulatory sequences have much higher rates of turnover (turnover refers to the loss or gain of purifying selection at a particular locus of the genome caused by changes in the physical or genetic environment, or mutations at the locus itself, that switch it from being functional to being non-functional or vice versa). See Figure 11: Figure 11: Schematic summary of the fraction of constrained sequence that has been retained (saturated colours) or turned over (pastel colours) in the human lineage over time (X-axis, divergence time) and how it has been distributed across various categories of functional element. In addition to showing the reduced quantity of preserved constrained sequence with increasing divergence, we infer the reciprocal quantity of sequence that is assumed to have been gained over human lineage evolution. Figure and caption from Rands et al. 2014. Because \(>90\%\) of the human genome appears to be non-functional, sequence variation in this portion should closely follow the neutral model. Our focus, then, is on the evolution of the \(\sim 8\%\) of the genome that is functional. Conserved traits Perhaps the most important point is that evolutionary anthropologists are keenly interested in the adaptations we share with primates, mammals, vertebrates, and so on. Examples include lactation, the immune system, vision and bitter taste and other plant toxin defense mechanisms that are central to Casey's research. The major features of these adaptations evolved long before the appearance of Homo, with its small \(N_e\), but would need to have been maintained by purifying selection during the evolution of Homo. Much of the genetic basis of adaptations we share with other mammals and primates almost certainly lies in the \(\sim 2\%\) of constrained sequence we share with all other mammals and the \(\sim 6-7\%\) we share with other primates (out of 8.2% total; Figure 11).
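Some rough arithmetic gives a sense of scale here (my own back-of-envelope sketch; the ~3.1 Gb genome size and the 6-million-year split time are assumptions not stated in the post, while the 8.2%, ~1%, and \(1 \times 10^{-9}\) figures come from the sources cited above):

```python
# Back-of-envelope scale of the functional vs. "junk" genome.
GENOME_BP        = 3.1e9   # haploid human genome size (assumption)
CONSTRAINED_FRAC = 0.082   # ~8.2% constrained (functional), Rands et al. 2014
CODING_FRAC      = 0.01    # ~1% protein-coding

functional_bp = GENOME_BP * CONSTRAINED_FRAC
coding_bp     = GENOME_BP * CODING_FRAC
regulatory_bp = functional_bp - coding_bp   # constrained but non-coding

print(f"constrained (functional): ~{functional_bp / 1e6:.0f} Mb")
print(f"protein-coding:           ~{coding_bp / 1e6:.0f} Mb")
print(f"constrained non-coding:   ~{regulatory_bp / 1e6:.0f} Mb "
      f"({regulatory_bp / coding_bp:.1f}x the coding fraction)")

# Expected neutral divergence from chimpanzees in the non-functional ~92%,
# assuming the 1e-9 per-site-per-year mutation rate used in the figures above
# and a split roughly 6 million years ago (both lineages accumulate mutations).
mu, t_split = 1e-9, 6e6
print(f"expected neutral human-chimp divergence: ~{2 * mu * t_split:.1%} per site")
```

On these numbers, constrained non-coding sequence outweighs protein-coding sequence roughly sevenfold, and the bulk of the genome is expected to diverge at close to the neutral rate.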
Moreover, many complex adaptations have evolved features that enable them to adjust ontogenetically to local environmental conditions, an ability that I and many others have framed in strategic terms. Examples include the immune system and induction of xenobiotic metabolizing enzymes. Humans would not be doing too horribly in a wide range of environments even if there were no human-specific adaptations. Human-specific selection in regulatory sequences Despite the low \(N_e\) in Homo, there is increasing genetic evidence for positive natural selection in the human lineage since our divergence from chimpanzees. Given that protein-coding sequences are highly conserved across the mammals, and that most functional DNA comprises regulatory sequences, most human-specific adaptations should be grounded in changes to regulatory sequences. Human accelerated regions (HARs) are short, evolutionarily conserved DNA sequences that have acquired significantly more DNA substitutions than expected in the human lineage since divergence from chimpanzees. HARs are often, but not always, the product of positive natural selection (other mechanisms include relaxation of constraint, which allows a region to acquire more mutations than it would under purifying selection, and GC-biased gene conversion; Franchini and Pollard 2017). Franchini and Pollard 2017 summarized multiple studies that attempted to discover HARs, which differed in the number of species considered to determine "conserved" (which ranged from a few primate species to multiple primate, mammalian and vertebrate species), sequence filtering criteria (e.g., including or excluding coding sequences), and statistical tests for acceleration. Although each study identified a large number of HARs, there was only limited overlap in the identified regions. See Figure 12: Figure 12: Identification of human accelerated elements. Top: the four different approaches used to identify human accelerated regions. Some key differences include (i) the conserved elements used as candidates to identify HARs (which depend on multiple sequence alignments, methods to detect conservation, and whether human was masked in the alignments), (ii) bioinformatics filters that aim to restrict to non-coding elements and/or remove assembly or alignment artifacts, and (iii) tests used to detect acceleration. Bottom: overlap of the different datasets of human accelerated regions. Abbreviations: ANC accelerated conserved non-coding sequences [20]; HACNS human accelerated conserved non-coding sequences [23]; HTBE human terminal branch elements [21]. HARs include the original HARs [19] and the second generation HARs or 2xHARs [100]. Figure and caption from Franchini and Pollard 2017. Most studies of HARs used only sequence data. Some recent studies, though, also incorporated expression data. Gittelman et al. 2015, for example, used maps of DNase I hypersensitive sites (DHSs) from ENCODE and the Roadmap Epigenomics Projects. DHSs are regions of chromatin that serve as markers of regulatory DNA. Of 2,093,197 DHS loci, 113,577 exhibited significant constraint across the primates. DHSs active in fetal cell types, especially fetal brain cells, showed the highest levels of conservation in non-human primates. Of the conserved loci, 524 were accelerated in human evolution (haDHSs), evolving at approximately four times the neutral rate in the human lineage, mostly but not exclusively under positive selection, while other primate lineages evolved at less than half of the neutral rate. Gittelman et al. 
found that haDHSs tend to target developmentally and neuronally important genes relative to conserved DHSs, which themselves are already highly enriched for these categories. In another review of several previous studies of HARs, Levchenko et al. 2018 noted that of the 3500 candidates identified so far, most are in non-coding regions, estimates of the fraction under positive selection range from 15%-85%, many are active in the brain, consistent with this major phenotypic difference between humans and chimpanzees, and about 7-8% are not shared with Neanderthals or Denisovans, consistent with the time of their divergence from the modern human lineage. Recent human evolution Human \(N_e\) started a dramatic increase some 40-50,000 years ago as humans entered multiple new environments. Hawks et al. (2007) predicted and found that "selection has accelerated greatly during the last 40,000 years." Co-authors Cochran and Harpending followed up with a book, The 10,000 year explosion: How civilization accelerated human evolution (you can find my review of it here). Fan et al. (2016) review many examples of recent, population-specific human adaptations to local environments, such as the Arctic, tropical rainforests, and high altitude: Figure 13: Examples of human local adaptations, each labeled by the phenotype and/or selection pressure, and the genetic loci under selection. Figure and caption from Fan et al. (2016). Evolutionary anthropologists and human behavioral ecologists investigate the relationships between human environments and humans' intricate molecular, cellular, anatomical, and behavioral phenotypes, most of which we inherited from primate, mammalian and earlier ancestors that evolved long before our lineage experienced a low \(N_e\). The extent to which a small \(N_e\) limited human-specific evolution is still under investigation. It certainly didn't reduce it to zero, and there are many other factors that might have accelerated it. There is abundant phenotypic evidence for human-specific adaptations, after all, and there is increasing genetic evidence that positive selection played an important role in our evolutionary history, especially selection on standing variation, which is not limited by low \(N_e\). Reading through the papers I discussed here, I was encouraged that folks studying human genetic variation seemed interested in, not hostile to, the many adaptationist hypotheses put forward by evolutionary anthropologists and behavioral ecologists. Charlesworth and Charlesworth, for instance, highlight the importance of kin selection and evolutionary game theory, two theoretical foundations of behavioral ecology. O'Bleness et al. (2012) give a nod to Lieberman's endurance running hypothesis and many other phenotypic comparisons of humans and non-human primates. So what's up with the SDSU biologists? The debates over adaptationism and neutralism vs. selectionism have all the hallmarks of a sectarian conflict. Contrary to our rhetoric, academics are among the most ethnocentric of an infamously ethnocentric species. Perhaps — dare I say it? — ethnocentrism is part of human nature. If so, it's true of us all. But, among the primates, our species also evolved a unique ability for group alliances. The SDSU biologists are doing cool stuff. Casey is doing cool stuff. Knowing Casey as I do, he doesn't need my advice, but I'll give it anyway: I doubt the SDSU biologists speak with one voice. There is as much diversity of opinion within groups as there is between them, maybe more.
You are one of the very few SDSU social scientists who has a professional interest in what the SDSU biologists are doing. I suspect many of them recognize that.
2018/7/15: minor edits to this post to improve clarity.
1. There is debate about the human mutation rate. See Scally and Durbin 2012 and Scally 2016.
For attribution, please cite this work as Hagen (2018, March 3). Grasshoppermouse: While it may be true that Evolutionary Anthropologists consider themselves scientists.... Retrieved from https://grasshoppermouse.github.io/posts/2018-03-03-while-it-may-be-true-that-evolutionary-anthropologists-consider-themselves-scientists-and-use-the-terms-evolution-and-evolutionary/
Root numbers, Selmer groups, and non-commutative Iwasawa theory
Authors: John Coates, Takako Fukaya, Kazuya Kato and Ramdorai Sujatha
Journal: J. Algebraic Geom. 19 (2010), 19-97
DOI: https://doi.org/10.1090/S1056-3911-09-00504-9
Published electronically: April 15, 2009. MathSciNet review: 2551757
Abstract: Let $E$ be an elliptic curve over a number field $F$, and let $F_\infty$ be a Galois extension of $F$ whose Galois group $G$ is a $p$-adic Lie group. The aim of the present paper is to provide some evidence that, in accordance with the main conjectures of Iwasawa theory, there is a close connection between the action of the Selmer group of $E$ over $F_\infty$, and the global root numbers attached to the twists of the complex $L$-function of $E$ by Artin representations of $G$.
Received by editor(s): July 1, 2007; in revised form: October 24, 2007.
What happens to the energy when waves perfectly cancel each other (destructive interference)? It appears that the energy "disappears," but the law of conservation of energy states that it can't be destroyed. My guess is that the kinetic energy is transformed into potential energy. Or maybe it depends on the context of the waves: where does the energy go? Can someone elaborate on that or correct me if I'm wrong? Tags: energy, waves, energy-conservation, interference, superposition. Asked by aortizmena.
Waves always travel. Even standing waves can always be interpreted as two traveling waves that are moving in opposite directions (more on that below). Keeping the idea that waves must travel in mind, here's what happens whenever you figure out a way to build a region in which the energy of such a moving wave cancels out fully: If you look closely, you will find that you have created a mirror, and that the missing energy has simply bounced off the region you created. Examples include opals, peacock feathers, and ordinary light mirrors. The first two reflect specific frequencies of light because repeating internal structures create physical regions in which that frequency of light cannot travel - that is, a region in which near-total energy cancellation occurs. An optical mirror uses electrons at the top of their Fermi seas to cancel out light over a much broader range of frequencies. In all three examples the light bounces off the region, with only a little of its energy being absorbed (converted to heat). A skip rope (or perhaps a garden hose) provides a more accessible example. First, lay out the rope or hose along its length, then give it a quick, sharp clockwise motion. You get a helical wave that travels quickly away from you like a moving corkscrew. No standing wave, that! You put a friend at the other end, but she does not want your wave hitting her. So what does she do? First she tries sending a clockwise wave at you too, but that seems to backfire. Your wave if anything seems to hit harder and faster. So she tries a counterclockwise motion instead. That seems to work much better. It halts the forward progress of the wave you launched at her, converting it instead to a loop. That loop still has lots of energy, but at least now it stays in one place. It has become a standing wave, in this case a classic skip-rope loop, or maybe two or more loops if you are good at skip rope. What happened is that she used a canceling motion to keep your wave from hitting her. But curiously, her cancelling motion also created a wave, one that is twisted in the opposite way (counterclockwise) and moving towards you, just as your clockwise wave moved towards her. As it turns out, the motion you are already doing cancels her wave too, sending it right back at her. The wave is now trapped between your two cancelling actions. The sum of the two waves, which now looks sinusoidal instead of helical, has the same energy as your two individual helical waves added together. I should note that you really only need one person driving the wave, since any sufficiently solid anchor for one end of the rope will also prevent the wave from entering it, and so ends up reflecting that wave just as your friend did using a more active approach. Physical media such as peacock feathers and Fermi sea electrons also use a passive approach to reflection, with the same result: The energy is forbidden by cancellation from entering into some region of space.
So, while this is by no means a complete explanation, I hope it provides some "feel" for what complete energy cancellation really means: It's more about keeping waves out. Thinking of cancellation as the art of building wave mirrors provides a different and less paradoxical-sounding perspective on a wide variety of phenomena that alter, cancel, or redirect waves. – Terry Bollinger
Comments: Antireflection coatings (ARC) on solar cells work by facilitating destructive interference between partially reflected waves from the air-ARC interface and ARC-solar cell interface. It is well established that in this scenario, there is minimum reflection and maximum transmission through the ARC. So destructive interference does not carry energy. Quantum mechanically, the EM wave is the wave function for photons. Photons actually carry energy. So, destructive interference => no waves => no photons => no energy. Is it appropriate to compare destructive interference in mechanical waves and EM waves? – user103515 Jan 19 '18 at 6:25
Ah... Yes? Just quantized. – Terry Bollinger Jan 19 '18 at 7:32
@TerryBollinger, but your answer basically relies on the fact that the 2 waves do not travel in the same direction and thus any cancelation is only instantaneous and not relevant to conservation of energy. But assuming waves can be reflected perfectly without any loss of energy, consider the following scenario: Suppose that we have 2 sources of same frequency light aimed directly at each other; call this line l horizontal, and let P be a plane containing l and consider a perfectly reflective square (or cube) S contained in P oriented 45 degrees to l and intersecting l on 2 adjacent – Hao Sun Dec 25 '18 at 23:52
sides of S so that the 2 light sources are perpendicularly reflected (in the same direction); now slide the square S perpendicularly to l so that it intersects l at a single point p \in l. Now we have a way to "combine" 2 waves that are not in the same direction and we can choose p so that the two waves are "precisely" out of sync and cancel perfectly. Of course this does not address quantum @TerryBollinger (sorry, stackexchange doesn't allow long comments for some reason). – Hao Sun Dec 26 '18 at 0:04
You have to remember, energy is the ability to do work. Work causes displacement along a vector. If wave A causes exactly opposite displacement to wave B, the waves will fully cancel (they do not form a standing wave!). This is not because energy has been lost. It's because you have combined negative and positive energy to create zero energy. Conservation of energy is not violated because -1 + 1 = 0. Equilibriums do not violate the laws of physics... – CommaToast Apr 5 at 19:56
We treated this a while back at University... First of all, I assume you mean global cancellation, since otherwise the energy that is missing at the cancelled point simply is what is added to points of constructive interference: Conservation of Energy is only global. The thing is, if multiple waves globally cancel out, there are actually only two possible explanations: (1) One (or more) of the sources is actually a drain and converts wave energy into another form of energy (e.g. whatever is used to generate the waves in sources, like electricity, and also as Anna said, very often heat); (2) You are calculating with parts of a mathematical expansion which are only valid when convoluted with a weight function or distribution.
For example, plane waves physically don't exist (but when used in the Fourier transform they are still very useful) because their total energy is infinite. – Tobias Kienzler

Just in case anyone (e.g. a student) would be interested in the simple answer for mechanical waves:

CASE 1 (global cancellation): Imagine that you have a crest pulse moving right and an equally large trough pulse moving left. For a moment they "cancel", i.e. there is no net displacement at all, because the two opposite displacements cancel out. However, the velocities add up and are twice as large, meaning that all the energy in that moment is stored as kinetic energy. An instructive and opposite situation happens when two crest pulses meet. For a moment, the displacements add up and are twice as large, meaning that all the energy in that moment is stored as potential energy, as the velocities on the other hand cancel out. Because the wave equation is a linear differential equation, you can superpose different waves, $\psi_{12} = \psi_1 + \psi_2$. As a consequence, after meeting, both crest pulses (or the crest/trough pair) keep traveling as if nothing had happened. It is instructive that you can add velocities separately from amplitudes, as $\dot{\psi}_{12} = \frac{\partial}{\partial t} (\psi_1 + \psi_2) = \dot{\psi}_1 + \dot{\psi}_2$. So even if the amplitudes do cancel out at a given moment ($\psi_1 + \psi_2 = 0$), the speeds do not ($\dot{\psi}_1 + \dot{\psi}_2 \ne 0$). It is just as when you see an oscillator at its equilibrium position at a given moment: that does not mean that it is not oscillating, as it still might possess velocity. If we generalize the above: in any wave you have an exchange between two types of energy: kinetic vs. potential, or magnetic vs. electrical. You can make two waves such that one of the energies cancels, but the other energy will become twice as big.

CASE 2 (local cancellation): In the case of spatial interference of two continuous waves there are areas of destructive and areas of constructive interference. Energy is no longer uniformly distributed in space, but on average it equals the added-up energies of the two waves. E.g. looking at standing waves, there is no energy at the nodes of the standing wave, while at the antinodes the energy is four times the energy of one wave - giving a space average of twice the energy of one wave. More engineer-like explanations can be found here: http://van.physics.illinois.edu/qa/listing.php?id=1891 (A short numerical sketch of this kinetic/potential bookkeeping for two colliding pulses is given below.)

Maybe the question can simply be answered by the observation that a wave like $$\Psi(x,t)=A \cos(x)-A \cos(x+\omega\ t),$$ where the two cosines cancel at periodic times $$t_n=\frac{2\pi}{\omega}n\ \ \longrightarrow\ \ \Psi(x,t_n)=0,$$ still has nonvanishing kinetic energy, if it looks something like $$E=\sum_\mu\left(\frac{\partial \Psi}{\partial x^\mu} \right)^2+\ ...$$ You really would have to construct an example. Since non-dissipative waves whose equations of motion can be formulated by a Lagrangian will have an energy associated to them, as you say, you'd have to find a situation/theory without an energy quantity. The energy is related to the wave by its relation to the equation of motion. So if the energy is defined as that which is constant because of time symmetry and you don't have such a thing, then there is no question. Also don't make the mistake of talking about two different waves with different energy. If you have a linear problem, the wave will be "one wave" in the energy expression, wherever its parts may wander around.
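The CASE 1 bookkeeping above is easy to check numerically. Below is a minimal sketch (my own illustration, not from the thread); the pulse shapes, wave speed and mass density are arbitrary choices. It propagates a crest and an equal trough toward each other with the d'Alembert solution and prints kinetic, potential and total energy before, at, and after the instant of complete cancellation.

# Minimal sketch: two counter-propagating pulses on an ideal string, using the
# d'Alembert solution psi(x,t) = f(x - c t) + g(x + c t). At the instant the
# crest and the inverted trough overlap, the displacement cancels everywhere,
# but the total energy is unchanged: it is momentarily all kinetic.
import numpy as np

c, mu = 1.0, 1.0            # wave speed and linear mass density (illustrative)
T = mu * c**2               # string tension consistent with c = sqrt(T/mu)
x = np.linspace(-20, 20, 4001)
dx = x[1] - x[0]

f = lambda s: np.exp(-(s + 5.0)**2)     # right-moving crest, starts at x = -5
g = lambda s: -np.exp(-(s - 5.0)**2)    # left-moving trough, starts at x = +5

def fields(t):
    psi = f(x - c*t) + g(x + c*t)
    # time derivative of the d'Alembert solution: -c f'(x-ct) + c g'(x+ct)
    dpsi_dt = -c * np.gradient(f(x - c*t), dx) + c * np.gradient(g(x + c*t), dx)
    dpsi_dx = np.gradient(psi, dx)
    return psi, dpsi_dt, dpsi_dx

for t in [0.0, 5.0, 10.0]:              # t = 5 is the cancellation instant
    psi, vt, vx = fields(t)
    ke = 0.5 * mu * np.trapz(vt**2, x)  # kinetic energy
    pe = 0.5 * T * np.trapz(vx**2, x)   # potential (stretching) energy
    print(f"t={t:4.1f}  max|psi|={np.abs(psi).max():.3f}  KE={ke:.4f}  PE={pe:.4f}  total={ke+pe:.4f}")

At t = 5 the printed max|psi| is essentially zero while the total energy matches the earlier and later values, with the potential share transferred entirely into kinetic energy, exactly as described in CASE 1.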
edit: See also the other answer(s) for a discussion of a more physical reading of the question. – Nikolaj-K

I think a good way to approach this question is with a Mach-Zehnder interferometer: the field landing on detector 1 is the interference between two waves, one from the lower path and one from the upper path. Let's suppose the field in each arm is a collimated beam of coherent light, well-approximated as a plane wave, and the interferometer is well-aligned, so the two outputs are almost perfectly overlapped. By changing the thickness of the sample, we can change the relative phase between the two waves, changing our interference from destructive (less energy on detector 1) to constructive (more energy on detector 1). Where did this energy come from? If the beams are nicely matched, this interference can even be completely destructive, and detector 1 will register zero signal. Where does the energy go? The short answer is: detector 2. The total energy hitting the two detectors is constant as you vary the phase shift caused by the sample. Constructive interference at detector 1 goes hand-in-hand with destructive interference at detector 2. If you only look at one detector or the other, it might seem like energy is created or destroyed by interference, but as other answers mention, we must consider the whole system. (A small numerical check of this two-detector energy balance is given further down the thread.) – Andrew

I had prepared this answer for a question that was marked as a duplicate, so here it comes, because I found an instructive MIT video (the second link). This answer is mainly for electromagnetic waves. Have a look at this video to get an intuition of how interference appears photon by photon in a two-slit experiment. It comes about because the probability distribution for the photons, as accumulated on the screen, has destructive and constructive patterns, ruled by the underlying quantum mechanical solution of "photon + two slits". The classical electromagnetic wave emerges from a great plethora of photons which have phases such that they build up the electric and magnetic fields. The nu in the E=h*nu of the photon is the frequency of the electromagnetic wave that emerges from the confluence of the individual photons. In order to get an interference pattern the photons have to react with a screen, or some matter, as in the laser experiments. The reason that matter is needed for light interference phenomena is the very small electromagnetic coupling constant. Photon-photon interactions, due to the 1/137 coupling, end up having a probability of interaction of order ~10^-8. Compared with photon-electron interactions, which to first order are ~10^-2 (and are the main photon-matter interaction), that is 6 orders of magnitude smaller. To all intents and purposes, two crossing laser beams will go through each other without any measurable interaction (interference patterns may exist, but they are not photon-photon interactions, rather quantum mechanical superpositions). Keep this in mind when you reach the last question at the end of the next video. This MIT video is instructive: a real experiment showing that in a destructive-interference setup with interferometers there is a return beam, back to the source, as far as classical electromagnetic waves go. So the energy is balanced by going back to the source. What is happening at the photon level? What if the laser emitted photons one by one, as in the two-slit video?
I will hand-wave, as there is no corresponding video to show: the quantum mechanical solution with the complicated boundary values of the interferometer allows the elastic scattering (not small; that is how we get reflections) of photons also back to the source. You can see in the video that there always exists a beam going back to the source; that beam is carried by individual photons scattering elastically backwards through the optics of the interferometer. In total destructive interference all the energy is reflected back (minus some due to absorption and scattering in the matter of the optical system). In essence this experiment is a clear demonstration that the laser-optical-bench system is in a coherent quantum mechanical state, the returning photons joining the ensemble of photons within the laser action, which also includes reflections to be generated. In this video, the first beam carries the information of the phases such that in space interference patterns will form if a screen or other matter intervenes. The energy of the final beam, after it leaves the interferometer system and falls on the screen, is redistributed according to the interference pattern. The amount of energy carried by the beam there depends on the proportion of energy that manages to leave the interferometer/laser system, i.e. whether all of the energy is returned to the laser (destructive interference), or a part of it goes out of the lasing system to impinge on the screen. In the case of waves in matter, such as sound waves or water waves: for two sound waves interfering destructively, the temperature of the medium will go up and energy is conserved because it turns into incoherent kinetic energy of the molecules of the medium. For two water waves, ditto. – anna v

This figure shows two common situations. The top is an example where the waves are coming from different directions--one from "S1", one from "S2". Then there is destructive interference in some areas ("nodes") and constructive interference in others ("hot spots"). The energy has been redistributed but the total amount of energy is the same. The bottom is an example where the two sources S3 and S4 are highly directional plane-wave emitters, so that they can destructively interfere everywhere they overlap. For that to happen, the source S4 itself has to be sitting in the field of S3. Then actually what is happening is that S4 is absorbing the energy of S3. (You may think that running the laser S4 will drain its battery, but ideally, the battery can even get recharged!) – Steve Byrnes

What about momentum conservation? Do you have a link for an experiment on this? – anna v

http://www.opticsinfobase.org/josaa/abstract.cfm?uri=josaa-27-11-2468 Oblique superposition of two elliptically polarized lightwaves using geometric algebra: is energy–momentum conserved? Michelle Wynne C. Sze, Quirino M. Sugon, Jr., and Daniel J. McNamara, JOSA A, Vol. 27, Issue 11, pp. 2468-2479 (2010). We added the two elliptically polarized waves and computed the energy–momentum density of their sum. We showed that energy and momentum are not generally conserved, except when the two waves are moving in opposite directions. We also showed that the momentum of the superposition has an extra component perpendicular to the propagation directions of both waves.
But when we took the time-average of the energy and momentum of the superposition, we found that the time-average energy and momentum could also be conserved if both waves are circularly polarized but with opposite handedness, regardless of the directions of the two waves. The non-conservation of energy and momentum of the superposition of two elliptically polarized plane waves is not due to the form of the plane waves themselves, but rather to the accepted definitions of the electromagnetic energy and momentum. Perhaps we may need to modify these definitions in order to preserve the energy–momentum conservation. In our computations, we restricted ourselves to the superposition of two waves with the same frequency. – Quirino Sugon Jr.

Think of light as photons; then black spots mean that no photons were detected and bright spots mean lots of photon detections. Since the energy of photons is always quantized, $E=h\nu$, there would be no problem with energy conservation. One question arises: what makes a photon "arrive" here and not there? The answer is in the wave amplitude of probability associated with the photon. It is the wave amplitude of probability that interferes; in consequence the probability density becomes the intensity pattern for a large number of light quanta. It is often said that a photon 'interferes with itself', but it is the wave amplitude of probability which interferes. In this sense the statement is OK when you think of the 'wave function' as the photon itself. P.S.: the wave function for photons is still an arguable issue, but some progress has been made. You may want to check Glauber's theory of photodetection. – E.phy

Explaining this problem using classical equations is simple, well known and well understood. The field values become zero in destructive interference and there is no energy as a result - full stop. The QM explanation is equally simple: the probability of finding a photon there is zero, and no more discussion is needed. So where is the problem? Clearly there is one in the logic, judging by the length and number of replies. We first need to realize that radiation is something special: it is the advance of a force field in vacuum - but as if this vacuum were a medium, even though we can't see or feel any. This medium is homogeneous, has constant properties of permeability and permittivity, and even an electric resistance of 376.73 ohms. It has a constant speed of propagation as a result of that, and the speed is, as in matter, given by the square root of the bulk modulus of elasticity divided by the density (get this by dividing the two sides of E=mc^2 by the volume: B=E/V and ρ=m/V). When we deal with waves in a matter medium, we find that we can't transfer energy without the existence of a sink that absorbs this energy. This is the basis of the Wheeler-Feynman absorber theory. You can establish a force at all times in all of space - with or without an absorber - but not energy. So, matter sitting in an intense radiation field at points of total darkness is under intense stress, but does not receive any energy. Energy is force times distance, and matter needs to move to absorb energy. For this reason, if you have a perfect microwave oven, you don't spend any electrical energy on it if it is running empty - despite it being full of intense radiation with regions of destructive and constructive interference - except for the small amount needed to establish these from zero.
Thus the simple answer is not that the energy is going back to the source - which sounds ridiculous in my opinion - but that the source does not give up its energy in the first place, because there is no absorber to take it. There is one mysterious exception to this, however: it is possible to send energy into infinite space even if we don't see any absorber there - as in an antenna radiating to outer space. We need to resort here to the distant masses of Mach to provide an absorber. – Riad

The Poynting vector: in physics, the Poynting vector represents the directional energy flux density (the rate of energy transfer per unit area, in watts per square metre, W·m−2) of an electromagnetic field. If the antilaser experiment is performed in vacuum there is no thermal dissipation, and the Poynting vectors are opposed, and cancel, for the same field intensity and with the fields out of phase. For plane waves (Wikipedia, link above): "The time-dependent and position magnitude of the Poynting vector is" $\epsilon_0 c E_0^2\cos^2(\omega t-\mathrm{k\cdot r})$, and the average is different from zero for a single propagating wave; but for two opposing plane waves of equal intensity, 100% out of phase, the instantaneous Poynting vector, which measures the flux of energy, is $\vec{S}(t)=\vec{0}$. If you have one electromagnetic beam at a time then work can be done. If you have two in the above conditions then no work can be extracted. (Energy is canceled, destroyed... ;) BUT things can be more complicated than described by the equations, because a physical emitter antenna also behaves as a receiving antenna that absorbs and re-radiates, etc. ... changing and probably trashing my first opinion. – Helder Velez

From my answer here (PSE: anti-laser - how sure we are that energy is transported): The Poynting vectors, and the momentum vectors, like the E, B fields, are symmetric. When we do 'field shaping' with antenna aggregates we simply use Maxwell's equations and go with waves every time. When we get near a null of energy in some region of space we don't get infrared radiation to 'consume' the cancelled field. E, B vectors are additive: Light + Light = 0. Antennas in satellites (vacuum) work the same way as the ones at the Earth's surface to shape the intensity of the field. Because the Poynting vectors add to null there is no doubt, in my opinion, that the energy vanishes. See the antilaser experiment. We don't have a theory? Then we must rethink. In my opinion energy is not transported. What is propagating is only an excitation of the medium (we call it photons) and the energy is already 'in situ' (the vacuum, or whatever name we call the medium).

The waves will obliterate each other, but they will still exist; they just won't be moving. They would just change form (energy cannot be destroyed, it can only change form), so when the waves meet they will cancel each other, and sound will change to potential energy and kinetic energy will change to sound, or whatever. – Physics 101

When there is a complete destructive interference of two light beams, Maxwell's equations predict that the energy becomes zero. Let's take the case of two coherent collinear beams, 180 degrees out of phase, like the case of the antilaser.
\begin{align} E_1 &= E_m \sin (kx - \omega t), & E_2 &= E_m \sin (kx - \omega t + p), \\ B_1 &= B_m \sin (kx - \omega t), & B_2 &= B_m \sin (kx - \omega t + p), \end{align} with $E = E_1 + E_2$ and $B = B_1 + B_2$: \begin{align} E &= E_m \sin (kx - \omega t) + E_m \sin (kx - \omega t + p), \\ B &= B_m \sin (kx - \omega t) + B_m \sin (kx - \omega t + p). \end{align} But, with $p = \pi$ (the 180° phase shift), $\sin (kx - \omega t + p) = - \sin (kx - \omega t)$; then $E = 0$ and $B = 0$, and \begin{align} U_T &= U_E + U_B \\ &= \tfrac{1}{2}\varepsilon_0 E^2 + \tfrac{1}{2\mu_0} B^2 \\ &= 0. \end{align}

This is the classical electromagnetic interpretation of the waves during total destructive interference, following Maxwell. Maxwell's description of the energy of the light wave is of an undulating energy that predictably reaches a maximum and later becomes zero. The proposed solution to this problem is to calculate the mean of the energy when the fields are at their maxima. But what is the physical meaning of an energy that has to be averaged in order to have the real magnitude? If the principle of conservation of energy is to be applied to this phenomenon, the energy must be constant, with a unique value at each instant during the movement of the wave. What is the meaning of this situation that has not been recognized for more than a century? What almost nobody wants to admit is that electromagnetism is incomplete, because it cannot describe electromagnetic radiation adequately, and this generates a violation of the principle of conservation of energy. As Helder Velez said: "We don't have a theory? Then we must rethink." He has a proposition: EM energy is not transported; it is only an excitation of the medium, the quantum vacuum, or the quantum plenum as I prefer to call it. But this is only an idea, an intuition, without support or evidence. – luis fondeur

Lots of copy-pasting from the poster's other (flawed) answers. – Kyle Kanos

I made one and only one reference: Helder Velez. The fact that his comment has a negative score does not mean that he is incorrect; only that they disagree with him. I support him on the violation of the conservation of energy. I support the "EMG theory of the photon" by Diogenes Aybar that can be found at journaloftheoretics.com/Links/Papers/EMG%20III.pdf. He thinks that the photon has another inherent field, the gravitational field, where the energy goes from the two electromagnetic fields to the gravitational field, keeping the energy constant at every moment. – luis fondeur
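Tying together the Mach-Zehnder answer above and the antilaser equations just given: a quick numerical check (my own illustrative sketch, using the standard balanced beam-splitter relations rather than anything specific to these answers) shows the bookkeeping that keeps total power constant - whenever one output port of a lossless interferometer goes dark, the missing power appears at the other port.

# Two-output energy balance of a balanced, lossless interferometer (sketch).
# The cos/sin port intensities are the textbook Mach-Zehnder relations; the
# numbers are illustrative only.
import numpy as np

I0 = 1.0                                   # input beam power (arbitrary units)
phases = np.linspace(0.0, 2.0 * np.pi, 9)  # relative phase introduced by the sample

for phi in phases:
    E1 = np.sqrt(I0) * np.cos(phi / 2.0)   # field at the port toward detector 1
    E2 = np.sqrt(I0) * np.sin(phi / 2.0)   # field at the port toward detector 2
    I1, I2 = E1**2, E2**2
    print(f"phi = {phi:5.2f} rad   I1 = {I1:.3f}   I2 = {I2:.3f}   I1+I2 = {I1+I2:.3f}")

For every phase the printed sum I1+I2 equals the input power, which is the two-detector energy balance described in the interferometer answer above.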
Strong neutron pairing in core+4n nuclei (1803.04777) A. Revel, F.M. Marques, O. Sorlin, T. Aumann, C. Caesar, M. Holl, V. Panin, M. Vandebrouck, F. Wamers, H. Alvarez-Pol, L. Atar, V. Avdeichikov, S. Beceiro-Novo, D. Bemmerer, J. Benlliure, C. A. Bertulani, J. M. Boillos, K. Boretzky, M. J. G. Borge, M. Caamano, E. Casarejos, W.N. Catford, J. Cederkäll, M. Chartier, L. Chulkov, D. Cortina-Gil, E. Cravo, R. Crespo, U. Datta Pramanik, P. Diaz Fernandez, I. Dillmann, Z. Elekes, J. Enders, O. Ershova, A. Estrade, F. Farinon, L. M. Fraile, M. Freer, D. Galaviz, H. Geissel, R. Gernhauser, P. Golubev, K. Göbel, J. Hagdahl, T. Heftrich, M. Heil, M. Heine, A. Heinz, A. Henriques, A. Hufnagel, A. Ignatov, H.T. Johansson, B. Jonson, J. Kahlbow, N. Kalantar-Nayestanaki, R. Kanungo, A. Kelic-Heil, A. Knyazev, T. Kroll, N. Kurz, M. Labiche, C. Langer, T. Le Bleis, R. Lemmon, S. Lindberg, J. Machado, J. Marganiec, A. Movsesyan, E. Nacher, M. Najafi, E. Nikolskii, T. Nilsson, C. Nociforo, S. Paschalis, A. Perea, M. Petri, S. Pietri, R. Plag, R. Reifarth, G. Ribeiro, C. Rigollet, M. Roder, D. Rossi, D. Savran, H. Scheit, H. Simon, I. Syndikus, J. T. Taylor, O. Tengblad, R. Thies, Y. Togano, P. Velho, V. Volkov, A. Wagner, H. Weick, C. Wheldon, G. Wilson, J. S. Winfield, P. Woods, D. Yakorev, M. Zhukov, A. Zilges, K. Zuber The emission of neutron pairs from the neutron-rich $N\!=\!12$ isotones $^{18}$C and $^{20}$O has been studied by high-energy nucleon knockout from $^{19}$N and $^{21}$O secondary beams, populating unbound states of the two isotones up to 15~MeV above their two-neutron emission thresholds. The analysis of triple fragment-$n$-$n$ correlations shows that the decay $^{19}$N$(-1p)^{18}$C$^*\!\rightarrow^{16}$C+$n$+$n$ is clearly dominated by direct pair emission. The two-neutron correlation strength, the largest ever observed, suggests the predominance of a $^{14}$C core surrounded by four valence neutrons arranged in strongly correlated pairs. On the other hand, a significant competition of a sequential branch is found in the decay $^{21}$O$(-1n)^{20}$O$^*\!\rightarrow^{18}$O+$n$+$n$, attributed to its formation through the knockout of a deeply-bound neutron that breaks the $^{16}$O core and reduces the number of pairs. Effective proton-neutron interaction near the drip line from unbound states in $^{25,26}$F (1707.07995) M. Vandebrouck, A. Lepailleur, O. Sorlin, T. Aumann, C. Caesar, M. Holl, V. Panin, F. Wamers, S.R. Stroberg, J.D. Holt, F. De Oliveira Santos, H. Alvarez-Pol, L. Atar, V. Avdeichikov, S. Beceiro-Novo, D. Bemmerer, J. Benlliure, C. A. Bertulani, S.K. Bogner, J.M. Boillos, K. Boretzky, M.J.G. Borge, M. Caamano, E. Casarejos, W. Catford, J. Cederkäll, M. Chartier, L. Chulkov, D. Cortina-Gil, E. Cravo, R. Crespo, U. Datta Pramanik, P. Diaz Fernandez, I. Dillmann, Z. Elekes, J. Enders, O. Ershova, A. Estrade, F. Farinon, L.M. Fraile, M. Freer, D. Galaviz, H. Geissel, R. Gernhauser, J. Gibelin, P. Golubev, K. Göbel, J. Hagdahl, T. Heftrich, M. Heil, M. Heine, A. Heinz, A. Henriques, H. Hergert, A. Hufnagel, A. Ignatov, H.T. Johansson, B. Jonson, J. Kahlbow, N. Kalantar-Nayestanaki, R. Kanungo, A. Kelic-Heil, A. Knyazev, T. Kröll, N. Kurz, M. Labiche, C. Langer, T. Le Bleis, R. Lemmon, S. Lindberg, J. Machado, J. Marganiec, F.M. Marques, A. Movsesyan, E. Nacher, M. Najafi, E. Nikolskii, T. Nilsson, C. Nociforo, S. Paschalis, A. Perea, M. Petri, S. Pietri, R. Plag, R. Reifarth, G. Ribeiro, C. Rigollet, M. Röder, D. Rossi, D. Savran, H. Scheit, A. Schwenk, H. Simon, I. Syndikus, J. 
Taylor, O. Tengblad, R. Thies, Y. Togano, P. Velho, V. Volkov, A. Wagner, H. Weick, C. Wheldon, G. Wilson, J.S. Winfield, P. Woods, D. Yakorev, M. Zhukov, A. Zilges, K. Zuber July 25, 2017 nucl-ex Background: Odd-odd nuclei, around doubly closed shells, have been extensively used to study proton-neutron interactions. However, the evolution of these interactions as a function of the binding energy, ultimately when nuclei become unbound, is poorly known. The $^{26}$F nucleus, composed of a deeply bound $\pi 0d_{5/2}$ proton and an unbound $\nu 0d_{3/2}$ neutron on top of an $^{24}$O core, is particularly adapted for this purpose. The coupling of this proton and neutron results in a $J^{\pi} = 1^{+}_1 - 4^{+}_1$ multiplet, whose energies must be determined to study the influence of the proximity of the continuum on the corresponding proton-neutron interaction. The $J^{\pi} = 1^{+}_1, 2^{+}_1, 4^{+}_1$ bound states have been determined, and only a clear identification of the $J^{\pi} = 3^{+}_1$ is missing. Purpose: We wish to complete the study of the $J^{\pi} = 1^{+}_1 - 4^{+}_1$ multiplet in $^{26}$F, by studying the energy and width of the $J^{\pi} = 3^{+}_1$ unbound state. The method was firstly validated by the study of unbound states in $^{25}$F, for which resonances were already observed in a previous experiment. Method: Radioactive beams of $^{26}$Ne and $^{27}$Ne, produced at about $440A$ MeV by the FRagment Separator at the GSI facility, were used to populate unbound states in $^{25}$F and $^{26}$F via one-proton knockout reactions on a CH$_2$ target, located at the object focal point of the R$^3$B/LAND setup. The detection of emitted $\gamma$-rays and neutrons, added to the reconstruction of the momentum vector of the $A-1$ nuclei, allowed the determination of the energy of three unbound states in $^{25}$F and two in $^{26}$F. Results: Based on its width and decay properties, the first unbound state in $^{25}$F is proposed to be a $J^{\pi} = 1/2^-$ arising from a $p_{1/2}$ proton-hole state. In $^{26}$F, the first resonance at 323(33) keV is proposed to be the $J^{\pi} = 3^{+}_1$ member of the $J^{\pi} = 1^{+}_1 - 4^{+}_1$ multiplet. Energies of observed states in $^{25,26}$F have been compared to calculations using the independent-particle shell model, a phenomenological shell-model, and the ab initio valence-space in-medium similarity renormalization group method. Conclusions: The deduced effective proton-neutron interaction is weakened by about 30-40% in comparison to the models, pointing to the need of implementing the role of the continuum in theoretical descriptions, or to a wrong determination of the atomic mass of $^{26}$F. Coulomb breakup of neutron-rich $^{29,30}$Na isotopes near the island of inversion (1601.04002) A. Rahaman, Ushasi Datta, T. Aumann, S. Beceiro-Novo, K. Boretzky, C. Caesar, B.V. Carlson, W.N. Catford, S. Chakraborty, M. Chartier, D. Cortina-Gil, G. De Angelis, D. Gonzalez-Diaz, H. Emling, P. Diaz Fernandez, L.M. Fraile, O. Ershova, H. Geissel, B. Jonson, H. Johansson, N. Kalantar-Nayestanaki, R. Krücken, T. Kröll, J. Kurcewicz, C. Langer, T. Le Bleis, Y. Leifels, G. Münzenberg, J. Marganiec, T. Nilsson, C. Nociforo, A. Najafi, V. Panin, S. Paschalis, R. Plag, R. Reifarth, C. Rigollet, V. Ricciardi, D. Rossi, H. Scheit, H. Simon, C. Scheidenberger, S. Typel, J. Taylor, Y. Togano, V. Volkov, H. Weick, A. Wagner, F. Wamers, M. Weigand, J.S. Winfield, D. Yakorev, M. Zoric Jan.
23, 2017 nucl-ex, nucl-th First results are reported on the ground state configurations of the neutron-rich $^{29,30}$Na isotopes, obtained via Coulomb dissociation (CD) measurements as a direct probe. The invariant mass spectra of those nuclei have been obtained through measurement of the four-momentum of all decay products after Coulomb excitation on a $^{208}$Pb target at energies of 400-430 MeV/nucleon using the FRS-ALADIN-LAND setup at GSI, Darmstadt. Integrated Coulomb-dissociation (CD) cross-sections of 89(7) mb and 167(13) mb up to an excitation energy of 10 MeV for one-neutron removal from $^{29}$Na and $^{30}$Na, respectively, have been extracted. The major part of the one-neutron-removal CD cross-sections of those nuclei populates the core in its ground state. A comparison with the direct breakup model suggests that the predominant occupation of the valence neutron in the ground states of $^{29}$Na$(3/2^+)$ and $^{30}$Na$(2^+)$ is the $d$ orbital, with a small contribution from the $s$ orbital, coupled with the ground state of the core. The ground state configurations of these nuclei are $^{28}$Na$_{gs}(1^+)\otimes\nu_{s,d}$ and $^{29}$Na$_{gs}(3/2^+)\otimes\nu_{s,d}$, respectively. The ground state spins and parities of these nuclei obtained from this experiment are in agreement with earlier reported values. The spectroscopic factors for the valence neutron occupying the $s$ and $d$ orbitals for these nuclei in the ground state have been extracted and reported for the first time. A comparison of the experimental findings with the shell model calculation using MCSM suggests a lower limit of around 4.3 MeV for the sd-pf shell gap in $^{30}$Na. Proton distribution radii of $^{12-19}$C illuminate features of neutron halos (1608.08697) R. Kanungo, W. Horiuchi, G. Hagen, G. R. Jansen, P. Navratil, F. Ameil, J. Atkinson, Y. Ayyad, D. Cortina-Gil, I. Dillmann, A. Estradé, A. Evdokimov, F. Farinon, H. Geissel, G. Guastalla, R. Janik, M. Kimura, R. Knöbel, J. Kurcewicz, Yu. A. Litvinov, M. Marta, M. Mostazo, I. Mukha, C. Nociforo, H.J. Ong, S. Pietri, A. Prochazka, C. Scheidenberger, B. Sitar, P. Strmen, Y. Suzuki, M. Takechi, J. Tanaka, I. Tanihata, S. Terashima, J. Vargas, H. Weick, J. S. Winfield Aug. 31, 2016 nucl-ex, nucl-th Proton radii of $^{12-19}$C densities derived from first accurate charge changing cross section measurements at 900$A$ MeV with a carbon target are reported. A thick neutron surface evolves from $\sim$ 0.5 fm in $^{15}$C to $\sim$ 1 fm in $^{19}$C. The halo radius in $^{19}$C is found to be 6.4$\pm$0.7 fm, as large as in $^{11}$Li. Ab initio calculations based on chiral nucleon-nucleon and three-nucleon forces reproduce well the radii. Determination of the Neutron-Capture Rate of 17C for the R-process Nucleosynthesis (1604.05832) M. Heine, S. Typel, M.-R. Wu, T. Adachi, Y. Aksyutina, J. Alcantara, S. Altstadt, H. Alvarez-Pol, N. Ashwood, T. Aumann, V. Avdeichikov, M. Barr, S. Beceiro-Novo, D. Bemmerer, J. Benlliure, C. A. Bertulani, K. Boretzky, M. J. G. Borge, G. Burgunder, M. Caamano, C. Caesar, E. Casarejos, W. Catford, J. Cederkäll, S. Chakraborty, M. Chartier, L. V. Chulkov, D. Cortina-Gil, R. Crespo, U. Datta Pramanik, P. Diaz Fernandez, I. Dillmann, Z. Elekes, J. Enders, O. Ershova, A. Estrade, F. Farinon, L. M. Fraile, M. Freer, M. Freudenberger, H. O. U. Fynbo, D. Galaviz, H. Geissel, R. Gernhäuser, K. Göbel, P. Golubev, D. Gonzalez Diaz, J. Hagdahl, T. Heftrich, M. Heil, A. Heinz, A. Henriques, M. Holl, G. Ickert, A. Ignatov, B. Jakobsson, H. T.
Johansson, B. Jonson, N. Kalantar-Nayestanaki, R. Kanungo, A. Kelic-Heil, R. Knöbel, T. Kröll, R. Krücken, J. Kurcewicz, N. Kurz, M. Labiche, C. Langer, T. Le Bleis, R. Lemmon, O. Lepyoshkina, S. Lindberg, J. Machado, J. Marganiec, G. Martínez-Pinedo, V. Maroussov, M. Mostazo, A. Movsesyan, A. Najafi, T. Neff, T. Nilsson, C. Nociforo, V. Panin, S. Paschalis, A. Perea, M. Petri, S. Pietri, R. Plag, A. Prochazka, A. Rahaman, G. Rastrepina, R. Reifarth, G. Ribeiro, M. V. Ricciardi, C. Rigollet, K. Riisager, M. Röder, D. Rossi, J. Sanchez del Rio, D. Savran, H. Scheit, H. Simon, O. Sorlin, V. Stoica, B. Streicher, J. T. Taylor, O. Tengblad, S. Terashima, R. Thies, Y. Togano, E. Uberseder, J. Van de Walle, P. Velho, V. Volkov, A. Wagner, F. Wamers, H. Weick, M. Weigand, C. Wheldon, G. Wilson, C. Wimmer, J. S. Winfield, P. Woods, D. Yakorev, M. V. Zhukov, A. Zilges, K. Zuber April 20, 2016 nucl-ex, astro-ph.SR With the R$^{3}$B-LAND setup at GSI we have measured exclusive relative-energy spectra of the Coulomb dissociation of $^{18}$C at a projectile energy around 425~AMeV on a lead target, which are needed to determine the radiative neutron-capture cross sections of $^{17}$C into the ground state of $^{18}$C. Those data have been used to constrain theoretical calculations for transitions populating excited states in $^{18}$C. This allowed to derive the astrophysical cross section $\sigma^{*}_{\mathrm{n}\gamma}$ accounting for the thermal population of $^{17}$C target states in astrophysical scenarios. The experimentally verified capture rate is significantly lower than those of previously obtained Hauser-Feshbach estimations at temperatures $T_{9}\leq{}1$~GK. Network simulations with updated neutron-capture rates and hydrodynamics according to the neutrino-driven wind model as well as the neutron-star merger scenario reveal no pronounced influence of neutron capture of $^{17}$C on the production of second- and third-peak elements in contrast to earlier sensitivity studies. Observation of Large Enhancement of Charge Exchange Cross Sections with Neutron-Rich Carbon Isotopes (1512.00590) I. Tanihata, S. Terashima, R. Kanungo, F. Ameil, J. Atkinson, Y. Ayyad, D. Cortina-Gil, I. Dillmann, A. Estradé, A. Evdokimov, F. Farinon, H. Geissel, G. Guastalla, R. Janik, R. Knoebel, J. Kurcewicz, Yu. A. Litvinov, M. Marta, M. Mostazo, I. Mukha, C. Nociforo, H.J. Ong, S. Pietri, A. Prochazka, C. Scheidenberger, B. Sitar, P. Strmen, M. Takechi, J. Tanaka, H. Toki, J. Vargas, J. S. Winfield, H. Weick Production cross sections of nitrogen isotopes from high-energy carbon isotopes on hydrogen and carbon targets have been measured for the first time for a wide range of isotopes. The fragment separator FRS at GSI was used to deliver C isotope beams. The cross sections of the production of N isotopes were determined by charge measurements of forward going fragments. The cross sections show a rapid increase with the number of neutrons in the projectile. Since the production of nitrogen is mostly due to charge exchange reactions below the proton separation energies, the present data suggests a concentration of Gamow-Teller and Fermi transition strength at low excitation energies for neutron-rich isotopes. It was also observed that the cross sections were enhanced much more strongly for neutron rich isotopes in the C-target data. Systematic investigation of projectile fragmentation using beams of unstable B and C isotopes (1603.00323) R. Thies, A. Heinz, T. Adachi, Y. Aksyutina, J. Alcantara-Núñes, S. Altstadt, H. 
Alvarez-Pol, N. Ashwood, T. Aumann, V. Avdeichikov, M. Barr, S. Beceiro-Novo, D. Bemmerer, J. Benlliure, C. A. Bertulani, K. Boretzky, M. J. G. Borge, G. Burgunder, M. Caamano, C. Caesar, E. Casarejos, W. Catford, J. Cederkäll, S. Chakraborty, M. Chartier, L. V. Chulkov, D. Cortina-Gil, R. Crespo, U. Datta, P. Díaz Fernández, I. Dillmann, Z. Elekes, J. Enders, O. Ershova, A. Estradé, F. Farinon, L. M. Fraile, M. Freer, M. Freudenberger, H. O. U. Fynbo, D. Galaviz, H. Geissel, R. Gernhäuser, K. Göbel, P. Golubev, D. Gonzalez Diaz, J. Hagdahl, T. Heftrich, M. Heil, M. Heine, A. Henriques, M. Holl, G. Ickert, A. Ignatov, B. Jakobsson, H. T. Johansson, B. Jonson, N. Kalantar-Nayestanaki, R. Kanungo, R. Knöbel, T. Kröll, R. Krücken, J. Kurcewicz, N. Kurz, M. Labiche, C. Langer, T. Le Bleis, R. Lemmon, O. Lepyoshkina, S. Lindberg, J. Machado, J. Marganiec, V. Maroussov, M. Mostazo, A. Movsesyan, A. Najafi, T. Nilsson, C. Nociforo, V. Panin, S. Paschalis, A. Perea, M. Petri, S. Pietri, R. Plag, A. Prochazka, A. Rahaman, G. Rastrepina, R. Reifarth, G. Ribeiro, M. V. Ricciardi, C. Rigollet, K. Riisager, M. Röder, D. Rossi, J. Sanchez del Rio, D. Savran, H. Scheit, H. Simon, O. Sorlin, V. Stoica, B. Streicher, J. T. Taylor, O. Tengblad, S. Terashima, Y. Togano, E. Uberseder, J. Van de Walle, P. Velho, V. Volkov, A. Wagner, F. Wamers, H. Weick, M. Weigand, C. Wheldon, G. Wilson, C. Wimmer, J. S. Winfield, P. Woods, D. Yakorev, M. V. Zhukov, A. Zilges, K. Zuber Background: Models describing nuclear fragmentation and fragmentation-fission deliver important input for planning nuclear physics experiments and future radioactive ion beam facilities. These models are usually benchmarked against data from stable beam experiments. In the future, two-step fragmentation reactions with exotic nuclei as stepping stones are a promising tool to reach the most neutron-rich nuclei, creating a need for models to describe also these reactions. Purpose: We want to extend the presently available data on fragmentation reactions towards the light exotic region on the nuclear chart. Furthermore, we want to improve the understanding of projectile fragmentation especially for unstable isotopes. Method: We have measured projectile fragments from 10,12-18C and 10-15B isotopes colliding with a carbon target. These measurements were all performed within one experiment, which gives rise to a very consistent dataset. We compare our data to model calculations. Results: One-proton removal cross sections with different final neutron numbers (1pxn) for relativistic 10,12-18C and 10-15B isotopes impinging on a carbon target. Comparing model calculations to the data, we find that EPAX is not able to describe the data satisfactorily. Using ABRABLA07 on the other hand, we find that the average excitation energy per abraded nucleon needs to be decreased from 27 MeV to 8.1 MeV. With that decrease ABRABLA07 describes the data surprisingly well. Conclusions: Extending the available data towards light unstable nuclei with a consistent set of new data have allowed for a systematic investigation of the role of the excitation energy induced in projectile fragmentation. Most striking is the apparent mass dependence of the average excitation energy per abraded nucleon. Nevertheless, this parameter, which has been related to final-state interactions, requires further study. Measurement of the 92,93,94,100Mo(g,n) reactions by Coulomb Dissociation (1310.2165) K. Göbel, P. Adrich, S. Altstadt, H. Alvarez-Pol, F. Aksouh, T. Aumann, M. Babilon, K-H. 
Behr, J. Benlliure, T. Berg, M. Böhmer, K. Boretzky, A. Brünle, R. Beyer, E. Casarejos, M. Chartier, D. Cortina-Gil, A. Chatillon, U. Datta. Pramanik, L. Deveaux, M. Elvers, T. W. Elze, H. Emling, M. Erhard, O. Ershova, B. Fernandez-Dominguez, H. Geissel, M. Górska, T. Heftrich, M. Heil, M. Hellstroem, G. Ickert, H. Johansson, A. R. Junghans, F. Käppeler, O. Kiselev, A. Klimkiewicz, J. V. Kratz, R. Kulessa, N. Kurz, M. Labiche, C. Langer, T. Le. Bleis, R. Lemmon, K. Lindenberg, Y. A. Litvinov, P. Maierbeck, A. Movsesyan, S. Müller, T. Nilsson, C. Nociforo, N. Paar, R. Palit, S. Paschalis, R. Plag, W. Prokopowicz, R. Reifarth, D. M. Rossi, L. Schnorrenberger, H. Simon, K. Sonnabend, K. Sümmerer, G. Surówka, D. Vretenar, A. Wagner, S. Walter, W. Waluś, F. Wamers, H. Weick, M. Weigand, N. Winckler, M. Winkler, A. Zilges Oct. 8, 2013 nucl-ex, nucl-th, astro-ph.IM The Coulomb Dissociation (CD) cross sections of the stable isotopes 92,94,100Mo and of the unstable isotope 93Mo were measured at the LAND/R3B setup at GSI Helmholtzzentrum f\"ur Schwerionenforschung in Darmstadt, Germany. Experimental data on these isotopes may help to explain the problem of the underproduction of 92,94Mo and 96,98Ru in the models of p-process nucleosynthesis. The CD cross sections obtained for the stable Mo isotopes are in good agreement with experiments performed with real photons, thus validating the method of Coulomb Dissociation. The result for the reaction 93Mo(g,n) is especially important since the corresponding cross section has not been measured before. A preliminary integral Coulomb Dissociation cross section of the 94Mo(g,n) reaction is presented. Further analysis will complete the experimental database for the (g,n) production chain of the p-isotopes of molybdenum. Coulomb excitation of exotic nuclei at the R3B-LAND setup (1209.1024) D. M. Rossi, P. Adrich, F. Aksouh, H. Alvarez-Pol, T. Aumann, J. Benlliure, M. Böhmer, K. Boretzky, E. Casarejos, M. Chartier, A. Chatillon, D. Cortina-Gil, U. Datta Pramanik, H. Emling, O. Ershova, B. Fernandez-Dominguez, H. Geissel, M. Gorska, M. Heil, H. Johansson, A. Junghans, O. Kiselev, A. Klimkiewicz, J. V. Kratz, N. Kurz, M. Labiche, T. Le Bleis, R. Lemmon, Yu. A. Litvinov, K. Mahata, P. Maierbeck, A. Movsesyan, T. Nilsson, C. Nociforo, R. Palit, S. Paschalis, R. Plag, R. Reifarth, H. Simon, K. Sümmerer, A. Wagner, W. Walus, H. Weick, M. Winkler Sept. 5, 2012 nucl-ex Exotic Ni isotopes have been measured at the R3B-LAND setup at GSI in Darmstadt, using Coulomb excitation in inverse kinematics at beam energies around 500 MeV/u. As the experimental setup allows kinematically complete measurements, the excitation energy was reconstructed using the invariant mass method. The GDR and additional low-lying strength have been observed in 68Ni, the latter exhausting 4.1(1.9)% of the E1 energy-weighted sum rule. Also, the branching ratio for the non-statistical decay of the excited 68Ni nuclei was measured and amounts to 24(4)%. Exploring the anomaly in the interaction cross section and matter radius of 23O (1112.3282) R. Kanungo, A. Prochazka, M. Uchida, W. Horiuchi, G. Hagen, T. Papenbrock, C. Nociforo, T. Aumann, D. Boutin, D. Cortina-Gil, B. Davids, M. Diakaki, F. Farinon, H. Geissel, R. Gernhauser, J. Gerl, R. Janik, Ø. Jensen, B. Jonson, B. Kindler, R. Knobel, R. Krucken, M. Lantz, H. Lenske, Y. Litvinov, B. Lommel, K. Mahata, P. Maierbeck, A. Musumarra, T. Nilsson, C. Perro, C. Scheidenberger, B. Sitar, P. Strmen, B. Sun, Y. Suzuki, I. Szarka, I. Tanihata, H. Weick, M. 
Winkler Dec. 14, 2011 nucl-ex, nucl-th New measurements of the interaction cross sections of 22,23O at 900A MeV performed at the GSI, Darmstadt are reported that address the unsolved puzzle of the large cross section previously observed for 23O. The matter radii for these oxygen isotopes extracted through a Glauber model analysis are in good agreement with the new predictions of the ab initio coupled-cluster theory reported here. They are consistent with a 22O+neutron description of 23O as well. Production of new neutron-rich isotopes of heavy elements in fragmentation reactions of $^{238}$U projectiles at 1 A GeV (1007.5506) H. Alvarez-Pol, J. Benlliure, L. Audouin, E. Casarejos, D. Cortina-Gil, T. Enqvist, B. Fernandez, A. R. Junghans, B. Jurado, P. Napolitani, J. Pereira, F. Rejmund, K.-H. Schmidt, O. Yordanov The production of heavy neutron-rich nuclei has been investigated using cold fragmentation reactions of $^{238}$U projectiles at relativistic energies. The experiment performed at the high-resolving-power magnetic spectrometer FRS at GSI allowed to identify 45 new heavy neutron-rich nuclei: $^{205}$Pt, $^{207-210}$Au, $^{211-216}$Hg, $^{213-217}$Tl, $^{215-220}$Pb, $^{219-224}$Bi, $^{221-227}$Po, $^{224-229}$At, $^{229-231}$Rn and $^{233}$Fr. The production cross sections of these nuclei were also determined and used to benchmark reaction codes that predict the production of nuclei far from stability. Extending the north-east limit of the chart of nuclides (1004.0265) J. Benlliure, H. Alvarez-Pol, T. Kurtukian-Nieto, K.-H. Schmidt, L. Audouin, B. Blank, F. Becker, E. Casarejos, D. Cortina-Gil, T. Enqvist, B. Fernández, M. Fernández-Ordóñez, J. Giovinazzo, D. Henzlova, A.R. Junghans, B. Jurado, P. Napolitani, J. Pereira, F. Rejmund, O. Yordanov April 2, 2010 nucl-ex The existence of nuclei with exotic combinations of protons and neutrons provides fundamental information on the forces acting between nucleons. The maximum number of neutrons a given number of protons can bind, neutron drip line1, is only known for the lightest chemical elements, up to oxygen. For heavier elements, the larger its atomic number, the farther from this limit is the most neutron-rich known isotope. The properties of heavy neutron-rich nuclei also have a direct impact on understanding the observed abundances of chemical elements heavier than iron in our Universe. Above half of the abundances of these elements are thought to be produced in rapid-neutron capture reactions, r-process, taking place in violent stellar scenarios2 where heavy neutron-rich nuclei, far beyond the ones known up today, are produced. Here we present a major step forward in the production of heavy neutron-rich nuclei: the discovery of 73 new neutron-rich isotopes of chemical elements between tantalum (Z=72) and actinium (Z=89). This result proves that cold-fragmentation reactions3 at relativistic energies are governed by large fluctuations in isospin and energy dissipation making possible the massive production of heavy neutron-rich nuclei, paving then the way for the full understanding of the origin of the heavier elements in our Universe. It is expected that further studies providing ground and structural properties of the nuclei presented here will reveal further details on the nuclear shell evolution along Z=82 and N=126, but also on the understanding of the stellar nucleosyntheis r-process around the waiting point at A~190 defining the speed of the matter flow towards heavier fissioning nuclei. 
Structure of 55Ti from relativistic one-neutron knockout (0810.3157) P. Maierbeck, R. Gernhäuser, R. Krücken, T. Kröll, H. Alvarez-Pol, F. Aksouh, T. Aumann, K. Behr, E.A. Benjamim, J. Benlliure, V. Bildstein, M. Böhmer, K. Boretzky, M.J.G. Borge, A. Brünle, A. Bürger, M. Caamaño, E. Casarejos, A. Chatillon, L. V. Chulkov, D. Cortina-Gil, J. Enders, K. Eppinger, T. Faestermann, J. Friese, L. Fabbietti, M. Gascón, H. Geissel, J. Gerl, M. Gorska, P.G. Hansen, B. Jonson, R. Kanungo, O. Kiselev, I. Kojouharov, A. Klimkiewicz, T. Kurtukian, N. Kurz, K. Larsson, T. Le Bleis, K. Mahata, L. Maier, T. Nilsson, C. Nociforo, G. Nyman, C. Pascual-Izarra, A. Perea, D. Perez, A. Prochazka, C. Rodriguez-Tajes, D. Rossi, H. Schaffner, G. Schrieder, S. Schwertel, H. Simon, B. Sitar, M. Stanoiu, K. Sümmerer, O. Tengblad, H. Weick, S. Winkler, B.A. Brown, T. Otsuka, J. Tostevin, W.D.M. Rae Results are presented from a one-neutron knockout reaction at relativistic energies on 56Ti using the GSI FRS as a two-stage magnetic spectrometer and the Miniball array for gamma-ray detection. Inclusive and exclusive longitudinal momentum distributions and cross-sections were measured enabling the determination of the orbital angular momentum of the populated states. First-time observation of the 955(6) keV nu p3/2-hole state in 55Ti is reported. The measured data for the first time proves that the ground state of 55Ti is a 1/2- state, in agreement with shell-model calculations using the GXPF1A interaction that predict a sizable N=34 gap in 54Ca. Proton-proton correlations observed in two-proton decay of $^{19}$Mg and $^{16}$Ne (0802.4263) I. Mukha, L. Grigorenko, K. Summerer, L. Acosta, M.A.G. Alvarez, E. Casarejos, A. Chatillon, D. Cortina-Gil, J. Espino, A. Fomichev, J.E. Garcia-Ramos, H. Geissel, J. Gomez-Camacho, J. Hofmann, O. Kiselev, A. Korsheninnikov, N. Kurz, Yu. Litvinov, I. Martel, C. Nociforo, W. Ott, M. Pfutzner, C. Rodriguez-Tajes, E. Roeckl, M. Stanoiu, H. Weick, P.J. Woods Feb. 28, 2008 nucl-ex Proton-proton correlations were observed for the two-proton decays of the ground states of $^{19}$Mg and $^{16}$Ne. The trajectories of the respective decay products, $^{17}$Ne+p+p and $^{14}$O+p+p, were measured by using a tracking technique with microstrip detectors. These data were used to reconstruct the angular correlations of fragments projected on planes transverse to the precursor momenta. The measured three-particle correlations reflect a genuine three-body decay mechanism and allowed us to obtain spectroscopic information on the precursors with valence protons in the $sd$ shell. Characterization of 7H Nuclear System (nucl-ex/0702021) M. Caamano, D. Cortina-Gil, W. Mittig, H. Savajols, M. Chartier, C. E. Demonchy, B. Fernandez, M. B. Gomez Hornillos, A. Gillibert, B. Jurado, O. Kiselev, R. Lemmon, A. Obertelli, F. Rejmund, M. Rejmund, P. Roussel-Chomaz, R. Wolski The 7H resonance was produced via one-proton transfer reaction with a 8He beam at 15.4A MeV and a 12C gas target. The experimental setup was based on the active-target MAYA which allowed a complete reconstruction of the reaction kinematics. The characterization of the identified 7H events resulted in a resonance energy of 0.57(+0.42-0.21) MeV above the 3H+4n threshold and a resonance width of 0.09(+0.94-0.06) MeV. The 7Be(d,p)2alpha cross section at Big Bang energies and the primordial 7Li abundance (astro-ph/0508454) C. Angulo, E. Casarejos, M. Couder, P. Demaret, P. Leleux, F. Vanderbist, A. Coc, J. Kiener, V. Tatischeff, T. Davinson, A.S. 
Murphy, N.L. Achouri, N.A. Orr, D. Cortina-Gil, P. Figuera, B.R. Fulton, I. Mukha, E. Vangioni Aug. 22, 2005 astro-ph The WMAP satellite, devoted to the observations of the anisotropies of the Cosmic Microwave Background (CMB) radiation, has recently provided a determination of the baryonic density of the Universe with unprecedented precision. Using this, Big Bang Nucleosynthesis (BBN) calculations predict a primordial 7Li abundance which is a factor 2-3 higher than that observed in galactic halo dwarf stars. It has been argued that this discrepancy could be resolved if the 7Be(d,p)2alpha reaction rate is around a factor of 100 larger than has previously been considered. We have now studied this reaction, for the first time at energies appropriate to the Big Bang environment, at the CYCLONE radioactive beam facility at Louvain-la-Neuve. The cross section was found to be a factor of 10 smaller than derived from earlier measurements. It is concluded therefore that nuclear uncertainties cannot explain the discrepancy between observed and predicted primordial 7Li abundances, and an alternative astrophysical solution must be investigated. Production cross-sections and momentum distributions of fragments from neutron-deficient 36Ar at 1.05 A.GeV (nucl-ex/0310033) M. Caamano, D. Cortina-Gil, K. Suemmerer, J. Benlliure, E. Casarejos, H. Geissel, G. Muenzenberg, J.Pereira We have measured production cross sections and longitudinal momentum distributions of fragments from neutron-deficient 36Ar at 1.05 A.GeV. The production cross-sections show excellent agreement with the predictions of the semiempirical formula EPAX. We have compared these results, involving extremly neutron deficient nuclei, with model calculations to extract informa tion about the response of these models close to the driplines. The longitudinal momentum distributions have also been extracted and are compared with the Goldhaber and Morrissey systematics. Longitudinal momentum distributions of {16,18}C fragments after one-neutron removal from {17,19}C (nucl-ex/9810011) T. Baumann, H. Geissel, H. Lenske, K. Markenroth, W. Schwab, M. H. Smedberg, T. Aumann, L. Axelsson, U. Bergmann, M. J. G. Borge, D. Cortina-Gil, L. Fraile, M. Hellstroem, M. Ivanov, N. Iwasa, R. Janik, B. Jonson, G. Muenzenberg, F. Nickel, T. Nilsson, A. Ozawa, A. Richter, K. Riisager, C. Scheidenberger, G. Schrieder, H. Simon, B. Sitar, P. Strmen, K. Suemmerer, T. Suzuki, M. Winkler, H. Wollnik, M. V. Zhukov The fragment separator FRS at GSI was used as an energy-loss spectrometer to measure the longitudinal momentum distributions of {16,18}C fragments after one-neutron removal reactions in {17,19}C impinging on a carbon target at about 910 MeV/u. The distributions in the projectile frames are characterized by a FWHM of 141+-6 MeV/c for {16}C and 69+-3 MeV/c for {18}C. The results are compared with experimental data obtained at lower energies and discussed within existing theoretical models.
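Several of the Coulomb-dissociation entries above (e.g. the $^{29,30}$Na and $^{17,18}$C measurements) reconstruct unbound states with the invariant-mass method: the four-momenta of all detected decay products are summed, and the relative (decay) energy is read off above the breakup threshold. The following is a minimal illustrative sketch of that kinematic step only; the particle masses and momenta are invented numbers, and this is not the analysis code of any of these experiments.

# Invariant-mass reconstruction from measured four-momenta (illustrative only).
# For fragments with four-momenta p_i = (E_i, px_i, py_i, pz_i), the invariant
# mass of the system is M = sqrt((sum E)^2 - |sum p|^2), and the relative
# (decay) energy above threshold is E_rel = M - sum(m_i).  Units: MeV, MeV/c.
import numpy as np

def four_momentum(mass, p_vec):
    """Build (E, px, py, pz) for a particle of given rest mass and 3-momentum."""
    p_vec = np.asarray(p_vec, dtype=float)
    E = np.sqrt(mass**2 + np.dot(p_vec, p_vec))
    return np.array([E, *p_vec])

def invariant_mass(four_momenta):
    total = np.sum(four_momenta, axis=0)
    E_tot, p_tot = total[0], total[1:]
    return np.sqrt(E_tot**2 - np.dot(p_tot, p_tot))

# Hypothetical example: one heavy fragment plus one neutron (invented values).
m_fragment, m_neutron = 26000.0, 939.565
fragment = four_momentum(m_fragment, [50.0, -20.0, 30000.0])
neutron  = four_momentum(m_neutron,  [-45.0, 25.0,  1100.0])

M = invariant_mass([fragment, neutron])
E_rel = M - (m_fragment + m_neutron)
print(f"invariant mass = {M:.1f} MeV/c^2, relative energy = {E_rel:.3f} MeV")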
Hippocampal cells segregate positive and negative engrams

Monika Shpokayte, Olivia McKissick, Xiaonan Guan, Bingbing Yuan, Bahar Rahsepar, Fernando R. Fernandez, Evan Ruesch, Stephanie L. Grella, John A. White, X. Shawn Liu & Steve Ramirez

Communications Biology volume 5, Article number: 1009 (2022)

The hippocampus is involved in processing a variety of mnemonic computations, specifically the spatiotemporal components and emotional dimensions of contextual memory. Recent studies have demonstrated cellular heterogeneity along the hippocampal axis. The ventral hippocampus has been shown to be important in the processing of emotion and valence. Here, we combine transgenic and all-virus based activity-dependent tagging strategies to visualize multiple valence-specific engrams in the vHPC and demonstrate two partially segregated cell populations and projections that respond to appetitive and aversive experiences. Next, using RNA sequencing and DNA methylation sequencing approaches, we find that vHPC appetitive and aversive engram cells display different transcriptional programs and DNA methylation landscapes compared to a neutral engram population. Additionally, while optogenetic manipulation of tagged cell bodies in vHPC is not sufficient to drive appetitive or aversive behavior in real-time place preference, stimulation of tagged vHPC terminals projecting to the amygdala and nucleus accumbens (NAc), but not the prefrontal cortex (PFC), showed the capacity to drive preference and avoidance. These terminals also were able to change their capacity to drive behavior. We conclude that the vHPC contains genetically, cellularly, and behaviorally segregated populations of cells processing appetitive and aversive memory engrams.

Within the brain we find a rich repository of memories that can be imbued with positive and negative valenced information1,2,3.
These experiences leave enduring structural and functional4 changes that are parceled up into1,5,6,7,8,9 discrete sets of cells and circuits comprising the memory engram10. Recent studies have successfully visualized and manipulated defined sets of cells previously active during a single experience10,11,12,13,14,15,16,17. However, how multiple engrams of varying valences (hereafter defined as cells that differentially respond to appetitive or aversive events in a stimulus-independent manner18) are represented within the same brain region remains poorly understood. Previous studies have suggested the ventral hippocampus (vHPC) selectively demarcates and relays emotional information to various downstream targets. We sought to characterize the molecular and cellular identities and the behaviorally-relevant functions of vHPC cells processing appetitive and aversive engrams, with a focus on ventral CA1 (vCA1). In order to address the question of how the vCA1 processes multiple emotional experiences, we first devised a strategy combining two cFos-based tagging methods with endogenous cFos immunohistochemistry to visualize engrams across three discrete timepoints. First, we anatomically charted the projection patterns of appetitively and aversively-tagged vCA1 cells to measure structural overlap and segregation across various downstream targets. Second, we performed genome-wide RNA sequencing (RNA-seq) and DNA methylation sequencing to investigate the genetic features of these sets of cells. Lastly, we behaviorally tested the causal role and functional flexibility of vHPC cells in both a cell-body- and projection-specific manner. First, to access cells across multiple timepoints and in an activity-dependent, within-subject manner, we used the Fos-based transgenic animal, TRAP218, under the control of 4-Hydroxytamoxifen (4-OHT), paired with an all-virus Fos-based strategy under the control of doxycycline10 (Dox) (Fig. 1a). TRAP2+ mice expressing iCre-ERT2 recombinase (which show no TdTomato expression when injected with DMSO instead of 4-OHT; Supplementary Fig. 1) were injected bilaterally with a cocktail of viruses: AAV9-Flex-DIO-TdTomato and AAV-cFos-tTA + AAV-TRE-EYFP. The cfos-tTA strategy couples the c-Fos promoter to the tetracycline transactivator (tTA), which, in its protein form, directly binds to the tetracycline response element (TRE) in a doxycycline-(Dox)-dependent manner and drives the expression of a protein of interest (i.e. EYFP). Combining these two independent systems yields two inducible windows for tagging vHPC cells within the same subject (see Methods).

Fig. 1: Hippocampus cells processing appetitive or aversive memory engrams are preferentially reactivated by their respective valences. a Fos-CreERT2 mice were injected with a viral cocktail of 1:1 ratio of AAV9-c-Fos-tTA and AAV9-TRE-EYFP mixed in at 1:1 with AAV-Flex-DIO-TdTomato into the vHPC to create the ability to open two windows of tagging in the mouse. b Schematic of the timeline for TAG1 and TAG2, which are doxycycline and 4-hydroxytamoxifen dependent, respectively. The aversive tag consists of 4 foot shocks and the appetitive tag consists of male to female exposure. c Representative image of dual memory segregation in a sagittal section of the vHPC, specifically vCA1. d Quantification (%Labeling) of EYFP+ (aversive) and TdTomato+ (appetitive) cells over DAPI along the anterior-posterior axis of vCA1 showing the segregation of valence.
(N = 3) e Schematic to visualize cells active during three different behavioral experiences. TAG1 is doxycycline dependent in EYFP, TAG2 is 4-OHT dependent in TdTomato, and lastly, TAG3 is visualized with immunohistochemistry by staining for endogenous cFos 90 minutes after a behavioral experience. f Schematic of groups for visualizing triple memory overlaps (n = 4–6 per group). Groups were counterbalanced to avoid potential timeline biases. g Representative images of triple memory overlaps in vCA1 with a 20x zoom-in showing the different representative overlaps. h The number of tagged neurons in vCA1 is similar between EYFP, TdTomato, and cFos; there is no statistical difference in recruitment (multiple comparisons one-way ANOVA; N = 10–12). i Percent overlap for respective behavioral tags for EYFP + TdTomato for aversive + appetitive (Group 1), aversive + aversive (Group 2), appetitive + aversive (Group 3), and appetitive + appetitive (Group 4) (F (3, 12) = 170.0, P < 0.0001). j NS difference in cFos+EYFP overlapping cells in restraint stress + shock vs. sweetened condensed milk + female exposure. k Similar valences (aversive + stress and appetitive + milk) highly overlap with one another when compared to the overlap of different valences (appetitive + stress and aversive + milk). Multiple comparisons one-way ANOVA; F (3, 12) = 20.77, P < 0.0001. l Triple overlap counts for EYFP + TdTomato + cFos (F (3, 12) = 10.55, P = 0.0011; ns, not significant, p > 0.05). Error bars represent mean ± standard error of the mean (SEM). After injecting the aforementioned virus cocktail into the vHPC, we used this dual tagging system to label cells processing appetitive or aversive engrams. Ten days after surgery and recovery, mice were taken off their DOX chow diet for 48 hours to open the first window of tagging. While off DOX, to tag the appetitive experience, the mice were subjected to female exposure for 1 hour in a clean homecage13,19 and placed immediately back on the DOX diet for the remainder of the study. To tag the second set of cells with the aversive experience, 24 hours later the male mice were subjected to four 2-s, 1.5-mA foot shocks. Thirty minutes after the end of fear conditioning, the mice were injected with 40 mg/kg of 4-OHT and left undisturbed for 72 hours (Fig. 1b). We first observed a striking anatomical distinction between appetitive and aversive engrams along the anterior and posterior axis in sagittal sections of vCA1 (Fig. 1c). Appetitive engram cells were predominantly located in more posterior sections, whereas aversive engram cells were largely present in anterior sections of vCA1, with a salt-and-pepper pattern observed within medial sections (Fig. 1c, d). We found that the order of tagging (i.e. tagging shock and then female exposure) did not impact the anatomical recruitment of these cells (Supplementary Fig. 2). Next, using the same dual tagging strategy as described above, we asked whether or not these appetitive- and aversive-tagged populations were recruited during subsequent experiences of similar valences. To that end, we added a third time point by visualizing endogenous cFos expression 90 minutes after a final behavioral experience, i.e. exposure to sweetened condensed milk or restraint stress20,21,22,23 (Fig. 1e, f). To control the order of experiences, we counterbalanced the four groups, each of which contained two tagged populations of cells (i.e.
aversive for cells tagged by fear conditioning and appetitive for cells tagged by male-female interactions), followed by a third experience of varying valence (restraint stress as another aversive experience and sweetened condensed milk as another appetitive experience), which was captured by endogenous cFos expression. Our groups were as follows: Aversive-Appetitive-Restraint Stress (GROUP 1), Aversive-Aversive-Restraint Stress (GROUP 2), Appetitive-Aversive-Sweetened Condensed Milk (GROUP 3), and Appetitive-Appetitive-Sweetened Condensed Milk (GROUP 4), as shown in Fig. 1f. In each group we measured the cellular co-localization of TdTomato, EYFP, and cFos to infer which cells were active in one or more of the three experiences (Fig. 1g). All mice exhibited a similar proportion of tagged cells across all three tagging approaches regardless of method of tagging or valence (Fig. 1h and Supplementary Fig. 3). Histological analyses revealed that there were significantly higher rates of cells showing co-localization of TdTomato and EYFP when labeling the same experiences (i.e. appetitive and appetitive) and lower rates of co-localization when labeling differently valenced experiences (i.e. appetitive and aversive) (Fig. 1i). Further, we observed significant colocalization between TdTomato, EYFP, and cFos when mice were subjected to three aversive or three appetitive experiences (Fig. 1l), i.e. GROUP 2 and GROUP 4 respectively. Appetitive-tagged cells were preferentially reactivated when mice were exposed to sweetened condensed milk, and aversive-tagged cells were preferentially reactivated when mice were exposed to restraint stress (Fig. 1j, k). Together, these findings raise the intriguing possibility that vCA1 designates emotionally-relevant information to two partially non-overlapping sets of cells. Moving forward, we chose two valenced engrams, shock and male-female exposure, as a proxy to better understand how the vHPC processes these aversive and appetitive experiences. Therefore, we used the same dual-tagging strategies to characterize the basic physiological properties of vHPC cells (Supplementary Fig. 4). TRAP2 mice were co-injected with the same activity-dependent viruses and tagged in the same manner as mentioned previously (Supplementary Fig. 4a, b). Interestingly, we did not observe any differences in firing frequency, suprathreshold characterization, adaptation rate, input rate, or spike rates (Supplementary Fig. 4c–l), suggesting that, despite recruiting partially non-overlapping sets of cells for appetitive and aversive experiences, these tagged cells themselves share similar physiological characteristics. Despite these physiological similarities, vHPC cells have been shown to project to a myriad of distinct brain regions involved in stress and approach-avoidance behaviors, thereby forming multi-regional networks involving emotion and memory1. We speculated that within these networks there exists structural heterogeneity partly defined by whether an experience is appetitively or aversively valenced, as has been observed in areas including the amygdala10,24,25, nucleus accumbens26, and medial prefrontal cortex27. Using our dual tagging strategy, we next traced vHPC outputs tagged by appetitive and shock experiences in a within-subject manner and measured the axonal fluorescence intensities in the following target areas, given their crucial role in emotional processing: BLA, NAc, PFC, dorsal hypothalamus, fornix, and the dentate gyrus (Fig. 2a, b).
Interestingly, we observed both red and green fluorescent signal from appetitive- and aversive-tagged vCA1 terminals in the medial BLA (A.P. −1.8) and the PFC (including IL and PL), while also finding evidence of structural segregation in the following regions: we observed predominantly stronger EYFP (appetitive) projections, as measured by fluorescence intensities, to the basomedial amygdala, posterior part (BMP), anterior commissure, anterior part (ACA), fornix, dorsal medial hypothalamic nucleus (DMD and DMV), and the lower layer of the dentate gyrus (Fig. 2c–o). Further, we found predominantly stronger TdTomato (aversive) projections to the anterior BLA, posterior BLA, NAc core, and the upper layer of the DG (Fig. 2c–o). Previous studies have demonstrated that neurons in regions such as the PFC27, NAc26, and BLA10,25 collectively process experiences in a manner that can be anatomically segregated or heterogeneous. We posit that appetitive- and aversive-tagged vHPC terminals are embedded in a larger network of emotional memory processing that can be partly defined both by unique anatomical patterns and by the activity-dependent recruitment of ensembles involved in processing a specific valence. Fig. 2: Ventral hippocampal neurons encoding appetitive or aversive valences have discrete projection patterns. a Fos-CreERT2 mice were injected with a viral cocktail of a 1:1 ratio of AAV9-c-Fos-tTA and AAV9-TRE-EYFP mixed in with AAV-Flex-DIO-TdTomato into the vHPC for dual memory tagging. b Experimental timeline for tagging; EYFP represents aversive and TdTomato represents appetitive. c, d Representative images of terminal projections: anterior, medial, and posterior BLA, dorsal hippocampus, prefrontal cortex, nucleus accumbens, dorsal medial hypothalamic nucleus, and the fornix. Arbitrary units for EYFP (appetitive) and TdTom (aversive) (N = 3) for the e anterior BLA (unpaired Student's t-test, t = 7.279, df = 4, p = 0.00019), f medial BLA (t = 1.271, df = 4; p = 0.2725), g posterior BLA (t = 14.95, df = 4; p = 0.0001), h BMP (t = 6.168, df = 4; p = 0.0035), i PFC (includes IL and PL) (t = 1.078, df = 4; p = 0.3419), j ACA (t = 50.68, df = 4; P < 0.0001), k NAc core (t = 3.018, df = 4; p = 0.0392), l fornix (t = 5.718, df = 4; p = 0.0046), m hypothalamus (t = 13.96, df = 4; p = 0.0002), n upper layer of DG (t = 54.94, df = 4; p < 0.0001), o lower layer of DG (t = 11.73, df = 4; p = 0.0003). ns = p > 0.05. Error bars represent mean ± SEM. Scale bars are all 100 μm. Recent studies have identified unique molecular profiles of vCA1 cells with distinct projection targets. These vCA1 projections transmit information to multiple areas involved in emotion and memory and form a network that can be organized by unique architectural features, including vCA1 cell inputs, outputs, and transcriptional signatures1. We wanted to ask whether these cell populations as a whole, for the appetitive or aversive engram, show distinct genetic differences based on experience. Accordingly, we next examined whether or not the molecular composition of vHPC cells contained distinct genetic profiles. To catalogue the molecular landscape of vHPC cells in an activity-dependent manner, we tagged an appetitive experience (i.e. male-female interactions), an aversive experience (i.e. multiple shocks), or a neutral experience (i.e. exposure to the same conditioning cage without an appetitive or aversive stimulus; see Methods; Fig. 3a).
We then performed RNA-seq using tagged nuclei (EYFP+) isolated by Fluorescence-Activated Cell Sorting (FACS), which showed ~0.6% tagged nuclei from the appetitive and aversive groups, whereas the neutral group recruited on average 0.35% EYFP+ nuclei (Fig. 3b, c). The appetitive and aversive groups recruited significantly more EYFP+ nuclei than the neutral experience group (Fig. 3c). Comparing aversive and appetitive vCA1 cells to the neutral group, we identified 474 differentially expressed genes (DEGs) in aversive vCA1 cells (Fig. 3d, f), including 340 downregulated genes and 134 upregulated genes (Fig. 3g). We also identified 1,104 DEGs in appetitive vCA1 cells compared to the neutral group (Fig. 3e, f), including 1,025 downregulated genes and 79 upregulated genes (Fig. 3g). There were 842 unique DEGs in appetitive engram cells and 212 unique DEGs in aversive engram cells (Fig. 3f), suggesting distinct transcriptional landscapes in these two populations. Fig. 3: Distinct molecular landscapes of appetitive and aversive hippocampus engram cells. a Experimental scheme to label, isolate, and analyze the two groups of vCA1 cells labeled by EYFP upon aversive stimulation (electrical shock) or appetitive stimulation (male mice engaged in social interaction with female mice) or neutral conditioning in the same cage without aversive or appetitive stimulation. b FACS isolation of EYFP-positive nuclei from the hippocampi of mice labeled by AAV9-cFos-tTA + AAV9-TRE-ChR2-EYFP virus from their respective groups. c Percentage of EYFP-positive nuclei from each group (N = 3 per group/timepoint, ordinary one-way ANOVA, multiple comparisons, ***P = 0.001, ****P < 0.0001, ns = not significant, p > 0.05). d A volcano plot with the relative fold change of gene expression in log 2 ratio and the FDR-adjusted p-value in log 2 ratio as the X and Y axes, showing the up- and downregulated genes in aversive vCA1 cells compared to the neutral group. Up- or downregulated genes with an FDR-adjusted p-value less than 1e-5 plus at least four-fold differences were highlighted in red or blue, respectively. The gene names were displayed for the top 20 most significant genes as sorted by Wald statistics. e A volcano plot showing the up- and downregulated genes in appetitive vCA1 cells compared to the neutral group. f A Venn diagram showing the 1104 differentially expressed genes (DEGs) in appetitive and 474 DEGs in aversive vCA1 cells. 262 DEGs were identified in both groups. g An UpSet plot showing 226 downregulated genes in both aversive and appetitive vCA1 cells, and 35 upregulated genes in both aversive and appetitive vCA1 cells. Only one gene (Nufip1) was upregulated in the appetitive engram but downregulated in the aversive engram. Error bars represent mean ± SEM. Furthermore, the top 20 downregulated DEGs identified for aversive and appetitive vCA1 cells showed no overlap with each other, and 14 among the top 20 upregulated DEGs for aversive and appetitive vCA1 cells did not overlap with each other (Fig. 3d, e, Supplementary Tables 1 and 2), supporting the distinct transcriptomes in these neurons. Interestingly, among the 262 shared DEGs we identified one gene, Nufip1 (Nuclear Fragile X Mental Retardation Protein Interacting Protein 1), that was upregulated in appetitive vCA1 cells and downregulated in aversive vCA1 cells (Fig. 3g). This gene encodes a nuclear RNA binding protein that contains a C2H2 zinc finger motif and a nuclear localization signal28.
Diseases associated with NUFIP1 mutations include PEHO syndrome (progressive encephalopathy with edema, hypsarrhythmia and optic atrophy), an autosomal recessive and dominant, progressive neurodegenerative disorder that starts in the first few weeks or months of life. Its interacting protein FMRP1 is essential for protein synthesis in the synapse29, and the CGG trinucleotide expansion mutation of the FMR1 gene encoding FMRP1 causes Fragile X syndrome, the most common intellectual disability in males30. Further investigation of the functional significance of NUFIP1 and other DEGs could reveal mechanistic insights into the transcriptomic plasticity engaged by the valences of memory. To gain deeper insight into the molecular signatures of transcriptomes associated with aversive and appetitive vCA1 cells, we performed Gene Ontology (GO) analysis of the up- and downregulated pathways in aversive and appetitive vCA1 cells (Fig. 4a–d). We found that the top upregulated pathway in aversive vCA1 cells involved neurotransmitter complexes such as ionotropic glutamate receptor activity (Fig. 4a). This finding is consistent with previous studies using brain tissues that identified 3,759 differentially methylated DNA regions in the hippocampus associated with 1,206 genes enriched in the categories of ion-gated channel activity after contextual fear conditioning29,31,32. Interestingly, we found the top downregulated pathway in aversive vCA1 cells involved DNA mismatch repair (Fig. 4b). Although the top upregulated pathways in appetitive vCA1 cells also include ionotropic glutamate receptor activity (Fig. 4c), the top downregulated pathways in appetitive vCA1 cells are enriched for axoneme assembly and microtubule bundle formation (Fig. 4d), different from the pathways downregulated in aversive vCA1 cells (Fig. 4b). Next, we compared the RNA-seq data between aversive and appetitive vCA1 cells directly. We identified 494 DEGs, including 47 upregulated genes and 447 downregulated genes (Supplementary Fig. 5a and Supplementary Table 3). Furthermore, we found that five pathways were upregulated in the appetitive engram compared to aversive vCA1 cells, such as the nuclear exosome RNase complex, and 16 pathways were downregulated, such as axoneme assembly (Supplementary Fig. 5b, c). These differentially altered signaling pathways between aversive and appetitive vCA1 cells, which resulted from multiple comparisons, support our conclusion that these neurons indeed represent transcriptionally distinct subpopulations. One future direction is to explore the functions of these DEGs and altered signaling pathways. As a proof-of-concept, we applied GeneMANIA33 to predict the functions of the top 20 DEGs (Fig. 4e–h). For instance, the brain-specific angiogenesis inhibitor 2 (BAI2) is uniquely identified for the DEGs upregulated in aversive vCA1 cells (Fig. 4e), and the ionotropic glutamate receptor is only implicated for the DEGs upregulated in appetitive vCA1 cells (Fig. 4g). Similarly, the WNT signaling pathway component Disheveled is uniquely identified for the DEGs downregulated in aversive vCA1 cells (Fig. 4f), and the TRPV1-4 channel is only implicated for the DEGs downregulated in appetitive vCA1 cells (Fig. 4h). Loss- and gain-of-function studies of these predicted genes will provide mechanistic insight into the molecular signatures of engram neurons with different memory valences. Fig.
4: Gene ontology and GeneMANIA analysis provide further mechanistic insight into the molecular landscape of appetitive and aversive engram regulation. a Gene Set Enrichment Analysis of the upregulated pathways in aversive engrams using the Gene Ontology (GO) module. The size of the dot represents the gene count, and the color of the dot indicates the FDR. Pathways with an FDR-adjusted P value smaller than 0.25 were considered significantly enriched. b GO analysis of downregulated pathways in aversive vCA1 cells. c GO analysis of upregulated pathways in appetitive vCA1 cells. d GO analysis of downregulated pathways in appetitive vCA1 cells. e–h GeneMANIA networks of the top 20 upregulated and downregulated genes in aversive and appetitive vCA1 cells, respectively. Genes with red nodes are upregulated (e) or downregulated (f) in the aversive vCA1 cells compared to the neutral sample, and genes with green nodes are upregulated (g) or downregulated (h) in the appetitive vCA1 cells compared to the neutral sample. The grey nodes represent the genes interacting with these 20 differentially expressed genes. The shared protein domains supported by InterPro domain databases within each network were labeled in red for aversive vCA1 cells and in green for appetitive vCA1 cells. The detailed description for each shared protein domain is listed in Supplementary Table 4. The interaction network categories between these genes are annotated in the legend next to panel g. Besides the distinct transcriptomic profiles in engram neurons described above, recent studies reveal the transcriptional priming role of epigenetic regulation in engram cells34. To explore whether the dynamic epigenomic landscape also contributes to the specificity of engram neurons with different valences, we performed reduced representation bisulfite sequencing (RRBS) to characterize the DNA methylation landscapes in aversive and appetitive vCA1 cells. As shown in Fig. 5a, b, we identified 1939 differentially methylated cytosines (DMCs) with a change in DNA methylation larger than 20% and a p value smaller than 0.05 in aversive vCA1 cells compared to the neutral group, and 3117 DMCs in appetitive vCA1 cells. These DMCs are located at different positions in the genome, including 5'UTR, promoter, exon, intron, 3'UTR, transcription termination sites (TTS), intergenic, and noncoding regions, suggesting different functional outputs at the transcriptional level. Interestingly, the genomic distributions of these DMCs in aversive vCA1 cells are slightly different from those in appetitive vCA1 cells (Supplementary Table 5). Based on these DMCs, we identified differentially methylated regions (DMRs) that contain at least two DMCs for each DMR (Fig. 5c, d). These DMRs allowed us to identify the differentially methylated genes (DMGs) that either contain or are close to these DMRs. The top 20 DMGs with a change in methylation level larger than 20% and a p value smaller than 0.05 show no overlap between aversive and appetitive vCA1 cells, suggesting that different memory valences trigger different changes in DNA methylation. Among the 266 DMGs in appetitive vCA1 cells and the 98 DMGs in aversive vCA1 cells, only 32 DMGs are commonly shared (Fig. 5e, Supplementary Data 1), confirming the distinct DNA methylation landscape between these two populations of engram cells. Last, we performed Gene Ontology (GO) analysis of DMGs in aversive and appetitive vCA1 cells (Fig. 5f, g).
We found that the pathways in aversive vCA1 cells were mainly enriched in the structure and function of synaptic connections (Fig. 5f). However, the enriched pathways in appetitive vCA1 cells are much more diverse, including axon growth, synaptic connection, ion channels, and the RNA polymerase II transcription regulator complex (Fig. 5g). These differentially enriched pathways between aversive and appetitive vCA1 cells suggest potentially distinct functional outputs, conferred by DNA methylation at the transcriptional level, that contribute to the specificity of memory valences. One interesting future direction is to explore the maintenance and functions of these DNA methylation changes during the consolidation and recall of memory. Overall, our results in Figs. 3, 4, and 5 showed that the distinct molecular signatures of aversive and appetitive vCA1 cells are reflected at the transcriptomic and epigenomic levels, likely contributing to the different valences of memory. Fig. 5: Distinct DNA methylation landscapes of aversive and appetitive vCA1 cells. Volcano plots showing the differentially methylated cytosines (DMCs) with a change in DNA methylation larger than 20% and a p value smaller than 0.05 in aversive (a) or appetitive (b) vCA1 cells compared to the neutral group. DMCs in different genomic positions (3'UTR, 5'UTR, exon, intergenic, intron, non-coding, promoter, TSS) are color coded. Volcano plots showing differentially methylated regions (DMRs) that contain at least two DMCs for each DMR in aversive (c) and appetitive (d) vCA1 cells. The top 10 hypermethylated and hypomethylated regions were highlighted in pink or blue, respectively, and were annotated to their nearest genes. e A Venn diagram showing differentially methylated genes (DMGs) associated with the DMRs in c and d identified in aversive and appetitive vCA1 cells. f GO analysis of DMGs in aversive vCA1 cells. g GO analysis of DMGs in appetitive vCA1 cells. vCA1 is known to have monosynaptic projections to the BLA, NAc, and the mPFC35. Previous studies have shown that these brain regions are important in the modulation of both appetitive and aversive experiences, especially in the NAc26,36 and the BLA24,37, in which molecularly and topographically distinct cellular populations have been identified for each behavior. Therefore, we tested for a causal role of tagged vHPC cell bodies and their selected terminals in driving behavior by first infusing a virus cocktail of AAV9-cFos-tTA and AAV9-TRE-ChR2-EYFP or AAV9-TRE-EYFP into vCA1. We then implanted optical fibers bilaterally above either vCA1 or its terminals (vCA1–BLA, vCA1–NAc, or vCA1–PFC) (Fig. 6a). We first found that all terminals were capable of activating their corresponding downstream targets, as assessed by increases in cFos levels following stimulation (Supplementary Fig. 6). Following 10 days of recovery, a separate group of mice were taken off DOX and tagged with aversive (i.e. shock) or appetitive (i.e. male-female interactions) experiences. As illustrated in Fig. 6b, for the first set of experiments, on day 1 the aversively tagged mice were placed in a real-time place preference/avoidance (RTPP/A) chamber to assess baseline levels. On day 3, the animals were placed back into the RTPP/A chamber; this time they received optical stimulation at 20 Hz bilaterally on one side and no stimulation on the other. We found that optical stimulation of vCA1–BLA or vCA1–NAc terminals drove aversion (Fig.
6e, f), whereas the EYFP controls, vCA1 cell body stimulation, and vCA1–PFC terminals did not statistically deviate from baseline levels (Fig. 6c, d, g). On day 5 of the experiment, the mice were subjected to an induction protocol, as previously reported13, to test the capacity of the engram to switch the behavior it drives. The aversive-tagged male mice were placed in a new chamber with a female mouse for 10 minutes while receiving optical stimulation for the entire duration of exposure. Afterwards, the animals were placed back in their chambers and assessed for behavioral changes on day 7. In this post-induction test, we observed that optical stimulation of vCA1–BLA terminals was now sufficient to drive preference despite driving aversion in the pre-induction test earlier (Fig. 6e). The induction protocol also revealed that vCA1–NAc terminals, which previously were sufficient to drive aversion, now had reversed or reset their capacity to modulate behavior and returned to baseline levels (Fig. 6f). Lastly, we saw no changes in the EYFP controls, vCA1 cell body, or vCA1–PFC stimulation (Fig. 6c, d, g). Fig. 6: Hippocampal valence specific outputs are sufficient to drive preference or aversion in a projection-specific and functionally reversible manner. Mice were injected with a virus cocktail of AAV9-c-Fos-tTA and AAV9-TRE-ChR2-EYFP or AAV9-TRE-EYFP into vCA1, and optic fibers were placed bilaterally over the cell bodies of vCA1 or the terminals from vCA1 to the BLA, NAcc, or the PFC, in separate groups. The mice were then tagged with either an aversive or appetitive experience and subjected to real-time opto-place avoidance and preference. a Representative images of ChR2-EYFP labeling cell bodies in vCA1 and its respective terminals in the BLA, NAcc, and PFC. b Fear to reward protocol in which the subjects received stimulation of vCA1 cell bodies or vCA1 projections to the BLA, NAcc, or PFC terminals. c–g Percent preference for the stimulation side at baseline, preinduction, and postinduction (n = 7 subjects for EYFP controls, n = 8 subjects for vCA1, n = 8 subjects for BLA, n = 7 for NAcc, and n = 7 for PFC, **P = 0.0018, ***P = 0.0006, repeated measures one-way ANOVA followed by Tukey's multiple comparison test). h Reward to fear protocol in which the subjects received stimulation of BLA, NAcc, or PFC terminals originating from vCA1. i–m Percent preference for the stimulation side at baseline, preinduction, and postinduction (n = 7 for EYFP, n = 9 for vCA1, n = 8 for BLA, n = 7 for NAcc, and n = 7 for PFC, ns = p > 0.05, **P = 0.0032, ****P < 0.0001, repeated measures one-way ANOVA followed by Tukey's multiple comparison test. Error bars represent mean ± SEM.). Next, we asked whether appetitive-tagged vCA1 terminals to the BLA, NAc, or PFC were sufficient to modulate behavior (Fig. 6h). Similar to the above findings, optical stimulation of vCA1–BLA (Fig. 6k) and vCA1–NAc (Fig. 6l) terminals, but not any of the other groups (Fig. 6i, j, m), was sufficient to drive place preference. During the induction phase of the experiment, we placed the mice into a fear conditioning chamber where they received multiple foot shocks and simultaneous optical stimulation of the appetitive-tagged terminals. This experiment assessed whether these vCA1 terminals are able to switch or reset their capacity to drive preference in a manner mirroring the experiments above. Indeed, we found that in the post-induction tests, the vCA1–BLA group switches from driving preference to aversion (Fig.
6k) while the vCA1–NAc group resets from driving preference back to baseline levels (Fig. 6l). Furthermore, we did not observe this effect in neutral-tagged animals (Supplementary Fig. 7), nor did we see any statistically significant behavioral changes for the RTPP/A task in the EYFP controls, vCA1 cell body stimulation group, or vCA1–PFC stimulation group (Fig. 6i, j, m). Importantly, we tested whether optical stimulation might cause increases in motor behaviors. We found no significant difference in distance traveled across all groups during light on or off epochs in an open field (Supplementary Fig. 8). Moreover, the lack of preference or avoidance observed from vCA1 cell body stimulation raised the possibility that vCA1's role in driving behavior is determined in a terminal-specific manner35, a notion that dovetails with recent studies suggesting that computations along the axons of a given cell body can differentially drive behavior in accordance with the downstream target8. Taken together, these results suggest that vHPC cell bodies relay their behaviorally relevant and valence-specific content to their downstream targets despite sharing partially non-overlapping molecular and neuronal signatures and distinct projection patterns. This is corroborated by evidence showing that vHPC axonal outputs preferentially route independent features of a given behavior7. Here we have shown that the vHPC processes appetitive and aversive experiences in defined populations of cells that are partially distinct at the molecular, cellular, and projection-specific levels. We also demonstrated their capacity to drive behaviors through functionally plastic projection-specific terminals. Our immunohistochemical data suggest that vCA1 contains at least three populations of neurons: two subsets that can be demarcated based on their anterior-posterior locations and preferential response to appetitive or aversive valences, and a third population that responds to both, perhaps reflecting a biological predilection for salience5. While their exact brain-wide structural and functional outputs remain undetermined, we speculate that our observed population of vCA1 cells responding to aversion is a superset of recently observed anxiety cells that transmit information to the hypothalamus25 and PFC38. Moreover, vCA1 cells processing aversive or appetitive experiences perhaps route information to and innervate the BLA and NAc in differing anatomical, receptor-, and cell-type-specific manners, optimizing their capacity to integrate mnemonic information25. Additionally, by combining transgenic activity-dependent tagging strategies with all-virus-based expression of fluorophores, our design permits the visualization of multiple ensembles in a within-subject manner, which coalesces with previous studies monitoring and manipulating a single ensemble active at two discrete time points as well26. By intersecting these approaches with genetic sequencing strategies, these tools enable the tagging, manipulation, and molecular documentation of cells processing aversive and appetitive behaviors, opening the possibility of cataloging topographical similarities and differences between the two in a brain-wide manner. For instance, future studies may probe the molecular composition of appetitive- and aversive-tagged vCA1 cells along the anterior-posterior axis, and test whether or not the transcriptomic profiles of these cells differ both across valence and their physical location.
Moreover, it is intriguing that vCA1 appetitive- and aversive-tagged terminals showed evidence of segregation within the amygdala and hippocampus. Consequently, subsequent research may seek to functionally tease out their contributions to behavior, and measure the types of information they are transmitting through a combination of imaging and terminal-specific perturbation approaches. It is important to constrain interpretations of engrams given the vast array of genetic tools used to access cells in an activity-dependent manner. For instance, vCA1 ensembles vary drastically in size and in activity patterns across learning and memory. These numbers range from single percentages and can reach ~35% of cells depending on the immediate early gene marker used, tagging strategy, rodent line, and viral tools employed36,39,40,41. Fittingly, we believe that our dual-tagging strategy partly over-samples the number of tagged cells given the time-frame necessary for tagging to occur (e.g. 48 hours off Dox; 72 hours post-4OHT injection42,43,44,45), and future experiments may aim to improve the temporal resolution of contemporary tagging approaches to enhance the signal-to-noise ratio of experience-related tagging to background tagging or leakiness. Additionally, cFos+ cells only reflect a subset of an engram that is otherwise distributed throughout the brain and recruits numerous immediate early genes, cell types, and complementary physiological activity that activity-dependent markers may fail to capture due to the relatively slow timescales of gene expression. We indeed note that while cFos+ cells indicate recent neural activity, a cell that was recently active does not necessarily have to express cFos, especially across brain regions and cell types, which presumably have their own thresholds for cFos expression. These points highlight the complex nature of a memory engram and underscore the caution that is warranted when interpreting data in light of inherent technical limitations. We believe that a multi-faceted approach combining genetic tagging strategies with real-time imaging during complex behaviors will help to disentangle the relationship between neural activity, genetic modifications, and systems-level changes in response to learning and memory. Our terminal manipulations are in line with recent studies demonstrating terminal-specific routing of information from vCA1 cell bodies through a variety of single, bi-, and trifurcating processes8,46,47. Our data provide a gain-of-function demonstration that activated vCA1 terminals can drive preference or aversion, which we believe was obfuscated by cell body stimulation given that the latter presumably activates the cell body and most, if not all, corresponding axonal outputs. The former would selectively modulate a set of terminals emerging from vCA1 that project to a distinct target area while only minimally affecting vCA1 terminals projecting to other target areas. Additionally, while the molecular basis underlying our valence switch experiments remains unknown, we posit that the plasticity of the transcriptome could confer the ability to switch which aspect of behavior a terminal drives, given the rapid and enduring responses to learning and memory present at the genetic level in tagged cells34.
Indeed, the comprehensive transcriptional landscape in the mouse hippocampus is dynamic across the lifespan of memory formation and recall, and its experience-dependent modifications are largely present specifically in tagged cells, appearing within minutes of learning and lasting for several weeks47. Future experiments may perform RNA-seq on cells before and after the induction protocol to measure the ensuing transcriptional changes and identify putative loci mediating such functional plasticity. Further, our sequencing data enhance our understanding of how the ventral hippocampus genetically parses out both appetitive and aversive engrams, how those experiences can cause a change in the upregulation and downregulation of discrete genes, and how these experiences can have lasting effects on the epigenome, as shown by the methylation analysis (Fig. 5). Future studies, for instance, can build on recent work assessing the molecular and projection-defined connectivity between vCA1 and its downstream targets with MAPSeq1, while adding an activity-dependent component, such as our dual memory tagging approach, to measure organizing principles of the ventral hippocampus and its projections in a manner that takes a cell's history into account. This study in particular provides an influential platform for characterizing the organization of the ventral hippocampus and its nonrandom input-output patterns, and we posit that this logic includes an activity-dependent dimension such that appetitive and aversive experiences engage unique input-output hippocampal circuitry as well. Finally, while the physiological basis by which terminals can switch or reset their capacity to drive valence-specific behaviors remains unclear, future studies may consider candidate mechanisms including homeostatic plasticity, dendritic growth and retraction, and counter-conditioning-facilitated changes along the axon-dendrite interface between vCA1 and the BLA or NAc24,37. In line with this speculation, previous studies have demonstrated that the dorsal hippocampus contains defined sets of functionally plastic cell bodies sufficient to drive aversive or appetitive behavior, while the BLA contains fixed populations that are sufficient to drive each behavior contingent on the anatomical location stimulated along the anterior-posterior axis as well as on their projection-specific sites7,8,24,25,37. Subsequent research can be aimed at utilizing our dual-memory tagging approach to genetically and physiologically map out which cell-types and circuits show such hardwired or experience-dependent responses to emotional memories as well. In our study, we speculate that vCA1 cells become appetitive or aversive in an experience-dependent manner as opposed to being hardwired for either valence per se, as has been observed in many other brain regions (e.g. BLA24). However, it remains possible that a subset of these vCA1 cells is preferentially tuned to process a given valence in an experience-independent manner. Indeed, the notion that experience itself modifies these neurons to become appetitive or aversive does not preclude the possibility that subsequent experiences will modify their capacity in a flexible manner to drive a given behavior associated with a given valence. Moreover, in our study, while the criteria for valence were met in Fig. 1 (e.g.
hippocampus cells responded differentially to stimuli of positive and negative valence in a manner independent of stimulus features), our subsequent data sets homed in on a single appetitive (e.g. male-female social interactions) and a single aversive (e.g. foot shocks) experience for thorough molecular, anatomical, and behavioral profiling. Importantly, we highlight here that in order to make a direct claim about valence, the task structure needs to be held constant. For instance, an alternative interpretation of our current data is that our observed differences in gene enrichment profiles and anatomical segregation are due to the inherent differences in tasks used (e.g. male-female social interactions vs. contextual fear conditioning), and not valence per se. It remains possible therefore that male-female social interactions, which require multi-modal integration of social and contextual cues, recruit a unique set of genes and ventral hippocampal anatomy in comparison to contextual fear conditioning, in which a conditioned cue (e.g. context) is associated with an unconditioned cue (e.g. shock), and therefore recruits task-specific molecular and anatomical activity. As such, we propose that future studies may focus on tagging experiences in which most, if not all, environmental features are held constant except the valence associated with a unimodal stimulus itself, which we believe opens up an experimental platform for studying how multiple experiences of similar or varying valence differentially engage cellular populations. While we believe that this is partially accounted for by utilizing multiple experiences of similar valence in Fig. 1, additional future experiments may implement an in vivo recording approach in which putative negative- and positive-tagged cells are imaged while task structure and valence are parametrically varied. Building from this notion, our RNA-seq and DNA methylation data provide additional insight into how memories alter the functions of the genome in a valence-dependent manner, in both healthy and pathological states. For instance, our methylation data identified Pin148,49 among the 266 DMGs in the appetitive vCA1 cells; Pin1 is known for its neuroprotective qualities in neurodegeneration through regulating the spread of cis p-tau. Follow-up studies may leverage these appetitive engrams by altering these DMGs as a means to alleviate psychiatric and neurodegenerative diseases. Together, we propose that, in addition to processing spatio-temporal units of information, the hippocampus contains discrete sets of cells processing aversive and appetitive information that relay content-specific and behaviorally relevant information to downstream areas in a molecularly-defined and projection-specific manner, thus collectively providing a multisynaptic biological substrate for multiple memory engrams. FosCreER (Jax stock: #021882) and wildtype male C57BL/6 mice (2-3 months of age; Charles River Labs) were housed in groups of 5 mice per cage. The animal vivarium was maintained on a 12:12-hour light cycle (lights on at 0700). Mice were placed on a diet containing 40 mg/kg doxycycline (Dox) for a minimum of 48 hours prior to surgery with access to food (doxycycline diet) and water ad libitum. Mice were given a minimum of ten days after surgery to recover. The Dox-containing diet was replaced with standard mouse chow (ad libitum) 48 hours prior to behavioral tagging to open a time window of activity-dependent labeling1,2.
All subjects were treated in accordance with protocol 17-008 approved by the Institutional Animal Care and Use Committee at Boston University. All experiments complied with the relevant ethical regulations for animal testing and research as dictated by AAALAC and IACUC standards. Stereotaxic surgery and optic implant Stereotaxic injections and optical fiber implants followed methods previously reported1,2. All surgeries were performed under stereotaxic guidance, and subsequent coordinates are given relative to bregma (in mm); dorsal-ventral injection coordinates were calculated and zeroed relative to the skull. Mice were placed into a stereotaxic frame (Kopf Instruments, Tujunga, CA, USA) and anesthetized with 3% isoflurane during induction, lowered to 1–2% to maintain anesthesia (oxygen L/min) throughout the surgery. Ophthalmic ointment was applied to both eyes to prevent corneal desiccation. Hair was removed with a hair removal cream and the surgical site was cleaned three times with ethanol and betadine. Following this, an incision was made to expose the skull. Bilateral craniotomies involved drilling windows through the skull above the injection sites using a 0.5 mm diameter drill bit. Coordinates were −3.16 anteroposterior (AP), ±3.1 mediolateral (ML), and −4.6 dorsoventral (DV) for vCA1; −1.8 AP, ±3.1 ML, and −4.7 DV for the BLA; −1.25 AP, ±1.0 ML, and −4.7 DV for the NAc; 1.70 AP, ±0.35 ML, and −2.8 DV for the PFC. All mice were injected with a volume of 300 nl of cocktail per site at a controlled rate of 100 μl min−1 using a mineral oil-filled 33-gauge beveled needle attached to a 10 μl Hamilton microsyringe (701LT; Hamilton) in a microsyringe pump (UMP3; WPI). The needle remained at the target site for five minutes post-injection before removal. For all targets, bilateral optic fibers were placed 0.5 mm DV above the injection site. Jewelry screws secured to the skull acted as anchors. Layers of adhesive cement (C&B Metabond) followed by dental cement (A-M Systems) were spread over the surgical site. Mice received 0.1 mL of 0.3 mg/ml buprenorphine (intraperitoneally) following surgery and were placed on a heating pad during surgery and recovery. Histological assessment verified viral targeting and fiber placements. Data from off-target injections were not included in analyses. Activity-dependent Viral Constructs pAAV9-cFos-tTA, pAAV9-TRE-eYFP and pAAV9-TRE-mCherry were constructed as previously described (Ramirez et al., 2015). AAV9-c-Fos-tTA was combined with AAV9-TRE-eYFP or AAV9-TRE-ChR2-EYFP (UMass Vector Core) prior to injection at a 1:1 ratio. This cocktail was further combined in a 1:1 ratio with AAV9-CAG-Flex-DIO-TdTomato (UNC Vector Core). Optogenetic methods Optic fiber implants were plugged into a patch cord connected to a 450 nm laser diode controlled by automated software (Doric Lenses). Laser output was tested at the beginning of every experiment to ensure that at least 15 mW of power was delivered at the end of the patch cord (Doric Lenses). Behavior tagging As previously reported9,10,11,12, to take animals off Dox, the Dox diet was replaced with standard lab chow (ad libitum) 48 h prior to behavioral tagging. Female exposure: One female mouse (PD 30–40) was placed into a clean home cage with a clear cage top. The experimental male mouse was then placed into the chamber and allowed to interact freely for 2 hours. Fear exposure: Mice were placed into a conditioning chamber and received four 1.50 mA (2 s) foot shocks over an 8-minute training session.
Following tagging, Dox was reintroduced to the diet and the male mice were returned to their home cage with access to the Dox diet. For 4-OHT tagging, 4-hydroxytamoxifen (Sigma: H7904) was dissolved in 100% ethanol and vortexed for 5 minutes. Once in solution, corn oil was added and the solution was sonicated to achieve a 10 mg/ml stock. When the solution was ready, 4-OHT was loaded into syringes for injection and any left-over solution was stored at −20 °C for future use. On the day of tagging, 200 μl of the 10 mg/ml stock (2 mg 4-OHT) was administered i.p. to FosCreER mice 30 minutes following behavior, and mice were left undisturbed for 72 hours. Mice were injected with saline or DMSO at least twice prior to the tagging protocol to acclimate animals to injection and prevent off-target tagging. Behavioral assays All behavior assays were conducted during the light cycle of the day (0700–1900). Mice were handled for 3–5 days, 5–10 minutes per day, before all behavioral experiments. The testing chamber consisted of a custom-built rectangular box with a fiber optic holder (38 × 23.5 × 42 cm). Red tape divided the chamber down the middle, creating two halves. Right and left sides for stimulation were randomized. Day 1 was used to assess baseline levels, during which the mouse was given 10 min to freely explore the arena. Animals were rerun on baseline days if their preference for either side fell outside a 45–55% range; animals were acclimated to the chamber until they showed no more than a 45/55 preference. Once baseline was achieved, on the day following the first engram tag, mice received light stimulation (15 ms pulses at 20 Hz) upon entry into the designated side of the chamber (counterbalanced across groups) over a 10-minute test period. Once the mouse entered the stimulated side, a TTL signal from the EthoVision software via a Noldus USB-IO Box triggered a stimulus generator (STG-4008, Multi-channel Systems). A video camera (Activeon CX LCD Action Camera) recorded each session and an experimenter blind to treatment conditions scored the amount of time on each side. Statistical analyses involved one-way ANOVAs comparing group difference scores [time (in seconds) on stimulated side minus time on unstimulated side] as well as changes across days using matching or repeated measures one-way ANOVAs. Behavioral diagrams were made with BioRender. Optogenetic induction protocol For optogenetic fear induction, subjects were placed in a shock chamber with light stimulation (20 Hz, 15 ms) for 500 seconds. Foot shocks (1.5 mA, 2 s duration) were administered at the 198 s, 278 s, 358 s, and 438 s time points. During optogenetic reward induction, in a different room from the initial fear tag, the subject was placed into a clean homecage with a female mouse. Light stimulation (20 Hz, 15 ms) was applied to the male mouse for 10 minutes. Immunohistochemistry follows protocols previously reported15,16,18. Mice were overdosed with 3% isoflurane and perfused transcardially with cold (4 °C) phosphate-buffered saline (PBS) followed by 4% paraformaldehyde (PFA) in PBS. Brains were extracted and stored overnight in PFA at 4 °C. Fifty μm coronal sections were cut in serial order using a vibratome and collected in cold PBS. Sections were blocked for 1 hour at room temperature in PBST and 5% normal goat serum (NGS) or bovine serum albumin (BSA) on a shaker.
Sections were transferred to wells containing primary antibodies (1:1000 guinea pig anti-c-Fos [SySy]; 1:1000 rabbit anti-RFP [Rockland]; 1:5000 chicken anti-GFP [Invitrogen]) and allowed to incubate on a shaker overnight at room temperature or for 3 days at 4 °C. Sections were then washed in PBST for 10 min (x3), followed by a 2-hour incubation with secondary antibodies (1:200 Alexa 555 anti-rabbit [Invitrogen]; 1:200 Alexa 647 anti-guinea pig [Invitrogen]; 1:200 Alexa 488 anti-chicken [Invitrogen]). Following three additional 10-min washes in PBST, sections were mounted onto micro slides (VWR International, LLC). Vectashield HardSet Mounting Medium with DAPI (Vector Laboratories, Inc) was applied, slides were coverslipped and allowed to dry overnight. Cell quantification Only animals that had accurate bilateral injections were selected for quantification. Fluorescent images were acquired using a confocal microscope (Zeiss LSM800, Germany) with a 20X objective. For cFos quantification, all animals were sacrificed 90 minutes post-behavior for immunohistochemical analyses. The numbers of EYFP-, TdTomato-, and c-Fos-immunoreactive neurons in vCA1 were counted to measure the number of active cells in the respective area. Three to five coronal slices (spaced 50 μm from each other) were quantified per mouse and their means were taken; each N value reflects quantification of 3–5 images from 3–5 mice. The numbers of eYFP-positive, TdTomato-positive, c-Fos-positive, and DAPI-positive cells in a set region of interest (ROI) were quantified manually by two different experimenters with ImageJ (https://imagej.nih.gov/ij/). The size of the ROI was standardized across brains, animals, experiments, and groups. The two counters were blind to experimental and control groups. To calculate the percentage of tagged cells, we counted the number of fluorescently positive cells and divided it by the total number of DAPI-positive cells. Overlaps were counted as a percentage over DAPI as well; for example, cells that were positive for both EYFP and TdTomato were counted over DAPI. Venn diagram charts are representative graphs of the proportion of EYFP+, TdTomato+, and cFos+ cells normalized to 100%, not to DAPI. Fluorescence intensity calculations 50-micron coronal sections were stained with chicken anti-GFP (1:1000) and rabbit anti-RFP (1:1000) as described above. Fluorescent images were taken at 20X magnification using a confocal microscope (Zeiss LSM800, Germany). All laser and imaging settings were kept consistent across all animals and all brain regions imaged. Regions of interest (ROIs) were maintained consistent across images, sections, and animals. ROIs were identified using landmarks and referencing the Paxinos & Franklin Mouse Brain Atlas. Fluorescence quantification was conducted in ImageJ. After manual selection of the ROI using the selection tool, we gathered measurements of area, integrated density, and mean grey value. Images were analyzed for EYFP and TdTomato separately. Within each image we gathered information about the average fluorescence of the terminals as well as the background/baseline levels of a negative region as a means of normalization. From here we used the following formula for Corrected Total Fluorescence (CTF)50,51,52: $${\rm Integrated\ density}-({\rm Area\ of\ selected\ ROI}\times {\rm Mean\ fluorescence\ of\ background/baseline\ region}).$$ This gave us the arbitrary units for average fluorescence intensity of our EYFP vs TdTomato terminals of interest.
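As a minimal sketch of how this corrected total fluorescence calculation can be applied per ROI and per channel, the arithmetic is shown below; the function name and the numeric example values are hypothetical stand-ins for the per-image area, integrated density, and background measurements exported from ImageJ.

```python
# Minimal sketch of the Corrected Total Fluorescence (CTF) calculation described
# above: integrated density of the ROI minus ROI area times the mean fluorescence
# of a background/baseline region. Example values below are hypothetical.

def corrected_total_fluorescence(integrated_density, roi_area, mean_background):
    """CTF (arbitrary units) for one ROI of one channel."""
    return integrated_density - (roi_area * mean_background)

# Example: one ROI measured separately in the EYFP and TdTomato channels.
eyfp_ctf = corrected_total_fluorescence(integrated_density=1.8e6, roi_area=5.2e4, mean_background=12.3)
tdtom_ctf = corrected_total_fluorescence(integrated_density=9.5e5, roi_area=5.2e4, mean_background=10.7)
print(f"EYFP CTF: {eyfp_ctf:.0f} a.u.; TdTomato CTF: {tdtom_ctf:.0f} a.u.")
```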
Image integrity Acquired image files (.czi) were opened in ImageJ. Processing of images in Fig. 1 involved maximizing intensity, removing outlier noise, and adjusting the contrast of images. Generation of single cell suspension from mouse hippocampal tissue Five-week-old male mice labeled with the ChR2-YFP transgene1 after conditioning were euthanized by isoflurane. Mouse brains were rapidly extracted, and the hippocampal regions were isolated by microdissection. Eight mice were pooled for each experimental condition. Single cell suspension was prepared according to the guidelines of the Adult Brain Dissociation Kit (Miltenyi Biotec, Cat No: 13-107-677). Briefly, the hippocampal samples were incubated with digestion enzymes in the C Tube placed on the gentleMACS Octo Dissociator with Heaters with gentleMACS Program: 37C_ABDK_01. After termination of the program, the samples were applied through a MACS SmartStrainer (70 μm). Then a debris removal step and a red blood cell removal step were applied to obtain a single cell suspension. Isolation of YFP-positive single cells by FACS The single cell suspension was subjected to a BD FACSAria cell sorter according to the manufacturer's protocol to isolate the EYFP-positive single cell population. Preparation of RNA-seq library The RNA of FACS-isolated YFP-positive cells was extracted using Trizol (Life Technologies) followed by the Direct-zol kit (Zymo Research) according to the manufacturer's instructions. Then the RNA-seq library was prepared using the SMART-Seq® v4 Ultra® Low Input RNA Kit (TaKaRa). Analysis of RNA-seq data The quality of the resulting Illumina reads was checked with FastQC53. The first 40 bp of the reads were mapped to the mouse genome (mm10) using STAR26, which was indexed with Ensembl GRCm38.91 gene annotation. The read counts were obtained using the featureCounts54 function from the Subread package with the unstranded option55. Reads were normalized by library size, and differential expression analysis based on the negative binomial distribution was done with DESeq256. Genes with an FDR-adjusted p-value less than 0.00001 plus more than a 4-fold difference were considered to be differentially expressed. Raw data along with gene expression levels are deposited in the NCBI Gene Expression Omnibus as GSE198731. For Gene Ontology analysis, we ranked protein-coding genes by their fold changes and used them as the input for the GSEAPreranked tool57. Based on the GSEA outputs, the dot plots were created with a custom R script. The pathways with an FDR-adjusted p-value less than 0.25 were considered to be enriched. GeneMANIA networks were analyzed with GeneMANIA33. The maximum number of resultant genes was 20 and the maximum number of resultant attributes was 10. All networks were assigned an equal weight. The network plots were generated by Cytoscape58. Preparation of RRBS-seq library The genomic DNA of FACS-isolated YFP-positive cells was extracted using the DNeasy Blood & Tissue Kit (Qiagen) according to the manufacturer's instructions. Then genomic DNA was digested with the MspI enzyme (NEB) and size selected using the EpiNext™ DNA Size Selection Kit (EpigenTek) to enrich for DNA fragments between 100–600 bp. The selected DNA fragments were prepared into a sequencing library with the EpiNext™ High-Sensitivity Bisulfite-Seq Kit (EpigenTek) according to the manufacturer's instructions. Analysis of RRBS data Raw reads were trimmed using cutadapt59 and then mapped to the mouse genome (mm10) using Bismark60. The Mcomp module from MOABS61 was used to call the differentially methylated cytosines (DMCs) and regions (DMRs).
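As a worked illustration of the differential-expression thresholds applied in the RNA-seq analysis above (FDR-adjusted p-value below 1e-5 and more than a 4-fold change, i.e. |log2 fold change| > 2), a DESeq2-style results table could be filtered as in the following sketch; the file name and column names are assumptions based on standard DESeq2 output, not files from this study.

```python
import pandas as pd

# Minimal sketch: select differentially expressed genes from a DESeq2-style
# results table using the thresholds described above (FDR-adjusted p < 1e-5
# and more than a 4-fold difference, i.e. |log2 fold change| > 2).
# The file name and column names are assumptions, not taken from this study.
res = pd.read_csv("aversive_vs_neutral_deseq2_results.csv", index_col=0)

degs = res[(res["padj"] < 1e-5) & (res["log2FoldChange"].abs() > 2)]
upregulated = degs[degs["log2FoldChange"] > 0]
downregulated = degs[degs["log2FoldChange"] < 0]
print(f"{len(upregulated)} upregulated and {len(downregulated)} downregulated DEGs")
```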
Only cytosines covered by at least five reads were used for further analysis. Cytosines with differences in DNA methylation levels greater than 0.2 and a P value from Fisher's exact test smaller than 0.05 were considered DMCs. Raw data are deposited in the NCBI Gene Expression Omnibus as GSE208137. Differentially methylated regions included at least two DMCs, and the maximum distance between two DMCs was 300 bp. Homer62 software was used for DMC and DMR annotation. GO analysis was done with the R package clusterProfiler63. Slice electrophysiology recording FosCRE-ERT2 male mice were injected with a 1:1 ratio of AAV-CAG-FLEX-TdTomato + AAV-cfos-tTA + AAV-TRE-EYFP into the ventral hippocampus. Animals were counterbalanced for the experiment, where half received appetitive tagging with 4-OHT and aversive tagging with Dox and the other half received aversive tagging with 4-OHT and appetitive tagging with Dox. Coronal slices of the ventral hippocampus were prepared from previously injected animals 3–5 days post tagging experience. Animals were deeply anesthetized with isoflurane anesthesia, decapitated, and the brains were removed. Brain slices were prepared in oxygen-perfused sucrose-substitute artificial cerebrospinal fluid (ACSF). The solution contained: 185 mM NaCl, 2.5 mM KCl, 1.25 mM NaH2PO4, mM MgCl2, 25 mM NaHCO3, 12.5 mM glucose, and 0.5 mM CaCl2. 400 µm slices were cut via a Leica VT 1200 (Leica Microsystems) and incubated in 30 °C ACSF consisting of 125 mM NaCl, 25 mM NaHCO3, 25 mM D-glucose, 2 mM KCl, 2 mM CaCl2, 1.25 mM NaH2PO4, and 1 mM MgCl2 for 20 minutes and then cooled to room temperature. A two-photon imaging system (Thorlabs Inc.) was used to distinguish appetitive and aversive cells. The imaging system was equipped with a mode-locked Ti:Sapphire laser (Chameleon Ultra II; Coherent), which was set to wavelengths between 920 nm and 950 nm in order to excite the Alexa Fluor 488 and 568, tdTomato, and EYFP fluorophores using a 20x, NA 1.0 objective lens (Olympus). In order to ensure that differences between the two populations were not attributable to the type of virus used for tagging the neurons, two groups of animals were prepared: in one group, appetitive memories were tagged with tdTomato and aversive ones with EYFP, and in the other group vice versa. Patch-clamp electrodes with 0.6–1 μm tips were pulled via a horizontal puller (Sutter Instruments), and pipette resistance was between 4 and 6 MΩ. Pipettes were filled with intracellular fluid containing: 120 mM K-gluconate, 20 mM KCl, 10 mM HEPES, 7 mM diTrisPhCr, 4 mM Na2ATP, 2 mM MgCl2, 0.3 mM Tris-GTP, and 0.2 mM EGTA; buffered to pH 7.3 with KOH. 0.15% weight/volume of Alexa Fluor 488 hydrazide (Thermo Fisher Scientific; for recording from tdTomato cells) or 567 (for recording from EYFP cells) was added to visualize the recording pipette under the imaging system. Prior to breaking into the cells, pipette capacitance neutralization and bridge balance compensation were performed. Data were acquired using a Multiclamp 700B (Molecular Devices) and a Digidata 1550 (Molecular Devices) with a sampling rate of 10 kHz. Electrophysiology data analysis Brain slice electrophysiology data analysis was performed in Python with custom-written scripts using the pyABF package (http://swharden.com/pyabf/). For spike shape analysis, spike onsets were identified using the first derivative of the voltage trace, and the corresponding voltage was taken as the spike onset. Spike half-width indicates the time between the two half-peak voltage crossings. For dynamic analysis of spiking properties, stepwise current injections were performed.
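A minimal sketch of the spike-shape measurements described above (onset detected from the first derivative of the voltage trace, half-width from the half-peak crossings) is given below; the 20 mV/ms onset criterion and the array names are assumptions for illustration, while the 10 kHz sampling rate matches the acquisition rate stated above.

```python
import numpy as np

# Minimal sketch of the spike-shape analysis described above, operating on a
# single-spike voltage trace (mV). The 20 mV/ms onset criterion is an assumption.
FS = 10_000.0            # sampling rate (Hz), as used for acquisition
DT_MS = 1000.0 / FS      # sample interval (ms)

def spike_onset_index(v):
    """Index where dV/dt (mV/ms) first exceeds the onset criterion."""
    dvdt = np.diff(v) / DT_MS
    above = np.nonzero(dvdt > 20.0)[0]
    return int(above[0]) if above.size else None

def spike_half_width_ms(v, onset_idx):
    """Half-width: time spent above the voltage halfway between onset and peak."""
    peak_idx = onset_idx + int(np.argmax(v[onset_idx:]))
    half_v = (v[onset_idx] + v[peak_idx]) / 2.0
    return float(np.count_nonzero(v[onset_idx:] >= half_v)) * DT_MS
```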
Firing threshold is the voltage at which a neuron fires at least a single spike, and firing onset is the corresponding current. FI gain was calculated as the change in firing frequency from the lowest to the highest firing frequency divided by the injected current. The adaptation ratio was obtained by dividing the mean inter-spike interval (ISI) during the last 200 ms of spiking by the mean ISI during the first 200 ms of spiking. Comparisons between appetitive and aversive memory cells were done within a single slice, within an animal, and across animals to control for potential differences between animals or slices. Since the results in all three situations were similar, the pooled data were used for comparisons between groups. The d'Agostino-Pearson K2 test was used to determine the normality of the data. Based on the normality results, either an independent t-test (normal distribution) or a two-sided Mann-Whitney-Wilcoxon test with Bonferroni correction was used.

Sampling strategy
Subjects were randomly assigned to groups. No statistical methods were used to determine sample size; the number of subjects per group was based on those in previously published studies and is reported in figure captions. Behavioral experiments were replicated at least three times with three different experimenters. The first experiments were run at a different institution and the last two replications were run at Boston University. Not only did the behavioral findings stand across experimenters (1 male, 2 females), but they stood across institutions as well. Sequencing data were replicated twice as well to confirm the original findings. All behavioral and cell counters and scorers were blinded to experimental and control groups.

Statistics and reproducibility
Behavioral experiments were conducted with sample sizes ranging from 7 to 10 animals; animals that were mistargeted or had unilateral expression of the virus were removed from the experiment. For histological analysis, sample sizes ranged from 3 to 4 animals per group. RNA sequencing and RRBS were conducted with a pooled sample of 8 animals per experimental group. The mock (untagged control) sample size was 2 mice. For electrophysiology experiments, the sample size ranged from 4 to 8 animals. Data were analyzed using Prism (GraphPad Software, La Jolla, CA). Data were analyzed using paired t-tests (two factors), unpaired t-tests, and one-way or two-way repeated-measures ANOVAs (more than two factors), where appropriate. Post hoc analyses (Tukey's multiple comparisons test) were used to characterize treatment and interaction effects when statistically significant (alpha set at p < 0.05, two-tailed). Statistical analyses are reported in figure captions. Error bars in all bar graphs represent the standard error of the mean (SEM). RNA-sequencing and methylation data are publicly available through the NIH Gene Expression Omnibus (GEO): RNA-seq data under accession GSE198731 and RRBS-seq data under accession GSE208137, respectively. Behavioral data are available upon request to the senior authors. Source data underlying Figs. 1, 2, 3 and 6 can be found in Supplementary Data 2. Additional data that support the findings of this study are available from the corresponding author upon reasonable request.

Gergues, M. M. et al. Circuit and molecular architecture of a ventral hippocampal network. Nat. Neurosci. 23, 1444–1452 (2020). Josselyn, S. A. & Tonegawa, S. Memory engrams: Recalling the past and imagining the future. Science 367, eaaw4325 (2020). Josselyn, S., Köhler, S. & Frankland, P.
Finding the engram. Nat. Rev. Neurosci. 16, 521–534 (2015). Cai, D. J. et al. A shared neural ensemble links distinct contextual memories encoded close in time. Nature 000, 1–4 (2016). Fanselow, M. S. & Dong, H. W. Are the dorsal and ventral hippocampus functionally distinct structures? Neuron 65, 7–19 (2010). Gauthier, J. L. & Tank, D. W. A dedicated population for reward coding in the hippocampus. Neuron 99, 179–193 (2018). Xu, C. et al. Distinct hippocampal pathways mediate dissociable roles of context in memory retrieval. Cell 167, 1–2 (2016). Ciocchi, S., Passecker, J., Malagon-Vina, H., Mikus, N. & Klausberger, T. Selective information routing by ventral hippocampal CA1 projection neurons. Science 348, 560–563 (2015). Padilla-Coreano, N. et al. Direct ventral hippocampal-prefrontal input is required for anxiety-related neural activity and behavior. Neuron 89, 857–866 (2016). Kim, J., Pignatelli, M., Xu, S., Itohara, S. & Tonegawa, S. Antagonistic negative and positive neurons of the basolateral amygdala. Nat. Neurosci. 19, 1636–1646 (2016). Liu, X. et al. Optogenetic stimulation of a hippocampal engram activates fear memory recall. Nature 484, 381–385 (2012). Ramirez, S. et al. Creating a false memory in the hippocampus. Science 341, 387–391 (2013). Redondo, R. L. et al. Bidirectional switch of the valence associated with a hippocampal contextual memory engram. Nature 513, 426–430 (2014). Ryan, T. J., Roy, D. S., Pignatelli, M., Arons, A. & Tonegawa, S. Engram cells retain memory under retrograde amnesia. Science 348, 1007–1013 (2015). Gore, F. et al. Neural representations of unconditioned stimuli in basolateral amygdala mediate innate and learned responses. Cell 162, 134–145 (2015). Rashid, A. J. et al. Competition between engrams influences fear memory formation and recall. Science 353, 383–38 (2017). Yokose, J. et al. Overlapping memory trace indispensable for linking, but not recalling, individual memories. Science 355, 398–403 (2017). DeNardo, L. A. et al. Temporal evolution of cortical ensembles promoting remote memory retrieval. Nat. Neurosci. 22, 460–469 (2019). Chen, B. K. et al. Artificially enhancing and suppressing hippocampus-mediated memories. Curr. Biol. 29, 1885–1894.e4 (2019). Rodriguez, E. et al. Identifying parabrachial neurons selectively regulating satiety for highly palatable food in mice. eNeuro 6, ENEURO.0252-19 (2019). Peters, J., Dieppa-Perea, L. M., Melendez, L. M. & Quirk, G. J. Induction of fear extinction with hippocampal-infralimbic BDNF. Science 328, 1288–1290 (2010). Gan, L. et al. Converging pathways in neurodegeneration, from genetics to mechanisms. Nat. Neurosci. 21, 1300–1309 (2018). Ramirez, S. et al. Activating positive memory engrams suppresses depression-like behaviour. Nature 522, 335–339 (2015). Beyeler, A. et al. Divergent routing of positive and negative information from the amygdala during memory retrieval. Neuron 90, 348–361 (2016). Jimenez, J. C. et al. Anxiety cells in a hippocampal-hypothalamic circuit. Neuron 97, 1–14 (2018). Xiu, J. et al. Visualizing an emotional valence map in the limbic forebrain by TAI-FISH. Nat. Neurosci. 17, 1552–1559 (2014). Ye, L. et al. Wiring and molecular features of prefrontal ensembles representing distinct experiences. Cell 7, 1776–1788 (2016). Bardoni, B., Schenck, A. & Mandel, J. L. A novel RNA-binding nuclear protein that interacts with the fragile X mental retardation (FMR1) protein. Hum. Mol. Genet. 8, 2557–2566 (1999). Duke, C. G., Kennedy, A. J., Gavin, C. F., Day, J. J. & Sweatt, J. D. Experience-dependent epigenomic reorganization in the hippocampus. Learn. Mem.
24, 278–288 (2017). Hagerman, R. J. et al. Fragile X syndrome. Nat. Rev. Dis. Prim. 3, 17065 (2017). Halder, R. et al. DNA methylation changes in plasticity genes accompany the formation and maintenance of memory. Nat. Neurosci. 19, 102–110 (2016). Sidorov, M. S., Auerbach, B. D. & Bear, M. F. Fragile X mental retardation protein and synaptic plasticity. Mol. brain 6, 15 (2013). Warde-Farley, D. et al. The Genemania Prediction Server: Biological Network Integration for Gene Prioritization and Predicting Gene Function. Nucleic Acids Res. 38, 2 (2010) Marco, A. et al. Mapping the Epigenomic and Transcriptomic Interplay during Memory Formation and Recall in the Hippocampal Engram Ensemble. Nat. Neurosci. 12, 1606–1617 (2020). Warden, M. R. et al. A prefrontal cortex-brainstem neuronal projection that controls response to behavioural challenge. Nature 428, 428–432 (2012). Zhou, Y. et al. A ventral CA1 to nucleus accumbens core engram circuit mediates conditioned place preference for cocaine. Nat. Neurosci. 22, 1986–1999 (2019). Namburi, P. et al. A circuit mechanism for differentiating positive and negative associations. Nature 520, 675–678 (2015). Spellman, T. et al. Hippocampal–prefrontal input supports spatial encoding in working memory. Nature 522, 309–314 (2015). Knapska, E., Mikosz, M., Werka, T. & Maren, S. Social modulation of learning in rats. Learn. Mem. (Cold Spring Harb., N. Y.) 17, 35–42 (2009). Nakazawa, Y. et al. Memory Retrieval along the Proximodistal Axis of CA1. Hippocampus 9, 1140–1148. Graham, K. et al. Hippocampal and Thalamic Afferents Form Distinct Synaptic Microcircuits in the Mouse Infralimbic Frontal Cortex. Cell Rep. 3, 109837 (2021). Goode, T. D. et al. An Integrated Index: Engrams, Place Cells, and Hippocampal Memory. Neuron 5, 805–820 (2020). Rao-Ruiz, P. et al. Engram-Specific Transcriptome Profiling of Contextual Memory Consolidation. Nat. Commun. 1, 2232 (2019) Reijmers, L. Genetic Control of Active Neural Circuits. Front. Mol. Neurosci. (2009) Sweis, B. M. et al. Dynamic and Heterogeneous Neural Ensembles Contribute to a Memory Engram. Curr. Opin. Neurobiol. 199–206 (2021) Arszovszki, A. et al. Three Axonal Projection Routes of Individual Pyramidal Cells in the Ventral CA1 Hippocampus. Front. Neuroanatomy (2014) Chen, M. B. et al. Persistent Transcriptional Programmes Are Associated with Remote Memory. Nature 7834, 437–442 (2020). Albayram, O. et al. Cis P-tau is induced in clinical and preclinical brain injury and contributes to post-injury sequelae. Nat. Commun. 8, 1000 (2017). Kondo, A. et al. Antibody against early driver of neurodegeneration cis P-tau blocks brain injury and tauopathy. Nature 523, 431–436 (2015). Donohue, J. D. et al. Parahippocampal Latrophilin-2 (ADGRL2) Expression Controls Topographical Presubiculum to Entorhinal Cortex Circuit Connectivity. Cell Rep. 8, 110031. (2021) Phillips, M. L. et al. Ventral Hippocampal Projections to the Medial Prefrontal Cortex Regulate Social Memory. ELife (2019) Shihan, M. H. et al. A Simple Method for Quantitating Confocal Fluorescent Images. Biochem. Biophys. Rep. 100916. (2021) "Babraham Bioinformatics - FastQC A Quality Control Tool for High Throughput Sequence Data." Babraham Bioinformatics, http://www.bioinformatics.babraham.ac.uk/projects/fastqc/. Accessed 7 Apr. 2022. Dobin, A. et al. STAR: ultrafast universal RNA-seq aligner. Bioinformatics 29, 15–21 (2013). Liao, Y., Smyth, G. K. & Shi, W. featureCounts: an efficient general purpose program for assigning sequence reads to genomic features. 
Bioinformatics 30, 923–930 (2014). Love, M. I. et al. "Moderated Estimation of Fold Change and Dispersion for RNA-Seq Data with DESeq2." Genome Biol. 12, (2014) Subramanian, A. et al. Gene Set Enrichment Analysis: A Knowledge-Based Approach for Interpreting Genome-Wide Expression Profiles. Proc. Natl Acad. Sci. no. 43, 15545–15550 (2005). Shannon, P. et al. Cytoscape: A Software Environment for Integrated Models of Biomolecular Interaction Networks. Genome Res. 13, 2498–2504 (2003). Martin, M. CUTADAPT Removes Adapter Sequences from High-Throughput Sequencing Reads. EMBnet. J. 17, 10 (2011). Krueger, F., & S. R. Andrews. "Bismark: A Flexible Aligner and Methylation Caller for Bisulfite-Seq Applications." Bioinformatics, no. 11, Oxford University Press (OUP) 1571–1572. (2011) Sun, D. et al. MOABS: Model Based Analysis of Bisulfite Sequencing Data. Genome Biol. 15, (2014) Heinz, S. et al. Simple Combinations of Lineage-Determining Transcription Factors Prime Cis-Regulatory Elements Required for Macrophage and B Cell Identities. Mol. Cell 38, 576–589 (2010). Yu, G. et al. Clusterprofiler: An R Package for Comparing Biological Themes among Gene Clusters. OMICS: A J. Integr. Biol. 16, 284–287 (2012). We thank Dr. Joshua Sanes and his lab at the Center for Brain Science, Harvard University, for providing laboratory space within which the initial experiments were conducted, especially the early members of the lab, Emily Merfeld, Emily Doucette, Stephanie Grella, and Joseph Zaki. Further we thank the Center for Brain Science Neuroengineering core for providing technical support, and the Society of Fellows at Harvard University for their support. This work was supported by an NIH Early Independence Award (DP5 OD023106-01), an NIH Transformative R01 Award, a Young Investigator Grant from the Brain and Behavior Research Foundation, a Ludwig Family Foundation grant, the McKnight Foundation Memory and Cognitive Disorders award, the Center for Systems Neuroscience and Neurophotonics Center at Boston University, and a Columbia University Startup Package (UR011118). Graduate Program for Neuroscience, Boston University, Boston, 02215, MA, USA Monika Shpokayte Department of Psychological and Brain Sciences, The Center for Systems Neuroscience, Boston University, Boston, 02215, MA, USA Monika Shpokayte, Evan Ruesch & Steve Ramirez Neuroscience Graduate Program, Brown University, Providence, 02912, RI, USA Olivia McKissick Department of Physiology and Cellular Biophysics, Columbia University Medical Center, New York, 10032, NY, USA Xiaonan Guan & X. Shawn Liu Whitehead Institute for Biomedical Research, MIT, Cambridge, 02142, MA, USA Bingbing Yuan Department of Biomedical Engineering, Boston University, Boston, 02215, MA, USA Bahar Rahsepar, Fernando R. Fernandez, John A. White & Steve Ramirez Neurophotonics Center, and Photonics Center, Boston University, Boston, 02215, MA, USA Bahar Rahsepar, Fernando R. Fernandez & John A. White Loyola University, Chicago Department of Psychology, Chicago, IL, 60660, USA Stephanie L. Grella Xiaonan Guan Bahar Rahsepar Fernando R. Fernandez Evan Ruesch John A. White X. Shawn Liu Steve Ramirez M.S. and O.M. conducted and analyzed histology. M.S., O.M., and E.R. ran and analyzed optogenetic experiments. B.Y., X.G., and X.S.L. performed RNA-seq and Reduced Representation Bisulfite sequencing and corresponding analyses. B.R., F.R.F., J.A.W. conducted in vitro physiological experiments and analyses. M.S., O.M., X.S.L. and S.R. designed the project. M.S. X.S.L. and S.R. 
wrote the manuscript. S.L.G conducted statistical analyses. All authors edited and commented on the manuscript. Correspondence to X. Shawn Liu or Steve Ramirez. Primary Handling Editor: George Inglis. This manuscript has been previously reviewed at another Nature Portfolio journal. The manuscript was considered suitable for publication without further review at Communications Biology. Description of Additional Supplementary Files Supplementary Data 1 (new) Supplementary Data 2 Shpokayte, M., McKissick, O., Guan, X. et al. Hippocampal cells segregate positive and negative engrams. Commun Biol 5, 1009 (2022). https://doi.org/10.1038/s42003-022-03906-8 Communications Biology (Commun Biol) ISSN 2399-3642 (online)
Exciton diffusion in two-dimensional metal-halide perovskites
Michael Seitz, Alvaro J. Magdaleno, Nerea Alcázar-Cano, Marc Meléndez, Tim J. Lubbers, Sanne W. Walraven, Sahar Pakdel, Elsa Prada, Rafael Delgado-Buscalioni & Ferry Prins
Nature Communications volume 11, Article number: 2035 (2020)

Two-dimensional layered perovskites are attracting increasing attention as more robust analogues to the conventional three-dimensional metal-halide perovskites for both light harvesting and light emitting applications. However, the impact of the reduced dimensionality on the optoelectronic properties remains unclear, particularly regarding the spatial dynamics of the excitonic excited state within the two-dimensional plane. Here, we present direct measurements of exciton transport in single-crystalline layered perovskites. Using transient photoluminescence microscopy, we show that excitons undergo an initial fast diffusion through the crystalline plane, followed by a slower subdiffusive regime as excitons get trapped. Interestingly, the early intrinsic diffusivity depends sensitively on the choice of organic spacer. A clear correlation between lattice stiffness and diffusivity is found, suggesting exciton–phonon interactions to be dominant in the spatial dynamics of the excitons in perovskites, consistent with the formation of exciton–polarons. Our findings provide a clear design strategy to optimize exciton transport in these systems.

Metal-halide perovskites are a versatile material platform for light harvesting1,2,3,4 and light emitting applications5,6, combining the advantages of solution processability with high ambipolar charge carrier mobilities7,8, high defect tolerance9,10,11, and tunable optical properties12,13. Currently, the main challenge in the applicability of perovskites is their poor environmental stability14,15,16,17. Reducing the dimensionality of perovskites has proven to be one of the most promising strategies to yield a more stable performance17,18,19. Perovskite solar cells with mixed two-dimensional (2D) and three-dimensional (3D) phases, for example, have been fabricated with efficiencies above 22%20 and stable performance for more than 10,000 h21, while phase pure 2D perovskite solar cells have been reported with efficiencies above 18%22,23. Likewise, significant stability improvements have been reported for phase pure 2D perovskites as the active layer in light emitting technologies24,25,26,27,28,29. The improved environmental stability in 2D perovskite phases is attributed to a better moisture resistance due to the hydrophobic organic spacers that passivate the inorganic perovskite sheets, as well as an increased formation energy of the material17,18,19,30. However, the reduced dimensionality of 2D perovskites dramatically affects the charge carrier dynamics in the material, requiring careful consideration in their application in optoelectronic devices31,32,33. 2D perovskites are composed of inorganic metal-halide layers, which are separated by long organic spacer molecules.
They are described by their general chemical formula L2[ABX3]n-1BX4, where A is a small cation (e.g. methylammonium, formamidinium), B is a divalent metal cation (e.g. lead, tin), X is a halide anion (chloride, bromide, iodide), L is a long organic spacer molecule, and n is the number of octahedra that make up the thickness of the inorganic layer. The separation into few-atom thick inorganic layers yields strong quantum and dielectric confinement effects34. As a result, the exciton binding energies in 2D perovskites can be as high as several hundreds of meVs, which is around an order of magnitude larger than those found in bulk perovskites35,36,37. The excitonic character of the excited state is accompanied by an effective widening of the bandgap, an increase in the oscillator strength, and a narrowing of the emission spectrum36,37,38. The strongest confinement effects are observed for n = 1, where the excited state is confined to a single B-X-octahedral layer. Consequently, light harvesting using 2D perovskites relies on the efficient transport of excitons and their subsequent separation into free charges39. This stands in contrast to bulk perovskites in which free charges are generated instantaneously thanks to the small exciton binding energy35. Particularly, with excitons being neutral quasi-particles, the charge extraction becomes significantly more challenging as they cannot be guided to the electrodes through an external electric field40. Excitons need to diffuse to an interface before the electron and hole can be efficiently separated into free charges41. On the other hand, for light emitting applications the spatial displacement is preferably inhibited, as a larger diffusion path increases the risk of encountering quenching sites which would reduce brightness. While charge transport in bulk perovskites has been studied in great detail, the mechanisms that dictate exciton transport in 2D perovskites remain elusive41. Moreover, it is unclear to what extent exciton transport is influenced by variations in the perovskite composition. Here, we report the direct visualization of exciton diffusion in 2D single-crystalline perovskites using transient photoluminescence-microscopy42. This technique allows us to follow the temporal evolution of a near-diffraction-limited exciton population with sub-nanosecond resolution and reveals the spatial and temporal exciton dynamics. We observe two different diffusion regimes. For early times, excitons follow normal diffusion, while for later times a subdiffusive regime emerges, which is attributed to the presence of trap states. Using the versatility of perovskite materials, we study the influence of the organic spacer on the diffusion dynamics of excitons in 2D perovskites. We find that between commonly used organic spacers (phenethylammonium, PEA, and butylammonium, BA), diffusivities and diffusion lengths can differ by one order of magnitude. We show that these changes are closely correlated with variations in the softness of the lattice, suggesting a dominant role for exciton–phonon coupling and exciton–polaron formation in the spatial dynamics of excitons in these materials. These insights provide a clear design strategy to further improve the performance of 2D perovskite solar cells and light emitting devices. 
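As a small illustration of the layer index n in the general formula L2[ABX3]n-1BX4 introduced above, the sketch below maps n onto the molar precursor ratio LI : PbI2 : AI used for crystal growth in the Methods (2:1:0 for n = 1 and 2:2:1 for n = 2). The general rule 2 : n : (n − 1) is an assumption consistent with those two cases and with the overall stoichiometry L2A(n-1)PbnI(3n+1); it is not stated explicitly in the text.

```python
def precursor_ratio(n):
    """Molar ratio LI : PbI2 : AI for an n-layer L2[ABX3]n-1BX4 perovskite.

    Assumes the ratio 2 : n : (n - 1), which reproduces the 2:1:0 (n = 1)
    and 2:2:1 (n = 2) mixtures stated in the Methods; iodine balances to 3n + 1.
    """
    if n < 1:
        raise ValueError("n must be a positive integer")
    return 2, n, n - 1

print(precursor_ratio(1))  # (2, 1, 0) -> n = 1, e.g. (PEA)2PbI4
print(precursor_ratio(2))  # (2, 2, 1) -> n = 2 analogue
```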
Exciton diffusion imaging We prepare single crystals of n = 1 phenethylammonium lead iodine (PEA)2PbI4 2D perovskite by drop-casting a saturated precursor solution onto a glass substrate43,44, as confirmed by XRD analysis and photoluminescence spectroscopy (see Methods section for details). Using mechanical exfoliation, we isolate single-crystalline flakes of the perovskite and transfer these to microscopy slides. The single-crystalline flakes have typical lateral sizes of tens to hundreds of micrometers and are optically thick. The use of thick flakes provides a form of self-passivation that prevents the typical fast degradation of the perovskite in ambient conditions. To measure the temporal and spatial exciton dynamics, we create a near-diffraction-limited exciton population using a pulsed laser diode (λex = 405 nm) and an oil immersion objective (N.A. = 1.3). The image of the fluorescence emission of the exciton population is projected outside the microscope with high magnification (×330), as illustrated in Fig. 1b. By placing a scanning avalanche photodiode (20 µm in size) in the image plane, we resolve the time-dependent broadening of the population with high temporal and spatial resolution. Fig. 1c shows the resulting map of the evolution in space and time of the fluorescence emission intensity of an exciton population in (PEA)2PbI4. The fluorescence emission intensity I(x,t) is normalized at each point in time to highlight the broadening of the emission spot over time. Each time-slice I(x,tc) is well described by a Voigt function45, from which we can extract the variance σ(t)2 of the exciton distribution at each point in time (Fig. 1d). On a timescale of several nanoseconds, the exciton distribution broadens from an initial σ(t = 0 ns) = 171 nm to σ(t = 10 ns) = 448 nm, indicating fast exciton diffusion. Fig. 1: Diffusion imaging of excitons in two-dimensional perovskites. a Illustration of the (PEA)2PbI4 crystal structure, showing the perovskite octahedra sandwiched between the organic spacer molecules. b Schematic of the experimental setup. A near-diffraction limited exciton population is generated with a pulsed laser diode. The spatial and temporal evolution of the exciton population is recorded by scanning an avalanche photodiode through the magnified image of the fluorescence I(x,t). c Fluorescence emission intensity I(x,t) normalized at each point in time to highlight the spreading of the excitons. d Cross section of I(x,t) for different times tc. e Mean-square-displacement of the exciton population over time. Two distinct regimes are present: First, normal diffusion with α = 1 is observed, which is followed by a subdiffusive regime with α < 1. The inset shows a log–log plot of the same data, highlighting the two distinct regimes. Reported errors represent the uncertainty in the fitting procedure for σ(t)2. To analyze the time-dependent broadening of the emission spot in more detail, we study the temporal evolution of the mean-square-displacement (MSD) of the exciton population, given by MSD(t) = σ(t)2 − σ(0)2. Taking the one-dimensional diffusion equation as a simple approximation, it follows that MSD(t) = 2Dtα, which allows us to extract the diffusivity D and the diffusion exponent α from our measurement (see Supplementary Note 1)42,45. In Fig. 1e we plot the MSD as a function of time. 
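A minimal sketch of how D and α can be extracted from such a measurement is shown below. It assumes hypothetical arrays x (detector positions in µm), t (time bins in ns) and I (background-corrected intensity, one row per time bin), and uses Gaussian profiles for simplicity where the study fits Voigt profiles; in practice the power-law fit would be restricted to one diffusion regime at a time.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, a, x0, sigma, bg):
    return a * np.exp(-((x - x0) ** 2) / (2 * sigma ** 2)) + bg

# sigma(t)^2 from a profile fit at every time bin (I has shape [len(t), len(x)])
sigma2 = []
for profile in I:
    p0 = [profile.max(), x[np.argmax(profile)], 0.2, profile.min()]
    popt, _ = curve_fit(gaussian, x, profile, p0=p0)
    sigma2.append(popt[2] ** 2)
sigma2 = np.asarray(sigma2)

msd = sigma2 - sigma2[0]              # MSD(t) = sigma(t)^2 - sigma(0)^2
mask = (t > 0) & (msd > 0)            # e.g. restrict to t < 1 ns for the intrinsic regime
# MSD = 2 D t^alpha  ->  log(MSD) = log(2D) + alpha * log(t)
alpha, log_2D = np.polyfit(np.log(t[mask]), np.log(msd[mask]), 1)
D = np.exp(log_2D) / 2                # in um^2 / ns
print(f"alpha = {alpha:.2f}, D = {10 * D:.3f} cm^2 s^-1")   # 1 um^2/ns = 10 cm^2/s
```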
Two distinct regimes can be observed: For early times (t ≲ 1 ns) a fast linear broadening occurs with α = 1.01 ± 0.01, indicative of normal diffusion, while for later times (t ≳ 1 ns) the broadening becomes progressively slower with α = 0.65 ± 0.01, suggesting a regime of trap state limited exciton transport (see Supplementary Note 2). The two regimes are clearly visible in the log–log representation shown in the inset of Fig. 1e, where different slopes correspond to different α values. From these measurements, a diffusivity of 0.192 ± 0.013 cm2 s−1 is found for (PEA)2PbI4. Our diffusivity of single crystalline (PEA)2PbI4 is around an order of magnitude higher than previously reported mobility values from conductivity measurements (μ = 1 cm2 V−1 s−1; D = μkBT = 0.025 cm2 s−1) of polycrystalline films46. This finding is reasonable as grain boundaries slow down the movement of excitons and conventional methods measure a time-averaged mobility that cannot separate intrinsic diffusion from trap state limited diffusion. Influence of trap states The role of trap states in perovskite materials is well studied and is generally attributed to the presence of imperfections at the surface of the inorganic layer47. These lower-energy sites lead to a subdiffusive behavior as a subpopulation of excitons becomes trapped. To test the influence of trap states, we have performed diffusion measurements in the presence of a continuous wave (CW) background excitation of varying intensity (Fig. 2). The background excitation leads to a steady state population of excitons, which fill some of the traps and thereby reduce the effective trap density. To minimize the invasiveness of the measurement itself, the repetition rate and fluence were reduced to a minimum (see Supplementary Note 2). In the absence of any background illumination, we find a strongly subdiffusive diffusion exponent of α = 0.48 ± 0.02. As the background intensity is increased, an increasing α is observed, indicative of trap state filling. Ultimately, a complete elimination of subdiffusion (α = 0.99 ± 0.02) is obtained at a background illumination power of 60 mW cm−2. For comparison, this value corresponds roughly to a 2.5 Sun illumination. Additionally, we observe that the onset of the subdiffusive regime is delayed as more and more trap states are filled, as represented by the increasing tsplit parameter (see Fig. 2b, bottom panel). Fig. 2: Exciton diffusion with different background excitation intensities. a Mean-square-displacement of the exciton population for different continuous wave (CW) background intensities. Experimental values are displayed with open markers, while the fit functions (Supplementary Eq. 9), defined through the parameters D, α, and tsplit, are displayed as solid lines. Reported errors represent the uncertainty in the fitting procedure for σ(t)2. b Diffusivity D, diffusion exponent α, and the onset of subdiffusive regime tsplit extracted from fits in a. c Theoretical model (Eq. 2, solid lines), and numerical simulation (open markers) for exciton diffusion with different trap densities. Experimental values from a are displayed as shaded areas for comparison. Reported errors represent the standard deviation of 104 Brownian motion simulations. The inset shows the trap densities found with the simulations. Mirror axis of the inset is the sun equivalent of the background illumination intensity (AM1.5 Global with Ephoton > Ebandgap). 
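The kind of trap-limited random-walk simulation shown in Fig. 2c can be sketched as follows: non-interacting 2D Brownian walkers start from a point source and stop permanently once they come within a capture radius of a randomly placed trap. This is a minimal sketch only; the parameter values are illustrative, and the actual simulation and integrator details are those given in the Methods and Supplementary Notes.

```python
import numpy as np

rng = np.random.default_rng(0)

D = 0.02            # um^2/ns, illustrative free diffusivity (~0.2 cm^2/s)
dt = 0.01           # ns, time step
n_steps = 1000      # 10 ns total
n_exc = 500         # number of independent walkers
box = 5.0           # um, region filled with traps
trap_density = 22.0 # traps per um^2 (per layer), as estimated in the text
r_trap = 1.2e-3     # um, capture radius on the scale of the exciton Bohr radius

traps = rng.uniform(0, box, size=(int(trap_density * box**2), 2))
pos = np.full((n_exc, 2), box / 2)          # point-like initial population at the center
start = pos.copy()
free = np.ones(n_exc, dtype=bool)
msd = np.zeros(n_steps)

for k in range(n_steps):
    pos[free] += rng.normal(0.0, np.sqrt(2 * D * dt), size=(free.sum(), 2))
    # brute-force trapping check for the walkers that are still mobile
    d_min = np.linalg.norm(pos[free, None, :] - traps[None, :, :], axis=-1).min(axis=1)
    idx = np.flatnonzero(free)
    free[idx[d_min < r_trap]] = False       # trapped walkers stop moving but still count
    disp = pos - start
    msd[k] = 0.5 * np.mean(disp[:, 0]**2 + disp[:, 1]**2)  # 1D-equivalent MSD, as in the text
```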
To gain theoretical insights and quantitative predictions concerning the observed subdiffusive behavior of excitons and its relation with trap state densities, we performed numerical simulations based on Brownian dynamics of individual excitons diffusing in a homogeneously distributed and random trap field (see Supplementary Note 3). In addition, we developed a coarse-grained theoretical model based on continuum diffusion of the exciton concentration (see Supplementary Note 4). The continuum theory predicts an exponential decay of the diffusion coefficient,

$$\frac{1}{2}\frac{\mathrm{d}\,\mathrm{MSD}(t)}{\mathrm{d}t} = D(t) = D\,\exp\left(-\frac{D}{\lambda^2}t\right)$$

where λ is the average distance between traps. The integral of this expression leads to

$$\mathrm{MSD}(t) = 2\lambda^2\left[1 - \exp\left(-\frac{D}{\lambda^2}t\right)\right],$$

which, as shown in Fig. 2c, successfully reproduces both experimental and numerical results and allows us to determine the value of the intrinsic trap state density, yielding 1/λ2 = 22 µm−2 per layer (≈1016 cm−3), which is of the same order of magnitude as previously reported values for bulk perovskites48,49. The inset in Fig. 2c shows the evolution of the effective trap state density 1/λ2 with increasing illumination intensity. We note that the exponential decay of Eq. 1 allows for a more intuitive characterization of D(t) by relating the subdiffusion directly to the trap density 1/λ2 rather than relying on the subdiffusive exponent α of a power law commonly used in literature42.

Structure-property relations of exciton transport
Importantly, the early diffusion dynamics is unaffected by the trap density and shows normal diffusion (α = 1) for all illumination intensities. This strongly suggests that the early diffusion dynamics is unaffected by energetic disorder, which would result in a sublinear behavior with α < 1, and any trap states, giving us direct access to the intrinsic exciton diffusivity of the material and allowing us to compare the intrinsic exciton diffusivity between perovskites of different compositions. To explore compositional variations, we substitute phenethylammonium (PEA) with butylammonium (BA), another commonly used spacer molecule for 2D perovskites18,24,25,28,39,50,51. Fig. 3a displays the MSD of the (BA)2PbI4 perovskite, again showing the distinct transition from normal diffusion to a subdiffusive regime. However, as compared to (PEA)2PbI4, excitons in (BA)2PbI4 are remarkably less mobile, displaying a diffusivity of only 0.013 ± 0.002 cm2 s−1, which is over an order of magnitude smaller than that of (PEA)2PbI4 with 0.192 ± 0.013 cm2 s−1 (green curve shown in Fig. 3b for comparison). Taking the exciton lifetime into account, the difference in diffusivity results in a reduction in the diffusion length from 236 ± 4 nm for (PEA)2PbI4 to a mere 39 ± 8 nm for (BA)2PbI4 (see Fig. 3c and Supplementary Note 5). These results indicate that the choice of ligand plays a crucial role in controlling the spatial dynamics of excitons in 2D perovskites. We would like to note that the reported diffusion lengths follow the literature convention of diffusion lengths in one dimension, as it is the relevant length scale for device design. The actual 2D diffusion length is greater by a factor of √2.

Fig. 3: Exciton diffusion in (PEA)2PbI4 and (BA)2PbI4. a (PEA)2PbI4 and (BA)2PbI4 crystal structure along the a crystal axis53,54.
b Mean-square-displacement of exciton population over time for (PEA)2PbI4 (dotted line) and (BA)2PbI4 (circles). Inset shows the normalized fluorescence emission intensity I(x,t) for (BA)2PbI4. Reported errors represent the uncertainty in the fitting procedure for σ(t)2. c Fractions of surviving excitons (extracted from lifetime data in Supplementary Fig. 7) as a function of net spatial displacement \(\sqrt {{\mathrm{MSD}}(t)}\) of excitons for (PEA)2PbI4 (triangles) and (BA)2PbI4 (circles). Reported errors represent the uncertainty in the fitting procedure for σ(t)2. d Average atomic displacement Ueq of the chemical elements in (PEA)2PbI4 and (BA)2PbI4. Data was extracted from previously published single crystal X-ray diffraction data53,54. e Diffusivity D as a function of average atomic displacement Ueq for different organic spacers: 4-fluoro-phenethylammonium (4FPEA)65 phenethylammonium (PEA)53, hexylammonium (HA)54, octylammonium (OA)66, decylammonium (DA)66, Butylammonium (BA)54. Reported errors represent the standard deviation of the average diffusivity D obtained from multiple single crystalline flakes. To understand the large difference in diffusivity between (PEA)2PbI4 and (BA)2PbI4, we take a closer look at the structural differences between these two materials. Changing the organic spacer can have a significant influence on the structural and optoelectronic properties of 2D perovskites. Specifically, increasing the cross-sectional area of the organic spacer distorts the inorganic lattice and reduces the orbital overlap between neighboring octahedra, which in turn increases the effective mass of the exciton52. Comparing the octahedral tilt angles of (PEA)2PbI4 and (BA)2PbI4, a larger distortion for the bulkier (PEA)2PbI4 (152.8°) as compared to (BA)2PbI4 (155.1°) is found53,54. The larger exciton effective mass in (PEA)2PbI4 would, however, suggest slower diffusion, meaning a simple effective mass picture for free excitons cannot explain the observed trend in the diffusivity between (PEA)2PbI4 and (BA)2PbI4. Recently, exciton–phonon interactions have been found to strongly influence exciton dynamics in perovskites31,33. To investigate the possible role of exciton–phonon coupling on exciton diffusion, we first quantify the softness of the lattices of both (PEA)2PbI4 and (BA)2PbI4 by extracting the atomic displacement parameters from their respective single crystal X-ray data55. The atomic displacement of the different atoms of both systems are summarized in Fig. 3d, showing distinctly larger displacements for (BA)2PbI4 as compared to (PEA)2PbI4 in both the organic and inorganic sublattice53,54. The increased lattice rigidity for (PEA)2PbI4 can be attributed to the formation of an extensive network of pi-hydrogen bonds and a more space-filling nature of the aromatic ring, both of which are absent in the aliphatic BA spacer molecule. Qualitatively, a stiffening of the lattice reduces the exciton–phonon coupling and would explain the observed higher diffusivity in (PEA)2PbI4 as compared to (BA)2PbI4. In addition to a softer lattice, we find that (BA)2PbI4 exhibits a larger exciton–phonon coupling strength than (PEA)2PbI4, as confirmed by analyzing the temperature-dependent broadening of the photoluminescence linewidth of the two materials (see Supplementary Note 6)56. To further test the correlation between lattice softness and diffusivity, we have performed measurements on a wider range of 2D perovskites with different organic spacers. In Fig. 
3e, we present the diffusivity as a function of average atomic displacement for each of the different perovskite unit cells. Across the entire range of organic spacers, a clear correlation between the diffusivity and the lattice softness is found, further confirming the dominant role of exciton–phonon coupling in the spatial dynamics of the excited state in 2D perovskites. In the limit of strong exciton–phonon coupling, the presence of an exciton could potentially cause distortions of the soft inorganic lattice of the perovskite and lead to the formation of exciton–polarons57,58. As compared to a free exciton, an exciton–polaron would exhibit a larger effective mass and, consequently, a lower diffusivity. The softer the lattice, the larger the distortion, and the heavier the polaron effective mass would be59. Polaron formation can significantly modify the mechanism of transport, in some cases causing a transition from band-like to a hopping type transport59. When short-range deformations of the lattice are dominant, the exciton–polaron is localized within a unit cell of the material and is known as a small polaron. The motion of small polarons occurs through site-to-site hopping and increases with temperature (∂D/∂T > 0). However, in the presence of dominant long-range lattice deformations, large exciton–polarons may form which extend across multiple lattice sites. The diffusion of large polarons decreases with increasing temperature (∂D/∂T < 0), resembling that of band-like free exciton motion, although with a strongly increased effective mass. In Fig. 4, we present temperature-dependent measurements of the diffusivity for both (PEA)2PbI4 and (BA)2PbI4. In both materials a clear negative scaling of the diffusivity with temperature is observed (∂D/∂T < 0), characteristic of band-like transport. Fig. 4: Temperature-dependent diffusivity in (PEA)2PbI4 (triangles) and (BA)2PbI4 (circles). Error bars represent the uncertainty of the fit and are smaller than the markers for (PEA)2PbI4. The observed correlation between diffusivity and lattice softness in combination with band-like transport is in good qualitative agreement with the formation of large exciton–polarons. However, further studies will be needed to provide a more quantitative model that can explain the large differences in diffusivity between the various organic spacers. The correct theoretical description of exciton–phonon coupling and exciton–polarons in 2D perovskites is still the subject of ongoing debate, though the current consensus is that the polar anharmonic lattice of these materials requires a description beyond conventional Frohlich theory57,58,60. Crucial in this respect will be further spectroscopic investigations of temperature-dependent optical properties of these materials, which should allow to better distinguish the influence of exciton–polaron formation from more traditional phonon-scattering mechanisms in these materials. Meanwhile, structural rigidity can be used as a design parameter in these systems for optimized exciton transport characteristics. Taking into account the close correlation between diffusivity and the atomic displacement, this parameter space can be readily explored using available X-ray crystal structure data for many 2D perovskite analogues. While the influence of the organic spacer is expected to be particularly strong in the class of n = 1 2D perovskites, we have observed consistent trends in the n = 2 analogues. 
Indeed, just like in n = 1, in n = 2 the use of the PEA cation yields higher diffusivities than for BA (see Supplementary Fig. 13). Similarly, the interstitial formamidinium (FA) cation in n = 2 yields higher diffusivity than the methylammonium (MA) cation, consistent with the trend in the atomic displacement parameters. It is important to note, though, that already for n = 2 perovskites a significant free carrier fraction may be present in the perovskites61, suggesting that transport in n > 1 perovskites cannot be assumed to be purely excitonic and needs to be evaluated more rigorously. From a technological perspective, structural rigidity may play a particularly important role in light emitting devices. Long exciton diffusion lengths in light emitting applications can act detrimentally on device performance, as it increases the possibility of encountering a trapping site. From an exciton–polaron perspective, this suggests soft lattices are preferred. At the same time though, Gong et al. highlighted the role of structural rigidity in improving the luminescence quantum yield through a reduced coupling to non-radiative decay pathways55,62. A trade-off therefore exists in choosing the optimal rigidity for bright emission. Meanwhile, for light harvesting applications, long diffusion lengths are essential for the successful extraction of excitons. While strongly excitonic 2D perovskites are generally to be avoided due to the penalty imposed by the exciton binding energy, improving the understanding of the spatial dynamics of the excitonic state may help mitigate this negative impact of the thinnest members of the 2D perovskites in solar harvesting. In summary, we have studied the spatial and temporal exciton dynamics in 2D metal-halide perovskites of the form L2PbI4. We show that excitons undergo an initial fast diffusion through the crystalline plane, followed by a slower subdiffusive regime as excitons get trapped. Traps can be efficiently filled through a continuous wave background illumination, extending the initial regime where excitons undergo normal diffusion. By varying the organic spacer L we find that the intrinsic diffusivity depends sensitively on the stiffness of the lattice, revealing a clear correlation between the lattice rigidity and the diffusivity. Our results indicate that exciton–phonon interactions dominate the spatial dynamics of excitons in 2D perovskites. Moreover, the observations are consistent with the formation of large exciton–polarons. During the review process we became aware of a related manuscript by Deng et al.63 using transient-absorption microscopy to study excited-state transport in 2D perovskites, with a focus on the differences in the spatial dynamics as a function of layer thickness (n = 1 to 5). Growth of single-crystalline flakes Chemicals were purchased from commercial suppliers and used as received (see Supplementary Methods). Layered perovskites, with the exception of (HA)2PbI4 and (DA)2PbI4, were synthesized under ambient laboratory conditions following the over-saturation techniques43,44. In a nutshell, the precursor salts LI, PbI2, and AI were mixed in a stoichiometric ratio (2:1:0 for n = 1 and 2:2:1 for n = 2) and dissolved in γ-butyrolactone. The solution was heated to 70 °C and more γ-butyrolactone was added (while stirring) until all the precursors were completely dissolved. The resulting solutions were heated to 70 °C and the solvent was left to evaporate. 
After 2–3 days, millimeter sized crystals formed in the solution, which was subsequently cooled down to room temperature. For this study, we drop cast some of the remaining supersaturated solution on a glass slide, heated it up to 50 °C with a hotplate and after the solvent was evaporated, crystals with crystal sizes of up to several hundred microns were formed. The saturated solution can be stored and re-used to produce freshly grown 2D perovskites within several minutes. We would like to note that drop cast n = 2 solutions form several crystals with different n values. However, n = 2 crystals can be easily isolated during the exfoliation (see next section) and the formation of n = 2 can be favored by preheating the substrate to 50 °C before drop casting. (HA)2PbI4 and (DA)2PbI4 were synthesized by dissolving PbI2 (100 mg) in HI (800 µl) through heavy stirring and heating the solution to 90 °C. After PbI2 was completely dissolved a stoichiometric amount of the amine was added dropwise to the solution. The perovskite crystals of the thin film were mechanically exfoliated using the Scotch tape method (Nitto SPV 224). The exfoliation guarantees a freshly cleaved and atomically flat surface area for inspection, which is crucial to avoid emission from edge states and guarantee direct contact with the glass substrate. After several exfoliation steps, the crystals were transferred on a glass slide and were subsequently studied through the glass slide with a ×100 oil immersion objective (Nikon CFI Plan Fluor, NA = 1.3). A big advantage of this technique is that the perovskites are encapsulated through the glass slide from one side and by the bulk of the crystal from the other side. It is important to use thick crystals to guarantee good self-encapsulation and prevent premature degradation of the perovskite flakes to affect the measurement43. X-ray diffraction (XRD) was performed with a PANanaltical X'Pert PRO operating at 45 kV and 40 mA using a copper radiation source (λ = 1.5406 Å). The polycrystalline perovskite films were prepared by drop casting the saturated perovskite solutions on a silicon zero diffraction plate. Temperature-dependent photoluminescence measurements Perovskite flakes were excited with a 385 nm light emitting diode (Thorlabs) and the emission spectrum was measured using a spectrograph and an EMCCD camera coupled to a spectrograph (Princeton Instruments, SpectraPro HRS-300, ProEM HS 1024BX3). Temperature of the flakes was varied with a Peltier element (Adaptive Thermal Management, ET-127-10-13-H1), using a PID temperature controller (Dwyer Instruments, Series 16C-3) connected to a type K thermocouple (Labfacility, Z2-K-1M) for feedback control and a fan for cooling. Lifetime measurements Perovskite flakes were excited with a 405 nm laser (PicoQuant LDH-D-C-405, PDL 800-D), which was focused down to a near-diffraction limited spot. The photoluminescence was collected with an APD (Micro Photon Devices PDM, 20 × 20 µm detector size). The laser and APD were synchronized using a timing board for time correlated single photon counting (Pico-Harp 300). Diffusion measurements Exciton diffusion measurements were measured following the same procedure as Akselrod et al.42,45. In short, a near diffraction limited exciton population was created using a 405 nm laser (PicoQuant LDH-D-C-405, PDL 800-D) and a ×100 oil immersion objective (Nikon CFI Plan Fluor, NA = 1.3). 
Fluorescence of the exciton population was then imaged with a total ×330 magnification onto an avalanche photodiode (APD, Micro Photon Devices PDM) with a detector size of 20 µm. The laser and APD were synchronized using a timing board for time correlated single photon counting (Pico-Harp 300). The APD was capturing an effective area of around 60 × 60 nm (= 20 µm/330). The APD was scanned through the middle of the exciton population in 60 or 120 nm steps, recording a time trace in every point. To minimize the degradation of the perovskites through laser irradiation, the perovskite flakes were scanned using an x-y-piezo stage (MCL Nano-BIOS 100), covering an area of 5 × 5 µm. Diffusion measurements were performed with a 40 MHz laser repetition rate and a laser fluence of 50 nJ cm−2 unless stated otherwise. The time binning of the measurement was set to 4 ps before software binning was applied. For the temperature-dependent measurements, the temperature was varied with a silicon heater mat (RS PRO, 245-499), using a PID temperature controller (Dwyer Instruments, Series 16C-3) connected to a type K thermocouple (Labfacility, Z2-K-1M) for feedback control. Here, a silicon heater mat was chosen over the Peltier element as a Peltier element expands during the heating process and causes mechanical vibrations that lead to drift.

Brownian motion simulations
We have performed Brownian dynamics simulations of a single exciton diffusing in a field of traps, representing ideal (non-interacting) excitons in the dilute limit probed in the experiments. In these simulations, an exciton diffuses freely until it finds a trap, where it just stops. Free diffusion is modelled using the standard stochastic differential equation for Brownian motion in the Itô interpretation. If r(t) is the position of the exciton in the plane at time t, its displacement Δr over a time Δt is given by

$$\Delta\mathbf{r} = \sqrt{2D}\,\mathrm{d}\mathbf{W},$$

where D is the free-diffusion coefficient and dW is taken from a Wiener process, such that 〈dWdW〉 = Δt. Traps were scattered throughout the plane following a uniform random distribution. The exciton is considered to be trapped as soon as its location gets closer than Rtrap = 1.2 nm to the trap center. The value was taken from estimations of the exciton Bohr radius and corresponds to a trap area of 1.44 nm2 (ref. 37). In any case, in the dilute regime, the diffusion is not sensitive to the trap size Rtrap, because the trap radius is much smaller than the average separation between traps, Rtrap ≪ λ. To numerically integrate the equation of motion, we used a simple second-order-accurate modification of the well-known Euler–Maruyama algorithm: the BAOAB-Limit method64. Trajectories were computed for many independent excitons and the data were averaged to determine the MSD as a function of time. While the simulation of the MSD was done in two dimensions, we used the MSD in one dimension to match the experimental conditions: \(\mathrm{MSD}(t) = \frac{1}{2}\left(\mathrm{MSD}_x(t) + \mathrm{MSD}_y(t)\right)\).

The data supporting the findings of this study are available within the article and its Supplementary information. Extra data are available upon reasonable request to the corresponding author.

Code availability
Correspondence and requests for codes used in the paper should be addressed to the corresponding author.

Kojima, A., Miyasaka, T., Teshima, K. & Shirai, Y.
Organometal halide perovskites as visible-light sensitizers for photovoltaic cells. J. Am. Chem. Soc. 131, 6050–6051 (2009). Burschka, J. et al. Sequential deposition as a route to high-performance perovskite-sensitized solar cells. Nature 499, 316–319 (2013). ADS CAS PubMed Article PubMed Central Google Scholar Liu, M., Johnston, M. B. & Snaith, H. J. Efficient planar heterojunction perovskite solar cells by vapour deposition. Nature 501, 395–398 (2013). Green, M. A., Ho-Baillie, A. & Snaith, H. J. The emergence of perovskite solar cells. Nat. Photonics 8, 506–514 (2014). ADS CAS Article Google Scholar Veldhuis, S. A. et al. Perovskite materials for light-emitting diodes and lasers. Adv. Mater. 28, 6804–6834 (2016). Tan, Z.-K. et al. Bright light-emitting diodes based on organometal halide perovskite. Nat. Nanotechnol. 9, 687–692 (2014). Stranks, S. D. et al. Electron-hole diffusion lengths exceeding 1 micrometer in an organometal trihalide perovskite absorber. Science 342, 341–344 (2013). Shi, D. et al. Low trap-state density and long carrier diffusion in organolead trihalide perovskite single crystals. Science 347, 519–522 (2015). Yin, W. J., Shi, T. & Yan, Y. Unusual defect physics in CH3NH3PbI3 perovskite solar cell absorber. Appl. Phys. Lett. 104, 063903-063904 (2014). Brandt, R. E., Stevanović, V., Ginley, D. S. & Buonassisi, T. Identifying defect-tolerant semiconductors with high minority-carrier lifetimes: beyond hybrid lead halide perovskites. MRS Commun. 5, 265–275 (2015). Steirer, K. X. et al. Defect tolerance in methylammonium lead triiodide perovskite. ACS Energy Lett. 1, 360–366 (2016). Weidman, M. C., Seitz, M., Stranks, S. D. & Tisdale, W. A. Highly tunable colloidal perovskite nanoplatelets through variable cation, metal, and halide composition. ACS Nano 10, 7830–7839 (2016). Shamsi, J., Urban, A. S., Imran, M., De Trizio, L. & Manna, L. Metal halide perovskite nanocrystals: synthesis, post-synthesis modifications, and their optical properties. Chem. Rev. 119, 3296–3348 (2019). Jena, A. K., Kulkarni, A. & Miyasaka, T. Halide perovskite photovoltaics: background, status, and future prospects. Chem. Rev. 119, 3036–3103 (2019). Niu, G., Guo, X. & Wang, L. Review of recent progress in chemical stability of perovskite solar cells. J. Mater. Chem. A 3, 8970–8980 (2015). Berhe, T. A. et al. Organometal halide perovskite solar cells: Degradation and stability. Energy Environ. Sci. 9, 323–356 (2016). Yang, S., Fu, W., Zhang, Z., Chen, H. & Li, C. Z. Recent advances in perovskite solar cells: efficiency, stability and lead-free perovskite. J. Mater. Chem. A 5, 11462–11482 (2017). Smith, I. C., Hoke, E. T., Solis-Ibarra, D., McGehee, M. D. & Karunadasa, H. I. A layered hybrid perovskite solar-cell absorber with enhanced moisture stability. Angew. Chem. Int. Ed. 53, 11232–11235 (2014). Quan, L. N. et al. Ligand-stabilized reduced-dimensionality perovskites. J. Am. Chem. Soc. 138, 2649–2655 (2016). Liu, Y. et al. Ultrahydrophobic 3D/2D fluoroarene bilayer-based water-resistant perovskite solar cells with efficiencies exceeding 22%. Sci. Adv. 5, eaaw2543 (2019). Grancini, G. et al. One-Year stable perovskite solar cells by 2D/3D interface engineering. Nat. Commun. 8, 1–8 (2017). Yang, R. et al. Oriented quasi-2D perovskites for high performance optoelectronic devices. Adv. Mater. 30, 1804771 (2018). Ortiz-Cervantes, C., Carmona-Monroy, P. & Solis-Ibarra, D. Two-dimensional halide perovskites in solar cells: 2D or not 2D? ChemSusChem 12, 1560–1575 (2019). Tsai, H. et al. 
This work has been supported by the Spanish Ministry of Economy and Competitiveness through The "María de Maeztu" Program for Units of Excellence in R&D (MDM-2014-0377). M.S. acknowledges the financial support of a fellowship from "la Caixa" Foundation (ID 100010434). The fellowship code is LCF/BQ/IN17/11620040. M.S. has received funding from the European Union's Horizon 2020 research and innovation program under the Marie Skłodowska-Curie grant agreement No. 713673. F.P. acknowledges support from the Spanish Ministry for Science, Innovation, and Universities through the state program (PGC2018-097236-A-I00) and through the Ramón y Cajal program (RYC-2017-23253), as well as the Comunidad de Madrid Talent Program for Experienced Researchers (2016-T1/IND-1209). N.A., M.M. and R.D.B. acknowledge support from the Spanish Ministry of Economy, Industry and Competitiveness through Grant FIS2017-86007-C3-1-P (AEI/FEDER, EU). E.P. acknowledges support from the Spanish Ministry of Economy, Industry and Competitiveness through Grant FIS2016-80434-P (AEI/FEDER, EU), the Ramón y Cajal program (RYC-2011-09345) and the Comunidad de Madrid through Grant S2018/NMT-4511 (NMAT2D-CM). S.P. acknowledges financial support by the VILLUM FONDEN via the Centre of Excellence for Dirac Materials (Grant No. 11744).

Condensed Matter Physics Center (IFIMAC), Autonomous University of Madrid, 28049, Madrid, Spain: Michael Seitz, Alvaro J. Magdaleno, Nerea Alcázar-Cano, Marc Meléndez, Tim J. Lubbers, Sanne W. Walraven, Elsa Prada, Rafael Delgado-Buscalioni & Ferry Prins. Department of Condensed Matter Physics, Autonomous University of Madrid, 28049, Madrid, Spain: Michael Seitz, Alvaro J. Magdaleno, Tim J. Lubbers, Sanne W. Walraven, Elsa Prada & Ferry Prins. Department of Theoretical Condensed Matter Physics, Autonomous University of Madrid, 28049, Madrid, Spain: Nerea Alcázar-Cano, Marc Meléndez & Rafael Delgado-Buscalioni. Department of Physics and Astronomy, Aarhus University, 8000, Aarhus C, Denmark: Sahar Pakdel.

M.S. and F.P. designed this study. M.S. led the experimental work and processing of experimental data. M.S. set up the diffusion measurement technique with the assistance of T.J.L. and S.W.W. A.J.M. and M.S. performed temperature-dependent measurements. M.S. and A.J.M. prepared perovskite materials. N.A., M.M., and R.D.-B. performed theoretical and numerical modelling of exciton transport. M.S., F.P., S.P., and E.P. provided the theoretical interpretation of the intrinsic exciton transport. F.P. supervised the project. M.S. and F.P. wrote the original draft of the paper. All authors contributed to reviewing the paper. Correspondence to Ferry Prins. The authors declare no competing interests.

Peer review information: Nature Communications thanks the anonymous reviewer(s) for their contribution to the peer review of this work. Peer reviewer reports are available. Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. Peer Review File

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.

Seitz, M., Magdaleno, A.J., Alcázar-Cano, N. et al. Exciton diffusion in two-dimensional metal-halide perovskites. Nat Commun 11, 2035 (2020). https://doi.org/10.1038/s41467-020-15882-w
Experimental Insights of Reverse Switching Charge for Antiferroelectric Hf.Zr.O
C. Y. Liao, K. Y. Hsiang, C. Y. Lin, Z. F. Lou, Z. X. Li, H. C. Tseng, F. S. Chang, W. C. Ray, C. C. Wang, J. Y. Lee, P. H. Chen, J. H. Tsai, M. H. Liao, M. H. Lee*
Undergraduate Program of Electro-Optical Engineering

Experimental insights into a reverse switching charge for antiferroelectric (AFE) Hf0.1Zr0.9O2 are validated by pulse measurement and capacitance-voltage (C-V). The difference between saturation polarization ( $\text{P}_{\mathrm {S}}$ ) and remnant polarization ( $\text{P}_{\mathrm {r}}$ ) plays an important role in the model and is confirmed by the steep and gradual slope of the P-V loop, which is made by AFE and antiferroelectric-dielectric (AFE-DE), respectively. AFE capacitor yield far superior released charge ( $\text{Q}_{\mathrm {D}}$ ) than capacitor of AFE-DE bilayers due to strong reverse switching of $\text{P}_{\mathrm {S}}$ and $\text{P}_{\mathrm {r}}$ difference. A nonhysteretic $\text{Q}_{\mathrm {D}}$ scheme is proposed by alternating bipolar AFE operation without a DE to achieve a bidirectional enhancement. This work demonstrates an experimental $\text{Q}_{\mathrm {D}}$ enhancement by an AFE system and supports the reverse switching concept.

Keywords: Antiferroelectric, charge enhancement, reverse switching

Liao, C. Y., Hsiang, K. Y., Lin, C. Y., Lou, Z. F., Li, Z. X., Tseng, H. C., Chang, F. S., Ray, W. C., Wang, C. C., Lee, J. Y., Chen, P. H., Tsai, J. H., Liao, M. H., & Lee, M. H. (2022). Experimental Insights of Reverse Switching Charge for Antiferroelectric Hf.Zr.O. IEEE Electron Device Letters, 43(9), 1559-1562. https://doi.org/10.1109/LED.2022.3189669
January 2020, 40(1): 267-318. doi: 10.3934/dcds.2020011
Almost sure global well posedness for the BBM equation with infinite $ L^{2} $ initial data
Justin Forlano
Maxwell Institute for Mathematical Sciences, Department of Mathematics, Heriot-Watt University, Edinburgh, EH14 4AS, United Kingdom
Received January 2019 Revised July 2019 Published October 2019
Fund Project: J. F. was supported by The Maxwell Institute Graduate School in Analysis and its Applications, a Centre for Doctoral Training funded by the UK Engineering and Physical Sciences Research Council (grant EP/L016508/01), the Scottish Funding Council, Heriot-Watt University and the University of Edinburgh. J. F. also acknowledges support from Tadahiro Oh's ERC starting grant no. 637995 "ProbDynDispEq".

We consider the probabilistic Cauchy problem for the Benjamin-Bona-Mahony equation (BBM) on the one-dimensional torus $ \mathbb{T} $ with initial data below $ L^{2}( \mathbb{T}) $. With respect to random initial data of strictly negative Sobolev regularity, we prove that BBM is almost surely globally well-posed. The argument employs the $ I $-method to obtain an a priori bound on the growth of the 'residual' part of the solution. We then discuss the stability properties of the solution map in the deterministically ill-posed regime.

Keywords: BBM equation, almost sure local well-posedness, almost sure global well-posedness, ill-posedness, norm inflation, stability.
Mathematics Subject Classification: Primary: 35Q53, 76B15.
Citation: Justin Forlano. Almost sure global well posedness for the BBM equation with infinite $ L^{2} $ initial data. Discrete & Continuous Dynamical Systems, 2020, 40 (1) : 267-318. doi: 10.3934/dcds.2020011
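For orientation only, and as a restatement that is not part of the page above, the Benjamin-Bona-Mahony (regularized long-wave) equation referred to in this abstract is commonly written for $u(t,x)$ as

$$ \partial_t u + \partial_x u + u\,\partial_x u - \partial_x^{2}\partial_t u = 0, \qquad x \in \mathbb{T}. $$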
Multiple positive solutions for the Schrödinger-Poisson equation with critical growth
Caixia Chen 1, and Aixia Qian 2
1. School of Mathematics and Computer Application Technology, Jining University, Shandong 273155, China
2. School of Mathematical Sciences, Qufu Normal University, Shandong 273165, China
* Corresponding author: Caixia Chen
Received July 2021 Early access December 2021
Fund Project: Supported by the Shandong Province Science Foundation ZR2021MA096 and ZR2020MA005

In this paper, we consider the following Schrödinger-Poisson equation
$$ \left\{\begin{aligned} &-\triangle u + u + \phi u = u^{5}+\lambda g(u), &\hbox{in}\ \ \Omega, \\ & -\triangle \phi = u^{2}, & \hbox{in}\ \ \Omega, \\ & u, \phi = 0, & \hbox{on}\ \ \partial\Omega,\end{aligned}\right. $$
where $ \Omega $ is a bounded smooth domain in $ \mathbb{R}^{3} $, $ \lambda>0 $, and the nonlinear growth of $ u^{5} $ reaches the Sobolev critical exponent in three spatial dimensions. With the aid of variational methods and the concentration compactness principle, we prove the problem admits at least two positive solutions and one positive ground state solution.

Keywords: Schrödinger-Poisson equation, Critical exponent, Variational methods, Concentration-compactness principle, Ground state solution.
Mathematics Subject Classification: Primary: 35J05, 35J20; Secondary: 35J60.
Citation: Caixia Chen, Aixia Qian. Multiple positive solutions for the Schrödinger-Poisson equation with critical growth. Mathematical Foundations of Computing, doi: 10.3934/mfc.2021036
Basics of thixotropy

Many products used in daily life can be characterized by their thixotropic behavior. Thixotropy is the property that explains why personal care products like hair gels and toothpaste are liquid when squeezed out of the tube but recover to their initial solid state afterwards in order to remain in place. The perfectly adjusted rheological properties of structural decomposition and regeneration as a function of time are responsible for the quality of a product. This article describes how thixotropy testing can be performed with a rotational viscometer/rheometer to control/influence the application behavior of materials.

Definition of thixotropy
Thixotropy is the property of certain fluids and gels of becoming thinner when a constant force is applied and after reduction of the force the viscosity recovers fully to the initial state in an appropriate period of timei-ii. The higher the force that is applied, the lower the viscosity becomes. Thixotropy is a time-dependent phenomenon, as the viscosity of the substance must recover after a certain period of time when the applied force is removediii. The term thixotropy consists of the Greek words "thixis" (to touch) and "trepein" (to turn). It means the change or transition of a substance due to mechanical loadiv. Examples of thixotropic materials are lotions, gels, ketchup, paints, and gypsum. For example, ketchup flows out of the tube when it is pressed. Its viscosity becomes lower as force is applied. After the force lessens, the viscosity of the ketchup recovers to its initial state for perfect leveling on French fries. This means that thixotropic behavior is always combined with shear-thinning flow behavior. Shear thinning, also called 'pseudoplastic' flow behavior, is characterized by a decrease in viscosity due to an increasing applied force (shear load). All in all, there are three different types of time-dependent flow behaviors:

Thixotropic behavior
In thixotropic materials the structural strength decreases with a higher load (in rheological terms: while shearing) and recovers completely after a certain rest period. The rest period needed for recovery strongly depends on the application and has to be defined prior to the test. Thixotropic behavior is an important quality characteristic of, for example, paints and coatings. It influences the way paint levels out and prevents sagging but also ensures a sufficient and consistent wet layer thickness.

Non-thixotropic behavior
In non-thixotropic materials the structural strength decreases while shearing but the viscosity does not fully recover after an appropriate rest period. It remains thinner than the initial state which means that the structure does not fully recover (<100 %). A typical sample which shows this behavior is yogurt. After stirring, the viscosity of yogurt remains thinner than initially. Learn more about the rheology of dairy products here.

Rheopectic behavior
In rheopectic materials the structural strength increases while shearing and recovers after a certain rest period.
This phenomenon is rare but can be found in suspensions with a high solid content like latex dispersions or ceramic casting slips.

Test methods for thixotropy testing
Thixotropy testing can be carried out with a viscometer or rheometer in rotation or oscillation. Rotational tests are described in the next chapter. There are diverse test methods available for analyzing thixotropic behavior. The focus of this article lies on the most common test methods. It has to be noted that each of the following test methods is performed with a different test procedure and, therefore, the outputs will differ from each other. Only thixotropic behavior tests conducted with the same method under the same conditions can be compared to each other.

Step test (3 intervals thixotropy test, 3ITT)
A step test is usually performed with a rotational rheometer by fast speed changes. The step test consists of three intervals and is therefore called "3 intervals thixotropy test (3ITT)". It can be performed either in a controlled shear rate (CSR) mode or in a controlled shear stress (CSS) mode: In CSR mode the shear rate or rotational speed is preset, whereas in CSS mode the shear stress or torque is preset on the viscometerv. The test is performed at two different speeds/shear rates. The first and last intervals are performed at a low shear rate and the second interval is performed at a high shear rate (Figure 1). In CSS mode the first and last intervals are performed at a low shear stress and the second interval is performed at a high shear stress.

Figure 1: Step procedure of a rotational test consisting of a low-shear, high-shear, and low-shear phase. ẏ = shear rate; t = time

Time-dependent changes in viscosity during the 3ITT test represent the sample's real behavior before, during, and after the application (see Figure 2):
Low-shear phase: The aim of the first interval is to obtain a constant viscosity at a constant low shear rate. This interval provides the reference viscosity of the sample at rest.
High-shear phase: In this interval the sample is strongly sheared at a constant high shear rate to simulate the sample's behavior during application, e.g. during stirring, rolling, painting, spraying, and pumping. The structural decomposition can be determined due to the sample's shear-thinning behavior, also known as pseudoplastic behavior.
Low-shear phase: Here, the same constant low shear rate is preset as in the first interval. This interval allows the sample to recover its structure/viscosity. The structural regeneration of the sample can be determined with one of the following analysis methods.

Figure 2: Time-dependent viscosity of a sample with thixotropic behavior. ƞ = viscosity, t = time

Analysis methods for the step test
The third interval of the 3ITT test is used for analyzing the thixotropic behavior of the sample. There are different methods for analyzing the structural regeneration:

Recovery ratio after a given time: Prior to starting the test, the user has to define the point of time at which the structural recovery should be analyzed. The points of time have to be set according to the requirements of the application. The viscosities at these points are then compared to the viscosity of the rest phase in the first interval. For example, the structure of the paint recovered up to 80 % after 60 seconds of the third interval (Figure 3).

Figure 3: Analyzing the recovery ratio after a given time. ƞ = viscosity, t = time

Time for a given recovery ratio: The time needed for structural recovery (100 %) is often very long. For example, after shaking paraffin oil, it needs about eight hours to fully recover to its initial solid state. For this reason, the time for a lower recovery ratio is usually analyzed. The recovery ratio of interest is set prior to the test. Then the time needed to recover to the set recovery ratio is calculated. The time is measured from the beginning of the third interval, the recovery interval. In Figure 4 the time needed for 25 % and 50 % structural recovery is analyzed.

Figure 4: Analyzing the time for a given recovery ratio. ƞ = viscosity, t = time
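As a minimal sketch of how these two analysis methods could be evaluated numerically from a recorded 3ITT trace: the reference viscosity, the time/viscosity arrays for the third interval, and the chosen thresholds below are all hypothetical values, not data from this article.

```python
# Minimal sketch (hypothetical data): analysing the third (recovery) interval
# of a 3ITT test. viscosity_ref is the plateau viscosity from the first
# low-shear interval; time_s starts at 0 s at the beginning of interval 3.

viscosity_ref = 12.0  # Pa.s, reference viscosity at rest (interval 1)

time_s        = [0, 10, 20, 30, 60, 120, 300]          # s, recovery interval
viscosity_pas = [2.5, 5.1, 7.0, 8.3, 9.8, 11.2, 12.0]  # Pa.s, recorded readings

def recovery_ratio_at(t_target):
    """Recovery ratio (in %) at the reading closest to t_target seconds."""
    idx = min(range(len(time_s)), key=lambda i: abs(time_s[i] - t_target))
    return 100.0 * viscosity_pas[idx] / viscosity_ref

def time_for_ratio(ratio_percent):
    """First recorded time at which the given recovery ratio is reached."""
    target = ratio_percent / 100.0 * viscosity_ref
    for t, eta in zip(time_s, viscosity_pas):
        if eta >= target:
            return t
    return None  # ratio not reached within the measured interval

print(f"recovery after 60 s   : {recovery_ratio_at(60):.0f} %")
print(f"time for 50 % recovery: {time_for_ratio(50)} s")
```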
ƞ = viscosity, t = time Time for a given recovery ratio: The time needed for structural recovery (100 %) is often very long. For example, after shaking paraffin oil, it needs about eight hours to fully recover to its initial solid state. For this reason, the time for a lower recovery ratio is usually analyzed. The recovery ratio of interest is set prior to the test. Then the time needed to recover to the set recovery ratio is calculated. The time is measured from the beginning of the third interval, the recovery interval. In Figure 4 the time needed for 25 % and 50 % structural recovery is analyzed. Figure 4: Analyzing the time for a given recovery ratio. ƞ = viscosity, t = time Hysteresis area method Another simple method for analyzing the time-dependent flow behavior is the hysteresis area. In older literature that is not up to date anymore, this behavior is called thixotropic or rheopectic, respectively. However, according to modern standards such as DIN spec 91143-2 and ISO/WD 3219-1 they are no longer valid in principle. The reason is: This measuring method evaluates the amount of structural breakdown (or build-up) under high shear conditions, but there is no interval available to evaluate structural recovery under really low shear conditions. In this test the sample is sheared at different speeds. The viscometer/rheometer is first set to a low speed. The speed is increased stepwise to higher speeds, generating an upwards ramp (e.g. 1 rpm to 100 rpm). After reading the shear stress at the top speed, the speed is either kept for a certain holding time (e.g. 60 seconds) and finally decreased stepwise to the lowest speed, generating a downwards ramp (e.g.100 rpm to 1 rpm) or the downwards ramp is generated immediately without a holding period. The result is plotted as a flow curve diagram showing the shear rate on the x-axis and the shear stress on the y-axis. Usually the shear rate is preset on the rheometer and the torque/force needed to rotate the bob in the cup filled with sample is measured. The area between the upwards- and downwards ramp is called the hysteresis area (Figure 5). Figure 5: Flow curve showing the hysteresis area. 1 = indication for structural breakdown; 2 = indication for structural build-up, Ԏ = shear stress, ẏ = shear rate The flow curve diagram shows how the shear stress changes with increasing shear rate/speed. A decrease in shear stress during the holding interval at a constantly high speed indicates that the viscosity of the sample decreases. If the upwards and downwards ramp do not differ from each other the sample's behavior is independent of time when shearing. If the upwards ramp shows a higher shear stress reading than the downwards ramp the sample's behavior is time-dependent under shear load, showing shear-thinning behavior then. If the upwards ramp shows a lower shear stress reading than the downwards ramp, then the sample shows time-dependent behavior when shearing, showing shear-thickening behavior. The amount of the hysteresis area is calculated as follows: Area between the upwards ramp and ẏ-axis Area between the downwards ramp and ẏ-axis If the value is positive the sample shows structural breakdown and if the value is negative the sample shows structural build-up on shearing. For very simple quality control tests some users perform the following method in order to evaluate thixotropic behavior. To analyze the time needed for recovery of the viscosity after shearing, the viscometer has to be stopped after the downwards ramp. 
For very simple quality control tests some users perform the following method in order to evaluate thixotropic behavior. To analyze the time needed for recovery of the viscosity after shearing, the viscometer has to be stopped after the downwards ramp. After a certain waiting period, the viscometer is started again at the lowest speed available in order to see the build-up of the viscosity (structural regeneration). Comparing the viscosity of the sample before and after turning the viscometer off and on illustrates how quickly the sample's viscosity returns to its initial state after shearing. If the viscometer shows the same viscosity value as before, the viscosity has fully recovered in the waiting period.

"Thixotropic Index"
Sometimes the term "Thixotropic Index (TI)" is used in different ways concerning measurement methods and analysis. Some call TI the ratio between the viscosity of a sample at a low (ƞ A) and at a high (ƞ B) rotational speed. For example, a material's viscosity was measured at 5 rpm (ƞ A) and at 50 rpm (ƞ B). Afterwards ƞ A is divided by ƞ B. If the value of TI = 1 the sample shows Newtonian flow behavior, i.e. it remains unchanged. If TI > 1 the sample shows speed-dependent shear-thinning flow behavior and if TI < 1 the sample shows speed-dependent shear-thickening flow behavior. However, here the term "thixotropic index" is misleading since this ratio quantifies time-independent non-Newtonian (shear-thinning or shear-thickening) behavior and not thixotropy. To quantify thixotropy, time-dependent structural decomposition and regeneration have to be measured. TI is sometimes also called the "Shear Thinning Index"vi, which is the better term in fact.

Others may call TI the ratio between the viscosity values at two different points of time obtained at a constant rotational speed. For example a material's viscosity is measured after 30 s (ƞ A) and after 600 s (ƞ B) at 20 rpm. Afterwards ƞ A is divided by ƞ B. If TI = 1 the material shows time-independent flow behavior. If TI > 1 the material shows time-dependent shear-thinning behavior and if TI < 1 the material shows time-dependent shear-thickening behavior. Also here, the term "thixotropic index" is misleading since this ratio quantifies time-dependent structural decomposition of a material but not its structural regeneration.

"Thixotropic breakdown coefficient"
The "thixotropic breakdown (Tb) coefficient" is a simple test for analyzing the time-dependent behavior of samples. It is especially used for quick quality control checks with entry-level rotational viscometers. In this test the sample is sheared at a constant speed (or shear rate) for a certain period of time. The change in viscosity over time indicates the sample's time-dependent behavior. If the viscosity decreases, the sample shows time-dependent shear-thinning behavior and if the viscosity increases over time the sample features a time-dependent shear-thickening behaviorvii. For example, paint is measured while in rotation for 10 minutes by constantly maintaining 50 revolutions per minute (rpm). The viscosity of the sample has to be recorded at regular intervals (e.g. every 30 seconds). The viscometer reading (viscosity) is then plotted against time. Afterwards the Tb is quantified by a single number using equation 1viii.

$$Tb = \frac{St_1 - St_2}{\ln\left(\frac{t_2}{t_1}\right)} \cdot F$$

St1 = Viscometer reading at t1 minutes
St2 = Viscometer reading at t2 minutes
F = Factor for spindle/speed combination

Equation 1: Formula for calculating the "thixotropic breakdown coefficient"

Tb has the unit of viscosity (Pa•s or mPa•s, or in old literature P or cP).
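A minimal sketch of the two viscosity ratios and the Tb coefficient of Equation 1 is given below. All readings, times, and the spindle/speed factor are made-up example numbers, not values from the article.

```python
# Minimal sketch (hypothetical readings): the two "TI" ratios described above
# and the thixotropic breakdown coefficient Tb of Equation 1.
import math

# ratio of viscosities at a low and a high rotational speed
eta_5rpm, eta_50rpm = 8.4, 2.1      # Pa.s
ti_speed = eta_5rpm / eta_50rpm     # > 1 -> shear-thinning flow behavior

# ratio of viscosities at two points in time at constant speed
eta_30s, eta_600s = 6.9, 4.2        # Pa.s at 20 rpm
ti_time = eta_30s / eta_600s        # > 1 -> time-dependent shear thinning

# thixotropic breakdown coefficient Tb (Equation 1)
St1, St2 = 6.9, 4.2                 # viscometer readings (viscosity units)
t1, t2 = 0.5, 10.0                  # minutes
F = 1.0                             # spindle/speed factor (instrument specific)
Tb = (St1 - St2) / math.log(t2 / t1) * F

print(f"speed ratio: {ti_speed:.2f}, time ratio: {ti_time:.2f}, Tb: {Tb:.2f} Pa.s")
```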
Also here, "Thixotropic breakdown coefficient" is not a very suitable name: According to modern standards this ratio does not describe thixotropic behavior since there is no structure recovery interval available afterwards. This method can be compared to those mentioned in chapter 2.4. Thixotropy tests give an insight into the sample's time-dependent flow behavior and can thereby be employed for quality control of various products. According to modern standards such as DIN spec 91143-2 and ISO/WD 3219-1 thixotropy is characterized by decreasing viscosity over time when a shear rate is applied and full structural regeneration after the shear rate is set to a very low value. Only materials which fully recover their structure after shearing, like most ketchup samples, are called thixotropic materials and can be analyzed using the step test. Simple methods, such as analyzing the hysteresis area, "thixotropic index", and the "thixotropic breakdown coefficient" are often used as a simple and quick quality control method. However, according to state-of-the-art standards they do not entirely evaluate thixotropic behavior. Learn how thixotropy testing with a rotational viscometer/rheometer can support you in the automotive paint application process. i DIN spec 91143-2 Modern rheological test methods – Part 2: Thixotropy - Determination of the time-dependent structural change - Fundamentals and interlaboratory test (2012) ii ISO 3219/WD 3219-1: General terms and definitions for rotational and oscillatory rheometry (2019) iii Mewis, J; Wagner, N J (2009). "Thixotropy". Advances in Colloid and Interface Science. 147-148, 214-227 iv Mezger, T. (2014). The Rheology Handbook. 4rd revised ed. Hanover: Vincentz Network. v Mezger, T.G.: Applied Rheology, 2018 (5th edition). vi ASTM D2196-10: Standard Test Methods for Rheological Properties of Non-Newtonian Materials by Rotational (Brookfield type) Viscometer vii Basu, S.; Shivhare, US.; Raghavan, GSV. (2007) Time Dependent Rheological Characteristics of Pineapple Jam. International Journal of Food Engineering 3, 3 viii Shapiro, I. (1946) The characteristic shear value: A coefficient of thixotropic breakdown. J. Am. Chem. Soc. 68 (10), 2122 – 2123 You had already rated this article 1. Definition of thixotropy 1.1. Thixotropic behavior 1.2. Non-thixotropic behavior 1.3. Rheopectic behavior 2. Test methods for thixotropy testing 2.1. Step test (3 intervals thixotropy test, 3ITT) 2.2. Analysis methods for the step test 2.3. Hysteresis area method 2.4. "Thixotropic Index" 2.5. "Thixotropic breakdown coefficient" Anton Paar specialists are close to you to provide service, support, and training. Sitemap | Copyright 2021 Anton Paar GmbH Privacy Policy|Legal Notice
Triangles can be classified according to the length of their sides:
Equilateral triangle: all sides are equal
Isosceles triangle: at least 2 sides are equal in length
Scalene triangle: all sides are unequal
Types of triangles based on the length of their sides.

Triangles can also be classified according to their internal angles:
Acute triangle: all interior angles measuring less than 90°
Right triangle: one of its interior angles measures 90°
Obtuse triangle: one interior angle measuring more than 90°
Types of triangles based on internal angles.

Length of sides between two points
The length $$l$$ of a side that joins the internal angles $$(x_1, y_1)$$ and $$(x_2, y_2)$$, can be calculated as
\[ l = \sqrt{(x_1 - x_2)^2 + (y_1 - y_2)^2} \]

Internal angle formed by 2 sides
The size of an internal angle $$\gamma$$ formed by two sides of a triangle can be calculated using the law of cosines
$$cos(\gamma) = \frac{a^2+b^2-c^2}{2ab}$$
where $$a$$ and $$b$$ represent the length of the sides that form the internal angle $$\gamma$$, and $$c$$ represents the length of the side right across the internal angle $$\gamma$$.
Law of cosines

Comparing floating point numbers
For this assignment you have to compare the length of the sides with the size of the angles. Take into consideration that these figures are represented by floating point numbers. When inserted in a computer memory, minor mistakes in rounding off may occur. It is possible that a variable angle representing a 90° angle, has an actual value of 90.00000001. Therefore, testing an angle as follows is a bad idea:

if angle == 90:
    print('right angle')

Instead, the condition can be reformulated by stating that the size of the corner has to be "in a close range" from 90°. In the code fragment below, an angle may not deviate from 90° by more than 0.000001:

if abs(angle - 90) < 1e-6:

Watch this video1 containing further explanation about working with floating point numbers.

Determine the name of the triangle formed by three non-collinear points, based both on its sides as well as on its internal angles.

Input
Six real numbers, each on a separate line. Every pair of consecutive numbers represents the $$(x, y)$$ co-ordinate for an internal angle of a triangle. You may presume that these internal angles are non-collinear.

Output
The name of the triangle that is formed by three points of which the co-ordinates are stated in the input. Use the template Triangle classification: classification_sides classification_angles. Here classification_sides is the term that represents a triangle classified according to the length of its sides. classification_angles is the term that represents a triangle classified according to the size of its internal angles. Always give the most specific term that applies to the triangle.

-5.018651714882998
-4.1327268984230505
Triangle classification: isosceles right

[1]: http://www.youtube.com/watch?v=IeN3TlC9lxM
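A minimal sketch of one possible solution is shown below. It is not an official reference solution: the tolerance value and the use of the same tolerance for comparing side lengths are choices made for illustration.

```python
# Sketch of a possible solution: read six floats (three vertices) from standard
# input and print the classification in the required template.
import math

def side_length(p, q):
    """Distance between two points p and q."""
    return math.sqrt((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2)

def angle(adj1, adj2, opposite):
    """Internal angle (degrees) opposite 'opposite', via the law of cosines."""
    cos_gamma = (adj1 ** 2 + adj2 ** 2 - opposite ** 2) / (2 * adj1 * adj2)
    cos_gamma = max(-1.0, min(1.0, cos_gamma))  # guard against rounding outside [-1, 1]
    return math.degrees(math.acos(cos_gamma))

EPS = 1e-6  # illustrative tolerance for floating point comparisons

coords = [float(input()) for _ in range(6)]
p1, p2, p3 = (coords[0], coords[1]), (coords[2], coords[3]), (coords[4], coords[5])

a = side_length(p2, p3)  # side opposite p1
b = side_length(p1, p3)  # side opposite p2
c = side_length(p1, p2)  # side opposite p3
angles = [angle(b, c, a), angle(a, c, b), angle(a, b, c)]

if abs(a - b) < EPS and abs(b - c) < EPS:
    sides = 'equilateral'
elif abs(a - b) < EPS or abs(b - c) < EPS or abs(a - c) < EPS:
    sides = 'isosceles'
else:
    sides = 'scalene'

if any(abs(x - 90) < EPS for x in angles):
    kind = 'right'
elif all(x < 90 for x in angles):
    kind = 'acute'
else:
    kind = 'obtuse'

print(f'Triangle classification: {sides} {kind}')
```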
block matrix multiplication

If $A,B$ are $2 \times 2$ matrices of real or complex numbers, then
$$AB = \left[ \begin{array}{cc} a_{11} & a_{12} \\ a_{21} & a_{22} \end{array} \right]\cdot \left[ \begin{array}{cc} b_{11} & b_{12} \\ b_{21} & b_{22} \end{array} \right] = \left[ \begin{array}{cc} a_{11}b_{11}+a_{12}b_{21} & a_{11}b_{12}+a_{12}b_{22} \\ a_{21}b_{11}+a_{22}b_{21} & a_{21}b_{12}+a_{22}b_{22} \end{array} \right] $$
What if the entries $a_{ij}, b_{ij}$ are themselves $2 \times 2$ matrices? Does matrix multiplication hold in some sort of "block" form?
$$AB = \left[ \begin{array}{c|c} A_{11} & A_{12} \\\hline A_{21} & A_{22} \end{array} \right]\cdot \left[ \begin{array}{c|c} B_{11} & B_{12} \\\hline B_{21} & B_{22} \end{array} \right] = \left[ \begin{array}{c|c} A_{11}B_{11}+A_{12}B_{21} & A_{11}B_{12}+A_{12}B_{22} \\\hline A_{21}B_{11}+A_{22}B_{21} & A_{21}B_{12}+A_{22}B_{22} \end{array} \right] $$
This identity would be very useful in my research.
linear-algebra matrices numerical-linear-algebra — asked by cactus314

Comment: Yes it does if the "blocking" is "conforming". – Algebraic Pavel May 9 '14 at 14:12
Comment: Yes. (The blocking is "confirming" in the situation you have given.) Discussed in detail in §6.12 of cip.ifi.lmu.de/~grinberg/primes2015/sols.pdf (specifically Exercise 38 and Remark 6.73; search for "block-matrix notation" if these numbers change). – darij grinberg Dec 4 '15 at 10:16

Answer: It depends on how you partition it, not all partitions work. For example, if you partition these two matrices
$$\begin{bmatrix} a & b & c \\ d & e & f \\ g & h & i \end{bmatrix}, \begin{bmatrix} a' & b' & c' \\ d' & e' & f' \\ g' & h' & i' \end{bmatrix} $$
in this way
$$ \left[\begin{array}{c|cc}a&b&c\\ d&e&f\\ \hline g&h&i \end{array}\right], \left[\begin{array}{c|cc}a'&b'&c'\\ d'&e'&f'\\ \hline g'&h'&i' \end{array}\right] $$
and then multiply them, it won't work. But this would
$$\left[\begin{array}{c|cc}a&b&c\\ \hline d&e&f\\ g&h&i \end{array}\right] ,\left[\begin{array}{c|cc}a'&b'&c'\\ \hline d'&e'&f'\\ g'&h'&i' \end{array}\right] $$
What's the difference? Well, in the first case, all submatrix products are not defined, like $\begin{bmatrix} a \\ d \\ \end{bmatrix}$ cannot be multiplied with $\begin{bmatrix} a' \\ d' \\ \end{bmatrix}$
So, what is the general rule? (Taken entirely from the Wiki page on Block matrix) Given an $(m \times p)$ matrix $\mathbf{A}$ with $q$ row partitions and $s$ column partitions
$$\begin{bmatrix} \mathbf{A}_{11} & \mathbf{A}_{12} & \cdots &\mathbf{A}_{1s}\\ \mathbf{A}_{21} & \mathbf{A}_{22} & \cdots &\mathbf{A}_{2s}\\ \vdots & \vdots & \ddots &\vdots \\ \mathbf{A}_{q1} & \mathbf{A}_{q2} & \cdots &\mathbf{A}_{qs}\end{bmatrix}$$
and a $(p \times n)$ matrix $\mathbf{B}$ with $s$ row partitions and $r$ column partitions
$$\begin{bmatrix} \mathbf{B}_{11} & \mathbf{B}_{12} & \cdots &\mathbf{B}_{1r}\\ \mathbf{B}_{21} & \mathbf{B}_{22} & \cdots &\mathbf{B}_{2r}\\ \vdots & \vdots & \ddots &\vdots \\ \mathbf{B}_{s1} & \mathbf{B}_{s2} & \cdots &\mathbf{B}_{sr}\end{bmatrix}$$
that are compatible with the partitions of $\mathbf{A}$, the matrix product $ \mathbf{C}=\mathbf{A}\mathbf{B} $ can be formed blockwise, yielding $\mathbf{C}$ as an $(m\times n)$ matrix with $q$ row partitions and $r$ column partitions. – answered by The very fluffy Panda

Comment: Use \pmatrix{a&b&c\\d&e&f\\g&h&i} – Berci May 9 '14 at 14:37
Comment: @Berci I know that.
But getting those horizontal and vertical lines is the difficult part. $\endgroup$ – The very fluffy Panda May 9 '14 at 14:38 $\begingroup$ @PandaBear You can use $\color{blue}{\text{colors}}$ :) $\endgroup$ – Algebraic Pavel May 9 '14 at 14:45 $\begingroup$ @PavelJiranek Your comment is funny looking. $\endgroup$ – The very fluffy Panda May 9 '14 at 14:46 $\begingroup$ I think this is wrong. You can't partition both of them the same way. If you partition after x rows in the first matrix, you have to partition after x columns (not rows) in the second matrix. Otherwise, while multiplying, you would have to multiply an m×n block with another m×n block, which is not possible (you need an n×p block). Try it with your example. $\endgroup$ – A Googler Oct 1 '15 at 18:08
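As a quick numerical sanity check of the conforming-blocking rule (illustrative only; the 4 × 4 example and the 2 + 2 partition are arbitrary choices, not from the question):

import numpy as np

rng = np.random.default_rng(0)
A = rng.random((4, 4))
B = rng.random((4, 4))

# Partition A's columns and B's rows the same way (here 2 + 2),
# so every block product A_ik @ B_kj is defined ("conforming" blocking).
A11, A12 = A[:2, :2], A[:2, 2:]
A21, A22 = A[2:, :2], A[2:, 2:]
B11, B12 = B[:2, :2], B[:2, 2:]
B21, B22 = B[2:, :2], B[2:, 2:]

top = np.hstack([A11 @ B11 + A12 @ B21, A11 @ B12 + A12 @ B22])
bottom = np.hstack([A21 @ B11 + A22 @ B21, A21 @ B12 + A22 @ B22])
blockwise = np.vstack([top, bottom])

print(np.allclose(blockwise, A @ B))  # expected: True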
Detection and classification of social media-based extremist affiliations using sentiment analysis techniques Shakeel Ahmad1, Muhammad Zubair Asghar2, Fahad M. Alotaibi3 & Irfanullah Awan4 Human-centric Computing and Information Sciences volume 9, Article number: 24 (2019) A Correction to this article was published on 11 July 2019 This article has been updated Identification and classification of extremist-related tweets is a hot issue. Extremist gangs have been involved in using social media sites like Facebook and Twitter for propagating their ideology and recruiting individuals. This work aims at proposing a terrorism-related content analysis framework with the focus on classifying tweets into extremist and non-extremist classes. Based on user-generated social media posts on Twitter, we develop a tweet classification system using deep learning-based sentiment analysis techniques to classify the tweets as extremist or non-extremist. The experimental results are encouraging and provide a gateway for future researchers. With the tremendous increase in the use of social network sites like Twitter and Facebook, the online community is exchanging information in the form of opinions, sentiments, emotions, and intentions, which reflect their affiliations and attitude towards an entity, event or policy [1,2,3]. The propagation of extremist content has also been increasing and is considered a serious issue in the recent era due to the rise of militant groups such as the Irish Republican Army, the Revolutionary Armed Forces of Colombia (FARC), Al Qaeda, ISIS (Daesh), Al Shabaab, the Taliban, Hezbollah and others [4]. These groups have spread their roots not only at the community level, but their networks are also gaining ground on social networking sites [5]. These networking sites are vulnerable and approachable platforms for group strengthening, propaganda, brainwashing, and fundraising, due to their massive impact on public sentiment and opinion. Opinions expressed on such sites give an important clue about the activities and behavior of online users. Detection of such extremist content is important to analyze user sentiment towards some extremist group and to discourage such associated unlawful acts. It is also beneficial in terms of classifying a user's extremist affiliation by filtering tweets prior to their onward transmission, recommendation, or training an AI chatbot from tweets [6]. The traditional techniques of filtering extremist tweets are not scalable, inspiring researchers to develop automated techniques. In this study, we focus on the problem of classifying a tweet as extremist or non-extremist. The task faces different challenges, such as different kinds of extremism, various targets and multiple ways of representing the same semantics. The existing studies of extremism informatics are based on classical machine learning techniques [7, 8] or use classical feature representation schemes followed by a classifier. In their work, Wei et al. [8] proposed a machine learning-based classification system for identifying extremist-related conversations on Twitter. Different features are investigated for identifying extremist behavior on Twitter based on public tweets by applying a KNN classifier. Based on social media communication, Azizan and Aziz [7] conducted a study for the detection of extremist affiliations using a machine learning technique, namely the Naïve Bayes algorithm, which showed the best results over other ML classifiers.
However, in the state-of-the-art work [7] on extremist affiliation detection, the authors applied a machine learning classifier with classical features. Furthermore, they classified user reviews into positive and negative sentiments reflecting affiliations with extremist groups. However, classification of tweets into positive and negative classes does not provide an efficient way of distinguishing between extremist and non-extremist tweets. Another major limitation of their approach is that it lacks the ability to capture the overall dependencies of sentences in a document. Therefore, the machine learning model does not provide an efficient way of classifying text into extremist and non-extremist classes. To overcome the aforementioned limitations of the state-of-the-art study [7], we investigate deep learning-based sentiment analysis techniques, which have already shown promising performance across a large number of complicated problems in different domains like vision, speech and text analytics [9, 10]. We propose to apply the LSTM-CNN model, which works as follows: (i) the CNN model is applied for feature extraction, and (ii) the LSTM model receives input from the output of the CNN model and retains the sequential correlation by taking into account the previous data, capturing the global dependencies of a sentence in the document with respect to tweet classification into extremist and non-extremist. We take the task of extremist affiliation detection as a binary classification task. We take a training set Tr = {t1, t2, t3, …, tn} and class tags (labels) Extremist_affiliation = {yes, no}. Each tweet is assigned a tag. The aim is to design a model which can learn from the training data set and can classify a new tweet as either extremist or non-extremist. Twitter-based messaging is a major element of communication among individuals and groups, including extremists and extremist groups. Using this sort of communication, future terrorist activities can potentially be traced. We propose a technique to identify tweets containing such content. Additionally, we classify sentiments of users in terms of emotional affiliations expressed towards individuals and groups having extremist thoughts. For this purpose, we apply the IBM Watson tone analysis API [11]. In this work, we experiment with multiple Machine Learning (ML) classifiers such as Random Forest, Support Vector Machine, K-Nearest Neighbors, and Naïve Bayes classifiers, and deep learning (DL) classifiers. The feature set for such classifiers is encoded by task-driven embeddings trained over different classifiers: CNN, LSTM, and CNN + LSTM. As baselines, we compare with feature sets consisting of n-grams [12], TF-IDF, and bag of words (BoW) [13]. The proposed system aims at applying deep learning-based sentiment analysis techniques to answer the following research questions: RQ#1: How to recognize and classify tweets as extremist vs non-extremist, by applying deep learning-based sentiment analysis techniques? RQ#2: What is the performance of classical feature sets like n-grams, TF-IDF, and bag-of-words (BoW) compared with word embeddings learned using CNN, LSTM, FastText, and GRU? RQ#3: What is the performance of the proposed technique for extremist affiliation classification with respect to the state-of-the-art methods? RQ#4: How to perform the sentiment classification of user reviews with respect to emotional affiliations of extremists on Twitter and the Deep Web?
The following contributions are made in this study: Classifying user reviews (tweets) as having extremist or non-extremist affiliations using deep learning-based sentiment analysis techniques. Investigating classical feature sets like n-grams, TF-IDF, and bag-of-words (BoW) against word embeddings learned using CNN, LSTM, FastText, and GRU for classifying tweets as extremist or non-extremist. Sentiment classification of user reviews with respect to emotional affiliations of extremists on Twitter and the Deep Web. Comparing the efficiency of the proposed model with baseline methods: our method outperforms baseline methods by a significant margin in terms of improved precision, recall, f-measure and accuracy. The rest of the article is organized as follows: related work is reviewed in the next section; the "Proposed approach" section presents the proposed methodology; the "Experimental setup" section describes the experiments and analyzes the results obtained; and the final section concludes the study and gives recommendations for future work. In this section, we present a review of relevant studies conducted on the classification of social media-based extremist affiliations. With the development of machine learning, it has gradually been applied to the analysis of extremist content and sentiments. Ferrara et al. [5] applied machine learning techniques to social media text to detect the interaction of extremist users. The proposed system was evaluated on a set of more than 20,000 tweets generated from extremist accounts, which were later suspended by Twitter. The main emphasis was on three tasks, namely: (i) detection of extremist users, (ii) identifying users who adopt extremist content, and (iii) predicting users' responses to extremists' postings. The experiments are conducted in two dimensions, i.e. time-independent and real-time prediction tasks. An accuracy of about 93% is achieved with respect to extremist detection. With the same purpose, a machine learning-based technique is proposed by [7] for classifying extremist affiliations. The Naïve Bayes algorithm is applied with a classical feature set. The system is based on the classification of user reviews into positive and negative classes, with less focus on identifying which sentiment class (positive or negative) is associated with extremist communication. In contrast to the work of Ferrara et al. [5], which mainly emphasizes the classification of extremist affiliations on skewed data, their method applies the NB algorithm to balanced data, giving more robust results. However, the overall dependencies in the sentence are not considered. This issue can be handled by applying deep learning models based on word embedding features. Researchers have also begun to investigate various ways of automatically analyzing extremist affiliations in languages other than English. In this connection, Hartung et al. [13] proposed a machine learning technique for detecting extremist posts in German Twitter accounts. Different features are investigated, such as emotions, linguistic patterns, and textual clues. The system yielded improved results over the state-of-the-art works. Similar studies on classifying social media content are also noticeable in the domain of illegal drug usage. For example, in their work on marijuana-related microblogs, Nguyen et al. [14] collected more than thirty thousand tweets pertaining to marijuana during 2016.
The text mining technique provides some useful insights into the acquired data, such as: (i) user attitude can be categorized as positive or negative, (ii) more than 65% of tweets originate from mobile phones, and (iii) the frequency of tweets on weekends is higher than on other days. Lexicon-based unsupervised techniques for sentiment classification mainly rely on some sentiment lexicon and sentiment scoring modules [15]. As in other areas of sentiment analysis, extremist affiliation has been investigated by Ryan et al. [16], who proposed a novel technique based on part-of-speech tagging and sentiment-driven detection of extremist writers on web forums. The study was based on about 1 million posts from more than 25,000 distinct users active on four extremist forums. The proposed method was based on the user's sentiment score, computed by aggregating the number of negative posts, the duration of negative posting, and the severity of negative posts. The system can flexibly detect suspicious online activities of extremist users. In 2012, Chalothorn and Ellman [17] proposed a sentiment analysis model to analyze online radical posts using different lexical resources such as SentiWordNet, WordNet and the NLTK toolkit. The sentiment class and intensity of the text are computed. Initially, textual data were acquired from different web forums such as Montada and Qawem, and after performing the necessary pre-processing tasks, different feature-driven measures were applied to detect and analyze religious and extremist content. Experimental results show that the Montada forum has more positive postings than the Qawem forum. It was concluded that the Qawem forum suffers from more radical postings. Another noteworthy work was carried out by [18], collecting a huge dataset from a YouTube group with extremist ideology. Different sentiment analysis techniques were applied to examine the topics under discussion, which were classified into positive (radical) and negative (non-radical) classes. Furthermore, gender-wise sentiments were also highlighted to observe the opinions expressed by male and female users. Unsupervised techniques like clustering have successfully been applied in different domains like aspect-based sentiment analysis [19], stock prediction [20] and sentiment classification. Skillicorn [21], in his work on crime investigations, proposed a framework for the adversarial analysis of data. The framework comprises three major segments: data collection, detection of suspects, and identification of suspicious individuals using network-driven association methods. Another method is based on data clustering and visual analysis techniques for the investigation and implementation of terrorism informatics [22]. For this purpose, the authors used Twitter to detect and classify terrorist events by utilizing civilian sentiment. Hybrid approaches for developing sentiment-based applications have received considerable attention from researchers in different domains, such as business, health care and politics [23]. In such approaches, features of supervised, unsupervised and semi-supervised techniques are combined [19]. In the context of extremist affiliation classification, Zeng et al. [24] worked on the Chinese text segmentation issue in the terrorism domain using a suffix tree and mutual information. The core module uses mutual information and the suffix tree for processing data in the terrorism domain. The technique is applicable to processing huge amounts of Chinese textual data.
Analyzing militant conversations online, Prentice et al. [25] investigated the intent and content generated during militants' conversations on social media with respect to the Gaza violence of 2008/2009. Over 50 online text conversations were analyzed by applying both qualitative and quantitative techniques. Their proposed system includes a manual coding approach to detect the presence of persuasive metaphors and the semantics of the underlying text. The aforementioned studies on detection and classification of social media-based extremist affiliations have used different approaches, such as supervised machine learning, unsupervised techniques (lexicon-based and clustering-based), and hybrid models. However, there is a need to investigate the applicability of state-of-the-art sentiment-based deep learning models for classifying extremist affiliations using social media content. Proposed approach We used the Twitter streaming API [26] to scrape tweets containing one or more extremism-related keywords (ISIS, bomb, suicide, etc.). Furthermore, we also investigated different Dark Web forums, such as Al-Firdaws, Montada, alokab, and Islamic Network [27]. The first three are Arabic forums and the fourth one is in English. We collected over 25,000 postings, translating the non-English postings to English using the Python-based Google Translate API (https://pypi.org/project/googletrans/). Each review is matched with the seed words present in the manually built extremist vocabulary lexicon acquired from BiSAL [28], a bilingual sentiment lexicon for analyzing Dark Web forums. In this way, all postings containing one or more keywords from the manually acquired lexicon are collected. For this purpose, we used a Python-based Beautiful Soup script (https://codeburst.io/web-scraping-101-with-python-beautiful-soup-bb617be1f486). The acquired data is stored in a machine-readable ".CSV" file. In this way, we acquired manually tagged training datasets for conducting experiments. The training dataset is comprised of 12,754 tweets labeled as "extremist" and 8432 as "non-extremist" [29]. Table 1 shows the detail of the used dataset. Table 2 shows a sample list of frequently occurring terms in the dataset, showing term frequency (tf), document frequency (df) and user frequency (uf). Table 1 Dataset statistics Table 2 Top 25 frequently occurring terms We applied different preprocessing techniques, such as tokenization, stop word removal, case conversion, and special symbol removal [30]. The tokenization yields a set of unique tokens (356,242), which assist in building a vocabulary from the training set, used for encoding the text. Training, validation and testing We divided the dataset into three parts: train, validate, and test. The DL model is trained with the Keras library [31] based on TensorFlow. The hardware setup includes 4 Titan X GPUs and 128 GB of memory on an Intel Core i7 node. Figure 1 shows a diagrammatic representation of the train, validation and test split. Train, validation and test split Training data is used to train the model. In this work, 80% of the data is used for training, and this proportion may vary as per the requirements of the experiment. The training data includes both the input and the corresponding expected output [32]. Table 3 shows a sample list of review sentences in training data.
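A rough Python sketch of the data preparation just described is given below (before the sample listings in Tables 3 and 4). The file and column names are hypothetical, the sequence length is an assumed value, and since the stated 80%/10%/20% proportions do not sum to 100%, the sketch assumes an 80/10/10 split.

import pandas as pd
from sklearn.model_selection import train_test_split
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

# Hypothetical file/column names; the paper's .CSV layout is not given.
data = pd.read_csv('tweets.csv')                       # columns assumed: 'text', 'label'
texts = data['text'].str.lower()                       # case conversion
labels = (data['label'] == 'extremist').astype(int)    # 1 = extremist, 0 = non-extremist

# Assumed 80/10/10 split into train / validation / test.
x_train, x_rest, y_train, y_rest = train_test_split(texts, labels, test_size=0.2, random_state=42)
x_val, x_test, y_val, y_test = train_test_split(x_rest, y_rest, test_size=0.5, random_state=42)

tokenizer = Tokenizer(num_words=10000)                 # "max features" value reported later in the paper
tokenizer.fit_on_texts(x_train)
to_padded = lambda t: pad_sequences(tokenizer.texts_to_sequences(t), maxlen=50)  # maxlen assumed
X_train, X_val, X_test = map(to_padded, (x_train, x_val, x_test))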
Table 3 A partial listing of training data Data validation is used to minimize overfitting and underfitting [33], which usually happen because accuracy in the training phase is often high while performance degrades on test data. Therefore, a 10% validation set is used to avoid performance error by applying parameter tuning. For this purpose, we applied automatic verification of the dataset [34], which provides an unbiased evaluation of the model and minimizes overfitting [35]. The test data (20%) is used to check whether the trained model performs well on unseen data. It is used for the final evaluation of the model when it is trained completely. A list of sample entries in the test data is presented in Table 4. Table 4 A partial listing of test data Proposed network model The proposed method implements and evaluates the performance of a long short-term memory (LSTM) with Convolutional Neural Network (CNN) model to identify tweets/reviews containing content with extremist clues. We train the neural classifier for the classification of extremist affiliation content. The working flow of the network comprises the following steps: (i) word embedding, in which each word in a sentence is assigned a unique index to form a fixed-length vector; (ii) a dropout layer is used to avoid overfitting; (iii) an LSTM layer is incorporated to capture long-distance dependencies across tweets/reviews; (iv) feature extraction is performed using a convolution operation; (v) a pooling layer minimizes the dimension of the feature map by aggregating information; (vi) a flatten layer converts the pooled feature map into a column vector; and (vii) at the output layer, a softmax function is used for the classification. Figure 2 presents a network diagram for classifying a sentence as "extremist" or "non-extremist" content. In the rest of this paper, we give the detailed working of these layers. LSTM + CNN architecture for extremist affiliation classification Word embedding (input) layer The embedding or input layer is the first layer of the LSTM + CNN model, which transforms the words into real-valued vector representations: a vocabulary of the words is created and then converted into a numeric form known as a word embedding. The word embedding is given as input (sentence matrix) to the next layer. As shown in the pseudocode, there are different parameters, namely (i) max features, (ii) embed dim, and (iii) input length. The "max_features" holds the top words and represents the size of the vocabulary; "embed_dim" is the dimension of the real-valued vector; and "input_length" is the length of each input sequence. The sentence consists of a sequence of words x1, x2, …, xn, and each word is assigned an exclusive index number. The embedding layer transforms such indices into D-dimensional word vectors. For this purpose, an embedding matrix of size [vocabulary size × embedding size] is learned; in this example it is a 10 × 4 matrix, i.e. [V × D] = [10 × 4]. Since in this case the vocabulary size is 10 and the embedding size is 4, the individual word "Baghdadi" is represented as a 1 × 4-dimensional vector, i.e. 1 × D = [1 × 4]. For example, the word "Baghdadi" with index "1" has the embedding vector [0.2, 0.4, 0.1, 0.2], represented by the first row shown in Fig. 3. Similarly, the second row is [0.6, 0.2, 0.8, 0.8], and the same holds for the others. Thus, we can clearly see that each word has an embedding of size "1 × D", as depicted in Fig. 3. The embedding matrix is denoted as $E \in R^{V \times D}$.
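A hedged Keras illustration of this embedding lookup, using the toy sizes from the example (V = 10, D = 4): the weight values here are random, not the ones shown in Fig. 3, and the snippet is written against TensorFlow 2.x-era Keras as used in the paper.

import numpy as np
from tensorflow.keras import layers, models

V, D, MAXLEN = 10, 4, 10   # toy vocabulary size, embedding size, sentence length
model = models.Sequential([layers.Embedding(input_dim=V, output_dim=D, input_length=MAXLEN)])

# Each word index is mapped to one row of the V x D embedding matrix E.
sentence = np.array([[1, 2, 3, 4, 5, 6, 7, 8, 9, 0]])   # e.g. index 1 = "Baghdadi"
vectors = model.predict(sentence)                        # shape: (1, MAXLEN, D)
print(vectors.shape)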
The word embedding process is illustrated in Fig. 3 (word representation in the input layer). Dropout layer The function of the dropout layer is to avoid overfitting. The value 0.5 represents the "rate" parameter of the dropout layer, and the value of this parameter falls between 0 and 1 [36]. The dropout layer randomly deletes or turns off the activation of neurons in the embedding layer (since the dropout is applied to the embedding layer), where each neuron in the embedding layer represents the dense representation of a word in a sentence. The modeling of dropout on a single neuron is presented in Eq. (1): $$f(k,p) = \left\{ \begin{array}{ll} p & \quad \text{if } k = 1 \\ 1 - p & \quad \text{if } k = 0 \end{array} \right.$$ Here k denotes the outcome and p is the probability related to the real-valued word representation. So, when k = 1 the neuron holding a real value is deleted (with probability p), and it remains activated otherwise. Figure 4 shows the working of the dropout layer. Operation of dropout layer Figure 4 illustrates the embedding layer, which holds the real-valued representation of the given sentence: "Baghdadi… our last and only hope, I simply love you." So, after adding a dropout layer, some of the values in the embedding layer are deactivated randomly (Fig. 4). Long short term memory We used a single LSTM layer, which consists of 100 LSTM cells/units. The LSTM performs some pre-calculations before it produces an output. In each cell, four independent calculations are performed using four gates: forget ($f_t$), input ($i_t$), candidate ($\tilde{c}_t$) and output ($o_t$). The equations for these gates are given below [37]: $$f_t = \sigma \left( W_f x_t + U_f h_{t-1} + b_f \right)$$ $$i_t = \sigma \left( W_i x_t + U_i h_{t-1} + b_i \right)$$ $$o_t = \sigma \left( W_o x_t + U_o h_{t-1} + b_o \right)$$ $$\tilde{c}_t = \tau \left( W_c x_t + U_c h_{t-1} + b_c \right)$$ $$c_t = f_t \circ c_{t-1} + i_t \circ \tilde{c}_t$$ $$h_t = o_t \circ \tau \left( c_t \right)$$ The graphical illustration of the entire LSTM cell (green block) is shown in Fig. 5. Long short term memory cell Now, the example sentence "Baghdadi… our last and only hope, I simply love you" is passed through the LSTM cell. The execution starts with the forget gate, using Eq. (2), in which the input $x_t$ and the previous output $h_{t-1}$ are multiplied by their respective weights $W_f$ and $U_f$. Next, the bias $b_f$ is added, which outputs a (4 × 1) vector. Then the sigmoid activation function is applied to transform the values to between 0 and 1, and values greater than 0.5 are taken as 1 for the three gates $f_t$, $i_t$, $o_t$, as shown in the computation below [38]. Next, for the input and output gates, the same procedure is repeated, and for the candidate gate, the hyperbolic tangent function is used instead of the sigmoid. Finally, the output $h_t$ and the next cell state $c_t$ vectors are calculated using Eqs. (6) and (7). Putting values in Eqs.
(2), (3), (4), (5), (6), (7) $${\text{f}}_{\text{t}} = \sigma \left( {\left[ {\begin{array}{*{20}c} {0.1} & {0.3} & {0.2} & {0.4} \\ {0.2} & {0.1} & {0.5} & {0.3} \\ {0.3} & {0.6} & {0.3} & {0.2} \\ {0.1} & {0.2} & {0.5} & {0.3} \\ \end{array} } \right] \times \left[ {\begin{array}{*{20}c} {0.2} \\ {0.4} \\ {0.1} \\ {0.2} \\ \end{array} } \right] + \left[ {\begin{array}{*{20}c} {0.1} & {0.2} & {0.5} & {0.6} \\ {0.3} & {0.4} & {0.1} & {0.7} \\ {0.2} & {0.3} & {0.5} & {0.4} \\ {0.4} & {0.1} & {0.3} & {0.2} \\ \end{array} } \right] \times \left[ {\begin{array}{*{20}c} 0 \\ 0 \\ 0 \\ 0 \\ \end{array} } \right] + \left[ {\begin{array}{*{20}c} {0.1} \\ {0.2} \\ {0.3} \\ {0.4} \\ \end{array} } \right]} \right) = \left[ {\begin{array}{*{20}c} 1 & 1 & 1 & 1 \\ \end{array} } \right]$$ $${\text{i}}_{\text{t}} = \sigma \left( {\left[ {\begin{array}{*{20}c} {0.4} & {0.3} & {0.1} & {0.5} \\ {0.2} & {0.6} & {0.4} & {0.1} \\ {0.3} & {0.5} & {0.6} & {0.7} \\ {0.2} & {0.1} & {0.4} & {0.3} \\ \end{array} } \right] \times \left[ {\begin{array}{*{20}c} {0.2} \\ {0.4} \\ {0.1} \\ {0.2} \\ \end{array} } \right] + \left[ {\begin{array}{*{20}c} {0.5} & {0.1} & {0.4} & {0.6} \\ {0.4} & {0.3} & {0.2} & {0.7} \\ {0.6} & {0.2} & {0.4} & {0.8} \\ {0.2} & {0.7} & {0.8} & {0.1} \\ \end{array} } \right] \times \left[ {\begin{array}{*{20}c} 0 \\ 0 \\ 0 \\ 0 \\ \end{array} } \right] + \left[ {\begin{array}{*{20}c} {0.1} \\ {0.3} \\ {0.4} \\ {0.2} \\ \end{array} } \right]} \right) = \left[ {\begin{array}{*{20}c} 1 & 1 & 1 & 1 \\ \end{array} } \right]$$ $${\text{c}} \sim {\text{t}} = \tau \left( {\left[ {\begin{array}{*{20}c} {0.3} & {0.2} & {0.5} & {0.6} \\ {0.1} & {0.3} & {0.4} & {0.2} \\ {0.5} & {0.6} & {0.7} & {0.1} \\ {0.4} & {0.3} & {0.2} & {0.7} \\ \end{array} } \right] \times \left[ {\begin{array}{*{20}c} {0.2} \\ {0.4} \\ {0.1} \\ {0.2} \\ \end{array} } \right] + \left[ {\begin{array}{*{20}c} {0.5} & {0.1} & {0.4} & {0.6} \\ {0.4} & {0.3} & {0.2} & {0.7} \\ {0.6} & {0.2} & {0.4} & {0.8} \\ {0.2} & {0.7} & {0.8} & {0.1} \\ \end{array} } \right] \times \left[ {\begin{array}{*{20}c} 0 \\ 0 \\ 0 \\ 0 \\ \end{array} } \right] + \left[ {\begin{array}{*{20}c} {0.4} \\ {0.1} \\ {0.2} \\ {0.3} \\ \end{array} } \right]} \right) = \left[ {\begin{array}{*{20}c} {0.6} & {0.3} & {0.5} & {0.5} \\ \end{array} } \right]\,$$ $${\text{o}}_{t} = \sigma \left( {\left[ {\begin{array}{*{20}c} {0.2} & {0.1} & {0.5} & {0.6} \\ {0.3} & {0.4} & {0.2} & {0.1} \\ {0.1} & {0.5} & {0.6} & {0.7} \\ {0.6} & {0.4} & {0.3} & {0.2} \\ \end{array} } \right] \times \left[ {\begin{array}{*{20}c} {0.2} \\ {0.4} \\ {0.1} \\ {0.2} \\ \end{array} } \right] + \left[ {\begin{array}{*{20}c} {0.1} & {0.2} & {0.5} & {0.3} \\ {0.6} & {0.3} & {0.2} & {0.4} \\ {0.7} & {0.4} & {0.1} & {0.8} \\ {0.8} & {0.5} & {0.2} & {0.3} \\ \end{array} } \right] \times \left[ {\begin{array}{*{20}c} 0 \\ 0 \\ 0 \\ 0 \\ \end{array} } \right] + \left[ {\begin{array}{*{20}c} {0.7} \\ {0.6} \\ {0.5} \\ {0.4} \\ \end{array} } \right]} \right) = \left[ {\begin{array}{*{20}c} 1 & 1 & 1 & 1 \\ \end{array} } \right]$$ $${\text{c}}_{\text{t}} = \left[ {\begin{array}{*{20}c} 1 & 1 & 1 & 1 \\ \end{array} } \right] \cdot \left[ {\begin{array}{*{20}c} 0 & 0 & 0 & 0 \\ \end{array} } \right] + \left[ {\begin{array}{*{20}c} 1 & 1 & 1 & 1 \\ \end{array} } \right] \cdot \left[ {\begin{array}{*{20}c} {0.6} & {0.3} & {0.5} & {0.5} \\ \end{array} } \right]$$ $${\text{h}}_{\text{t}} = \left[ {\begin{array}{*{20}c} 1 & 1 & 1 & 1 \\ \end{array} } \right] \cdot \tau \left[ {\begin{array}{*{20}c} {0.6} & {0.3} 
& {0.5} & {0.5} \\ \end{array} } \right] = \left[ {\begin{array}{*{20}c} {0.5} & {0.2} & {0.4} & {0.4} \\ \end{array} } \right]$$ From here, we pass our $h_t$ and $c_t$ to begin the next LSTM cell's calculations. As a result, the LSTM produces an output sequence P = [p_0, p_1, p_2, …, p_l] as a matrix \(P \in R^{l \times w}\). Finally, this representation is fed to the CNN layer. Convolutional layer In this layer, a convolutional operation is performed, which is a mathematical operation applied to two functions, yielding a third function. To perform the convolutional operation, the dimensions of the input matrix (P), filter matrix (F) and output matrix (T) are represented as follows: $$P \in R^{l \times w}$$ In Eq. (8), P represents the input matrix produced by the LSTM layer, R denotes the real numbers, l is the length and w is the width of the input matrix, which here is $R^{10 \times 4}$. $$F \in R^{n \times m}$$ In Eq. (9), F represents the filter matrix, n is its length and m is its width, which here is $R^{2 \times 2}$, and $$T \in R^{s \times d}$$ In Eq. (10), T represents the output matrix, s is its length and d is its width, which here is $R^{10 \times 4}$. The convolutional operation is formulated as shown in Eq. (11): $$t_{i,j} = \sum_{l = 1}^{n} \sum_{w = 1}^{m} f_{l,w} \otimes p_{i + l - 1,\, j + w - 1}$$ where $t_{i,j} \in R^{s \times d}$ is an element of the output matrix, $f_{l,w} \in R^{n \times m}$ is an element of the weight (filter) matrix, ⊗ denotes element-wise cross multiplication, and $p_{i+l-1,\, j+w-1} \in R^{l \times w}$ is an element of the input matrix. For the example sentence "Baghdadi… our last and only hope, I simply love you", the convolutional operation is executed as follows: (i) elements of the input matrix: 0.5, 0.2, 0.2, 0.7; (ii) elements of the filter matrix: 0.7, 0.4, 0.9, 0.5; and (iii) the convolutional operation: 0.5 × 0.7 + 0.2 × 0.4 + 0.2 × 0.7 + 0.7 × 0.5 = 0.92, where 0.92 is the first element of the output matrix. Similarly, the process of element-wise cross multiplication and addition continues until all the values of the input matrix are covered; this is done through the sliding of the filter over the input matrix. Feature map After adding a bias and an activation function to the output matrix, a feature map (A) for a given sentence is computed as follows [see Eq. (12)]: $$A = a_{i,j} = f\left( t_{i,j} + b \right)$$ where the dimension of the feature map (A) for a given sentence is $R^{q \times r} = R^{10 \times 4}$, $a_{i,j} \in R^{q \times r}$ is an element of the feature map, b is a bias term, and f is an activation function. In Fig. 2, each element of the output matrix is added to the bias term after the convolutional operation. For example, adding a bias value to the first element of the output matrix gives 0.92 + 1 = 1.92, which is the first element of the feature map for the given sentence (Fig. 2). As shown in Algorithm 1, the parameters used in this layer are: (i) "filters", which describes the number of filters within the convolutional layer; (ii) "kernel_size", which gives the dimensionality of the convolutional window; (iii) "padding", which holds a single value among "valid", "same", and "causal". If padding has the value "valid", there is no padding; padding with the value "same" means the original input length equals the output length; and when padding has the value "causal", it produces a dilated convolution; and (iv) the parameter "activation = relu" means the ReLU activation is used to introduce nonlinearity.
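To make the convolution arithmetic concrete before the activation and pooling steps described next, here is a small NumPy sketch of a generic 'valid' 2 × 2 convolution followed by bias and ReLU (illustrative only; the specific numbers of the worked example are not reproduced, and note that 'valid' padding shrinks a 10 × 4 input to 9 × 3, whereas the 'same' padding mentioned above preserves the 10 × 4 shape).

import numpy as np

def conv2d_valid(P, F, bias=1.0):
    """2-D 'valid' convolution of input P with filter F, then bias + ReLU."""
    l, w = P.shape
    n, m = F.shape
    T = np.zeros((l - n + 1, w - m + 1))
    for i in range(T.shape[0]):
        for j in range(T.shape[1]):
            # element-wise multiply the current window by the filter and sum
            T[i, j] = np.sum(P[i:i + n, j:j + m] * F)
    return np.maximum(T + bias, 0.0)   # rectified feature map: ReLU(output + bias)

P = np.random.rand(10, 4)   # stand-in for the LSTM output matrix (l x w = 10 x 4)
F = np.random.rand(2, 2)    # 2 x 2 filter
print(conv2d_valid(P, F).shape)   # (9, 3) for a 'valid' convolution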
Finally, a feature map is generated, on which the ReLU activation function is applied to introduce non-linearity; its mathematical expression is Output = max(0, Input), where Input is an element of the feature map. For example, for the input sentence, the first element of the feature map is 1.92. If we apply the ReLU activation function to it, then Output = max(0, 1.92) = 1.92, since 1.92 > 0. In this way, the other elements of the rectified feature map for the given sentence are calculated. Pooling layer The pooling layer is used to minimize the dimension of the feature map by aggregating the information, so max pooling is applied to every sentence in the dataset. We used max pooling to get the required feature of a sentence by selecting the maximum value. Equation (13) presents the formula for the pooling layer. $$g_{p,h} = \max_{i,j \in Z_{p,h}} a_{i,j}$$ Suppose $Z_{p,h}$ is a small window of size 2 × 2, $g_{p,h}$ is an element of the matrix $G \in R^{5 \times 2}$, and $a_{i,j}$ is an element of matrix A. The elements $g_{p,h}$ of matrix G are obtained by picking the maximum element of matrix A (the rectified feature map for a given sentence) within the given window $Z_{p,h}$. Hence, the matrix G is the pooled feature map of the given sentence, as illustrated in Fig. 6. A pooled feature map is created by setting a window size (2 × 2), placing it on the feature map, and extracting the maximal element inside the window. Since, in our case, within the selected window max(1.92, 2.3, 2.33, 2.22) = 2.33, this largest element becomes the first element of the pooled feature map for the given sentence. The same procedure is performed for the other values of the pooled feature map. Flatten layer The flatten layer of the Convolutional Neural Network transforms the pooled feature map into a column vector, which is fed as input to the neural network for the classification task [39]. The column vector represents the feature map for the given sentence. To concatenate the rows of the feature map, the pooled feature map is flattened through the reshape function of NumPy, as shown in Eq. (14): $$Flattening = pooled.reshape(4*2,\ 1)$$ This equation takes row 1, row 2, row 3 and so on, and appends them all to make a single column vector. Figure 7 describes the function of the flatten layer, in which the matrix shows the pooled feature map for a sentence and the single column vector shows the result of the flattening operation on that pooled feature map. At the output layer, an activation function like sigmoid, tanh, or softmax is applied to compute the probability for the two classes, i.e. extremist and non-extremist. For example, the input text "Baghdadi… our last and only hope, I simply love you" is tagged as "extremist" when passed through the proposed model. In Fig. 8 the classification of the input vector is performed using the softmax function. The net input is obtained by applying Eq. (15): $$u_{j} = \sum_{i}^{l} w_{i} x_{i} + b$$ where w represents the weight vector, x represents the input vector and b is a bias term. Applying softmax function for classification Softmax layer The following are additional functions used in the LSTM + CNN model, as shown in Algorithm 1. Compile function The compile method is used for model configuration.
It covers different parameters: (i) Loss, the objective function; (ii) Optimizer, an instance/name of the optimizer used for model compilation; and (iii) Metrics, which holds the evaluation metrics. Summary of the model The summary of the model is shown using the summary function after model creation. Fitting a model This section of the pseudo-code involves training on the required dataset, after which evaluation of the model is performed on the test dataset. Finally, the accuracy is the output of the model. A sample implementation code is given in Additional file 1: Appendix A. Analyzing sentiments of users w.r.t emotional affiliation with extremists This module deals with emotion classification of the users' sentiments showing affiliation with the extremists' postings. Each input text is tagged with an emotion category using the Python-based tone analyzer API [11]. It returns an emotion from the set {anger, sadness, fear, joy, confident, analytical, tentative}. In this module, an input text is analyzed and the corresponding emotion class is identified. For example, when the input text "Great news, ISIS fight Afghan forces to capture Helmand.." is passed through the emotion analyzer, it returns the emotion class "joy". A sample set of user reviews and the detected emotion classes is shown in Table 5.
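Returning to the classification model, the layer stack and the compile/summary/fit calls described in this section could be assembled in Keras roughly as follows. This is a hedged sketch rather than the authors' Appendix A code: it uses the 100-unit/16-filter values of the best variant reported in Table 8 (the RQ2 parameter setting instead lists 200 units and 200 filters), the batch size and epochs mentioned in the text, and an assumed sequence length.

from tensorflow.keras import layers, models

MAX_FEATURES, EMBED_DIM, MAXLEN = 10000, 128, 50   # MAXLEN is an assumed sequence length

model = models.Sequential([
    layers.Embedding(input_dim=MAX_FEATURES, output_dim=EMBED_DIM, input_length=MAXLEN),
    layers.Dropout(0.5),                                    # dropout rate used in the text
    layers.LSTM(100, return_sequences=True),                # 100 LSTM units (Table 8 best variant)
    layers.Conv1D(filters=16, kernel_size=2, padding='same', activation='relu'),
    layers.MaxPooling1D(pool_size=2),
    layers.Flatten(),
    layers.Dense(2, activation='softmax'),                  # extremist / non-extremist
])

model.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.summary()
# model.fit(X_train, y_train, validation_data=(X_val, y_val), batch_size=32, epochs=3)
# loss, acc = model.evaluate(X_test, y_test)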
After conducting experiments with varying parameter setting of LSTM + CNN models, it is noted that the performance of LSTM + CNN8 model is better with lstm units = 100 (cells), pooling size = 2 × 2, and number of filters = 16 and it's achieved accuracy is 92.66%. It is noted that the accuracy of the model increases by increasing the number of filters. Table 9 LSTM + CNN models training time, loss score and accuracy Answer to RQ2: What is the performance of classical feature sets liken-grams, bag-of-words, TF-IDF, bag of-words (BoW) over word embedding learned using CNN, LSTM, FastText, and GRU? Firstly, we discuss a few baseline/state-of-the-art methods, in all such techniques, a feature vector is created for a given tweet, which is applied as its feature set with the classifier. Baseline methods To conduct experiments, we used different classifiers for the following three feature representation techniques in the baselines [7, 8, 12]: (i) n-gram It is the state-of-the-art technique [12]) (ii) bag-of-words The bag-of-word, also called Count vectorizer technique, makes use of word frequency [7, 13], and (iii) TF-IDF A feature vector is created for text classification [8]. Deep learning methods with variants Different variants of DL techniques (CNN + Random Embedding, LSTM + Random Embedding, FastText + Random Embedding, and GRU + Random Embedding), are used for tweet classification as extremist and non-extremist. The results are reported in Table 10. The proposed LSTM + CNN performs better than the other DL methods. Table 10 Comparison of proposed work with baseline methods Proposed method For the extremist classification task, we experimented different DL-based SA models, namely CNN, LSTM, FastText and GRU, initialized with random word embeddings. The proposed LSTM + CNN model for extremist classification outperforms baseline methods (Part A of Table 10) and it also performs better than the other DL models (Part B of Table 10). Parameter setting for LSTM + CNN Following parameters are used for the LSTM + CNN model namely: (i) Max features (10 000), (ii) embedding dimension (128), (iii) LSTM unit size (200), convolutional filter size (200), and (iv) a batch size 32 with 3 epochs which yielded best performance results as shown in (Part C of Table 10). Answer to RQ3#: What is the performance of proposed model for extremist affiliation classification with respect to state-of-the-art methods? To find an answer for RQ3, we conducted experiments using supervised, lexicon-based, benchmark proposed technique for extremist Classification on Twitter. Supervised techniques In Table 11, the results of the proposed method (LSTM + CNN), are compared with different state-of-the-art techniques [7, 8] based on machine classifiers, namely, KN-Neighbors, Naïve Bayes Classifier. Furthermore, we also applied other machine and deep learning classifiers, namely Random Forest, Support Vector Machine, LSTM, and CNN. The objective of the experimentation is to perform extremist classification of Tweets using different ML and DL classifiers. The performance evaluation results of various machine learning classifiers are presented in terms of accuracy, precision, recall, and f-measure. The KNN exhibited the lowest performance result (72% accuracy). Table 11 Comparison with state-of-art techniques Lexicon-based technique The SentiWordNet is used [17] to classify the reviews as extremist or nob-extremist using sentiment analysis. 
The reviews tagged as positive are treated as having affiliations with extremist, whereas negative reviews are treated as non-extremist reviews. The experimental results are presented in Table 11. The proposed technique (LSTM + CNN) produced the best results (Table 11), when compared with the other comparing methods. Answer to RQ4: How to perform the sentiment classification of user reviews w.r.t emotional affiliations of Extremists on Twitter and Deep Web? To find an answer for RQ2, we conducted experiments using tone analyzer API [11] for Emotion Classification of User reviews w.r.t Extremist's Affiliation on Dark Web. Results reported in Table 12 show that the proposed module for emotion classification outperforms the comparing supervised machine learning classifiers in terms of improved accuracy, precision, recall, and F-measure. Table 12 Comparative results of sentiments of users w.r.t emotional affiliation with extremists Conclusions and future work This study presents a sentiment-based extremist classification system based on users' postings made on Twitter. The proposed work operates in three modules: (i) users' tweet collection, (ii) preprocessing, and (iii) classification with respect to extremist and non-extremist classes using LSTM + CNN model and other ML and DL classifiers. The experimental results show that the proposed system outperformed the comparing methods in terms of better precision, recall, f-measure, and accuracy. However, the system has certain limitations, such as (i) lack of an automated method for crawling, cleaning and storing Twitter content, (ii) lack of considering visual and social context features for obtaining more robust results, and (iii) investigating other types of extremism by apply the DL methods for multi-class label classification. Our future work aims at applying more advanced techniques, such as attention-based mechanism for extremist affiliation detection with multi-class labels. Furthermore, the inclusion of context-aware features can also improve the performance of the system. In the original publication of this article [1], the Acknowledgements and Funding section in Declarations need to be revised. AI: ML: DL: CNN: LSTM: long short-term memory TF–DF: term frequency and inverse document frequency BoW: bag of words KNN: SVM: Support Vector Machine Naïve Bayesian NLTK: Natural Language Toolkit Hao F, Park DS, Pei Z (2018) When social computing meets soft opportunities and insights. Human-centric Comput Inform Sci 8(8):1–18 Hao F, Li S, Min G, Kim HC, Yau SS, Yang LT (2015) An efficient approach to generating location-sensitive recommendations in Ad hoc social network environments. IEEE Trans Serv Comput. 8(3):520–533 Hao F, Min G, Pei Z, Park DS, Yang LT (2017) k-clique communities detection in social networks based on formal concept analysis. IEEE Syst J. 11(1):250–259 Iskandar B (2017) Terrorism detection based on sentiment analysis using machine learning. J Eng Appl Sci 12–3:691–698 Ferrara E, Wang WQ, Varol O, Flammini A, Galstyan A (2016) Predicting online extremism, content adopters, and interaction reciprocity. International conference on social informatics. Springer, New York, pp 22–39 Badjatiya P, Gupta S, Gupta M, Varma V (2017) Deep learning for hate speech detection in tweets. In: Proceedings of the 26th international conference on world wide web companion. International World Wide Web conferences steering committee, pp 759–760 Azizan SA, Aziz IA (2017) Terrorism detection based on sentiment analysis using machine learning. 
J Eng Appl Sci 12(3):691–698 Wei Y, Singh L, Marti S (2016) Identification of extremism on Twitter. Proceedings of the IEEE/ACM international conference on advances in social networks analysis and mining. IEEE, New Jersey, pp 1251–1255 Zhang H, Wang J, Zhang J, Zhang X (2017) Ynu-hpcc at semeval 2017 task 4: using a multi-channel cnn-lstm model for sentiment classification. In: Proceedings of the 11th international workshop on semantic evaluation (SemEval-2017) pp 796–801 Yenter A, Verma A (2017) Deep CNN-LSTM with combined kernels from multiple branches for IMDB review sentiment analysis. In: 2017 IEEE 8th annual ubiquitous computing, electronics and mobile communication conference (UEMCON). IEEE. pp 540–546 IBM Watson tone analyzer. https://www.ibm.com/watson/developercloud/tone-analyzer/api/v3/. Accessed 19 July 2018 Sureka A, Agarwal S, Schmidtke F (2014) Learning to classify hate and extremism promoting tweets. 2014 IEEE Joint intelligence and security informatics conference (JISIC). IEEE, New Jersey, p 320 Hartung M, Klinger R, Schmidtke F, Vogel L (2017) Identifying right-wing extremism in german Twitter profiles: a classification approach. International conference on applications of natural language to information systems. Springer, Cham, pp 320–325 Nguyen A, Hoang Q, Nguyen H, Nguyen D, Tran T (2017) Evaluating marijuana-related tweets on Twitter. IEEE 7th annual computing and communication workshop and conference (CCWC). IEEE, New Jersey, pp 1–7 Asghar MZ, Khan A, Ahmad S, Qasim M, Khan IA (2017) Lexicon-enhanced sentiment analysis framework using rule-based classification scheme. PLoS ONE 12(2):e0171649 Ryan S, Garth D, Richard F (2018) Searching for signs of extremism on the web: an introduction to sentiment-based identification of radical authors. Behav Sci Terror Pol Aggres 10:39–59. https://doi.org/10.1080/19434472.2016.1276612 Chalothorn T, Ellman J (2012) Using SentiWordNet and sentiment analysis for detecting radical content on web forums Bermingham A, Conway M, McInerney L, O'Hare N, Smeaton AF (2009) Combining social network analysis and sentiment analysis to explore the potential for online radicalisation. In: IEEE international conference on advances in social network analysis and mining, ASONAM'09. pp 231–236 Asghar MZ, Khan A, Zahra SR, Ahmad S, Kundi FM (2017) Aspect-based opinion mining framework using heuristic patterns. Cluster Comput pp 1–19 Asghar MZ, Rahman F, Kundi FM, Ahmad S (2019) Development of stock market trend prediction system using multiple regression. In: Computational and mathematical organization theory. pp 1–31 Skillicorn D (2011) Computational approaches to suspicion in adversarial settings. Inform Syst Front. https://doi.org/10.1007/s10796-010-9279-4 Cheong M, Lee VC (2011) A microblogging-based approach to terrorism informatics: exploration and chronicling civilian sentiment and response to terrorism events via Twitter. Inform Syst Front 13–1:45–59 Asghar MZ, Kundi FM, Ahmad S, Khan A, Khan F (2018) T-SAF: Twitter sentiment analysis framework using a hybrid classification scheme. Expert Syst 35(1):e12233 Zeng D, Wei D, Chau M, Wang F (2011) Domain-specific Chinese word segmentation using suffix tree and mutual information. Inform Syst Front. https://doi.org/10.1007/s10796-010-9278-5 Prentice S, Taylor P, Rayson P, Hoskins A, O'Loughlin B (2011) Analyzing the semantic content and persuasive composition of extremist media: a case study of texts produced during the Gaza conflict. Inform Syst Front. 
https://doi.org/10.1007/s10796-010-9272-y Consuming streaming data. https://developer.twiiter.com/en/docs/tutoroals/consuming-streaming-data.html. Accessed 19 July 2018 Zhang Y, Zeng S, Fan L, Dang Y, Larson CA, Chen H (2009) Dark web forums portal: searching and analyzing jihadist forums. In: IEEE international conference on intelligence and security informatics, ISI'09. pp. 71–76 BiSAL-A Bilingual sentiment analysis lexicon to analyze dark web forums for cyber security Omer E (2015) Using machine learning to identify jihadist messages on Twitter Asghar MZ, Khan A, Khan F, Kundi FM (2018) RIFT: a rule induction framework for Twitter sentiment analysis. Arabian J Sci Eng 43–2:857–877 Erickson BJ, Korfiatis P, Akkus Z, Kline T, Philbrick K (2017) Toolkits and libraries for deep learning. J Digit Imaging 30–4:400–405 Chomba B (2018) What is the difference between a training set and a test set? https://www.quora.com/What-is-the-difference-between-a-training-set-and-a-test-set. Accessed 10 Dec 2018 Acharya A (2017) Comparative study of machine learning algorithms for heart disease prediction Jason B (2018) Evaluate the performance of deep learning models in Keras. https://machinelearningmastery.com/evaluate-performance-deep-learning-models-keras/#comment-460892. Accessed 14 Oct 2018 What's is the difference between train, validation and test set, in neural networks? https://stackoverflow.com/questions/2976452/whats-is-the-difference-between-train-validation-and-test-set-in-neural-netwo. Accessed 2 Dec 2018 A Gentle introduction to dropout for regularizing deep neural networks. https://machinelearningmastery.com/dropout-for-regularizing-deep-neural-networks/. Accessed 4 Nov 2018 Understanding LSTM cells using C#. https://msdn.microsoft.com/en-us/magazine/mt846470.aspx. Accessed 16 Oct 2018 A numerical example of LSTMs. https://statisticalinterference.wordpress.com/2017/06/01/lstms-in-even-more-excruciating-detail/. Accessed 05 Oct 2018 Convolutional Neural Networks (CNN): Step 3—flattening. https://www.superdatascience.com/blogs/convolutional-neural-networks-cnn-step-3-flattening. Accessed 10 Dec 2018 This article was funded by the Deanship of Scientific Research (DSR) at King Abdulaziz University, Jeddah. The authors, therefore, acknowledge with thanks DSR for technical and financial support. The Deanship of Scientific Research (DSR) at King Abdulaziz University, Jeddah, funded this article. The authors, therefore, acknowledge with thanks DSR for technical and financial support. Faculty of Computing and Information Technology at Rabigh (FCITR), King Abdulaziz University, Jeddah, Saudi Arabia Shakeel Ahmad Institute of Computing and Information Technology, Gomal University, Dera Ismail Khan (KP), Pakistan Muhammad Zubair Asghar Department of Information Systems, Faculty of Computing and Information Technology (FCIT), King Abdulaziz University, Jeddah, Saudi Arabia Fahad M. Alotaibi Department of Computing, University of Bradford, Bradford, UK Irfanullah Awan Conceptualization: SA, MZA. Data curation: FMA, IA. Formal analysis: SA, FMA. Investigation: MZA, FMA. Methodology: SA, MZA. Project administration: SA. Resources: SA, IA, FMA. Software: MZA, SA. Supervision: SA. Validation: MZA, IA. Visualization: SA, IA, Writing—original draft: MZA. Writing—review and editing: FMA, IA. All authors read and approved the final manuscript. Correspondence to Shakeel Ahmad. The research data used to support the findings of this study are available from the corresponding author upon request. 
Additional file 1: Appendix A. A sample implementation of LSTM+CNN for extremist classification. Ahmad, S., Asghar, M.Z., Alotaibi, F.M. et al. Detection and classification of social media-based extremist affiliations using sentiment analysis techniques. Hum. Cent. Comput. Inf. Sci. 9, 24 (2019). https://doi.org/10.1186/s13673-019-0185-6 Sentiment classification Extremist sentiments Extremist affiliations
Chapter Test: Quadratic Relations. 10 Principles of Mathematics (McGraw-Hill)

Sketch a graph of each parabola. Label the coordinates of the vertex and the equation of the axis of symmetry.
\displaystyle y = x^2 - 6
\displaystyle y = 2(x-5)^2
\displaystyle y = -\frac{1}{3}(x+3)^2 + 4

Sketch a graph of each relation. Label the x-intercepts and the vertex.
\displaystyle y = (x-6)(x + 2)
\displaystyle y = -4(x-1)(x - 9)

Determine an equation to represent each parabola.

a) 4^0 b) 5^{-1} c) (-3)^{-3} d) \left(\dfrac{3}{4}\right)^{-2}

The table shows the length of a spring under a specific load. a) Use finite differences to determine whether this is a quadratic relation. b) Make a scatter plot of the data. Draw a curve of best fit. c) Use your curve of best fit to predict the length of the spring under a load of 8 kg.

Board-feet are used to measure the total length, in feet, of boards that are 1 inch thick and 1 foot wide that can be cut from a tree to make lumber. You can use the equation l=0.011a^2-0.68a+13.31 (unfinished question)

The St. Louis Gateway Arch in St. Louis, Missouri, was built in 1965 and was designed as a catenary, which is a curve that approximates a parabola. The arch is 102 m wide and 102 m tall. a) Sketch a graph of the arch that is symmetrical about the y-axis. b) Label the x-intercepts and the vertex. c) Determine an equation to model the arch.

When a car is traveling at a given speed, there is a minimum turn radius it can safely make. A particular car's minimum radius can be calculated by r = 0.6s^2, where s is the speed, in kilometres per hour, and r is the turning radius, in metres. If the car uses tires with better grip, how does this affect the equation? Justify your response.

The maximum viewing distance on a clear day is related to how high you are above the surface of Earth. This relationship can be approximated by the formula \displaystyle h = \frac{3}{40}d^2 , where d is the maximum distance, in kilometres, and h is your height, in metres, above the ground. a) How high do you need to be in order to see a distance of 25 km? b) How would the formula change if you were standing on a 20 m cliff?

A volleyball's height, h, in metres, above the ground after t seconds is modelled by the relation h = -4.9t^2 + 5t + 2. a) Graph the relation. b) What is the h-intercept? What does it represent? c) How long will it take the volleyball to hit the ground? What feature on the graph models this? Explain your answer.

An ant colony has 5000 ants on July 1 and doubles every year. This can be expressed as N = 5000 \times 2^t, where N represents the number of ants and t represents time, in years. a) Find the number of ants after 2, 3, 4, and 5 years. b) What does t = 0 represent in this situation? What does t = -2 represent? c) When were there 625 ants? Explain.

The approximate cost of operating a certain car at a constant speed is given by the formula C = 0.006(s-50)^2 + 20, for 10 \leq s \leq 130, where s is the speed, in kilometres per hour, and C is the cost, in cents per kilometre. Use a graphing calculator to compare the operating costs, at different speeds, to those of a second vehicle with formula C = 0.008(s-55)^2+15.

a) Graph the following relations by developing a table of values and plotting points. Then, find the first and second differences.
\displaystyle \begin{array}{lllll} &y = x^2 -4x \\ &y = x^2 -4x + 5 \\ &y = x^2 -4x -2 \end{array} b) Examine each graph and use its properties to write an equation in the form y=(x-h)^2+k. c) What conclusions can you make about the relation y = x^2 - 4x + c for different values of 0?
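For the last question, the key fact is that a relation is quadratic exactly when its second differences are constant (for equally spaced x-values). The short Python sketch below is only an illustration of that check; it is not part of the test.

```python
# Illustrative sketch: first and second differences of y = x^2 - 4x + c.
# A relation is quadratic exactly when its second differences are constant
# (for equally spaced x-values).

def differences(values):
    """Return the list of consecutive differences of a list of numbers."""
    return [b - a for a, b in zip(values, values[1:])]

xs = list(range(-2, 7))          # equally spaced x-values
for c in (0, 5, -2):             # the three relations from the question
    ys = [x**2 - 4*x + c for x in xs]
    first = differences(ys)
    second = differences(first)
    print(f"c = {c:>2}: second differences = {second}")
    # Each line prints a constant 2, confirming that the relation is quadratic;
    # changing c shifts the graph vertically but does not affect the differences.
```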
Strengthening convex relaxations of 0/1-sets using Boolean formulas Full Length Paper Samuel Fiorini1, Tony Huynh2 & Stefan Weltge3 Mathematical Programming volume 190, pages 467–482 (2021)Cite this article In convex integer programming, various procedures have been developed to strengthen convex relaxations of sets of integer points. On the one hand, there exist several general-purpose methods that strengthen relaxations without specific knowledge of the set S of feasible integer points, such as popular linear programming or semi-definite programming hierarchies. On the other hand, various methods have been designed for obtaining strengthened relaxations for very specific sets S that arise in combinatorial optimization. We propose a new efficient method that interpolates between these two approaches. Our procedure strengthens any convex set containing a set \( S \subseteq \{0,1\}^n \) by exploiting certain additional information about S. Namely, the required extra information will be in the form of a Boolean formula \(\phi \) defining the target set S. The new relaxation is obtained by "feeding" the convex set into the formula \(\phi \). We analyze various aspects regarding the strength of our procedure. As one application, interpreting an iterated application of our procedure as a hierarchy, our findings simplify, improve, and extend previous results by Bienstock and Zuckerberg on covering problems. In convex integer programming, there exist various procedures to strengthen convex relaxations of sets of integer points. Formally, given a set \( S \subseteq {\mathbb {Z}}^n \) of integer points and a convex set \( Q \subseteq {\mathbb {R}}^n \) with \( Q \cap {\mathbb {Z}}^n = S \), these methods aim to construct a new convex set f(Q) satisfying \( S \subseteq f(Q) \subseteq Q \). On the one hand, there exist several general-purpose methods that strengthen relaxations without specific knowledge of the set S, in a systematic way. The hierarchies of Sherali and Adams [27], Lovász and Schrijver [21], and Lasserre [20], which are tailored to 0/1-sets \( S \subseteq \{0,1\}^n \), are methods of this type. On the other hand, various methods have been designed for obtaining strengthened relaxations for specific sets S. Such methods include, as an example, an impressive collection of families of valid inequalities of the traveling salesperson polytope that strengthen the classical subtour elimination formulation. Similar research has been performed for many other polytopes arising in combinatorial optimization, such as stable set polytopes and knapsack polytopes. In this work, we propose a new method that interpolates between the two approaches described above. We design a procedure to strengthen any convex set \( Q \subseteq {\mathbb {R}}^n \) containing a set \( S \subseteq \{0,1\}^n \) by exploiting certain additional information about S. Namely, the required extra information will be in the form of a Boolean formula \( \phi \) defining the target set S. Instead of viewing a Boolean formula as taking 0/1-vectors as input, the improved relaxation is obtained by "feeding" the convex set Q into the formula \( \phi \), and will be denoted by \( \phi (Q) \). While the formula \( \phi \) has to be provided as a further input, for certain problems, there is an "obvious" candidate for \(\phi \). For example, suppose that the set S arises from a 0/1-covering problem. 
That is, it is given by a matrix \( A \in \{0,1\}^{m \times n} \) such that \( S = \{ x \in \{0,1\}^n : Ax \geqslant \mathbf {1} \} \), where \( \mathbf {1} \) is the all-ones vector. Then S can be equivalently specified by the following Boolean formula in conjunctive normal form $$\begin{aligned} \phi := \bigwedge _{i = 1}^m \bigvee _{j : A_{ij} = 1} x_j. \end{aligned}$$ In addition, there is a vast literature on representing sets of 0/1-points via Boolean formulas, and we are free to use any of these formulas for our procedure. An important property of \( \phi (Q) \) is that it can be described by an extended formulation whose size is bounded by the size of the formula \( \phi \) times the size of an extended formulation defining the input relaxation Q. Recall that an extended formulation of size m of a polytope P is determined by matrices \( T \in {\mathbb {R}}^{n \times d} \), \( A \in {\mathbb {R}}^{m \times d} \) and vectors \( t \in {\mathbb {R}}^n \), \( b \in {\mathbb {R}}^m \) such that \( P = \{ x \in {\mathbb {R}}^n : \exists y \in {\mathbb {R}}^d : Ay \geqslant b, \, x = Ty + t\} \). Therefore, provided that \(\phi \) has polynomial size, our procedure is efficient in the sense that a small extended formulation for Q can be converted into a small extended formulation for \(\phi (Q)\) in polynomial time. To illustrate, the well-studied procedure [10] that maps a relaxation Q to its Chvátal-Gomory closure f(Q) is not efficient in the above sense (this is to be expected since determining membership in f(Q) is \(\mathsf {NP}\)-complete [12]). A striking example is given by choosing Q as the fractional matching polytope [26, Section 30.2]. In this case, the Chvátal-Gomory closure f(Q) is the matching polytope. It was recently shown by Rothvoß that all extended formulations of the matching polytope have exponential size [25]. Another property of our procedure is that it is complete in the sense that iterating it a finite number of times (in fact, at most n times) always yields the convex hull of S. Furthermore, our procedure can be applied to any convex set \( Q \subseteq [0,1]^n \) that contains the target set S. In particular, the set Q is even allowed to contain 0/1-points that do not belong to S. As an example, we can always apply our method with \( Q = [0,1]^n \), and thus finding an initial relaxation for \({\text {conv}}(S)\) is never an issue. Intuitively, this is possible since the information of which points belong to S is stored in \( \phi \). By viewing an iterated application of our procedure as a hierarchy, we obtain a significant simplification of the Bienstock-Zuckerberg hierarchy [7]. This is a powerful hierarchy tailored to 0/1-covering problems. However, one of its drawbacks is that the definition of the hierarchy is quite complicated. Prior to this work, it had been simplified by Mastrolilli [22] using a modification of the Sherali-Adams hierarchy that is based on appropriately defined high-degree polynomials. Subsequent to our work, it has also been simplified by Bienstock and Zuckerberg [9] themselves. Despite the simplicity of our method, we obtain extended formulations whose size is often vastly smaller than those of Bienstock and Zuckerberg [7, 8] and Mastrolilli [22]. We discuss this in more detail in Sect. 6. Another aspect of our work is that it should serve as a bridge between combinatorial optimization and circuit complexity. 
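Before returning to this connection with circuit complexity, we illustrate the covering formula (1) with a minimal Python sketch (purely illustrative; the helper names appear nowhere else). It builds the clause list of \(\phi \) from a 0/1 matrix A and evaluates \(\phi \) on 0/1 points, recovering exactly the feasible set S.

```python
# Minimal illustrative sketch of the CNF covering formula (1).
# Each row i of A contributes the clause OR_{j : A[i][j] = 1} x_j,
# and phi is the AND of all clauses.
from itertools import product

def covering_formula(A):
    """Return phi as a list of clauses; each clause is a list of variable indices."""
    return [[j for j, a in enumerate(row) if a == 1] for row in A]

def evaluate(clauses, x):
    """Evaluate the monotone CNF on a 0/1 vector x."""
    return all(any(x[j] == 1 for j in clause) for clause in clauses)

# Example: a 3-variable covering instance.
A = [[1, 1, 0],
     [0, 1, 1]]
phi = covering_formula(A)
S = [x for x in product((0, 1), repeat=3) if evaluate(phi, x)]
# S lists exactly the 0/1 points with Ax >= 1, e.g. (0, 1, 0) and (1, 0, 1).
print(S)
```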
As a concrete example, our procedure yields a very simple proof that Rothvoß' result [25] on the extension complexity of the matching polytope implies a seminal result of Raz and Wigderson [23, Theorem 4.1] on the size of monotone formulas required to describe the matching function. In the other direction, constructions of small formulas describing a set \( S \subseteq \{0,1\}^n \) can now be used to obtain small extended formulations for \({\text {conv}}(S)\). We give a few non-trivial examples of this in Sect. 5, but of course there are many more. For readers familiar with circuit complexity, we mention that our work is inspired by a relatively unknown connection between Karchmer-Wigderson games [18] and nonnegative factorizations, pioneered by Hrubeš [17]. This connection was recently rediscovered by Göös, Jain and Watson [15] and exploited in [3]. As such, the proofs of the main properties of our hierarchy are also very short. Indeed, the proof of our main theorem can be regarded as a "polyhedral" Karchmer-Wigderson game, but no knowledge of communication complexity is required. Paper Outline We start by describing our procedure to obtain \( \phi (Q) \) in Sect. 2. In Sect. 3, we introduce notions that allow us to quantify the strength of our relaxations. Our main results regarding properties of the set \( \phi (Q) \) are presented in Sect. 4. In Sect. 5, we discuss several applications of our method in detail. In Sect. 6 we compare our procedure to related work of Bienstock and Zuckerberg [7, 8] and Mastrolilli [22], and state some open problems. Description of the procedure In order to present the construction of the procedure, let us fix some notation concerning Boolean formulas. We consider formulas that are built out of input variables \( x_1,\dotsc ,x_n \), conjunctions \( \wedge \), disjunctions \( \vee \), and negations \( \lnot \) in the standard way. Here, we define the size [Footnote 1] of a Boolean formula as the total number of occurrences of input variables. We denote by \(|\phi |\) the size of \(\phi \). Given a Boolean formula \( \phi \), we can interpret it as a function from \( \{0,1\}^n \rightarrow \{0,1\} \) and for an input \( x = (x_1,\dotsc ,x_n) \in \{0,1\}^n \) we will denote its output by \( \phi (x) \). We say that the set \( S = \{ x \in \{0,1\}^n : \phi (x) = 1 \} \) is defined by \( \phi \). Two formulas are said to be equivalent if they define the same set. We say that a formula is reduced if negations are only applied to input variables. Note that, by De Morgan's laws, every Boolean formula can be brought into an equivalent reduced formula of the same size. As an example, the formulas $$\begin{aligned} \phi _1&= \lnot \left( \left( x_1 \wedge \lnot x_2 \right) \vee \left( \lnot \left( x_1 \vee x_3 \right) \right) \right) \nonumber \\ \phi _2&= \left( \lnot x_1 \vee x_2 \right) \wedge \left( x_1 \vee x_3 \right) \end{aligned}$$ are equivalent and both have size 4, but only the second is in reduced form. Below, we will repeatedly use the elementary fact that for every reduced formula \(\phi \) of size \(|\phi |\), one of the following holds: \(|\phi | = 1\) and either \(\phi = x_i\) or \(\phi = \lnot x_i\) for some \(i \in [n]\), or \(|\phi | \geqslant 2\) and \(\phi \) is either the conjunction or the disjunction of two reduced formulas \(\phi _1,\phi _2\) such that \( |\phi | = |\phi _1| + |\phi _2| \).
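The equivalence of \(\phi _1\) and \(\phi _2\) can be checked by brute force over \(\{0,1\}^3\). The following small sketch (illustrative only) encodes both formulas as ordinary Boolean expressions and compares the sets they define.

```python
# Illustrative check that phi_1 and phi_2 above define the same subset of {0,1}^3.
from itertools import product

def phi1(x1, x2, x3):
    # not((x1 and not x2) or not(x1 or x3))
    return not ((x1 and not x2) or (not (x1 or x3)))

def phi2(x1, x2, x3):
    # (not x1 or x2) and (x1 or x3)  -- the reduced form
    return (not x1 or x2) and (x1 or x3)

S1 = {x for x in product((0, 1), repeat=3) if phi1(*x)}
S2 = {x for x in product((0, 1), repeat=3) if phi2(*x)}
assert S1 == S2   # both formulas define the same set, i.e. they are equivalent
print(sorted(S1))
```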
This gives a way to represent any reduced Boolean formula as a rooted tree each of whose inner nodes is labeled with \(\wedge \) or \(\vee \) and each of whose leaves is labeled with a non-negated variable \(x_i\) or a negated variable \(\lnot x_i\). Note that there may be many trees that represent the same reduced Boolean formula, but this will not matter. Observe that the size of a formula is the number of leaves in any one of its trees. We are ready to describe our method to strengthen a convex relaxation of a given set of points in \( \{0,1\}^n \). Definition 1 Let \( \phi \) be a reduced Boolean formula with input variables \( x_1,\dotsc ,x_n \) and let \( Q \subseteq [0,1]^n \) be any convex set. The set \( \phi (Q) \subseteq {\mathbb {R}}^n \) is recursively constructed from the formula \( \phi \) as follows. Replace any non-negated input variable \( x_i \) by the set \( \{ x \in Q : x_i = 1 \} \). Replace any negated input variable \( \lnot x_i \) by the set \( \{ x \in Q : x_i = 0 \} \). Replace any conjuction \( \wedge \) of two sets by their intersection. Replace any disjunction \( \vee \) of two sets by the convex hull of their union. As an example, given any convex set \( Q \subseteq [0,1]^3 \) and the formula \( \phi _2 \) defined in (2), we have $$\begin{aligned} \phi _2(Q)&= {\text {conv}}\big ( \{ x \in Q : x_1 = 0 \} \cup \{ x \in Q : x_2 = 1 \} \big ) \\&\quad \cap {\text {conv}}\big ( \{ x \in Q : x_1 = 1 \} \cup \{ x \in Q : x_3 = 1 \} \big ). \end{aligned}$$ In the remainder of this work, we will analyze several properties of \( \phi (Q) \). One simple observation will be that, if S is defined by \( \phi \) and \( S \subseteq Q \), then \( S \subseteq \phi (Q) \subseteq Q \). Furthermore, \( \phi (Q) \) is strictly contained in Q unless \( {\text {conv}}(S) = Q \). In order to quantify this improvement over Q, we will introduce useful measures in the next section. Measuring the strength: pitch and notch We now introduce two quantities that measure the strength of our procedure. To this end, note that for every linear inequality in variables \( x_1, \dotsc , x_n \) we can partition [n] into sets \( I^+, I^- \subseteq [n] \) (with \( I^+ \cup I^- = [n] \) and \( I^+ \cap I^- = \emptyset \)) such that the inequality can be written as $$\begin{aligned} \sum _{i \in I^+} c_i x_i + \sum _{i \in I^-} c_i (1 - x_i) \geqslant \delta , \end{aligned}$$ where \( c = (c_1,\dotsc ,c_n)^\intercal \in {\mathbb {R}}^n_{\ge 0} \) and \( \delta \in {\mathbb {R}}\). Since we will only consider the intersection of \( [0,1]^n \) with the set of points satisfying such an inequality, we are only interested in inequalities where \( \delta \geqslant 0 \). In this case, we call (3) an inequality in standard form. The notch of an inequality in standard form is the smallest number \( \nu \) such that $$\begin{aligned} \sum _{j \in J} c_j \geqslant \delta \end{aligned}$$ holds for every \( J \subseteq [n] \) with \( |J| \geqslant \nu \), while its pitch is the smallest number \( p\) such that (4) holds for every \( J \subseteq {\text {supp}}(c) \) with \( |J| \geqslant p\). Note that the pitch of an inequality is at most its notch. For instance, the notch of the inequality \(x_1 + x_n \geqslant 1\) is \(n-1\), while its pitch equals 1. Both quantities appear in the study of Chvátal-Gomory closures of polytopes in \( [0,1]^n \). Intuitively, the notch of an inequality is related to how "deep" it cuts the 0/1-cube. 
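Both definitions reduce to a greedy computation: since the coefficients are nonnegative, the critical set J in (4) consists of the \(\nu \) smallest coefficients (for the notch) or the \(p\) smallest nonzero coefficients (for the pitch). The following minimal Python sketch only illustrates this computation; the helper names are not used elsewhere.

```python
# Illustrative computation of the notch and pitch of an inequality in standard form,
#   sum_{i in I+} c_i x_i + sum_{i in I-} c_i (1 - x_i) >= delta,   c >= 0, delta >= 0.
# Both quantities depend only on the multiset of coefficients c and on delta.
# (Assumes the inequality is satisfied by the point maximizing the left-hand side,
#  i.e. sum(c) >= delta.)

def notch(c, delta):
    """Smallest nu such that every J with |J| >= nu has sum_{j in J} c_j >= delta."""
    coeffs = sorted(c)          # worst-case J: the nu smallest coefficients
    total, nu = 0.0, 0
    while total < delta and nu < len(coeffs):
        total += coeffs[nu]
        nu += 1
    return nu                   # if delta == 0 this is 0

def pitch(c, delta):
    """Same as notch, but J ranges only over the support of c."""
    return notch([ci for ci in c if ci > 0], delta)

# The example from the text: x_1 + x_n >= 1, here with n = 6.
c = [1, 0, 0, 0, 0, 1]
print(notch(c, 1))   # 5  (= n - 1)
print(pitch(c, 1))   # 1
```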
For simplicity, assume that \(I^- = \emptyset \), so that the origin minimizes the left-hand side of (3) over the cube. The notch of (3) is then the smallest number \( \nu \) such that no 0/1-vector of Hamming weight \( \nu \) or more is cut by the inequality. A similar intuition applies to the pitch. We extend the definition of notch from inequalities to sets of 0/1-points as follows. The notch of a non-empty set \( S \subseteq \{0,1\}^n \), denoted \(\nu (S)\), is the largest notch of any inequality in standard form that is valid for S. It can be shown that \( \nu (S) \) is equal to the smallest number k such that every k-dimensional face of \( [0,1]^n \) contains a point from S. This equivalent definition of notchFootnote 2 was introduced in [6]. The main result of [6] is that if S has bounded notch and \({\text {conv}}(S)\) has bounded facet coefficients, then every polytope \(Q \subseteq [0,1]^n\) whose set of 0/1-points is S has bounded Chvátal-Gomory rank. The term pitch was used by Bienstock and Zuckerberg [7], who defined it for monotone inequalities in standard form, that is, where \( I^- = \emptyset \). Bounded pitch inequalities are related to the Chvátal-Gomory closure as follows. Consider any constants \(\varepsilon > 0\) and \(\ell \in {\mathbb {Z}}_{\geqslant 1}\), and any relaxation \(Q := \{x \in [0,1]^n : Ax \geqslant b\}\) of a set \(S := Q \cap \{0,1\}^n\), with A, b nonnegative. Bienstock and Zuckerberg [8, Lemma 2.1] proved that adding all valid pitch-\(p\) inequalities for \(p\leqslant \lceil \ell / \ln (1+\varepsilon ) \rceil = \Theta (\ell / \varepsilon )\) to the system defining Q gives a relaxation R that is a \((1+\varepsilon )\)-approximationFootnote 3 of the \(\ell \)-th Chvátal-Gomory closure of Q. In this section, we prove several properties of the set \( \phi (Q) \). Let us start with the following simple observation. For every reduced Boolean formula \(\phi \) and every convex set \(Q \subseteq [0,1]^n\), the set \(\phi (Q)\) is a convex subset of Q. Moreover, \(\phi (Q)\) contains every point \(x \in \{0,1\}^n\) such that \(x \in Q\) and \(\phi (x) = 1\). In other words, \(\phi (Q)\) contains \(Q \cap \phi ^{-1}(1)\). The fact that \(\phi (Q)\) is a convex set contained in Q is clear, since \(\phi (Q)\) is constructed from faces of Q by taking intersections and convex hulls of unions. We prove the second part by induction on the size of \(\phi \). If \(|\phi | = 1\), then \(\phi \) is either \(\phi = x_i\) or \(\phi = \lnot x_i\) for some \(i \in [n]\). So either \(\phi (Q) = \{x \in Q : x_i = 1\}\) or \(\phi (Q) = \{x \in Q : x_i = 0\}\), respectively. We see immediately that \(\phi (Q)\) contains \(Q \cap \phi ^{-1}(1)\). Now if \(|\phi | \geqslant 2\), then \(\phi \) is the conjunction or disjunction of two formulas of smaller size, say \(\phi _1\) and \(\phi _2\). In the first case, \(\phi = \phi _1 \wedge \phi _2\) and we have \(\phi (Q) = \phi _1(Q) \cap \phi _2(Q) \supseteq (Q \cap \phi ^{-1}_1(1)) \cap (Q \cap \phi ^{-1}_2(1)) = Q \cap (\phi ^{-1}_1(1) \cap \phi ^{-1}_2(1)) = Q \cap \phi ^{-1}(1)\), where the inclusion follows from induction. In the second case, \(\phi = \phi _1 \vee \phi _2\) and \(\phi (Q) = {\text {conv}}(\phi _1(Q) \cup \phi _2(Q)) \supseteq (Q \cap \phi ^{-1}_1(1)) \cup (Q \cap \phi ^{-1}_2(1)) = Q \cap (\phi ^{-1}_1(1) \cup \phi ^{-1}_2(1)) = Q \cap \phi ^{-1}(1)\). \(\square \) Next, we argue that we can use \(\phi \) to transform any extended formulation for Q into one for \( \phi (Q) \). 
To this end, we make use of the extension complexity of a polytope P, which is defined as the smallest size of any extended formulation for P, and is denoted by \( {\text {xc}}(P) \). We need the following standard facts about extension complexity. First, if F is a non-empty face of P, then \( {\text {xc}}(F) \le {\text {xc}}(P) \). Second, for any non-empty polytopes \( P_1, P_2 \subseteq {\mathbb {R}}^n \) one has \( {\text {xc}}(P_1 \cap P_2) \le {\text {xc}}(P_1) + {\text {xc}}(P_2) \). Third, a slight refinement of Balas' theorem [2] states that \( {\text {xc}}({\text {conv}}(P_1 \cup P_2)) \le \max \{{\text {xc}}(P_1), 1\} + \max \{{\text {xc}}(P_2), 1\} \), see [30, Prop. 3.1.1]. Let \( \phi \) be a reduced Boolean formula and let \( Q \subseteq [0,1]^n \) be a polytope such that \( \phi (Q) \ne \emptyset \). Then \( \phi (Q) \) is a polytope with extension complexity \( {\text {xc}}(\phi (Q)) \le |\phi | {\text {xc}}(Q) \). First, note that if \( {\text {xc}}(Q) = 0 \), then Q is a single point and so is \( \phi (Q) \), which implies \( {\text {xc}}(\phi (Q)) = 0 \) and hence the claimed inequality holds trivially. Thus, we may assume that \( {\text {xc}}(Q) \geqslant 1 \) holds. We prove the claim by induction over the size of \( \phi \). If \( |\phi | = 1 \), then \( \phi = x_i\) or \(\phi = \lnot x_i\) for some \(i \in [n]\). So either \(\phi (Q) = \{x \in Q : x_i = 1\}\) or \(\phi (Q) = \{x \in Q : x_i = 0\}\), respectively. In both cases, \( \phi (Q) \) is a face of Q and hence \( {\text {xc}}(\phi (Q)) \leqslant {\text {xc}}(Q) \). If \( |\phi | \geqslant 2 \), there exist reduced Boolean formulas \( \phi _1,\phi _2 \) (of size smaller than \( |\phi | \)) with \( |\phi | = |\phi _1| + |\phi _2| \) such that \( \phi = \phi _1 \wedge \phi _2 \) or \( \phi = \phi _1 \vee \phi _2 \). First, consider the case \( \phi = \phi _1 \wedge \phi _2 \), in which we have \( \phi (Q) = \phi _1(Q) \cap \phi _2(Q) \). Since \( \phi (Q) \) is non-empty, the same holds for \( \phi _1(Q) \) and \( \phi _2(Q) \) and hence, by the induction hypothesis, we have \( {\text {xc}}(\phi _i(Q)) \leqslant |\phi _i| {\text {xc}}(Q) \) for \( i = 1,2 \). Therefore, $$\begin{aligned} {\text {xc}}(\phi (Q)) \leqslant {\text {xc}}(\phi _1(Q)) + {\text {xc}}(\phi _2(Q)) \leqslant |\phi _1| {\text {xc}}(Q) + |\phi _2| {\text {xc}}(Q) = |\phi | {\text {xc}}(Q). \end{aligned}$$ It remains to consider the case \( \phi = \phi _1 \vee \phi _2 \), in which we have \( \phi (Q) = {\text {conv}}(\phi _1(Q) \cup \phi _2(Q)) \). Note that the claimed inequality holds if \( \phi _1(Q) = \emptyset \) or \( \phi _2(Q) = \emptyset \). Thus, we may assume that \( \phi _1(Q) \) and \( \phi _2(Q) \) are both non-empty. By the induction hypothesis, \( {\text {xc}}(\phi _i(Q)) \leqslant |\phi _i| {\text {xc}}(Q) \) for \( i = 1,2 \). Therefore, $$\begin{aligned} {\text {xc}}(\phi (Q))&\leqslant \max \{{\text {xc}}(\phi _1(Q)), 1\} + \max \{{\text {xc}}(\phi _2(Q)), 1\} \\&\leqslant \max \{|\phi _1| {\text {xc}}(Q), 1\} + \max \{|\phi _2| {\text {xc}}(Q), 1\} \\&= |\phi _1| {\text {xc}}(Q) + |\phi _2| {\text {xc}}(Q) \\&= |\phi | {\text {xc}}(Q). \end{aligned}$$ \(\square \) We remark that the upper bound provided by Proposition 3 is quite generous, and can be improved in some cases. For instance, if we let \(\tau \) denote the number of maximal rooted subtrees of \(\phi \) whose nodes are either input variables or \(\wedge \) gates, then we have \({\text {xc}}(\phi (Q)) \leqslant \tau {\text {xc}}(Q)\). 
This is due to the well-known fact that any intersection of faces of Q is a face of Q. A Boolean formula is monotone if it does not contain negations. We are ready to prove our main theorem in the monotone case. Theorem 4 Let \( \phi \) be a monotone Boolean formula defining a set \( S \subseteq \{0,1\}^n \) and let \( Q \subseteq [0,1]^n \) be any convex set containing S. If Q satisfies all monotone inequalities of pitch at most \( p\) that are valid for S, then \( \phi (Q) \) satisfies all monotone inequalities of pitch at most \( p+ 1 \) that are valid for S. Moreover, if Q is a polytope defined by an extended formulation of size \( \sigma \), then \( \phi (Q) \) is a polytope that can be defined by an extended formulation of size \( |\phi | \sigma \), where \( |\phi | \) is the size of the formula. The second part of the theorem is implied by Proposition 3. For the first part, consider any monotone pitch-\((p+1)\) inequality in standard form that is valid for \(S = \{x \in \{0,1\}^n : \phi (x) = 1\}\), $$\begin{aligned} \sum _{i \in I^+} c_i x_i \geqslant \delta . \end{aligned}$$ By the definition of pitch, we may assume \(c_i > 0\) for all \(i \in I^+\). We also assume \(\delta >0\); otherwise, there is nothing to prove. Let \(a \in \{0,1\}^n\) be the characteristic vector of \([n] {\setminus } I^+\). Thus, \(a_i = 1\) if \(i \in [n] {\setminus } I^+\) and \(a_i = 0\) if \(i \in I^+\). Notice that a violates (5). This implies \(\phi (a) = 0\). By contradiction, suppose that (5) is not valid for \(\phi (Q)\). That is, there exists a point in \(\phi (Q)\) that violates (5). Let T be a tree that represents the formula \(\phi \). Each \(v \in V(T)\) has a corresponding formula, which is the formula computed by the subtree of T rooted at v. For notational convenience, we identity each node of T with its corresponding formula. Our strategy is to find a root-to-leaf path in T such that for every node \(\psi \) on this path, $$\begin{aligned} (\star ) \qquad \psi (a) = 0 \quad \hbox { and }\quad \hbox { there exists a point }\tilde{x} = \tilde{x}(\psi ) \in \psi (Q)\hbox { that violates }(5). \end{aligned}$$ This is satisfied at the root node \(\phi \). Now consider any non-leaf node \(\psi \) in T that satisfies (\(\star \)). Let \(\psi _1\) and \(\psi _2\) denote the children of \(\psi \), so that \(\psi = \psi _1 \wedge \psi _2\) or \(\psi = \psi _1 \vee \psi _2\). We claim that, in both cases, there exists an index \(k \in \{1,2\}\) such that \(\psi _k\) satisfies (\(\star \)). First, in case \(\psi = \psi _1 \wedge \psi _2\), we let \(\tilde{x}(\psi _1) = \tilde{x}(\psi _2) := \tilde{x}(\psi )\) and choose \(k \in \{1,2\}\) such that \(\psi _k(a) = 0\). Such an index is guaranteed to exist since \(\psi (a) = 0\). Then \(\psi _k\) satisfies (\(\star \)). Second, in case \(\psi = \psi _1 \vee \psi _2\), we have \(\psi _1(a) = \psi _2(a) = \psi (a) = 0\). We let \(\tilde{x}(\psi _1)\) and \(\tilde{x}(\psi _2)\) be any points of \(\psi _1(Q)\) and \(\psi _2(Q)\) (respectively) such that the segment \([\tilde{x}(\psi _1),\tilde{x}(\psi _2)]\) contains \(\tilde{x}\). For at least one \(k \in \{1,2\}\), the point \(\tilde{x}(\psi _k)\) violates (5). Thus \(\psi _k\) satisfies (\(\star \)) for that choice of k. By iterating the argument above, starting at the root node \(\phi \), we reach a leaf node \(\psi \) that satisfies (\(\star \)). Note that \(\psi = x_j\) for some j, since \(\phi \) is monotone. We have \(a_j = \psi (a) = 0\), so \(j \in I^+\). 
Moreover, there exists a point \(\tilde{x} = \tilde{x}(\psi ) \in \psi (Q) = \{x \in Q : x_j = 1\}\) that violates (5). Now consider the monotone inequality $$\begin{aligned} \sum _{\begin{array}{c} i \in I^+ \\ i \ne j \end{array}} c_i x_i \geqslant \delta - c_j\,. \end{aligned}$$ This inequality is valid for S since it is the sum of (5) and \(c_j (1-x_j) \geqslant 0\), which are both valid. Since \(c_j (1-\tilde{x}_j) = 0\), (6) is also violated by \(\tilde{x} \in \psi (Q) \subseteq Q\). The key observation is that the pitch of (6) is at most \(p\), which contradicts our assumption that Q satisfies all monotone inequalities of pitch at most \(p\). \(\square \) In the non-monotone case, we now prove a statement analogous to Theorem 4 where the pitch is replaced by the notch. Let \( \phi \) be a reduced Boolean formula defining a set \( S \subseteq \{0,1\}^n \) and let \( Q \subseteq [0,1]^n \) be any convex set containing S. If Q satisfies all inequalities of notch at most \( \nu \) that are valid for S, then \( \phi (Q) \) satisfies all inequalities of notch at most \( \nu + 1\) that are valid for S. Moreover, if Q is a polytope defined by an extended formulation of size \( \sigma \), then \( \phi (Q) \) is a polytope that can be defined by an extended formulation of size \( |\phi | \sigma \). The proof is almost identical to that of Theorem 4. Instead of repeating the whole proof, here we only explain the differences. The starting point is a notch-\((\nu +1)\) inequality $$\begin{aligned} \sum _{i \in I^+} c_i x_i + \sum _{i \in I^-} c_i (1 - x_i) \geqslant \delta \,, \end{aligned}$$ where \(I^+ \subseteq [n] \) and \(I^- \subseteq [n] \) satisfy \( I^+ \cap I^- = \emptyset \) and \( I^+ \cup I^- = [n] \), \(\delta >0\), and \(c_i \geqslant 0\) for all \(i \in [n]\). Contrary to the previous proof, here we allow \(c_i=0\). Let \(a \in \{0,1\}^n\) be the characteristic vector of \(I^-\). Notice that a violates (7). This implies \(\phi (a) = 0\). Let T be a tree that represents the formula \(\phi \). Using the same proof strategy, we find a leaf node \(\psi = x_j\) or \(\psi = \lnot x_j\) of T such that \(\psi (a) = 0\), and there exists a point \(\tilde{x} = \tilde{x}(\psi ) \in \psi (Q)\) that violates (7). If \(\psi = x_j\), then \(j \in I^+\) and we consider the valid inequality $$\begin{aligned} \sum _{\begin{array}{c} i \in I^+ \\ i \ne j \end{array}} c_i x_i + \sum _{i \in I^-} c_i (1 - x_i) + \delta (1-x_j) \geqslant \delta - c_j\,. \end{aligned}$$ Otherwise, \(\psi = \lnot x_j\) and thus \(j \in I^-\). In this case, we consider the valid inequality $$\begin{aligned} \sum _{i \in I^+} c_i x_i + \sum _{\begin{array}{c} i \in I^- \\ i \ne j \end{array}} c_i (1 - x_i) + \delta x_j \geqslant \delta - c_j\,. \end{aligned}$$ Since (7) is a notch-\((\nu +1)\) inequality, it is easy to check that the notch of both of the above inequalities is at most \(\nu \). However, they are violated by the point \(\tilde{x} = \tilde{x}(\psi ) \in Q\). As in the proof of Theorem 4, this gives the desired contradiction. \(\square \) Setting \( \phi ^1(Q) := \phi (Q) \) and \( \phi ^{\ell + 1}(Q) := \phi (\phi ^\ell (Q)) \) for \( \ell \in {\mathbb {Z}}_{\ge 1} \), and using the trivial fact that the notch of a non-trivial inequality is at most n, we immediately obtain the following corollary. Corollary 6 Let \( \phi \) be a reduced Boolean formula defining a set \( S \subseteq \{0,1\}^n \) and let \( Q \subseteq [0,1]^n \) be any convex set containing S. 
Then we have \( \phi ^n(Q) = {\text {conv}}(S) \). Another consequence of Theorem 5 is that integer points not belonging to S are already excluded from \( \phi (Q) \). Let \( \phi \) be a reduced Boolean formula defining a set \( S \subseteq \{0,1\}^n \) and let \( Q \subseteq [0,1]^n \) be any convex set containing S. Then we have \( \phi (Q) \cap {\mathbb {Z}}^n = S \). It suffices to show that no point from \( \{0,1\}^n {\setminus } S \) is contained in \( \phi (Q) \). To this end, fix \( {\bar{x}} \in \{0,1\}^n {\setminus } S \) and consider the inequality $$\begin{aligned} \sum _{i \in [n] : \bar{x}_i = 0} x_i + \sum _{i \in [n] : \bar{x}_i = 1} (1 - x_i) \geqslant 1\,, \end{aligned}$$ which is violated by \( {\bar{x}} \), but valid for all other points of \(\{0,1\}^n\). Since the inequality has notch 1, by Theorem 5 it is also valid for \( \phi (Q) \) and hence \( {\bar{x}} \) is not contained in \( \phi (Q) \). \(\square \) In this section, we present several applications of our procedure, in which we repeatedly make use of Theorems 4 and 5. Monotone formulas for matching As a first application, we demonstrate how our findings together with Rothvoß' result [25] on the extension complexity of the matching polytope yield a very simple proof of a seminal result of Raz and Wigderson [23, Theorem 4.1], which statesFootnote 4 that any monotone Boolean formula deciding whether a graph on n nodes contains a perfect matching has size \( 2^{\Omega (n)} \). Before giving any further detail, we point out that Raz and Wigderson's result extends to the bipartite case [23, Theorem 4.2], which is not the case of the polyhedral approach described below. The fact that Rothvoß' theorem implies Raz and Wigderson's was first discovered by Göös, Jain and Watson [15]. While their arguments are based on connections between nonnegative ranks of certain slack matrices and Karchmer-Wigderson games, which implicitly play an important role in the proofs of Theorems 4 and 5, our results yield a straightforward proof that does not require any further notions. To this end, let \( n \in {\mathbb {Z}}_{\ge 2} \) be even and let \( G = (V, E) \) denote the complete undirected graph on n nodes. The set S considered by Raz and Wigderson is the set $$\begin{aligned} S := \{ x \in \{0,1\}^E : {\text {supp}}(x) \subseteq E \text { contains a perfect matching} \}. \end{aligned}$$ Let \( \phi \) be any monotone Boolean formula in variables \( x_e \) (\( e \in E \)) that defines S. Next, define the polytope $$\begin{aligned} P := \{ x \in [0,1]^E : x(\delta (U)) \geqslant 1 \text { for every } U \subseteq V \text { with } |U| \text { odd} \}. \end{aligned}$$ It is a basic fact that S is contained in P. Furthermore, observe that every non-trivial inequality in the definition of P has pitch 1. Thus, we have \( {\text {conv}}(S) \subseteq \phi ([0,1]^E) \subseteq P \). Moreover, if we consider the affine subspace $$\begin{aligned} D := \{ x \in {\mathbb {R}}^E : x(\delta (\{u\}) = 1 \text { for every } u \in V \}, \end{aligned}$$ it is well-known that both \( {\text {conv}}(S) \cap D \) and \( P \cap D \) are equal to the perfect matching polytope of G, and hence we obtain that \( \phi ([0,1]^E) \cap D \) is also equal to the perfect matching polytope of G. By Rothvoß' result, this implies \( {\text {xc}}(\phi ([0,1]^E)) = 2^{\Omega (n)} \). 
On the other hand, by Proposition 3 we also have \( {\text {xc}}(\phi ([0,1]^E)) \le |\phi | \cdot {\text {xc}}([0,1]^E) = |\phi | \cdot 2 |E| \le n^2 |\phi | \) and hence \( |\phi | \) must be exponential in n. Covering problems: the binary case In this section, we consider sets \( S \subseteq \{0,1\}^n \) that arise from 0/1-covering problems, in which there is a matrix \( A \in \{0,1\}^{m \times n} \) such that \( S = \{ x \in \{0,1\}^n : Ax \geqslant \mathbf {1} \} \), where \( \mathbf {1} \) is the all-ones vector. As an example, if A is the node-edge incidence matrix of an undirected graph G, then the points of S correspond to vertex covers in G. This shows that, in general, the convex hull of such sets S may not admit polynomial-size (in n) extended formulations, see for example, [4, 14, 15]. Moreover, general 0/1-hierarchies may have difficulties identifying basic inequalities even in simple instances. For example, in [8] it is shown that if \( Ax \geqslant \mathbf {1} \) consists of the inequalities \( \sum _{i \in [n] \setminus \{j\}} x_i \geqslant 1 \) for each \( j \in [n] \), then it takes at least \( n - 2 \) rounds of the Lovász-Schrijver or Sherali-Adams hierarchy to satisfy the pitch-2 inequality \( \sum _{i \in [n]} x_i \geqslant 2 \). By developing a hierarchy tailored to 0/1-covering problems, Bienstock and Zuckerberg [7] were able to bypass some of these issues. As their main result, for each \( k \in {\mathbb {N}} \), they construct a polytope \( f^k(Q) \) containing S satisfying the following two properties. First, every inequality of pitch at most k that is valid for S is also valid for \( f^k(Q) \). Second, \( f^k(Q) \) can be described by an extended formulation of size \( (m + n)^{g(k)} \), where \( g(k) = \Omega (k^2) \). However, constructing the polytope \( f^k(Q) \) is quite technical and involved. In contrast, our procedure directly implies significantly simpler and smaller extended formulations that satisfy all pitch-k inequalities. Let \( A \in \{0,1\}^{m \times n} \), \( S = \{ x \in \{0,1\}^n : Ax \geqslant \mathbf {1} \} \), and \(k \in {\mathbb {N}}\). Then there is a polyhedral relaxation P of S such that all points of P satisfy all valid inequalities of pitch at most k, and P can be defined by an extended formulation of size at most \(2n \cdot (mn)^k\). Let \(\phi := \bigwedge _{i = 1}^m \bigvee _{j : A_{ij} = 1} x_j\). Since \([0,1]^n\) has 2n facets and \(\phi \) has size at most mn, we may take \(P=\phi ^k([0,1]^n)\) by Theorem 4. \(\square \) Covering problems: bounded coefficients Next, we consider a more general form of a covering problem in which \( S = \{ x \in \{0,1\}^n : Ax \geqslant b \} \) for some non-negative integer matrix \( A \in {\mathbb {Z}}_{\ge 0}^{m \times n} \) and \( b \in {\mathbb {Z}}_{\ge 1}^m \). We first restrict ourselves to the case that all entries in A and b are bounded by some constant \( \Delta \in {\mathbb {Z}}_{\ge 2} \). Based on their results in [7], Bienstock and Zuckerberg [8] provide an extended formulation of size \( O(m + n^\Delta )^{g(k)} \), where \( g(k) = \Omega (k^2) \). Our method yields a significantly smaller extended formulation, via the following lemma. Lemma 9 For every \( A \in {\mathbb {Z}}_{\ge 0}^{m \times n} \) and \( b \in {\mathbb {Z}}_{\ge 1}^m \) with entries bounded by \( \Delta \), the set \( S = \{ x \in \{0,1\}^n : Ax \geqslant b \} \) can be defined by a monotone formula \( \phi \) of size at most \( \Delta ^{5.3} m n \log ^{O(1)}(n) \). 
Moreover, this formula can be constructed in randomized polynomial time. Fix \( i \in [m] \), let \( n' := \sum _{j=1}^n A_{ij} \) and let \( \psi _i \) be a monotone formula defining the set \( \{ y \in \{0,1\}^{n'} : \sum _{k=1}^{n'} y_k \ge b_i\} \). Next, pick any function \( h : [n'] \rightarrow [n] \) such that \( |h^{-1}(j)| = A_{ij} \) for all \( j \in [n] \). In formula \(\psi _i\), replace every occurrence of \( y_{k} \) by \( x_{h(k)} \), for \( k \in [n'] \). We obtain a monotone formula \(\phi _i\) defining the set \( \{x \in \{0,1\}^n : \sum _{j=1}^n A_{ij} x_j \ge b_i\} \). By using the construction of Hoory, Magen and Pitassi [16] for the initial formula \( \psi _i \), the resulting formula \( \phi _i \) has size $$\begin{aligned} |\phi _i| = |\psi _i| \le \Delta ^{4.3} n' \log ^{O(1)}(n'/\Delta ) \le \Delta ^{5.3} n \log ^{O(1)} (n)\,, \end{aligned}$$ since \( n' \le n \Delta \) and \( b_i \le \Delta \). The result follows by taking \( \phi := \bigwedge _{i=1}^m \phi _i \). \(\square \) Corollary 10 Let \( A \in {\mathbb {Z}}_{\ge 0}^{m \times n} \), \( b \in {\mathbb {Z}}_{\ge 1}^m \) with entries bounded by \( \Delta \), \( S = \{ x \in \{0,1\}^n : Ax \geqslant b \} \), and \(k \in {\mathbb {N}}\). Then there is a polyhedral relaxation P of S such that all points of P satisfy all valid inequalities of pitch at most k, and P can be defined by an extended formulation of size at most \((\Delta ^{5.3} m n \log ^{O(1)}(n))^k\). By Theorem 4, we may take \(P= \phi ^k([0,1]^n) \), where \(\phi \) is the formula from Lemma 9. \(\square \) Covering problems: the general case In some cases, especially when \( m = O(1) \), the matrix \( A \in {\mathbb {Z}}_{\ge 0}^{m \times n} \) and vector \( b \in {\mathbb {Z}}_{\ge 0}^m \) may have coefficients as large as \( 2^{\Omega (n \log n)} \). For such general instances, we can improve the bound from Corollary 10. Let \( A \in {\mathbb {Z}}_{\ge 0}^{m \times n} \), \( b \in {\mathbb {Z}}_{\ge 1}^m \), \( S = \{ x \in \{0,1\}^n : Ax \geqslant b \} \), and \(k \in {\mathbb {N}}\). Then there is a polyhedral relaxation P of S such that all points of P satisfy all valid inequalities of pitch at most k, and P can be defined by an extended formulation of size at most \( \left( m n^{O(\log n)} \right) ^k \). Beimel and Weinreb [5] show that, for every \( a_1,\dotsc ,a_n,\delta \in {\mathbb {R}}_{\ge 0} \), the set \( \{ x \in \{0,1\}^n : \sum _{j=1}^n a_j x_j \geqslant \delta \} \) can be decided by a monotone formula of size \( n^{O(\log n)} \). Let \( \phi \) be the conjunction of these formulas for each inequality in \( Ax \geqslant b \). By Theorem 4, we may take \(P= \phi ^k([0,1]^n) \). \(\square \) In comparison, for this general case, Bienstock and Zuckerberg [7] have no nontrivial upper bound. Constant notch 0/1-sets In this section, we consider non-empty sets \(S \subseteq \{0,1\}^n\) with constant notch \( \nu (S) \). These sets have several desirable properties. For example, as noted in [3] (and implicitly in [11]), there is an easy polynomial-time algorithm to optimize a linear function over a constant notch set S, provided that we have a polynomial-time membership oracle for S. On the other hand, sets with constant notch do not necessarily admit small extended formulations. Indeed, counting arguments developed in [1, 24] show that even for a "generic" set \( S \subseteq \{0,1\}^n \) with notch \( \nu (S) = 1 \), \({\text {conv}}(S)\) requires extended formulations of size \( 2^{\Omega (n)} \). 
This raises the question of which constant notch sets do admit compact extended formulations. As an immediate corollary to Theorem 5, we have the following nice partial answer. If \( S \subseteq \{0,1\}^n \) has constant notch and S can be described by a formula \(\phi \) of size polynomial in n, then \({\text {conv}}(S)\) can be described by a polynomial-size extended formulation. Notice that every explicit 0/1-set S of constant notch such that \({\text {xc}}({\text {conv}}(S))\) is large would thus provide an explicit Boolean function requiring large depth circuits, and solve one of the hardest open problems in circuit complexity. Comparison and conclusion In this paper, we propose a new method for strengthening convex relaxations of 0/1-sets. Our approach currently yields the simplest and smallest linear extended formulations expressing inequalities of constant pitch in the monotone case, and constant notch in the general case. By viewing an iterated application of our procedure as a hierarchy, we obtain a significant simplification of the Bienstock-Zuckerberg hierarchy [7]. Prior to our work, [7] had been simplified by Mastrolilli [22] using a modification of the Sherali-Adams hierarchy. Subsequent to our work, [7] has also been simplified by Bienstock and Zuckerberg [9] themselves for the case of \( A \in \{0,1\}^{m \times n} \). The way [9] construct their extended formulation is similar to what we do, except that they replace the canonical monotone formula in (1) by a (logically equivalent) non-monotone formula, which might yield a tighter relaxation in some cases. Although [22] is an important simplification of [7], our approach is from first principles and assumes no knowledge of polynomial optimization. Moreover, despite the simplicity of our approach, our extended formulations (see Corollaries 8, 10, and 11) are significantly smaller than those provided by [7, 22]. This is possible since we allow any monotone formula, and can thus use any known construction from the literature. In contrast, Bienstock and Zuckerberg [7] implicitly only consider formulas in conjunctive normal form. The number of clauses in every formula in conjunctive normal form is at least the number of minimal coversFootnote 5, which makes it impossible for them to construct small extended formulations in situations where the number of minimal covers is large. Furthermore, the way in which we derive our extended formulations is conceptually different than [22]. Mastrolilli [22] first writes down a proof of validity of any bounded-pitch inequality that has a certain "polynomial" form (similar to a sum-of-squares proof, except that no square is necessary). He then uses this proof to recursively define a set of polynomials \(\mathcal {S} = \mathcal {S}(A,k)\), and then constructs an extended formulation generalizing the Sherali-Adams hierarchy from \(\mathcal {S}\). At the heart of his approach is a lemma due to Bienstock and Zuckerberg [7, Lemma 4.2]. In our paper, we give a direct way to strengthen any given relaxation by "feeding" it in a Boolean formula \(\phi \) defining the set of feasible 0/1 solutions. That is, we first describe how to construct the extended formulation. Then we prove that each iteration (of the same procedure) "gives at least one extra unit of pitch". At the heart of our analysis lies a new ingredient (coming from a Karchmer-Widgerson game) replacing the lemma from Bienstock and Zuckerberg. This is the reason why we improve the exponential dependence in k from \(k^2\) to k in Corollary 8. 
Finally, as far as we can tell, our results from Sects. 5.1 and 5.5 are completely independent from [7, 9, 22]. To conclude, we state a few open questions raised by our work. Do the new extended formulations lead to any new interesting algorithmic application, in particular for covering problems? This appears to be connected to the following question. How good are the lower bounds on the optimum value obtained after performing a few rounds of the Chvátal-Gomory closure? For some problems, such as the vertex cover problem in graphs or more generally in q-uniform hypergraphs with \(q = O(1)\), the bounds turn out to be quite poor in the worst case [4, 28]. The situation is less clear for other problems, such as network design problems. Recent work [13] on the tree augmentation problem uses certain inequalities from the first Chvátal-Gomory closure in an essential way. For the related 2-edge connected spanning subgraph problem, our work implies that one can approximately optimize over the \(\ell \)-th Chvátal-Gomory closure in quasi-polynomial time, for every \(\ell = O(1)\). For which classes of polytopes in \( [0,1]^n \) can one approximate a constant number of rounds of the Chvátal-Gomory closure with compact extended formulations? Mastrolilli [22] show that this is possible for packing problems. However, his approach crucially uses positive semi-definite extended formulations. Packing problems are unlikely to admit compact linear extended formulations, although we do not have a proof of this. Can one find polynomial-size monotone formulas for any nonnegative weighted threshold function, that is, for every min-knapsack \(\{x \in \{0,1\}^n : \sum _{i=1}^n a_i x_i \geqslant \beta \}\)? This would improve on the \(n^{O(\log n)}\) upper bound by Beimel and Weinreb [5]. Klabjan, Nemhauser and Tovey show that separating pitch-1 inequalities for such sets is NP-hard [19]. However, this does not rule out a polynomial-size extended formulation defining a relaxation that would be stronger than that provided by pitch-1 inequalities. Some sources also count the number of occurrences of \( \wedge \), \( \vee \) and \( \lnot \), which is not necessary for our purposes. To avoid possible confusion, we warn the reader that in a previous version of [6], this notion is called pitch instead of notch. Here, this means that \(\min \{c^\intercal x : x \in f^\ell (Q)\} \leqslant (1+\varepsilon ) \min \{c^\intercal x : x \in R\}\) for every nonnegative cost vector c, where \(f^\ell (Q)\) denotes the \(\ell \)-th Chvátal-Gomory closure of Q. The original result of Raz and Wigderson states that the depth of any monotone circuit computing the mentioned function is \( \Omega (n) \), which is equivalent to the mentioned result, see [29]. A point \(x \in \{0,1\}^n\) is called a min-term, and its characteristic vector a minimal cover, if x is a minimal element of S with respect to the component-wise order \(\leqslant \). Averkov, G., Kaibel, V., Weltge, S.: Maximum semidefinite and linear extension complexity of families of polytopes. Math. Program. 167(2), Ser. A, pp. 381–394 (2018). MR 3755737 Balas, E.: Disjunctive programming. Ann. Discrete Math. 5, 3–51 (1979). Discrete optimization (Proceedings of the Advanced Research Institute on Discrete Optimization and Systems Applications, Banff, Alta., 1977), II. MR 558566 Bazzi, A., Fiorini, S., Huang, S., Svensson, O.: Small extended formulation for knapsack cover inequalities from monotone circuits. 
In: Proceedings of the 28th Annual ACM-SIAM Symposium on Discrete Algorithms 2017, (2017) Bazzi, A., Fiorini, S., Pokutta, S., Svensson, O.: No small linear program approximates vertex cover within a factor 2–e. In: 2015 IEEE 56th Annual Symposium on Foundations of Computer Science (FOCS), pp. 1123–1142 (2015) Beimel, A., Weinreb, E.: Monotone circuits for monotone weighted threshold functions. Inf. Process. Lett. 97(1), 12–18 (2006) Benchetrit, Y., Fiorini, S., Huynh, T., Weltge, S.: Characterizing Polytopes Contained in the \(0/1\)-Cube with Bounded Chvátal-Gomory Rank. Mathematics of Operations Research (2018) Bienstock, D., Zuckerberg, M.: Subset algebra lift operators for 0–1 integer programming. SIAM J. Optim. 15(1), 63–95 (2004). MR 2112976 Bienstock, D., Zuckerberg, M.: Approximate fixed-rank closures of covering problems. Math. Program. Ser. A 105, 9–27 (2006) Bienstock, D., Zuckerberg, M.: Simpler derivation of bounded pitch inequalities for set covering, and minimum knapsack sets, arXiv:1806.07435 (2018) Chvátal, V.: Edmonds polytopes and a hierarchy of combinatorial problems. Discrete Math. 4, 305–337 (1973). MR 0313080 Cornuéjols, G., Lee, D.: On some polytopes contained in the 0,1 hypercube that have a small Chvátal rank. In: International Conference on Integer Programming and Combinatorial Optimization, pp. 300–311. Springer (2016) Eisenbrand, F.: On the membership problem for the elementary closure of a polyhedron. Combinatorica 19(2), 297–300 (1999) Fiorini, S., Groß, M., Könemann, J., Sanità, L.: Approximating weighted tree augmentation via Chvátal-Gomory cuts. In: Proceedings of the Twenty-Ninth Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 817–831. SIAM, Philadelphia, PA (2018). MR 3775841 Fiorini, S., Massar, S., Pokutta, S., Tiwary, H. R., de Wolf, R.: Exponential lower bounds for polytopes in combinatorial optimization. J. ACM 62(2), 1–23 (2015) Göös, M., Jain, R., Watson, T.: Extension complexity of independent set polytopes. In: Proc. FOCS 2016, (2016) Hoory, S., Magen, A., Pitassi, T.: Monotone circuits for the majority function. In: Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques, 9th International Workshop on Approximation Algorithms for Combinatorial Optimization Problems, APPROX 2006 and 10th International Workshop on Randomization and Computation, RANDOM 2006, Barcelona, Spain, August 28-30 2006, Proceedings, pp. 410–425 (2006) Hrubeš, Pavel: On the nonnegative rank of distance matrices. Inf. Process. Lett. 112(11), 457–461 (2012). MR 2905148 Karchmer, M., Wigderson, A.: Monotone circuits for connectivity require super-logarithmic depth. SIAM J. Discrete Math. 3, 255–265 (1990) Klabjan, D., Nemhauser, G.L., Tovey, C.: The complexity of cover inequality separation. Oper. Res. Lett. 23, 35–40 (1998) Lasserre, J. B.: An explicit exact SDP relaxation for nonlinear 0-1 programs. Integer programming and combinatorial optimization (Utrecht, 2001), Lecture Notes in Comput. Sci., vol. 2081, pp. 293–303. Springer, Berlin (2001). MR 2041934 Lovász, L., Schrijver, A.: Cones of matrices and set-functions and \(0\)-\(1\) optimization. SIAM J. Optim. 1(2), 166–190 (1991). MR 1098425 Mastrolilli, M: High degree sum of squares proofs, Bienstock-Zuckerberg hierarchy and CG cuts. In: Integer Programming and Combinatorial Optimization - 19th International Conference, IPCO 2017, Waterloo, ON, Canada, June 26-28, 2017, Proceedings, pp. 405–416 (2017) Raz, R., Wigderson, Avi: Monotone circuits for matching require linear depth. J. 
Assoc. Comput. Mach. 39(3), 736–744 (1992). MR 1177960 Rothvoß, T.: Some 0/1 polytopes need exponential size extended formulations. Math. Program. A 142, 255–268 (2013) Rothvoß, T.: The matching polytope has exponential extension complexity. J. ACM 64, no. 6, Art. 41, 19 (2017). MR 3713797 Schrijver, A.: Combinatorial optimization. Polyhedra and efficiency. Vol. A, Algorithms and Combinatorics, vol. 24, Springer-Verlag, Berlin, 2003, Paths, flows, matchings, Chapters 1–38. MR 1956924 (2004b:90004a) Sherali, H.D., Adams, W.P.: A hierarchy of relaxations between the continuous and convex hull representations for zero-one programming problems. SIAM J. Discrete Math. 3(3), 411–430 (1990). MR 1061981 Singh, M., Talwar, K.: Improving integrality gaps via Chvátal-Gomory rounding. In: Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques, 13th International Workshop, APPROX 2010, and 14th International Workshop, RANDOM 2010, Barcelona, Spain, September 1-3, 2010. Proceedings, pp. 366–379 (2010) Wegener, I.: Relating monotone formula size and monotone depth of Boolean functions. Inf. Process. Lett. 16(1), 41–42 (1983) Weltge, S.: Sizes of linear descriptions in combinatorial optimization. dissertation, Otto-von-Guericke-Universität Magdeburg, (2016) Open Access funding provided by Projekt DEAL. This work was done while Samuel Fiorini was visiting the Simons Institute for the Theory of Computing. It was supported in part by the DIMACS/Simons Collaboration on Bridging Continuous and Discrete Optimization through NSF grant #CCF-1740425, and in part by ERC Consolidator Grant 615640-ForEFront. We also thank all the anonymous referees for their constructive comments on the paper. Université libre de Bruxelles, Brussels, Belgium Samuel Fiorini Monash University, Melbourne, Australia Tony Huynh Technical University of Munich, Munich, Germany Stefan Weltge Correspondence to Stefan Weltge. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. Fiorini, S., Huynh, T. & Weltge, S. Strengthening convex relaxations of 0/1-sets using Boolean formulas. Math. Program. 190, 467–482 (2021). https://doi.org/10.1007/s10107-020-01542-w Issue Date: November 2021 Mathematics Subject Classification 90Cxx 52Bxx
Spatiotemporal epidemiology of cryptosporidiosis in the Republic of Ireland, 2008–2017: development of a space–time "cluster recurrence" index M. Boudou1, E. Cleary1, C. ÓhAiseadha2, P. Garvey3, P. McKeown3, J. O'Dwyer4,5 & Paul Hynds1,5 Ireland frequently reports the highest annual Crude Incidence Rates (CIRs) of cryptosporidiosis in the EU, with national CIRs up to ten times the EU average. Accordingly, the current study sought to examine the spatiotemporal trends associated with this potentially severe protozoan infection. Overall, 4509 cases of infection from January 2008 to December 2017 were geo-referenced to a Census Small Area (SA), with an ensemble of geo-statistical approaches including seasonal decomposition, Local Moran's I, and space–time scanning used to elucidate spatiotemporal patterns of infection. One or more confirmed cases were notified in 3413 of 18,641 Census SAs (18.3%), with highest case numbers occurring in the 0–5-year range (n = 2672, 59.3%). Sporadic cases were more likely male (OR 1.4) and rural (OR 2.4), with outbreak-related cases more likely female (OR 1.4) and urban (OR 1.5). Altogether, 55 space–time clusters (≥ 10 confirmed cases) of sporadic infection were detected, with three "high recurrence" regions identified; no large urban conurbations were present within recurrent clusters. Spatiotemporal analysis represents an important indicator of infection patterns, enabling targeted epidemiological intervention and surveillance. Presented results may also be used to further understand the sources, pathways, receptors, and thus mechanisms of cryptosporidiosis in Ireland. Cryptosporidium is an oocyst-forming protozoan parasite first identified as a causative agent of gastrointestinal infection in the mid-1970s [1]. Cryptosporidiosis is associated with a wide range of symptoms including watery diarrhoea, weight loss, vomiting, abdominal pain, nausea and fever [2]. In the most severe cases, infection may lead to acute dehydration and death, particularly among immuno-compromised individuals, including children aged ≤ 5 years, the elderly (≥ 65) and patients with underlying health conditions (i.e., immunosuppressed) [3]. To date, approximately 40 genetically distinct Cryptosporidium species have been identified, with C. parvum and C. hominis the most frequently confirmed species among cases of human infection [4]. Transmission typically occurs via the faecal-oral route through consumption of contaminated water or food, in addition to direct human-animal contact and exposure to contaminated environments including recreational water [2, 5,6,7]. A previous experimental study of healthy adult volunteers indicated that ingestion of 30 oocysts is sufficient to initiate infection, with a significantly lower threshold dose (≈ 10 oocysts) associated with specific C. hominis and C. parvum strains [5]. Cryptosporidiosis occurs in both rural and urban environments, with several studies indicating that C. hominis is more frequent in urban areas (due to increased rates of person-to-person transmission) while C. parvum predominates in rural areas [7]. Environmental transmission in rural areas represents a particular concern due to the ability of oocysts to survive for prolonged periods in the natural environment (e.g., soil, water) due to temperature buffering and high humidity [8]. Human cryptosporidiosis became a notifiable disease in Ireland on January 1st 2004 under the Infectious Diseases (Amendment) (No. 3) Regulations 2003 (S.I. 707 of 2003). 
As such, all medical practitioners are required to notify the regional Medical Officer of Health (MOH)/Director of Public Health of all confirmed cases. According to the most recent European Centre for Disease Prevention and Control (ECDC) report, Ireland consistently reports the highest Crude Incidence Rates (CIR) of confirmed cryptosporidiosis infection in the European Union [9]. For example, during 2017 Ireland reported a cryptosporidiosis CIR of 12.0/100,000 residents, compared with an EU mean CIR of 3.2/100,000 (including 15 member states with national notification rates < 1/100,000) [10]. Nationally, cryptosporidiosis represents the most frequently reported protozoan infection, with CIRs having remained relatively consistent over the past decade, ranging from 11.0/100,000 in 2004 to 13.2/100,000 in 2018 [10]. Unlike other gastroenteric infections (e.g., giardiasis), cryptosporidiosis in Ireland is primarily associated with domestic (indigenous) exposure and transmission. For example, 81% (556/629) of confirmed cases during 2018 were identified as sporadic domestic cases, 12% (n = 73) were associated with a recognised cluster/outbreak, while travel-related cases accounted for 7% (n = 43) of the total case number [10]. The largest Irish cryptosporidiosis outbreak to date was attributed to C. hominis and occurred in the west of Ireland during March/April 2007. This was concentrated around Galway city, with at least 242 confirmed cases caused by municipal wastewater ingress to Lough Corrib, a lake employed for public water supply in the region [11]. The economic and human health burden accruing from events like the "Galway outbreak", recently estimated at approximately €19 million [11] coupled with the high baseline incidence of cryptosporidiosis, create a need for a greater understanding of the sources and transmission routes for the disease. While several studies have examined the likely routes of exposure to Cryptosporidium spp. in Ireland [e.g., 12,13], few epidemiological investigations of the spatiotemporal dynamics of confirmed cryptosporidiosis infection has been undertaken. This represents a significant knowledge gap with respect to understanding pathogen sources and pathways, particularly in light of the endemic nature of cryptosporidiosis in Ireland. An improved mechanistic understanding of infection occurrence would enable earlier detection, enhanced surveillance, and more focused public-health and healthcare policies. The current study sought to explore the temporal and spatial patterns of domestically acquired (sporadic and outbreak-related) cases of cryptosporidiosis in Ireland via identification of infection clustering. To accurately describe the epidemiological patterns of this important zoonotic parasite, the study integrated several modelling approaches including seasonal decomposition, spatial autocorrelation (Anselin Local Moran's I), hot-spot analysis (Getis-Ord Gi*) and space–time scanning with a large georeferenced dataset of confirmed cryptosporidiosis cases (n = 4509) over a 10-year period (2008–2017). To the authors' knowledge, this represents the first spatio-temporal study of its kind in Ireland, which as previously described, exhibits the highest national cryptosporidiosis infection CIRs in the EU. Irreversibly anonymised cases of cryptosporidiosis reported by regional departments of public health between 1st January 2008 and 31st December 2017 were provided from the national Computerised Infectious Disease Reporting (CIDR) database. 
Data prior to 2008 were excluded to avoid potential bias being introduced by the large number of cases reported during the 2007 Galway outbreak. All confirmed cases, including patient-specific data fields [age, gender, date of reporting, and case outcome (severity)], were geo-spatially linked to the geographical centroid of their associated Census Small Area (SA) (the smallest administrative unit currently employed for census reporting in Ireland) using the Health Service Executive (HSE) Health Intelligence Unit's geocoding tools. Sporadic, outbreak-related, and travel-related (non-outbreak) cases were defined and discretized for analyses. Outbreak-related cases are defined as confirmed cases with an attached "CIDR outbreak ID", used for identifying cases associated with a recognised infection outbreak or cluster. Travel-related cases are specifically categorised for purposes of analytical exclusion or adjustment (i.e., national reporting) and defined as any patient self-reporting travel outside of Ireland within the likely incubation period. Sporadic cases were subsequently delineated via exclusion of the two previous categories from the total case dataset. All case data and analyses were granted full research ethics approval by the Royal College of Physicians of Ireland Research Ethics Committee (RCPI RECSAF_84).
As cryptosporidiosis in Ireland is most prevalent among children ≤ 5 years of age and in rural areas [10], specific analyses were undertaken with respect to case age (≤ 5 years, ≥ 6 years) and land-use classification (rural/urban). The Central Statistics Office (CSO) Censuses of 2011 and 2016 were used to extract Electoral Division (ED)- and SA-specific human population counts, permitting calculation of cryptosporidiosis incidence rates at both spatial (administrative unit) scales. The CSO's 14 urban/rural categories were used to classify each spatial unit as rural or urban. Population density and settlement size were employed to verify all classifications. For reporting purposes within the current article, Ireland has been delineated into eight distinct geographical zones (Fig. 1). Zone NE (corresponding to Northern Ireland) is located outside Irish public health legislative jurisdiction and was not included for analyses. Pearson's χ2 test with Yates' continuity correction and Fisher's exact test (where any cell had < 5 cases) were used to test for association between categorical case classifications.
Geographical zonation of the Republic of Ireland
Seasonal decomposition
Seasonal decomposition was carried out using Seasonal and Trend (STL) decomposition via the LOESS (Locally Estimated Scatterplot Smoothing) method on different subsets of the case dataset, e.g., sporadic cases, outbreak-related cases, cases in children ≤ 5 years of age, cases in people ≥ 6 years of age, travel-related cases and cases in urban vs. rural areas. The monthly incidence of infection was calculated for each case sub-category.
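As an illustrative aside, this kind of additive seasonal decomposition of a monthly case-count series can be sketched in a few lines of code. This is not the authors' workflow: the file name, column names, and the use of Python's statsmodels are assumptions, and the code simply implements the decomposition identity formalised below.

```python
# Minimal sketch: additive STL decomposition of a monthly case-count series.
# The CSV layout (one row per confirmed case, with a notification date and a
# case-type column) is hypothetical.
import pandas as pd
from statsmodels.tsa.seasonal import STL

cases = pd.read_csv("crypto_cases.csv", parse_dates=["notification_date"])
sporadic = cases[cases["case_type"] == "sporadic"]

# Monthly case counts for the chosen sub-category
monthly = sporadic.set_index("notification_date").resample("MS").size()

# Additive decomposition: Y_v = T_v + S_v + R_v
result = STL(monthly, period=12, robust=True).fit()
trend, seasonal, residual = result.trend, result.seasonal, result.resid

# Large residuals flag departures from the seasonal/long-term pattern,
# e.g. the April 2016 excess discussed in the Results.
print(residual.sort_values(ascending=False).head())
```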
The STL method decomposes incidence data (Yv) time-series into three separate component series: seasonal variation (Sv), overall trend over time (Tv) and residuals (Rv), whereby the incidence data is equal to the sum of all three trends denoted by [14]: $$Y_{v} = \, T_{v} + \, S_{v} + \, R_{v}$$ An additive seasonal decomposition formula was used, as opposed to multiplicative, to remove seasonality (Sv) and trend (Tv) from the overall time-series (Yv) and filter random variation from long-term trends given by the residuals (Rv) so that: Residuals (Rv) = Time series (Yv) − Seasonal trend (Sv) − Trend (Tv). Spatial autocorrelation The total number of sporadic cryptosporidiosis cases, sporadic cases in children ≤ 5, people aged ≥ 6 years, and outbreak-related cases were mapped to individual SA centroids. Age-adjusted infection rates within each sub-category were calculated at both SA and ED level, based on 2011/2016 census data. Outbreak-related infection rates were calculated as a proportion of overall cases within each SA and ED. Data aggregation and infection rate calculation were carried out in R statistical software version 3.6.0 (R Foundation for Statistical Computing, Vienna, Austria). Anselin Local Moran's I was employed for spatial autocorrelation. Anselin Local Moran's I focuses on the relationship of individual features with nearby features and assigns clusters based on variance assigned to individual spatial units, thus negating the assumption underlying the Global Moran's I statistic that a single statistic appropriately accounts for clustering and dispersion of the spatial predominance of infection across the entire study area [15]. The Anselin Local Moran's I statistic is calculated by generating a neighbour list of spatially proximal SAs or EDs and calculating spatial autocorrelation of similar infection rates as a function of distance bands, thus identifying localised clusters which are correlated based on the variance assigned to all individual spatial units [15, 16]. Clusters of high-high (H–H) and low–low (L–L) infection, and outliers of high–low (H–L) and low high (L–H) infection are subsequently identified. Local Moran's I statistics were calculated using the cluster and analysis tool in ArcGIS version 10.6 (ESRI, Redlands, California) which generates a Moran's I statistic, z-score and pseudo p-value for each spatial unit. A positive I value is indicative of spatial units with a high or low infection rate, surrounded by SAs or EDs with similarly high or low infection rates. Conversely, a negative I value indicates outliers of infection where an SA or ED with a high rate of infection is surrounded by SAs or EDs with low rates of infection, and vice versa [15]. Hot-spot analysis (Getis-Ord GI*) Hot-spot analysis was carried out for all sporadic cases, sporadic cases among children ≤ 5, cases ≥ 6 years, and outbreak cases by calculating spatially specific Getis-Ord GI* statistics in ArcGIS. The Getis-Ord Gi* statistic is calculated for each feature (SA or ED) in the dataset, generating a unit-specific z-score and p-value, used to statistically determine significant spatial clustering of features in the dataset [17]. Statistically significant clusters are clusters which have high values surrounded by SAs or EDs with similarly high values, and vice versa [18]. Hot- and cold-spots of infection are determined based on the spatial proximity of high/low values statistically similar to neighbouring features. 
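Ahead of the formal Gi* statistic given below, the following is a hedged sketch of how both cluster statistics described in this section can be computed on area-level infection rates. The shapefile and column names are placeholders, PySAL is used as an illustrative substitute for the ArcGIS tools actually employed by the authors, and a simple contiguity neighbour definition is used for brevity rather than the distance bands described above.

```python
# Illustrative sketch: Anselin Local Moran's I and Getis-Ord Gi* on Small Area
# infection rates. Not the original ArcGIS workflow; file/column names assumed.
import geopandas as gpd
from libpysal.weights import Queen
from esda.moran import Moran_Local
from esda.getisord import G_Local

sa = gpd.read_file("small_areas_rates.shp")      # one polygon per Census Small Area
rates = sa["age_adjusted_rate"].values

w = Queen.from_dataframe(sa)                     # neighbour list (contiguity, for brevity)
w.transform = "r"                                # row-standardised spatial weights

lisa = Moran_Local(rates, w, permutations=999)
sa["lisa_quadrant"] = lisa.q                     # 1 = H-H, 2 = L-H, 3 = L-L, 4 = H-L
sa["lisa_significant"] = lisa.p_sim < 0.05

gi = G_Local(rates, w, star=True, permutations=999)
sa["gi_z"] = gi.Zs                               # z >> 0 hot spot, z << 0 cold spot
sa["gi_significant"] = gi.p_sim < 0.05
```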
Compared with Anselin Local Moran's I statistics, clusters based on the Getis-Ord Gi* statistic are determined by comparing the sum of local features and their neighbours with the overall sum of all features. Getis-Ord Gi* statistics were used to examine whether differing statistical analyses of spatial clustering of infection between spatial units yield varying results. The Getis-Ord Gi* statistic is given as [18]:
$$G_{i}^{*} = \frac{\sum_{j=1}^{n} w_{i,j} x_{j} - \overline{X}\sum_{j=1}^{n} w_{i,j}}{S\sqrt{\frac{n\sum_{j=1}^{n} w_{i,j}^{2} - \left(\sum_{j=1}^{n} w_{i,j}\right)^{2}}{n-1}}}$$
where $x_{j}$ is the attribute value for feature $j$, $w_{i,j}$ is the spatial weight between features $i$ and $j$, $n$ is equal to the total number of features, and:
$$\overline{X} = \frac{\sum_{j=1}^{n} x_{j}}{n}, \qquad S = \sqrt{\frac{\sum_{j=1}^{n} x_{j}^{2}}{n} - \left(\overline{X}\right)^{2}}$$
Space–time scanning
Space–time scanning was undertaken using SaTScan v9.6 software (Kulldorff and Information Management Services, Inc., MA, USA). SaTScan detects spatial clusters of areal units (i.e., SAs/EDs) by imposing an infinite number of overlapping circular (or elliptical) scanning windows of predetermined sizes across a defined geographic area [19]. Temporal clusters were simultaneously assessed using the scan statistic, which includes an infinite number of overlapping cylindrical windows defined by a base (spatial scan statistic) and height (temporal scan statistic) [20]. A discrete Poisson model was employed for space–time scanning to account for the high-resolution spatial scale (n = 18,488 SAs), resulting in high zero/one inflation (i.e., high numbers of SAs with 0 or 1 case). A case threshold of 10 cases (minimum) per cluster was selected to ensure that identified clusters were significant, i.e., avoidance of single-household clusters. Similarly, a maximum of 10% of the population at risk (PAR) was employed concurrently with a maximum cluster radius of 50 km to account for low case numbers within individual Small Areas. Data were aggregated at a monthly scale, with maximum cluster duration set to 3 months to account for known seasonal variation of cryptosporidiosis in Ireland. SaTScan analyses produce two primary outputs: spatial cluster location(s) (cluster centroid and diameter) and descriptive cluster data (start/end dates, total population, numbers of observed and expected cases, relative risk, and p-value). The authors have developed a novel mapping approach for representing SaTScan results, whereby all significant clusters (p < 0.05) are selected and mapped in ArcGIS (ArcGIS 10.6), with binary cluster location [i.e., Cluster Membership (0/1)] for annual space–time scans summed at the CSO SA scale. The final mapping provides a "cluster recurrence" index ranging from 0 to 10 (i.e., annual absence/presence of a cluster over the 10-year study period).
Occurrence of cryptosporidiosis infection in the Republic of Ireland (2008–2017)
The dataset comprised 4,633 confirmed cases of cryptosporidiosis from 2008 to 2017, of which 4509 cases (97%) were successfully geo-linked to a distinct spatial unit (SA/ED centroid). Overall, 1964 Electoral Divisions (58% of 3,409), 3413 Small Areas (18.3% of 18,488) and all (26/26) Irish counties were associated with at least one confirmed case. Most cases were associated with children ≤ 5 years (n = 2672, 59.3%) (Fig.
2), with a slightly higher incidence rate reported among males (53%) (Table 1).
Age and gender distributions of cryptosporidiosis cases in the Republic of Ireland (2008–2017)
Table 1 Pearson χ2 test results for cryptosporidiosis cases in the Republic of Ireland, delineated by case type (sporadic, outbreak-related, travel), and age, gender and CSO classification
As shown (Table 1), sporadic cases were statistically more likely to be male (OR 1.4, 95% CI 1.2, 1.6), ≤ 5 years of age (OR 1.5, 95% CI 1.3, 1.8), and associated with a categorically rural area (OR 2.4, 95% CI 2, 2.8). Conversely, outbreak-related cases were associated with females (OR 1.4, 95% CI 1.2, 1.7) and urban areas (OR 1.5, 95% CI 1.3, 1.9). Travel-related cases were more likely to be female (OR 1.3, 95% CI 1, 1.6), > 5 years of age (OR 2.4, 95% CI 1.5, 3.1), and resident in an urban conurbation (OR 3.6, 95% CI 2.8, 4.6). Temporal cumulative incidence rates (Fig. 3) indicate a marked annual peak in late spring (n = 1812), with a maximum incidence rate occurring during April (n = 916). Lowest incidence rates were recorded during winter months (n = 493), with the lowest incidence rate being recorded in January (n = 136). Case numbers peaked in 2017 (n = 584).
Temporal distribution of cryptosporidiosis cases in Ireland (2008–2017)
Seasonal decomposition of sporadic infection over the ten-year study period indicates a clear seasonal peak in mid-spring (April) annually (Fig. 4). Residual trends show a generally consistent annual and long-term trend with a notable peak of infection in April 2016 (Residual: + 56). Outbreak cases exhibit a similar seasonal trend to that of sporadic cases, with annual peaks occurring in April, followed by a secondary peak in September. The overall long-term trend in outbreak cases displayed a marked increase during 2011, continuing until 2014. Residuals calculated for outbreak cases point to more variation in 10-year trends, with peaks observed during the late winter/early spring months (January to March) of 2011, 2012 and 2017, while late spring/early summer peaks (April to June) were observed in 2013. A peak in outbreak-associated cases was also observed during the autumn months (October to November) of 2013.
Seasonal decomposition of cryptosporidiosis in the Republic of Ireland (2008–2017), delineated by sporadic (left) and outbreak-related cases (right)
There was an increasing trend in the number/rate of travel-associated cases, with an annual peak occurring in August/September (Fig. 5). The long-term trends varied significantly between delineated age categories, with considerably more variation noted among children ≤ 5 years of age (Fig. 6), albeit annual peaks were observed among both age cohorts during April of each year. Residuals again point to a large transmission peak (Residuals: + 22, + 34) within both sporadic and outbreak cohorts during April 2016.
Seasonal decomposition of travel-related cryptosporidiosis in the Republic of Ireland (2008–2017)
Seasonal decomposition of cryptosporidiosis in the Republic of Ireland (2008–2017), delineated by epidemiologically relevant age sub-categories
Annual decomposed patterns of infection peaked in April of each year, followed by a significantly smaller peak during September in both urban and rural areas (Fig. 7). Calculated residuals point to an infection peak in April 2016 in both urban (+ 18) and rural (+ 38) areas, consistent with trends observed among sporadic and age-delineated infection peaks.
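For readers unfamiliar with how figures of the kind reported in Table 1 are typically obtained, the sketch below shows a Yates-corrected χ² test and an odds ratio with a Woolf-type 95% confidence interval for a single 2 × 2 classification; the counts used are invented placeholders rather than the study data.

```python
# Illustrative sketch: Yates-corrected chi-square and odds ratio (Woolf 95% CI)
# for one 2x2 table, e.g. case type (sporadic vs. other) by CSO rural/urban
# classification. Counts are invented placeholders, NOT the study data.
import numpy as np
from scipy.stats import chi2_contingency

#                  rural  urban
table = np.array([[900,   470],     # sporadic cases (placeholder)
                  [260,   330]])    # non-sporadic cases (placeholder)

chi2, p, dof, expected = chi2_contingency(table, correction=True)   # Yates correction

a, b = table[0]
c, d = table[1]
odds_ratio = (a * d) / (b * c)
se_log_or = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
ci_low, ci_high = np.exp(np.log(odds_ratio) + np.array([-1.96, 1.96]) * se_log_or)

print(f"chi2 = {chi2:.1f}, p = {p:.2g}, OR = {odds_ratio:.2f} "
      f"(95% CI {ci_low:.2f}, {ci_high:.2f})")
```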
Seasonal decomposition of cryptosporidiosis in the Republic of Ireland (2008–2017), delineated by CSO urban/rural classification A significant H–H cluster of sporadic cases was observed in the midland (M) region, with a large L–L cluster identified along the eastern seaboard (E), surrounding the greater Dublin urban area and commuter belt (Fig. 8a). L–L clusters were also observed in the S and SE regions, spatially proximal to the urban conurbations of Cork, Waterford, and Limerick cities. Smaller H–H clusters of infection were observed in the S, SE and W regions of the country, consistent with an overarching urban/rural pattern. Notable L–L outbreak-related case clusters were observed in the east of the country surrounding Dublin city and in the south surrounding Limerick city (Fig. 8b). Few H–H clusters were associated with outbreak-related cases, however H–H cases identified in the midland region (M) were surrounded by L–H clusters, thus indicating potential neighbouring outliers. H–H and L–L clusters of infection in children ≤ 5 followed a broadly similar spatial pattern to that observed within the sporadic case cohort, due to the large proportion of cases from this cohort comprising the total dataset (Fig. 8c). A large H–H cluster was observed in the M region, with smaller H–H clusters again identified in S, SE and W regions. L–L clusters of infection were also consistent with sporadic case clusters and typically identified around urban areas in the S and SE of the country. The spatial predominance of infection cold spots (L–L) among people age > 5 (Fig. 8d) followed a similar pattern of infection cold spots among sporadic cases and paediatric (≤ 5 years) cases (Fig. 8c). However, the spatial predominance of infection hot spots among this cohort was markedly different to sporadic and ≤ 5-year hot spots, with smaller and more spatially dispersed hot spots identified, primarily in the midlands (region M) and SW regions. a Sporadic cryptosporidiosis case clusters and outliers determined by Anselin Local Moran's I clusters b Outbreak-related cryptosporidiosis case clusters and outliers determined by Anselin Local Moran's I clusters c Sporadic cryptosporidiosis case clusters and outliers among children aged 5 years and younger determined by Anselin Local Moran's I clusters d Sporadic cryptosporidiosis case clusters and outliers among the cohort of people age 6 years and older determined by Anselin Local Moran's I clusters Getis-Ord GI* analyses identified notable hot spots among sporadic cases in the midlands (M), east and north-east of Galway city, with smaller hot spots also evident in the midlands, south and south-east (M, S and SE) (Fig. 9a). Again, a spatially extensive cold spot was identified in the east of the country (E), encompassing the greater Dublin metropolitan urban area, and in the south and south-east (S and SE) around Waterford, Limerick and Cork cities. 
a Sporadic cryptosporidiosis case hot and cold spots determined by Getis-Ord Gi* hot-spot analysis—b Outbreak-related cryptosporidiosis hot and cold spots determined by Getis-Ord Gi* hot-spot analysis—c Sporadic cryptosporidiosis case hot and cold spots among children aged 5 years and younger determined by Getis-Ord Gi* hot-spot analysis—d Sporadic cryptosporidiosis case hot and cold spots among the cohort of people aged 6 years and older determined by Getis-Ord Gi* hot-spot analysis
The spatial predominance of hot and cold spots among children ≤ 5 years again followed a similar pattern to clustering of infection among all sporadic cases (Fig. 9c). Large hot spots were observed in the midlands and south (M and S), with a previously identified sporadic infection hot spot in the west (W) demonstrating a significantly more pronounced occurrence among the paediatric subpopulation (NE of Galway city). A large cold spot among children ≤ 5 was also observed in the greater Dublin area (E), albeit significantly reduced when compared with that observed among all sporadic infections. The spatial distribution of hot and cold spots of infection among people aged > 5 varied (Fig. 9d) when compared with the spatial distribution of hot and cold spots of sporadic infection and infection in children ≤ 5. One hot spot was identified in the SW region, which was not observed using other statistical methods or among other subcategories of infection.
Space–time clustering recurrence and cluster temporality for sporadic cryptosporidiosis cases are presented in Fig. 10, with results of year-on-year space–time scanning presented in Additional file 1: Appendix 1. Annual space-time clusters of Cryptosporidiosis in Ireland from 2008 to 2017; Appendix 2. Space-time clusters of Cryptosporidiosis in Ireland during 2008. As shown (Fig. 10), three primary hot spots were identified: south-west and east of Limerick city (SW, S, SE), and north-east of Galway city (M). Cold spots are persistent along much of the eastern seaboard, and particularly around the larger urban conurbations of greater Dublin and Cork city, in addition to significant areas of the western coastline. The temporal window for space–time clusters mirrors the general seasonal distribution of cryptosporidiosis infection (Sect. 3.2), with peak cluster identification occurring from March to June and peaking in April.
Space–time "cluster recurrence" index (0–10) for sporadic cryptosporidiosis cases in the Republic of Ireland, 2008–2017
Significantly lower levels of space–time clustering were found among outbreak-related cases (Fig. 11), with the largest hot spots located in western and midland regions (M, W), and a maximum cluster recurrence of 30% (i.e., geographic area included in 3 identified clusters over 10 annual iterations). Two additional space–time clusters were identified to the north-east of Cork city (S) and in County Donegal (N). Most (8/9) outbreak-related clusters were observed from March to June, with one cluster occurring during October/November (2013). Cluster index mapping for the ≤ 5-year sub-population mirrored that of sporadic cases, with three primary hot spots identified; again, a large area located north-east of Galway city (M), and two "secondary" (i.e., lower cluster recurrence indices) areas located south-west and south-east of Limerick city (SW, S) (Fig. 12). Results for the sub-population > 5 years point to a lower level of clustering, with hot spots located south-west of Limerick city (SW), the Midlands (M) and south-east (SE) (Fig. 13).
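Computationally, the "cluster recurrence" index mapped in Figs. 10, 11, 12 and 13 is a per-Small-Area sum of annual 0/1 cluster-membership indicators. A minimal sketch of that bookkeeping step follows; the input file layout (one table of significant clusters per annual SaTScan run) and all field names are assumptions rather than the authors' actual outputs.

```python
# Illustrative sketch: building the 0-10 "cluster recurrence" index from ten
# annual space-time scans. File layout and column names are hypothetical.
import pandas as pd

all_sas = pd.read_csv("small_area_lookup.csv")["small_area_id"]
recurrence = pd.Series(0, index=all_sas, name="cluster_recurrence")

for year in range(2008, 2018):
    clusters = pd.read_csv(f"satscan_clusters_{year}.csv")    # one row per SA in a cluster
    significant = clusters[clusters["p_value"] < 0.05]
    in_cluster = set(significant["small_area_id"])
    recurrence[recurrence.index.isin(in_cluster)] += 1        # 0/1 membership per year

# recurrence now ranges 0-10; values of 8 or more correspond to the
# "high recurrence" regions described above.
recurrence.to_csv("cluster_recurrence_index.csv")
```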
Space–time "cluster recurrence" index (0–10) for outbreak-related cryptosporidiosis cases in the Republic of Ireland, 2008–2017 Space–time "cluster recurrence" index (0–10) for sporadic cryptosporidiosis cases in the Republic of Ireland, 2008–2017 (Delineated by epidemiologically relevant age category—Population ≤ 5 years) Space–time "cluster recurrence" index (0–10) for sporadic cryptosporidiosis cases in the Republic of Ireland, 2008–2017 (Delineated by epidemiologically relevant age category—Population > 5 years) Occurrence of sporadic and outbreak-associated cryptosporidiosis Cryptosporidiosis exhibits a relatively wide geographical distribution in Ireland with 58% and 18.3% of Electoral Divisions and Small Areas associated with at least one confirmed case during the study period, respectively. Crude Incidence Rates (CIRs) of infection indicate a moderately increasing trend, ranging from 9.8/100,000 in 2008 to 12.4/100,000 in 2017 [10]. Most (59.3%) sporadic cases were associated with children ≤ 5 years, which concurs with several previous studies [3, 7]. Within the ≤ 5 years cohort, cases were more frequently associated with male children (OR 1.3873), potentially reflecting the tendency of male children to mount weaker immune responses [21], an enhanced susceptibility to environmental exposures via gender-related outdoor activities [22], or a gender-related bias in healthcare-seeking behaviours [23]. Conversely, female children were statistically associated with outbreak-related cryptosporidiosis, potentially reflecting higher levels of direct contact (and subsequent transmission) between parents/family members and female children [24]. A recent small-scale investigation of the regional epidemiology of cryptosporidiosis in County Cork, Ireland, demonstrated moderately increased infection rates among 20–34-year olds, suggesting likely anthroponotic transmission via caregiver contact with infected children [25]. Geographically, most sporadic cases (65.8%) occurred in categorically rural areas (χ2 = 110.493, p < 0.001; Table 1), where approximately 37.3% of the Irish populace reside [26]. A previous Scottish study by Pollock et al. similarly found C. parvum infection was associated with areas characterised by lower human population density and a higher ratio of farms to humans, both indicators of rurality [27]. While the current study represents the first nationwide study of the spatiotemporal epidemiology of cryptosporidiosis in Ireland, this finding was expected, and likely attributable to increased exposure to sources of Cryptosporidium spp. oocysts in rural areas, including farmyard animals [28], direct exposure to contaminated surface waters [29] and the use of groundwater as a drinking water source [6]. Conversely, urban areas exhibited a significantly higher secondary (OR 1.5383) and travel-related (OR 3.5742) case occurrence, likely indicative of C. hominis infections as opposed to the agriculturally (rural) associated C. parvum, however, as Cryptosporidium spp. is not identified within the Irish disease reporting system, this is somewhat speculative. Seasonal decomposition points to an overall increasing temporal trend over the ten-year study period (Fig. 4), consistent with previously reported trends in the west of Ireland during 2004–2007 [30]. 
Specifically, the annual peak found during April is consistent with previously reported regional peaks (March/April) [30], in addition to those reported in Scotland (April/May) [27], likely associated with agricultural cycles in temperate regions i.e. lambing/calving and manure spreading. While not reported in the current study, seasonal patterns may vary among differing Cryptosporidium species; for example, C. hominis is more prevalent during autumn in the UK and New Zealand (increased travel and school/childcare attendance), whereas C. parvum is more typically encountered during spring in Canada, Ireland and The Netherlands [9]. The secondary peak observed among outbreak-related cases during September (Fig. 4) is consistent with the bimodal peaks observed in C. hominis in Scotland in August and October [27], and may reflect the increase in national/international travel and children returning to childcare/school after summer break. Seasonal decomposition also identified several notable deviations (i.e., residual peaks) from the overarching temporal trend which merit closer investigation, particularly regarding dynamic drivers of exposure/transmission such as extreme weather events [28, 31]. A marked positive residual was identified during April 2016 (Fig. 4), initiating further exploration with respect to dynamic meteorological events, particularly in light of severe flooding experienced across Ireland and the UK [32]. Winter 2015/2016 was characterised by a succession (n = 6) of Atlantic storms across Ireland, resulting in exceptional and widespread flooding with all synoptic weather stations reporting rainfall volumes significantly above their Long-Term Average (LTA) [32]. Recent work by Boudou et al. have shown that excess cases of cryptosporidiosis were widespread during and after the flood period, with areas characterised by the presence of a surface water body exhibiting significantly higher incidence rates (OR 1.363; p < 0.001) [32]. Time-series modelling of the event presented a clear association between rainfall, surface water discharge, groundwater levels and infection incidence, with lagged associations from 16 to 20 weeks particularly strong, thus indicating a link between infection peaks (April 2016) and the flood event which began approximately 18 weeks earlier [32]. Thus, it was concluded that increases in storm water, soil saturation and surface runoff increased pathogen mobility for a significant period, thus exacerbating transmission of cryptosporidiosis both directly (i.e., contaminated 'raw' water and food) and indirectly (i.e., long-term soil saturation) [32]. Similarly, a cryptosporidiosis outbreak which occurred during August 2013 in Halle, Germany, began six weeks after the river Saale inundated the floodplain and parts of the city centre [3], thus emphasising the (lagged) impact of local meteorological conditions on the incidence of infection. Spatial autocorrelation and Hot-Spot analysis Incorporating a spatial dimension into investigations of infectious disease epidemiology is of primary importance considering the spatial variation of environmental exposure such as land use, local climate, and socioeconomic status, particularly in Ireland which has previously been described as "the perfect storm" with regard to potential gastroenteric infection risk factors [33]. Results of Anselin Local Moran's I statistics and the Getis Ord GI* statistic provided relatively similar spatial patterns. 
High incidence (H-Hs) clusters were identified in the Irish Midlands (M), a predominantly rural area with a high level of dependence on pastoral agriculture and "private infrastructure" (e.g., one-off housing with on-site wastewater treatment and domestic water supplies). Several previous studies have documented strong associations between cryptosporidiosis and cattle density [27, 34]. Similarly, a study from central Wisconsin previously found the incidence of endemic diarrhoeal infections significantly higher in areas characterised by elevated septic tank (OR 1.22) and private water supply (OR 6.18) densities among a population-based cohort [35]. Conversely, low incidence (L-L) clusters were primarily located in the vicinity of Ireland's capital (Dublin) and other relatively large cities (Waterford, Cork, Limerick, Galway), thus likely highlighting the protective effect of urban living within the Irish context, where reduced environmental exposure to pathogen sources coupled with reduced pathogen transport (i.e., treated drinking water supply) may reduce the risk of exposure and subsequent infection [36]. Conversely, recent studies have shown rates of cryptosporidiosis are typically higher in urban areas characterised by elevated human population densities, for example Cohen et al. previously reported that higher population density and above average household sizes were associated with increased odds of reported cases of cryptosporidiosis in Massachusetts [37]. Likewise, Greenwood & Reid have found that most cryptosporidiosis clusters identified across Queensland, Australia from 2001 to 2015 centred on major and regional cities [38]. Both geostatistical techniques suggest a disparity exists with respect to outbreak-related clustering over the 10-year study period, as they relate to clustering of sporadic cases (Figs. 8a, b, 9a, b), with outbreak-related clusters occurring in the north Midlands and "border area", regions traditionally characterised by relatively low population densities. This merits further investigation within the context of population age structure, household size and domestic water source, along with close monitoring and surveillance by the relevant Departments of Public Health. Space–Time scanning and cluster recurrence Space–time scan statistics detect temporally-specific clusters characterised by a significantly higher observed case number than expected (e.g., space–time randomness not present), based on calculated baseline incidence rates [20] with the approach employing a 3-dimensional (cylindrical) scanning window comprising both height (time) and space (geographic area) [19]. Over the past decade, space–time scan statistics have been recognised as a powerful tool for endemic disease surveillance and early outbreak detection [39], however to the authors knowledge, this represents the first time it has been applied to infectious disease incidence in Ireland. A total of 69 space–time clusters (≥ 10 confirmed cases) were identified over the 10-year study period, of which 55 (79.7%) were clusters of sporadic infection, ranging from a minimum of 4 (7.3%) during 2017 to a maximum of 7 (12.7%) during 2009. No statistical association was found between annual sporadic and outbreak-related cluster number during the study period, however development of the "cluster recurrence" index (e.g., Figs. 10, 11, 12, 13) permits identification of discernible spatial and temporal patterns defining the formation of clusters across the decade-long period. 
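For intuition on the cluster-level quantities reported by the space–time scans (observed versus expected cases and relative risk under the discrete Poisson model), a small worked sketch follows; every number in it is an invented placeholder rather than a study result.

```python
# Illustrative sketch: expected count and relative risk for a single space-time
# cluster under a discrete Poisson model (SaTScan-style). Placeholder numbers only.
total_cases = 4509                 # geocoded cases over the 10-year period
total_person_months = 4.6e6 * 120  # approximate national population x 120 months

# Hypothetical 3-month cluster covering ~40,000 residents with 25 observed cases
cluster_person_months = 40_000 * 3
observed = 25

expected = total_cases * cluster_person_months / total_person_months

# Relative risk: risk inside the scanning window vs. risk outside it
relative_risk = (observed / expected) / ((total_cases - observed) / (total_cases - expected))

print(f"expected = {expected:.2f}, RR = {relative_risk:.1f}")
```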
Three regions exhibited particularly recurrent space–time clusters of infection, with occurrences during ≥ 8 out of 10 years, namely south-west and east of Limerick city (SW, S, SE), and north-east of Galway city (M), with neither urban conurbation actually located within a high recurrence region. The spatiotemporal frequency of space–time clusters suggests the presence of persistent reservoirs in these areas, thus maintaining community infection and/or transmission pathways [38]. The proximity of large urban centres to each high-recurrence region may potentially reflect relatively narrow transitional zones between urban fabric and populated rural regions, i.e., rural commuter belts which remain un-serviced with respect to municipal wastewater treatment and/or drinking supplies. Additionally, all three regions are predominantly underlain by karstified carboniferous limestone aquifers [40], which have previously been associated with the presence of Cryptosporidium spp. in private and small public drinking water supplies [12, 41]. Conversely, the Greater Dublin area, characterised by a large urban commuter belt, spatially extensive consolidated bedrocks and high levels of municipal water and wastewater infrastructure, did not exhibit any space–time clusters over the study period.
A significant majority of space–time clusters occurred over the 4-month period March–June, thus mirroring findings from the overall case cohort, and further highlighting the likely association between agricultural cycles and the incidence of infection in temperate regions including Ireland, Scotland and New Zealand [27, 42]. Additionally, Lal et al. have signalled a need to study the effect of spatial and temporal variations in ecological and social risk factors on the incidence of cryptosporidiosis, with specific emphasis on the potential for socioeconomic disadvantage to amplify disease risk within populations, e.g., in areas of low educational attainment and lower income levels, which are often associated with rural living [28].
From a public-health surveillance perspective, identification of 55 space–time clusters of sporadic cryptosporidiosis infection over a 10-year period represents a concern, while underscoring the major challenges involved in decreasing the incidence of infection via enhanced surveillance and subsequent intervention. For example, during 2008, a spatially restricted space–time cluster identified in the northern Midlands (Cluster 2, Additional file 1: Appendix) was characterised by almost 18 times more cases of infection than would be expected (RR 17.95) over a three-month period (February–April), with several identified space–time clusters occurring over time periods as short as 4 weeks. As such, this level of clustering may suggest the need for new surveillance and/or analytical methods to elucidate hitherto unidentified sources and pathways of infection, and to identify space–time clusters while they exist, i.e., real-time or prospective scanning [43]. It is important to note that a lack of species information, and particularly the inability to discern between C. parvum and C. hominis, the two most frequently encountered Cryptosporidium species in Ireland, represents a study limitation. As previously outlined, Pollock et al. found C. parvum infection to be associated with lower population density and a higher ratio of farms to humans, indicators of rurality, while C. hominis was more likely to be found in the more urban area of southern Scotland [27].
Speciation would thus permit closer elucidation of sociodemographic influences on rural/urban distribution. Further investigation is required to elucidate potential sources and pathways of infection, with particular regard to livestock densities, climate, hydrogeology and socioeconomic status.
In conclusion, despite its status as a notifiable communicable disease in Ireland, cryptosporidiosis is widely regarded as under-reported both nationally and at a broader European level. The spatiotemporal epidemiology of cryptosporidiosis in Ireland reflects the diverse population and geography of the country, albeit with a markedly higher rate of occurrence in rural areas, likely due to the ubiquity of Cryptosporidium spp. sources (e.g., cattle) and pathways (e.g., karstic limestone bedrocks). The elevated burden among children ≤ 5 years is likely related to both immunological status and specific routes of exposure, and warrants further study. The presented study represents a significant advance in efforts to investigate the spatiotemporal epidemiology of cryptosporidiosis: an improved understanding of its spatiotemporal occurrence, clustering mechanisms, levels of recurrence, and associated drivers, pathways and receptors may further elucidate routes of infection and guide public-health interventions.
Due to the sensitive nature of the study data, datasets are not publicly available. For further information related to data acquisition, please contact the corresponding author, Dr Paul Hynds (email: Paul.Hynds@tudublin.ie; phone: 083 825 6888).
Nime FA, Burek JD, Page DL, Holscher MA, Yardley JH. Acute enterocolitis in a human being infected with the protozoan Cryptosporidium. Gastroenterology. 1976;70(4):592–8. Fayer R, Ungar BL. Cryptosporidium spp. and cryptosporidiosis. Microbiol Rev. 1986;50(4):458. Chalmers RM, Cacciò S. Towards a consensus on genotyping schemes for surveillance and outbreak investigations of Cryptosporidium, Berlin, June 2016. Eurosurveillance. 2016;21(37):30338. Feng Y, Ryan UM, Xiao L. Genetic diversity and population structure of Cryptosporidium. Trends Parasitol. 2018;34(11):997–1011. Chappell CL, Okhuysen PC, Langer-Curry R, Widmer G, Akiyoshi DE, Tanriverdi S, Tzipori S. Cryptosporidium hominis: experimental challenge of healthy adults. Am J Trop Med Hyg. 2006;75(5):851–7. Chique C, Hynds P, Andrade L, Burke L, Morris D, Ryan MP, O'Dwyer J. Cryptosporidium spp. in groundwater supplies intended for human consumption: a descriptive review of global prevalence, risk factors and knowledge gaps. Water Res. 2020;115726. Putignani L, Menichella D. Global distribution, public health and clinical impact of the protozoan pathogen Cryptosporidium. Interdiscip Perspect Infect Dis. 2010. Thompson RA, Koh WH, Clode PL. Cryptosporidium—what is it? Food Waterborne Parasitol. 2016;4:54–61. European Centre for Disease Prevention and Control. Cryptosporidiosis. In: ECDC. Annual epidemiological report for 2017. Stockholm: ECDC; 2019. Health Protection Surveillance Centre (HPSC). (2019) Cryptosporidiosis in Ireland, 2018. Dublin, Ireland. https://www.hpsc.ie/a-z/gastroenteric/cryptosporidiosis/publications/epidemiologyofcryptosporidiosisinirelandannualreports/. Chyzheuskaya A, Cormican M, Srivinas R, O'Donovan D, Prendergast M, O'Donoghue C, Morris D. Economic assessment of waterborne outbreak of cryptosporidiosis. Emerg Infect Dis. 2017;23(10):1650.
Zintl A, Proctor AF, Read C, Dewaal T, Shanaghy N, Fanning S, Mulcahy G. The prevalence of Cryptosporidium species and subtypes in human faecal samples in Ireland. Epidemiol Infect. 2009;137(2):270–7. Cummins E, Kennedy R, Cormican M. Quantitative risk assessment of Cryptosporidium in tap water in Ireland. Sci Total Environ. 2010;408(4):740-753.12. Cleveland RB, Cleveland WS, McRae JE, Terpenning I. STL: a seasonal-trend decomposition. J Off Stat. 1990;6(1):3–73. Anselin L, Syabri I, Smirnov O. Visualizing multivariate spatial correlation with dynamically linked windows. In Proceedings, CSISS Workshop on New Tools for Spatial Data Analysis, Santa Barbara. 2002. Mao Y, Zhang N, Zhu B, Liu J, He R. A descriptive analysis of the Spatio-temporal distribution of intestinal infectious diseases in China. BMC Infect Dis. 2019;19(1):766. Guo C, Du Y, Shen SQ, Lao XQ, Qian J, Ou CQ. Spatiotemporal analysis of tuberculosis incidence and its associated factors in mainland China. Epidemiol Infect. 2017;145(12):2510–9. Varga C, Pearl DL, McEwen SA, Sargeant JM, Pollari F, Guerin MT. Area-level global and local clustering of human Salmonella Enteritidis infection rates in the city of Toronto, Canada, 2007–2009. BMC Infect Dis. 2015;15(1):1–13. Kulldorff M, Heffernan R, Hartman J, Assunçao R, Mostashari F. A space–time permutation scan statistic for disease outbreak detection. PLoS Med. 2005;2(3):e59. Linton SL, Jennings JM, Latkin CA, Gomez MB, Mehta SH. Application of space-time scan statistics to describe geographic and temporal clustering of visible drug activity. J Urban Health. 2014;91(5):940–56. Muenchhoff M, Goulder PJ. Sex differences in pediatric infectious diseases. J Infect Dis. 2014;209(suppl_3):S120–6. Jarman AF, Long SE, Robertson SE, Nasrin S, Alam NH, McGregor AJ, Levine AC. Sex and gender differences in acute pediatric diarrhea: a secondary analysis of the Dhaka study. J Epidemiol Global Health. 2018;8(1):42–7. Sarker AR, Sultana M, Mahumud RA, Sheikh N, Van Der Meer R, Morton A. Prevalence and health care–seeking behavior for childhood diarrheal disease in Bangladesh. Global Pediatric Health, 2016;3: 2333794X16680901. Guerra-Silveira F, Abad-Franch F. Sex bias in infectious disease epidemiology: patterns and processes. PLoS ONE. 2013;8(4):e62390. O'Leary JK, Blake L, Corcoran D, Elwin K, Chalmers R, Lucey B, Sleator RD. Cryptosporidium spp surveillance and epidemiology in Ireland: a longitudinal cohort study employing duplex real-time PCR based speciation of clinical cases. J Clin Pathol. 2020;73(11):758–61. Central Statistics Office (CSO). Census of Population, 2016 (Ireland)—Profile 2 Population Distribution and Movements. 2019. https://www.cso.ie/en/releasesandpublications/ep/p-cp2tc/cp2pdm/pd/. Pollock KGJ, Ternent HE, Mellor DJ, Chalmers RM, Smith HV, Ramsay CN, Innocent GT. Spatial and temporal epidemiology of sporadic human cryptosporidiosis in Scotland. Zoonoses Public Health. 2010;57(7–8):487–92. Lal A, Hales S, French N, Baker MG. Seasonality in human zoonotic enteric diseases: a systematic review. PLoS ONE. 2012;7(4):e31883. Hamilton KA, Waso M, Reyneke B, Saeidi N, Levine A, Lalancette C, et al. Cryptosporidium and Giardia in wastewater and surface water environments. J Environ Qual. 2018;47(5):1006–23. Callaghan M, Cormican M, Prendergast M, Pelly H, Cloughley R, Hanahoe B, O'Donovan D. Temporal and spatial distribution of human cryptosporidiosis in the west of Ireland 2004–2007. Int J Health Geogr. 2009;8(1):1–9. Britton E, Hales S, Venugopal K, Baker MG. 
The impact of climate variability and change on cryptosporidiosis and giardiasis rates in New Zealand. J Water Health. 2010;8(3):561–71. Boudou M, ÓhAiseadha C, Garvey P, O'Dwyer J, Hynds P. Flood hydrometeorology and gastroenteric infection: the Winter 2015–2016 flood event in the Republic of Ireland. J Hydrol. 2021;599:126376. O'Dwyer J, Hynds PD, Byrne KA, Ryan MP, Adley CC. Development of a hierarchical model for predicting microbiological contamination of private groundwater supplies in a geologically heterogeneous region. Environ Pollut. 2018;237:329–38. Luffman I, Tran L. Risk factors for E. coli O157 and cryptosporidiosis infection in individuals in the karst valleys of east Tennessee, USA. Geosciences. 2014;4(3):202–18. Borchardt MA, Chyou PH, DeVries EO, Belongia EA. Septic system density and infectious diarrhea in a defined population of children. Environ Health Perspect. 2003;111(5):742–8. Lal A, Dobbins T, Bagheri N, Baker MG, French NP, Hales S. Cryptosporidiosis risk in New Zealand children under 5 years old is greatest in areas with high dairy cattle densities. EcoHealth. 2016;13(4):652–60. Cohen SA, Egorov AI, Jagai JS, Matyas BT, DeMaria A Jr, Chui KK, et al. The SEEDs of two gastrointestinal diseases: socioeconomic, environmental, and demographic factors related to cryptosporidiosis and giardiasis in Massachusetts. Environ Res. 2008;108(2):185–91. Greenwood KP, Reid SA. Clustering of cryptosporidiosis in Queensland, Australia, is not defined temporally or by spatial diversity. Int J Parasitol. 2020;50(3):209–16. Tango T, Takahashi K, Kohriyama K. A space–time scan statistic for detecting emerging outbreaks. Biometrics. 2011;67(1):106–15. Woodcock NH, Strachan RA. Geological history of Britain and Ireland. John Wiley & Sons; 2009. Darnault CJ, Peng Z, Yu C, Li B, Jacobson AR, Baveye PC. Movement of Cryptosporidium parvum oocysts through soils without preferential pathways: exploratory test. Front Environ Sci. 2017;5:39. Khan A, Shaik JS, Grigg ME. Genomics and molecular epidemiology of Cryptosporidium species. Acta Trop. 2018;184:1–14. Jones RC, Liberatore M, Fernandez JR, Gerber SI. Use of a prospective space-time scan statistic to prioritize shigellosis case investigations in an urban jurisdiction. Public Health Rep. 2006;121(2):133–9. The authors would like to acknowledge the CIDR Review Committee for data acquisition and the Royal College of Physicians of Ireland (RCPI) Research Ethics Review Committee. The authors also wish to acknowledge the Irish Research Council (COALESCE Research Programme) and Irish Environmental Protection Agency (STRIVE Research Programme) for provision of research funding. This work was funded by the Environmental Protection Agency (EPA) under the STRIVE Research Programme (2018-W-MS-33) and the Irish Research Council under the COALESCE Funding Programme (COALESCE/2019/53). Environmental Sustainability and Health Institute (ESHI), Technological University Dublin, Greenway Hub, Grangegorman, Dublin 7, D07 H6K8, Republic of Ireland M. Boudou, E. Cleary & Paul Hynds Department of Public Health, Health Service Executive (HSE), Dr. Steevens' Hospital, Dublin 8, Republic of Ireland C. ÓhAiseadha Health Protection Surveillance Centre, 25 Middle Gardiner Street, Dublin 1, Republic of Ireland P. Garvey & P. McKeown School of Biological, Earth and Environmental Sciences, Environmental Research Institute (ERI), University College Cork, Cork, Republic of Ireland J. 
O'Dwyer Irish Centre for Research in Applied Geosciences (iCRAG), University College Dublin, Dublin 4, Republic of Ireland J. O'Dwyer & Paul Hynds M. Boudou E. Cleary P. Garvey P. McKeown Paul Hynds MB: Methodology, software, validation, formal analysis, writing, preparation of Figures 1, 2, 3, 4, 5, 6, 7, 10, 11, 12, 13. EC: Preparation of Figs. 8, 9, Writing. CO: resources, data curation, writing—review and editing. PG: resources, data curation, writing—review and editing. PM: resources, data curation, writing—review and editing. JO: Conceptualization, supervision, funding acquisition, writing—review and editing. PH: conceptualization, supervision, funding acquisition, writing—review and editing. All authors read and approved the final manuscript. Correspondence to M. Boudou or Paul Hynds. With the exception of age, gender and minimal clinical data such as date of onset, no personal data, as defined by the Irish Health Research Regulations, were used for this research. The employed anonymisation protocol is considered equivalent to irreversible anonymisation, appropriate for release to academic researchers, as approved by the Irish Data Protection Commissioner, and as such, informed consent was considered unnecessary, as set out in the Research Ethical Approval documents provided by the Royal College of Physicians of Ireland, comprising both data usage and epidemiological methodologies employed. Additionally, the authors can confirm that all methods and analyses have been carried out in accordance with the International Ethical Guidelines for Epidemiological Studies as stipulated in the conditions of the aforementioned project Ethical Approval document. All authors consent with study submission for publication. The authors declare they have no competing interests. Appendix 1. Annual space-time clusters of Cryptosporidiosis in Ireland from 2008 to 2017; Appendix 2. Space-time clusters of Cryptosporidiosis in Ireland during 2008. Boudou, M., Cleary, E., ÓhAiseadha, C. et al. Spatiotemporal epidemiology of cryptosporidiosis in the Republic of Ireland, 2008–2017: development of a space–time "cluster recurrence" index. BMC Infect Dis 21, 880 (2021). https://doi.org/10.1186/s12879-021-06598-3 Cryptosporidiosis Cryptosporidium Spatiotemporal epidemiology
Is information entropy the same as thermodynamic entropy?
In one of his most popular books, Guards! Guards!, Terry Pratchett makes an entropy joke: "Knowledge equals Power, which equals Energy, which equals Mass." Pratchett is a fantasy comedian, and every third phrase in his book is a joke, so there is no good reason to believe it. Pratchett uses that madness to suggest that a huge library exerts a tremendous gravitational pull. I work with computers, mostly with encryption. My work colleagues believe Terry Pratchett's statement because of entropy. I, on the other hand, believe it is incorrect, since the entropy of information is a different entropy from the one used in thermodynamics. Am I correct? And if so, why do we use the same name (entropy) to mean two different things? Also, what would be a good way to explain that these two "entropies" are different things to non-scientists (i.e. people without a chemistry or physics background)?
thermodynamics entropy information
grochmal
Maxwell's Demon thought experiment proves you wrong. As shown by Landauer, the fact that reversible computations and interaction-free measurements are possible means that you can lower the entropy of a thermodynamic system at the price of increasing the information entropy of the computer's memory by at least the same amount. – Count Iblis Jun 17 '16 at 21:26
Trying to identify in exactly which ways a statement that was never meant to hold true in the scientific sense fails is a fruitless exercise. Go and enjoy Pratchett's wonderful books instead of debating the scientific truth of unscientific statements. – ACuriousMind♦ Jun 17 '16 at 21:26
von Neumann to Shannon: "You should call it entropy, for two reasons. In the first place your uncertainty function has been used in statistical mechanics under that name, so it already has a name. In the second place, and more important, nobody knows what entropy really is, so in a debate you will always have the advantage." – lemon Jun 17 '16 at 21:42
@lemon: You will have the advantage until someone says $dS=dQ_{rev}/T$ and then you either understand thermodynamics for good or you are out of arguments. – CuriousOne Jun 17 '16 at 22:44
The difference depends upon whether you have knowledge of prepared quantum states of a system or not. For classical thermodynamics you have no such knowledge, so entropy is "thermal." If you have knowledge of microstates or quantum states then it is information entropy or entanglement entropy. In the end thermal entropy is information entropy, and all of this is ultimately entanglement entropy. – Lawrence B. Crowell Jun 17 '16 at 23:42
There are 100 ways that I can have one heads-up and the rest tails-up; there are $100\cdot99/2$ ways to have two heads; there are $100\cdot99\cdot98/6$ ways to have three heads; there are about $10^{28}$ ways to have forty heads, and $10^{29}$ ways to have fifty heads. If you drop a jar of pennies you're not going to find them 3% heads up, any more than you're going to get struck by lightning while you're dealing yourself a royal flush: there are just too many other alternatives.
The connection to thermodynamics comes when not all of my microstates have the same energy, so that my system can exchange energy with its surroundings by having transitions. For instance, suppose my 100 pennies aren't on the floor of my kitchen, but they're in the floorboard of my pickup truck with the out-of-balance tire. The vibration means that each penny has a chance of flipping over, which will tend to drive the distribution towards 50-50. But if there is some other interaction that makes heads-up more likely than tails-up, then 50-50 isn't where I'll stop. Maybe I have an obsessive passenger who flips over all the tails-up pennies. If the shaking and random flipping over is slow enough that he can flip them all, that's effectively "zero temperature"; if the shaking and random flipping is so vigorous that a penny usually flips itself before he corrects the next one, that's "infinite temperature." (This is actually part of the definition of temperature.)
The Boltzmann entropy I used above, $$ S_B = k_B \ln \Omega, $$ is exactly the same as the Shannon entropy, $$ S_S = k_S \ln \Omega, $$ except that Shannon's constant is $k_S = (\ln 2)^{-1}\rm\,bit$, so that a system with ten bits of information entropy can be in any one of $\Omega=2^{10}$ states.
This is a statement with physical consequences. Suppose that I buy a two-terabyte SD card (apparently the standard supports this) and I fill it up with forty hours of video of my guinea pigs turning hay into poop. By reducing the number of possible states of the SD card from $\Omega=2\times2^{40}\times8$ to one, Boltzmann's definition tells me I have reduced the thermodynamic entropy of the card by $\Delta S = 2.6\rm\,meV/K$. That entropy reduction must be balanced by an equal or larger increase in entropy elsewhere in the universe, and if I do this at room temperature that entropy increase must be accompanied by a heat flow of $\Delta Q = T\Delta S = 0.79\rm\,eV = 10^{-19}\,joule$.
And here we come upon practical, experimental evidence for one difference between information and thermodynamic entropy. Power consumption while writing an SD card is milliwatts or watts, and transferring my forty-hour guinea pig movie will not be a brief operation --- that extra $10^{-19}\rm\,J$, enough energy to drive a single infrared atomic transition, that I have to pay for knowing every single bit on the SD card is nothing compared to the other costs for running the device.
The information entropy is part of, but not nearly all of, the total thermodynamic entropy of a system. The thermodynamic entropy includes state information about every atom of every transistor making up every bit, and in any bi-stable system there will be many, many microscopic configurations that correspond to "on" and many, many distinct microscopic configurations that correspond to "off."
CuriousOne asks, "How comes that the Shannon entropy of the text of a Shakespeare folio doesn't change with temperature?"
This is because any effective information storage medium must operate at effectively zero temperature --- otherwise bits flip and information is destroyed. For instance, I have a Complete Works of Shakespeare which is about 1 kg of paper and has an information entropy of maybe a few megabytes. This means that when the book was printed there was a minimum extra energy expenditure of $10^{-25}\rm\,J = 1\,\mu eV$ associated with putting those words on the page in that order rather than any others. Knowing what's in the book reduces its entropy. Knowing whether the book is sonnets first or plays first reduces its entropy further. Knowing that "Trip away/Make no stay/Meet me all by break of day" is on page 158 reduces its entropy still further, because if your brain is in the low-entropy state where you know Midsummer Night's Dream you know that it must start on page 140 or 150 or so. And me telling you each of these facts and concomitantly reducing your entropy was associated with an extra energy of some fraction of a nano-eV, totally lost in my brain metabolism, the mechanical energy of my fingers, the operation energy of my computer, the operation energy of my internet connection to the disk at the StackExchange data center where this answer is stored, and so on.
If I raise the temperature of this Complete Works from 300 K to 301 K, I raise its entropy by $\Delta S = \Delta Q/T = 1\,\rm kJ/K$, which corresponds to many yottabytes of information; however the book is cleverly arranged so that the information that is disorganized doesn't affect the arrangements of the words on the pages. If, however, I try to store an extra megajoule of energy in this book, then somewhere along its path to a temperature of 1300 kelvin it will transform into a pile of ashes. Ashes are high-entropy: it's impossible to distinguish ashes of "Love's Labours Lost" from ashes of "Timon of Athens."
The information entropy --- which has been removed from a system where information is stored --- is a tiny subset of the thermodynamic entropy, and you can only reliably store information in parts of a system which are effectively at zero temperature.
A monoatomic ideal gas of, say, argon atoms can also be divided into subsystems where the entropy does or does not depend on temperature. Argon atoms have at least three independent ways to store energy: translational motion, electronic excitations, and nuclear excitations. Suppose you have a mole of argon atoms at room temperature. The translational entropy is given by the Sackur-Tetrode equation, and does depend on the temperature. However, the Boltzmann factor for the first excited state at 11 eV is $$ \exp\frac{-11\rm\,eV}{k\cdot300\rm\,K} = 10^{-201} $$ and so the number of argon atoms in the first (or higher) excited states is exactly zero and there is zero entropy in the electronic excitation sector. The electronic excitation entropy remains exactly zero until the Boltzmann factors for all of the excited states add up to $10^{-24}$, so that there is on average one excited atom; that happens somewhere around the temperature $$ T = \frac{-11\rm\,eV}{k\,\ln 10^{-24}} \approx 2500\rm\,K. $$ So as you raise the temperature of your mole of argon from 300 K to 500 K the number of excited atoms in your mole changes from exactly zero to exactly zero, which is a zero-entropy configuration, independent of the temperature, in a purely thermodynamic process.
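As a quick, purely illustrative numerical check of the argon estimates above (standard constants; the sensitivity to the exact excitation energy is shown because the actual argon 4s levels sit a little above the rounded 11 eV figure):

```python
# Quick numerical check of the argon estimates above (illustrative aside only).
import numpy as np

k = 8.617e-5        # Boltzmann constant, eV/K
N_A = 6.022e23      # atoms per mole

def boltzmann_factor(E_eV, T_K):
    return np.exp(-E_eV / (k * T_K))

# Occupation of the first excited state at room temperature is absurdly small
# compared with 1/N_A ~ 1.7e-24, whichever excitation energy is used:
for E in (11.0, 11.8):      # rounded value used above vs. actual 4s level
    print(f"E = {E} eV: exp(-E/kT) at 300 K ~ {boltzmann_factor(E, 300.0):.1e}")

# Temperature at which, on average, one atom per mole is excited:
# N_A * exp(-E/kT) = 1  =>  T = E / (k * ln N_A)
for E in (11.0, 11.8):
    print(f"E = {E} eV: one excited atom per mole near T ~ {E / (k * np.log(N_A)):.0f} K")
```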
Likewise, even at tens of thousands of kelvin, the entropy stored in the nuclear excitations is zero, because the probability of finding a nucleus in the first excited state around 2 MeV is many orders of magnitude smaller than the number of atoms in your sample. Likewise, the thermodynamic entropy of the information in my Complete Works of Shakespeare is, if not zero, very low: there are a small number of configurations of text which correspond to a Complete Works of Shakespeare rather than a Lord of the Rings or a Ulysses or a Don Quixote made of the same material with equivalent mass. The information entropy ("Shakespeare's Complete Works fill a few megabytes") tells me the minimum thermodynamic entropy which had to be removed from the system in order to organize it into a Shakespeare's Complete Works, and an associated energy cost with transferring that entropy elsewhere; those costs are tiny compared to the total energy and entropy exchanges involved in printing a book. As long as the temperature of my book stays substantially below 506 kelvin, the probability of any letter in the book spontaneously changing to look like another letter or like an illegible blob is zero, and changes in temperature are reversible. This argument suggests, by the way, that if you want to store information in a quantum-mechanical system you need to store it in the ground state, which the system will occupy at zero temperature; therefore you need to find a system which has multiple degenerate ground states. A ferromagnet has a degenerate ground state: the atoms in the magnet want to align with their neighbors, but the direction which they choose to align is unconstrained. Once a ferromagnet has "chosen" an orientation, perhaps with the help of an external aligning field, that direction is stable as long as the temperature is substantially below the Curie temperature --- that is, modest changes in temperature do not cause entropy-increasing fluctuations in the orientation of the magnet. You may be familiar with information-storage mechanisms operating on this principle. rob♦rob $\begingroup$ @CuriousOne It will if you change the temperature enough; see edit. $\endgroup$ – rob♦ Jun 18 '16 at 14:28 $\begingroup$ I am sorry, but that is complete nonsense. Shannon entropy doesn't include the physical terms that you can find in thermodynamics. It is simply, by definition, not sensitive to temperature. If you change the definition to "Shannon entropy = Thermodynamic entropy", it loses all of its utility, not to mention that it invites the question: why are you renaming something that has a name, already? $\endgroup$ – CuriousOne Jun 18 '16 at 18:35 $\begingroup$ I believe what I said was that the Shannon entropy is a tiny subset of the information entropy, and I gave a quantitative example of a temperature change which results in an irreversible increase in the information entropy of a storage system. You seem to be bothered that I can change the temperature of a book from, say, 250 K to 300 K without changing its information entropy; but that particular change is reversible, and reversible processes in thermodynamics are by definition isentropic. $\endgroup$ – rob♦ Jun 18 '16 at 22:55 $\begingroup$ What I am bothered by is that people can't seem to understand simple definitions. Shannon entropy was and still is clearly defined over symbol sequences. It's an abstract measure of the information content of e.g. a document and as such extremely useful. 
Thermodynamics entropy is a physical measure of the movement of homogeneous dynamic systems. You can change the temperature of a book and that will change the entropy of the paper, but it won't change the Shannon entropy of the text in the book by a bit. It's really not that hard to grasp. There is no information in homogeneous systems, BTW. $\endgroup$ – CuriousOne Jun 19 '16 at 0:27 $\begingroup$ @CuriousOne Shannon entropy is an abstract measure of information content with physical consequences. In particular Shannon entropy is quantized, and your intuition about entropy from continuum thermodynamics may not apply to systems with quantized degrees of freedom. I've edited in another example. I'm learning quite a bit by addressing your comments, so thank you. $\endgroup$ – rob♦ Jun 19 '16 at 14:40 Formally, the two entropies are the same thing. The Gibbs entropy, in thermodynamics, is $$S = -k_B \sum p_i \ln p_i$$ while the Shannon entropy of information theory is $$H = -\sum p_i \log_2 p_i.$$ These are equal up to some numerical factors. Given a statistical ensemble, you can calculate its (thermodynamic) entropy using the Shannon entropy, then multiplying by constants. However, there is a sense in which you're right. Often when people talk about Shannon entropy, they only use it to count things that we intuitively perceive as information. For example, one might say the entropy of a transistor, flipped to 'on' or 'off' with equal likelihood, is 1 bit. But the thermodynamic entropy of the transistor is thousands, if not millions of times higher, because it counts everything, i.e. the configurations of all the atoms making up the transistor. (If you want to explain it to your programmer colleagues, say they're not counting whether each individual atom is "on" or "off".) In general, the amount of "intuitive" information (like bits, or words in a book) is a totally negligible fraction of the total entropy. The thermodynamic entropy of a library is about the same as that of a warehouse of blank books. knzhouknzhou $\begingroup$ Comments are not for extended discussion; this conversation has been moved to chat. $\endgroup$ – Manishearth Jun 18 '16 at 13:41 $\begingroup$ Take it to that chat room that @Manishearth set up or post another answer, folks. $\endgroup$ – dmckee♦ Jun 19 '16 at 18:11 $\begingroup$ Technically, one cannot compare the magnitude of the thermodynamic and information entropies of a transistor in a direct way unless one specifies the macrostates of the transitor. I agree, however, that one can perhaps not measure more than a few basic quantities for a transistor (perhaps resistance, inductance, etc.), leaving the space of possible microstates still incredibly large. Hence for all purposed we can say that thermodynamic entropy is higher than the information entropy. $\endgroup$ – Yiteng Aug 2 '16 at 9:30 $\begingroup$ Entropy is an anthropomorphic quantity, defined in terms of the macroscopic observables we choose to specify the system in. See a classic paper by ET Jaynes $\endgroup$ – Yiteng Aug 2 '16 at 9:31 So far there have been quite a few insightful answers about statistical mechanical entropy, but so far the only mention of thermodynamic entropy has been made by CuriousOne in the comments, so I thought it would be useful to give a short general reminder about the subtle difference between the notion of entropy in thermodynamics and the formulas that come up from statistical mechanics and coding theory. 
One approach to understanding thermodynamic entropy is via the fundamental (or technological) constraints on the maximum attainable efficiency of heat engines. Volume 1 of the Feynman lectures has a section on thermodynamics that eloquently describes how the Carnot efficiency provides a universal temperature scale $T$ (up to an arbitrary choice of units), so that the quantity $\frac{d Q}{T}$ is the differential of a state function $S$ that is called entropy. Since it is essentially defined through the performance of heat engines, thermodynamic entropy is only sensitive to features of a system that are able to absorb heat and relax in ways that allow work to be extracted. In this sense, information theoretical entropy is a measure of what features you are cognizant of*, while thermodynamic entropy you can think of as a measure of what features at small scales collectively influence systems at larger scales. *Information theoretical entropy, and statistical mechanical entropy, are (by themselves) essentially just measures of volume for a space of possible outcomes. TotallyRhombusTotallyRhombus $\begingroup$ I appreciate you bringing up the distinction between statistical and thermodynamics notions of entropy. However, this point has a natural response: the connection between these is quite thoroughly explored in standard textbooks. I follow Kittel and Kroemer, for example, in defining thermodynamic entropy through statistical mechanics (as $S_{Gibbs}$ or similar), and defining temperature as $1/ T=\partial S / \partial U$ (equivalent to the definition given in my answer), and using these as the basis for standard thermodynamics. $\endgroup$ – Rococo Jun 21 '16 at 0:09 $\begingroup$ I unfortunately do not have Kittel and Kroemer. However, if they define thermodynamic entropy the way you describe, then I would say that the connection between thermodynamics and statistical mechanics is not thoroughly explored in that book. However, Reif and Schroeder (as well as Feynman vol. 1) are careful to establish thermodynamics on solid conceptual footing independently of statistical mechanics. To my ears, defining thermodynamic entropy from $S_{Gibbs}$ sounds circular, and diminishes the brilliance of Boltzmann's equation. $\endgroup$ – TotallyRhombus Jun 21 '16 at 2:06 $\begingroup$ Well, where you see circular reasoning I see a particularly elegant example of reduction to general principles. But I'll have to look as those treatments. $\endgroup$ – Rococo Jun 21 '16 at 2:54 To be honest, I believe this question is not really settled, or at least that there is not yet a consensus in the scientific community about what the answer is. My understanding of the relation is, I think, slightly different than knzhou, rob, or CuriousOne. My understanding is that thermodynamic entropy can be thought of as a particular application of information entropy. In particular, one can apply the principles of information and informational entropy to ask how much one knows about the state of a quantum system, and under certain conditions the thermodynamic Boltzmann entropy seems to be recovered. As a concrete example, a recent experiment related to this question (1) studies the "entanglement entropy" of an interacting quantum system, which is an application of informational entropy to a quantum state. Under the appropriate circumstances (looking at the single-particle density matrix of a thermalized quantum state), this informational entropy is shown to be identical to the thermodynamic Boltzmann entropy. 
From this viewpoint, thermodynamics is "just" a particular application of informational principles. Of course, one can also apply informational principles to entirely different systems such as books and radio communications and so on. As a result, thermodynamic and informational entropies are not the same, but are two particular applications of the same general principle. However, this opinion is by no means shared by all, and while this correspondence seems to work in some cases like the above experiment, it remains to be explained in a more general setting.

Two somewhat related questions that you might find interesting:

Spontaneous conversion of heat into work at negative temperatures

What are the phenomena responsible for irreversible increase in entropy?

Appendix: Entropy Hierarchy

Here is the hierarchy of entropies I am claiming here (ignoring constants like $k_B$):

Shannon entropy: $S_\textrm{Shannon}=-\sum_i p_i \log p_i$. Describes, roughly, how much one knows about the state of some system, with $i$ being the possible states. This system could be, for example, a string of binary bits.

Applying this to an unknown quantum state, one gets the Gibbs entropy: $S_\textrm{Gibbs}=-\sum_i p_i \log p_i$, where the $i$ are now specifically the possible quantum states of the system. For this expression to make physical sense, $i$ must be the eigenstates of the system in a basis in which the density matrix is diagonal*.

With this stipulation, $S_\textrm{Gibbs}$ is identical to the Von Neumann entropy of a quantum state: $S_\textrm{VN}=-\text{tr}(\rho \log \rho)$, with $\rho$ the density matrix.

The entanglement entropy is simply an application of $S_\textrm{VN}$ to a particular spatial subset of a (usually isolated) system: $S_{EE,A}=-\text{tr}(\rho_A \log \rho_A)$, where $\rho_A$ is the density matrix resulting from the partial trace over the density matrix of a large system, keeping only some local subsystem. In other words, it is the entropy of a particular part of some larger system.

The highly nontrivial claim made in (1) (and elsewhere) is that for a thermalized system, the $S_{EE,A}$ of a small local subsystem $\rho_A$ is equivalent to the Boltzmann thermodynamic entropy, defined as: $S_\textrm{Boltz}=-\sum_i(p_{i,\textrm{th}} \log p_{i,\textrm{th}})$, with $p_{i,\textrm{th}}=\frac{e^{-E_i/k_B T}}{\sum_i e^{-E_i/k_B T}}$, $i$ as the possible states of $\rho_A$, and $k_B T$ chosen so that the system has the correct average energy. This claim is known, by the way, as the "eigenstate thermalization hypothesis."

*There's nothing too mysterious about this requirement: it is simply because for entropy to have some "nice" properties like additivity the state $i$ must be uncorrelated.

Rococo
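As a small aside for readers who want to see the hierarchy above with concrete numbers, here is a self-contained sketch added here (not part of the answer; Python with NumPy assumed). It computes a Shannon entropy, the von Neumann/entanglement entropy of one half of a Bell pair, and a thermal Gibbs entropy:

import numpy as np

def entropy_bits(p):
    # Shannon entropy -sum p log2 p over the nonzero probabilities
    p = np.asarray(p, dtype=float)
    p = p[p > 1e-12]
    return float(-(p * np.log2(p)).sum())

# Shannon entropy of one fair bit
print(entropy_bits([0.5, 0.5]))                          # 1.0 bit

# Entanglement (von Neumann) entropy of one qubit of a Bell pair
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)        # (|00> + |11>)/sqrt(2)
rho = np.outer(psi, psi)                                 # pure-state density matrix
rho_A = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)  # partial trace over qubit B
print(entropy_bits(np.linalg.eigvalsh(rho_A)))           # 1.0 bit: maximally mixed

# Gibbs entropy of a two-level system in a thermal state (energy gap = kT)
p_th = np.exp(-np.array([0.0, 1.0]))
p_th /= p_th.sum()
print(float(-(p_th * np.log(p_th)).sum()))               # ~0.58 nats; times k_B for J/K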
Is there a way to split a black hole? Classically, black holes can merge, becoming a single black hole with an horizon area greater than the sum of both merged components. Is it thermodynamically / statistically possible to split a black hole in multiple black holes? If the sum of the areas of the product black holes would exceed the area of the original black hole, it seems to be a statistically favorable transition by the fact alone that would be a state with larger entropy than the initial state thermodynamics general-relativity black-holes entropy black-hole-thermodynamics Qmechanic♦ lurscherlurscher $\begingroup$ let us continue this discussion in chat $\endgroup$ – Alan Rominger Nov 29 '12 at 19:05 $\begingroup$ Thermodynamics aside, it's classically impossible for a black hole to split into two, as a consequence of the causal structure of a black hole region. See theorem 12.2.1 on page 308 of Wald's 'General Relativity'. $\endgroup$ – gj255 Oct 26 '16 at 10:35 I) Let us choose units where $c=1=G$ for simplicity. Recall that a Kerr-Newman black-hole with mass $M > 0$, charge $Q\in [-M,M]$, and angular momentum $J\geq 0$, has surface area given by $$\frac{A}{4\pi}~:=~ r^2_+ +a^2~=~ M^2+ \delta + 2M \sqrt{\Delta}, \tag{1}$$ $$ r_+~:=~M+\sqrt{\Delta}, \qquad \Delta~:=~ \delta -a^2~\geq~0,\qquad \delta~:=~M^2-Q^2~\geq~0, \qquad a~:=~\frac{J}{M}.\tag{2}$$ The entropy $$S~=~\frac{k_B}{\ell_P^2}\frac{A}{4}\tag{3}$$ is proportional to the area $A$. II) An interesting question asks the following. If we merge $n$ Kerr-Newman black-holes $$(M_i>0, Q_i, J_i),\qquad i\in\{1,\ldots,n\},\tag{4}$$ into one Kerr-Newman black-hole $(M>0,Q,J)$, such that mass and charge are conserved$^1$ $$ M~=~\sum_i M_i , \qquad Q~=~\sum_i Q_i , \qquad J~\leq ~\sum_i J_i, \tag{5}$$ and the angular momentum satisfies the triangle inequality; would the discriminant $$ \Delta~\geq~0 \tag{6}$$ for the merged black hole be non-negative, and would the Kerr-Newman area formula (1) respect the second law of thermodynamics $$ A~>~ \sum_i A_i~? \tag{7}$$ The answer is in both cases Yes! The ineq. (7) in turn shows that the opposite splitting process is impossible, cf. OP's question. Proof of ineqs. (6) & (7): First note that $$ \delta~\stackrel{(2)}{=}~(M+Q)(M-Q)~ \stackrel{(5)}{=}~\sum_i(M_i+Q_i)(M_i-Q_i) +\sum_{i\neq j}(M_i+Q_i)(M_j-Q_j)$$ $$~\stackrel{(2)}{\geq}~\sum_i(M_i+Q_i)(M_i-Q_i) ~\stackrel{(2)}{=}~ \sum_i \delta_i , \tag{8}$$ and hence $$ \frac{\delta}{2}~\stackrel{(8)}{\geq}~\frac{\delta_i +\delta_j}{2}~\geq~ \sqrt{\delta_i \delta_j}, \tag{9}$$ due to the ineq. of arithmetic & geometric means. Next consider $$ M^2\Delta - \left(\sum_i M_i\sqrt{\Delta_i}\right)^2 ~\stackrel{(2)}{=}~(M^2\delta -J^2) - \sum_i M_i^2\Delta_i - \sum_{i\neq j}M_i\sqrt{\Delta_i}M_j\sqrt{\Delta_j} $$ $$~\stackrel{(2)+(5)}{\geq}~\left(\delta \sum_i M_i^2 + \delta\sum_{i\neq j} M_iM_j -J^2\right) - \sum_i (M_i^2\delta_i -J^2_i) - \sum_{i\neq j}M_i\sqrt{\delta_i}M_j\sqrt{\delta_j} $$ $$~\stackrel{(8)+(9)}{\geq}~ \sum_{i\neq j}M_i\sqrt{\delta_i}M_j\sqrt{\delta_j} +\sum_i J^2_i -J^2 ~\stackrel{(2)}{\geq}~ \sum_{i\neq j}J_iJ_j +\sum_i J^2_i -J^2 $$ $$~=~\left(\sum_i J_i\right)^2 -J^2 ~\stackrel{(5)}{\geq}~0. \tag{10}$$ Ineq. (10) implies ineq. (6) and $$ M\sqrt{\Delta} ~\stackrel{(10)}{\geq}~ \sum_i M_i\sqrt{\Delta_i}. \tag{11}$$ $$ M^2 ~>~ \sum_i M^2_i,\tag{12} $$ eqs. (8) & (11) yield ineq. (7). $\Box$ $^1$ We assume that the system can be treated as isolated. In particular, we ignore outgoing gravitational radiation. 
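To see ineq. (7) with concrete numbers, here is a short numerical sketch of the area formula (1), added here as an illustration rather than as part of the answer; the masses, charges, and spins below are made up, in the same $c=1=G$ units:

from math import pi, sqrt

def horizon_area(M, Q, J):
    # Kerr-Newman horizon area, eqs. (1)-(2), in units where c = 1 = G
    delta = M**2 - Q**2
    a = J / M
    Delta = delta - a**2
    assert Delta >= 0, "no horizon for these parameters"
    return 4 * pi * (M**2 + delta + 2 * M * sqrt(Delta))

# Two black holes merging with mass and charge conserved and |J| <= J1 + J2, eq. (5):
A1 = horizon_area(1.0, 0.2, 0.3)
A2 = horizon_area(2.0, -0.1, 0.5)
A_merged = horizon_area(3.0, 0.1, 0.8)
print(A1 + A2, A_merged)   # the merged area is larger, as ineq. (7) guarantees,
                           # so the reverse (splitting) process would lower the area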
As we know from recent gravitational wave detections, this assumption is violated in practice for black hole merges. However, for the opposite hypothetical splitting process, which OP asks about, this is a reasonable assumption. Qmechanic♦Qmechanic $\begingroup$ Best answer. Precise and correct. $\endgroup$ – Physics Guy Oct 28 '16 at 16:40 $\begingroup$ I do not think that this answers the question. Is there a way to split? implies a sequence of actions performed by experimenter, not a spontaneous disintegration. There may be a sequence of reasonably simple actions: 1) charge BH to $Q=M/2$ 2) Irradiate BH with EM wave of wavelength, intensity … which would split the BH but that are not covered in your argument, $\endgroup$ – A.V.S. May 21 '18 at 17:06 The question asks for a black hole splitting such that "the product black holes would exceed the area of the original black hole". In the above answer I have argued that to do so requires at least two black holes colliding. However, the question continuous with the remark that such a splitting into black holes with larger horizon area "seems to be a statistically favorable transition by the fact alone that would be a state with larger entropy than the initial state". The edit in my answer above suggests this assertion to be correct. However, this is not the case. To determine what is a statistically favorable transition requires a comparison between alternative results. If there is an outcome that can be realized in overwhelmingly more ways than any of the alternatives, that is the statistically favorable outcome. Let's see how this works out for two colliding black holes. As an example we take two black holes of 4N Planck masses each. Let's consider two alternative scenarios: A) 'splitting': 4N + 4N --> 6N + N + N B) 'merging': 4N + 4N --> 8N A black hole containing N Planck masses has entropy $S = 4\pi N^2$. Therefore, the initial state has total entropy $S = 128\pi N^2$ and can be realized in $e^S = e^{128\pi N^2}$ ways. The end products from scenario A) has larger entropy ($S = 152\pi N^2$) and can be realized in $e^{152\pi N^2}$ ways. For large N this number is way larger than the number of realizations for the initial state. Yet, scenario A) does not represent the statistically favorable transition. This is because scenario B) leads to entropy $S = 256\pi N^2$ encompassing overwhelmingly more microscopic states: $e^{256\pi N^2}$. The conclusion is that although entropy-increasing black hole splitting reactions can be defined, these are not realizable from a statistical physics perspective. JohannesJohannes $\begingroup$ Which is probably why repeated black mergers (early in the universe's lifetime) end up up forming ever larger black holes - makes one wonder if this was how intermediate mass black holes or even super-massive black holes formed. $\endgroup$ – dualredlaugh Oct 27 '16 at 22:26 $\begingroup$ In the above answer should be In the below answer. $\endgroup$ – Ruslan Oct 28 '16 at 16:28 Thermodynamics forbids the splitting of a black hole in multiple smaller black holes. Reason being that the result of such a splitting would violate the first law of thermodynamics (energy conservation) and/or the second law of thermodynamics (entropy non-decrease). If energy conservation is honored, the product would be multiple black holes with a sum of circumferences (sum of energy contents) equal to the circumference (energy content) of the original black hole. 
As a consequence, the sum of surface areas would be less than the surface area of the original black hole. As surface area corresponds to entropy, this would violate the second law of thermodynamics. However, it is possible to split a black hole in multiple non black hole components. This is in fact easy: just sit back and let Hawking radiation do its thing. Key is that the resulting long-wavelength non black hole components are not localized enough to form small black holes. What can happen though, is that this Hawking radiation gets captured by other black holes. This would effectively give a simple scenario for splitting a small black hole into components that feed multiple larger black holes (with much longer evaporation times). This is thermodynamically feasible, but probably not what OP has in mind. [edit]If you interpret 'splitting' broadly and classify the latter scenario as 'black hole splitting', then lots of 'splitting processes' are thermodynamically allowed. For instance, you can in theory have two colliding black holes of three solar masses each, yielding three black holes, two of one solar mass and one of four solar masses: 3M + 3M --> 4M + M + M. Key is that the splitting of one black hole into two is not possible. You need an additional black hole participating in the process to ensure energy conservation and at the same time avoid entropy decrease.[/edit] $\begingroup$ That would be an explanation for why black holes won't spontaneously split, but the "lower entropy" could be compensated by something else (see the comments under the OP's question). In other words, can we get a reaction where: entropy of original hole = entropy of smaller holes PLUS entropy of 'something else' ? $\endgroup$ – Chris Gerig Dec 1 '12 at 6:47 $\begingroup$ Sure, but that 'something else' needs to be a localized source of entropy. Lots of it. In other words: it needs to be one (or more) black holes. For instance: starting with three equal mass black holes, there is no thermodynamics law that would forbid a reaction leading to two black holes of half the mass and one of double the mass. If you see that as a 'splitting process', yes it is possible (at least in principle). $\endgroup$ – Johannes Dec 1 '12 at 9:39 $\begingroup$ The problem is: nobody knows thermodynamics holds below event horizon or not. $\endgroup$ – Schrödinger's Cat Dec 7 '12 at 5:55 $\begingroup$ Thermodynamics holds for the whole observable universe. What is your observable universe depends on your position and your state of motion. But for each observer his/her whole observable universe stretches out over everything that he/she can observe at finite redshift. The boundaries where the redshifts diverge, we call horizons. Whatever might be behind these horizons is causally disconnected from the observer and utterly irrelevant to the physics he/she observes. Therefore it makes no sense to talk about "thermodynamics behind horizons". $\endgroup$ – Johannes Dec 7 '12 at 9:11 Setting the principle of entropy aside for the moment, I believe it should be logical for a black hole to be able to split apart. The energy added to the system would simply have to be larger than the total sum energy released by all that matter which entered the black hole. Actually, the inside of a black hole must be very hot. Think about it. All that mass that accelerated inward as the black hole coalesced, with nowhere for that energy to escape. 
The logical end conclusion is that it doesn't take much energy to skim off matter from the surface of a black hole, relatively speaking of course. (Of course, as matter is skimmed off there would be adiabatic cooling of the inside, analogous to what happens with liquid nitrogen boiling off and keeping the remaining liquid cold.)

Here's a thought experiment. Imagine two extremely heavy spherical objects. Their sum total mass is enough to create a black hole as soon as they meet. As they approach each other they begin moving faster and faster, finally at close contact approaching light speed. Well there is an unusual effect here. As mass is transferred to energy, the rest masses of each individual object decrease, even though the net mass of the two-body system does not change. So rest mass is decreasing while velocity is increasing. You can see that it approaches light speed exponentially. However, here's the thing: You have all that energy, yet if some of the mass were to move away from the center of the black hole it would gain more mass. In other words, the traditional escape velocity equation does not apply.

There's a saying from Newton, "What goes up must come down". Well, when it comes to a black hole one could say, in a sense, that "What goes down must come up". All the energy converted from mass when the object fell in is also enough to bring the object back out. It's commonly repeated, and assumed as truth, that light cannot escape a black hole, but that's not entirely true. It just becomes red-shifted "out of existence" (but not literally out of existence), or the ray of light does not leave at a narrow enough angle to the perpendicular of the black hole's tangent, so it gets bent back.

Now this is all on the most basic theoretical level and doesn't take into account energy loss through gravitational waves, which could very well make the temperature of a black hole much cooler than classical mechanics would predict if it had resulted from the collision of two smaller black holes.

Zach

$\begingroup$ There are numerous errors in this answer. $\endgroup$ – PM 2Ring Aug 29 '18 at 17:40
Effects of stability on model composition effort: an exploratory study

Regular Paper

Kleinner Farias, Alessandro Garcia & Carlos Lucena

Software & Systems Modeling volume 13, pages 1473–1494 (2014)

Model composition plays a central role in many software engineering activities, e.g., evolving design models to add new features. To support these activities, developers usually rely on model composition heuristics. The problem is that the models to-be-composed usually conflict with each other in several ways and such composition heuristics might be unable to properly deal with all emerging conflicts. Hence, the composed model may bear some syntactic and semantic inconsistencies that should be resolved. As a result, the production of the intended model is an error-prone and effort-consuming task. It is often the case that developers end up examining all parts of the output composed model instead of prioritizing the most critical ones, i.e., those that are likely to be inconsistent with the intended model. Unfortunately, little is known about indicators that help developers (1) to identify which model is more likely to exhibit inconsistencies, and (2) to understand which composed models require more resolution effort. It is often claimed that software systems remaining stable over time tend to have a lower number of defects and require less effort to be fixed than unstable systems. However, little is known about the effects of software stability in the context of model evolution when supported by composition heuristics. This paper, therefore, presents an exploratory study analyzing stability as an indicator of inconsistency rate and resolution effort on model composition activities. Our findings are derived from 180 compositions performed to evolve design models of three software product lines. Our initial results, supported by statistical tests, also indicate which types of changes led to lower inconsistency rate and lower resolution effort.

Model composition plays a central role in many software engineering activities [18], e.g., evolving design models to add new features and reconciling multiple models developed in parallel by different software development teams [28, 38]. The composition of design models can be defined as a set of activities that should be performed over two input models, \(M_\mathrm{A}\) and \(M_\mathrm{B}\), in order to produce an output intended model, \(M_\mathrm{AB}\). To put model composition in practice, software developers usually make use of composition heuristics [9] to produce \(M_\mathrm{AB}\). These heuristics match the model elements of \(M_\mathrm{A}\) and \(M_\mathrm{B}\) by automatically "guessing" their semantics and then bring the similar elements together to create a "big picture" view of the overall design model. The problem is that, in practice, the output composed model (\(M_\mathrm{CM}\)) and the intended model (\(M_\mathrm{AB}\)) often do not match (i.e., \(M_\mathrm{CM} \ne M_\mathrm{AB}\)) because \(M_\mathrm{A}\) and \(M_\mathrm{B}\) conflict with each other in some way [18]. Hence, these conflicts are converted into syntactic and semantic inconsistencies in \(M_\mathrm{CM}\). Consequently, software developers should be able to anticipate composed models that are likely to exhibit inconsistencies and transform them into \(M_\mathrm{AB}\). In fact, it is well known that the derivation of \(M_\mathrm{AB}\) from \(M_\mathrm{CM}\) is considered an error-prone task [18, 35].
The developers do not even have practical information or guidance to plan this task. Their inability is due to two main problems. First, developers do not have any indicator to reveal which \(M_\mathrm{CM}\) should be reviewed (or not), given a sequence of output composed models produced by the software development team. Hence, they have no means to identify or prioritize parts of design models that are likely to have a higher density of inconsistencies. They are often forced to go through all output models produced or assume an overoptimistic position, i.e., all output composed models produced is a \(M_\mathrm{AB}\). In both cases, the inadequate identification of an inconsistent \(M_\mathrm{CM}\) can even compromise the evolution of the existing design model (\(M_\mathrm{A}\)) as some composition inconsistencies can affect further model compositions. Second, model managers are unable to grasp how much effort the derivation of \(M_\mathrm{AB }\) from \(M_\mathrm{CM}\) can demand, given the problem at hand [35]. Hence, they end up not designating the most qualified developers for resolving the most critical effort-consuming cases where severe semantic inconsistencies are commonly found. Instead, unqualified developers end up being allocated to deal with these cases. In short, model managers have no idea about which \(M_\mathrm{CM}\) will demand more effort to be transformed into a \(M_\mathrm{AB}\) [35]. If the effort to resolve these inconsistencies is high, then the potential benefits of using composition heuristics (e.g., gains in productivity) may be compromised. The literature in software evolution highlights that software remaining stable over time tends to have a lower number of flaws and require less effort to be fixed than its counterpart [21, 32]. However, little is known whether the benefits of stability are also found in the context of the evolution of design models supported by composition heuristics. This is by no means obvious for us because the software artifacts (code and models) can have different level of abstraction. In fact, design model has a set of characteristics (defined in language metamodel expressing it) that are manipulated by composition heuristics and can assume values close to what is expected (or not), i.e. \(M_\mathrm{CM }\approx M_\mathrm{AB}\). If the assigned value of a characteristic is close to the one found in the intended model, then the composed model is considered stable concerning that characteristic. For example, if the difference between the coupling of the composed model and the intended model is small, then they can be considered stable considering coupling. Although researchers recognize software stability as a good indicator to address the two problems described above in the context of software evolution, most of the current research on model composition is focused on building new model composition heuristics (e.g., [9, 25, 34, 44]). That is, nothing has been done to evaluate stability as an indicator of the presence of semantic inconsistencies and of the effort that, on average, developers should spend to derive \(M_\mathrm{AB}\) from \(M_\mathrm{CM}\). Today, the identification of critical \(M_\mathrm{CM}\) and the effort estimation to produce \(M_\mathrm{AB}\) are based on the evangelists' feedback that often diverge [28]. This paper, therefore, presents an initial exploratory study analyzing stability as an indicator of composition inconsistencies and resolution effort. 
More specifically, we are concerned with understanding the effects of the model stability on the inconsistency rate and inconsistency resolution effort. We study a particular facet of model composition in this paper: the use of model composition in adding new features to models for three realistic software product lines. Software product lines (SPLs) commonly involve model composition activities [20, 43] and, while we believe the kinds of model composition in SPLs are representative of the broader issues, we make no claims about the generality of our results beyond SPL model composition. Three well-established composition heuristics [9], namely override, merge, and union, were employed to evolve the SPL design models [1, 20] along eighteen releases. SPLs are chosen because designers need to maximize the modularization of features allowing the specification of the compositions. The use of composition is required to accommodate new variabilities and variants (mandatory and optional features) that may be required when SPLs evolve. We analyze if stability is a good indicator of high inconsistency rate and resolution effort. Our findings are derived from 180 compositions performed to evolve design models of three software product lines. Our results, supported by statistical tests, show that stable models tend to manifest a lower inconsistency rate and require a lower resolution effort than their counterparts. In other words, this means that there is significant evidence that the higher the model stability, the lower the model composition effort. In addition, we discuss scenarios where the use of the composition heuristics became either costly or prohibitive. In these scenarios, developers need to invest some extra effort to derive \(M_\mathrm{AB}\) from \(M_\mathrm{CM}\). Additionally, we discuss the main factors that contributed to the stable models outnumber the unstable one in terms of inconsistency rate and inconsistency resolution effort. For example, our findings show that the highest inconsistency rates are observed when severe evolution scenarios are implemented, and when inconsistency propagation happens from model elements implementing optional features to ones implementing mandatory features (Sect. 4.1.3). We also notice that the higher instability in the model elements of the SPL design models realizing optional features, the higher the resolution effort. To the best of our knowledge, our results are the first to investigate the potential advantages of model stability in realistic scenarios of model composition. We therefore see this paper as a first step in a more ambitious agenda to assess model stability empirically. The remainder of the paper is organized as follows: Sect. 2 describes the main concepts and knowledge that are going to be used and discussed throughout the paper. Section 3 presents the study methodology. Section 4 discusses the study results. Section 5 compares this work with others, presenting the main differences and commonalities. Section 6 points out some threats to validity. Finally, Sect. 7 presents some concluding remarks and future work. Model composition effort To produce an output intended model (\(M_\mathrm{AB}\)), a set of activities is performed over \(M_\mathrm{A}\) and \(M_\mathrm{B}\). \(M_\mathrm{A}\) is the current design model, while \(M_\mathrm{B}\) is the model expressing the evolution (delta model), for example, the upcoming changes being added. 
\(M_\mathrm{B}\) is inserted into the \(M_\mathrm{A}\) using some composition heuristics, which are responsible for defining the semantics of the composition and specify how \(M_\mathrm{A}\) and \(M_\mathrm{B}\) should be manipulated in order to produce \(M_\mathrm{AB}\). We will use the terms composed model (\(M_\mathrm{CM}\)) and intended model (\(M_\mathrm{AB}\)) to differentiate between the output model produced by a composition heuristic and one is desired by the developers. As previously mentioned, usually \(M_\mathrm{CM}\) and \(M_\mathrm{AB}\) do not match (\(M_\mathrm{CM} \ne \,M_\mathrm{AB}\)) because the input models conflict with each other in some way. The higher the number of inconsistencies in \(M_\mathrm{CM}\), the more distant it is from the intended model. This may mean a high effort to be spent to derive \(M_\mathrm{AB}\) from \(M_\mathrm{CM}\) (or not). Once \(M_\mathrm{CM}\) has been produced, the next step is to measure the effort to transform \(M_\mathrm{CM}\) into \(M_\mathrm{AB}\), i.e., the effort to resolve inconsistencies. If \(M_\mathrm{CM}\) is equal to \(M_\mathrm{AB}\), then this implies that the design characteristics of \(M_\mathrm{CM}\) keep stable over composition. Therefore, the inconsistency resolution effort is equal to zero. Otherwise, the effort is higher than zero. The composition effort can be understood by using the equation defined in Fig. 1. The equation gives an overview of how composition effort can be measured and what part we focus our study on. The equation makes it explicit that the model composition effort includes: (1) the effort to apply a composition heuristic: f(\(M_\mathrm{A},M_\mathrm{B}\)); (2) the effort to detect undesirable inconsistencies in the output model: diff(\(M_\mathrm{CM},M_\mathrm{AB}\)); and (3) the effort to resolve inconsistencies: \(g(M_\mathrm{CM})\). Once \(M_\mathrm{CM}\) has been produced, the next step is to measure the effort to transform \(M_\mathrm{CM}\) into \(M_\mathrm{AB}\). If \(M_\mathrm{CM}\) is equal to \(M_\mathrm{AB}\), then diff(\(M_\mathrm{CM},M_\mathrm{AB}\)) and \(g(M_\mathrm{CM})\) are equal to zero. Otherwise, diff(\(M_\mathrm{CM},M_\mathrm{AB}\)) and \(g(M_\mathrm{CM})\) are higher than zero. This study focuses specifically on evaluating the effort to inconsistency resolution (i.e., \(g(M_\mathrm{CM})\)) rather than inconsistency detection and algorithm application. Model composition effort: an equation Model stability According to [21, 30], a design characteristic of software is stable if, when compared to other, the differences in the indicator associated with that characteristic are considered, in the context, to be small. In a similar way in the context of model composition, \(M_\mathrm{CM}\) can be considered stable if its design characteristics have a low variation concerning the characteristics of \(M_\mathrm{AB}\). In [21], Kelly studies stability from a retrospective view, i.e., comparing the current version to previous ones. In our study, we compare the current model and the intended model. We define low variation as being equal to (or less than) 20 %. This choice is based on previous empirical studies [21] on software stability that has demonstrated the usefulness of this threshold. For example, if the measure of a particular characteristic (e.g., coupling and cohesion) of the \(M_\mathrm{CM}\) is equal to 9, and the measure of the \(M_\mathrm{AB}\) is equal to 11. 
So \(M_\mathrm{CM}\) is considered stable concerning \(M_\mathrm{AB}\) (because 9 is 18 % lower than 11) with respect to the measure under analysis. Following this stability threshold, we can systematically identify weather (or not) \(M_\mathrm{CM}\) keeps stable considering \(M_\mathrm{AB}\), given an evolution scenario. Note that threshold is used more as a reference value rather than a final decision maker. The results of this study can regulate it, for example. The differences between the models are computed comparing the measures obtained by a set of metrics (Table 1) [12]. Table 1 Metrics used in our study We adopt the definition of stability from [21] (and its threshold) because of some reasons. First, it defines and validates the quantification method of stability in practice. This method is used to examine software systems that have been actively maintained and used over a long term. Second, the quantification method of stability has demonstrated to be effective to flag evolutions that have jeopardized the system design. Third, many releases of the system under study was considered. This is a fundamental requirement to test the usefulness of the method. As such, all these factors provided a solid foundation for our study. These metrics were used because previous works [21] have already observed the effectiveness of these indicators for the quantification of software stability. Knowing the stability in relation to the intended model it is possible to identify evolution scenarios, where composition heuristics are able to accommodate upcoming changes effectively and the effort spent to obtain the intended model. The stability quantification method is presented later in Sect. 3.4. Composition heuristics Composition heuristics rely on two key activities: matching and combining the input model elements [14]. Usually they are used to modify, remove, and add features to an existing design model. This paper focuses on three composition heuristics: override, merge, and union [9]. These heuristics were chosen because they have been applied to a wide range of model composition scenarios such as model evolution, ontology merge, and conceptual model composition. In addition, they have been recognized as effective heuristics in evolving product-line architectures (e.g., [4]). In the following, we briefly define these three heuristics, and assume \(M_\mathrm{A}\) and \(M_\mathrm{B}\) as the two input models. The input model elements are corresponding if they can be identified as equivalent in a matching process. Matching can be achieved using any kind of standard heuristics, such as match-by-name [9]. The design models used are typical UML class and component diagrams [37] (see Fig. 2), which have been widely used to represent software architecture in mainstream software development [26]. In Fig. 2, for example, \(R2\) diagram plays the role of the base model (\(M_\mathrm{A}\)) and \(Delta(R2,R3)\) diagram plays the role of the delta model (\(M_\mathrm{B}\)). The components \(R2.BaseController\) and \(Delta(R2,R3).BaseController\) are considered as equivalent. We defer further considerations about the design models used in our study to Sect. 3.3. The composition heuristics considered in our study are discussed in the following paragraphs. Practical examples of model composition of the Mobile Media product line Override For all pairs of corresponding elements in \(M_\mathrm{A}\) and \(M_\mathrm{B},\,M_\mathrm{A}\)'s elements should override \(M_\mathrm{B}\)'s corresponding elements. 
The model elements that do not match remain unchanged. They are just inserted into the output model. For example, Fig. 2 shows an example where the output composed model, \(R3\), is produced following this heuristic applied to \(R2\) and \(Delta(R2,R3).\) Merge For all corresponding elements in \(M_\mathrm{A}\) and \(M_\mathrm{B}\), the elements should be combined. The combination depends on the element type. Elements in \(M_\mathrm{A}\) and \(M_\mathrm{B}\) that are not equivalent remain unchanged and are inserted into the output model directly (see Fig. 2). Union For all elements in the \(M_\mathrm{A}\) and \(M_\mathrm{B}\) that are corresponding elements, they should be manipulated in order to preserve their distinguished identification; it means that they should coexist in the output models with different identifiers; elements in the \(M_\mathrm{A}\) and \(M_\mathrm{B}\) that are not involved in a correspondence match remain unchanged and they are inserted into the output model, \(M_\mathrm{AB}\). For example, the \(Delta(R2,R3).BaseController\) has its name modified to \(R3.BaseController\) (see Fig. 3). The intended model (left) and composed model (right) produced following the union heuristic Inconsistencies Inconsistencies emerge in the composed model when its properties assume values other than those would be expected. These values can affect the syntactic and semantic properties of the model elements. Usually such undesired values come from conflicting changes that were incorrectly realized. We can identify two broad categories of inconsistencies: (i) syntactic inconsistencies, which arise when the composed model elements do not conform to the modeling language's metamodel; and (ii) semantic inconsistencies, which mean that static and behavioral semantics of the composed model elements do not match those of the intended model elements. In our study, we take into account syntactic inconsistencies that were identified by the IBM Rational Software Architecture's model validation mechanism [35]. For example, this robust tool is able to detect the violation of well-formedness rules defined in the UML metamodel specification [37]. In order to improve our inconsistency analysis, we also considered the types of inconsistencies shown in Table 2 [12], which were checked by using the SDMetrics tool [46]. In particular, these inconsistencies were used because their effectiveness has been demonstrated in previous works [13–16]. In addition, both syntactic and semantic inconsistencies are manually reviewed as well. All these procedures were followed in order to improve our confidence that a representative set of inconsistencies were tackled by our study. Many instances of these inconsistency types (Table 2) were found in our study. For example, the static property of a model element, isAbstract, assumes the value true rather than false. The result is an abstract class where a concrete class was being expected. Another typical inconsistency considered in our study was when a model element provides (or requires) an unexpected functionality or even requires a functionality that does not exist. Table 2 The inconsistencies used in our case study The absence of this functionality can affect other design model elements responsible for implementing other functionalities, thereby propagating an undesirable ripple effect between the model elements of \(M_\mathrm{CM}\). In Fig. 
3 (override), for example, the AlbumData does not provide the service "update image information" (from the feature "edit photo's label") because the method \(updateImageInfo()\) :void is not present in the ManagePhotoInfoInterface. Hence, the PhotoSorting component is unable to provide the service "sorting photos." This means that the feature "sorting photo" (feature 'F' in Fig. 2)—a critical feature of the software product line—is not correctly realized. On the other hand, this problem is not present in Fig. 2 (merge), in which the AlbumData implements two features (C, model management, and E, edit photo's label). We defer further discussion about the examples and the quantification of these types of inconsistencies to Sect. 3.4. Study methodology This section presents the main decisions underlying the experimental design of our exploratory study. To begin with, the objective and research questions are presented (Sect. 3.1). Next, the study hypotheses are systematically stated from these research questions (Sect. 3.2). The product lines used in our studies are also discussed in detail as well as their evolutionary changes (Sect. 3.3). Then, the variables and quantification methods considered are precisely described (Sect. 3.4). Finally, the method used to produce the releases of the target architectures is carefully discussed (Sect. 3.5). All these methodological steps were based on practical guidelines on empirical studies [42, 45]. Objective and research questions This study essentially attempts to evaluate the effects of model stability on two variables: the inconsistency rate and inconsistency resolution effort. These effects are investigated from concrete scenarios involving design model compositions so that practical knowledge can be generated. In addition, some influential factors are also considered into precisely revealing how they can affect these variables. With this in mind, the objective of this study is stated based on the GQM template [3] as follows: analyze the stability of design models for the purpose of investigating its effect with respect to inconsistency rate and resolution effort from the perspective of developers in the context of evolving design models with composition heuristics. In particular, this study aims at revealing the stability effects while evolving composed design models (Sect. 3.3) on inconsistency rate and the inconsistency resolution effort. Thus, we focus on the following two research questions: RQ1: What is the effect of stability on the inconsistency rate? RQ2: What is the effect of stability on the developers' effort? Hypothesis formulation First hypotheses: effect of stability on inconsistency rate In the first hypothesis, we speculate that a high variation of the design characteristics of the design models may lead to a higher incidence of inconsistencies; since, it increases the chance for an incorrect manipulation of the design characteristic by the composition heuristics. In fact, modifications from severe evolutions may lead the composition heuristics to be ineffective or even prohibitive. In addition, these inconsistencies may also propagate. As a higher incidence of changes is found in unstable models, we hypothesize that unstable models tend to have a higher (or equal to) inconsistency rate than stable models. The first hypothesis evaluates whether the inconsistency rate in unstable models is significantly higher (or equal to) than in stable models. 
Thus, our hypotheses are summarized as follows: Null Hypothesis 1, H \(_{1-0}\): Stable design models have similar or higher inconsistency rate than unstable design models. H \(_{1-0}\): Rate(stable design models) \(\ge \) Rate(unstable design models). Alternative Hypothesis 1, H \(_{1-1}\): Stable design models have a lower inconsistency rate than unstable design models. H \(_{1-1}\): Rate(stable design models) \(<\) Rate(unstable design models). By testing this first hypothesis, we evaluate if stability is a good indicator to identify the most critical \(M_\mathrm{CM}\) in term of inconsistency rate from a sequence of \(M_\mathrm{CM}\) produced from multiple software development teams. Hence, developers can then review the design models having a higher density of composition inconsistencies. We believe that this strategy is a more effective one than going through all \(M_\mathrm{CM}\) produced or assuming an overoptimistic position where all \(M_\mathrm{CM}\) produced is a \(M_\mathrm{AB}\). Second hypothesis: effect of stability on developer effort As previously mentioned, developers tend to invest different quantity of effort to derive \(M_\mathrm{AB}\) from \(M_\mathrm{CM}\). Today, model managers are unable to grasp how much effort this transformation can demand. This variation is because developers need to resolve different types of problems in a composed model, from a simple renaming of elements to complex modifications in the structure of the composed model. In fact, the structure of the composed models may be affected in different ways during the composition, e.g., creating unexpected interdependences between the model elements. Even worse, these modifications in the structure of the model may cause ripple effects, i.e., inconsistency propagation between the model elements. The introduction of one inconsistency can often lead to multiple other inconsistencies because of a "knock-on" effect. An example would be the inconsistency whereby a client component is missing an important operation in the interface of a server component (see example in Sect. 2.4). This semantic inconsistency leads to a "knock-on" syntactic inconsistency if another component requires the operation. In the worst case, there may be long chains of inconsistencies all derived from a single inconsistency. Given a composed model at hand, developers need to know if they will invest little or too much effort to transform \(M_\mathrm{CM}\) into \(M_\mathrm{AB}\), given the problem at hand. Based on this knowledge, they will be able to prioritize the review of the output composed models and to better comprehend the effort to be invested, e.g., reviewing the models that require higher effort first and those requiring less effort after. With this in mind, we are interested in understanding the possible difference of effort to resolve inconsistencies in stable and unstable design models. The expectation is that stable models require a lower developers' effort to produce the output intended model. This expectation is based on the speculation that unstable models may demand more restructuring modifications than stable models; hence, requiring more effort. This leads to the second null and alternative hypotheses as follows: Null Hypothesis 2, H \(_{2-0}\): Stable models require similar or higher effort to resolve inconsistencies than unstable models. H \(_{2-0}\): Effort(stable models) \(\ge \) Effort(unstable models). 
Alternative Hypothesis 2, H \(_{2-1}\): Stable models tend to require a lower inconsistency resolution effort than unstable ones. H \(_{2-1}\): Effort(stable models) \(<\) Effort(unstable models). By testing this first hypothesis, we evaluate if stability is a useful indicator to identify the most critical effort-consuming cases in which severe semantic inconsistencies in architectural components are more often. This knowledge helps model managers to allocate qualified developers to overcome the composition inconsistencies in \(M_\mathrm{CM}\). Target cases: evolving product-line design models Model Composition for Expressing SPL Evolution We apply the composition heuristics (Sect. 2.3) to evolve design models of three realistic SPLs for a set of evolution scenarios (Table 3). That is, the compositions are defined to generate the new releases of the SPL design models. These three SPLs are described below and soon after the evolution scenarios are presented. Table 3 Descriptions of the SPL releases The first target case is a product-line called MobileMedia [17], whose purpose is to support the manipulation of photos, music, and videos on mobile devices. The last release of its design model consists of a UML component diagram with more than 50 component elements. Figures 2 and 3 show a practical example of the use of composition to evolve this SPL. The second SPL, called Shogi Game [12], is a two-player board game, whose purpose is to allow users to move, customize pieces, save, and load the game. All these pieces' movements are governed by a set of well-defined rules. The last SPL, called Checkers Game [12], is a draughts board game played on an eight by eight-squared board with 12 pieces on each side. The purpose of Checkers is to essentially move and capture diagonally forwards. In [12], it is possible to find a fine-grained description about their characteristics and details about their evolutions. The reason for selecting these SPLs in our evaluation is manifold. Firstly, the models are well designed. Next, 12 releases of Mobile Media's architectural models are considered by independent developers using the model composition heuristics. These releases are produced from five evolution scenarios. Note that an evolution is the production of a release from another one, e.g., from R1 to R2 (see Table 3). In addition, 12 releases of Shogi's and Checkers' architectural models were available as well. In both cases, six releases were produced from five evolution scenarios. Together the 36 releases provide a wide range of SPL evolution scenarios to enable us to investigate our hypotheses in detail. These 36 releases were produced from the 18 evolution scenarios described in Table 3. Moreover, these releases were available for our investigation and had a considerable quantity of structural changes in the evolution scenarios. Table 3 describes the evolution scenarios. Each scenario represents the addition of a feature. All evolution scenarios were obtained from the addition of optional features, totaling 15 optional features. Another reason to choose these SPLs is that the original developers are available to help us to validate the identified list of syntactic and semantic inconsistencies. In total, eight developers worked during the development of the SPLs used in our study being three developers from the Lancaster University (UK), two from the Pontifical Catholic University of Rio de Janeiro (Brazil), two from University of São Paulo (Brazil), one from Federal University of Pernambuco (Brazil). 
These are fundamental requirements to test our hypotheses (Sect. 3.2) in a reliable fashion. Equally important, each SPL has more than one hundred modules and their architecture models are the main artifact to reason about change requests and derive new products. Moreover, the SPL designs are produced by the original developers without any of the model composition heuristics under assessment in mind. It helped to avoid any bias and entailed natural software development scenarios. Finally, these SPLs have a number of other relevant characteristics for our study, such as: (i) proper documentation of the driving requirements; and (ii) different types of changes are realized in each release, including refinements over time of the architecture style employed. After describing the SPLs employed in our empirical studies, the evolution scenarios suffered by them are explained in Table 3. Measured variables and quantification method First dependent variable The dependent variable of hypothesis 1 is the inconsistency rate. It quantifies the amount of composition inconsistencies (Sect. 2.4) divided by the total number of elements in the composed model. That is, it allows computing the density of composition inconsistencies in the output composed models. This metric makes it possible to assess the difference between the inconsistency rate of stable models and unstable models (H1). It is important to point out that the inconsistency rate is defined from multiple inconsistency metrics (see Table 2). Second dependent variable The dependent variable of the hypothesis 2 is the inconsistency resolution effort, \(g(M_\mathrm{CM})\)—that is, the number of operations (creations, removals, and updates) required to transform the composed model into the intended model. We compute these operations because they represent the main operations performed by developers to evolve software in realistic settings [28]. Thus, this computation represents an estimation of the inconsistency resolution effort. The collected measures of inconsistency rate are used to assess if the composed model has inconsistencies after the composition heuristic is applied (diff\((M_\mathrm{CM},M_\mathrm{AB}) > 0\)). Then, a set of removals, updates, and creations are performed to resolve the inconsistencies. As a result, the intended model is produced and the inconsistency resolution effort is computed. Independent variable The independent variable of hypotheses 1 and 2 is the Stability (S) of the output composed model (\(M_\mathrm{CM}\)) with respect to the output intended model (\(M_\mathrm{AB}\)). The Stability is defined in terms of the Distance (D) between the measures of the design characteristics of \(M_\mathrm{CM}\) and \(M_\mathrm{AB}\). Table 1 defines the method used to quantify the design characteristics of the models, while Formula 1 shows how the Distance is computed. $$\begin{aligned} \text{ Distance}(x, y)=\frac{|\text{ Metric}\left( x \right)-\text{ Metric}(y)|}{\text{ Metric}(y)} \end{aligned}$$ where Metric are the indicators defined in Table 1, \(X\) is the output composed model, \(M_\mathrm{CM}\), \(Y\) is the output intended model, \(M_\mathrm{AB}\). The Stability can assume two possible values: one, indicating that \(M_\mathrm{CM}\) and \(M_\mathrm{AB}\) are stable, and zero, indicating that \(M_\mathrm{CM}\) and \(M_\mathrm{AB}\) are unstable. 
\(M_\mathrm{CM}\) is stable concerning \(M_\mathrm{AB}\) if the distance between \(M_\mathrm{CM}\) and \(M_\mathrm{AB}\) (considering a particular design characteristic) assumes a value equal to or lower than 0.2. That is, if \(0 \le \) Distance(\(M_\mathrm{CM},M_\mathrm{AB}) \le 0.2\), then Stability(\(M_\mathrm{CM},M_\mathrm{AB}) = 1\). On the other hand, \(M_\mathrm{CM}\) is unstable if the distance between \(M_\mathrm{CM}\) and \(M_\mathrm{AB}\) (regarding a specific design characteristic) assumes a value higher than 0.2. That is, if Distance(\(M_\mathrm{CM},M_\mathrm{AB}) > 0.2\), then Stability(\(M_\mathrm{CM},M_\mathrm{AB}) = 0\). We use this threshold to point out the most severe unstable models. For example, we check if architectural problems happen even in cases where the output composed models are considered stable. In addition, we also analyze the models that are closer to the threshold in Sect. 4. Formula 2 shows how the measure Stability is computed. $$\begin{aligned} \text{ Stability}\left( {x,y} \right)=\left\{ {{\begin{array}{l} {1,\quad \text{ if}\,0 \le \text{ Distance}\left( {x,y} \right)\le 0.2} \\ {0,\quad \text{ if}\,\text{ Distance}\left( {x,y} \right)>0.2} \\ \end{array} }} \right. \end{aligned}$$ For example, suppose \(M_\mathrm{CM}\) and \(M_\mathrm{AB}\) have 8 and 10 classes, respectively (i.e., NClass(\(M_\mathrm{CM}\)) = 8 and NClass(\(M_\mathrm{AB}\)) = 10). To check the stability of \(M_\mathrm{CM}\) regarding this metric, we calculate the distance between \(M_\mathrm{CM}\) and \(M_\mathrm{AB}\) considering the metric NClass as described below. $$\begin{aligned} \text{ Distance}\left( {M_\mathrm{CM}, M_\mathrm{AB} } \right)&= \frac{\left| {\text{ NClass}\left( {M_\mathrm{CM} } \right)-\text{ NClass}\left( {M_\mathrm{AB} } \right)} \right|}{\text{ NClass}\left( {M_\mathrm{AB} } \right)}\\&= \frac{|8-10|}{10}=0.2 \end{aligned}$$ As Distance(\(M_\mathrm{CM},M_\mathrm{AB}\)) = 0.2, we have Stability(\(M_\mathrm{CM},M_\mathrm{AB}\)) = 1. Therefore, \(M_\mathrm{CM}\) is stable with respect to \(M_\mathrm{AB}\) in terms of the number of classes. Elaborating on the previous example, we can now consider two additional design characteristics: the afferent coupling (DepOut) and the number of attributes (NAttr). Assuming DepOut(\(M_\mathrm{CM}) = 12\), DepOut(\(M_\mathrm{AB}) = 14\), NAttr(\(M_\mathrm{CM}) = 7\), and NAttr(\(M_\mathrm{AB}) = 9\), the Distances are calculated as follows: $$\begin{aligned} \text{ Distance}\left( {M_\mathrm{CM}, M_\mathrm{AB} } \right)&= \frac{\left| {\text{ DepOut}\left( {M_\mathrm{CM} } \right)-\text{ DepOut}\left( {M_\mathrm{AB} } \right)} \right|}{\text{ DepOut}\left( {M_\mathrm{AB} } \right)}\\&= \frac{|12-14|}{14}=0.14\\ \text{ Distance}\left( {M_\mathrm{CM}, M_\mathrm{AB} } \right)&= \frac{\left| {\text{ NAttr}\left( {M_\mathrm{CM} } \right)-\text{ NAttr}\left( {M_\mathrm{AB} } \right)} \right|}{\text{ NAttr}\left( {M_\mathrm{AB} } \right)}\\&= \frac{|7-9|}{9}=0.22 \end{aligned}$$ Therefore, \(M_\mathrm{CM}\) is stable concerning \(M_\mathrm{AB}\) in terms of NClass and DepOut. However, \(M_\mathrm{CM}\) is unstable in terms of NAttr. In this example, we evaluated the stability of \(M_\mathrm{CM}\) considering three design characteristics, and it was stable in two of them. As developers can consider various design characteristics to determine the stability of \(M_\mathrm{CM}\), we define Formula 3, which calculates the overall stability of \(M_\mathrm{CM}\) with respect to \(M_\mathrm{AB}\).
Refining the previous example, we evaluate the stability of \(M_\mathrm{CM}\) considering two additional design characteristics: the number of interfaces (NInter) and the depth of the class in the inheritance hierarchy (DIT). Supposing that NInter(\(M_\mathrm{CM}) = 15\), NInter(\(M_\mathrm{AB}) = 17\), DIT(\(M_\mathrm{CM}) = 11\), and DIT(\(M_\mathrm{AB}) = 13\), the Distances are calculated as follows: $$\begin{aligned} \text{ Distance}\left( {M_\mathrm{CM}, M_\mathrm{AB} } \right)&= \frac{\left| {\text{ NInter}\left( {M_\mathrm{CM} } \right)-\text{ NInter}\left( {M_\mathrm{AB} } \right)} \right|}{\text{ NInter}\left( {M_\mathrm{AB} } \right)}\\&= \frac{|15-17|}{17}=0.11\\ \text{ Distance}\left( {M_\mathrm{CM}, M_\mathrm{AB} } \right)&= \frac{\left| {\text{ DIT}\left( {M_\mathrm{CM} } \right)-\text{ DIT}\left( {M_\mathrm{AB} } \right)} \right|}{\text{ DIT}\left( {M_\mathrm{AB} } \right)}\\&= \frac{|11-13|}{13}=0.15 \end{aligned}$$ In both cases, \(M_\mathrm{CM}\) is stable, as the values 0.11 and 0.15 are \(\ge 0\) and \(\le 0.2\). Investigating the overall stability, we are able to understand how far the measures of the design characteristics of \(M_\mathrm{CM}\) are from those of \(M_\mathrm{AB}\). Formula 3 computes it as follows: $$\begin{aligned} \text{ Stability}(x,y)_\mathrm{overall} =1-\frac{\mathop \sum \nolimits _{k=0}^{j-1} \left( {\text{ Stability}_k } \right)}{j} \end{aligned}$$ Legend: \(j\): number of metrics used (e.g., 10 metrics in the case of Table 1). The overall stability of \(M_\mathrm{CM}\) in terms of NClass, DepOut, NAttr, NInter, and DIT is calculated as follows. The per-metric distances are 0.2 (NClass), 0.14 (DepOut), 0.22 (NAttr), 0.11 (NInter), and 0.15 (DIT); applying Formula 2 to each of them yields $$\begin{aligned} \mathop \sum \limits _{k=0}^{4} \left( {\text{ Stability}_k } \right)=1+1+0+1+1=4 \end{aligned}$$ and, therefore, $$\begin{aligned} \text{ Stability}(x,y)_\mathrm{overall} =1-\frac{4}{5}=1-0.8=0.2. \end{aligned}$$ As the overall stability is equal to 0.2, i.e., only one of the five design characteristics is unstable, we can consider that \(M_\mathrm{CM}\) is stable with respect to \(M_\mathrm{AB}\).
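To make the quantification method concrete, the short Python sketch below reproduces the worked example above under the definitions of Formulas 1–3: the per-metric Distance, the 0.2 stability threshold, and the overall stability. The metric names and values come from the example; everything else (function names, data layout) is illustrative and not part of the original study.

```python
# Illustrative sketch of Formulas 1-3 (Distance, Stability, overall Stability).
# Metric names/values come from the worked example; function names are ours.

THRESHOLD = 0.2  # per-characteristic stability threshold used in the paper

def distance(m_cm, m_ab):
    """Formula 1: relative distance of a design characteristic."""
    return abs(m_cm - m_ab) / m_ab

def stability(m_cm, m_ab):
    """Formula 2: 1 if the characteristic is stable (distance <= 0.2), else 0."""
    return 1 if distance(m_cm, m_ab) <= THRESHOLD else 0

def overall_stability(metrics):
    """Formula 3: 1 - (sum of per-metric stabilities) / number of metrics."""
    j = len(metrics)
    total = sum(stability(cm, ab) for cm, ab in metrics.values())
    return 1 - total / j

# Worked example: (M_CM value, M_AB value) per design characteristic.
example = {
    "NClass": (8, 10),   # distance 0.20 -> stable
    "DepOut": (12, 14),  # distance 0.14 -> stable
    "NAttr":  (7, 9),    # distance 0.22 -> unstable
    "NInter": (15, 17),  # distance 0.11 -> stable
    "DIT":    (11, 13),  # distance 0.15 -> stable
}

for name, (cm, ab) in example.items():
    print(f"{name}: distance={distance(cm, ab):.2f} stability={stability(cm, ab)}")
print("overall:", overall_stability(example))  # 1 - 4/5 = 0.2
```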
Similar compositions were performed using the override, merge, and union heuristics. This has helped us to identify scenarios where the SPL design models succumb (or not). For example, to produce release four (R4) of the MobileMedia (Table 3), the developers combine R3 with a delta model that represents the model elements that should be inserted into R3 in order to transform it into R4. For this, the developers use the composition heuristics described in Sect. 2.3. A practical example of how these models are produced can be seen in Figs. 2 and 3; a simplified sketch of the idea is shown after this paragraph. Model releases and composition specification In Table 3, the releases were selected because visible, structural modifications in the architectural design were carried out to add new features. For each new release, the previous release was changed in order to accommodate the new features. To implement a new evolution scenario, a composition heuristic can remove, add, or update the entities present in the previous model release. Throughout the design of all releases, a main concern was to use good modeling practices in addition to design-for-change principles. For example, assume that the mean coupling measure of \(M_\mathrm{CM}\) is 9 and that of \(M_\mathrm{AB}\) is 11. Then \(M_\mathrm{CM}\) is stable regarding \(M_\mathrm{AB}\), because 9 is about 18 % lower than 11, i.e., the distance stays below the 0.2 threshold. Following this stability threshold, we can systematically identify whether \(M_\mathrm{CM}\) remains stable over the evolution scenarios. Execution and analysis phase Model definition stage This step is a pivotal activity to define the input models and to express the model evolution as a model composition. The evolution involves two models: the base model, \(M_\mathrm{A}\), the current release, and the delta model, \(M_\mathrm{B}\), which represents the changes that should be inserted into \(M_\mathrm{A}\) to transform it into \(M_\mathrm{CM}\), as previously discussed. Considering the product-line design models used in the case studies, \(M_\mathrm{B}\) represents the new design elements realizing the new feature. Then, a composition relationship is specified between \(M_\mathrm{A}\) and \(M_\mathrm{B}\) so that the composed model, \(M_\mathrm{CM}\), can be produced. Composition and measurement stage In total, 180 compositions were performed: 60 for the MobileMedia, 60 for the Shogi Game, and 60 for the Checkers Game. The compositions were performed manually using the IBM RSA [19, 35]. The result of this phase was a document of composition descriptions, including the data gathered from the application of our metrics suite and all design models created. We used a well-validated suite of inconsistency metrics applied in previous work [14] focused on quantifying syntactic and semantic inconsistencies. The syntactic inconsistencies were quantified using the IBM RSA's model validation mechanism. The semantic inconsistencies were quantified using the SDMetrics tool [46]. In addition, we also checked both syntactic and semantic inconsistencies manually because some metrics, e.g., "the number of non-meaningful model elements," depend on the meaning of the model elements, and the current modeling tools are unable to compute such metrics. The identification of the inconsistencies was performed in three review cycles in order to avoid false positives and false negatives. We also consulted the developers as needed, for instance to check and confirm specific cases of semantic inconsistencies.
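As a rough illustration of how a release could be derived from a base model \(M_\mathrm{A}\) and a delta model \(M_\mathrm{B}\), the Python sketch below mimics override-, merge-, and union-style composition over a flat dictionary of named elements. This is only a toy approximation of the heuristics of Sect. 2.3 (the real heuristics operate on UML models with dedicated matching rules); the element names, and the assumption that the base model wins on a name clash under override, are ours and purely for illustration.

```python
# Toy approximation of override/merge/union composition over named elements.
# Real heuristics (Sect. 2.3) work on UML models; this only illustrates the idea.
# Assumption: under "override", the base model's version wins on a name clash.

Base  = {"HandleException.getImage": "String[]",    "ManageAlbum": "interface"}
Delta = {"HandleException.getImage": "ImageData[]", "ManageLabel": "interface"}

def override(base: dict, delta: dict) -> dict:
    """Name clashes resolved in favour of the base model; new elements added."""
    composed = dict(delta)
    composed.update(base)  # base wins on shared names (assumption of this sketch)
    return composed

def merge(base: dict, delta: dict) -> dict:
    """Corresponding elements are combined; conflicting values are kept side by side."""
    composed = dict(base)
    for name, value in delta.items():
        if name in composed and composed[name] != value:
            composed[name] = (composed[name], value)  # conflict surfaces in the output
        else:
            composed[name] = value
    return composed

def union(base: dict, delta: dict) -> list:
    """All elements from both models are kept, duplicates included."""
    return sorted(list(base.items()) + list(delta.items()))

print(override(Base, Delta))
print(merge(Base, Delta))   # getImage ends up with two return types, as in Figs. 2/3
print(union(Base, Delta))
```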
On the other hand, the well-formedness (syntactic and semantic) rules defined in the UML metamodel were automatically checked by the IBM RSA's model validation mechanism. Effort assessment stage The goal of the third phase was to assess the effort to resolve the inconsistencies using the quantification method described in Sect. 3.4. The composition heuristics were used to generate the evolved models, so that we could evaluate the effect of stability on the model composition effort. In order to support a detailed data analysis, the assessment phase was further decomposed into two main stages. The first stage is concerned with pinpointing the inconsistency rates produced by the compositions (H1). The second stage aims at assessing the effort to resolve a set of previously identified inconsistencies (H2). All measurement results and the raw data are available at [12]. This section analyzes the data set obtained from the experimental procedures described in Sect. 3. Our findings are derived from both the numerical processing of this data set and the graphical representation of interesting aspects of the gathered results. Section 4.1 elaborates on the gathered data in order to test the first hypothesis (H1). Section 4.2 discusses the collected data related to the second hypothesis (H2). H1: Stability and inconsistency rate This section describes aspects of the collected data with respect to the impact of stability on the inconsistency rate. For this, descriptive statistics are carefully computed and discussed. Understanding these statistics is a key step to grasp the data distribution and the main trends. To this end, the main trend was calculated using the two statistics most commonly used to discover trends (mean and median), and the dispersion of the data around them was also computed, mainly by means of the standard deviation. Note that these statistics are calculated from 180 compositions, i.e., 60 compositions applied to the evolution of the MobileMedia SPL, 60 applied to the Shogi SPL, and 60 applied to the Checkers SPL. Table 4 shows descriptive statistics about the collected data regarding the inconsistency rate. Figure 4 depicts the box-plot of the collected data. A thorough analysis of these statistics reveals the positive effects of a high level of stability on the inconsistency rate. In fact, we observe only harmful effects in the absence of stability. The main outstanding finding is that the inconsistency rate in stable design models is lower than in unstable design models. This result is supported by some observations described as follows (see Fig. 4): Box-plot of inconsistency rate Table 4 Descriptive statistics of the inconsistency rate First, the median of the inconsistency rate in stable models is considerably lower than in unstable models: a median of 0.31 in relation to the intended model, against the 3.86 presented by unstable models. This means, for example, that stable SPL models can present no inconsistencies in some cases. On the other hand, unstable models are likely to hold a higher inconsistency rate than that presented by stable models, typically around 3.86 inconsistencies in relation to the intended model. This implies, for example, that if the output composed model is unstable, then there is a high probability of inconsistencies being present. Stable models have a favorable impact on the inconsistency rate.
More importantly, its absence has harmful consequences for the number of inconsistencies. These negative effects are evidenced by the significant difference between the number of inconsistencies in stable and unstable models. In fact, stable models tend to have just 8.1 % of the inconsistencies found in unstable models, as indicated by the medians 0.31 (stable) and 3.86 (unstable). One of the main reasons is that inconsistency propagation is found more frequently in unstable models. This means that developers must check all model elements in order to identify the inconsistencies and manipulate the composed model so that the intended model can be obtained. Another interesting finding is that the inconsistencies tend to be quite close to the central tendency in stable models, with a standard deviation equal to 0.84. In unstable models, on the other hand, these inconsistencies tend to spread out over a large range of values; this is reflected in a high standard deviation, equal to 2.63. It is important to point out that, to draw valid conclusions from the collected data, it is necessary to analyze and possibly remove outliers from the data. Outliers are extreme values assumed by the inconsistency measures that may influence the study's conclusions. To analyze the threat posed by these outliers to the collected data, we made use of box-plots (Fig. 4). According to [45], it is necessary to verify whether the outliers are caused by an extraordinary exception (unlikely to happen again), or whether the cause of the outlier can be expected to happen again. In the first case, the outliers must be removed; in the latter case, they must not be removed. In our study, some outliers were identified; however, they were not extraordinary exceptions, since they could happen again. Consequently, they were left in the collected data set, as they do not affect the results. We performed a statistical test to evaluate whether the difference between the inconsistency rates of stable and unstable models is in fact statistically significant. As we hypothesize that stable models tend to exert a lower inconsistency rate than unstable models, the test of the mean difference between the stable and unstable groups is performed as a one-tailed test. In the analyses, we considered a significance level of 0.05 (\(p \le 0.05\)) to indicate a true significance. a. Mann–Whitney test As the collected data violated the assumption of normality, the non-parametric Mann–Whitney test was used as the main statistical test. The results produced are \(U^{\prime } = 7.21\), \(U = 744\), \(z = 9.33\), and \(p < 0.001\). As the p value is lower than 0.05, the null hypothesis of no difference between the rates of inconsistency in stable and unstable models \((H_{1-0})\) can be rejected. That is, there is sufficient evidence to say that the difference between the inconsistency rates of stable and unstable models is statistically significant. Table 5 shows that the mean rank of the inconsistency rate for unstable models is higher than that of stable models. As the Mann–Whitney test [45] relies on ranking scores from lowest to highest, the group with the lowest mean rank is the one that contains the largest number of low inconsistency rates. Likewise, the group with the highest mean rank is the one that contains the largest number of high inconsistency rates. Hence, the collected data confirm that unstable models tend to have a higher inconsistency rate than the stable design models.
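For readers who want to reproduce this style of analysis, the sketch below computes the per-group descriptive statistics and runs a one-tailed Mann–Whitney test and the Spearman correlation (used in the next paragraphs) with SciPy. All sample values are invented placeholders; only the procedure (one-tailed alternative, significance level 0.05) mirrors the paper.

```python
# Hypothetical reproduction of the analysis pipeline: descriptive statistics,
# one-tailed Mann-Whitney U test, and Spearman correlation. Data are invented.
import statistics as st
from scipy.stats import mannwhitneyu, spearmanr

stable_rates   = [0.0, 0.2, 0.3, 0.3, 0.4, 0.5, 0.6, 0.9]   # inconsistency rates
unstable_rates = [1.8, 2.5, 3.1, 3.9, 4.2, 4.8, 5.6, 6.3]

for name, rates in [("stable", stable_rates), ("unstable", unstable_rates)]:
    print(f"{name:>8}: mean={st.mean(rates):.2f} "
          f"median={st.median(rates):.2f} stdev={st.stdev(rates):.2f}")

# H1-1: stable < unstable, hence a one-tailed ("less") alternative.
u_stat, p_value = mannwhitneyu(stable_rates, unstable_rates, alternative="less")
print(f"Mann-Whitney: U={u_stat}, p={p_value:.4f}, reject H1-0: {p_value <= 0.05}")

# Correlation between stability (1 = stable, 0 = unstable) and inconsistency rate.
stability = [1] * len(stable_rates) + [0] * len(unstable_rates)
rates     = stable_rates + unstable_rates
rho, p_corr = spearmanr(stability, rates)
print(f"Spearman: rho={rho:.2f}, p={p_corr:.4f}")   # a negative rho is expected
```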
Table 5 Mann–Whitney test and Spearman's correlation analysis b. Correlation To examine the strength of the relationship (the correlation coefficient) between stability and inconsistency rate, the Spearman's correlation (SC) test was applied (see Table 5). Pearson's correlation is not used because the data sets are not normally distributed. Note that this statistical test assumes that both variables are independent. The correlation coefficient takes on values between \(-1\) and 1. Values close to 1 or \(-1\) indicate a strong relationship between stability and inconsistency rate. A value close to zero indicates a weak or non-existent relationship. As can be seen in Table 5, the t test of significance of the relationship has a low p value, indicating that the correlation is significantly different from zero. Spearman's correlation analysis resulted in a negative and significant correlation (SC \(= -0.71\)). The negative value indicates an inverse relationship: as one variable increases, the other decreases. Hence, composition inconsistencies tend to manifest more often in unstable models than in stable models. The above correlation suggests that, as the stability of the output composed model decreases, the inconsistency rate increases. The results therefore suggest that, on average, stable models tend to have a significantly lower inconsistency rate than unstable design models, confirming the indication of a correlation between stability and inconsistency rate. Consequently, the null hypothesis \((H_{1-0})\) can be rejected and the alternative hypothesis \((H_{1-1})\) confirmed. a. The effect of severe evolution categories After discussing how the dataset is grouped, grasping the main trends, and studying the relevance of the outliers, the main conclusion is that stable models tend to present a lower inconsistency rate than unstable models. This finding can be seen as a first step to overcome the lack of practical knowledge about the effects of model stability on the inconsistency rate in realistic scenarios of model evolution supported by composition heuristics. Some previous studies (e.g., [21, 38]) report similar insights at the code level; these studies report a positive association between stability and low variation of coupling and size. We have noticed that although the input design models (\(M_\mathrm{A}\) and \(M_\mathrm{B}\)) are well structured, they are the target of widely scoped inconsistencies in certain model composition scenarios. These widely scoped inconsistencies are motivated by unexpected modifications in specific design characteristics of the design models, such as coupling and cohesion. These scenarios mainly occurred when the composition heuristics accommodated unanticipated, severe changes from \(M_\mathrm{A}\) to \(M_\mathrm{B}\). The most challenging changes observed are those related to the refinement of the MVC (Model-View-Controller) architecture design of the SPLs used in this study. Another observation is that the composition heuristics (override, merge, and union) are not effective in accommodating these changes from \(M_\mathrm{A}\) to \(M_\mathrm{B}\). The main reason is that the heuristics are unable to "restructure" the design models in such a way that these changes do not harm the static or behavioral aspects of the design models.
These harmful changes usually emerge with a set of ever-present evolving change categories, such as a modification of the model properties and derivation of new model elements (e.g., components or classes) from other existing ones. In the first category, modification, model elements have some properties affected. This is typically the case when a new operation conflicts with an operation defined previously. In Fig. 2, for example, the operation \(getImage()\) in the interface R2.HandleException had its return type, \(String[]\), conflicting with the return type, \( ImageData[]\) of the interface \(Delta(R2,R3).HandleException\). Another example is the component ManageAlbum that had its name modified to ManageLabel to express semantic alterations in the concepts used to realize the error-handling feature. Only one of the names and return types can be accepted, but the two modifications cannot be combined. Both cases are scenarios in which the heuristics are unable to correctly pick out what element must be renamed and what return type must be considered. The problem is that detection and decision of these inconsistencies demand a thorough understanding of: (i) what the design model elements actually mean as well as the domain terms "Album" and "Label"; and (ii) the expected semantics of the modified method. In addition, semantic information is typically not included in any formal way so that the heuristics can infer the most appropriated choice. Consequently, the new model elements responsible for implementing the added features are presented with overlapping semantic values and unexpected behaviors. Interestingly, this has been the case where existing optional as well as alternative features are involved in the change. In the second category, derivation, the changes are more severe. Architectural elements are refined and/or moved in the model to accommodate the new changes. Differently from the previous category, the affected architectural elements are usually mandatory features because this kind of evolution in software product lines is mainly required to facilitate the additions of new variabilities or variants later in the project. Unfortunately, in this context of more widely scoped changes, the heuristic-based composition heuristics have demonstrated to be ineffective. A concrete example of this inability is the refinement of the MVC architecture style of the MobileMedia SPL in the third evolution scenario. In practical terms, the central architectural component, BaseController, is broken into other controllers such as PhotoListController, AudioController, VideoController and LabelController to support a better manipulation of the upcoming media like photo, audio, video and the label attached to them. This is partially due to the name-based model comparison policy in the heuristics, which are unable to recognize more intricate equivalence relationships between the model elements. Indeed, this comparison strategy is very restrictive whenever there is a correspondence relationship 1:N between elements in the two input models. That is, it is unable to match the upcoming four controllers with the previous one, BaseController. A practical example of this category of relationship (1:N) involves the required interface ControlPhoto (release three) of the AlbumListScreen component. This interface was decomposed into two new required interfaces ControlAlbum and ControlPhotoList (release four), thereby characterizing a relationship 1:2. 
In this particular case, the name-based model comparison should be able to "recognize" that ControlAlbum and ControlPhotoList are equivalent to ControlPhoto. However, in the output model (release four), the AlbumListScreen component provides duplicate services to the environment, giving rise to a severe inconsistency. b. Inconsistency propagation After addressing the hypotheses and knowing that instabilities have a detrimental effect on the density of inconsistencies, we analyze whether the location where they arise (i.e., architectural elements realizing mandatory or optional features) can cause some unknown side effects. Some interesting findings emerged, which are discussed as follows: To begin with, instability problems are more harmful when they take place in design model elements realizing mandatory features. This can be explained as follows. First, the inconsistency propagation is often higher in model elements implementing mandatory features than in those implementing alternative (or optional) features. When inconsistencies arise in elements realizing optional and alternative features, they also tend to naturally cascade to elements realizing mandatory features. Consequently, the mandatory features end up being the target of inconsistency propagation. Based on the knowledge that mandatory features tend to be more vulnerable to ripple effects of inconsistencies, developers must structure product-line architectures in such a way that inconsistencies remain precisely "confined" to the model elements where they appear. Otherwise, the quality of the products extracted from the SPL can be compromised, as the core elements of the SPL can suffer from problems caused by incorrect feature compositions. The higher the number of inconsistencies, the higher the chance that they persist in the output model, even after an inspection process performed by a designer. Consequently, the extraction of certain products can become error-prone or even prohibitive. The second interesting insight is that the higher the instability in optional features, the higher the inconsistency propagation toward mandatory features. However, propagation in the inverse direction (i.e., from mandatory to alternative and optional features) seems to be less common. In Fig. 2 (override), a practical example of this inverse propagation can be seen. The instability in the mandatory feature "album and photo management" compromises the optional feature "edit photo's label." The NewLabelScreen component (optional feature) has its two services, i.e., \(getLabelName()\) and \(getFormType()\) (specified in the interface \(ManageLabel\)), compromised. The reason is that the required service \(editLabel()\) cannot be provided by the BaseController (mandatory feature). Thus, the "edit photo's label" feature can no longer be provided due to problems in the mandatory feature "album and photo management." For example, in the fourth evolution scenario of the Checkers Game, the optional feature, Customize Pieces, is correctly glued to R4 using the override heuristic so that the new release, R5, can be generated. The problem is that the inconsistencies that emerge in the architectural component Command are propagated to the architectural elements CustomizePieces and GameManager. Thus, the mandatory feature "piece management," implemented by the Command component, affects the optional feature "customize pieces," implemented by the components CustomizePieces and GameManager.
Although the optional feature, Customize Pieces, has been correctly attached to the base architecture, the composed models will not have the expected functionality related to the customization of pieces. H2: Stability and resolution effort This section discusses interesting aspects of the collected data concerning the impact of stability on the developers' effort. The knowledge derived from them helps to understand the effects of model stability on the inconsistency resolution effort. In a similar way to the previous section, we calculate the main trend and the data dispersion. Table 6 provides the descriptive statistics of the sampled inconsistency resolution effort in the stable and unstable model groups. Figure 5 graphically depicts the collected data using box-plots. To begin our discussion, we first compare the median values of the inconsistency resolution effort of the stable and unstable groups. We can observe that the median of the stable models (equal to 6) is much lower than that of the unstable models (equal to 111). Box-plot of resolution effort in relation to the intended model Table 6 Descriptive statistics of the resolution effort (min) This difference between the groups is also observed in the mean and standard deviation, which represent the main trend and dispersion measures, respectively. The gathered results, therefore, indicate that stable models claim less resolution effort than unstable models. This means that developers tend to perform a smaller number of operations (creations, removals, and modifications) to transform the composed model into the intended model. Although we have observed some outliers, e.g., the maximum value (368) registered in unstable models, they are not extraordinary exceptions, as they could happen again. Consequently, they were left in the collected data set, as they do not distort the results. Given the difference between the mean and median described in the descriptive analysis, statistical tests are applied to assess whether the difference in effort to fix unstable and stable models is in fact statistically significant. We conjecture that stable models tend to require a lower inconsistency resolution effort than unstable models. Hence, a one-tailed test is performed to test the significance of the mean difference between the stable and unstable groups. Again, in the analyses we considered a significance level of 0.05 (\(p \le 0.05\)) to indicate a true significance. As the dataset does not respect the assumption of normality, we use the non-parametric Mann–Whitney test as the main statistical test. The results of the Mann–Whitney test are \(U^{\prime } = 7.372\), \(U = 584\), \(z = 9.79\), and \(p < 0.001\). As the p value is lower than 0.05, the null hypothesis can be rejected. In other words, there exists a difference between the efforts required to resolve inconsistencies in the stable and unstable model groups. In fact, there is substantial evidence pointing out the difference between the median measures of the two groups. Table 7 shows that the difference between the mean ranks is significant. The mean rank in stable models is about 38 % of the mean rank in unstable models. As the Mann–Whitney test relies on ranking scores from lowest to highest, the group with the lowest mean rank is the one that contains the largest number of low-effort cases. Likewise, the group with the highest mean rank is the one that contains the largest number of high-effort cases.
Hence, the collected data show that unstable models tend to require higher effort than stable models. b. Correlation Analysis As the gathered data do not follow a normal distribution, we apply the Spearman's correlation test. Table 7 provides the results of the Spearman's correlation test. The low p value (\(<\) 0.001) indicates that the correlation significantly departs from zero. Recall that a Spearman's correlation value close to 1 or \(-1\) indicates a strong relationship between stability and effort. On the other hand, a value close to 0 indicates a weak or non-existent relationship. The results (SC \(= - 0.698\)) suggest that there is a negative and significant correlation between the two variables. This implies that as stability increases, the effort to resolve inconsistencies decreases. Consequently, stable models required much less effort to be transformed into the intended model than unstable models. Based on such results, we can reject the null hypothesis \((H_{2-0})\) and accept the alternative hypothesis \((H_{2-1})\): stable models tend to require lower effort to resolve composition inconsistencies than unstable models. a. The effect of instability on resolution effort In Sect. 4.1, we discussed that the inconsistencies in the model elements realizing optional features tend to propagate to the ones realizing mandatory features. Inconsistencies in elements realizing optional features tend to affect the structure of model elements realizing mandatory features. The reason is that some relationships are often introduced between elements realizing mandatory and optional features during the composition. Considering the resolution effort, we have observed that the higher the instability in optional features, the higher the resolution effort. Developers need to resolve a cascading chain of inconsistencies, and usually this process should be applied recursively until all inconsistencies have been resolved. This resolution is more effort consuming because widely scoped changes are required to tame such ripple effects; in essence, the required effort goes into restructuring the composed model. We have identified that this higher effort to resolve inconsistencies is due to the fact that the syntax-based composition heuristics are unable to deal with the semantic conflicts that arise between the model elements of mandatory and optional features. As a result, inconsistencies are formed. In Fig. 3, for example, the component BaseController requires services from a component NewAlbumScreen that provides just one mandatory feature, "create album," rather than from a component that provides two features: "create album" and "edit photo's label." This is because releases R2 and R3 use different component names (R2.NewAlbumScreen and R3.NewLabelScreen) for the same purpose. That is, they implement the mandatory feature Create Album in components with contrasting names. A syntax-based composition is unable to foresee these kinds of semantic inconsistencies, or even indicate any problem in BaseController, as the component remains syntactically correct. From R2 to R3, the domain term Album was replaced by Label. However, the purely syntactical, match-by-name mechanism is unable to catch and incorporate this simple semantic change into the composition heuristic. To overcome this, a semantic-based approach would be required to allow, for example, a semantic alignment between these two domain terms. Consequently, the heuristics would be able to properly match R2.NewAlbumScreen and R3.NewLabelScreen.
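To illustrate why a purely name-based match misses this kind of renaming, the toy sketch below contrasts the syntactic match-by-name strategy with a hypothetical synonym-aware match. The synonym table and function names are ours and serve only to show the idea of a semantic alignment between the domain terms Album and Label.

```python
# Toy contrast between match-by-name and a hypothetical semantics-aware match.
# The synonym table is an invented stand-in for a real semantic alignment step.

SYNONYMS = {("album", "label")}  # assumed domain alignment, e.g., Album <-> Label

def match_by_name(a: str, b: str) -> bool:
    """Purely syntactic comparison, as used by the composition heuristics."""
    return a == b

def match_with_alignment(a: str, b: str) -> bool:
    """Treat names as equivalent if they differ only by aligned domain terms."""
    na, nb = a.lower(), b.lower()
    for t1, t2 in SYNONYMS:
        na, nb = na.replace(t1, "*"), nb.replace(t1, "*")
        na, nb = na.replace(t2, "*"), nb.replace(t2, "*")
    return na == nb

print(match_by_name("NewAlbumScreen", "NewLabelScreen"))         # False -> no match
print(match_with_alignment("NewAlbumScreen", "NewLabelScreen"))  # True  -> matched
```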
Still in Fig. 2, the architectural model R3, which was produced following the merge heuristic, contains a second facet of semantic problems: behavioral inconsistency. The component ExceptionHandling provides two services with the same purpose, \(getImage()\text{:}String[]\) and \(getImage()\text{:}ImageData[]\), which nevertheless have different semantic values. This contrasting characteristic is emphasized by the different return types, \(String[]\) and \(ImageData[]\). However, in this case, the inconsistency remained confined to the optional feature rather than propagating to model elements implementing mandatory features. To resolve the problem, the method \(getImage()\text{:}String[]\) should be removed; in total, only one operation is performed. Thus, these inconsistencies can only be pinpointed by resorting to sophisticated semantics-based composition, which relies on the action semantics of the model elements. According to [28], the current detection of behavioral inconsistency is based on complex mathematical techniques, program slicing, and program dependence graphs. Unfortunately, none of them is able to systematically compare the behavioral aspects of components realizing two features, let alone compose them properly. Even worse, the composition techniques would be unable to match, for example, the ManageAlbum and ManageLabel interfaces. b. The effect of multiple concerns on resolution effort Another finding is that the higher the number of features implemented by a model element, the higher the resolution effort. We have observed that model elements realizing multiple features tend to require more inconsistency resolution effort than those realizing just one feature. The reason is that the model elements realizing multiple features tend to receive a higher number of upcoming changes to be accommodated by the composition heuristics than the ones realizing a single feature. These model elements become more vulnerable to the unpredictable effects of the severe evolution categories. This means that developers tend to invest more effort to resolve all possible inconsistencies. In fact, a higher number of inconsistencies has been observed in 'multiple-featured' components than in 'single-featured' components. As developers cannot foresee or even precisely identify all ripple effects of these inconsistencies through other model elements, the absence of stability can be used as a good indicator of inconsistency. Let us consider the BaseController, the central controller in the MobileMedia architecture, which implements two features (see Fig. 2). The collected data show that the BaseController was modified in almost all evolution scenarios because it is a pivotal architectural component in the model-view-controller architectural style of the MobileMedia SPL. Unfortunately, the changes cannot be properly realized in all cases. In addition, we observe that BaseController's inconsistencies affect five other components, namely NewLabelScreen, AlbumListScreen, PhotoListScreen, PhotoViewScreen, and AddPhotoToAlbumScreen. All these affected components require the services provided by the BaseController. Moreover, we notice that the BaseController had a higher likelihood of receiving inconsistencies from other model elements than any other component. The reason is that it also depends on many other components to provide the services of the multiple features. For example, BaseController can be harmed by inconsistencies arising from the components ManageAlbum, ManagePhotoInfo, and ControlPhoto.
This means that, at some point, BaseController can no longer provide its services because it was probably affected by inconsistencies located in these components. It is interesting to note that NewAlbumScreen is also affected by an inconsistency that emerged from AlbumData, as it requires the service (viewPhoto) provided by the BaseController in the interface ControlPhoto, which cannot be accessed. The main reason is that the service resetImageData(), specified in the interface ManagePhotoInfo, can no longer be provided by the component AlbumData, compromising the service offered in the interface ControlPhoto. Since BaseController is not able to correctly provide all services defined in its provided interface ControlPhoto, it is in turn re-affected by an inconsistency that previously arose from it. This happens because NewAlbumScreen does not provide the services described in the interface ManageAlbum. This phenomenon represents cyclic inconsistency propagation. By understanding this type of phenomenon, the software designer can examine the design models upfront and more precisely in order to localize undetected cyclic dependencies between the model elements. Another observation is that optional features are also harmed by inconsistency propagation from the mandatory features. For example, the PhotoSorting component (realizing the optional feature "sorting photos") is unable to provide the service \(sortCommand()\), specified in the interface SoftPhoto. This is due to the absence of the required service \(resetImageData()\) from the ManagePhotoInfo interface, which belongs to the mandatory feature "album management." In practical terms, it indicates that undesired effects in features can be due to unexpected instabilities in the mandatory features. In collaborative software development, for example, this is a typical problem because the model elements implementing different features are developed in parallel, but they are rarely prepared upfront to be composed. Hence, developers should invest considerable effort to properly promote the composition. To the best of our knowledge, our results are the first to investigate empirically the relation between quality attributes and model composition effort in a broader context. In [13], we initially investigated the research questions addressed in this paper, but they were evaluated in a smaller scope. This paper, therefore, represents an extension of the results obtained previously. The main extensions can be described as follows: (1) two more case studies were performed, i.e., the evolution studies with the Shogi and Checkers SPLs, which means that the number of compositions jumped from 60 to 180; (2) new lessons learned were obtained from a broader study; and (3) the size of the sample data was larger than that previously available; hence, the hypotheses could be better tested. We have observed that not only have a wide variety of model composition techniques [9, 25] been created, but also that some previous works [13, 33] have demonstrated that stability is a good predictor of defects [33] and of the presence of good designs [21]. However, none of them has directly investigated the impact of stability on model composition effort. The lack of empirical evidence hinders the understanding of the side effects of stability on developers' effort. Consequently, developers in industrial projects have to rely solely on feedback from experts to determine "the goodness" of the input models and their compositions.
In fact, according to several recent observations [9, 18, 28], the state of the practice in model quality assessment indicates that modeling is still in the craftsmanship era, and this problem is even more accentuated in the context of model composition. The current model composition literature does not provide any support to perform empirical studies considering model composition effort [18, 28], or even to evaluate the effects of model stability on composition effort. In [18], the authors highlight the need for empirical studies in model composition to provide insights about how to deal with ever-present problems such as conflicts and inconsistencies in real-world settings. In [28], Mens also reveals the need for more "experimental researches on the validation and scalability of syntactic and semantic merge approaches, not only regarding conflict detection, but also regarding the amount of time and effort required to resolve the conflicts." Without empirical studies, researchers and developers are left without any insight about how to evaluate model composition in practice. For example, there is no metric, indicator, or criterion available to assess the UML models that are merged through, for instance, the UML built-in composition mechanism (i.e., package merge) [11, 37]. There are some specific metrics available in the literature for supporting the evaluation of model composition specifications. For instance, Chitchyan et al. [8] have defined some metrics, such as scaffolding and mobility, to quantify quality attributes of compositions between two or more requirements artifacts. However, their metrics are targeted at evaluating the reusability and stability of explicit descriptions of model composition specifications. In other words, their work is not targeted at evaluating model composition heuristics. Boucké et al. [4] also propose a number of metrics for evaluating the complexity and reuse of explicitly defined compositions of architectural models. Their work is not focused on heuristic-based model composition either. Instead, we have focused on analyzing the impact of stability on the effort to resolve emerging inconsistencies in output models. Therefore, existing metrics (such as those described in [36]) cannot be directly applied to our context. Although we have proposed a metric suite for quantifying inconsistencies in UML class diagrams and then applied these metrics to evaluate the composition of aspect-oriented models and UML class diagrams [14], nothing has been done to understand the effects of model stability on the developers' effort. Some previous works investigated the effects of using UML diagrams and their profiles for different purposes. In [6], Briand et al. looked into the formality of UML models and its relation with model quality and comprehensibility. In particular, Briand et al. investigated the impact of using OCL (Object Constraint Language [37]) on defect detection, comprehension, and impact analysis of changes in UML models. In [40], Filippo et al. carried out a series of four experiments to assess how developers' experience and ability influence Web application comprehension tasks supported by UML stereotypes. Although they have found that the use of UML models provides real benefits for typical software engineering activities, none of these works has investigated the peculiarities of UML models in the context of model composition.
Finally, we therefore see this paper as a first step in a more ambitious agenda to support empirically the assessment of model composition techniques in general. Threats to validity Our exploratory study has obviously a number of threats to validity that range from internal, construct, statistical conclusion validity threats to external threats. This section discusses how these threats were minimized and offers suggestions for improvements in future study. Internal validity Inferences between our independent variable (stability) and the dependent variables (inconsistency rate and composition effort) are internally valid if a causal relation involving these two variables is demonstrated [5, 41]. Our study met the internal validity because: (1) the temporal precedence criterion was met, i.e., the instability of design models preceded the inconsistencies and composition effort; (2) the covariation was observed, i.e., instability of design models varied accordingly to both inconsistencies and composition effort; and (3) there is no clear extra cause for the detected covariation. Our study satisfied all these three requirements for internal validity. The internal validity can be also supported by other means. First, the detailed analysis of concrete examples demonstrating how the instabilities were constantly the main drivers of inconsistencies presented in this paper. Second, our concerns throughout the study to make sure that the observed values in the inconsistency rates and composition effort were confidently caused by the stability of the design models. However, some threats were also identified, which are explicitly discussed below. First, due to the exploratory nature of our study, we cannot state that the internal validity of our findings is comparable to the more explicit manipulation of independent variables in controlled experiments. This exceeding control employed to deal with some factors (i.e., with random selection, experimental groups, and safeguards against confounding factors) was not used because it would significantly jeopardize the external validity of the findings. Second, another threat to the internal validity is related to the imperfections governing the measurements of inconsistency rate and resolution effort. As the measures were partially calculated in a manual fashion, there was the risk that collected data would not be always reliable. Hence, this could lead to inconsistent results. However, we have mitigated this risk by establishing measurement guidelines, two-round data reviews with the actual developers of the SPL design models, and by engaging them in discussions in cases of doubts related to, for instance, the semantic inconsistencies. Next, usually the confounding variable is seen as the major threat to the internal validity [41]. That is, rather than just the independent variable, an unknown third variable unexpectedly affects the dependent variable. To avoid confounding variables in our study, a pilot study was carried out to make sure that the inconsistency rate and composition effort were not affected by any existing variable other than stability. During this pilot study, we tried to identify which other variables could affect the inconsistency rate and resolution effort such as the size of the models. Another concern was to deal with the experimenter bias. That is, the experimenters inadvertently affect the results by unconsciously realizing experimental tasks differently that would be expected. 
To minimize the possibility of experimenter bias, the evaluation tasks were performed by developers who knew neither the purpose of the study nor the variables involved. For example, developers created the input design models of the SPLs without being aware of the experimental purpose of the study. In addition, the composition heuristics can be automatically applied. Consequently, the study results can be more confidently applied to realistic development settings without suffering influences from experimenters. Finally, the randomization of the subjects was not performed because it would require overly simple software engineering tasks; hence, it would undermine the objective of this study (Sect. 3.1). Statistical conclusion validity We evaluated the statistical conclusion validity by checking whether the independent and dependent variables (Sect. 3.4) were submitted to suitable statistical methods. These methods are useful to analyze whether (or not) the research variables covary [10]. The evaluation is concerned with two related statistical inferences: (1) whether the presumed cause and effect covary, and (2) how strongly they covary [10]. Considering the first inference, we may improperly conclude that there is a causal relation between the variables when, in fact, there is none. We may also incorrectly state that the causal relation does not exist when, in fact, it exists. With respect to the second inference, we may incorrectly define the magnitude of covariation and the degree of confidence that the estimate warrants [7, 45]. Covariance of cause and effect We eliminated the threats to the causal relation between the research variables by studying whether the collected sample follows a normal distribution. Thus, it was possible to verify whether parametric or non-parametric statistical methods should be used. For this purpose, we used the Kolmogorov–Smirnov test to determine how likely it is that the collected sample is normally distributed. As the dataset did not assume a normal distribution, non-parametric statistics were used (Sects. 4.1 and 4.2). Hence, we are confident that the test statistics were applied correctly, as the assumptions of the statistical tests were not violated. Statistical significance Based on a significance level of 0.05 (\(p \le 0.05\)), the Mann–Whitney test was used to evaluate our formulated hypotheses. The results collected from this test indicated \(p < 0.001\). This provides sufficient evidence to say that the difference between the inconsistency rates (and composition effort) of stable and unstable models is statistically significant. The correlation between the independent and dependent variables was also evaluated. For this, Spearman's correlation test was used. The low p value (\(<\)0.001) indicated that there is a significant correlation between the inconsistency rate and stability, as well as between composition effort and stability. In addition, we followed some general guidelines to improve conclusion validity [39, 45]. First, a high number of compositions was performed to increase the sample size, hence improving the statistical power. Second, experienced developers used more realistic design models of SPLs, state-of-practice composition heuristics, and a robust software modeling tool. These improvements reduced "errors" that could obscure the causal relationship between the variables under study. Consequently, this brought better reliability to our results.
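As an illustration of this normality check, the snippet below runs a Kolmogorov–Smirnov test against a normal distribution fitted to the sample and falls back to non-parametric tests when normality is rejected. The data are invented, and only the decision procedure mirrors the paper; note that estimating the parameters from the sample itself is a simplification (a Lilliefors-style correction would be more precise).

```python
# Hypothetical normality check guiding the choice of statistical tests.
import numpy as np
from scipy.stats import kstest

rng = np.random.default_rng(42)
sample = rng.exponential(scale=2.0, size=60)   # invented, clearly non-normal data

# Kolmogorov-Smirnov test against a normal distribution fitted to the sample.
# (Fitting the parameters from the same sample is a simplification.)
stat, p = kstest(sample, "norm", args=(sample.mean(), sample.std(ddof=1)))
print(f"KS statistic = {stat:.3f}, p = {p:.4f}")

if p <= 0.05:
    print("Normality rejected -> use non-parametric tests (e.g., Mann-Whitney).")
else:
    print("Normality not rejected -> parametric tests could be considered.")
```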
Construct validity concerns the degree to which inferences are warranted from the observed cause and effect operations included in our study to the constructs that these instances might represent. That is, it answers the question: "Are we actually measuring what we think we are measuring?" With this in mind, we evaluated (1) whether the quantification method is correct, (2) whether the quantification was accurately done, and (3) whether the manual composition threatens the validity. Quantification method All variables of this study were quantified using a suite of metrics, which was previously defined and independently validated [14, 21]. Moreover, the concept of stability used in our study is well known in the literature [21], and its quantification method was reused from previous work. The inconsistencies were quantified automatically using the IBM RSA's model validation mechanisms and manually by the developers through several cycles of measurements and reviews. In practice, developers' effort is usually computed as "time spent." However, "time spent" is a reliable metric only when used in controlled experiments. Unfortunately, controlled experiments require that the software engineering tasks be simple; hence, this would harm the objective of our investigation (Sect. 3.1) and hypotheses (Sect. 3.2). Moreover, we have observed in the examples of recovering models that, in fact, the "time spent" is actually greater for unstable models than for stable models, independently of the type of inconsistencies. In addition, the number of syntactic and semantic inconsistencies was always higher in unstable models than in stable models. Correctness of the quantification Developers worked together to ensure that the study does not suffer from construct validity problems with respect to the correctness of the compositions and the application of the suite of metrics. We checked whether the collected data were in line with the objective and hypotheses of our study. It is important to emphasize that just one facet of composition effort was studied: the effort to evolve well-structured design models using composition heuristics. The quantification procedures were carefully planned and followed well-known quantification guidelines [3, 23, 24, 45]. Execution of the compositions Another threat that we have controlled is whether the use of manual composition might unintentionally avoid conflicts. We have observed that the manual composition helps to minimize problems that are directly related to model composition tools. There are some tools to compose design models, such as IBM Rational Software Architect. However, the use of these tools to compose the models was not included in our study for several reasons. First, the nature of the compositions would require that developers understood the resources/details of the tools. Second, even though the use of these tools might unintentionally reduce (or exacerbate) the generation of specific categories of inconsistencies in the output composed models, it was not our goal to evaluate particular tools. Therefore, we believe that using a model composition tool would impose more severe threats to the validity of our experimental results. Finally, and more importantly, we do not think the manual composition would be a noticeable problem in the study, for two reasons. First, even if the conflicts were unconsciously avoided, we deeply believe that the heuristics should be used as "rules of thumb" (guidelines) even if tool support is somehow available.
Second, we reviewed the produced models at least three times in order to ensure that conflicts were injected accordingly; in case any problems still made their way into the models used in our analysis, they should be minimal. External validity External validity refers to the validity of the obtained results in other, broader contexts [31]. That is, to what extent the results of this study can be generalized to other realities, for instance, with different UML design models, with different developers, and using different composition heuristics. Thus, we analyzed whether the causal relationships investigated in this study could hold over variations in people, treatments, and other settings. As this study was not replicated in a large variety of places, with different people, and at different times, we made use of the theory of proximal similarity (proposed by Campbell [7]) to identify the degree of generalization of the results. The goal is to define criteria that can be used to identify similar contexts where the results of this study can be applied. Two criteria are as follows: First, developers should be able to make use of composition heuristics (Sect. 2.3) to evolve UML design models such as UML class and component diagrams. Second, developers should also be able to apply the inconsistency metrics described in Table 2 and use a robust software modeling tool (e.g., IBM RSA [19]). Given that these criteria can be seen as ever-present characteristics in mainstream software development, we conclude that the results of our study can be generalized to other people, places, or times that are similar with respect to these criteria. Some characteristics of this study contributed strongly to its external validity, as follows: First, the reported exploratory study is realistic, in particular when compared to previously reported case studies and controlled experiments on composing design models [6, 14]. Second, experienced developers used: (1) state-of-practice composition heuristics to evolve three realistic design models of software product lines; (2) an industrial software modeling tool (i.e., IBM RSA) to create and validate the design models; and (3) metrics that were validated in previous works [14]. Next, the design models used were planned with design-for-change principles upfront. Finally, this work investigates only one facet of model composition: the use of model composition heuristics in adding new features to a set of design models for three realistic software product lines. Conclusions and future work Model composition plays a pivotal role in many software engineering activities, e.g., evolving SPL design models to add new features. Hence, software designers are naturally concerned with the quality of the composed models. This paper, therefore, represents a first exploratory study to empirically evaluate the impact of stability on model composition effort. More specifically, the focus was on investigating whether the presence of stable models reduces (or not) the inconsistency rate and composition effort. In our study, model composition was exclusively used to express the evolution of design models along eighteen releases of three SPL design models. Three state-of-practice composition heuristics were applied, and all were discussed in detail throughout this paper. The main finding was that model stability is a good indicator of composition inconsistencies and resolution effort.
More specifically, we found that stable models tend to minimize the inconsistency rate and alleviate the model composition effort. This observation was derived from statistical analysis of the collected empirical data, which showed a significant correlation between the independent variable (stability) and the dependent variables (inconsistency rate and effort). Moreover, our results also revealed that instability in design models can be caused by a set of factors, as follows: First, SPL design models are not able to support all upcoming changes, mainly unanticipated incremental changes. Next, the state-of-practice composition heuristics are unable to semantically match simple changes in the input model elements, mainly when changes take place in crosscutting requirements. Finally, design models implementing crosscutting requirements tend to cause a higher number of inconsistencies than the ones modularizing their requirements more effectively. The main consequence is that the evolution of the design models using composition heuristics can even become prohibitive given the effort required to produce the intended model. As future work, we will replicate the study in other contexts (e.g., evolution of statecharts) to check whether (or not) our findings can be extended to different evolution scenarios of design models supported by composition heuristics. We also consider exploring different variants of the stability metrics. We also wish to better understand whether design models with superior stability have some gain (or not): (i) when produced from other composition heuristics, and (ii) in the effort of localizing the inconsistencies. It would be useful if, for example, intelligent recommendation systems could help developers to indicate the best heuristic to be applied to a given evolution scenario or even recommend how the input model should be restructured to prevent inconsistencies. Finally, we hope that the issues outlined throughout the paper encourage other researchers to replicate our study in the future under different circumstances and that this work represents a first step in a more ambitious agenda on better supporting model composition tasks. Apel, S., Janda, F., Trujillo, S., Kästner, C.: Model superimposition in software product lines. In: International Conference on Model Transformation (ICMT), vol. 5563 (LNCS), pp. 4–19, Springer, Berlin (2009) Asklund, U.: Identifying inconsistencies during structural merge. In: Proceedings of the Nordic Workshop Programming Environment Research, pp. 86–96 (1994) Basili, V., Caldiera, G., Rombach, H.: The goal question metric paradigm. In: Encyclopedia of Software Engineering, vol. 2, pp. 528–532. Wiley, Hoboken (1994) Boucké, N., Weyns, D., Holvoet, T.: Experiences with Theme/UML for architectural design in multiagent systems. In: MASSAA'06, pp. 87–110 (2006) Brewer, M.: Research design and issues of validity. In: Handbook of Research Methods in Social and Personality Psychology, Cambridge University Press, Cambridge (2000) Briand, L., Labiche, Y., Di Penta, M., Yan-Bondoc, H.: An experimental investigation of formality in UML-based development. IEEE Trans. Softw. Eng. 31(10), 833–849 (2005) Campbell, D., Russo, M.: Social Experimentation. SAGE Classics, Beverly Hills (1998) Chitchyan, R., Greenwood, P., Sampaio, A., Rashid, A., Garcia, A., Silva, L.: Semantic vs. syntactic compositions in aspect-oriented requirements engineering: an empirical study. In: International Conference on Aspect-Oriented Software Development (AOSD'09), pp.
36–48 (2009) Clarke, S., Walker, R.: Composition patterns: an approach to designing reusable aspects. In: 23rd International Conference on Software Engineering (ICSE'01), pp. 5–14, Toronto (2001) Cook, T., Campbell, D., Day, A.: Quasi-Experimentation: Design & Analysis Issues for Field Settings. Houghton Mifflin, Boston (1979) Dingel, J., Diskin, Z., Zito, A.: Understanding and improving UML package merge. J. SoSym 7(4), 443–467 (2008) Effects of stability on model composition effort: an exploratory study. http://www.les.inf.puc-rio.br/opus/sosym2012 (2012) Farias, K., Garcia, A., Lucena, C.: Evaluating the effects of stability on model composition effort: an exploratory study. In : VIII Experimental Software Engineering Latin American Workshop collocated at XIV Iberoamerican Conference on Software Engineering, Rio de Janeiro (2011) Farias, K., Garcia, A., Whittle, J.: Assessing the impact of aspects on model composition effort. In: AOSD'10, pp. 73–84, Saint Malo (2010) Farias, K., Garcia, A., Lucena, C.: Evaluating the impact of aspects on inconsistency detection effort: a controlled experiment. In: 15th International Conference on Model-Driven Engineering Languages and Systems (MODELS'12), pp. 219–234, Innsbruck (2012) Farias, K., Garcia, A., Whittle, J., Chavez, C., Lucena, C.: Evaluating the effort of composing design models: a controlled experiment. In: 15th International Conference on Model-Driven Engineering Languages and Systems (MODELS'12), pp. 676–691, Innsbruck (2012) Figueiredo, et al.: Evolving software product lines with aspects: an empirical study on design stability. In: International Conference on Software Engineering (ICSE'08), pp. 261–270, Leipzig (2008) France, R., Rumpe, B.: Model-driven development of complex software: a research roadmap. In: Future of Software Engineering at ICSE'07, pp. 37–54, Minneapolis (2007) IBM Rational Software Architecture (IBM RSA). http://www.ibm.com/developerworks/rational/products/rsa/ (2011) Jayaraman, P., Whittle, J., Elkhodary, A., Gomaa, H.: Model composition in product lines and feature interaction detection using critical pair analysis. In: International Conference on Model Driven Engineering Languages and Systems (MODELS), pp. 151–165, Nashville (2007) Kelly, D.: A study of design characteristics in evolving software using stability as a criterion. IEEE Trans. Softw. Eng. 32(5), 315–329 (2006) Kemerer, C., Slaughter, S.: An empirical approach to studying software evolution. IEEE Trans. Softw. Eng. 25(4), 493–509 (1999) Kitchenham, B., Al-Kilidar, H., Babar, M., Berry, M., Cox, K., Keung, J., Kurniawati, F., Staples, M., Zhang, H., Zhu, L.: Evaluating guidelines for reporting empirical software engineering studies. Emp. Softw. Eng. 13(1), 97–112 (2008) Kitchenham, B.: Empirical Paradigm—the role of experiments, pp. 25–32. Empirical Software Engineering, Issues (2006) Kompose: a generic model composition tool. http://www.kermeta.org/kompose (2010) Larman, C.: Applying UML and patterns: an introduction to object-oriented analysis and design and iterative development, 3rd edn. Prentice Hall (2004). ISBN 0131489062 Martin, R.: Agile software development, principles, patterns, and practices, 1st edn. Prentice Hall (2002). ISBN 0135974445 Mens, T.: A state-of-the-art survey on software merging. IEEE Trans. Softw. Eng. 28(5), 449–562 (2002) Menzies, T., Chen, Z., Hihn, J., Lum, K.: Selecting best practices for effort estimation. IEEE Trans. Softw. Eng. (TSE) 32(11), 883–895 (2006) Meyer, B.: Object-oriented software construction, 1st edn. 
Prentice Hall, Englewood Cliffs (1988) Mitchell, M., Jolley, J.: Research design explained, 4th edn. Harcourt, New York (2001) Molesini, A., Garcia, G., Chavez, C., Batista, T.: Stability assessment of aspect-oriented software architectures: a quantitative study. J. Syst. Softw. 38(5), 711–722 (2009) Nagappan, N., Zeller, A., Zimmermann, T., Herzig, K., Murphy, B.: Change bursts as defect predictors. In: 21st International Symposium on Software Reliability Engineering, pp. 309–318, San Jose (2010) Nejati, S., Sabetzadeh, M., Chechik, M., Easterbrook, S., Zave, P.: Matching and merging of statecharts specifications. In: International Conference on Software Engineering (ICSE'07), pp. 54–64, Minneapolis, EUA (2007) Norris, N., Letkeman, K.: Governing and managing enterprise models: Part 1. Introduction and concepts. IBM Developer Works. http://www.ibm.com/developerworks/rational/library/09/0113_letkeman-norris (2011) Nugroho, A., Flaton, B., Chaudron, M.: Empirical analysis of the relation between level of detail in UML models and defect density. In: International Conference on Model Driven Engineering Languages and Systems (MoDELS'08), pp. 600–614, Toulouse (2008) OMG: Unified modeling language: infrastructure version 2.2. Object Management Group (2008) Perry, D., Siya, P., Votta, L.: Parallel changes in large scale software development: an observational case study. In: International Conference on Software Engineering (ICSE'98), pp. 251–260 (1998) Research method knowledge base: improving conclusion validity. http://www.socialresearchmethods.net/kb/concimp.php (2011) Ricca, F., Penta, M., Torchiano, M., Tonella, P., Ceccato, M.: How developers' experience and ability influence web application comprehension tasks supported by UML stereotypes: a series of four experiments. IEEE Trans. Softw. Eng. 96(1), 96–118 (2010) Shadish, W., Cook, T., Campbell, D.: Experimental and quasi-experimental designs for generalized causal inference. Houghton Mifflin, Boston (2002) Sjøberg, D., Anda, B., Arisholm, E., Dybå, T., Jørgensen, M., Karahasanovic, A., Koren, E., Vokác, M.: Conducting realistic experiments in software engineering. In: 1st International Symposium on Empirical Software Engineering, pp. 17–26 (2002) Thaker, S., Batory, D., Kitchin, D., Cook, W.: Safe composition of product lines. In: 6th International Conference on Generative Programming and Component Engineering (GPCE'07), pp. 95–104, Salzburg (2007) Whittle, J., Jayaraman, P.: Synthesizing hierarchical state machines from expressive scenario descriptions. ACM Trans. Softw. Eng. Methodol. (TOSEM'10) 19(3), 1–45 (2010) Wohlin, C., Runeson, P., Höst, M., Ohlsson, M., Regnell, B., Wesslén, A.: Experimentation in software engineering: an introduction. Kluwer Academic Publishers, Norwell (2000) Wust, J.: The software design metrics tool for the UML. http://www.sdmetrics.com OPUS Research Group, LES, Informatics Department, Pontifical Catholic University of Rio de Janeiro, Rio de Janeiro, RJ, Brazil Kleinner Farias, Alessandro Garcia & Carlos Lucena Correspondence to Kleinner Farias. Communicated by Prof. Lionel Briand. Open Access This article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited. Farias, K., Garcia, A. & Lucena, C. Effects of stability on model composition effort: an exploratory study. Softw Syst Model 13, 1473–1494 (2014).
https://doi.org/10.1007/s10270-012-0308-2 Revised: 17 August 2012. Issue Date: October 2014. Keywords: Model composition, Software development effort, Design stability
CommonCrawl
What Makes a Student Public? An Alternative Outcome for the Douglas County Voucher Program Although the ruling came in about two weeks ago, lately the Douglas County voucher program hasn't been far from my mind. I credit George Will's column in Friday's Washington Post for making me rethink the case and its outcome, and finally motivating me to organize some loose thoughts that have been floating around in my head. If you aren't familiar with the case, it basically boils down to this: The Douglas County School Board believed so strongly in school choice that it voted to give 500 "choice scholarships" (vouchers) to its own students to attend area private schools. The scholarships are worth $4,575, or 75% of the district's per pupil revenue. (The district is keeping the remaining 25%.) Several groups representing the interests of taxpayers and those worried about public funding of private religious schools filed lawsuits, and on August 12th Judge Michael Martinez ruled in their favor, saying the plan "violates both financial and religious provisions" of the Colorado Constitution. Some of the consequences of this ruling are unclear, as many students have already accepted part of their scholarships and are enrolled at private schools. I was a bit surprised by Judge Martinez's ruling. Not because it came down on the side of the plaintiffs, but because it used the public funding of religious schools as a primary reason. In reaching this decision, Martinez seems at odds with Zelman v. Simmons-Harris, a 2002 case where the Supreme Court upheld an Ohio voucher program primarily because the vouchers went to parents and not directly to religious schools. Although I don't know many of the details surrounding either Zelman v. Simmons-Harris or the DougCo case, George Will's assertion that the two are "legally indistinguishable" seems to have some merit. But that got me thinking -- what if Martinez had found different reasoning for his decision, one not relying on religion at all? I think we tend to focus the debate on public vs. private schools. Instead, let's focus on students. Previous Supreme Court decisions have upheld both students' rights to receive a free appropriate public education and attend private schools, as well as receive vouchers. Students who attend public schools are public school students. Students who attend private schools (without vouchers) are private school students. But what are students who use vouchers to attend private schools? Public or private? Can we have something in-between? If so, what rights do those students have? In Douglas County, students who receive the voucher are still required to take the CSAP, Colorado's annual standardized test. Participating private schools are required to provide information to the district about their attendance and the qualifications of their teachers, as well as be willing to waive requirements that participating students attend religious services. It's clear that these provisions are included to meet various requirements of federal education law, namely No Child Left Behind's testing and "highly qualified educator" requirements. These are requirements of public schools and public school students. And let's not forget that Douglas County is keeping 25% of each student's share of state funding. Could it do this if it didn't claim the students were not, at least in some way, students of the Douglas County School District? 
Instead of essentially upholding the Establishment Clause, what if Judge Martinez had instead declared the DougCo "scholarship" students as public students and decided that voucher students had the right to simultaneously receive both a free and private education? In essence, what if he had told Douglas County that the vouchers would be legal so long as they covered the entire cost of each student's education at their chosen private school? If the court decided that (a) by benefiting from public monies, the students were public school students (and there is no murky in-between), and (b) public school students have the right to a free education, then (ironically!) Douglas County would be facing a difficult choice about whether or not a voucher program was in their best interest. As proponents of school choice it would be awkward for the district to back away because they couldn't afford the vouchers, although the high cost is surely what keeps many families away from private schools, voucher or no voucher. At this point I realize that my knowledge of the law is rather limited and this issue was probably dealt with a long time ago. Still, I find it an interesting perspective and it makes me want to hunt down Kevin Welner (who knows a thing or two about vouchers) in the hallways next week to ask him about it. If any of you have any knowledge or thoughts you'd like to share, I'd love to hear it in the comments below. Posted by Raymond Johnson at 8/28/2011 11:35:00 PM Labels: Colorado, money, policy, reform Modeling Dimensional Analysis I generally ask myself two questions when I examine the design of a mathematical task: What is the context? How can we model the mathematics? Mathematical concepts with tasks for which these two questions can be answered easily tend to be easier to learn, while teaching and learning generally becomes more difficult when one or both of those questions can't be answered. For dimensional analysis (sometimes called the unit factor method or the factor-label method), the first question is easy to answer. It doesn't take much of an imagination to design a measurement conversion task that is set in a real-world context. A model, however -- whether visual, mental, or a concrete manipulative -- is generally absent. Typical dimensional analysis problems look like this: Q: What is 60 miles per hour in meters per second? A: \( \frac{60 \mbox{mi}}{1 \mbox{hr}} \times \frac{5280 \mbox{ft}}{1 \mbox{mi}} \times \frac{12 \mbox{in}}{1 \mbox{ft}} \times \frac{2.54 \mbox{cm}}{1 \mbox{in}} \times \frac{1 \mbox{m}}{100 \mbox{cm}} \times \frac{1 \mbox{hr}}{60 \mbox{min}} \times \frac{1 \mbox{min}}{60 \mbox{sec}} = \frac{9656064 \mbox{m}}{360000 \mbox{sec}} = \frac{26.8224 \mbox{m}}{\mbox{sec}} \) For those who successfully learn dimensional analysis this way, there's a certain beauty to how the units drive the problem and how the conversion factors are nothing more than cleverly written values of one, the multiplicative identity. Unfortunately, many students struggle with this method. Some are intimidated by the fractions, some can't get the labels in the right place, and some just can't get the problem started. What we need is a model. Let's start with the most basic of unit conversion models, a ruler with both inches and centimeters: (Yes, I'm still using the same ruler I got as a 7th grader in a regional MathCounts competition.) With only simple visual inspection, students should be able to use a ruler to estimate conversions between inches and centimeters. 
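By the way, if you want to double-check the arithmetic in the factor-label example above (or have students check their own), a few lines of code will chain the same conversion factors. This is just a quick sketch in Python; any calculator or language would do.

```python
# A quick check of the factor-label chain: 60 miles per hour in meters per second.
conversion_factors = [
    (5280, 1),   # 5280 feet per 1 mile
    (12, 1),     # 12 inches per 1 foot
    (2.54, 1),   # 2.54 centimeters per 1 inch
    (1, 100),    # 1 meter per 100 centimeters
    (1, 60),     # 1 hour per 60 minutes (turns "per hour" into "per minute")
    (1, 60),     # 1 minute per 60 seconds (turns "per minute" into "per second")
]

value = 60.0     # 60 miles per hour
for numerator, denominator in conversion_factors:
    value *= numerator / denominator

print(value)     # 26.8224 meters per second
```

But for building understanding, hands-on models beat arithmetic checks, so back to the ruler.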
This is an informal model, one students can literally get their hands on. We can assist the learning by making the models progressively more formal. Here we model a trivial conversion from one inch to centimeters with a double number line: (Yes, you still have to know your conversion factors!) Such a simple example looks almost too easy to be useful, but we can add number lines for more complex conversions. We can even abstract the model further and go beyond conversions of distance. Suppose we wanted to convert 3 gallons to liters. I could model that conversion with number lines this way: (I could have used any number of transition units, but I knew 1 quart was roughly 946 milliliters.) Filling in the question marks from top to bottom, I'll see that 3 gallons, 12 quarts, 11,352 milliliters, and 11.352 liters are all the same volume. It's easy to see they're the same because on each number line those values are the same distance from zero. Because we're only converting one kind of unit (volume), we only need one dimension. In our initial example we were converting 60 miles per hour to meters per second. That's two kinds of units, distance and time, so our model needs two dimensions. Furthermore, it can help to think of 60 miles per hour as a line, not just a point. After all, we often travel at a speed of 60 miles per hour without actually traveling a distance of 60 miles in exactly one hour. Can you guess where our double (or however many are necessary) number lines will go in this model? The following video will demonstrate what I would call the graphing model or two dimensional model for performing conversions. With the work shown in the video, we haven't just done one conversion. In fact, we're prepared to write 60 miles per hour 15 different ways, not that we'll ever be asked to do that. If we needed 60 miles per hour in centimeters per minute or feet per second, all the work is done. Just choose the appropriate quantity from the vertical and divide by the appropriate quantity from the horizontal. Of course, if we're in a hurry, we won't find all those intermediate figures and instead just proceed from miles to meters and hours to seconds as quickly as possible. Will that be quicker than the traditional method shown above? Probably not, but the purpose of using a model is understanding, not speed. Once the understanding is established, students can move on to a formal method or use technology when appropriate. Posted by Raymond Johnson at 8/15/2011 12:59:00 AM Labels: algebra, arithmetic, design theory, fractions, math lesson RYSK: Butler's Effects on Intrinsic Motivation and Performance (1986) and Task-Involving and Ego-Involving Properties of Evaluation (1987) This is the third in a series of posts describing "Research You Should Know" (RYSK). As teachers, we care not only about what students learn, but why students learn. In a perfect world, we would all agree on what's important to learn and do and be self-motivated to learn and do those things. But our world isn't perfect, and students are motivated to learn and do things for many reasons. Understanding those reasons is important if we want students to be properly motivated and to perform well with the right attitude. Ruth Butler earned her Ph.D. 
in developmental psychology from the Hebrew University of Jerusalem in 1982 and was a relatively new professor there when she teamed with veteran educational psychologist Mordecai Nisan, whose career includes time spent at the University of Chicago, Harvard University, The Max Planck Institute for Human Development, and Oxford University. Together, they sought to build upon studies that compared extrinsic vs. intrinsic motivation and positive vs. negative feedback, looking specifically at how different feedback conditions -- ones that can be manipulated by teachers -- affect students' intrinsic motivation. For their 1986 paper, Effects of No Feedback, Task-Related Comments, and Grades on Intrinsic Motivation and Performance, Butler and Nisan expected that students who received feedback in the form of simple positive and negative comments (without elements of praise or grading/ranking) would remain motivated, while students who received grades or no feedback would generally become less motivated. To test this hypothesis, Butler and Nisan randomly assigned 261 sixth grade students to one of three groups. They gave the students two types of tasks: Task A was a quantitative "speed" task where students created words from the letters of a longer word, while Task B was a qualitative "power" task that encouraged problem solving and divergent thinking. Butler and Nisan conducted three sessions with the groups: Session 1: Students performed the tasks. Session 2: Two days after Session 1 the tasks were returned. Students in the first group got comments in the form of simple phrases such as, "Your answers were correct, but you did not write many answers," or "You wrote many answers, but not all were correct." Students in the second group got numerical grades that were computed to reflect a normal distribution of scores from 30 to 100. Students in the third group got their work returned with no feedback. After students reviewed their previous work, they were given new tasks and told to expect the same type of feedback when they returned for Session 3. Session 3: Two hours after Session 2 students again reviewed their work and feedback (except for the third group, who got no feedback) from Session 2 and then got a third set of tasks. Students were asked to complete the tasks and were told that they would not get them back. The session ended with a survey of students attitudes towards the tasks. When Butler and Nisan compared the students' average performance on the tasks in Session 1, all three groups scored approximately the same. That changed in Session 3. On Task A, students receiving comments and grades scored about the same in Session 3 (with an edge to the comments group for the creation of long words), but students receiving no feedback did far worse. For Task B, students receiving comments did significantly better than students who received grades or no feedback, who performed about the same. The only students doing well in Session 3 -- in fact, the only students consistently scoring higher, on average, in Session 3 than in Session 1 -- were the students who received comments. The survey also showed attitudinal benefits for the comments group, who indicated they found the tasks more interesting and were most willing to do more tasks. Furthermore, 70.5% of students who received comments attributed their effort to their interest in the tasks, compared to only 34.4% of those graded and 43.4% of those receiving no feedback. 
Only 9% of students receiving comments said their effort was due to a desire to avoid poor achievement, compared to 26.7% of students receiving grades and 9.6% of the no feedback group. Lastly, 86.3% of students receiving comments wanted to keep receiving comments, while only 21% of the graded group wanted to keep receiving grades. The vast majority of graded students, 78.9%, wanted comments. The no feedback group was roughly split 50/50 on wanting comments or grades. None wanted to keep receiving no feedback. Butler modified this study for her 1987 paper Task-Involving and Ego-Involving Properties of Evaluation: Effects of Different Feedback Conditions on Motivational Perceptions, Interest, and Performance. In it, Butler adapted a theory of task motivation used by Nicholls (1979, 1983, as cited in Butler, 1987): Task involvement: Activities are inherently satisfying and individuals are concerned with developing mastery in relation to the task or prior performance. Ego involvement: Attention is focused on ability compared to the performance of others. Extrinsic motivation: Activities are undertaken as a means to some other end, and the focus is that goal, not mastery or ability. Butler believed comments would promote task involvement, while grades would promote ego involvement. While both of these can be seen as intrinsic motivation, a third type of feedback needed to be considered: praise. Previous research on praise had gotten mixed results, possibly because researchers hadn't considered if the praise was task- or ego-involved. Butler's study would include ego-involving praise using comments designed to focus a student's attention on their self-worth and not on the task. Therefore, Butler hypothesized that praise and grades would generate similar results, results less desirable than task-involved comments. The study was similar to the 1986 study, with 200 fifth and sixth graders split into four groups (comments, grades, praise, and no feedback) with subgroups in each for high- and low-achieving students. Tasks were administered in three sessions, with no feedback given after the third session. The tasks this time were divergent thinking tasks, used as Task B in the 1986 study. Praise would come in the form of a single phrase: "Very good." An attitude survey was given after Session 3. As Butler expected, comments promoted task-involved attitudes while grades and praise promoted ego-involved attitudes. Students' interest in the tasks after Session 3 was higher for the comments group than for the grades, praise, and no feedback groups combined. Students who received praise showed more interest than those who received grades. As for performance, the comments group easily performed the best in Session 3, with both high and low groups improving their scores over Session 1, while all other groups performed about the same or worse compared to their Session 1 performance. So what does this mean? As a teacher who struggled with assessment and grading, it was Butler's work that most inspired me to start this RYSK blog series. Despite these results being 25 years old, there's not much evidence that Butler's findings have had a serious impact on the practice of most teachers. I suspect that few teachers know about Butler's work -- I certainly didn't. I was wrapped up in the scores and grades game, not fully aware of the impact those scores were having on my students. I knew it wasn't working, but I didn't have this kind of theoretical knowledge to support a significant change in my practice. 
I'm not suggesting that we should suddenly demand a grade-free world. That's just not a realistic thing to expect given where we are now. What I would like to suggest is that teachers become more aware of how the feedback they give affects student motivation, and be careful to focus on task-involved comments whenever possible. Because students aren't likely to get this kind of feedback from standardized tests or computer-based learning systems (i.e., Khan Academy), it takes a teacher's touch to carefully craft the kind of feedback a student needs to sustain their motivation. Butler, R., & Nisan, M. (1986). Effects of no feedback, task-related comments, and grades on intrinsic motivation and performance. Journal of Educational Psychology, 78(3), 210-216. doi:10.1037/0022-0663.78.3.210 Butler, R. (1987). Task-involving and ego-involving properties of evaluation: Effects of different feedback conditions on motivational perceptions, interest, and performance. Journal of Educational Psychology, 79(4), 474-482. Labels: assessment, grading, research, RYSK
CommonCrawl
A Solver for Problems with Second-Order Stochastic Dominance Constraints ======================================================================== Victor Zverovich, Gautam Mitra, Csaba Fábián AMPL Optimization ICSP2013, Bergamo, Italy. July 8-12, 2013 Second-Order Stochastic Dominance --------------------------------- Let \(R\) and \(R'\) be random variables defined on the probability space \((\Omega, \mathcal{F}, P)\). \(R\) dominates \(R'\) with respect to SSD if and only if \(\textrm{E}[U(R)] \ge \textrm{E}[U(R')]\) for any nondecreasing and concave utility function \(U\). This sets out the use of SSD relation to determine preferences of a risk-averse decision maker. Denoted as \(R \succeq_{_{SSD}} R'\). Strict relation: \(R \succ_{_{SSD}} R' \Leftrightarrow R \succeq_{_{SSD}} R' \mbox{ and } R' \not{\succeq_{_{SSD}}} R.\) Alternative Definitions of SSD ------------------------------ Definition using the performance function (Fishburn and Vickson, 1978): \[ F^{(2)}_R(t) \leq F^{(2)}_{R'}(t) \mbox{ for all $t \in \mathbb{R}$,} \] where the performance function \(F^{(2)}_R(t) = \int_{-\infty}^{t} F_R(u) \mathrm{d}u\) represents the area under the graph of the cumulative distribution function \(F_R(t) = P(R \leq t)\) of a real-valued random variable \(R\). Definition using the \(\textrm{Tail}\) function (Ogryczak and Ruszczyński, 2002): \[ \textrm{Tail}_{\alpha}(R) \geq \textrm{Tail}_{\alpha}(R') \mbox{ for all $0 \lt \alpha \leq 1$,} \] where \(\textrm{Tail}_{\alpha}(R)\) denotes the unconditional expectation of the smallest \(\alpha \cdot 100\%\) of the outcomes of \(R\). Illustration of Second-Order Stochastic Dominance ------------------------------------------------- Performance Functions Portfolio Problem/Constraints ----------------------------- There are \(n\) assets and at the beginning of a time period an investor has to decide what proportion \(x_i\) of the initial wealth to invest in asset \(i\). So a portfolio is represented by a vector \(\textbf{x} = (x_1, x_2, \dots, x_n) \in X \subset \mathbb{R}^n\), where \(X\) is a bounded convex polytope representing the set of feasible portfolios; in particular it can be defined as \[ X = \{\textbf{x} \in \mathbb{R}_+^n: \sum_{i=1}^n{x_i} = 1\}, \] if short positions are not allowed and there are no other modelling restrictions. Let \(\textbf{R}\) denote the \(n\)-dimensional random vector of asset returns at the end of the time period. Then the real-valued random variable \(R_{\textbf{x}} = \textbf{R}^T \textbf{x}\) is the random return of portfolio \(\textbf{x}\). Model of Dentcheva and Ruszczynski ---------------------------------- Dentcheva and Ruszczyński (2006) proposed the following model with an SSD constraint: \[ \begin{array}{ll} \mathrm{maximize} & f(x) \\ \mathrm{s.t.} & x \in X, \\ & R_{\textbf{x}} \succeq_{_{SSD}} \widehat{R}, \\ \end{array} \] where \(f\) is a concave continuous function, \(\widehat{R}\) is a reference random return such as the return of a stock market index. Special case: \(f(x) = \textrm{E}[{R_{\textbf{x}}}]\) Model of Roman, Darby-Dowman, and Mitra --------------------------------------- Roman et al. (2006) formulated a multiobjective LP model, the Pareto efficient solutions of which are SSD efficient portfolios. Assuming finite discrete distributions of returns with equiprobable outcomes, Fábián et al. 
(2009) converted it into a more efficient computational model with single objective and a finite system of inequalities representing an SSD constraint: \[ \begin{array}{ll} \mathrm{maximize} & \vartheta \\ \mathrm{s.t.} & \vartheta \in \mathbb{R}, \textbf{x} \in X \\ & \mathrm{Tail}_{\frac{i}{S}}(R_{\textbf{x}}) \geq \mathrm{Tail}_{\frac{i}{S}}(\widehat{R}) + \vartheta, \quad i = 1, 2, \ldots, S. \\ \end{array} \] Here one seeks a portfolio with a distribution which dominates the reference one or comes close to it uniformly (the smallest tail difference \(\vartheta\) is maximized). Model with SSD Constraints -------------------------- Fábián et al. (2010) proposed an enhanced version of the model of Roman et al. which is expressed in the following SSD constrained form: \[ \begin{array}{ll} \textrm{maximize} & \vartheta \\ \textrm{s.t.} & \vartheta \in \mathbb{R}, \textbf{x} \in X, \\ & R_{\textbf{x}} \succeq_{_{SSD}} \widehat{R} + \vartheta. \\ \end{array} \] In this model one computes a portfolio that dominates a sum of the reference return and a riskless return \(\vartheta\). Formulation Using Tails Let \(S\) denote the number of equiprobable outcomes, \(\textbf{r}^{(1)}, \textbf{r}^{(2)}, \ldots, \textbf{r}^{(S)}\) - the realisations of \(\textbf{R}\), \(\widehat{r}^{(1)}, \widehat{r}^{(2)}, \ldots, \widehat{r}^{(S)}\) - the realisations of \(\widehat{R}\). The enhanced model can be formulated as follows: \[ \begin{array}{ll} \mathrm{maximize} & \vartheta \\ \mathrm{s.t.} & \vartheta \in \mathbb{R}, \textbf{x} \in X, \\ & \mathrm{Tail}_{\frac{i}{S}}(R_{\textbf{x}}) \geq \mathrm{Tail}_{\frac{i}{S}}(\widehat{R}) + \frac{i}{S} \vartheta, \\ & \quad i = 1, 2, \ldots, S. \\ \end{array} \] Cutting-Plane Formulation Using Tails Fábián et al. (2009) obtained the cutting-plane representation of the \(\textrm{Tail}\) function: \[ \begin{array}{ll} \textrm{Tail}_{\frac{i}{S}}(R_{\textbf{x}}) = & \displaystyle \min \frac{1}{S} \sum_{j \in J_i} \textbf{r}^{(j) T} \textbf{x} \\ & \mbox{such that } J_i \subset \{1, 2, \ldots, S\}, \quad |J_i| = i. \\ \end{array} \] Cutting-plane representation of the enhanced model: \[ \begin{array}{lll} \textrm{maximize} & \vartheta & \\ \textrm{s.t.} & \vartheta \in \mathbb{R}, \textbf{x} \in X, & \\ & \displaystyle \frac{1}{S} \sum_{j \in J_i} \textbf{r}^{(j) T} \textbf{x} \geq \widehat{\tau_i} + \frac{i}{S} \vartheta, & \forall J_i \subset \{1, 2, \ldots, S\}, \\ & & |J_i| = i, \; i = 1, 2, \ldots, S, \\ \end{array} \] where \(\widehat{\tau_i} = \textrm{Tail}_{\frac{i}{S}}(\widehat{R})\). Cutting-Plane Method By changing the scope of optimisation we get a problem of minimising a piecewise-linear convex function: \[ \begin{array}{ll} \mathrm{minimize} & \varphi(\textbf{x}) \\ \mathrm{s.t.} & \textbf{x} \in X, \\ \end{array} \] where \[ \begin{array}{ll} \varphi(\textbf{x}) =& \displaystyle \max \left( -\frac{1}{i} \sum_{j \in J_i} \textbf{r}^{(j) T} \textbf{x} + \frac{S}{i} \widehat{\tau_i} \right), \\ & \mbox{such that } J_i \subset \{1, 2, \ldots, S\}, |J_i| = i, \\ & i = 1, 2, \ldots, S. \\ \end{array} \] It can be regularised by the level method. Cut Generation The cut \(l(x)\) at the iteration \(k\) is constructed as follows: Let \(\textbf{x}^* \in X\) denote the solution of the approximation function at iteration \(k\) and \(\textbf{r}^{(j_1^*)} \leq \textbf{r}^{(j_2^*)} \leq \ldots \leq \textbf{r}^{(j_S^*)}\) denote the ordered realisations of \(R_{\textbf{x}^*}\). 
Select \( \displaystyle i^* \in \textrm{argmax}_{1 \leq i \leq S} \left( -\frac{1}{i} \sum_{j \in J_i^*} \textbf{r}^{(j)T} \textbf{x}^* + \frac{S}{i} \widehat{\tau}_i \right).\) Then \( \displaystyle l(\textbf{x}) = -\frac{1}{i^*} \sum_{j \in J_{i^*}^*} \textbf{r}^{(j)T} \textbf{x} + \frac{S}{i^*} \widehat{\tau}_{i^*}.\) Sets \(J_i^* = (j^*_1, \ldots, j^*_i)\) correspond to ordered realisations. Why a New Solver? ----------------- * Old implementation: * Cuts are a part of the model * Difficult to reuse * New implementation: * Cuts are added automatically by the solver * Easy to use * "Clean" model * Faster AMPL Solver Library ------------------- AMPL Solver Library (ASL) is an open-source library for connecting solvers to AMPL. * C interface: - described in [Hooking Your Solver to AMPL](http://www.ampl.com/hooking.html) - used by most solvers * [C++ interface](https://github.com/vitaut/ampl/tree/master/solvers/util): - makes connecting new solvers super easy - type-safe: no casts needed when working with expression trees - efficient: no overhead compared to the C interface - used by several CP solvers and the SSD solver SSD Solver Architecture ----------------------- * ASL does all the heavy lifting such as interaction with AMPL and an external solver which makes SSD solver implemenation very simple (~300 LOC!) * Function library provides the ssd_uniform function that is translated into an SSD relation by the solver. * External solver is used for subproblems. * Solver library is optional but facilitates testing. The solver extracts linear expressions from the expression trees representing arguments of ssd_uniform. Portfolio Model in AMPL with Cuts --------------------------------- param nASSET integer >= 0; # number of assets set ASSETS := 1..nASSET; # set of assets param nSCEN > 0; param asset_returns{1..nASSET, 1..nSCEN}; param index_returns{1..nSCEN}; param nCUT integer >= 0 default 0; # number of cuts set CUTS := 1..nCUT; # set of cuts param cut_const {CUTS}; # constant in cut param cut {CUTS,ASSETS}; # multipliers in cut param scaling_factor {CUTS} default 1; # portfolio: investments into different assets var Invest {ASSETS} >= 0 default 1 / nASSET; var Dom; # dominance measure maximize Uniform_Dominance: Dom; subject to Dom_constraint {c in CUTS}: scaling_factor[c] * Dom + cut_const[c] <= sum {a in ASSETS} cut[c,a] * Invest[a]; subject to Budget: sum {a in ASSETS} Invest[a] = 1; Portfolio Model in AMPL using SSD Solver ---------------------------------------- include ssd.ampl; param NumScenarios; param NumAssets; set Scenarios = 1..NumScenarios; set Assets = 1..NumAssets; # Return of asset a in senario s. param Returns{a in Assets, s in Scenarios}; # Reference return in scenario s. param Reference{s in Scenarios}; # Fraction of the budget to invest in asset a. var invest{a in Assets} >= 0 <= 1; subject to ssd_constraint{s in Scenarios}: ssd_uniform(sum{a in Assets} Returns[a, s] * invest[a], Reference[s]); subject to budget: sum{a in Assets} invest[a] = 1; Reference Returns ----------------- Performance ----------- * 100 scenario problem with FTSE100 used as a reference. * The new implementation is 2-3 times faster. * 30000 scenario problem with FTSE100 used as a reference. * The new implementation is 2-6.5 times faster. Summary ------- * AMPL solver interface and ASL make implementation of high-level solvers/algorithms that use other solvers easy. The same technique can be applied to - other-cutting plane methods - decomposition methods, e.g. 
Benders decomposition * New solver provides an efficient implementation of a cutting-plane algorithm for solving problems with SSD constraints. * This is in line with our approach that different types of optimisation models are matched with corresponding solvers. References ---------- Dentcheva, D. and Ruszczyński, A. (2006). Portfolio optimization with stochastic dominance constraints. Journal of Banking & Finance, 30, 433–451. Fábián, C. I., Mitra, G., and Roman, D. (2009). Processing second-order stochastic dominance models using cutting-plane representations. Mathematical Programming, Series A. DOI: 10.1007/s10107-009-0326-1. Fábián, C. I., Mitra, G., Roman, D., and Zverovich, V. (2010). An enhanced model for portfolio choice with SSD criteria: a constructive approach. Quantitative Finance. First published on: 11 May 2010. Fishburn, P. C. and Vickson, R. G. (1978). Theoretical foundations of stochastic dominance. In Stochastic Dominance: An Approach to Decision-Making Under Risk, (pp. 37–113). D.C. Heath and Company, Lexington, Massachusetts. Roman, D., Darby-Dowman, K., and Mitra, G. (2006). Portfolio construction based on stochastic dominance and target return distributions. Mathematical Programming, Series B, 108, 541–569. Thank you! ----------
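Appendix: Computing Tail Values (Illustrative Sketch)
-----------------------------------------------------

* A minimal numerical sketch (not part of the original solver or slides) of the \(\textrm{Tail}_{\alpha}\) quantities used in the cutting-plane formulation, for a discrete sample with \(S\) equiprobable outcomes; the return vectors below are made-up examples.

```python
# Tail_{i/S}(R): unconditional expectation of the worst i outcomes, divided by S.
import numpy as np

def tails(returns):
    """Vector [Tail_{1/S}(R), ..., Tail_{S/S}(R)] for S equiprobable outcomes."""
    r = np.sort(np.asarray(returns, dtype=float))   # ascending: worst outcomes first
    return np.cumsum(r) / r.size

# Largest uniform shift theta with Tail_{i/S}(R_x) >= Tail_{i/S}(R_hat) + (i/S) * theta,
# i.e., the objective value of the enhanced model for one fixed portfolio.
portfolio = np.array([0.010, -0.030, 0.040, 0.020, -0.010])   # made-up outcomes of R_x
reference = np.array([0.000, -0.040, 0.030, 0.010, -0.020])   # made-up outcomes of R_hat
S = portfolio.size
i = np.arange(1, S + 1)
theta = np.min((S / i) * (tails(portfolio) - tails(reference)))
print(theta)   # theta >= 0 means the portfolio dominates the reference plus that riskless shift
```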
CommonCrawl
Virtual methylome dissection facilitated by single-cell analyses Liduo Yin1,2,3 na1, Yanting Luo4 na1, Xiguang Xu5,6, Shiyu Wen4, Xiaowei Wu7, Xuemei Lu ORCID: orcid.org/0000-0001-6044-60021,3,9 & Hehuang Xie ORCID: orcid.org/0000-0001-5739-16535,6,8 Numerous cell types can be identified within plant tissues and animal organs, and the epigenetic modifications underlying such enormous cellular heterogeneity are just beginning to be understood. It remains a challenge to infer cellular composition using DNA methylomes generated for mixed cell populations. Here, we propose a semi-reference-free procedure to perform virtual methylome dissection using the nonnegative matrix factorization (NMF) algorithm. In the pipeline that we implemented to predict cell-subtype percentages, putative cell-type-specific methylated (pCSM) loci were first determined according to their DNA methylation patterns in bulk methylomes and clustered into groups based on their correlations in methylation profiles. A representative set of pCSM loci was then chosen to decompose target methylomes into multiple latent DNA methylation components (LMCs). To test the performance of this pipeline, we made use of single-cell brain methylomes to create synthetic methylomes of known cell composition. Compared with highly variable CpG sites, pCSM loci achieved a higher prediction accuracy in the virtual methylome dissection of synthetic methylomes. In addition, pCSM loci were shown to be good predictors of the cell type of the sorted brain cells. The software package developed in this study is available in the GitHub repository (https://github.com/Gavin-Yinld). We anticipate that the pipeline implemented in this study will be an innovative and valuable tool for the decoding of cellular heterogeneity. DNA methylation plays a key role in tissue development and cell specification. As the gold standard for methylation detection, bisulfite sequencing has been widely used to generate genome-wide methylation data and computational efforts have been made to meet the statistical challenges in mapping bisulfite-converted reads and determining differentially methylated sites [1,2,3,4]. Methylation data analysis has been extended from simple comparisons of methylation levels to more sophisticated interpretations of methylation patterns embedded in sequencing reads, which are referred to as the combinatory methylation statuses of multiple neighboring CpG sites [5]. Through multiple bisulfite sequencing reads mapped to a given genome locus, methylation entropy can be calculated as a measurement of the randomness, specifically the variations, of DNA methylation patterns in a cell population [6]. It was soon realized that such variations in methylation patterns could have resulted from methylation differences: (1) among different types of cells in a mixed cell population, (2) between the maternal and paternal alleles within a cell, or (3) between the CpG sites on the top and bottom DNA strands within a DNA molecule [7,8,9]. The genome-wide hairpin bisulfite sequencing technique was developed to determine strand-specific DNA methylation, i.e., methylation patterns resulting from (3). The methylation difference between two DNA strands is high in embryonic stem cell (ESC) but low in differentiated cells [8]. For instance, in human brain, the chances of four neighboring CpG sites having an asymmetric DNA methylation pattern in a double-stranded DNA molecule are less than 0.02% [10]. 
Allelic DNA methylation, i.e., methylation patterns resulting from (2), was found to be limited in a small set of CpG sites. In the mouse genome, approximately two thousand CpG sites were found to be associated with allele-specific DNA methylation [11]. Thus, cellular heterogeneity could be a primary source of the variations in DNA methylation patterns. This often leads to bipolar methylation patterns, meaning that genome loci are covered both with completely methylated reads and completely unmethylated reads simultaneously in bulk methylomes. Such bipolar methylated loci can be detected using nonparametric Bayesian clustering followed by hypothesis testing and were found to be highly consistent with the differentially methylated regions identified among purified cell subsets [12]. For this reason, these loci are called the putative cell-type-specific methylated (pCSM) loci. They were further demonstrated to exhibit methylation variation across single-cell methylomes [13]. An appropriate interpretation of methylome data derived from bulk tissues requires consideration of methylation variations contributed by diverse cellular compositions. With the existing reference methylomes for different types of cells, it is possible to estimate cell ratios in a heterogeneous population with known information about the cell types. For instance, cell mixture distributions within peripheral blood can be assessed using constrained projection, which adopts least-squares multivariate regression to estimate regression coefficients as the ratios for cell types [14]. More recent studies suggest that non-constrained reference-based methods are robust across a range of different tissue types [15] and Bayesian semi-supervised methods may construct cell-type components in a way that each component corresponds to a single-cell type [16]. For reference-based algorithms, prior knowledge of cell composition and cell-specific methylation markers is critical [17]. To overcome these issues, principal component analysis (PCA) was adopted by ReFACTor for the correction of cell-type heterogeneity [18], and nonnegative matrix factorization (NMF) was adopted by MeDeCom to recover cell-type-specific latent methylation components [19]. However, the performance of such reference-free cell-type deconvolution tools relies heavily on model assumptions [20]. Recently, the development of single-cell DNA methylation sequencing techniques generated a growing number of methylomes at unprecedented resolution, providing new opportunities to explore cellular diversity within cell populations [21,22,23,24,25,26,27]; yet, no attempt has been taken to make use of single-cell methylomes for cell-type deconvolution analysis. In this study, we propose a semi-reference-free, NMF-based pipeline to dissect cell-type compositions for methylomes generated from bulk tissues. This pipeline takes advantage of pCSM segments that exhibit bipolar methylation patterns in methylomes generated from bulk tissues or among single-cell methylomes. To overcome the shallow depth of whole-genome bisulfite sequencing, weighted gene co-expression network analysis (WGCNA) was modified to cluster pCSM loci. PCA was performed to select eigen-pCSM loci, which are representative loci for clusters of pCSM loci. To evaluate the performance of eigen-pCSM loci selected in cell-type deconvolution, over 3000 brain single-cell methylomes were mixed in random proportions in simulation studies to create synthetic methylomes. 
The pipeline implemented in this study provides an accurate estimation of cell-type composition on both synthetic methylomes and bulk methylomes from five neuronal cell populations. Virtual methylome dissection based on eigen-pCSM loci To perform virtual methylome dissection, we introduced a three-step pipeline (Fig. 1). In the first step, pCSM loci were determined for target methylomes, which were generated from various sources including tissues, sorted cells, or single cells. The key issue in this step was to efficiently distinguish cell-type-specific DNA methylation events from stochastic methylation events. Using the hairpin bisulfite sequencing approach, we observed that 5% of CpG sites were asymmetrically methylated, but the frequencies of asymmetric methylation events decreased more than 200 times from approximately 5% for a single CpG to 0.02% for a sliding window of a 4-CpG genomic segment [10]. Therefore, in our proposed pipeline, the methylation patterns of 4-CpG genomic segments were determined from each bisulfite-converted sequencing read to minimize the influence of asymmetric DNA methylation. For all 4-CpG segments mapped to a given genomic loci, the variation in their methylation patterns was subjected to nonparametric Bayesian clustering followed by hypothesis testing to infer bipolar methylated loci [12]. After the filtering of allelic-specific methylated regions and merging overlapping segments, pCSM loci were collected for co-methylation analysis. In the second step, eigen-pCSM loci, representing pCSM clusters with distinct methylation profiles, were determined by WGCNA clustering and PCA analysis. In the third step, target methylomes were decomposed with eigen-pCSM loci using the NMF algorithm. The methylation matrix of eigen-pCSM loci in all samples was decomposed into a product with two matrices: one for the methylation profiles of estimated cell types and the other for the cell-type proportions across all samples. A three-step process to perform methylome dissection using eigen-pCSM loci. a In the first step, bipolar 4-CG segments are identified and a nonparametric Bayesian clustering algorithm is used for the determination of pCSM loci. b In the second step, co-methylation analysis is performed by k-means clustering coupled with WGCNA analysis. In each co-methylation module, PCA analysis is performed to pick the eigen-pCSM loci as a representative for the whole module. c In the third step, methylome dissection is performed by nonnegative matrix factorization (NMF), where matrix N stands for the raw methylation profile and is decomposed into two matrices, W and H. Matrix W represents the methylation profile of cell components, and matrix H represents the proportion of cell components Mammalian brain consists of many functionally distinct cell subsets that can contribute to diverse DNA methylation patterns on loci with cell subset-specific methylation. In particular, diverse subpopulations of neurons and glial cells can often be found even within a given brain region [28]. To demonstrate the effectiveness of our procedure, we performed two distinct analyses using synthetic methylomes derived from brain single cells and methylomes from brain-sorted cells. pCSM loci predicted with brain single-cell methylomes Our first case study took advantage of recent brain single-cell methylomes generated for 3377 neurons derived from mouse frontal cortex tissue [21] (Additional file 1: Table S1). 
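As an illustration of the decomposition in the third step (and not the pipeline's actual implementation), a generic NMF factorization of a loci-by-samples methylation matrix into LMC profiles (W) and mixing proportions (H) could be sketched as follows; the matrix sizes and data are hypothetical, and the constrained, regularized formulation used by MeDeCom is not reproduced here.

```python
# Illustrative sketch only: generic NMF of a methylation matrix (loci x samples)
# into W (loci x k, LMC methylation profiles) and H (k x samples, mixing weights).
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(1)
n_loci, n_samples, k = 500, 20, 3                      # hypothetical sizes
W_true = rng.uniform(0, 1, (n_loci, k))                 # methylation levels in [0, 1]
H_true = rng.dirichlet(np.ones(k), size=n_samples).T    # columns sum to 1
M = np.clip(W_true @ H_true + rng.normal(0, 0.02, (n_loci, n_samples)), 0, 1)

model = NMF(n_components=k, init='nndsvda', max_iter=1000, random_state=0)
W = model.fit_transform(M)                   # estimated LMC methylation profiles
H = model.components_                        # estimated (unnormalized) mixing weights
H_prop = H / H.sum(axis=0, keepdims=True)    # normalize columns to read them as proportions
```

With this decomposition in mind, the single-cell data were processed as follows.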
Following our previous procedure for single-cell methylome analysis [13], we determined the pCSM loci from each single-cell methylome. Briefly, for each methylome, we scanned the sequence reads one by one to identify genomic segments with methylation data for four neighboring CpG sites. To facilitate pCSM identification from the 4,326,935 4-CG segments identified, we first selected 1,070,952 pCSM candidates that were completely methylated in at least one neuron but also completely unmethylated in another. We next applied the beta mixture model to the methylation patterns in single neurons for these candidates segments [13]. 921,565 segments were determined to be pCSM segments with bipolar distributed methylation profiles, while the rest (149,387 segments) had heterogeneous methylation patterns among neurons. To gain a better understanding of pCSM, we analyzed several features of these 921,565 pCSM segments using the leftover 3,405,370 non-CSM segments from the starting 4,326,935 segments as controls. According to the methylation status of each 4-CG segment, we assigned the neurons into two subsets, hypermethylated and hypomethylated, and calculated the methylation difference of each 4-CG segment between the two cell subsets. For non-CSM segments with all methylated reads or unmethylated reads, only one cell subset could be identified, and thus, the methylation difference was set as zero. As expected, pCSM segments showed large methylation differences between the two cell subsets with an average of 0.70, while the average methylation difference for non-CSM segments was only 0.11 (Fig. 2a). The average methylation levels of pCSM segments among cells were broadly distributed, while the non-CSM segments tended to be either hypermethylated or hypomethylated (Fig. 2b). Some pCSM segments had average methylation levels approaching 1 or 0, but their bipolar methylation patterns allowed the splitting of cells into two groups with a methylation difference close to 1 (Fig. 2c). In contrast, the majority of either hypermethylated or hypomethylated non-CSM segment cells split into two groups with a methylation difference less than 0.2 (Fig. 2d). pCSM segments reflected methylation heterogeneity. a Distribution of methylation differences between cell subsets classified with pCSM and non-CSM segments. b Average methylation levels of pCSM segments and non-CSM segments across single cells. c, d Relationship between methylation level and methylation difference of pCSM segments (c) and non-CSM segments (d). The color indicates the densities of pCSM segments or non-CSM segments from low (blue) to high (red). e The distribution of pCSM loci across various genomic features compared to those of control regions To further explore the functional characteristics of pCSM segments, we merged the overlapped pCSM segments into 347,889 loci (Additional file 2: Table S2) and integrated them with brain histone modification maps. We observed that these pCSM loci were enriched at H3K27ac, H3K4me, and H3K4me3 peaks and CpG islands with 1.63-, 1.93-, 1.28-, and 1.52-fold increases, respectively (Fig. 2e). In addition, pCSM loci were depleted from repeat regions including SINE, LINE, and LTR. This result suggested that pCSM loci might play important regulatory roles in the brain. For the pCSM loci that overlapped with histone marks for enhancers or promoters, we identified their adjacent genes for functional enrichment analysis using the GREAT analysis tools [29]. 
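The candidate selection and cell-subset split described above can be summarized in a few lines; the sketch below uses hypothetical values and a simple threshold split in place of the beta mixture model that was actually applied.

```python
# Illustrative sketch (hypothetical data, not the published code): flag a 4-CpG
# segment as a pCSM candidate if it is fully methylated in at least one cell and
# fully unmethylated in another, then split cells into hyper-/hypomethylated
# subsets and compute the between-subset methylation difference.
import numpy as np

def pcsm_candidate(meth_levels, hi=1.0, lo=0.0):
    """meth_levels: per-cell methylation of one 4-CpG segment (values in [0, 1])."""
    m = np.asarray(meth_levels, dtype=float)
    return bool(np.any(m >= hi) and np.any(m <= lo))

def subset_difference(meth_levels, cutoff=0.5):
    """Mean methylation difference between hyper- and hypomethylated cell subsets."""
    m = np.asarray(meth_levels, dtype=float)
    hyper, hypo = m[m >= cutoff], m[m < cutoff]
    if hyper.size == 0 or hypo.size == 0:
        return 0.0   # only one subset observed, as for fully (un)methylated non-CSM segments
    return float(hyper.mean() - hypo.mean())

segment = [1.0, 1.0, 0.0, 0.25, 0.75, 0.0]   # toy per-cell methylation values
print(pcsm_candidate(segment), subset_difference(segment))
```

The functional enrichment results obtained with GREAT are summarized next.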
As shown in Additional file 3: Figure S1, genes associated with these pCSM loci are significantly enriched in the functional categories for brain development, such as "regulation of synaptic plasticity" and "metencephalon development." Altogether, these results indicate that pCSM loci showing bipolar methylation among neurons may play important roles in the epigenetic regulation of brain development. Synthetic methylome: eigen-pCSM loci determination and virtual methylome dissection by NMF In the previous study [21], a total of 3377 neurons were clustered into 16 neuronal cell types including mL2.3, mL4, mL5.1, mL5.2, mL6.1, mL6.2, mDL.1, mDL.2, mDL.3, and mIn.1 for excitatory neurons and mVip, mPv, mSst.1, mSst.2, mNdnf.1, and mNdnf.2 for inhibitory neurons. Such single-cell methylomes with assigned cell-type information provide ideal training and test sets to examine our approach. By merging single-cell methylomes within each cluster, we first created 16 artificial methylomes as references for distinct cell types. These 16 reference methylomes were then mixed in random proportions to create synthetic methylomes. To overcome the low read depth at each genomic locus, we performed clustering analysis to extract eigen-pCSM loci from the synthetic methylomes (Fig. 1b). To identify co-methylated modules, we collected a total of 61 mouse methylomes across all brain development stages and cell types (Additional file 1: Table S1). Based on the methylation profiles of pCSM loci in these brain methylomes, co-methylation analysis was performed through k-means clustering followed by weighted correlation network analysis [30] (Fig. 3a). For each co-methylation module, PCA analysis was performed to select a subset of pCSM loci as the eigen-pCSM loci representing the methylation trend (Fig. 3b). Co-methylation analysis to extract eigen-pCSM loci. a Heatmap of the methylation level of pCSM loci across brain methylomes. The methylation levels were represented by color gradient from blue (unmethylation) to red (full methylation). The color key in the right panel represents co-methylation modules. b Methylation profiles of the top five co-methylation modules. Each blue line represents the methylation level of pCSM loci across brain methylomes, the red lines represent the methylation level of eigen-pCSM loci picked by PCA analysis in each module, and 10% eigen-pCSM loci with the maximal loadings in PC1 were shown We simulated 100 synthetic methylomes composed of 16 reference methylomes in various ratios. The number of LMCs (k = 16) was determined according to prior knowledge, and the regularizer shifts' parameter (λ = 1e−04) was selected via cross-validation provided in the MeDeCom package (Additional file 3: Figure S2A). Each synthetic methylome was dissected into multiple latent DNA methylation components representing the hypothetic origins of the 16 reference methylomes (Fig. 4a, b) with their proportions determined (Fig. 4c). We further assigned the cell types predicted by NMF to the aforementioned 16 reference methylomes via clustering analysis (Fig. 4d). Corresponding to the decomposed cell types, the proportions of cell types predicted with NMF were also accurately reproduced (Fig. 4e) with a mean absolute error (MAE) of 0.037, which serves as a measure for the precision of the proportions of LMCs predicted by NMF. 
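The simulation just described can be illustrated with a small stand-in. The sketch below uses scikit-learn's generic NMF in place of the constrained MeDeCom factorization (no λ regularizer, no methylation-specific constraints) and random stand-in profiles instead of the merged single-cell reference methylomes, so it only shows the shape of the computation, not the reported numbers.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
n_loci, n_types, n_samples = 2000, 16, 100

# Stand-in "reference" profiles: methylation levels of eigen-pCSM loci (rows)
# for 16 cell types (columns). Real profiles come from merged single-cell data.
W_ref = rng.beta(0.3, 0.3, size=(n_loci, n_types))

# Random mixing proportions for each synthetic methylome (columns sum to 1).
H_ref = rng.dirichlet(np.ones(n_types), size=n_samples).T

N = W_ref @ H_ref                       # synthetic bulk methylomes: loci x samples

model = NMF(n_components=n_types, init="nndsvda", max_iter=1000, random_state=0)
W_hat = model.fit_transform(N)          # estimated cell-type profiles (loci x types)
H_hat = model.components_
H_hat = H_hat / H_hat.sum(axis=0)       # renormalize columns into proportions

# Match each reference type to its best-correlated estimated component
# (a proper one-to-one assignment, e.g. the Hungarian algorithm, would be
# used in practice) and compute the MAE of the recovered proportions.
corr = np.corrcoef(W_ref.T, W_hat.T)[:n_types, n_types:]
order = corr.argmax(axis=1)
mae = np.abs(H_ref - H_hat[order]).mean()
print(f"MAE of recovered proportions: {mae:.3f}")
```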
A high level of Pearson's correlations with a range from 0.82 to 1.00 was observed between the 12 immediately grouped reference neuronal types (i.e., mL5.1, mL4, mDL.1, mL2.3, mDL.2, mL6.1, mL6.2, mL5.2, mVip, mNdnf.2, mPv, and mSst.1) and the predicted cell types (Additional file 3: Figure S2B). The other four types of neuronal cells, including mDL.3, mIn.1, mNdnf.1, and mSst.2, were not decomposed from synthetic methylomes. The percentages of these four types of neurons only account for a small fraction (< 1.7%) of the 3377 neurons sequenced (Additional file 3: Figure S2C). The mapped reads for these four types were very limited (Additional file 3: Figure S2D). Thus, the methylation features of these four types may not be fully represented by the small number of pCSM loci identified (Additional file 3: Figure S2E). Since the proportions of the 16 cell types followed a uniform distribution in the simulation study (Additional file 3: Figure S2F), the failure in cell component decomposition is likely due to insufficient information in the eigen-pCSM loci to distinguish these four types of neurons from the others. This indicates that our procedure could have a detection limit for the rare cells. Another possibility is that some of the components had the unidentified cell types as their second-best matches. Therefore, missing just a few population-specific loci, e.g., due to poor coverage, could be the reason behind this loss of identifiability. Virtual methylome dissection based on eigen-pCSM loci. a Methylation profiles of eigen-pCSM loci, with each row representing an eigen-pCSM locus and each column representing one synthetic methylome. b Methylation profiles of NMF predicted cell types, with each row representing an eigen-pCSM loci and each column representing an NMF predicted cell type. c Heatmap of cell proportions predicted with NMF across all samples, with each row representing an NMF predicted cell type and each column representing a sample. The proportions were represented by color gradient from blue (low) to red (high). d Clustering analysis of cell types predicted by NMF and 16 reference methylomes. e Recovery of the mixing ratios for 16 neuronal cell types. The reference cell types that could not be unambiguously assigned to an LMC were considered as failures in prediction with a ratio of zero. In each line plot, the synthetic samples are sorted by ascending true mixing proportion In a previous study [19], highly variable CpG (hVar-CpG) sites, i.e., CpG sites with high sample-to-sample methylation variance, were proposed for the dissection of bulk methylomes. We next performed simulations 100 times with 2000 to 24,000 hVar-CpG sites or with pCSM loci to compare the classification accuracy using hVar-CpG sites vs pCSM loci. For the 16 cell types, the eigen-pCSM-loci-based method accurately assigned ten on average, while the hVar-CpG-sites-based method only predicted nine on average (Fig. 5a). Compared to the hVar-CpG-sites-based method, the eigen-pCSM-loci-based method exhibited a higher correlation and lower root-mean-square error (RMSE) between LMCs and their corresponding reference methylomes (Fig. 5b, c). In addition, a lower MAE was achieved with the increasing number of eigen-pCSM loci from each module. However, such an improvement could not be achieved by using additional hVar-CpG sites (Fig. 5d). Performance of virtual methylome dissection based on eigen-pCSM loci and hVar-CpG sites. a Number of correctly predicted cell types in each simulation. 
b Pearson correlation coefficient between LMCs and their corresponding reference methylome. c The root-mean-square error (RMSE) between LMCs and their corresponding reference methylome. d Mean absolute error (MAE) between NMF predicted proportions and real proportions, with the dot showing the mean MAE and the shade showing the standard deviation of the MAE in 100 simulations Brain methylome: virtual methylome dissection for neuronal cells To examine whether the proposed virtual methylome dissection approach can be applied to the methylomes generated from tissue samples, we re-analyzed five brain methylomes derived from sorted nuclei including excitatory (EXC) neurons, parvalbumin (PV) expressing fast-spiking interneurons, vasoactive intestinal peptide (VIP) expressing interneurons [31], and mixed neurons from the cortex's of 7-week (7wk NeuN+) and 12-month (12mo NeuN+) mice [32]. These five methylomes were analyzed separately and together as a mixed pool (Additional file 3: Figure S3A). 19,091 to 212,218 pCSM segments were identified in the six methylomes, accordingly. Among the 212,218 pCSM segments identified in the mixed pool, 118,409 segments showed differential DNA methylation states across the five neuronal samples; the other 93,809 pCSM segments were found to be pCSM segments within the five methylomes (Additional file 3: Figure S3B). Since a significant number of pCSM segments can be identified from pooled samples to capture differences among sorted cells (Additional file 3: Figure S3B), it is a better strategy to pool methylomes from sorted cells for pCSM loci identification, particularly when methylomes have a low read depth. Next, we asked whether the pCSM segments identified from the pooled methylome could reflect the cell-type-specific methylation pattern derived from single-cell methylomes. Interestingly, we found that the pCSM segments identified from the pooled methylome were significantly overlapped with those identified using single-cell methylomes (Additional file 3: Figure S3C). This indicates that the cell-type-specific methylated loci determined with single-cell methylomes could also be detected using a bulk methylome. In addition, pCSM loci identified from the pooled methylome (Additional file 4: Table S3) were enriched at enhancer histone markers and CpG islands, but were depleted from promoter, 5′UTR, and repeat elements (Additional file 3: Figure S3D). To further explore the composition of the five neuronal cell populations, we performed methylome virtual dissection based on pCSM loci identified from the pooled methylome. Following the aforementioned procedure, we performed co-methylation analysis and extracted eigen-pCSM loci from each module. An NMF model was performed with 20,000 eigen-pCSM loci selected to decompose the five methylomes. The cross-validation error showed a substantial change at k ≥ 3 (Fig. 6a), which indicated the existence of at least three major epigenetically distinct cell components, i.e., LMCs. We then examined the factorization results and compared the three main LMCs at k = 3 and λ = 10−5 to the single-cell reference profiles. Clustering analysis showed that the reference profiles of EXC, PV, and VIP neurons are related to LMC1, LMC3, and LMC2, respectively (Fig. 6b). In addition, we found that the samples of EXC, PV, and VIP neurons have high purity (Fig. 6c). 
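The way the number of components is read off the error curve can be mimicked with a toy example. MeDeCom selects k and λ by cross-validation; the plain reconstruction-error scan below, again with scikit-learn's unconstrained NMF and made-up data, is only a simplified analogue of that idea.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(1)

# Hypothetical bulk methylomes (loci x samples) mixed from 3 latent profiles.
W_true = rng.beta(0.3, 0.3, size=(2000, 3))
H_true = rng.dirichlet(np.ones(3), size=5).T
N = W_true @ H_true

for k in range(1, 6):
    model = NMF(n_components=k, init="nndsvda", max_iter=500, random_state=0)
    model.fit(N)
    # The error drops sharply up to the true number of components (k = 3 here)
    # and flattens afterwards, mirroring the change observed at k >= 3.
    print(k, round(model.reconstruction_err_, 4))
```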
Although the cellular composition of NeuN+ cells is unknown and depends highly on the cell sorting procedure, about 70–85% of mouse cortical neurons are excitatory with 6–12% PV neurons and 1.8–3.6% VIP neurons [31, 33]. In our study, the 7-week NeuN+ sample was predicted to have a mixture of 94.73% excitatory neurons, 4.35% PV neurons, and 0.92% VIP neurons. The 12-month NeuN+ sample was predicted to consist of 88.98% excitatory neurons, 7.6% PV neurons, and 3.42% VIP neurons. Considering the fact that inhibitory neurons have been reported as more likely to be depleted during the NeuN sorting procedure [34], our predictions were largely consistent with the known composition of mouse cortical neurons. Altogether, these results indicate that pCSM loci may serve as excellent predictors to decompose bulk methylomes. Methylome virtual dissection of five neuronal sorted cell populations. a Selection of parameters k and λ by cross-validation provided by MeDeCom Package. b Clustering analysis of predicted cell types and reference cell types when k = 3, with the red nodes representing the predicted cell types and the blue nodes representing the reference cell types from single-cell methylomes. c Predicted proportions of each LMC in five datasets In this study, we implemented an analysis pipeline to predict the composition of cell subtypes in bulk methylomes. To our knowledge, this is the first endeavor to systematically analyze the variation in DNA methylation patterns to infer pCSM loci as inputs for the NMF model. Application of synthetic methylomes that are simulated based on single-cell methylomes and methylomes derived from sorted cells demonstrated that our approach is efficient and has high prediction accuracy. Our procedure is semi-reference free. The clustering of pCSM loci to identify representative eigen-pCSM loci depends on the methylomes collected. With rapidly accumulating methylome data, such a method will gain power and can be widely used to explore cell heterogeneity during tissue development and disease progression. Analyses of single-nucleus methylcytosine sequencing (snmC-seq) datasets Single-nucleus methylcytosine sequencing datasets of 3377 neurons from 8-week-old mouse cortex (GSE97179) were downloaded from the Gene Expression Omnibus (GEO). These datasets were analyzed following the processing steps provided in a previous study [21]: (1) Sequencing adaptors were first removed using Cutadapt v2.1 [35], (2) trimmed reads were mapped to the mouse genome (GRCm38/mm10) in single-end mode using Bismark v0.16.3 [1], with the pbat option activated for mapping R1 reads [21], (3) duplicated reads were filtered using picard-tools v2.0.1, (4) non-clonal reads were further filtered by minimal mapping quality (MAPQ ≥ 30) using samtools view [36] with option −q30, and (5) methylation calling was performed by Bismark v0.16.3. Identification of pCSM loci from snmC-seq datasets pCSM loci were determined from single-cell methylomes with a similar procedure to what was provided in a previous study [13]. Briefly, for each snmC-seq dataset, all segments with four neighboring CpG sites in any sequence read were extracted from autosomes, and the corresponding methylation patterns were recorded. The 4-CpG segments that overlapped with known imprinted regions [11] were excluded in subsequent steps. To ensure statistical power for the identification of pCSM loci, segments covered by at least ten single-cell methylomes were retained for further analysis. 
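The per-read segment extraction described here can be sketched as follows; the published implementation is the csmFinder package, so this is only an illustration, and the candidate and coverage filters are described in the next paragraph. A read is assumed to be given as the genomic positions of its CpG sites plus binary methylation calls.

```python
def four_cpg_segments(cpg_positions, methylation_calls):
    """Slide a 4-CpG window over one read and record, for each window,
    the genomic segment (first and last CpG position) and its pattern.

    cpg_positions: sorted genomic coordinates of the CpGs covered by the read.
    methylation_calls: 1 (methylated) or 0 (unmethylated) for each CpG.
    """
    segments = {}
    for i in range(len(cpg_positions) - 3):
        window_pos = cpg_positions[i:i + 4]
        pattern = tuple(methylation_calls[i:i + 4])
        segments[(window_pos[0], window_pos[-1])] = pattern
    return segments

# Example read with six CpGs -> three overlapping 4-CpG segments.
print(four_cpg_segments([100, 130, 155, 180, 210, 250],
                        [1, 1, 0, 0, 0, 1]))
```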
The remaining 4-CG segments covered by at least one completely methylated cell and one completely unmethylated cell in such genomic loci were identified as CSM loci candidates. From these candidates, a beta mixture model [13] was used to infer pCSM loci, by which cells that covered the same segment could be grouped into hypomethylated and hypermethylated cell subsets. The segments with methylation differences between hypomethylated and hypermethylated cell subsets over 30% and adjusted p values less than 0.05 were then identified as the pCSM loci. Analyses of whole-genome bisulfite sequencing datasets Sequencing adaptors and bases with low sequencing quality were first trimmed off using Trim Galore v0.4.4. The retained reads were then mapped to the mouse reference genome (GRCm38/mm10) using Bismark v0.16.3. Duplicated reads were removed using deduplicate_bismark. Lastly, methylation calling was performed by Bismark v0.16.3. Identification of pCSM loci from WGBS datasets pCSM loci were identified from WGBS datasets following a strategy described previously [10] with slight modifications. Genomic segments with four neighboring CpGs were determined within each sequence read. Such 4-CpG segments covered with at least ten reads were retained for further identification of bipolar methylated segments. A nonparametric Bayesian clustering algorithm [12] was performed to detect bipolar methylated segments that were covered by at least one completely methylated and one completely unmethylated read concurrently. Bipolar segments in chromosome X, Y, and known imprinted regions [11] were excluded from further analysis. Genome annotation and gene ontology analysis Genomic features were downloaded from the UCSC Genome database [37], including annotation for gene structure, CpG islands (CGI), and repeat elements in mm10. Promoters were defined as 2 kb regions upstream of transcription starting sites (TSS). CGI shores were defined as 2 kb outside of the CGI, and CGI shelves were defined as 2 kb outside of the CGI shores. The broad peaks of histone modifications H3K4me1, H3k4me3, and H3K27ac for 8-week mouse cortex were obtained from the ENCODE Project [38] (with accession GSM769022, GSM769026, and GSM1000100, respectively) and lifted from mm9 to mm10 using UCSC LiftOver tools. GO enrichment analysis for pCSM loci enriched in histone peaks was performed by the GREAT tool V3.0.0 [29] using default settings. Co-methylation, eigen-pCSM loci extraction, and NMF analyses for virtual methylome dissection A two-step clustering approach was adopted for co-methylation analysis. First, k-means clustering analysis was performed to divide pCSM loci into hypo/mid/hypermethylation groups. For each k-means cluster, the R package WGCNA v1.61 [30] was used to identify co-methylation modules of highly correlated pCSM loci. Briefly, for a given DNA methylation profile, a topological overlap measure (TOM) was used to cluster pCSM loci into network modules. The soft-thresholding power was determined with the scale-free topology. Network construction and module determination were performed using the "blockwiseModules" function in WGCNA, and the network type was set to "signed" during network construction to filter the negatively correlated pCSM loci within one module. PCA analysis was performed to select a subset of pCSM loci with the maximal loadings in PC1 as eigen-pCSM loci for the corresponding module. The R package MeDeCom V0.2 [19] was used to dissect the methylomes using NMF analysis. 
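The segment-level decision applied to the single-cell candidates above can be reduced to a deliberately simplified sketch: cells covering a candidate segment are split into hypermethylated and hypomethylated subsets and the 30% methylation-difference criterion is applied. The real procedure fits a beta mixture model and uses adjusted p values; the fixed 0.5 split used here is only a stand-in for that model-based grouping.

```python
import numpy as np

def is_pcsm_candidate_segment(cell_levels, min_difference=0.30, split=0.5):
    """cell_levels: per-cell methylation level of one 4-CG segment
    (fraction of methylated CpGs, 0.0-1.0) for all cells covering it.
    Returns True if the hyper- and hypomethylated subsets differ by >30%."""
    levels = np.asarray(cell_levels, dtype=float)
    hyper = levels[levels >= split]
    hypo = levels[levels < split]
    if hyper.size == 0 or hypo.size == 0:
        return False                      # only one subset present: not bipolar
    return (hyper.mean() - hypo.mean()) > min_difference

print(is_pcsm_candidate_segment([1.0, 1.0, 0.0, 0.25, 0.0, 0.75]))  # True
print(is_pcsm_candidate_segment([1.0, 0.75, 1.0, 1.0]))             # False
```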
A matrix with eigen-pCSM loci in rows and samples in columns can be decomposed into the product of two matrices: one representing the profiles of the predicted cell types, with eigen-pCSM loci in rows and cell types in columns, and the other containing the proportions of the predicted cell types in each sample, with cell types in rows and samples in columns. Two parameters need to be set manually in the NMF analysis: the number of cell types k, and the regularization parameter λ, which shifts the estimated matrix of methylation patterns toward biologically plausible binary values close to zero (unmethylated) or one (methylated). k is dictated by prior knowledge on the input methylomes. In the case that no prior knowledge of cell composition is available for the input methylomes, both k and λ may be selected via cross-validation as suggested in the MeDeCom package. Cell mixture methylome synthesis and virtual methylome dissection simulation First, 16 artificial methylomes were created as references by merging the single-cell methylomes of each neuronal cell type identified in a previous study [21]. Then, the simulated methylomes were generated by mixing the reference methylomes in random proportions. In each simulation, 100 methylomes were synthesized, based on which virtual methylome dissection was performed using the profiles of the eigen-pCSM loci in these 100 methylomes. To identify cell components from the dissection results, clustering analysis was performed on the dissected LMCs and the 16 reference neuronal cell types, and the LMCs unambiguously matched to one of the reference neuronal cell types were considered to be recognized. The RMSE between LMCs and their matched reference methylomes was calculated to evaluate the recovery of the reference methylomes by the following formula: $$ \mathrm{RMSE} = \sqrt{\frac{\sum_{i=1}^{N} \left(m_i - \widehat{m}_i\right)^2}{N}} $$ where each pair of \( m \) and \( \widehat{m} \) denotes the true methylation level (\( m \)) of one genomic locus in the reference methylome and the estimated methylation level (\( \widehat{m} \)) of that locus in the corresponding predicted cell component. N denotes the number of loci. To evaluate the recovery of the mixing proportions, the MAE between the true proportions of the neuronal cell types and the estimated proportions of the recognized cell components was calculated by the following formula: $$ \mathrm{MAE} = \frac{\sum_{i=1}^{16} \left| p_i - \widehat{p}_i \right|}{16} $$ where each pair of \( p \) and \( \widehat{p} \) denotes the true proportion (\( p \)) of one reference neuronal cell type and the estimated proportion (\( \widehat{p} \)) of its corresponding predicted cell component. The proportions of the estimated cell components that could not be mapped to true cell types were set to zero. For comparison, a parallel analysis was also performed using 2000 to 24,000 hVar-CpG sites with the maximal sample-to-sample variation. Source code for pCSM loci identification and eigen-pCSM loci extraction is available at https://github.com/Gavin-Yinld/csmFinder and https://github.com/Gavin-Yinld/coMethy, respectively. Krueger F, Andrews SR. Bismark: a flexible aligner and methylation caller for Bisulfite-Seq applications. Bioinformatics. 2011;27:1571–2. https://doi.org/10.1093/bioinformatics/btr167. Liu Y, Siegmund KD, Laird PW, Berman BP. Bis-SNP: combined DNA methylation and SNP calling for Bisulfite-seq data. Genome Biol. 2012;13:R61. https://doi.org/10.1186/gb-2012-13-7-r61.
Assenov Y, Muller F, Lutsik P, Walter J, Lengauer T, Bock C. Comprehensive analysis of DNA methylation data with RnBeads. Nat Methods. 2014;11:1138–40. https://doi.org/10.1038/nmeth.3115. Morris TJ, Butcher LM, Feber A, Teschendorff AE, Chakravarthy AR, Wojdacz TK, et al. ChAMP: 450k chip analysis methylation pipeline. Bioinformatics. 2014;30:428–30. https://doi.org/10.1093/bioinformatics/btt684. Xie H, Wang M, Andrade A, Bonaldo Mde F, Galat V, Arndt K, et al. Genome-wide quantitative assessment of variation in DNA methylation patterns. Nucleic Acids Res. 2011;39:4099–108. https://doi.org/10.1093/nar/gkr017. He J, Sun X, Shao X, Liang L, Xie H. DMEAS: DNA methylation entropy analysis software. Bioinformatics. 2013;29:2044–5. https://doi.org/10.1093/bioinformatics/btt332. Shao X, Zhang C, Sun MA, Lu X, Xie H. Deciphering the heterogeneity in DNA methylation patterns during stem cell differentiation and reprogramming. BMC Genomics. 2014;15:978. https://doi.org/10.1186/1471-2164-15-978. Zhao L, Sun MA, Li Z, Bai X, Yu M, Wang M, et al. The dynamics of DNA methylation fidelity during mouse embryonic stem cell self-renewal and differentiation. Genome Res. 2014;24:1296–307. https://doi.org/10.1101/gr.163147.113. He J, Sun MA, Wang Z, Wang Q, Li Q, Xie H. Characterization and machine learning prediction of allele-specific DNA methylation. Genomics. 2015;106:331–9. https://doi.org/10.1016/j.ygeno.2015.09.007. Sun MA, Sun Z, Wu X, Rajaram V, Keimig D, Lim J, et al. Mammalian brain development is accompanied by a dramatic increase in bipolar DNA methylation. Sci Rep. 2016;6:32298. https://doi.org/10.1038/srep32298. Xie W, Barr CL, Kim A, Yue F, Lee AY, Eubanks J, et al. Base-resolution analyses of sequence and parent-of-origin dependent DNA methylation in the mouse genome. Cell. 2012;148:816–31. https://doi.org/10.1016/j.cell.2011.12.035. Wu X, Sun MA, Zhu H, Xie H. Nonparametric Bayesian clustering to detect bipolar methylated genomic loci. BMC Bioinformatics. 2015. https://doi.org/10.1186/s12859-014-0439-2. Luo Y, He J, Xu X, Sun MA, Wu X, Lu X, et al. Integrative single-cell omics analyses reveal epigenetic heterogeneity in mouse embryonic stem cells. PLoS Comput Biol. 2018;14:e1006034. https://doi.org/10.1371/journal.pcbi.1006034. Accomando WP, Wiencke JK, Houseman EA, Nelson HH, Kelsey KT. Quantitative reconstruction of leukocyte subsets using DNA methylation. Genome Biol. 2014;15:R50. https://doi.org/10.1186/gb-2014-15-3-r50. Teschendorff AE, Breeze CE, Zheng SC, Beck S. A comparison of reference-based algorithms for correcting cell-type heterogeneity in epigenome-wide association studies. BMC Bioinformatics. 2017;18:105. https://doi.org/10.1186/s12859-017-1511-5. Rahmani E, Schweiger R, Shenhav L, Wingert T, Hofer I, Gabel E, et al. BayesCCE: a Bayesian framework for estimating cell-type composition from DNA methylation without the need for methylation reference. Genome Biol. 2018;19:141. https://doi.org/10.1186/s13059-018-1513-2. Koestler DC, Jones MJ, Usset J, Christensen BC, Butler RA, Kobor MS, et al. Improving cell mixture deconvolution by identifying optimal DNA methylation libraries (IDOL). BMC Bioinformatics. 2016;17:120. https://doi.org/10.1186/s12859-016-0943-7. Rahmani E, Zaitlen N, Baran Y, Eng C, Hu D, Galanter J, et al. Sparse PCA corrects for cell type heterogeneity in epigenome-wide association studies. Nat Methods. 2016;13:443–5. https://doi.org/10.1038/nmeth.3809. Lutsik P, Slawski M, Gasparoni G, Vedeneev N, Hein M, Walter J. 
MeDeCom: discovery and quantification of latent components of heterogeneous methylomes. Genome Biol. 2017;18:55. https://doi.org/10.1186/s13059-017-1182-6. Teschendorff AE, Relton CL. Statistical and integrative system-level analysis of DNA methylation data. Nat Rev Genet. 2018;19:129–47. https://doi.org/10.1038/nrg.2017.86. Luo C, Keown CL, Kurihara L, Zhou J, He Y, Li J, et al. Single-cell methylomes identify neuronal subtypes and regulatory elements in mammalian cortex. Science. 2017;357:600–4. https://doi.org/10.1126/science.aan3351. Gu C, Liu S, Wu Q, Zhang L, Guo F. Integrative single-cell analysis of transcriptome, DNA methylome and chromatin accessibility in mouse oocytes. Cell Res. 2019;29:110–23. https://doi.org/10.1038/s41422-018-0125-4. Luo C, Rivkin A, Zhou J, Sandoval JP, Kurihara L, Lucero J, et al. Robust single-cell DNA methylome profiling with snmC-seq2. Nat Commun. 2018;9:3824. https://doi.org/10.1038/s41467-018-06355-2. Hu Y, Huang K, An Q, Du G, Hu G, Xue J, et al. Simultaneous profiling of transcriptome and DNA methylome from a single cell. Genome Biol. 2016;17:88. https://doi.org/10.1186/s13059-016-0950-z. Gravina S, Dong X, Yu B, Vijg J. Single-cell genome-wide bisulfite sequencing uncovers extensive heterogeneity in the mouse liver methylome. Genome Biol. 2016;17:150. https://doi.org/10.1186/s13059-016-1011-3. Farlik M, Sheffield NC, Nuzzo A, Datlinger P, Schonegger A, Klughammer J, et al. Single-cell DNA methylome sequencing and bioinformatic inference of epigenomic cell-state dynamics. Cell Rep. 2015;10:1386–97. https://doi.org/10.1016/j.celrep.2015.02.001. Guo H, Zhu P, Wu X, Li X, Wen L, Tang F. Single-cell methylome landscapes of mouse embryonic stem cells and early embryos analyzed using reduced representation bisulfite sequencing. Genome Res. 2013;23:2126–35. https://doi.org/10.1101/gr.161679.113. Molyneaux BJ, Arlotta P, Menezes JR, Macklis JD. Neuronal subtype specification in the cerebral cortex. Nat Rev Neurosci. 2007;8:427–37. https://doi.org/10.1038/nrn2151. McLean CY, Bristor D, Hiller M, Clarke SL, Schaar BT, Lowe CB, et al. GREAT improves functional interpretation of cis-regulatory regions. Nat Biotechnol. 2010;28:495–501. https://doi.org/10.1038/nbt.1630. Langfelder P, Horvath S. WGCNA: an R package for weighted correlation network analysis. BMC Bioinformatics. 2008;9:559. https://doi.org/10.1186/1471-2105-9-559. Mo A, Mukamel EA, Davis FP, Luo C, Henry GL, Picard S, et al. Epigenomic signatures of neuronal diversity in the mammalian brain. Neuron. 2015;86:1369–84. https://doi.org/10.1016/j.neuron.2015.05.018. Lister R, Mukamel EA, Nery JR, Urich M, Puddifoot CA, Johnson ND, et al. Global epigenomic reconfiguration during mammalian brain development. Science. 2013;341:1237905. https://doi.org/10.1126/science.1237905. Gelman DM, Marin O. Generation of interneuron diversity in the mouse cerebral cortex. Eur J Neurosci. 2010;31:2136–41. https://doi.org/10.1111/j.1460-9568.2010.07267.x. Lake BB, Codeluppi S, Yung YC, Gao D, Chun J, Kharchenko PV, et al. A comparative strategy for single-nucleus and single-cell transcriptomes confirms accuracy in predicted cell-type expression from nuclear RNA. Sci Rep. 2017;7:6031. https://doi.org/10.1038/s41598-017-04426-w. Martin M. Cutadapt removes adapter sequences from high-throughput sequencing reads. EMBnet.j. 2011;17:10–2. https://doi.org/10.14806/ej.17.1.200. Li H, Handsaker B, Wysoker A, Fennell T, Ruan J, Homer N, et al. The sequence alignment/map format and SAMtools. Bioinformatics. 2009;25:2078–9. 
https://doi.org/10.1093/bioinformatics/btp352. Karolchik D, Hinrichs AS, Furey TS, Roskin KM, Sugnet CW, Haussler D, et al. The UCSC table browser data retrieval tool. Nucleic Acids Res. 2004;32:D493–6. https://doi.org/10.1093/nar/gkh103. Consortium EP, Birney E, Stamatoyannopoulos JA, Dutta A, Guigo R, Gingeras TR, et al. Identification and analysis of functional elements in 1% of the human genome by the ENCODE pilot project. Nature. 2007;447:799–816. https://doi.org/10.1038/nature05874. The authors thank Dr. Janet Webster for English language editing and Drs. Joseph R. Ecker, Ryan Lister, and Eran A. Mukamel for sharing brain methylome data and the laboratories contributing to ENCODE project. This work was supported by the Strategic Priority Research Program of the Chinese Academy of Sciences (XDB13000000 for X.L.), Fralin Life Sciences Institute at Virginia Tech faculty development fund (for H.X.) and VT's Open Access Subvention Fund, the Key Research Program of the Chinese Academy of Sciences (KFZD-SW-220-1 for X.L.), and the CAS Light of West China Program (for X.L.). Liduo Yin and Yanting Luo contributed equally to this work State Key Laboratory of Genetic Resources and Evolution, Kunming Institute of Zoology, Chinese Academy of Sciences, Kunming, 650223, China Liduo Yin & Xuemei Lu Kunming College of Life Science, University of Chinese Academy of Sciences, Beijing, 100101, China Liduo Yin Center for Excellence in Animal Evolution and Genetics, Chinese Academy of Sciences, Kunming, 650223, China Key Laboratory of Genomic and Precision Medicine, Beijing Institute of Genomics, Chinese Academy of Sciences, Beijing, 100101, China Yanting Luo & Shiyu Wen Epigenomics and Computational Biology Lab, Fralin Life Sciences Institute at Virginia Tech, Virginia Tech, Blacksburg, VA, 24061, USA Xiguang Xu & Hehuang Xie Department of Biological Sciences, Virginia Tech, Blacksburg, VA, 24061, USA Department of Statistics, Virginia Tech, Blacksburg, VA, 24061, USA Xiaowei Wu Department of Biomedical Sciences and Pathobiology, Virginia-Maryland College of Veterinary Medicine, Virginia Tech, Blacksburg, VA, 24061, USA Hehuang Xie School of Future Technology, University of Chinese Academy of Sciences, Beijing, 100101, China Xuemei Lu Yanting Luo Xiguang Xu Shiyu Wen HX and XL conceived and designed the study; LY and YL implemented procedures and conducted data analysis; XX, SW, and XL participated in data preparation, result organization, and discussion; LY, XW, and HX wrote the manuscript. All authors discussed the results and commented on the manuscript. All authors read and approved the final manuscript. Correspondence to Xuemei Lu or Hehuang Xie. A summary of data source for datasets derived from mouse brain and sorted neurons. Genomic coordinate (mm10 based) of pCSM loci identified from single-cell brain methylomes. Functional enrichment of genes with pCSM loci overlapped with enhancer or promoter histone marks. Figure S2. Virtual methylome dissection using eigen-pCSM loci. A) Selection of parameter λ by cross-validation. B) Pearson's correlation coefficient between real cell types and NMF predicted cell types. C) The number of cells in each neuronal cell types identified by Luo et al. The percentage of each neuronal type in 3377 neurons sequenced is shown at the top of each bar. D) The number of mapped reads in each neuronal cell type. The fraction of reads mapped in each neuronal type accounts for all mapped reads in 3377 neurons is shown at the top of each bar. 
E) The fraction of the pCSM loci covering each cell type. F) The synthetic proportions of each neuronal cell type. The error bar shows the standard deviation of the synthetic proportions in 100 methylomes. Figure S3. Characteristics of pCSM loci identified from brain methylomes. A) A sketch map of pooling samples. B) Number of pCSM segments identified from neuronal and pooled methylome. "Vanished" represents the segments identified as pCSM segments within each neuronal cell population but identified as non-CSM segments in pooled sample. "Emerged" represents the segments identified as pCSM segments in pooled sample but identified as non-CSM segments within each individual cell population. "Derived" represents the segments identified as pCSM segments in both pooled sample and at least one neuronal cell population. C) Venn plot shows the overlap between pCSM segments identified from single-cell methylomes and those identified from the pooled methylome. D) The distribution of pCSM loci across various genomic features compared to those of control regions. Additional file 4: Table S3 . Genomic coordinate (mm10 based) of pCSM loci identified from bulk brain methylomes. Yin, L., Luo, Y., Xu, X. et al. Virtual methylome dissection facilitated by single-cell analyses. Epigenetics & Chromatin 12, 66 (2019). https://doi.org/10.1186/s13072-019-0310-9 Cellular heterogeneity Nonnegative matrix factorization Single-cell methylome
International Journal of Digital Humanities, July 2019, Volume 1, Issue 2, pp 235–250

Tracking the evolution of translated documents: revisions, languages and contaminations
Gioele Barabucci
Part of the following topical collections: Special Issue on Digital Scholarly Editing

Dealing with documents that have changed through time requires keeping track of additional metadata, for example the order of the revisions. This small issue explodes in complexity when these documents are translated. Even more complicated is keeping track of the parallel evolution of a document and its translations. The fact that this extra metadata has to be encoded in formal terms in order to be processed by computers has forced us to reflect on issues that are usually overlooked or, at least, not actively discussed and documented: How do I record which document is a translation of which? How do I record that this document is a translation of that specific revision of another document? And what if a certain translation has been created using one or more intermediate translations with no access to the original document? In this paper we address all these issues, starting from first principles and incrementally building towards a comprehensive solution. This solution is then distilled in terms of formal concepts (e.g., translation, abstraction levels, comparability, division in parts, addressability) and abstract data structures (e.g., derivation graphs, revisions-alignment tables, source-document tables, source-part tables). The proposed data structures can be seen as a generalization of the classical evolutionary trees (e.g., stemma codicum), extended to take into account the concepts of translation and contamination (i.e., multiple sources). The presented abstract data structures can easily be implemented in any programming language and customized to fit the specific needs of a research project.

Keywords: Revision control for translated documents; Independent evolution of translated documents; Data structures for stemma codicum; Multi-language stemma codicum

Early in the process of setting up the digital environment for a critical edition of a work whose tradition is composed of witnesses in many different languages a problem arises: how to describe the relations between all these witnesses. Classical textual scholars have a solid scholarly tradition to refer to when dealing with such an issue. Textbooks, theories and methodologies on how to write a stemma codicum abound. Much literature on how to deal with translated witnesses also exists, mostly thanks to biblical studies. The literature, being written with classical scholars in mind, does not address in depth all the formal details needed to carry out a machine-based analysis of the evolution of a text and its translations. Take, for example, the case of referring to part of a document. Different languages have different ways to divide documents and these have changed through the centuries. A simple reference, such as "the second paragraph", may refer to the wrong content in a certain translation or be completely nonsensical in the context of another translation written in an older language without the concept of paragraphs. For traditional textual scholars this is not a real problem: they know how to adjust their work to overcome these issues. In digital scholarly editions, however, it is not possible to gloss over these details. Without a proper formalization, many of the tools of the digital scholar are no longer usable.
To continue with the example of the references, think how useless a synoptic visualization tool like CATview (Pöckelmann et al. 2015; Barabucci 2016) would be, if it naively relied on paragraph indices to display the content of two documents written in two languages, each of which employs a different paragraph order. The aim of this paper is to provide a set of formal and actionable definitions for many of the concepts that have to be addressed when creating a digital edition of works which have evolved over time (through authorial changes or copyist errors) and of which multiple translations exist (with some of them having a tradition of their own). The content of this paper reflects the work being done in the context of the Averroes Digital Edition project at the Thomas-Institut of the University of Cologne.1 The objective of the Averroes Digital Edition is to produce various scholarly editions of the works of the Andalusian Philosopher Averroes (also known as Ibn Rušd). The editions comprise Averroes's original Arabic texts as well as their Medieval Latin and Hebrew translations. To this end, all the known witnesses are being transcribed using a suitable TEI schema2 and adding extra metadata about the origin of the text that they contain, for example, adding references to an ideal superstructure via a so-called chunking mechanism (Barabucci 2017). The fact that this extra metadata has to be encoded in formal terms in order to be read easily by machines has forced the project members to reflect on issues that are usually overlooked or, at least, not actively discussed and documented. This paper provides, in Section 2, a discussion of the overall problem of describing the parallel evolution of a text and its translation. It then goes on in Section 3 to describe a series of possible approaches to this problem, starting with simple solutions to simple problems and progressively addressing more and more complex cases that require more advanced solutions. After this discussion from first principles, each fundamental concept is isolated and formalized in Section 4. These formalizations have a double aim: providing a blueprint for an implementation and allowing a more precise discussion of these issues, including their inherent trade-offs. Future publications will describe how the abstract data structures described in this paper can be implemented in practice in standalone tools or as part of an environment dedicated to the study and production of digital editions. 2 The overall problem: How to describe the parallel evolution of a text and its translations The problem of keeping track of how a document has evolved and how its translations have followed this evolution can been seen from two perspectives: from the present towards the future (time-forward) or from the present towards the past (time-reversed). In the time-forward perspective we see the document evolution unfold: first a document is created, then its translations appear, then the document is modified and, subsequently, its translations are updated. This perspective is the perspective of the working translator. Understanding the evolution of the work is then quite easy: it is a matter of recording what has happened. The question is, however, "how, exactly?". A classical solution is a tree-like structure in which the chain of master documents forms the trunk of the tree and its translations are branches on the side. As the document evolves this tree grows with more and more branches. 
In the time-reversed perspective we retrospectively rebuild this tree-like structure starting from the single items that will compose it and making educated guesses about their relations and the possible branches of the tree. This perspective is the perspective of the textual scholar. Part of the work of the textual scholar is to create a stemma codicum on the basis of plausible hypotheses. In other words, the textual scholar makes suppositions from the present about how the tree has evolved in the past. In practice, these two perspectives are equivalent for what concerns the task of recording the relations between revisions of documents and their translations. The only difference lies in where the information about these relations comes from: in the time-forward case, the required information is recorded as time goes on; in the time-reversed case, the required information is provided by the scholar on the basis of their hypotheses. Only the time-forward perspective will be used in this paper. Given that the data structures required to keep track of how the document and its translations have evolved are the same, it makes sense to use the perspective that is the most straightforward to work with. One thing that will not be discussed is, surprisingly enough, the concept of translation itself. We will refer to Halverson (1997) for a thorough discussion on the different ways a document can be translated and on the various schools of thought on this topic. 3 Practical issues This section deals with various practical issues related to the general problem of keeping track of the evolution of a document and its translations. In particular, we will investigate things like which pieces of information one has to record when a new revision of the master document appears, how to record that only part of a document needs to be translated anew, and how to record that a translation has been compiled from multiple sources. Discussing these issues allows us to incrementally build the set of requirements that our formalization will have to support. 3.1 The act of translation In its most basic form, the translation problem comprises three parts: document A (the master), with content expressed in the language λA; document B (the target), the content of which is expressed in the language λB; the translator, that reads the content of A and produces the content of B. As already stated, this paper will neither discuss the role of the translator nor the concept of translation. We assume that the translation act can be represented by a function trans that takes A, λA and λB as inputs and produces B. (This rough function definition will be refined and improved in the rest of this paper.) At this stage things are quite simple: one file will store the content of A, another file will store the content for B. In this paper we will ignore details like which formats are used to store the content or where the content is stored. To keep things simple, we are going to assume that the content of A and B is stored in two distinct files that are readily accessible to the system. 3.2 The basic problem: Keeping master and target aligned We start with a basic problem: the master document A has been modified and its translation B must be updated. The fact that documents can be updated leads to the existence of multiple revisions of A and (eventually) of multiple revisions of B. To keep track of them, one needs a way to identify these revisions. In other words, one needs a revision ID. 
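To make the discussion concrete, the minimal setting described so far (one master file, one target file and a translation step between them) can be sketched as data. Python is used here purely as notation; the field names and the path convention are illustrative assumptions, not something prescribed by the paper.

```python
from dataclasses import dataclass

@dataclass
class Document:
    language: str   # e.g. "ar" for Arabic, "la" for Latin
    path: str       # file in which the content of this document is stored

def trans(master: Document, target_language: str) -> Document:
    """Stand-in for the translation act: the actual translating is done by a
    human translator; this function only records where the result lives."""
    target_path = f"{master.path}.{target_language}"
    # ... the translator writes the translated content to target_path ...
    return Document(language=target_language, path=target_path)

A = Document(language="ar", path="A.txt")
B = trans(A, "la")        # B is the (Latin) translation of A
```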
For the moment, we will identify the first revisions of A with Aα and Aβ. Similarly we will use Bα and Bβ to refer to the revisions of B that are translations of, respectively, Aα and Aβ . In addition, we can suppose that each revision is stored in a different file and that each file has a unique name. The names of the files are not directly related to the revision IDs. We can visualize the evolution of a document and its translated versions as a graph, as shown in Fig. 1. The graph shows the evolution of A through time, and, the dependency between B and A. Evolution of A and of its translation B. A straight line denote an authorial change leading to a new revision, a weavy line indicates a translation leading to a new document In this simple case, to keep track of the evolution of A and its translation B we need a table with three columns: the revision ID, called υ, the path to the file where the content of revision Aυ is stored, the path to the file where the content of revision Bυ is stored. An empty value in the third column signals that a certain revision has not been translated yet. In the generic case of N different target languages, the table will have 2 + N columns. Figure 2 shows a graph with the evolution of a document and its translations, together with the equivalent revision table. Evolution of A and its translations B, C Each modification to the master document introduces a new row in the revision table. Initially, a new row contains only the revision ID and the path to the content of the master document; the cells referring to the translations are empty. As this revision is translated into other languages, the respective cells are filled with the path to the translated content. This simple revision alignment table is useful to keep track of documents that have a clear evolution path and whose translations are orderly produced from the master document. In reality, though, this scenario is quite rare and more advanced techniques are required to keep track of how a document and its translations have evolved. 3.3 Translations of translations There is a common translation practice that the simple revision alignment table shown in the previous section cannot deal with: the translation of a document from another translation. This happens, for example, when the document C is translated from another translation B instead of using the master document A. While, in theory, the document C is a translation of A, in practice, going though the intermediary translation B may introduce translation errors. It may well happen that B is a valid translation of A and that C is a valid translation of B, but, in fact, C is not a valid translation of A. In mathematical terms, the root problem stems from the fact that the translation function transl is not a transitive function. Therefore, the use of an intermediary translation needs to be recorded and traced. This extra piece of information complicates our tracking system: we can still draw the graph as a tree, but the revision table must be split. Each language will have its own three-column table that associates the revision ID with the file path and with the origin, i.e. the document from which a certain revision has been translated. The updated graph is shown in Fig. 3, the split tables in Fig. 4. An evolutionary tree with intermediary translations Tabular representation of the tree in Fig. 
3 with intermediary translations (all file paths replaced by …) 3.4 Contaminations An even more complicated case is when a translation has been derived from other translations or from multiple sources, i.e. some kind of contamination has occurred. An example of this is the way in which the translation teams in various EU legal departments work (Cavoski 2017; Schäffner 2001). The use of multiple sources is, upon closer inspection, something that is intrinsic to the way translations are updated. When a new revision Aβ is released, its translation Bβ is often produced using the existing translation Bα in addition to Aβ itself. Tracking the fact that a translation may have more than one source has two effects. First, we will have to modify the trans function to accept more than one source document and one source language. In principle, an arbitrary large number of documents could be recorded as being the source for a certain translated revision. Second, the derivation tree is no longer a tree, but an acyclic direct graph (cycles are not possible, because a future document cannot be the source of a past translation). This change also reflects on the derivation table that now allows for multiple source documents to be recorded. An example of the new graph and of the new table are show, respectively, in Fig. 5 and Fig. 6. An evolution graph with translations derived from multiple sources. A dashed line denotes the use of a secondary source, e.g. the translation of a previous version or an intermediary translation Tabular representation of the graph in Fig. 5 with translations derived from multiple sources (all file paths replaced with …) 3.5 Updates to part of a document and partial translations Up to now, we have considered documents as atomic documents: a whole document is updated, a whole document is translated, a whole document is used a source for a translation. This is an extremely simplified view of the reality. Under this model, updating A produces Aβ. In turn, Bβ is produced by translating Aβ from scratch, ignoring the work done to translate the previous revision Bα. In practice, however, what is commonly done is translating only the parts of Aβ that have been changed since Aα. This approach can drastically reduce the amount of work done to update B. Translating only the changed parts of A introduces, however, two new problems: How are we going to divide A in parts? How do we know that certain parts of A have been changed? The first issue, dividing a document into parts, is both a practical problem and a highly theoretical problem. In general, the parts into which a document can be divided depends on the model imposed (or that can be imposed) on the content in that model. For example, an essay may be divided in paragraphs, a poem in lines often grouped in stanzas. In particular, electronic documents can be divided in a plethora of different ways, related to the many different levels of abstractions that can be recognized in a document (Barabucci 2019). Associated with a way to divide a document in multiple parts, there must also be an addressing scheme to refer to these parts. For the moment, we will suppose that documents are nothing more than a sequence of paragraphs. In other words, they can be divided in paragraphs and each paragraph in a document has an associated index; for example Aα/2 refers to the second paragraph in Aα. The second issue, identifying which parts of Aα and Aβ are different is what is known as the diff problem (Barabucci 2013). 
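Before looking at the diff function itself, the paragraph-based addressing scheme just assumed ("Aα/2 refers to the second paragraph of Aα") can be sketched as follows. The blank-line convention for separating paragraphs is an assumption made only for this illustration.

```python
def paragraphs(text: str) -> list[str]:
    """Split a document into its sequence of paragraphs (blank-line separated)."""
    return [p.strip() for p in text.split("\n\n") if p.strip()]

def part(text: str, index: int) -> str:
    """Return the paragraph addressed by a 1-based index, e.g. part(A_alpha, 2)."""
    return paragraphs(text)[index - 1]

A_alpha = "First paragraph.\n\nSecond paragraph.\n\nThird paragraph."
print(part(A_alpha, 2))   # -> "Second paragraph."
```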
A basic diff function returns the indexes of the parts of Aβ that are different from their counterparts in Aα. For example, if Aα was composed of the three paragraphs "X Y Z" and Aβ of "X K W", the diff function would return (2,3), i.e. it would state that the second and third paragraphs are different. In practice, even the most basic version of the output of a proper diff function has to be much more complicated than that because it has to take into account not only in-place modifications but also, as a minimum, the insertion of new paragraphs and the deletion of existing paragraphs. For instance, if Aα were "X Y Z" and Aβ were "X K Y W", the diff function will state that K has been added after X, that W has been added after Y, and that Z has been removed. For the sake of brevity, we will assume that an appropriate diff function exists and we will ignore the problem of how the diff function detects differences between revisions of a document. In practice, the output of a diff function can be seen as an alignment table with two columns: the ID of a paragraph in Aα and the ID of the corresponding paragraph in Aβ. The first column will be empty for paragraphs added in Aβ, the second column will be empty for paragraphs deleted from Aα. Such an alignment table is shown in Fig. 7. Alignment table and differences between Aα (X Y Z) and Aβ (X K Y W) Once one knows which parts of Aβ differ from their respective parts in Aα, one can understand which parts of Bα can be reused and which parts have to be translated from Aβ. It must be noted that this alignment table is unrelated to the previous revision table. It is just a means to understand what is "new" and has to be translated. The alignment table could be thrown away once the target documents have been updated. What about our evolutionary tree and the associated revision tables? Do we need to extend them to take into account that documents may be divided in parts, for example in paragraphs? If we assume that the number and the order of the paragraphs is the same in the master document and in each target translation, we do not need to change anything. 3.6 Different scripts, grammars and subdivision styles To assume that the number and the order of the paragraphs or sentences is going to be the same in all target languages is, however, unrealistic, especially when working with languages that have quite different grammars and rhetorical constructions, for example English and Chinese, or Latin and Arabic. For example, Arabic in the twelfth century was written with no punctuation and there was no clear distinction between paragraphs or sentences. Latin, on the contrary, had already abandoned scripto continua and the division in paragraphs was being introduced via rubrics (Saenger 1997). This means that an Arabic text being translated into Latin was going to be divided into parts by the translator, with the boundaries of these parts being quite arbitrary and with the possibility of parts of the text being transposed for the sake of style.3 There is the need, thus, to track which parts of the document A has been used as the source of which parts of the target document B. To keep track of this information over the whole life of A and B we need a new source-part table, in addition to the tables that we have seen up to now. In this alignment table we are going to record which parts of Aυ are the source for which parts of Bυ. The new source-part table has three columns: the ID υ of the document version, the ID of the part in Aυ, the ID of the part in Bυ. 
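An alignment table of the kind shown in Fig. 7 can be approximated with a general-purpose sequence matcher. The sketch below uses Python's difflib as a stand-in for a proper diff function; it returns 1-based index pairs and leaves one side empty for added or deleted paragraphs.

```python
from difflib import SequenceMatcher

def alignment_table(old_parts, new_parts):
    """Return (old_index, new_index) pairs, 1-based; None marks an
    addition (no old index) or a deletion (no new index)."""
    rows = []
    sm = SequenceMatcher(a=old_parts, b=new_parts, autojunk=False)
    for tag, i1, i2, j1, j2 in sm.get_opcodes():
        if tag == "equal":
            rows += [(i + 1, j + 1) for i, j in zip(range(i1, i2), range(j1, j2))]
        else:
            rows += [(i + 1, None) for i in range(i1, i2)]   # removed from the old revision
            rows += [(None, j + 1) for j in range(j1, j2)]   # added in the new revision
    return rows

# A_alpha = "X Y Z", A_beta = "X K Y W" (paragraphs abbreviated to single letters)
print(alignment_table(list("XYZ"), list("XKYW")))
# -> [(1, 1), (None, 2), (2, 3), (3, None), (None, 4)]
```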
An example of such a source-part alignment table is shown in Fig. 8. Source-part table 3.7 Revisions of translations So far we have assumed that the master document gets updated but translations do not. However, translations do get updated. Some translations even start a tradition of their own. If we accept the idea that a translation of a certain revision can have revisions of its own, then we need to keep track not only of which revisions of B are translations of which revisions of A, but we also need to keep track of how the various revisions of B relate to one another. To store these new pieces of data, we need to modify the structure our tables. A simple list of revisions like we have hitherto used is no longer enough. We must give a unique name to each revision of each document in our graph, like in Fig. 9, and then store the association between them in a new table, such as that shown in Fig. 10. An evolution graph with multiple revisions of a translation. As in the previous figures, a straight line denotes an authorial change leading to a new revision, a weavy line indicates a translation leading to a new document, and a dashed line symbolizes the use of a secondary source in the creation of certain document Tabular representation of the graph in Fig. 9 with a translation undergoing multiple revisions 4 Formalization of the fundamental concepts In the previous section we have seen a variety of problems that are routinely faced by the scholars that want to track the evolution of a document and its translations. These problems were presented in a discursive manner and in an incremental way. In this section, instead, the focus will be on formalizing the separate fundamental concepts that are behind the problem of tracking the parallel evolution of documents. This formalization has a double aim. On the one hand, this formalization represents an actionable guide for the implementation of a system that can be used to track the evolution of documents and translations. On the other hand, such a formalization opens the door to a more precise discussion about these concepts. 4.1 Documents according to CMV+P The first concept one must formalize is that of the document. There are many definitions of what a (digital or non-digital) document is. The function of these definitions is to impose a particular view of what a document should be, which kind of data it should contain and so on. In this paper we will make use of the so-called Content, Model, Variants + Physical level (CMV+P) document model (Barabucci 2019). In its most basic form, the linear version, the CMV+P model describes each document as a stack of abstraction levels. Each level in the CMV+P stack is composed of a) an addressable Content, b) a Model according to which the content has been recorded, and c) a set of Variants used for equivalence matching. At the bottom of this stack is the Physical level, that symbolizes the concrete medium in which the document is embodied. These abstraction levels are connected by transformation functions that map the content of the upper abstraction level to (encoding) and from (decoding) the content of the lower abstraction level. Examples of stacked abstraction levels are the TEI and the XML abstraction levels. At the TEI abstraction level, the Model is the TEI specification and the Content is a tree of titles, sections, paragraphs, figures and so on. At the lower abstraction level XML, the Model is the XML specification and the Content is a tree of nodes, elements, attributes and comments. 
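The TEI-over-XML example can be written down as a small stack of levels. This sketch is only a loose rendering of the CMV+P terminology (Content, Model and Variants per level, on top of a physical embodiment), not a reference implementation of the model.

```python
from dataclasses import dataclass, field

@dataclass
class Level:
    model: str            # the specification the content follows, e.g. "TEI P5"
    content: list         # the addressable Content set at this level
    variants: dict = field(default_factory=dict)   # equivalences used for matching

# Two adjacent abstraction levels of the same document; further levels would
# follow below the XML level, down to the physical embodiment.
tei_level = Level(model="TEI", content=["<text>", "<p>...</p>", "</text>"])
xml_level = Level(model="XML", content=["element text", "element p", "text node"])

document_stack = [tei_level, xml_level]
```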
4.2 Translation function

Translated documents are the result of a translation function that we will call transl. The transl function represents the act of translating a document. At the document level, this translation function takes as input: one master document, written in a certain language; zero or more secondary documents, each written in a certain language; and a target language. These parameters are necessary in order to perform the translation itself. Formally, the translation function at the document level can be written as

$$ transl: Doc_a \times lang_a \times (Doc_i \times lang_i)^n \times lang_{target} \to Doc_{target} $$

where Doc is a document at a certain abstraction level, lang_x is the language of the document x and n is the number of secondary documents used in the translation. For example, the fact that the document Bβ is a translation of Aβ and has been created reusing parts of Bα would be recorded as

$$ transl(A_{\beta}, \lambda_A, B_{\alpha}, \lambda_B, \lambda_B) = B_{\beta} $$

The transl function is not transitive. That means that if C has been translated from B and B has been translated from A, it may be the case that C is not a proper translation of A. More formally, the fact that

$$ transl(A, \lambda_A, \lambda_B) = B, \qquad transl(B, \lambda_B, \lambda_C) = C $$

does not imply that

$$ transl(A, \lambda_A, \lambda_C) = C $$

One implication of the non-transitivity of the transl function is that each intermediary translation must be recorded.

4.3 Equality/equivalence between documents

When can we say that two documents are different? When are two aligned parts in two revisions of a document different? Generically speaking, the answer to these questions is application- and model-specific: it always depends on the point of view of the person asking and on the model used to describe the documents. For example, in certain cases, capitalizing a word may be seen as a trivial change that does not make the document really different, while in other cases such a change may be regarded as critical. The CMV+P document model provides a formal framework to answer these questions. In particular, it provides a formal definition of equality and equivalence. According to CMV+P, two documents are equal at a certain level of abstraction when their respective C sets (Content sets) for that level of abstraction are identical: they contain the same number of elements, in the same order, and all elements are pairwise equal. Conversely, two documents are equivalent at a certain level of abstraction when some of the elements in their C sets are different, but all these different elements have an associated record in their V set (set of Variants) and there is an equivalence between these records. These formal definitions do not solve the problem of understanding whether two documents are different and, if so, how. They provide, however, a formal framework to describe, for example, which phenomena can be considered variants.

4.4 Revision identifiers

Revisions must be uniquely identified, regardless of how one defines what constitutes a new revision. A system of identifiers must be in place to give each of these revisions a referenceable name. The simplest way to identify revisions is to associate a global progressive identifier to each one.
This means that A1 is the first revision of A, A2 is the second and so on. At the same time, B1 is the translation of A1, B2 of A2 and, in general, Bn is the translation of An. This technique has a shortcoming: it can accommodate only cases where each translation has exactly one revision. Another way to generate revision identifiers is to derive them from the content of the document using a mapping function revid that takes a Content set and returns a unique ID. Each ID will then identify with precision only one precise arrangement of the content.

$$ revid: C \to ID $$

The content used to generate these identifiers must be understood as the Content set at a certain abstraction level. Which exact abstraction level should be used is an application-specific choice. In simple cases, e.g. when the abstraction level is a bitstream level, the revid function can be implemented via a hash function such as SHA1.

4.5 Division in parts and indexing

The model of a document is what decides how a document can be divided. Bibles, for example, are divided into books and subdivided into chapters and verses; poems, on the other hand, are divided into stanzas and verses. The fact that the use of different models leads to different kinds of elements is reflected in the CMV+P model, where the kinds of elements present in the Content set at a certain abstraction level are dictated by the Model of that level. An indexing mechanism—i.e. a way to name these different parts—is needed if we want to keep track of the influence of these parts on other documents or if we want to state that they have been modified in a certain revision. There are three kinds of indexing mechanisms: sequential indexes, for documents whose parts have an implicit order (for example, the paragraphs in a letter); path indexes, for tree-like documents with nested structures (for instance books that are divided in chapters, subchapters, paragraphs and so on); direct indexes (or labels), for graph-like documents (hardly found in textual studies).

4.6 Derivation graph

The derivation graph is the data structure that stores the information about the relations between the revisions of the master document and its translations. It is described via three kinds of tables: the revision-alignment table, the source-document table and the source-part table.

4.6.1 Revision-alignment table

The revision-alignment table describes which revision IDs are to be considered equivalent, i.e. all the revisions on the same row are translations of the same content. There is only one revision-alignment table per derivation graph. This table has one column for each language and one row for each revision of the master document. Each cell contains a list of revision IDs. (The published example table lists revision IDs such as 3fx, 11g and 7he under the columns υ_A, υ_B and υ_C.)

4.6.2 Source-document table

The source-document table describes two things: where the content for a certain revision of a certain document is stored, and which other documents have been used to translate this revision. There is one source-document table for each language. This table has three columns: the revision ID, the path to the stored content for that revision in that language, and the list of documents that have been used to translate this content. Each row of this table represents a revision in a certain language. (The published example table lists content paths such as B-18-jan-03.txt and B2-08-sept-04.txt together with source revisions such as A3fx, B11g, Ac9k and B7he.)
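As a minimal, purely illustrative sketch of these structures, the fragment below combines the content-derived identifiers of Sec. 4.4 with simple in-memory versions of the revision-alignment and source-document tables. The revision IDs and file paths are the toy values quoted above, but the way they are grouped into rows is hypothetical, since the published example tables are only partially recoverable here.

```python
import hashlib

def revid(content: bytes) -> str:
    # Content-derived revision identifier (Sec. 4.4): a hash of the
    # bitstream-level Content set, here via SHA1, truncated for readability.
    return hashlib.sha1(content).hexdigest()[:8]

rid = revid(b"X K Y W")   # identifies exactly this arrangement of the content

# Revision-alignment table: one column per language, one row per master revision.
# Every ID on the same row points to a translation of the same content.
revision_alignment = [
    {"A": ["3fx"], "B": ["11g", "7he"], "C": []},
]

# Source-document table for language B: revision ID, stored content path,
# and the documents used as sources for that revision (rows are hypothetical).
source_document_B = [
    {"rev": "11g", "path": "B-18-jan-03.txt",   "sources": ["A3fx"]},
    {"rev": "7he", "path": "B2-08-sept-04.txt", "sources": ["Ac9k", "B11g"]},
]
```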
4.6.3 Source-part table

The source-part table describes the alignment between the parts of two documents at a certain abstraction level. There is one source-part table for each pair of languages. This table has four columns: the revision ID of the master language, the revision ID of the target language, the index of a part in the master language, and the index of the same part in the target language. Each row in this table represents the alignment between a part of the document in the master language and a part of the document in the target language.

This paper presented a series of formalized data structures that can be used to keep track of how a document and its translations have evolved through time. These data structures provide the foundation for a revision tracking system that understands the concept of translation and that can be used in a digital environment to support the work of textual scholars. One practical use of these data structures is to encode the stemma codicum of literary works whose tradition is spread over different languages. In the future, extensions to the data structures shown here will be presented, in order to address traditions where there is no concept of "master document" and to keep track of the so-called "overlay locations", i.e. voluntary aberrations from a normal translation that are preserved even when the master is updated.

http://averroes.uni-koeln.de/
https://thomas-institut.github.io/averroes-tei
For practical examples, see the chunking system used by the Averroes project.

References

Barabucci, G. (2013). A universal delta model. PhD thesis. Università di Bologna. https://doi.org/10.6092/unibo/amsdottorato/5761
Barabucci, G. (2016). CATview (review). Digital Medievalist, 10. https://doi.org/10.16995/dm.57
Barabucci, G. (2017). Not a single bit in common: Issues in collating digital transcriptions of Ibn Rusd's writings in multiple languages (Arabic, Hebrew and Latin). Presented at Digital Humanities Abu Dhabi 2017 Conference. New York University Abu Dhabi.
Barabucci, G. (2019). The CMV+P document model, linear version. In R. Bleier and V. Das Gupta (Eds.), Versioning cultural objects. IDE. (in print).
Cavoski, A. (2017). Interaction of law and language in the EU: Challenges of translating in multilingual environment. Journal of Specialised Translation, 27, 58–74.
Halverson, S. L. (1997). The concept of equivalence in translation studies: Much ado about something. Target: International Journal of Translation Studies, 9(2), 207–233. https://doi.org/10.1075/target.9.2.02hal
Pöckelmann, M., Medek, A., Molitor, P., & Ritter, J. (2015). CATview: Supporting the investigation of text genesis of large manuscripts by an overall interactive visualization tool. Presented at Digital Humanities, DH2015, Sydney.
Saenger, P. (1997). Space between words: The origins of silent reading. Stanford: Stanford University Press. ISBN: 9780804740166.
Schäffner, C. (2001). Translation and the EU: Conditions and consequences. Perspectives: Studies in Translation Theory and Practice, 9(4), 247–261. https://doi.org/10.1080/0907676X.2001.9961422

Cologne Center for eHumanities, Universität zu Köln, Köln, Germany
Barabucci, G. Int J Digit Humanities (2019) 1: 235. https://doi.org/10.1007/s42803-019-00013-9. Springer International Publishing.
Uncertainty and energy-sector equity returns in Iran: a Bayesian and quasi-Monte Carlo time-varying analysis

Babak Fazelabdolabadi

Abstract

This study investigates whether the implied crude oil volatility and the historical OPEC price volatility can impact the return to and volatility of the energy-sector equity indices in Iran. The analysis specifically considers the refining, drilling, and petrochemical equity sectors of the Tehran Stock Exchange. The parameter estimation uses the quasi-Monte Carlo and Bayesian optimization methods in the framework of a generalized autoregressive conditional heteroskedasticity model, and a complementary Bayesian network analysis is also conducted. The analysis takes into account geopolitical risk and economic policy uncertainty data as other proxies for uncertainty. This study also aims to detect different price regimes for each equity index in a novel way using homogeneous/non-homogeneous Markov switching autoregressive models. Although these methods provide improvements by restricting the analysis to a specific price-regime period, they produce conflicting results, rendering it impossible to draw general conclusions regarding the contagion effect on returns or the volatility transmission between markets. Nevertheless, the results indicate that the OPEC (historical) price volatility has a stronger effect on the energy sectors than the implied volatility has. These types of oil price shocks are found to have no effect on the drilling sector price pattern, whereas the refining and petrochemical equity sectors do seem to undergo changes in their price patterns nearly concurrently with future demand shocks and oil supply shocks, respectively, gaining dominance in the oil market.

Introduction

As global financial markets become more integrated, knowledge of their mutual interplay becomes more important for market participants to choose appropriate investment strategies. Furthermore, the financialization of the commodity markets has provided valuable tools for managing portfolio risks (Erb and Harvey 2006; Silvennoinen and Thorp 2013). As a commodity, crude oil has a well-recognized impact on equity markets worldwide. Early studies of the impact of oil prices on aggregate stock returns are limited to specific countries, with mixed findings. Some studies find a positive relationship (Narayan and Narayan 2010; Zhu et al. 2011; Zhu et al. 2014; Silvapulle et al. 2017), others find a negative relationship (Gjerde and Saettem 1999; Sadorsky 1999; Papapetrou 2001; Basher and Sadorsky 2006; Driesprong et al. 2008; Park and Ratti 2008; Chen 2009; Filis 2010; Basher et al. 2012), and still others find no relationship (Huang et al. 1996; Cong et al. 2008; Apergis and Miller 2009; Miller and Ratti 2009; Reboredo and Rivero-Castro 2014; Hatemi et al. 2017). These conflicting results may arise owing to several underlying pitfalls in the studies, such as not considering the level of oil dependence among stock markets, not explicitly considering heterogeneity in the context in which the aggregate index is exposed to gains or losses from changes in the oil price, the nature of the oil price shock considered, and the time-varying element used (Smyth and Narayan 2018). Whereas earlier studies assume linear and symmetrical adjustment processes for the underlying variables (Zhu et al. 2011), the current view favors assuming an asymmetrical effect of oil prices on stock returns (Basher and Sadorsky 2006; Kilian 2008; Kilian and Park 2009; Arouri 2011; Aggarwal et al.
2012; Asteriou and Bashmakova 2013; Narayan and Gupta 2015; Phan et al. 2015; Kang et al. 2016; Li et al. 2017). This view is further supported by the nonlinear characteristics of the oil price-stock return relationship. However, some empirical studies do not support this view, as they either find no asymmetric effects (Bachmeier 2008; Cong et al. 2008; Nandha and Faff 2008; Mollick and Assefa 2013; Reboredo and Rivero-Castro 2014; Asalman and Herrera 2015; Reboredo and Ugolini 2016) or only find evidence for such effects in oil-importing countries in the period after the global financial crisis (Ramos and Veiga 2013). Failure to account for the mixed heterogeneous effects of positive and negative oil price shocks on individual stock returns and merely considering aggregate stock returns may result in these conflicting findings (Tsai 2015). Furthermore, Kilian (2008) seminal work demonstrates that the nature of the oil price structural shock, be it an oil supply shock, aggregate demand shock, or oil-specific demand shock, could be an important factor in the oil–stock interplay, and failure to consider it may result in erroneous findings (Kilian and Park 2009). The period during which each type of stock gains dominance can be obtained by decomposing oil price data (Fueki et al. 2016). Performing this analysis shows that oil supply shocks were mainly influential from the second half of 2013 through the first half of 2015. Currently, oil supply shocks are no longer as important to macroeconomic developments, and aggregate demand shocks are seen as more influential (Kang et al. 2016). Further, empirical evidence shows that the effect of the oil price on equity returns varies considerably across sectors depending on the nature of the structural shock (Broadstock and Filis 2014). The effect of an oil supply shock is found to be positive (Basher et al. 2012; Abhyankar et al. 2013) or negative (Gupta and Modise 2013; Cunado and de Gracia 2014). For oil-specific demand shocks, however, the empirical evidence almost unanimously suggests a negative effect on equity returns in oil-importing countries (Filis et al. 2011; Basher et al. 2012; Abhyankar et al. 2013; Gupta and Modise 2013; Güntner 2014; Koh 2017) and a positive effect for Norway, an oil-exporting nation. Another puzzling feature of the oil–stock relationship is that it exhibits different behaviors in periods of low and high economic volatility; in other words, it varies over time. Quite a few studies focus on this feature, primarily using Markov-switching vector autoregression (VAR) models, regime switching models, wavelet decomposition, or frequency domain methods (Aloui and Jimmazi 2009; Chen 2009; Mohanty et al. 2010; Reboredo 2010; Jammazi and Aloui 2010; Daskalaki and Skiadopolous 2011; Filis et al. 2011; Chang and Yu 2013; Ciner 2013; Broadstock and Filis 2014; Reboredo and Rivero-Castro, 2014; Zhang and Li 2014; Kang et al. 2015; Martin-Barragan et al. 2015; Xu 2015; Zhang 2017; Zhu et al. 2017). Concurrently, studies have examined the role of oil price volatility on stock returns using both generalized autoregressive conditional heteroskedasticity (GARCH)-type models and structural VAR models (Äijö 2008; Arouri and Nguyen 2010; Choi and Hammoudeh 2010; Elyasiani et al. 2011; Chen 2014; Lin et al. 2014; Narayan and Sharma 2014; Kang et al. 2015; Salisu and Oloko 2015; Awartani et al. 2016; Maghyereh et al. 2016; Bouri et al. 2017a, 2017b). 
The findings show that the volatility spillovers across markets can be strong and are significantly influenced by structural breaks, indicating a heterogeneous volatility transmission phenomenon with potential economic significance for hedging purposes. Thus, the recommended approach involves using the implied rather than the historical volatility in analyzing the cross-market association, as the former is a more accurate predictor of investor sentiment.

Many studies have focused on the effect of the oil price–stock returns relationship at the sector level (Cong et al. 2008; Arouri 2011; Elyasiani et al. 2011; Narayan and Sharma 2011; Lee et al. 2012; Li et al. 2012; Moya-Martinez et al. 2014; Caporale et al. 2015; Xu 2015; Zhu et al. 2016; Li et al. 2017; Peng et al. 2017), and many specifically focus on the oil and gas sector (Sadorsky 2001; Boyer and Filion 2007; Cong et al. 2008; Nandha and Faff 2008; Gupta 2016; Li et al. 2017). A key conclusion of these studies is that oil price increases positively affect the stock returns of firms in the oil and gas sector (Smyth and Narayan 2018), with a prolonged nonlinear relationship that strengthens over time (Managi and Okimoto 2013). However, the bulk of these studies focus on developed economies and rarely extend their analyses to emerging or transition markets. At the country level, studies have been conducted for oil-importing (Masih et al. 2011; Cunado and de Gracia 2014; Bouri 2015; Silvapulle et al. 2017) and oil-exporting countries (Bjornland 2009; Arouri and Rault 2012; Mohanty et al. 2011; Ramos and Veiga 2013; Gil-Alana and Yaya 2014; Demirer et al. 2015), as is intuitive. Although their findings vary, these studies generally find that oil prices positively affect equity returns in oil-exporting countries (Smyth and Narayan 2018). Only a limited number of previous studies examine the oil price–stock relationship in Iran, and few focus on the effect on the price index of the Tehran Stock Exchange (TSE) (Najafabadi et al. 2012) or its volatility (Davoudi et al. 2018).

Based on the above literature review, this study makes a two-fold contribution to the existing literature. It provides the first analysis of the oil price impact on equity returns in Iran's oil sector, and it uses a firsthand application of the quasi-Monte Carlo (QMC) method and Bayesian network (BN) theory in this setting. The remainder of the paper proceeds as follows. The next section provides a description and preliminary analysis of the data. Section 3 outlines the research methodology used in the empirical investigation. A discussion of the results is presented in Section 4, followed by the concluding remarks.

Implied oil volatility index

The implied oil volatility index is reported by the Chicago Board of Options Exchange and is constructed on an option basis, disregarding the pricing models. In this process, the market prices of out-of-the-money calls and puts are incorporated into the computation, and the implied crude oil volatility (OVX) is obtained using Eq. 1:
$$ OVX = \left[ \frac{2}{T} \sum_i \frac{\Delta K_i}{K_i^2} e^{RT} Q(K_i) - \frac{1}{T} \left( \frac{F}{K_0} - 1 \right)^2 \right]^{1/2} \times 100 $$

Here, T is the time to maturity of the set of options, F is the forward price level derived from the smallest call-put option premium difference, R is the risk-free interest rate, ΔK_i = (K_{i+1} − K_{i−1})/2 measures the average interval of the two strike prices adjacent to the strike price of option i, K_0 denotes the first strike price below the forward price level F, and Q(K_i) represents the option premium computed as the midpoint of the bid-ask spread of each option with strike price K_i (Maghyereh et al. 2016). This measure provides an accurate reflection of future market volatility. Maghyereh et al. (2016) provide more detailed information on the OVX computation.

Preliminary statistics

Daily data on stock prices were obtained from the TSE archive (Tehran Stock Exchange archive 2018). The archive contains data on several equity sectors, among which the oil drilling (ODR), oil refining (ORI), and chemical/petrochemical (PET) categories are presumably the most relevant to the energy sector. Thus, this study uses data from these sectors. In addition, data on geopolitical risk (GPR), global economic policy uncertainty (EPU) (Economic Policy Uncertainty 2018) and the TSE price index (TPI) are incorporated to account for other proxies for uncertainty in the analysis. The implied oil volatility is inferred from OVX, whereas the historical oil volatility is taken from OPEC oil price data (OPEC), both of which were obtained using the Quandl package (McTaggart et al. 2016). The sample spans September 27, 2009 to November 12, 2018, an interesting period that witnessed several major market events, including the emergence of shale oil as a key market player, the collapse of cooperation among OPEC members, and the start of the global economic recovery (Maghyereh et al. 2016). The first date of the sample is the first available date in the TSE archive. To provide better insight into the sector data used, Fig. 1 plots these data, and Table 1 provides descriptive statistics of their log returns. Plots of OVX and OPEC are given in Fig. 2. For all the model parameters reported herein, the precision estimates are available from the author.

Fig. 1 The equity sector data for ORI (a), PET (b), and ODR (c)
Table 1 Descriptive statistics of the equity log returns
Fig. 2 The historical price data for OVX (a) and OPEC (b)

An augmented Dickey–Fuller test confirms that the log return series are stationary. All the equity returns show leptokurtic characteristics (i.e., kurtosis > 3), suggesting that a GARCH model is an appropriate choice in this setting. The Jarque-Bera test rejects the null hypothesis of a normal distribution for all series at the 1% significance level. The results of ARCH tests also confirm the presence of heteroscedasticity in the data.

We initially considered a simple GARCH model class for studying the return (r_{i,t}) and variance (σ²_{i,t}) dynamics of asset i at time t. In doing so, the OVX and OPEC data were used as external regressors, as in Eqs. 2 and 3:

$$ r_{i,t} = \mu + \eta_{i,1} r_{i,t-1} + \eta_{i,2} r_{i,t-2} + \eta_{i,3}\varepsilon_{i,t-1} + \eta_{i,4}\varepsilon_{i,t-2} + \eta_{i,5}\varepsilon_{i,t-3} + \eta_{i,6} r_{OVX,t-1} + \eta_{i,7} r_{OPEC,t-1}, \qquad \varepsilon_{i,t} = \sigma_{i,t} z_{i,t}, \qquad i \in \{ORI, PET, ODR\} $$

$$ \sigma_{i,t}^2 = \omega + \eta_{i,8}\sigma_{i,t-1}^2 + \eta_{i,9}\sigma_{i,t-2}^2 + \eta_{i,10}\varepsilon_{i,t-1}^2 + \eta_{i,11}\sigma_{OVX,t-1}^2 + \eta_{i,12}\sigma_{OPEC,t-1}^2 $$
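As an illustration of the variance dynamics in Eq. 3, the sketch below implements the conditional-variance recursion directly in NumPy. The coefficient values and the simulated inputs are placeholders, not estimates from this study; in the paper the coefficients are instead obtained by fitting the model with the LBFGS, quasi-Monte Carlo and Bayesian routines described in the methodology section.

```python
import numpy as np

def garch_variance(eps, sig2_ovx, sig2_opec, omega, a1, a2, b1, g1, g2):
    """Conditional-variance recursion of Eq. 3: GARCH(2,1) with exogenous OVX/OPEC terms."""
    T = len(eps)
    sig2 = np.full(T, np.var(eps))             # initialise with the sample variance
    for t in range(2, T):
        sig2[t] = (omega
                   + a1 * sig2[t - 1] + a2 * sig2[t - 2]   # eta_{i,8}, eta_{i,9}
                   + b1 * eps[t - 1] ** 2                  # eta_{i,10}
                   + g1 * sig2_ovx[t - 1]                  # eta_{i,11}
                   + g2 * sig2_opec[t - 1])                # eta_{i,12}
    return sig2

# toy illustration with simulated residuals and constant proxy variances
rng = np.random.default_rng(0)
eps = rng.standard_normal(500) * 0.02
sig2 = garch_variance(eps, np.full(500, 1e-4), np.full(500, 1e-4),
                      omega=1e-6, a1=0.55, a2=0.20, b1=0.15, g1=0.03, g2=0.03)
```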
We also applied dynamic conditional correlation GARCH (DCC-GARCH) and asymmetric DCC-GARCH (ADCC-GARCH) models; Appendix 1 provides an explanation of these methods.

Quasi-Monte Carlo method

Let Ω be a separable topological space in an N_dim-dimensional space. Clearly, any point in Ω can be described by a set of N_dim values, ϖ = (x_1, x_2, …, x_{N_dim}) with x_i ∈ ℝ, 1 ≤ i ≤ N_dim. Let f be a real-valued function on Ω for which a global maximum is sought. Because f is assumed to hold a global maximum in the region of interest, it is bounded from above, and we define its global maximum as:

$$ m(f) = \max_{\varpi \in \Omega} f(\varpi) $$

Let λ_prob be a probability measure on Ω. Furthermore, let S be a sequence of N independent λ_prob-distributed random samples, ϖ_1, ϖ_2, …, ϖ_N ∈ Ω. We define

$$ m_N(f; S) = \max_{1 \le i \le N} f(\varpi_i) $$

The QMC method of quasi-random search uses a deterministic sequence of points ϖ_1, ϖ_2, …, ϖ_N in Ω to find the global optimum. It is proven that m_N(f; S) converges to the global maximum of f with unit probability if f is continuous and if a positive probability measure (λ_prob > 0) is taken for every nonempty subset of Ω (Niederreiter 1994):

$$ \lim_{N \to \infty} m_N(f; S) = m(f) $$

Consider a point set P = (ϖ_1, ϖ_2, …, ϖ_N). The dispersion of the point set P in Ω is defined by:

$$ disp_N(P; \Omega) = \max_{\varpi \in \Omega} \min_{1 \le i \le N} disp(\varpi, \varpi_i) $$

$$ disp(\vartheta, o) = \max_{1 \le i \le N_{dim}} |\vartheta_i - o_i| \qquad \text{for } \vartheta = (y_1, y_2, \dots, y_{N_{dim}}),\; o = (z_1, z_2, \dots, z_{N_{dim}}) $$

In summary, point sets with small dispersion are proven to be suitable for quasi-random search purposes (Niederreiter 1994). In addition, the point set used for QMC should possess nice properties on its discrepancy, which is interpreted as the difference between the empirical and uniform distributions of the QMC point set (Drew and Homem-de-Mello 2006). QMC deals with infinite low discrepancy sequences (LDS), which have the additional property that, for arbitrary N, the initial segments have relatively small discrepancies (Lei 2002). The merits of LDS are twofold: first, they provide uniform sample points avoiding large gaps or clusters, and, second, they know about their predecessors and fill the gaps left from previous iterations (Kucherenko 2006), eliminating empty areas in which no information on the behavior of the underlying problem can be deduced. The choice of LDS is therefore central to the QMC methodology. Different principles have been used to generate LDS sets (Sobol 1976; Bratley et al. 1992; Niederreiter 1994). Whereas other theories, such as Niederreiter's (1994), result in better asymptotic properties (Kucherenko 2006), Sobol's LDS sets provide enhanced reliability in terms of rapid convergence in high dimensionality situations (Jäckel 2002). As a result, Sobol's (1976) method was adopted for LDS generation in this study. A description of this method can be found in Appendix 2 to keep this text within a reasonable length.

Once an LDS set is available, the multistart QMC algorithm implements an inexpensive local search (such as the steepest descent) on the quasi-random points to concentrate the sample, which is subsequently reduced by replacing the worst points (with lower function values) with the new quasi-random points. A completely new local search is then applied to any point retained for a certain number of iterations. Two types of stopping criteria may be used for this algorithm. The first is to stop if no new value for the local maximum is found after several iterations (Glover and Kochenberger 2003). The second is to stop if the number of worse stationary points exceeds the number of stationary points, usually by a fraction of three (Hickernell and Yuan 1997). Appendix 3 provides a more detailed description of the QMC algorithm.
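A minimal sketch of the quasi-random search idea is given below, using SciPy's Sobol generator as the low-discrepancy sequence and a toy two-dimensional objective in place of the GARCH likelihood; the multistart refinement and the stopping rules described above are only indicated in comments.

```python
import numpy as np
from scipy.stats import qmc

def f(x):
    # hypothetical objective to maximize (stand-in for the model likelihood)
    return -np.sum((x - 0.5) ** 2, axis=-1)

sampler = qmc.Sobol(d=2, scramble=True, seed=1)   # Sobol low-discrepancy sequence
points = sampler.random(n=256)                    # 256 quasi-random points in [0, 1)^2
values = f(points)
best = points[np.argmax(values)]                  # m_N(f; S): best point found so far

# A multistart variant would now run a cheap local search (e.g. steepest descent
# or scipy.optimize.minimize) from the best few points, replace the worst points
# with fresh quasi-random points, and repeat until one of the stopping rules holds.
print("best point:", best, "value:", values.max())
```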
Bayesian optimization

Consider the original problem of maximizing the function f on a bounded set Ω. If ϖ is a point in this region, a dataset of the point parameters and their corresponding function evaluations can be iteratively collected, ℘ = {ϖ_i, f(ϖ_i)}_{1 ≤ i ≤ iter}, up to the iter-th iteration. The dataset can subsequently be used to build a response surface. At this point, the optimization of the original function can be replaced by an alternative optimization of the response surface; the difference is that the latter optimization merely requires the evaluation of the learned model rather than that of the original function f. Thus, the first requirement of Bayesian optimization (BO) is adopting a probabilistic model to create the response surface. Such a probabilistic framework allows the use of priors that encode the collected information in a principled way. Moreover, probabilistic models tend to be more robust to the effect of model errors, as they take into account uncertainty about the model (Calandra et al. 2016). In other words, the first step involves selecting a prior distribution over functions that expresses assumptions about the function being evaluated (Snoek et al. 2012). The prior over functions is then updated in light of new observations (Brochu et al. 2010). This analysis uses a Gaussian process (GP) (Rasmussen and Williams 2006) for the prior, meaning that any finite set of function values induces a multivariate Gaussian distribution. Other plausible choices for the prior include the random forest (Criminsi et al. 2011) and the Student-t processes (Shah et al. 2014). In the second step, the previously collected data, ℘, are combined with the prior to obtain a posterior distribution using Bayes' rule. The posterior captures our updated beliefs about the unknown objective function (Brochu et al. 2010). We attempt to maximize the posterior at each step to decrease the distance between the true global maximum and the expected maximum given by the model (Brochu et al. 2010). The next point to evaluate, ϖ_{iter+1}, is determined based on an acquisition function, f_acquisition, computed from the posterior over functions induced by prior knowledge and data (Snoek et al. 2012).
In practice, the next sample is drawn at the maximum of the acquisition function, ϖ_{iter+1} = argmax_ϖ f_acquisition(ϖ). This study uses the acquisition function of Srinivas et al. (2010), which exploits the upper confidence bound, for maximization:

$$ f_{acquisition}(\varpi; \wp, \theta_{GP}) = \mu(\varpi; \wp, \theta_{GP}) + \kappa_{balance}\, \sigma(\varpi; \wp, \theta_{GP}) $$

Here, θ_GP stands for the GP hyper-parameters; μ(ϖ; ℘, θ_GP) and σ²(ϖ; ℘, θ_GP) are the mean and variance functions of the multivariate Gaussian distribution, respectively; and κ_balance is a trade-off parameter (κ_balance > 0) tuned to balance the search in terms of exploitation (where f is expected to be high) against exploration (where f is uncertain). Exploration prevails if the value of κ_balance is increased. In this analysis, this parameter was set such that κ_balance = 2.576. The BO algorithm conducts these steps sequentially (Appendix 4) in search of the global optimum. The mathematical foundations behind the method are thoroughly described in several useful texts (Brochu et al. 2010; Snoek et al. 2012), which also provide illustrations of its implementation.
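A minimal sketch of this sequential scheme is shown below, using scikit-learn's Gaussian process regressor as the probabilistic response surface and the confidence-bound acquisition with κ_balance = 2.576. The one-dimensional toy objective, the random candidate grid and the kernel choice are illustrative assumptions, not the settings used in this study.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def objective(x):
    # hypothetical expensive function to maximize (stand-in for the model fit criterion)
    return -(x - 0.3) ** 2

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(5, 1))            # initial design points on the bounded set
y = np.array([objective(x[0]) for x in X])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
k_balance = 2.576                                  # trade-off parameter from the text

for _ in range(20):                                # sequential BO iterations
    gp.fit(X, y)                                   # posterior given the collected data
    cand = rng.uniform(0.0, 1.0, size=(256, 1))    # candidate points
    mu, sd = gp.predict(cand, return_std=True)
    acq = mu + k_balance * sd                      # confidence-bound acquisition
    x_next = cand[np.argmax(acq)].reshape(1, -1)   # next point to evaluate
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next[0, 0]))

print("best point:", X[np.argmax(y)], "value:", y.max())
```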
Bayesian network

A BN is an implementation of a graphical model in which nodes represent random variables and arrows represent probabilistic dependencies between the nodes (Korb and Nicholson 2004). The BN's graphical structure is a directed acyclic graph (DAG) that enables estimation of the joint probability distribution. For each variable, the DAG defines a factorization of the joint probability distribution into a set of local probability distributions, and the factorization form is given by the BN's Markov property, which assumes that a variable is solely dependent on its parents (Scutari 2010). In this way, the methodology seeks to find a structure along with its parameters. BN structure-learning algorithms fall into two classes: constraint-based algorithms, which analyze the probabilistic relationships implied by the Markov property using conditional independence tests and then construct a graph that satisfies the corresponding d-separation statements, and score-based algorithms, which assign a score to each BN candidate and maximize it with a heuristic algorithm (Scutari 2010). This study tested both algorithm types. For the constraint-based type, we used a Monte Carlo permutation test for the conditional independence test, whereas, in the score-based case, we applied a score-equivalent Gaussian posterior density criterion. The graphical structure-learning of the BNs was implemented using the bnlearn package (Scutari 2017).

Results and discussion

The choice of lag in the GARCH models was made by computing the models over a grid of lag values and selecting the one with the lowest Bayesian information criterion (BIC) value. As our primary deduction on the cross-market association is based on the GARCH model parameters, we meticulously performed their estimation using a variety of techniques, as shown in Tables 2, 3 and 4. As evident from the results, the choice of solution technique clearly affects the estimated parameter values and is therefore extremely critical.

Table 2 The GARCH parameters for the ORI sector
Table 3 The GARCH parameters for the PET sector
Table 4 The GARCH parameters for the ODR sector

In the ORI sector, for example, the limited-memory Broyden–Fletcher–Goldfarb–Shanno (LBFGS)/Bayesian methods estimate a positive effect of OVX/OPEC, whereas the QMC method finds that the effect is insignificant. Furthermore, the LBFGS/Bayesian results estimate that the volatility transmission from OVX/OPEC to ORI is positive, but this result is not supported by the QMC. Similar contradictions arise in other sectors. For example, OVX returns are found to affect PET returns positively and negatively by the LBFGS and Bayesian methods, respectively. The LBFGS/Bayesian method finds that the effect of OPEC returns on PET returns is significant, whereas the QMC method finds minimal effects. The Bayesian method finds that the volatility transmission from OVX/OPEC to PET is significant, whereas the other methods find an insignificant effect. In the ODR sector, the LBFGS and Bayesian methods find a negative effect of OVX returns on ODR returns, but the methods largely disagree on the direction of the effect of OPEC returns on ODR returns. The QMC method, however, considers ODR returns to be insensitive to external factors. We find similar results for volatility transmission in the ODR sector; the LBFGS/Bayesian methods estimate opposite directions of the volatility transmission between OPEC and ODR returns, but both find a positive volatility transmission scheme between OVX and ODR returns. Likewise, the QMC method finds that the ODR volatility is independent of any external factors. The results for the mean dynamic correlation from the DCC/ADCC-GARCH models, shown in Table 5, also indicate a strong positive correlation between TPI and the sectors studied.

Table 5 The mean dynamic correlations, obtained from DCC/ADCC GARCH

To help understand the underlying connectedness, the data were also analyzed in the framework of BN theory, which is essentially a GARCH-free scheme. Interestingly, the results, shown in Figs. 3 and 4, indicate a mostly different set of significant relationships relative to those found in the GARCH results. Specifically, the BN results indicate that OPEC returns do affect returns in the ORI and PET sectors and that the volatility transmission between OPEC returns and the ORI and ODR sectors is positive.

Fig. 3 The extracted Bayesian network for returns in each sector, obtained by the score-based algorithm. The solid/dashed lines refer to statistically significant/insignificant arcs, respectively. The letters represent (A) r_t, (B) r_{t−1}, (C) r_{t−2}, (D) r_{OVX,t−1}, (E) r_{OVX,t−2}, (F) r_{OPEC,t−1}, (G) r_{OPEC,t−2}

Fig. 4 The extracted Bayesian network for volatility in each sector, obtained by the score-based algorithm. The solid/dashed lines refer to statistically significant/insignificant arcs, respectively. The letters represent (A) σ²_t, (B) σ²_{t−1}, (C) σ²_{t−2}, (D) σ²_{OVX,t−1}, (E) σ²_{OVX,t−2}, (F) σ²_{OPEC,t−1}, (G) σ²_{OPEC,t−2}

To further investigate whether the dependency structure changes over time, we conducted a copula analysis by estimating the copula dependence parameters for Gaussian, Student-t, Gumbel, Clayton, and Frank copula models, as shown in Table 6. The results on a monthly basis indicate a positive correlation between GPR and EPU and all the energy sectors studied, as shown in Table 7. The correlation is stronger for GPR than for EPU.
Table 6 The estimated copula dependence parameters for Gaussian, Student-t, Gumbel, Clayton, and Frank copula models
Table 7 The correlation between GPR/EPU and the equity sectors, on a monthly basis

To avoid the potential pitfall of bias in the parameters due to periods of high or low volatility, price regimes were also detected for each sector, as shown in Table 8. We initially determined the number of plausible price regimes for each sector by identifying the case with the lowest BIC value among the results obtained by applying the homogeneous/non-homogeneous Markov switching autoregressive models (Monbet 2018), as shown in Additional file 1: Tables S1-S2 of the supplementary information. The exact timing of each period was later determined by fitting a regression tree model (Therneau et al. 2015).

Table 8 The time span of the detected price regimes, for the equity sectors

Interestingly, some of the time spans identified coincide with times when the oil price was undergoing breaks owing to changes in the type of shocks (i.e., oil supply or future demand shocks). For example, the ORI sector clearly enters a new price regime in early 2015, when future demand shocks became more dominant than supply shocks in influencing the oil market (Fueki et al. 2016). Likewise, the PET sector begins a new price regime in late 2013, which fits chronologically with the timeline of the positive contribution of future supply shocks to the oil price hike (Fueki et al. 2016). The only exception to this rule is the ODR sector, which has been rather insensitive to shocks. For the sake of brevity, however, the results for the selected price periods are given in the supplementary information to the article. When the analysis is restricted to a specific price-regime timeline, the results for the GARCH and BN methods tend to agree more. For instance, both methods now confirm the volatility transmission between OPEC returns and the ORI and ODR sectors. Similarly, both methods confirm that the OPEC returns have a contagion effect on the ORI and PET returns.

Conclusions

The choice of solution technique has a clear effect on the parameter values estimated by GARCH, making it an unreliable platform for analyzing cross-market associations. The Bayesian scheme provides an alternative robust route for understanding the underlying connectivity in the market. We find a positive correlation between GPR, EPU, and all the energy sectors studied. Neither the contagion effect on returns nor the volatility transmission between the markets, however, can be deduced upfront, as the methods yield different results. Nevertheless, the results point to the OPEC (historical) price volatility as having a stronger effect on the energy sectors relative to the implied volatility. The ODR sector is found to be insensitive to the type of oil price shock, whereas the price patterns in the ORI and PET sectors seemingly changed when future demand shocks and oil supply shocks, respectively, gained dominance in the oil market. This latter information may have potential significance for TSE market participants in re-shaping their investment portfolios.
ARCH: Autoregressive conditional heteroscedasticity Bayesian information criterion CBOE: Chicago board of options exchange DCC-GARCH: Dynamic conditional correlation GARCH: Generalized autoregressive conditional heteroskedasticity GP: Gaussian process LBFGS: Low Memory Broyden–Fletcher–Goldfarb–Shanno Low discrepancy sequences NSP: Number of stationary points NWSP: Number of worse stationary points ODR: OPEC: OPEC basket oil price ORI: OVX: Oil volatility index QMC: Quasi-Monte Carlo TPI: Price index of tehran stock exchange TSE: Abhyankar A, Xu B, Wang J (2013) Oil price shocks and the stock market: evidence from Japan. Energy J 34:199–222 Aggarwal R, Akhigbe A, Mohanty SK (2012) Oil price shocks and transportation firm asset prices. Energy Econ 34:1370–1379. https://doi.org/10.1016/j.eneco.2012.05.001 Äijö J (2008) Implied volatility term structure linkages between VDAX, VSMI and VSTOXX volatility indices. Glob Financ J 18(3). https://doi.org/10.1016/j.gfj.2006.11.003 Aloui C, Jimmazi R (2009) The effects of crude oil shocks on stock market shifts behavior: a regime switching approach. Energy Econ 31:789–799. https://doi.org/10.1016/j.eneco.2009.03.009 Apergis N, Miller SM (2009) Do structural oil-market shocks affect stock prices? Energy Econ 31(4):569–575. https://doi.org/10.1016/j.eneco.2009.03.001 Arouri MEH (2011) Does crude oil move stock markets in Europe? A sector investigation. Econ Model 28:1716–1725. https://doi.org/10.1016/j.econmod.2011.02.039 Arouri MEH, Nguyen DK (2010) Oil prices, stock markets and portfolio investment: evidence from sector analysis in Europe over the last decade. Energy Policy 38:4528–4539 Arouri MEH, Rault C (2012) Oil prices and stock markets in GCC countries: empirical evidence from panel analysis. Int J Financ Econ 17:242–253. https://doi.org/10.1002/ijfe.443 Asalman Z, Herrera AM (2015) Oil price shocks and the US stock market: do sign and size matter? Energy J 36(3):171–188 Asteriou D, Bashmakova Y (2013) Assessing the impact of oil returns on emerging stock markets: a panel data approach for ten central and eastern European countries. Energy Econ 38:204–211. https://doi.org/10.1016/j.eneco.2013.02.011 Awartani B, Maghyereh A, Guermat C (2016) The connectedness between crude oil and financial markets: Evidence from implied volatility indices. J Common Mark Stud:11. https://doi.org/10.1016/j.jcomm.2016.11.002 Bachmeier L (2008) Monetary policy and the transmission of oil shocks. J Macroecon 30:1738–1755. https://doi.org/10.1016/j.jmacro.2007.11.002 Basher SA, Haug AA, Sadorsky P (2012) Oil prices, exchange rates and emerging stock markets. Energy Econ 34(1):227–240. https://doi.org/10.1016/j.eneco.2011.10.005 Basher SA, Sadorsky P (2006) Oil price risk and emerging stock markets. Glob Financ J 17(2):224–251. https://doi.org/10.1016/j.gfj.2006.04.001 Bjornland HC (2009) Oil price shocks and stock market booms in an oil exporting country. Scot J Political Econ 56(2):232–254 Bouri E (2015) Oil volatility shocks and the stock markets of oil-importing MENA economies: a tale from the financial crisis. Energy Econ 51:590–598. https://doi.org/10.1016/j.eneco.2015.09.002 Bouri E, Gupta R, Hosseini S, Lau M, C.K. (2017a) Does global fear predict fear in BRICS stock markets? Evidence from a Bayesian Graphical Structural VAR model. Emerg Mark Rev. 
https://doi.org/10.1016/j.ememar.2017.11.004 Bouri E, Jain A, Biswal PC, Roubaud D (2017b) Cointegration and nonlinear causality amongst gold, oil, and the Indian stock market: evidence from implied volatility indices. Resour Policy 52:201–206. https://doi.org/10.1016/j.resourpol.2017.03.003 Boyer MM, Filion D (2007) Common and fundamental factors in stock returns of Canadian oil and gas companies. Energy Econ 29(3):428–453. https://doi.org/10.1016/j.eneco.2005.12.003 Bratley P, Fox BL, Niederreiter H (1992) Implementation and tests of low discrepancy sequences. ACM T Model Comput S 2:195–213. https://doi.org/10.1145/146382.146385 Broadstock DC, Filis G (2014) Oil price shocks and stock market returns: new evidence from the United States and China. J Int Financ Mark Inst Money 33:417–433. https://doi.org/10.1016/j.intfin.2014.09.007 Brochu E, Cora VM, de Freitas N (2010) A tutorial on Bayesian optimization of expensive cost functions, with application to active user modeling and hierarchical reinforcement learning. arXiv:1012.2599 Calandra R, Seyfarth A, Peters J, Deisenroth MP (2016) Bayesian optimization for learning gaits under uncertainty. Ann Math Artif Intell 76:5–23. https://doi.org/10.1007/s10472-015-9463-9 Caporale GM, Ali FM, Spagnolo N (2015) Oil price uncertainty and sectoral stock returns in China: a time-varying approach. China Econ Rev 34:311–321. https://doi.org/10.1016/j.chieco.2014.09.008 Chang KL, Yu ST (2013) Does crude oil price play an important role in explaining stock return behavior? Energy Econ 39:159–168. https://doi.org/10.1016/j.eneco.2013.05.008 Chen CY (2014) Does fear spill over? Asia Pac J Financ Stud 43(4):465–491. https://doi.org/10.1111/ajfs.12055 Chen SS (2009) Do higher oil prices push the stock market into bear territory? Energy Econ 32(2):490–495. https://doi.org/10.1016/j.eneco.2009.08.018 Choi K, Hammoudeh S (2010) Volatility behavior of oil, industrial commodity and stock markets in a regime-switching environment. Energy Policy 38:4388–4399. https://doi.org/10.1016/j.enpol.2010.03.067 Ciner C (2013) Oil and stock returns: frequency domain evidence. J Int Financ Mark Inst Money 23:1–11. https://doi.org/10.1016/j.intfin.2012.09.002 Cong RG, Wei YM, Jiao JL, Fan Y (2008) Relationships between oil price shocks and stock market: an empirical analysis from China. Energy Policy 36:3544–3553. https://doi.org/10.1016/j.enpol.2008.06.006 Criminsi A, Shotton J, Konukoglu E (2011) Decision forests: a unified framework for classification, regression, density estimation, manifold learning and semi-supervised learning. Found Trends® Comput Graph Vis 7(2–3):81–227. https://doi.org/10.1561/0600000035 Cunado J, de Gracia FP (2014) Oil price shocks and stock market returns: evidence for some European countries. Energy Econ 42:365–377. https://doi.org/10.1016/j.eneco.2013.10.017 Daskalaki C, Skiadopolous G (2011) Should investors include commodities in their portfolios after all? New evidence. J Bank Financ 35:2606–2626. https://doi.org/10.1016/j.jbankfin.2011.02.022 Davoudi S, Fazlzadeh A, Fallahi F, Asgharpour H (2018) The impact of oil revenue shocks on the volatility of Iran's stock market return. Int J Energy Econ Policy 8(2):102–110 Demirer R, Jategaonkar SP, Khalifa AAA (2015) Oil price risk exposure and the crosssection of stock returns. Energy Econ 49:132–140. https://doi.org/10.1016/j.eneco.2015.02.010 Drew S, Homem-de-Mello T (2006) Quasi-Monte Carlo strategies for stochastic optimization. 
In: Proceedings of the 2006 winter simulation conference, pp 774–782 Driesprong G, Jacobson B, Matt B (2008) Striking oil: another puzzle? J Financ Econ 89(2):307–327. https://doi.org/10.1016/j.jfineco.2007.07.008 Economic Policy Uncertainty. http://www.policyuncertainty.com; (2018) [Accessed: 12 Nov 2018] Elyasiani E, Mansur I, Odusami B (2011) Oil price shocks and industry stock returns. Energy Econ 33:966–974. https://doi.org/10.1016/j.eneco.2011.03.013 Erb CB, Harvey CR (2006) The strategic and tactical value of commodity futures. Financ Anal J 62:69–97. https://doi.org/10.2307/4480745 Filis G (2010) Macro economy, stock market and oil prices: do meaningful relationships exist among their cyclical fluctuations? Energy Econ 32(4):877–886. https://doi.org/10.1016/j.eneco.2010.03.010 Filis G, Degiannakis S, Floros C (2011) Dynamic correlation between stock market and oil prices: the case of oil-importing and oil-exporting countries. Int Rev Financ Anal 20(3):152–164. https://doi.org/10.1016/j.irfa.2011.02.014 Fueki, T.; Higashi, H.; Higashio, N.; Nakajima, J.; Ohyama, S.; Tamanyu, Y. (2016) Identifying Oil Price Shocks and Their Consequences: Role of Expectations and Financial Factors in the Crude Oil Market. Bank of Japan Working Paper Series 16-E-17, Bank of Japan Gil-Alana L, Yaya OS (2014) The relationship between oil prices and the Nigerian stock market: an analysis based on fractional integration and cointegration. Energy Econ 46:328–333. https://doi.org/10.1016/j.eneco.2014.10.001 Gjerde O, Saettem F (1999) Causal relations among stock returns and macroeconomic variables in a small, open economy. J Int Financ Mark Inst Money 9:61–74. https://doi.org/10.1016/S1042-4431(98)00036-5 Glover F, Kochenberger G (2003) Handbook of metaheuristics. Kluwer Academic Publishers, New York Güntner JH (2014) How do international stock markets respond to oil demand and supply shocks? Macroecon Dyn 18(8):1657–1682. https://doi.org/10.1017/S1365100513000084 Gupta K (2016) Oil price shocks, competition and oil & gas stock returns – global evidence. Energy Econ 57:140–153. https://doi.org/10.1016/j.eneco.2016.04.019 Gupta R, Modise MP (2013) Does the source of oil price shocks matter for south African stock returns? A structural VAR approach. Energy Econ 40:825–831. https://doi.org/10.1016/j.eneco.2013.10.005 Hatemi JA, Al Shayed A, Roca E (2017) The effect of oil prices on stock prices: fresh evidence from asymmetric causality tests. Appl Econ 49(16):1584–1592. https://doi.org/10.1080/00036846.2016.1221045 Hickernell FJ, Yuan Y (1997) A simple multistart algorithm for global optimization. OR Transact 1(2). https://doi.org/10.1011/461346 Huang RD, Masulis RW, Stoll HR (1996) Energy shocks and financial markets. J Futur Mark 16:1–27. https://doi.org/10.1002/(sici)1096-9934(199602)16:1<1::aid-fut1>3.0.co;2-q Jäckel P (2002) Monte Carlo methods in finance. Wiley, New York Jammazi R, Aloui C (2010) Wavelet decomposition and regime shifts: assessing the effect of crude oil shocks on stock market returns. Energy Policy 38:1415–1435. https://doi.org/10.1016/j.enpol.2009.11.023 Kang W, Ratti RA, Vespignaniet J (2016) The impact of oil price shocks on the US stock market: a note on the roles of US and non-US oil production. Econ Lett 145:176–181. https://doi.org/10.1016/j.econlet.2016.06.008 Kang W, Ratti RA, Yoon KH (2015) Time-varying effect of oil market shocks on the stock market. J Bank Financ 61:S150–S163. 
https://doi.org/10.1016/j.jbankfin.2015.08.027 Kilian L (2008) Exogenous oil supply shocks: how big are they and how much do they matter for the US economy? Rev Econ Stat 90:216–240. https://doi.org/10.1162/rest.90.2.216 Kilian L, Park C (2009) The impact of oil price shocks on the US stock market. Int Econ Rev 50(4):1267–1287. https://doi.org/10.1111/j.1468-2354.2009.00568.x Koh WC (2017) How do oil supply and demand shocks affect Asian stock markets? Macroecon Finance Emerg Mark Econ 10:1–18. https://doi.org/10.1080/17520843.2015.1135819 Korb K, Nicholson A (2004) Bayesian artificial intelligence. Chapman and Hall, London Kucherenko S (2006) Application of quasi Monte Carlo methods in global optimization. Glob Optim:111–133. https://doi.org/10.1007/0-387-30528-9_5 Lee BJ, Yang CW, Huang BN (2012) Oil price movements and stock markets revisited: a case of sector stock price indexes in the G7 countries. Energy Econ 34:1284–1300. https://doi.org/10.1016/j.eneco.2012.06.004 Lei G (2002) Adaptive Random Search in Quasi-Monte Carlo Methods for Global Optimization. Comput Math Appl (43):747–754. https://doi.org/10.1016/S0898-1221(01)00318-2 Li Q, Cheng K, Yang Z (2017) Response pattern of stock returns to international oil price shocks from the perspective of China's oil industrial chain. Appl Energy 185:1821–1831. https://doi.org/10.1016/j.apenergy.2015.12.060 Li SF, Zhu HM, Yu K (2012) Oil prices and stock market in China: a sector analysis using panel cointegration with multiple breaks. Energy Econ 34:1951–1958. https://doi.org/10.1016/j.eneco.2012.08.027 Lin B, Wesseh PK, Appiah MO (2014) Oil price fluctuation, volatility spillover and the Ghanian equity market: implication for portfolio management and hedging effectiveness. Energy Econ 42:172–182. https://doi.org/10.1016/j.eneco.2013.12.017 Maghyereh AI, Awartani B, Bouri E (2016) The directional volatility connectedness between crude oil and equity markets: new evidence from implied volatility indexes. Energy Econ 6(57):78–93. https://doi.org/10.1016/j.eneco.2016.04.010 Managi S, Okimoto T (2013) Does the price of oil interact with clean energy prices in the stock market? Jpn World Econ 27:1–9 Martin-Barragan B, Ramos SB, Veiga H (2015) Correlations between oil and stock markets: a wavelet-based approach. Econ Model 50:212–227. https://doi.org/10.1016/j.econmod.2015.06.010 Masih R, Peters S, De Mello L (2011) Oil price volatility and stock price fluctuations in an emerging market: evidence from South Korea. Energy Econ 33(5):975–986 McTaggart, R., Daroczi, G., Leung, C. (2016). Quandl: API Wrapper for Quandl.com. R package version 2.8.0. http://CRAN.R-project.org/package=Quandl Miller JI, Ratti RA (2009) Crude oil and stock markets: Stability, instability, and bubbles. Energy Econ 31(4):559–568. https://doi.org/10.1016/j.eneco.2009.01.009 Mohanty SK, Nandha M, Bota G (2010) Oil shocks and stock returns: the case of the central and eastern European (CEE) oil and gas sectors. Emerg Mark Rev 11:358–372. https://doi.org/10.1016/j.ememar.2010.06.002 Mohanty SK, Nandha M, Turkistani AQ, Alaitani MY (2011) Oil price movements and stock market returns: evidence from gulf cooperation council (GCC) countries. Glob Financ J 22:42–55. https://doi.org/10.1016/j.gfj.2011.05.004 Mollick AV, Assefa TA (2013) US stock returns and oil prices: the tale from daily data and the 2008–2009 financial crisis. Energy Econ 36:1–18. https://doi.org/10.1016/j.eneco.2012.11.021 Monbet V. (2018) NHMSAR: Non-Homogeneous Markov Switching Autoregressive Models. 
R package version 1.12. URL http://CRAN.R-project.org/package=NHMSAR Moya-Martinez P, Ferrer-Lapena R, Escribano-Sotos F (2014) Oil price risk in the Spanish stock market: an industry perspective. Econ Model 37:280–290. https://doi.org/10.1016/j.econmod.2013.11.014 Najafabadi AP, Qazvini M, Ofoghi R (2012) The impact of oil and gold prices' shock on Tehran stock exchange: a copula approach. Iran J Econ Stud 1(2):23–47 Nandha M, Faff R (2008) Does oil move equity prices? A global view. Energy Econ 30(3):986–997. https://doi.org/10.1016/j.eneco.2007.09.003 Narayan PK, Gupta R (2015) Has oil price predicted stock returns for over a century? Energy Econ 48:18–23. https://doi.org/10.1016/j.eneco.2014.11.018 Narayan PK, Narayan S (2010) Modelling the impact of oil prices on Vietnam's stock prices. Appl Energy 87:356–361. https://doi.org/10.1016/j.apenergy.2009.05.037 Narayan PK, Sharma SS (2011) New evidence on oil price and firm returns. J Bank Financ 35(12):3253–3262. https://doi.org/10.1016/j.jbankfin.2011.05.010 Narayan PK, Sharma SS (2014) Firm return volatility and economic gains: the role of oil prices. Econ Model 38:142–151. https://doi.org/10.1016/j.econmod.2013.12.004 Niederreiter H (1994) Random number generation and quasi-Monte Carlo methods. Society for Industrial and Applied Mathematics, Philadelphia Papapetrou E (2001) Oil price shocks, stock market, economic activity and employment in Greece. Energy Econ 23:511–532. https://doi.org/10.1016/S0140-9883(01)00078-0 Park J, Ratti RA (2008) Oil price shocks and stock markets in the US and 13 European countries. Energy Econ 30(5):2587–2608. https://doi.org/10.1016/j.eneco.2008.04.003 Peng C, Zhu MM, Jia XH, You WH (2017) Stock price syncronicity to oil shocks across quantiles: evidence from Chinese oil firms. Econ Model 61:248–259. https://doi.org/10.1016/j.econmod.2016.12.018 Phan DHB, Sharma SS, Narayan PK (2015) Oil price and stock returns of consumers and producers of crude oil. J Int Financ Mark Inst Money 34:245–262. https://doi.org/10.1016/j.intfin.2014.11.010 Ramos SB, Veiga H (2013) Oil price asymmetric effects: answering the puzzle in international stock markets. Energy Econ 38:136–145. https://doi.org/10.1016/j.eneco.2013.03.011 Reboredo JC (2010) Nonlinear effects of oil shocks on stock returns: a Markov switching approach. Appl Econ 42:3735–3744. https://doi.org/10.1080/00036840802314606 Reboredo JC, Rivero-Castro M (2014) Wavelet-based evidence of the impact of oil prices on stock returns. Int Rev Econ Financ 29:145–176. https://doi.org/10.1016/j.iref.2013.05.014 Reboredo JC, Ugolini A (2016) Quantile dependence of oil price movements and stock returns. Energy Econ 54:33–49. https://doi.org/10.1016/j.eneco.2015.11.015 Sadorsky P (1999) Oil price shocks and stock market activity. Energy Econ 21(5):449–469. https://doi.org/10.1016/S0140-9883(99)00020-1 Sadorsky P (2001) Risk factors in stock returns of Canadian oil and gas companies. Energy Econ 23:17–28. https://doi.org/10.1016/S0140-9883(00)00072-4 Salisu A, Oloko TF (2015) Modelling oil price—US stock nexus: a VARMA-BEKKAGARCH approach. Energy Econ 50:1–12 Scutari M (2010) Learning Bayesian Networks with the bnlearn R Package. J Stat Softw 35(3):1–22 URL http://www.jstatsoft.org/v35/i03 Scutari M (2017) Bayesian Network Constraint-Based Structure Learning Algorithms: Parallel and Optimized Implementations in the bnlearn R Package. J Stat Softw 77(2):1–20. 
https://doi.org/10.18637/jss.v077.i02 Shah A, Wilson AG, Ghahramani Z (2014) Student-t processes as alternatives to Gaussian processes. Proc Int Conf Artif Intell Stat, PMLR 33:877–885 Silvapulle P, Smyth R, Zhang X, Fenech JP (2017) Nonparametric panel data model for crude oil and stock prices in net oil importing countries. Energy Econ 67:255–267. https://doi.org/10.1016/j.eneco.2017.08.017 Silvennoinen A, Thorp S (2013) Financialization, crisis and commodity correlation dynamics. J Int Financ Mark Inst Money 24:42–65. https://doi.org/10.1016/j.intfin.2012.11.007 Smyth R, Narayan PK (2018) What do we know about oil prices and stock returns? Int Rev Financ Anal. https://doi.org/10.1016/j.irfa.2018.03.010 Snoek J, Larochelle H, Adams RP (2012) Practical bayesian optimization of machine learning algorithms. Adv Neural Inf Proces Syst 2:2951–2959 Sobol IM (1976) Uniformly distributed sequences with an additional uniform property U.S.S.R. Comput Maths Math Phys (16):236–242. https://doi.org/10.1016/0041-5553(76)90154-3 Srinivas N, Krause A, Kakade S, Seeger M (2010) Gaussian process optimization in the bandit setting: no regret and experimental design. Proc ICML 10:1015–1022 Tehran Stock Exchange archive. http://tse.ir/archive.html; (2018) [Accessed 12 Nov 2018] Therneau, T.; Atkinson, B.; Ripley, B. (2015) rpart: Recursive Partitioning and Regression Trees. Rpackage version 4.1-10. URL http://CRAN.R-project.org/package=rpart Tsai CL (2015) How do US stock returns respond differently to oil price shocks pre-crisis, within the financial crisis, and post-crisis? Energy Econ 50:47–62. https://doi.org/10.1016/j.eneco.2015.04.012 Xu B (2015) Oil prices and UK industry-level returns. Appl Econ 47:2608–2627. https://doi.org/10.1080/00036846.2015.1008760 Zhang B (2017) How do great shocks influence the correlation between oil and international stock markets? Appl Econ 49:1513–1526. https://doi.org/10.1080/00036846.2016.1221040 Zhang B, Li XM (2014) Recent hikes in oil equity market correlations: transitory or permanent? Energy Econ 53:305–315. https://doi.org/10.1016/j.eneco.2014.03.011 Zhu HM, Guo Y, You Y, Xu Y (2016) The heterogeneity dependence between crude oil price changes and industry stock market returns in China: evidence from a quantile regression approach. Energy Econ 55:30–41. https://doi.org/10.1016/j.eneco.2015.12.027 Zhu HM, Li R, Li S (2014) Modelling dynamic dependence between crude oil prices and Asia Pacific stock returns. Int Rev Econ Financ 29:208–223. https://doi.org/10.1016/j.iref.2013.05.015 Zhu HM, Li R, Yu K (2011) Crude oil shocks and stock markets: a panel threshold cointegration approach. Energy Econ 33(5):987–994. https://doi.org/10.1016/j.eneco.2011.07.002 Zhu HM, Su X, You W, Ren Y (2017) Asymmetric effects of oil price shocks on stock returns: evidence from a two-stage Markov regime-switching approach. Appl Econ 49:2491–2507. https://doi.org/10.1080/00036846.2016.1240351 Center for Exploration and Production Studies and Research, Research Institute of Petroleum Industry (RIPI), P.O. Box 14665-1998, Tehran, Iran Babak Fazelabdolabadi Search for Babak Fazelabdolabadi in: The work was solely done by the corresponding author, BF. The author read and approved the final manuscript. Correspondence to Babak Fazelabdolabadi. The author declares that he has no competing interests. Additional file 1: Figure S1. The ADCC-GARCH dynamic conditional correlation between OVX and ORI (a), PET (b), and ODR (c). Figure S2. 
The extracted Bayesian network for returns in each sector in selected price regime periods - ['2015-03-17'-'2018-06-12'] (ORI); ['2013-11-18'-'2018-06-12'] (PET); ['2009-07-27'-'2018-07-23'] (ODR) - obtained by the score-based algorithm. The solid/dashed lines refer to statistically significant/insignificant arcs, respectively. The letters represent (A)rt, (B)rt − 1, (C)rt − 2, (D)rOVX, t − 1, (E)rOVX, t − 2, (F) rOPEC, t − 1, (G) rOPEC, t − 2. Figure S3. The extracted Bayesian network for volatility in each sector in selected price regime periods - ['2015-03-17'-'2018-06-12'] (ORI); ['2013-11-18'-'2018-06-12'] (PET); ['2009-07-27'-'2018-07-23'] (ODR) - obtained by the score-based algorithm. The solid/dashed lines refer to statistically significant/insignificant arcs, respectively. The letters represent (A)\( {\sigma}_t^2 \), (B)\( {\sigma}_{t-1}^2 \), (C)\( {\sigma}_{t-2}^2 \), (D)\( {\sigma}_{OVX,t-1}^2 \), (E)\( {\sigma}_{OVX,t-2}^2 \), (F) \( {\sigma}_{OPEC,t-1}^2 \), (G) \( {\sigma}_{OPEC,t-2}^2 \). Table S1. The BIC for different price regimes, under homogeneous Markov switching autoregressive models. Table S2. The BIC for different price regimes, under non-homogeneous Markov switching autoregressive models. Table S3. Descriptive statistics of the equity log returns in the selected periods. Table S4. The GARCH parameters for the ORI sector in the period ['2015-03-17'-'2018-06-12']. Table S5. The GARCH parameters for the PET sector in the period ['2013-11-18'-'2018-06-12']. Table S6. The GARCH parameters for the ODR sector in the period ['2009-07-27'-'2018-07-23']. Table S7. The mean dynamic correlations, obtained from DCC/ADCC GARCH, over the corresponding selected periods. Table S8. The estimated copula dependence parameters for Gaussian, Student-t, Gumbel, Clayton, and Frank copula models, over the corresponding selected periods. (DOCX 470 kb) Dynamic Conditional Correlation (DCC-GARCH) The model specifies the conditional mean and the conditional variance dynamics of an asset at time t, as follows (Engle and Sheppard, 2001): $$ {r}_{i,t}={\mu}_i+{\eta}_{i,13}{r}_{i,t-1}+{\eta}_{i,14}{r}_{i,t-2}+{\varepsilon}_{i,t} $$ $$ {\sigma}_{i,t}^2={\omega}_i+{\eta}_{i,15}{\sigma}_{i,t-1}^2+{\eta}_{i,16}{\varepsilon}_{i,t-1}^2 $$ $$ {\displaystyle \begin{array}{l}i\in \left\{ OVX, OPEC, ORI, PET, ODR\right\}\\ {}{\varepsilon}_t={H}_t^{1/2}z{}_t\end{array}} $$ $$ {H}_t={D}_t{R}_t{D}_t $$ $$ {R}_t={Q}_t^{\ast^{-1}}{Q}_t{Q}_t^{\ast^{-1}} $$ $$ {Q}_t=\left(1-{\eta}_{17}-{\eta}_{18}\right)\overline{Q}+{\eta}_{17}{z}_t{z}_t^{\prime }+{\eta}_{18}{Q}_{t-1} $$ Here, zt is a vector of independent and identically distributed(IID) errors, Ht is the conditional covariance matrix, Rtis the conditional correlation matrix, Dtis a diagonal matrix with conditional volatilities on its main diagonal, Qt = [qij, t]; i, j ∈ {OVX, OPEC, ORI, PET, ODR}, is a time-varying covariance matrix, \( {Q}_t^{\ast } \)is a diagonal matrix with the square root of the diagonal elements of Qt at the diagonal, \( \overline{Q} \)is an unconditional covariance matrix of standardized residuals. 
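Before turning to the asymmetric extension below, the recursion for Qt and the implied dynamic correlations can be illustrated in a few lines. The following is a minimal numerical sketch only: the standardized residuals and the parameter values η17 and η18 are made-up placeholders, not estimates from the paper.

```python
import numpy as np

# Minimal sketch of the DCC recursion defined above: Q_t and the implied
# dynamic correlation rho_ij,t, driven by standardized residuals z_t.
# The residuals and the parameters eta17, eta18 below are illustrative only.
rng = np.random.default_rng(0)
T, k = 500, 2                      # time steps, number of series
z = rng.standard_normal((T, k))    # stand-in for GARCH-standardized residuals
eta17, eta18 = 0.05, 0.90          # must satisfy eta17 + eta18 < 1
Q_bar = np.cov(z, rowvar=False)    # unconditional covariance of the residuals

Q = Q_bar.copy()
rho_12 = np.empty(T)
for t in range(T):
    Q = (1.0 - eta17 - eta18) * Q_bar + eta17 * np.outer(z[t], z[t]) + eta18 * Q
    d = np.sqrt(np.diag(Q))
    R = Q / np.outer(d, d)         # R_t = Q*^-1 Q_t Q*^-1
    rho_12[t] = R[0, 1]            # dynamic correlation between series 1 and 2

print(rho_12[-5:])
```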
For the Asymmetric-DCC (ADCC-GARCH), the conditional variance is modeled by: $$ {\sigma}_{i,t}^2={\omega}_i+{\eta}_{i,15}{\sigma}_{i,t-1}^2+{\eta}_{i,16}{\varepsilon}_{i,t-1}^2+{\eta}_{i,19}{\varepsilon}_{i,t-1}^2I\left({\varepsilon}_{i,t-1}<0\right) $$ The (dynamic) correlation estimator at time t, \( {\widehat{\rho}}_{ij,t} \), is given by: $$ {\widehat{\rho}}_{ij,t}=\frac{q_{ij,t}}{\sqrt{q_{ii,t}{q}_{jj,t}}} $$
The construction of Sobol's low-discrepancy sequences
The initial stage in generating a Sobol LDS set deals with operation on a set of integers in the interval [1, 2^b − 1], where b represents the number of bits in an unsigned integer on the operating computer (typically b = 32). Let xnk be the nth draw of one of the Sobol' integers in dimension k. Generation of numbers in Sobol's method is based on a set of direction integers. A distinct direction integer is considered for each of the b bits in the binary integer representation. Let vkl denote the lth direction integer for dimension k. In order to construct Sobol' numbers, one needs to evaluate the direction integers first. This process involves the binary coefficients of a selected primitive polynomial modulo two for each dimension (Jäckel 2002). Take pk as the primitive polynomial modulo two for dimension k with degree gk (defined by Eq. 18). We assume ak0 … akgk represent the coefficients of pk, with ak0 being the coefficient of the highest monomial term. $$ {p}_k(z)=\sum \limits_{j=0}^{g_k}{a}_{kj}{z}^{g_k-j} $$ In each dimension, the first gk direction integers vkl for l = 1 … gk are allowed to be freely chosen for the associated pk of the dimension, provided that two conditions are met. First, the lth leftmost bit of the direction integer vkl must be set. Second, only the l leftmost bits can be non-zero, where the leftmost bit refers to the most significant one in a bit field representation. All subsequent direction integers are calculated from a recurrence relation (Eq. 19) (Jäckel 2002): $$ {v}_{kl}=\frac{v_{k\left(l-{g}_k\right)}}{2^{g_k}}{\oplus}_2{\sum \limits_{j=1}^{g_k}}^{\oplus_2}{a}_{kj}{v}_{k\left(l-j\right)}\;\mathrm{for}\ l>{g}_k $$ Hereby, ⊕2 represents the binary addition of integers modulo two (often referred to in the computer science literature as the XOR gate), and \( {\sum \limits_{\dots}^{\dots}}^{\oplus_2} \) stands for a set of XOR operations. The procedure is to right-shift the direction integer vk(l − gk) by gk bits and then perform the XOR operation with a selection of the un-shifted direction integers vk(l − j) for j = 1 … gk. The summation is performed analogously to the conventional ∑ summation operator. The only remaining requirement for the algorithm is the generating integer of the nth draw. For this purpose, the natural choice appears to be the draw number itself, n. Nevertheless, any other sequence with a unique integer for each new draw is equally useful (Jäckel 2002). Once all the preliminaries are set, the Sobol' integers, for the s dimensions of interest, are generated by (Jäckel 2002): $$ {x}_{nk}={\sum \limits_{j}}^{\oplus_2}{v}_{kj}, $$ where the XOR-sum runs over all j for which the jth bit of the generating integer is set (counting from the right). Jäckel (2002) has provided tabulated initialization numbers for generating Sobol' integers, up to a dimension of 32 (Table 9).
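As a practical note, library implementations built on tabulated direction numbers of this kind are readily available. The sketch below uses SciPy's Sobol generator instead of hand-coding the XOR recurrence above; it illustrates the kind of point set obtained and is not the construction used in the paper.

```python
import numpy as np
from scipy.stats import qmc

# Sketch only: instead of hand-coding the direction-integer XOR recurrence
# described above, SciPy's Sobol generator (itself built on tabulated
# direction numbers) yields the same kind of low-discrepancy points,
# already rescaled to the [0, 1]^s hypercube.
s = 5                                   # number of dimensions of interest
sampler = qmc.Sobol(d=s, scramble=False)
points = sampler.random_base2(m=7)      # 2^7 = 128 draws
print(points.shape)                     # (128, 5)
print(qmc.discrepancy(points))          # lower values indicate a more uniform set
```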
The generated sequence, using these initialization numbers, possesses the property that for any binary segment of the s-dimensional sequence of length 2^s there is exactly one draw in each of the 2^s hypercubes which result from subdividing the unit hypercube along each of its unit-length extensions into half (Jäckel 2002). Table 9 An instance of the initialisation numbers for generating Sobol's LDS, up to a dimension of 32 (Jäckel 2002) Once generated, conversion of Sobol' integers to other scales is fairly straightforward. For example, they can be converted to the [0, 1] scale by dividing the integers by 2^b.
The algorithm to perform Quasi-Monte Carlo maximization
Assume \( {\varpi}_i^{iter} \) represents the best solution for the ith point at the iterth iteration; also consider FBEST as the best (maximum) value of f recorded up to the iterth iteration. A detailed description of the QMC procedure is then given as follows (Hickernell and Yuan 1997):
Step-0 Initialize. Input the number of initial points, N, the number of points with the best (highest) objective function values to retain in each iteration, Nbest, and the desired number of iterations to be done for local search on each of the points, Niter,local.search. Set the number of iterations, iter = 0. Set NSP = 0; NSWP = 0; NTIX(j) = 0 for (1 ≤ j ≤ N).
Step-1 Concentrate. Obtain a new point set by applying Niter,local.search iteration(s) of an inexpensive local search to each of the \( {\varpi}_i^{iter} \) points (1 ≤ i ≤ N).
Step-2 Reduce. Find Ξ(iter) ⊂ {1, … , N} such that Ξ(iter) has Nbest elements and that \( f\left({\varpi}_i^{iter}\right)\ge f\left({\varpi}_j^{iter}\right)\forall i\in \Xi (iter) \) and ∀j ∉ Ξ(iter). If j ∈ Ξ(iter), set NTIX(j) = NTIX(j) + 1. If j ∉ Ξ(iter), set NTIX(j) = 0.
Step-3 Find local maximum. For j = 1, … , N such that NTIX(j) ≥ 2: set NTIX(j) = 0. If NSP = 0 or \( f\left({\varpi}_j^{iter}\right)\ge FBEST+{10}^{-4} \), then, starting from \( {\varpi}_j^{iter} \), perform a local optimization search to obtain the local maximum of the point, \( {\varpi}_{j, local.\max}^{iter} \). If \( f\left({\varpi}_{j, local.\max}^{iter}\right)> FBEST \), then set \( NSP= NSP+1; NSWP=0; FBEST=f\left({\varpi}_{j, local.\max}^{iter}\right) \). Set NSWP = NSWP + 1. If \( \frac{NSWP}{NSP}\ge 3 \), then stop (success).
Step-4 Sample additional points. For j = 1, 2, … , N: if NTIX(j) = 0, generate \( {\varpi}_j^{iter+1} \) by the Sobol LDS technique; otherwise, set \( {\varpi}_j^{iter+1}={\varpi}_j^{iter} \). Set iter = iter + 1. If the total number of function calls is reached, then stop (failure). Go to Step-1.
The algorithm to perform Bayesian Optimization
Assume that the upper confidence bound scheme is chosen as the acquisition function. The algorithm to perform Bayesian optimization follows the procedure below (Brochu et al. 2010):
Step-0 Initialize. Input the desired number of iterations to be done for the BO search, Niter. Input the tunable parameter κbalance (Eq. 14). Sample the (objective) function at point ϖiter. Form the data set, ℘ = {ϖiter, f(ϖiter)}.
Step-1 Repeat. Find the next point to sample, ϖiter + 1, by optimizing the acquisition function over the GP. $$ {\varpi}^{iter+1}=\arg {\max}_{\varpi }{f}_{acquisition}\left({\left.\varpi \right|}_{\wp_{1: iter}}\right) $$ Sample the (objective) function at point ϖiter + 1. Augment the data set ℘. Update the GP prior. If the total number of desired iterations is reached, then stop.
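A compact way to see these steps in action is the toy loop below, which pairs a Gaussian-process surrogate with the upper-confidence-bound acquisition described above. The one-dimensional objective, the search grid, the kernel length scale and the value of κ are arbitrary choices for illustration and do not correspond to the optimization problem treated in the paper.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Illustrative objective to maximize; in the paper this role is played by the
# model-fit criterion being tuned, which is not reproduced here.
f = lambda x: -(x - 0.3) ** 2 + 0.05 * np.sin(20 * x)

grid = np.linspace(0, 1, 200).reshape(-1, 1)   # candidate points
X = np.array([[0.1], [0.9]])                   # initial samples
y = f(X).ravel()
kappa, n_iter = 2.0, 15                        # UCB trade-off parameter, iterations

for _ in range(n_iter):
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.1)).fit(X, y)
    mu, sigma = gp.predict(grid, return_std=True)
    x_next = grid[np.argmax(mu + kappa * sigma)]   # UCB acquisition maximizer
    X = np.vstack([X, [x_next]])
    y = np.append(y, f(x_next))

print(X[np.argmax(y)], y.max())                # best point found and its value
```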
Fazelabdolabadi, B. Uncertainty and energy-sector equity returns in Iran: a Bayesian and quasi-Monte Carlo time-varying analysis. Financ Innov 5, 12 (2019). https://doi.org/10.1186/s40854-019-0128-2. Accepted: 25 February 2019
Energy-efficient memcapacitor devices for neuromorphic computing
Kai-Uwe Demasius, Aron Kirschen & Stuart Parkin
Nature Electronics volume 4, pages 748–756 (2021)
Data-intensive computing operations, such as training neural networks, are essential for applications in artificial intelligence but are energy intensive. One solution is to develop specialized hardware onto which neural networks can be directly mapped, and arrays of memristive devices can, for example, be trained to enable parallel multiply–accumulate operations. Here we show that memcapacitive devices that exploit the principle of charge shielding can offer a highly energy-efficient approach for implementing parallel multiply–accumulate operations. We fabricate a crossbar array of 156 microscale memcapacitor devices and use it to train a neural network that could distinguish the letters 'M', 'P' and 'I'. Modelling these arrays suggests that this approach could offer an energy efficiency of 29,600 tera-operations per second per watt, while ensuring high precision (6–8 bits). Simulations also show that the devices could potentially be scaled down to a lateral size of around 45 nm.
Brain-inspired computing—often termed neuromorphic computing—based on artificial neural networks and their hardware implementations could be used to solve a broad range of computationally intensive tasks. Neuromorphic computing can be traced back to the 1980s (refs. 1,2), but the field gained considerable momentum after the development of memristive devices3 and the proposal of convolutional layers in deep neural networks at the algorithmic level4,5. Since then, several resistive neuromorphic systems and devices have been implemented using oxide materials6,7,8, phase-change memory9, spintronic devices10,11 and ferroelectric devices (tunnel junctions12,13 and ferroelectric field-effect transistors (FeFETs)14,15), and such systems—namely, ferroelectric tunnel junctions13 and SONOS (that is, silicon–oxide–nitride–oxide–silicon) transistors16—have exhibited energy efficiencies of up to 100 tera-operations per second per watt (TOPS W–1). All these approaches rely on the analogue storage of synaptic weights, which can be used in multiplication operations, and use Kirchhoff's current law for the summation of currents implemented via crossbar arrays17.
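To make the last point concrete, the multiply–accumulate operation of a resistive crossbar amounts to a conductance-weighted sum of the input voltages on each bit line. The sketch below illustrates this with arbitrary conductance and voltage values; it is a generic illustration, not a model of any specific device cited above.

```python
import numpy as np

# In a resistive crossbar, Ohm's law performs the multiplications and
# Kirchhoff's current law performs the additions: each column (bit-line)
# current is I_j = sum_i G_ij * V_i. All values below are arbitrary.
rng = np.random.default_rng(0)
G = rng.uniform(1e-6, 1e-5, size=(4, 3))   # conductances of a 4 x 3 array, in S
V = np.array([0.2, -0.1, 0.3, 0.05])       # input voltages on the rows, in V
I = G.T @ V                                # bit-line currents, in A
print(I)
```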
Memcapacitive devices18 are similar to memristive devices but are based on a capacitive principle, and could potentially offer a lower static power consumption than memristive devices. There have been theoretical proposals for memcapacitor devices18,19,20,21,22, but few practical implementations23,24,25,26. Memcapacitor devices can be realized through the implementation of a variable plate distance concept, as demonstrated in micro-electromechanical systems27, a metal-to-insulator transition material in series with a dielectric layer22, changing the oxygen vacancy front in a classical memristor20, and a simple metal–oxide–semiconductor capacitor with a memory effect24,25. To obtain a high dynamic range, these devices either have a large parasitic resistive component20 at small plate distances or limited lateral scalability due to large plate distances. Similar problems occur with memcapacitors having varying surface areas23 or varying dielectric constants26. In this Article, we report memcapacitor devices based on charge shielding that can offer high dynamic range and low power operation. We fabricate devices on the scale of tens of micrometres and use them to create a crossbar array architecture that we use to run an image recognition algorithm. We also assess the potential scalability of our devices for use in large-scale energy-efficient neuromorphic systems using simulations.
Memcapacitive device based on charge shielding
Our memcapacitive device consists of a top gate electrode, a shielding layer with contacts and a back-side readout electrode (Fig. 1a). These layers are separated by dielectric layers. The top dielectric layer can have a memory effect, for example, charge trapping or ferroelectric, which may influence the shielding layer, or the shielding layer itself can exhibit a memory effect (in this paper, only the first principle is investigated). A very high on/off ratio of electric field coupling and therefore the capacitance between the gate electrode and readout electrode can be obtained with either total shielding or transmission. The lateral scalability is substantially better compared with the previously mentioned concepts, since the thickness of each layer can be readily optimized, while the dynamic ratio is mainly dependent on the shielding efficiency of the shielding layer.
Fig. 1: Structure of the memcapacitor device. a, General device structure with a gate electrode, shielding layer (SL) and readout electrode (I, current; Q, charge). The electric field coupling is indicated by the blue arrow. b, Device structure with a lateral pin junction as well as electron and hole injection. c, Crossbar arrangement of the device in b, where a.c. input signals are applied to the word lines (WLs) and the accumulated charge is read out at the bit lines (BLs). During readout, the SL is mostly connected to GND.
Generally, charge screening depends on the Debye screening length LD: $$L_{\mathrm{D}} = \sqrt {\frac{{\varepsilon _0\varepsilon _{\mathrm{r}}U_{\mathrm{T}}}}{{n\,{\mathrm{e}}}}},$$ where UT is the thermal voltage, n is the charge carrier concentration, ε0 is the electric field constant, εr is the relative electric field constant and e is the elementary charge. The electric field drops exponentially within the shielding layer and drops to 37% within the screening length LD under the condition ψ ≪ UT.
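As a quick numerical illustration of equation (1), the sketch below evaluates the Debye length in silicon for a few carrier concentrations. The temperature (300 K) and the carrier densities are assumed values chosen for illustration and are not taken from the paper.

```python
import numpy as np

# Debye screening length L_D = sqrt(eps0 * epsr * U_T / (n * e)), Eq. (1),
# written in the thermal-voltage form with U_T = kT/e.
# The carrier densities below are illustrative, not values from the paper.
e = 1.602e-19            # elementary charge, C
eps0 = 8.854e-12         # vacuum permittivity, F/m
epsr_si = 11.7           # relative permittivity of silicon
U_T = 0.0259             # thermal voltage at 300 K, V

for n_cm3 in (1e15, 1e17, 1e19):                    # carrier density, cm^-3
    n = n_cm3 * 1e6                                 # convert to m^-3
    L_D = np.sqrt(eps0 * epsr_si * U_T / (n * e))
    print(f"n = {n_cm3:.0e} cm^-3  ->  L_D = {L_D * 1e9:.2f} nm")
```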
In practice, in semiconductors, the relationship is highly nonlinear depending on potential ψ at depth x, as follows: $$\frac{{{\mathrm{d}}^2\psi }}{{{\mathrm{d}}x^2}} = \frac{{ - e}}{{\varepsilon _0\varepsilon _{\mathrm{r}}}} \left( {p_0\left[ {{\mathrm{exp}}\left( {\frac{{ - \psi }}{{U_{\mathrm{T}}}}} \right) - 1} \right] - n_0\left[ {{\mathrm{exp}}\left( {\frac{\psi }{{U_{\mathrm{T}}}}} \right) - 1} \right]} \right),$$ where p0 and n0 are the charge carrier concentrations of holes and electrons in thermal equilibrium, respectively. Therefore, the Debye screening length (equation (1))—given the exponential spatial dependence of the field in the material—is only a linear approximation of nonlinear differential equation (2). Especially for strong inversion and accumulation within the shielding layer, the length scales of screening become much smaller than the Debye length. This nonlinearity with respect to the applied gate voltage or charge stored in the memory dielectric leads to either strong shielding or fairly good transmission. A more detailed device structure is shown in Fig. 1b with lateral p+n–n+ junctions in the shielding layer. The p+- and n+-doped regions act as reservoirs for electrons and holes, respectively, and can inject each carrier type for the purposes of shielding. This enables additional device functionality; however, more importantly, it also allows a symmetric device response for positive and negative gate voltages. This is a crucial feature for neuromorphic devices, because the weight update is then undistorted and the training accuracy is thus higher17. During readout, the shielding layer is connected to the ground (GND). During writing and training, the voltages applied to the p+ and n+ contacts can differ and can also act as a selector, as explained in Supplementary Section 1. As shown in Fig. 1c, the single device can be arranged into a crossbar for highly parallel multiply–accumulate (MAC) operations. In this case, the gate electrode becomes the word line (WL), where input signals are applied, and the shielding layer becomes a shielding line (SL) in a direction vertical to the WL. The readout electrode functions as the bit line (BL), which is parallel to the SL, and the accumulated charge out of one BL is the calculated result of accumulated multiplications at each crossing point. The multiplication is conducted between the input signal of the WL and the state of the shielding layer, which, in turn, is adjusted by the memory material. The weights are encoded in the capacitance of each crossing point. In contrast to resistive devices, capacitive devices only react on dynamic voltage or current signals; therefore, an alternating current (a.c.) voltage is applied to the WL during readout. Writing of the memory material is achieved by a voltage difference between the SL and WL. CV curves and gradual programming of single devices Single devices on the micrometre scale were fabricated on a silicon-on-insulator wafer, whereas the handle wafer containing a highly n-doped epitaxial layer acts as the readout electrode and the buried oxide acts as the bottom dielectric layer. As a memory principle, ferroelectric-assisted charge trapping (polarization charge attracts carriers and thus promotes trapping) was used to combine the advantages of both principles28,29, whereas the tunnelling oxide was 2.5 nm thick to avoid charge detrapping. Details of the fabrication can be found in Methods. 
The fabricated devices had a gate length ranging from 10 to 60 µm, and the gate width was enlarged by winding it around several highly p+- and n+-doped finger-shaped regions, thus forming several parallel pin junctions. The larger area leads to a readily detectable capacitance and the minimum capacitance of turned-off devices could also be precisely measured (capacitive dynamic range). Figure 2a shows a microscopic image of the fabricated device. Capacitance–voltage (CV) measurements were carried out by applying an a.c. signal with a direct current (d.c.) bias (sweep) to the gate: the resulting a.c. current of the readout electrode was measured either by lock-in amplification or by an oscilloscope and current pre-amplifier. Data from the resulting fundamental CV curves for different d.c. voltages (VAK) on the n+ and p+ regions are shown in Fig. 2b (note that a normal silicon dioxide dielectric layer was used here instead of a memory dielectric). The CV curves get broader or are nearly extinguished depending on whether the pin junction is used in the reverse or forward bias direction, respectively; this behaviour is further explained in Supplementary Section 1. Generally, a capacitive coupling window is observed, which is high for depletion (and therefore for transmission through the shielding layer) and low during inversion or accumulation. The curves are derivatives of a sigmoid curve, which play an important role in modelling neurons in artificial neural networks. A direct measurement of the sigmoid curve and further uses are explained in Supplementary Section 1. Fig. 2: Measurement setup and CV curves of single devices. a, Microscopy image of the measured single device and measurement setup. b, Measured CV curves for a device without memory at different VAK values; VAK is applied antisymmetrically, the d.c. voltage of the gate was swept between −7 and 7 V, and the small a.c. voltage had an amplitude of 100 mV with a frequency of 1 kHz. c, CV curve shifting due to the injection of charges. The device had a memory in this case. d–f, Analogue value writing with pulse number modulation (constant write height) (d), pulse height modulation (the voltage is increased/decreased from ±4.0 to ±6.1 V) (e) and pulse length modulation (f). In d–f, the shielding layer was grounded, and readout was performed between each pulse with an a.c. signal, as shown in c. g, Pulse number modulation for different write pulse heights. Replacing the normal silicon dioxide dielectric with a memory dielectric and with a CV sweep from −5 to 5 V, one can observe a shifting of the capacitive coupling window with a memory window of 2.7 V (Fig. 2d), while the pin junction was grounded. Due to the shifting direction, one can conclude that charge trapping is the memory principle (for purely ferroelectric switching, the curves would shift in the opposite direction). By contrast, capacitive devices can only be read out by a.c. voltages or current signals. For this reason, an alternating voltage (0.5 V) is applied to the gate for readout, together with a bias voltage (1.0 V) to adjust the readout window, as indicated by the shaded area in Fig. 2d (note that the pin junction is grounded during readout). In Supplementary Fig. 11a,b, the readout current of a written and erased cell is shown, and a capacitive dynamic range of ~1:1,478 was experimentally achieved. To store analogue values, one can apply short pulses with the same amplitude (Fig. 2d,g), apply pulses with increasing height (Fig. 2e) or change the pulse length (Fig. 
2f) applied to the gate. The resulting curves exhibit some similarities to those obtained from pure ferroelectric switching14, indicating the ferroelectric assistance in the memory storage process. The curve in Fig. 2d shows a typical nonlinear long-term potentiation (LTP) curve with an exponential dependence. $$C_{\mathrm{LTP}} = C_{\mathrm{min}} + {\Delta}C \left( {1 - {\mathrm{exp}}\left( {\frac{{ - N_{\mathrm{pgr}}}}{{\beta _{\mathrm{pgr}}}}} \right)} \right)$$ The same applies for the long-term depression (LTD) $$C_{\mathrm{LTD}} = C_{\mathrm{max}} - {\Delta}C \left( {1 - {\mathrm{exp}}\left( {\frac{{ - N_{\mathrm{er}}}}{{\beta _{\mathrm{er}}}}} \right)} \right),$$ where Npgr and Ner denote the number of programming or erase pulses, respectively; βpgr and βer are the stretching factors; and Cmin and Cmax denote the minimum and maximum capacitance, respectively. Here ΔC describes the maximum change in capacitance. Changing the write pulse height of the pulse number modulation leads to more flattened or steepened curves (Fig. 2g). Write/erase pulse height modulation (Fig. 2e) can lead to relatively symmetric and—in certain regions, linear—behaviour with respect to the pulse height steps. This is highly beneficial for implementing neuromorphic algorithms17. Pulse length modulation shows similar behaviour to pulse number modulation (Fig. 2f). In Supplementary Fig. 11c, the measured readout current is illustrated for LTP and LTD for different pulse numbers of pulse height modulation (Fig. 2e) and reveals the pinch-off and increase. Other memory parameters, like device-to-device variation, endurance and retention can be found in Supplementary Section 9. Crossbar array and implementation of training algorithm Crossbar devices—used to execute an image recognition algorithm—were fabricated and wire bonded onto a chip carrier. A printed circuit board (PCB) was designed and controlled by a data acquisition system. An image of the fabricated chip with the bonding pads, a zoomed-in microscopy image of the crossbar and a scanning electron microscopy image are shown in Fig. 3a. Each memory cell had a size of 50 × 50 µm2. Fig. 3: Crossbar arrangement and fundamental measurements. a, Wire-bonded chip with microscopy and scanning electron microscopy images. b, Device cross section. c, Neuromorphic system for accomplishing 'four-quadrant multiplication': positive and negative inputs are 180° phase shifted with each other. The a.c. conditions are the same as in Fig. 2, and the number of periods encodes the amount of input. The clock signal is high for a rising edge in the positive signal and the switches are in the left position during a high clock signal. The SL is connected to GND during readout. d, Measured 'four-quadrant multiplication' for different input period numbers Nper and programming pulse numbers (pulse number modulation) Npgr. For negative Nper, the input signal is 180° phase shifted, and for positive Npgr, a positive BL is programmed; a negative BL is kept in an erased state (vice versa for negative Npgr). A schematic of the device cross section is shown in Fig. 3b. The BLs of the memory array were separated by refilled deep trenches. Details of the fabrication process can be found in Methods. The matrix comprised 26 WLs and 6 BLs (Fig. 3c). A differential weight topology17 was used with the positive and negative value of each weight separated in two memory cells. The values of these two BLs were subtracted from each other. 
$$W_{ij} = C_{ij}^ + - C_{ij}^ -$$ The input values are separated by a sign with a 180° phase shift. For the desired 'four-quadrant multiplication' (input × weight), a global clock signal is used together with the switched capacitor approach (Fig. 3c). Further details are explained in Supplementary Section 11. The integration capacitance of the amplifier is charged up in each period of the input sine signal, and hence, the number of periods (Nper) encodes the value of the input signal. This effect also leads to an averaging of the noise level and improvement in the signal-to-noise ratio, as explained later. This theoretical concept of 'four-quadrant multiplication' was confirmed with the following measurement (Fig. 3d): the input number of periods (Nper) and the number of programming pulses (Npgr), which adjust the actual weight, were varied in positive and negative values, while the output voltage is read. Positive and negative Nper values were encoded by a 180° phase shift and positive/negative programming pulses (Npgr) only changed the positive/negative weights, while the counterpart was in an erased state. Supplementary Fig. 12a,b shows the cross sections of the 3D plot in Fig. 3d. The curves along the input period number behave in a highly linear manner, and this linearity was also confirmed for the accumulation operation (Supplementary Fig. 12c), demonstrating a highly linear MAC operation with the proposed switched capacitor approach. The first 25 WLs enable a vectorized input feature map for images of 5 × 5 pixels; thus, one single fully connected layer is carried out. Dark pixels are represented by positive values and bright pixels, by negative values. The bias input is mapped to the 26th WL. Regarding the implemented training algorithm, the Manhattan update8,30 rule was chosen, due to its simplified training procedure. In conventional backpropagation training, the weight update is calculated as follows: $${\Delta}W_{ij} = - \alpha \delta _i\left( n \right) X_j\left( n \right),$$ where α describes the learning rate, δi(n) is the backpropagated error and Xj(n) is the current input for the nth input image, which is randomly chosen from the training set. The weights are updated after each sample (stochastic training). The backpropagated error for a one-layer perceptron can be calculated as follows: $$\delta _i\left( n \right) = \left[ {f_i\left( n \right) - f_i^{\mathrm{d}}\left( n \right)} \right] \left. {\frac{{{\mathrm{d}}f_i}}{{{\mathrm{d}}v}}} \right|_{v = v_i\left( n \right)},$$ where \(f_i^{\mathrm{d}}\left( n \right)\) is the desired output value and fi(n) is the current output. Function fi is related to the voltage output vi(n) of the ith sense amplifier and the activation function of the neuron (in this case, tanh): $$f_i\left( {v_i} \right) = {\mathrm{tanh}}\left( {\kappa v_i\left( n \right)} \right),$$ where κ is the steepness factor. With the Manhattan update rule, the weight update from equation (6) is coarse-grained by using the following signing. $${\Delta}W_{ij}^{\mathrm{M}} = {\mathop{{{\rm{sgn}}}}} {\Delta}W_{ij}$$ Therefore, all the weights are updated by the same amount based on their sign. Figure 4a illustrates the pulse scheme for implementing the algorithm. The term \(\delta _i\left( n \right) X_j\left( n \right)\) in equation (6) becomes positive if both error δi(n) and input Xj(n) are positive or it becomes negative for the opposite sign if both δi(n) and Xj(n) are negative . Hence, one can describe this by an XNOR combination. 
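The Methods section notes that Python was used to simulate the Manhattan algorithm; the snippet below is a minimal re-implementation of equations (5)–(9) for a one-layer perceptron with differential capacitive weights, written here purely for illustration before the hardware pulse scheme is described. The input pattern, the initial capacitances, the steepness κ and the fixed increment per pulse are assumed values, and the code is not the authors' implementation.

```python
import numpy as np

# Minimal sketch of Manhattan-rule training for a one-layer perceptron with
# differential capacitive weights W = C_plus - C_minus, following Eqs. (5)-(9).
# Input data, step size and initial capacitances are illustrative assumptions.
rng = np.random.default_rng(1)
n_in, n_out = 26, 3                      # 25 pixels + bias input, three classes
C_plus = rng.uniform(0.4, 0.6, (n_out, n_in))
C_minus = rng.uniform(0.4, 0.6, (n_out, n_in))
kappa, step = 1.0, 0.01                  # neuron steepness, fixed weight increment

def forward(C_plus, C_minus, x):
    v = (C_plus - C_minus) @ x           # accumulated-charge MAC result, Eq. (5)
    return np.tanh(kappa * v), v         # neuron activation, Eq. (8)

def manhattan_step(C_plus, C_minus, x, target):
    f, v = forward(C_plus, C_minus, x)
    delta = (f - target) * (1.0 - f ** 2)        # backpropagated error, Eq. (7)
    sign = np.sign(-np.outer(delta, x))          # sign of the update, Eqs. (6), (9)
    # every weight moves by the same fixed increment, split over C_plus / C_minus
    return C_plus + step * (sign > 0), C_minus + step * (sign < 0)

x = np.append(rng.choice([-1.0, 1.0], size=25), 1.0)   # dark/bright pixels + bias
C_plus, C_minus = manhattan_step(C_plus, C_minus, x, np.array([1.0, -1.0, -1.0]))
print(forward(C_plus, C_minus, x)[0])
```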
To update the weights, the error signal is applied to the SL, as shown in Fig. 4a. The corresponding input signals are applied to the WL. The differential signal at the crossing points follows the XNOR operation, while the specific signals (shown in Fig. 4a) ensure that the maximum disturbance level is not higher than 1/3 and thus effectively prevents the overwriting of cells in the same column or row (the memory cell acts as the selector itself; see Supplementary Sections 7 and 8). As a 5 × 5 image recognition task, the letters M, P and I were chosen, and one pixel in each of the samples was flipped, which results in a total set of 78 samples. These pseudo-images were separated into a test and training set; the test images are indicated by a blue frame (Fig. 4b). The resulting misclassified images versus training epochs for the training and test images are shown in Fig. 4c. Evidently, the number rapidly decreases after one training epoch and stays almost zero throughout the training epochs. Figure 4d shows the obtained mean neuron activations for the three classifications over the training epochs. The slightly higher simulated average misclassification rate (Fig. 4c) is the consequence of single steep climbs of the misclassification rate after an arbitrary number of epochs with 100% accuracy in some runs. Misclassifications after epoch 1 are caused by the very similar expected value for individual presynaptic neurons for letters M and P. Measurements also confirm the more stable results for the classification of letter I, as shown in Fig. 4d. The results are in accordance with other studies7,8. Fig. 4: Manhattan update training on crossbar. a, Pulse scheme to enable XNOR operation during Manhattan weight update (the write/erase pulse height was ±5.2 V and length was 1 ms). The disturb level is exactly 1/3 of the write/erase voltage. b, Training and test set of the letters M, P and I with one flipped pixel. The test images are framed in purple. c, Number of misclassified images Nmis for the training and test sets over ten training epochs (Nepoch). The measured curve is compared with the simulated curves. d, Average artificial neuron activation for three classifications (f1, f2 and f3) and three images over ten training epochs. Thus, experimental results on micrometre-sized devices demonstrate the working principle. For demonstrating scalability to the nanometre regime and superior energy efficiency, detailed and extensive simulations were performed, which are explained in the upcoming sections. TCAD simulations on single devices A device with 90 nm gate length (Fig. 5a) was simulated by Synopsys. Figure 5b (where no memory dielectric was integrated for the first simulations) shows the CV curves of the coupling capacitances between the gate and readout electrode with respect to the applied gate voltage (VG), which are consistent with the observed experimental behaviour (Fig. 2b). Fig. 5: TCAD simulation results. a, Simulated structure with gate length Lg = 90 nm. b, Obtained CV curves with respect to the gate voltage for different voltages VAK along the p+n–n+ diode (quasi-static simulation). The voltage VAK was applied antisymmetrically, as that in Fig. 2. c, Capacitive dynamic ratio (maximum capacitance/minimum capacitance of the CV curves with p+n–n+ connected to GND) for different gate lengths and gate oxide thicknesses. The inset shows the electron density, and the short channel effect becomes obvious. EOT, equivalent oxide thickness. 
d, Shifting of the CV curves for VAK = 0 V for different memory charges in the gate oxide. Note the applied readout a.c. signal with bias. e, Accumulated charge (Qacc) for different voltage shifts (Vshift; caused by memory charges) over one-half period of the a.c. signal in d. f, Comparison of the simulated and experimental capacitive coupling curves for the micrometre-scaled device shown in Fig. 2. The ratio between the maximum capacitance and lower-state capacitance obtained by shifting the gate voltage by 3 V is 1:90 in this device, and this ratio can be further enlarged by using thinner gate oxides or larger gate lengths, as shown in Fig. 5c. In general, the capacitive ratio decreases with a smaller gate length due to the fact that the influence of the space charge region becomes more pronounced for smaller gate lengths (short channel effect) and sufficient shielding is hard to achieve in this region (Fig. 5c, inset). By using high-κ dielectrics for the top and bottom oxides, a ratio of 1:60 was obtained for a 45 nm device with the same capacitance as the 90 nm device, as shown in Supplementary Section 2. A dynamic range of 1:60–1:90 is sufficient to achieve a precision of 6–8 bits31. Including a memory window (~3 V for charge-trapping memories and ~1–2 V for ferroelectric memories depending on the thickness and coercive field) leads to shifted CV curves (Fig. 5d). The a.c. readout voltage is indicated in Fig. 5d; for the positive shifted curve, the resulting readout current and therefore the accumulated charge will be very large. The total readout charge over one-half period of the applied sinusoidal signal versus memory shift is shown in Fig. 5e. Most of the negative memory window is used for turning off the device. Scalability to 45 nm With regard to lateral scalability, it is necessary to distinguish three aspects: (1) the scalability of the memory technology in the top dielectric itself with regard to how many levels can be stored; (2) the sensitivity of the sense amplifier at the end of each BL for detecting the accumulated charge; (3) the noise level of one single device during readout. Fairly common resolutions for input, weight and output signals for neural networks are in the range of 4–8 bits (16–256 levels)31. This analogue-like resolution has a significant influence on scalability. Typically, lower precision is needed for inference tasks. With respect to the memory material, one can generally conclude that charge-trapping memories (for example, SONOS) have shown up to 31 levels down to 40 nm (ref. 16). The disadvantage of this memory technology is the relatively high write energy and slowness during writing (millisecond regime). However, SONOS might be an alternative for inference-only applications. On the other hand, hafnium oxide (a ferroelectric) has very low write energies and is fast (nanosecond to microsecond regime). Ongoing research is still underway on the scalability of ferroelectric memories with regard to analogue storage. From FeFETs, it is known that they tend to show abrupt switching events below 500 nm, which is attributed to the limited grain size15. Regarding capacitive measurement resolutions, some work was done in the context of DNA sensing and chip interconnect measurements with resolutions down to <10 aF (charge-based capacitive measurements, capacitance-to-frequency conversion and lock-in detection)32,33,34,35,36. 
These are similar to a conventional sense amplifier37,38 and contain an integration capacitor that is charged either by an operational amplifier circuit or a current mirror. Details on the sensitivity calculation can be found in Supplementary Section 3; generally, however, one has to consider that in neuromorphic devices, the accumulated charge from many memory cells (several hundreds to thousands) is read out at once and used for further information processing, which gives rise to much larger charges compared with only one cell. Furthermore, several pulse/period numbers are used for encoding the input value and leads to stepwise charge integration over many periods. For the device shown in Fig. 5, Nper = 142 periods is necessary, which fits well into a range of 7–8 bits of the input signal (Supplementary Section 3). Note that 128 periods are sufficient for an 8-bit signed integer due to the use of the 180° phase shift for negative values of the switched capacitor approach. Regarding the noise level of capacitive devices, one has to consider kTC noise. $$v_{\mathrm{n}} = \sqrt {\frac{{k_{\mathrm{B}}T}}{C}}$$ where kB defines the Boltzmann constant, T the temperature and C the capacitance. For a 6.65 aF device (Fig. 5d), one obtains a noise voltage of 25.00 mV (at room temperature), which is 14 times lower than the effective readout value of 0.35 V. However, one has to consider that the noise level decreases with the number of repetitive measurements, namely, \(1/\sqrt {N_{\mathrm{per}}}\), which results in a noise level of 2.20 mV (at room temperature) or 169 times lower than the effective readout value; this defines a precision of ~7 bits. Based on this minimum amplitude necessary to distinguish between different levels, it also becomes possible to assess the theoretical energy efficiency of resistive and capacitive devices in general (Supplementary Section 4): capacitive devices are at least eight times more energy efficient than resistive devices. Simulation of ultrahigh energy efficiency Much of the energy sourced to 'memcapacitors' can be recovered since it is stored in the capacitor; this is an important difference from resistors in which the readout operation is inherently dissipative due to Joule heating. The energy fed in during charging can be, in principle, recovered during discharging. This concept of energy recovery is also present in adiabatic circuit designs39,40, which are at the core of the reversible computing paradigm41,42. The limiting factor of energy recovery in adiabatic circuits are resistive losses in the circuit, as well as in the inductances used for the power clock generators. The inductances have limited quality factors (q factor) in the order of dozens to hundreds. In common adiabatic realizations, energy recovery of the supply clock generators is of the order of 95% for harmonic signals43,44,45, which means the supplied active power is q = 20 times lower than the reactive power. To estimate the time delay, areal efficiency and energy efficiency (Table 1) of a realistic crossbar arrangement (including parasitic elements), a SPICE model (Supplementary Fig. 4a) for the 90 nm device was developed (Supplementary Section 5). One can conclude that extremely fast readout transitions can suppress shielding in the SL, since charge cannot be supplied any longer (silicide lines are a critical resistive path). 
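The kTC-noise estimate above can be checked numerically in a few lines. Room temperature (300 K) is assumed; the 6.65 aF capacitance, the 142 readout periods and the 0.35 V effective readout value are taken from the text, and the computed values reproduce the quoted figures to within a few per cent.

```python
import numpy as np

# Numerical check of the kTC-noise estimate (Eq. 11) for the 6.65 aF device,
# assuming T = 300 K; capacitance, period count and readout value are from the text.
kB, T = 1.380649e-23, 300.0
C = 6.65e-18                      # capacitance, F
N_per = 142                       # readout periods used for input encoding/averaging
v_n = np.sqrt(kB * T / C)         # single-shot kTC noise voltage, ~25 mV
v_n_avg = v_n / np.sqrt(N_per)    # noise after averaging over N_per periods, ~2 mV
print(v_n * 1e3, v_n_avg * 1e3)   # in mV
print(0.35 / v_n_avg)             # ratio to the 0.35 V effective readout value
```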
In the table, the energetically worst-case scenario was assumed: all the WLs are activated at once and all the weights are zero with a resulting shielding effect, which, in turn, would lead to charging in the top gate oxide. Table 1 summarizes the minimum period of time for different matrix sizes, which is proportional to the RC delay, with R being the resistance and C the capacitance. The areal efficiency Aη in TOPS mm–2 can be derived from the memory footprint (2 × 8 F2), assuming differential weights and the earlier mentioned time delay. The active (Wp) and reactive (Wr) energy per cell for 142 periods is also summarized in Table 1. With this estimate in mind, we can conclude a minimum energy efficiency ηrec of 3,452.6 TOPS W–1 in the worst-case scenario for 0% input signal sparsity and 100% weight sparsity and an energy recovery of 95% (Supplementary Section 5). Without any charge recovery, the energy efficiency η would amount to 198.5 TOPS W–1. In a realistic neural network scenario, for example, a one-layer perceptron trained on the Modified National Institute of Standards and Technology (MNIST) database, the energy efficiency is 29,600 TOPS W–1 including charge recovery (Supplementary Section 6). Without recovery, the efficiency amounts to 1,702 TOPS W–1 for MNIST. Table 1 Results on areal and energy efficiency obtained from SPICE simulation Comparison of simulation and experimental results To verify the functionality of the simulator, we performed simulations of the device with 60 µm gate length (Fig. 2). As shown in Fig. 5f, experimental data from Fig. 2d match well with the simulated data. As shown in Supplementary Fig. 14, we measured the gate charging current together with the applied readout a.c. voltage for the single device (Fig. 2), and a perfect 90° phase shift is visible. From the curves, we can calculate the reactive (WR) power consumption per period (using equations 31–33, Supplementary Section 5) and obtain Wr = 3.22 nJ per period. Furthermore, for 142 periods, as in the simulation, we obtain the total reactive energy for one MAC operation, namely, Wr,tot = 457 nJ per cell. If we scale this value by seven orders of magnitude, we obtain Wr,scaled = 45.7 fJ per cell (capacitance shown in Fig. 2d is seven orders of magnitude lower compared with the capacitance of the simulated 90 nm device shown in Fig. 5b). This value is approximately ten times higher than the value shown in Table 1 (5 fJ per cell). One has to consider that the thickness of the buried oxide of the experimental devices is much thicker (190 nm) than in the case of the 90 nm device simulation (15 nm), leading to a 12.7 times lower readout capacitance/area at approximately the same gate oxide capacitance/area. Also considering the different device silicon thicknesses, one can obtain a corrected reactive energy of Wr,scaled,corr = 5.84 fJ cell, which is very close to the value shown in Table 1. Other influencing phenomena during scaling, like short channel effects (Fig. 5c), quantum confinement and band-to-band tunnelling, are explained in Supplementary Section 10. We have reported a memcapacitive device with the potential to deliver high tera-operations per second per watt when scaled. By using a shielding layer between two electrodes, we can achieve high dynamic ratios of ~1,480 for microscale devices and ~90 for simulated 90-nm-sized devices. Furthermore, a 5 × 5 image recognition task was implemented using an experimental crossbar array with 156 memory cells. 
Circuit-level simulations and noise-level calculations show that our memcapacitive devices can potentially offer superior energy efficiency compared with conventional resistive devices. Using adiabatic charging, most of the charging energy of the capacitors can be recovered. This allows a combination of reversible computing and neuromorphic computing. The energy efficiency of the human brain is estimated to be in the range of ~10 fJ per operation (ref. 46) (or 100 TOPS W–1), which is similar to current memristive-device-based approaches13,16. Our approach could potentially offer an energy efficiency of 1,000–10,000 TOPS W–1. The technology is also compatible with complementary metal–oxide–semiconductor technology and could be fabricated using state-of-the-art processes. The technology computer-aided design (TCAD) simulations were performed with Synopsys and SPICE-level simulations were performed with LTspice. In the TCAD simulations, the drift-diffusion equations (electron + hole continuity equation and Poisson equation) were included. Furthermore, Shockley–Read–Hall recombination and electric-field-, temperature- and dopant-dependent mobility models were included. The influence of quantum confinement and band-to-band tunnelling was investigated in Supplementary Section 10. The devices were fabricated using a silicon-on-insulator wafer with an n+-handle, 3.5-µm-thick epitaxial layer; a 190-nm-thick buried oxide layer; and an 88-nm-thick device layer. First, alignment marks were etched into the device layer, followed by boron- and phosphorous-ion implantation and subsequent activation annealing. The interface oxide was chemically grown by Standard Clean 1 solution and O2 oxidation at 750 °C. The Hf0.5Zr0.5O2 deposition with a TiN capping layer was carried out by atomic layer deposition and annealed at 600 °C. The Hf0.5Zr0.5O2 was patterned for contact holes and the first aluminium metallization was deposited by sputtering. The SLs were etched by ion beam sputtering and the BLs were separated by the reactive-ion etching of 7-µm-deep trenches. The trenches were refilled by SU-8 resist and the second metallization layer (WLs) were insulated from the first metallization layer by another patterned SU-8 layer. Measurements were carried out with a function generator (Agilent 33500B), a lock-in amplifier (Stanford Research Systems SR830) and a current pre-amplifier (Stanford Research Systems SR570). A DSO5052A oscilloscope was used for visualizing the measured currents. The PCB for the neuromorphic chip was designed using EAGLE and manufactured by Eurocircuits GmbH. A data acquisition system (USB-6363, National Instruments) was used for controlling the PCB. The measurement routines were written in LabVIEW. Python was used for simulating the Manhattan algorithm and Keras for MNIST simulation. The data that support the findings of this study are available from the corresponding authors upon reasonable request. The code that supports the findings of this study is available from the corresponding authors upon reasonable request. Mead, C. Neuromorphic electronic systems. Proc. IEEE 78, 1629–1636 (1990). Mead, C. How we created neuromorphic engineering. Nat. Electron. 3, 434–435 (2020). Strukov, D. B., Snider, G. S., Stewart, D. R. & Williams, R. S. The missing memristor found. Nature 453, 80–83 (2008). Krizhevsky, A., Sutskever, I. & Hinton, G. E. ImageNET classification with deep convolutional neural networks. Adv. Neural Inf. Process Syst. 25, 1097–1105 (2012). Lecun, Y., Bottou, L., Bengio, Y. 
CommonCrawl
On linear representations of Lie groupoids? First, some notations and definitions: 1) For a vector space $V$: $$\mathsf{End}(V):=\{\mathsf{Linear}\ \mathsf{maps}\ f:V\longrightarrow V\}.$$ 2) A linear representation of a group $G$ is a pair $(V, \rho)$ consisting of a vector space $V$ and a map $\rho:G\longrightarrow \mathsf{End}(V)$ such that $$\rho_{gh}=\rho_g\circ \rho_h\quad \textrm{and}\quad \rho_{e}=\mathsf{id}_V,\quad\quad (\mathsf{Rep})$$ where $e$ is the identity of $G$. 3) A linear representation of a Lie groupoid $\mathsf{G}\rightrightarrows M$ is a pair $(E, \Delta)$ where $E\longrightarrow M$ is a vector bundle and $\Delta$ assigns to each morphism $g:x\longrightarrow y$ a linear isomorphism $\Delta_g:E_x\longrightarrow E_y$ such that $$\Delta_{gh}=\Delta_g\circ \Delta_h\quad \textrm{and}\quad \Delta_{1_x}=\mathsf{id}_{E_x},$$ where $1_x$ is the identity of $x$. Indeed, from $E$ we can define a Lie groupoid $\mathsf{Gl}(E)$ whose objects are the points of $M$ and whose morphisms are the linear isomorphisms $E_x\longrightarrow E_y$. A linear representation of $\mathsf{G}$ is then simply a functor $\Delta:\mathsf{G}\longrightarrow \mathsf{Gl}(E)$ covering the identity. A linear representation of a group is the "same" as a functor $\rho:G\longrightarrow \mathsf{Vect}$, where $G$ is seen as the groupoid $G\rightrightarrows \{*\}$ and $\mathsf{Vect}$ is the category of vector spaces. Taking this into account, is it possible to see a linear representation of a Lie groupoid $\mathsf{G}$ as a functor from $\mathsf{G}$ to the category of vector bundles?

This is a partial answer. I thought the philosophy behind it was more involved, but that is not the case. I guess the idea goes as follows: given $(E, \Delta)$ we associate the functor $\Delta^E:\mathsf{G}\longrightarrow \mathsf{Vect}$ given at the level of objects by $\Delta^E(x):=E_x$ and, for a morphism $g:x\longrightarrow y$ of $\mathsf{G}$, by $\Delta^E(g):=\Delta_g:E_x\longrightarrow E_y$. Conversely, given a functor $E:\mathsf{G}\longrightarrow \mathsf{Vect}$ we can assign to it $\bigsqcup_{x\in M} E(x)$; $\Delta$ is then obviously defined. However, I'm wondering: is there a way to ensure that $\bigsqcup_{x\in M} E(x)$ is locally trivial?
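To make the correspondence concrete, here are two degenerate cases that follow immediately from the definitions above (added purely as an illustration, not part of the original thread): $$G\rightrightarrows \{*\}:\qquad E\longrightarrow \{*\}\ \text{is just a vector space}\ V,\quad \Delta_g:V\longrightarrow V,\quad \Delta_{gh}=\Delta_g\circ\Delta_h,\quad \Delta_e=\mathsf{id}_V,$$ which is exactly condition $(\mathsf{Rep})$; and $$M\rightrightarrows M\ \text{(only identity arrows)}:\qquad \Delta_{1_x}=\mathsf{id}_{E_x}\ \text{for every}\ x\in M,$$ so in this second case a representation carries no more information than the vector bundle $E\longrightarrow M$ itself.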
CommonCrawl
heisenberg model derivation should be no less than a limit set by Planck's constant. Here, we outline a new Thus, when fixing the position, velocity /momentum of the particle would have changed from the original value. Detection of classical forces; [39] [40] Also, it must be stressed that the Heisenberg formulation is not taking into account the intrinsic statistical errors σ A {\displaystyle \sigma _{A}} and σ B {\displaystyle \sigma _{B}} . relation. Masanao Ozawa from Nagoya University now presents an improved definition for a quantum generalization of the classical root-mean-square error, which doesn't suffer from such limitations. Despite the strong controversy, in the case of projectively measured qubit observables, both approaches lead to equal outcomes. Rev. In the field of quantum mechanics, Heisenberg's uncertainty principle is a fundamental theory that explains why it is impossible to measure more than one quantum variables simultaneously. simultaneous measurements but using an obsolete postulate for quantum 1. this natural criterion, we prove that in any $d$-dimensional Hilbert space and the 1980's, the above result appears to hav, defended the SQL by giving a new formulation and, the error and the disturbance are statistically independent from, survey those results, which were mostly neglected in the re-, As easily seen from Eq. Using the noise-operator based q-rms error ε = ε NO , the first universally valid relation εðAÞεðBÞ þ εðAÞσðBÞ þ σðAÞεðBÞ ! Garretson, J. L., Wiseman, H. M., Pope, D. T, A double-slit 'which-way' experiment on the. This is nothing but a unitary dilation theorem of systems of measurement correlations. observable can be measured without noise and the second will not be disturbed. Therefore any The notion of quantum instruments is formalized as statistical equivalence classes of all the possible quantum measurements and mathematically characterized as normalized completely positive map valued measures under naturally acceptable axioms. 62440Q, Heisenberg's uncertainty relation: Violation and reformulation. valid reformulation of Heisenberg's uncertainty principle under this general We illustrate For fixed target observables, we study the joint measurements minimizing the entropic divergence, and we prove the general properties of its minimum value. The collision of the powerful light source, while helping in identification increases the momentum of the electron and makes it move away from the initial position. Nonclassical states of electromagnetic waves as We discuss two approaches to adapting the classic notion of root-mean-square error to quantum measurements. A straightforward generalization based on the noise-operator was used to reformulate Heisenberg's uncertainty relation on the accuracy of simultaneous measurements to be universally valid and made the conventional formulation testable to observe its violation. It was not proposed by Heisenberg, but formulated in a mathematically consistent way only in recent years. Here we resolve the conflict by way of an analysis of the possible conceptualizations of measurement error and disturbance in quantum mechanics. consequence of basic postulates for quantum mechanics. constant. Here, we show that Heisenberg actually proved the constraint for the accuracy of simultaneous measurement but assuming an obsolete postulate for quantum mechanics. 
However, Heisenberg with Although Heisenberg's uncertainty principle can be ignored in the macroscopic world (the uncertainties in the position and velocity of objects with relatively large masses are negligible), it holds significant value in the quantum world. For example, the location and speed of a moving car can be determined at the same time, with minimum error. Ozawa, M. Physical content of Heisenberg's uncertainty relation: Ozawa, M. Does a conservation law limit position measurements? 42 ). realizable in principle. resonator's energy; 12. this model is shown to break the SQL with arbitrary accuracy. theory of quantum measurement. and the disturbance to refute the old relation and to confirm the new relation. Active 2 years, 9 months ago. Methods of modern mathematical physics. This principle was formulated when Heisenberg was in trying to build an intuitive model of quantum physics. no. The bound turns out to be also an entropic incompatibility degree, that is, a good information-theoretic measure of incompatibility: indeed, it vanishes if and only if the target observables are compatible, it is state-independent, and it enjoys all the invariance properties which are desirable for such a measure. What is Heisenberg Uncertainty Principle? 4. Vorontsov, and Thorne claimed that this relation leads to a sensitivity limit Introduction 1 2. Introduction – What is Heisenberg's Uncertainty Principle? This result is generic for any non-nested integrable model, as is clear from our derivation, and we further show this by providing an additional example of the same … In this review, we discuss quantum measurement theory to develop that for dressed photons. Why is it Impossible to Measure both Position and Momentum Simultaneously? 1 The Heisenberg model 1.1 De nition of the model The model we will focus on is called the Heisenberg model. measurements using photons; 2. theoretically justified. completion by Kennard has long been credited only with a Uncertainty in the momentum of the electron = mass ×10-6 = 9×10-31×10-6 Kg m s-1. © 2008-2020 ResearchGate GmbH. Since atoms and subatomic particles have very small masses, any increase in the accuracy of their positions will be accompanied by an increase in the uncertainty associated with their velocities. Derivation of the Heisenberg uncertainty principle. However, soon after their appearance, an alternative theory was presented by Busch and co-workers, which proclaimed the validity of Heisenberg's relation and thus gave rise to heated debates. Values, Noise and disturbance in quantum measurements and operations - art. The model has ultraviolet divergences, which we regularize using the Pauli-Villars method. ∆t × ∆E ≥ h4π\frac{h}{4\pi }4πh​ = 6.626×10−344×3.14\,\frac{6.626\times {{10}^{-34}}}{4\times 3.14}4×3.146.626×10−34​ = 5.28×10-35Js, Assuming a maximum error in the measurement of lifetime equal to that of lifetime = 3 ×10-3s, ∆E ≥ h4πmΔx=13×10−3 \,\frac{h}{4\pi m\Delta x}=\frac{1}{3\times {{10}^{-3}}}\,\,4πmΔxh​=3×10−31​ × 5.28×10-35J, Uncertainty in the determination of energy of the atom = ∆E = 6.22 × 1018 × 13×10−3 \frac{1}{3\times {{10}^{-3}}}\,3×10−31​ × 5.28 ×10-35. Applying this, a rigorous lower bound is obtained for the gate error probability of physical implementations of Hadamard gates on a standard qubit of a spin 1/2 system by interactions with control fields or ancilla systems obeying the angular momentum conservation law. 
these features in low-dimensional systems and then discuss the Ozawa They proposed that we should abandon the repeatability hypothesis [27,11. A bigger particle with heavy mass will show the error to be very small and negligible. mechanics. standard quantum limit (SQL) due to the uncertainty principle. Recently, universally valid uncertainty relations have been established to set a precision limit for any instruments given a disturbance constraint in a form more general than the one originally proposed by Heisenberg. Recently, its reliability was examined based on an anomaly that the error vanishes for some inaccurate measurements, in which the meter does not commute with the measured observable. The notion of quantum instruments is formalized as statistical equivalence classes of all the possible quantum measurements and mathematically characterized as normalized completely positive map valued measures under naturally acceptable axioms. Financial Accounting Exam 2 Study Guide, Small Desk With Drawers For Bedroom, Best Scotch Under $200 2020, 2019 Ram 1500 Key Fob Features, Hollywood And Vine Lunch Characters 2020, How To Make Methylamine, Puffy Mattress Vs Purple, Kodiak Cakes Blondie Mix, Knight, Death And The Devil Symbols, Spaceman Ice Cream Machine Reviews, Final Fantasy Xv Wallpaper 4k, Mesoamerican Creation Story, Afghan Kebab House Nyc, Almond Size Grading, Silver Oxide Battery Vs Lithium Ion, Segagaga English Translation, Iris Module Mtss, Pumpkin Chocolate Chip Bread Healthy, Layla Flippable Mattress, Singer Futura Foot Pedal, Best Breakfast Sausage Brand, Normal Stores Losses Are Part Of, Omron Photoelectric Switch, What Does A Civil Engineer Do, Marjoram Flower Meaning, Fingerstyle Ukulele Tabs, The Lost Child Summary, Tagliata Sauce Recipe, Cherry Creek Reservoir Closed, heisenberg model derivation 2020
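The duplicated symbol runs above look like a MathJax page that was extracted twice; as far as they can be reconstructed, they are the two standard textbook estimates below (our reconstruction, assuming $h = 6.626\times 10^{-34}\ \mathrm{J\,s}$ and $1\ \mathrm{J}\approx 6.2\times 10^{18}\ \mathrm{eV}$):

$$\Delta E\,\Delta t \ \ge\ \frac{h}{4\pi}=\frac{6.626\times 10^{-34}}{4\times 3.14}\ \mathrm{J\,s}\ \approx\ 5.28\times 10^{-35}\ \mathrm{J\,s},$$

so for a lifetime measured with a maximum error of $\Delta t = 3\times 10^{-3}\ \mathrm{s}$,

$$\Delta E\ \ge\ \frac{5.28\times 10^{-35}}{3\times 10^{-3}}\ \mathrm{J}\ \approx\ 1.76\times 10^{-32}\ \mathrm{J}\ \approx\ 1.1\times 10^{-13}\ \mathrm{eV}.$$

Similarly, for an electron ($m = 9\times 10^{-31}\ \mathrm{kg}$) whose velocity is known to within $10^{-6}\ \mathrm{m\,s^{-1}}$, the momentum uncertainty is $\Delta p = 9\times 10^{-31}\times 10^{-6} = 9\times 10^{-37}\ \mathrm{kg\,m\,s^{-1}}$; the fragment presumably continues to the usual conclusion $\Delta x \ge h/(4\pi\,\Delta p)\approx 59\ \mathrm{m}$.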
CommonCrawl
Cough at 1000 km/h? How fast does air move in the airways during a cough? The following passage is from Talley and O'Connor's Clinical examination: a systematic guide to physical diagnosis (emphasis mine): Cough is a common presenting respiratory symptom. It occurs when deep inspiration is followed by explosive expiration. Flow rates of air in the trachea approach the speed of sound during a forceful cough. Coughing enables the airways to be cleared of secretions and foreign bodies. The speed of sound claim is unreferenced. I have found mention of coughs approaching the speed of sound in numerous popular sources (e.g. here, here, here (where it says 1000 km/h), and here), and innumerous books (e.g. here, here, here, here, here, and here); none of these references the claim. The same claim is also here. This book has a more exact (unreferenced!) claim: Velocities as great as 28 000 cm/s (85% of the speed of sound) have been reported, but it is impossible to determine the gas velocity at points of airway constriction, where the greatest shearing forces will be developed. During this phase there is dynamic collapse in the bronchial tree, with large pressure gradients across the collapsed segment. That speed is just over 1000 km/h. When I have searched for research literature behind this, I've only found much lower velocities at the mouth (rather than at a narrower location like the glottis), e.g. a peak cough velocity of 22m/s, also 11.2m/s, and 28.8m/s. The closest to a reference I found was this book with the following: A cough comprises: ... sudden opening of the glottis, causing air to explode outwards at up to 500 mph or 85% of the speed of sound (Irwin et al, 1998), shearing secretions off the airway walls. Irwin et al isn't primary literature either, and references Comroe JH, Jr. Special acts involving breathing. In: Physiology of respiration: an introductory text. 2nd ed. Chicago: Year Book Medical Publishers, 1974; 230-31. I don't have access to this book (does anyone here?), but I expect the references only continue from there. My question is this: how fast is a cough in the airways (I am interested because such an explosive rush of air could explain the substantial damage seen in chronic cough), and does anyone know where the 1000 km/h claim comes from, or can point me to a legitimate reference? human-biology respiration breathing lungs measurement edited Nov 9 '16 at 5:02 AnonAnon $\begingroup$ If you are asking about whether a claim is correct/or not and where it comes from, you might receive more feedback on the skeptics stackexchange. $\endgroup$ – Ebbinghaus Nov 9 '16 at 6:25 $\begingroup$ Perhaps. I'm not a skeptic though, I'm sure the claim is probably true; I would just be interested to find a reference for it so I can see how it was measured. I thought it would be more relevant on biology.SE. $\endgroup$ – Anon Nov 9 '16 at 6:29 This reference from CHEST lists 21 clinically measured peak flow rates during various modes of coughing. Of these patients, and for unassisted cough, the highest peak flow is about 4 liters/sec. The human trachea ranges from 13 to 27 mm diameter. The relationship between velocity, $V$ and flow $Q$ is $$ V=\frac{Q}{A}$$ Assume the 4 liters/sec = 4000 cm^3/sec and minimum diameter, 13 mm = 1.3 cm, the cross section area being $$A = \pi (D/2)^2 = 1.3 cm^2$$ Plugging in $$ V=\frac{4000}{1.3} = 3077 cm/sec$$ which is a far cry from 28,000 cm/sec, so at this point I'm skeptical. 
One thing to consider is that the data in the paper were taken from sick humans, so perhaps a healthy (and athletic) person may be able to exert much higher flow rates. But then healthy people generally are not stimulated to cough as much as a sick person with an airway compromised by sputum. Although lower airways do have smaller diameters, the flow measured at the trachea is divided among them, so you wouldn't expect to see peak velocities in the lower airways, but rather the accumulation in the trachea. – docscience

Nice answer +1. I did a similar calculation to you, but I thought perhaps using that size for the trachea wasn't legitimate; a cough begins with a closed glottis, so in principle the airflow in the first moments is through a very narrow opening between the vocal folds and so in principle the flow rate could be higher. The flows in your paper are at the mouth, which would potentially lead to a slower pickup in velocity at the beginning (if the actual speed were near the speed of sound, which is the maximum speed at which an impulse can be transmitted in air). I agree about smaller airways. – Anon
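As a quick numerical cross-check of the answer's estimate (our own sketch, using only the numbers quoted above: a 4 L/s peak flow and the quoted 13–27 mm tracheal diameter range):

```python
import math

Q = 4000.0                      # peak cough flow from the cited CHEST data, cm^3/s
SPEED_OF_SOUND = 34300.0        # roughly the speed of sound in air, cm/s, for comparison

for d_cm in (1.3, 2.7):         # tracheal diameter range quoted in the answer, cm
    area = math.pi * (d_cm / 2.0) ** 2      # cross-sectional area, cm^2
    v = Q / area                            # mean velocity, cm/s
    print(f"d = {d_cm} cm: v = {v:,.0f} cm/s = {v / 100:.1f} m/s "
          f"({100 * v / SPEED_OF_SOUND:.1f}% of the speed of sound)")
```

Both values stay in the tens of m/s or below, consistent with the mouth-measured velocities cited in the question and far below the "85% of the speed of sound" figure.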
CommonCrawl
Privacy-protecting estimation of adjusted risk ratios using modified Poisson regression in multi-center studies Di Shu ORCID: orcid.org/0000-0001-7564-51861, Jessica G. Young1 & Sengwee Toh1 BMC Medical Research Methodology volume 19, Article number: 228 (2019) Cite this article Multi-center studies can generate robust and generalizable evidence, but privacy considerations and legal restrictions often make it challenging or impossible to pool individual-level data across data-contributing sites. With binary outcomes, privacy-protecting distributed algorithms to conduct logistic regression analyses have been developed. However, the risk ratio often provides a more transparent interpretation of the exposure-outcome association than the odds ratio. Modified Poisson regression has been proposed to directly estimate adjusted risk ratios and produce confidence intervals with the correct nominal coverage when individual-level data are available. There are currently no distributed regression algorithms to estimate adjusted risk ratios while avoiding pooling of individual-level data in multi-center studies. By leveraging the Newton-Raphson procedure, we adapted the modified Poisson regression method to estimate multivariable-adjusted risk ratios using only summary-level information in multi-center studies. We developed and tested the proposed method using both simulated and real-world data examples. We compared its results with the results from the corresponding pooled individual-level data analysis. Our proposed method produced the same adjusted risk ratio estimates and standard errors as the corresponding pooled individual-level data analysis without pooling individual-level data across data-contributing sites. We developed and validated a distributed modified Poisson regression algorithm for valid and privacy-protecting estimation of adjusted risk ratios and confidence intervals in multi-center studies. This method allows computation of a more interpretable measure of association for binary outcomes, along with valid construction of confidence intervals, without sharing of individual-level data. In studies where the outcome variable is binary, a logistic regression model is commonly used for convenient estimation of the adjusted (i.e., conditional on measured covariates) odds ratio comparing exposed to unexposed individuals [1, 2]. However, the odds ratio is not easily interpretable because, unlike the risk ratio, it is not a direct measure of a ratio of probabilities, often of primary interest to patients and clinicians [3]. Although the odds ratio approximates the risk ratio under the rare disease assumption (e.g., odds of the outcome < 10% in all exposure and confounder categories) [4], it can be quite different from the risk ratio and produce misleading results when this assumption is not met [3, 5,6,7,8]. Log-binomial regression can directly estimate adjusted risk ratios without requiring the rare disease assumption, but it is susceptible to non-convergence issues when the maximum likelihood estimators lie near the boundary of the parameter space [9, 10]. Poisson regression is another approach to estimating adjusted risk ratios and does not have any known convergence problems in its parameter space. This approach provides consistent estimates of adjusted risk ratios but incorrect estimates of the variance because it relies on a Poisson distributed, rather than binomially distributed, outcome. 
In practice, standard implementation of Poisson regression tends to produce conservative confidence intervals [11]. As a solution to these challenges, Zou [12] proposed a modified Poisson regression approach that allows direct estimation of adjusted risk ratios even when the rare disease assumption is not met. This approach avoids the convergence issues typically observed in log-binomial regression and, unlike conventional Poisson regression, provides consistent variance estimates and confidence intervals with the correct nominal coverage. A growing number of studies are now conducted within multi-center distributed data networks [13]. Such collaborations combine data from multiple sources to generate more reliable evidence using larger and more representative samples. Within these networks, each data-contributing site (i.e., data partner) maintains physical control of their data and may not always be able or willing to share individual-level data for analysis. For example, the Sentinel System is a national program funded by the U.S. Food and Drug Administration to proactively monitor the safety of regulated medical products using electronic healthcare data from multiple data partners [14]. In multi-center studies like those conducted within the Sentinel System, it is often crucial to minimize sharing of sensitive individual-level data to protect patient privacy. The development and applications of analytic methods that enable valid statistical analysis without pooling individual-level data are therefore increasingly important. Privacy-protecting distributed algorithms to conduct logistic regression analyses have been previously developed [15,16,17,18]. To our knowledge, there are currently no distributed algorithms to estimate adjusted risk ratios via modified Poisson regression while avoiding pooling of individual-level data across data partners. In this paper, we propose such an algorithm and provide example R [19] code to implement the algorithm. We also illustrate in simulated and real-world data examples that our algorithm produces adjusted risk ratio estimates and standard errors equivalent to those obtained from the corresponding pooled individual-level data analysis. Theory of modified Poisson regression for pooled individual-level data We begin by describing the general theory of modified Poisson regression in single-database studies. Let X be a vector of covariates, E a binary exposure indicator (E = 1 if exposed and E = 0 if unexposed), and Y the binary outcome variable (Y = 1 if the outcome occurs and Y = 0 otherwise). Let Z be a vector of information on the exposure and covariates. Specifically, Z = (1, g(E, XT))T where g(E, XT) is a vector containing a specified function of E and X. Assume the risk of the outcome conditional on E and X can be written as $$ P\left(Y=1|E,\boldsymbol{X}\right)=\exp \left({\boldsymbol{\beta}}^T\boldsymbol{Z}\right) $$ where β is an unknown vector of parameters. An example of model 1 is \( P\left(Y=1|E,\boldsymbol{X}\right)=\exp \left({\beta}_0+{\beta}_E\ E+{\boldsymbol{\beta}}_{\boldsymbol{X}}^T\boldsymbol{X}\right) \). In this special case, the risk ratio of the outcome comparing the exposed to unexposed and adjusting for covariates X is given by P(Y = 1| E = 1, X)/P(Y = 1| E = 0, X) = exp(βE). 
Alternatively, we might assume a more flexible model that allows interactions between E and X: \( P\left(Y=1|\boldsymbol{X},E\right)=\exp \left({\beta}_0+{\beta}_E\ E+{\boldsymbol{\beta}}_{\boldsymbol{X}}^T\boldsymbol{X}+{\boldsymbol{\beta}}_{E\boldsymbol{X}}^TE\boldsymbol{X}\right) \). Under this model, the risk ratio of the outcome comparing the exposed to unexposed and adjusting for covariates X is given by \( P\left(Y=1|E=1,\boldsymbol{X}\right)/P\left(Y=1|E=0,\boldsymbol{X}\right)=\exp \left({\beta}_E+{\boldsymbol{\beta}}_{E\boldsymbol{X}}^T\boldsymbol{X}\right) \), which depends on the value of X. Suppose we have an independent and identically distributed sample of size n. For each individual i, the following variables are measured: Let Xi be a vector of covariates, Ei a binary exposure indicator (Ei = 1 if exposed and Ei = 0 if unexposed), Yi the binary outcome variable (Yi = 1 if the outcome occurs and Yi = 0 otherwise), and Zi a vector of information on the exposure and covariates, i.e., Zi = (1, g(Ei, XiT))T. Zou [12] provided the theoretical justification for his proposed approach in the setting of a 2 by 2 table (a binary exposure and no covariates). The justification for this approach can be established more generally using the theory of unbiased estimating equations [20]. Provided model 1 is correctly specified, we have E[{Y − exp(ZTβ)}Z] = 0, which leads to the unbiased estimating equation $$ \sum \limits_{i=1}^n\left\{{Y}_i-\exp \left({{\boldsymbol{Z}}_i}^T\boldsymbol{\beta} \right)\right\}{\boldsymbol{Z}}_i=\mathbf{0} $$ Solving (2) for β gives \( \hat{\boldsymbol{\beta}} \), a consistent and asymptotically normal estimator for the true β. A consistent estimator of the variance of \( \hat{\boldsymbol{\beta}} \) is then given by the sandwich variance estimator [20] $$ \hat{\mathit{\operatorname{var}}}\left(\hat{\boldsymbol{\beta}}\right)=\left\{\boldsymbol{H}{\left(\hat{\boldsymbol{\beta}}\right)}^{-1}\right\}\boldsymbol{B}\left(\hat{\boldsymbol{\beta}}\right)\left\{\boldsymbol{H}{\left(\hat{\boldsymbol{\beta}}\right)}^{-1}\right\} $$ where \( \boldsymbol{H}\left(\hat{\boldsymbol{\beta}}\right)=-\sum \limits_{i=1}^n\exp \left({{\boldsymbol{Z}}_i}^T\hat{\boldsymbol{\beta}}\right){\boldsymbol{Z}}_i{{\boldsymbol{Z}}_i}^T \) and \( \boldsymbol{B}\left(\hat{\boldsymbol{\beta}}\right)=\sum \limits_{i=1}^n{\left\{{Y}_i-\exp \left({{\boldsymbol{Z}}_i}^T\hat{\boldsymbol{\beta}}\right)\right\}}^2{\boldsymbol{Z}}_i{{\boldsymbol{Z}}_i}^T \). Zou [12] referred to this procedure as modified Poisson regression because (2) is equivalent to the score equation for the Poisson likelihood but the variance estimator does not rely on the Poisson distribution assumption (clearly unreasonable for binary outcomes). Distributed algorithm for conducting modified Poisson regression in multi-center studies Suppose the n individuals' data are physically stored in K data partners that are unable to share their individual-level data with the analysis center. For k = 1, …, K, let Ωk denote the set of indexes of individuals who are members of the k th data partner. When the individual-level data are available to the analysis center, the estimator \( \hat{\boldsymbol{\beta}} \) and its corresponding variance estimator can be obtained with off-the-shelf statistical software. 
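For the pooled-data case just described, one widely used "off-the-shelf" route in Python is a Poisson GLM with a heteroskedasticity-robust (sandwich) covariance. The sketch below is our own illustration on simulated data, not the authors' code; the variable names and coefficient values are assumptions made for the example.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
x = rng.uniform(size=n)                         # one continuous covariate
e = rng.binomial(1, 0.4, size=n)                # binary exposure indicator
risk = np.exp(-0.6 - 0.5 * e - 0.4 * x)         # log-risk model; true adjusted RR = exp(-0.5)
y = rng.binomial(1, risk)                       # binary outcome

Z = sm.add_constant(np.column_stack([e, x]))    # Z_i = (1, E_i, X_i)
# Modified Poisson: Poisson score equations with a robust ("sandwich") covariance
fit = sm.GLM(y, Z, family=sm.families.Poisson()).fit(cov_type="HC0")
print("adjusted RR:", np.exp(fit.params[1]), "  robust SE of log RR:", fit.bse[1])
```

The point estimate is the same as an ordinary Poisson fit; only the standard errors change, which is what makes the resulting confidence intervals valid for a binary outcome.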
However, when the individual-level data are not available, (2) cannot be directly solved for β to obtain \( \hat{\boldsymbol{\beta}} \), and \( \hat{\mathit{\operatorname{var}}}\left(\hat{\boldsymbol{\beta}}\right) \) cannot be directly calculated using (3). Here we describe a distributed algorithm that produces identical \( \hat{\boldsymbol{\beta}} \) and \( \hat{\mathit{\operatorname{var}}}\left(\hat{\boldsymbol{\beta}}\right) \) in multi-center studies where individual-level data are not pooled. We leverage the Newton-Raphson method such that \( \hat{\boldsymbol{\beta}} \) can be obtained using an iteration-based procedure with only summary-level information being shared between the data partners and the analysis center in each iteration. The r th iterated estimate of β using the Newton-Raphson method is

$$ {\boldsymbol{\beta}}^{(r)}={\boldsymbol{\beta}}^{(r-1)}-\left\{{\boldsymbol{H}}^{(r)}\right\}^{-1}{\boldsymbol{S}}^{(r)} \qquad (4) $$

where \( {\boldsymbol{\beta}}^{(r-1)} \) is the (r − 1) th iterated estimate of β, \( {\boldsymbol{S}}^{(r)}=\sum_{i=1}^n\left\{{Y}_i-\exp \left({\boldsymbol{Z}}_i^T{\boldsymbol{\beta}}^{(r-1)}\right)\right\}{\boldsymbol{Z}}_i \), and \( {\boldsymbol{H}}^{(r)}=-\sum_{i=1}^n\exp \left({\boldsymbol{Z}}_i^T{\boldsymbol{\beta}}^{(r-1)}\right){\boldsymbol{Z}}_i{\boldsymbol{Z}}_i^T \). We observe that \( {\boldsymbol{S}}^{(r)} \) and \( {\boldsymbol{H}}^{(r)} \) can be re-written as summation of site-specific quantities:

$$ {\boldsymbol{S}}^{(r)}=\sum_{k=1}^K{\boldsymbol{S}}_k^{(r)} \qquad (5) $$

$$ {\boldsymbol{S}}_k^{(r)}=\sum_{i\in \Omega_k}\left\{{Y}_i-\exp \left({\boldsymbol{Z}}_i^T{\boldsymbol{\beta}}^{(r-1)}\right)\right\}{\boldsymbol{Z}}_i \qquad (6) $$

$$ {\boldsymbol{H}}^{(r)}=\sum_{k=1}^K{\boldsymbol{H}}_k^{(r)} \qquad (7) $$

$$ {\boldsymbol{H}}_k^{(r)}=-\sum_{i\in \Omega_k}\exp \left({\boldsymbol{Z}}_i^T{\boldsymbol{\beta}}^{(r-1)}\right){\boldsymbol{Z}}_i{\boldsymbol{Z}}_i^T \qquad (8) $$

Therefore, to calculate \( {\boldsymbol{S}}^{(r)} \) and \( {\boldsymbol{H}}^{(r)} \), each data partner k = 1, …, K only needs to calculate and share with the analysis center the summary-level information \( {\boldsymbol{S}}_k^{(r)} \) and \( {\boldsymbol{H}}_k^{(r)} \). Next, consider estimation of the variance of \( \hat{\boldsymbol{\beta}} \). We observe that

$$ \boldsymbol{H}\left(\hat{\boldsymbol{\beta}}\right)=\sum_{k=1}^K{\boldsymbol{H}}_k\left(\hat{\boldsymbol{\beta}}\right) \qquad (9) $$

$$ {\boldsymbol{H}}_k\left(\hat{\boldsymbol{\beta}}\right)=-\sum_{i\in \Omega_k}\exp \left({\boldsymbol{Z}}_i^T\hat{\boldsymbol{\beta}}\right){\boldsymbol{Z}}_i{\boldsymbol{Z}}_i^T \qquad (10) $$

$$ \boldsymbol{B}\left(\hat{\boldsymbol{\beta}}\right)=\sum_{k=1}^K{\boldsymbol{B}}_k\left(\hat{\boldsymbol{\beta}}\right) \qquad (11) $$

$$ {\boldsymbol{B}}_k\left(\hat{\boldsymbol{\beta}}\right)=\sum_{i\in \Omega_k}{\left\{{Y}_i-\exp \left({\boldsymbol{Z}}_i^T\hat{\boldsymbol{\beta}}\right)\right\}}^2{\boldsymbol{Z}}_i{\boldsymbol{Z}}_i^T \qquad (12) $$
Unlike in the estimation of β, no iterations are needed in the sandwich variance estimation. We summarize our distributed algorithm for conducting modified Poisson regression in multi-center studies below. Point estimation Step 0 (Determination of starting values) The analysis center specifies the starting values for the components of β(0) and sends these values to all data partners. Then, for each iteration r until the convergence criteria are met, the following two steps are repeated: Step 1 (r th iteration of data partners) Each data partner k = 1, …, K calculates Sk(r) and Hk(r) using (6) and (8), respectively, based on β(r − 1) received from the analysis center. All data partners then share the values of Sk(r) and Hk(r) with the analysis center. Step 2 (r th iteration of the analysis center) The analysis center calculates S(r) and H(r) using (5) and (7), respectively. The analysis center then calculates β(r) using (4) and shares the value of β(r) with all data partners. The iteration procedure is considered to have converged when the change in the estimates between iterations is within a user-specified tolerance value. In numerical studies to be presented later, we considered a convergence criterion to be met at the (R + 1) th iteration if \( \underset{l}{\max}\left|{\delta_l}^{\left(\mathrm{R}+1\right)}\right|<{10}^{-8} \), where δl(R + 1) = βl(R + 1) − βl(R) if ∣βl(R) ∣ < 0.01 and δl(R + 1) = (βl(R + 1) − βl(R))/βl(R) otherwise, and βl(R) is the l th element of β(R). Once achieving convergence, the analysis center shares the final estimate \( \hat{\boldsymbol{\beta}} \) with all data partners. Variance estimation Step 1 (Calculation of summary-level information by data partners) Each data partner k = 1, …, K calculates \( {\boldsymbol{H}}_k\left(\hat{\boldsymbol{\beta}}\right) \) and \( {\boldsymbol{B}}_k\left(\hat{\boldsymbol{\beta}}\right) \) using (10) and (12), respectively, and then shares the values of \( {\boldsymbol{H}}_k\left(\hat{\boldsymbol{\beta}}\right) \) and \( {\boldsymbol{B}}_k\left(\hat{\boldsymbol{\beta}}\right) \) with the analysis center. Step 2 (Calculation of the variance estimate by the analysis center) The analysis center calculates \( \boldsymbol{H}\left(\hat{\boldsymbol{\beta}}\right) \) and \( \boldsymbol{B}\left(\hat{\boldsymbol{\beta}}\right) \) using (9) and (11), respectively, and then calculates the estimated variance \( \hat{\mathit{\operatorname{var}}}\left(\hat{\boldsymbol{\beta}}\right) \) using (3). Due to mathematical equivalence, the above procedure would provide the same point estimates and sandwich variance estimates as the analysis that uses individual-level data pooled across data partners. Analysis of simulated data We considered a simulation design that enabled us to assess the performance of the proposed summary-level modified Poisson method in the presence of multiple data partners, multiple covariates (including but not limited to data source indicators), and differences in exposure prevalence and outcome incidence across data partners. Although modified Poisson regression is broadly applicable with rare and common outcomes, here we considered a scenario with common outcomes, where logistic regression would provide biased estimates of adjusted risk ratios. Specifically, we simulated a distributed network with three (i.e., K = 3) data partners and n = 10000 individuals with 5000, 2000, and 3000 individuals contributing data from the first, second, and third data partners, respectively. 
We considered five covariates X1, X2, X3, X4 and X5. We generated X1 as a Bernoulli variable with a mean (i.e., P(X1 = 1)) of 0.6, X2 as a continuous variable following the standard uniform distribution, X3 as a continuous variable following the unit exponential distribution, X4 as an indicator that an individual contributed data from the first data partner, and X5 as an indicator that an individual contributed data from the second data partner. The exposure E was generated from a Bernoulli distribution with the probability of being exposed (E = 1) defined as 1/{1 + exp(0.73 − X1 − X2 + X3 − 0.2X4 + 0.2X5)}, indicating a non-randomized study. This setting led to different exposure prevalences across data partners. The resulting exposure prevalence was approximately 40% overall, 43% for the first data partner, 34% for the second data partner, and 38% for the third data partner. The outcome Y was generated from a Bernoulli distribution with the probability of having the outcome (Y = 1) defined as exp(ZTβ) = exp(−0.1 − 0.5E − 0.4X1 − 0.6X2 − 0.5X3 − 0.1X4 + 0.1X5) such that the true adjusted risk ratio comparing the exposed to unexposed was exp(−0.5) = 0.61. The resulting outcome incidence (i.e., risk) varied across the three data partners. This incidence was about 30% for the entire pooled data, 27% for the first data partner, 35% for the second data partner, and 31% for the third data partner. As the reference, we first fit a modified Poisson regression model using pooled individual-level data (Table 1). We then implemented our proposed distributed algorithm that did not require sharing of individual-level data to estimate β. Based on the starting value β(0) = 0, the analysis took seven iterations to converge. The individual-level and summary-level methods produced identical point estimates and sandwich variance-based standard errors (Table 1). The Additional file 1 provides the summary-level information shared between the data partners and the analysis center during each iteration. Table 1 Point Estimates and Standard Errors Using the Summary-Level Modified Poisson Method and Pooled Individual-Level Data Analysis: Analysis of Simulated Data Analysis of real-world data To further illustrate our method, we analyzed a dataset created from the IBM® Health MarketScan® Research Databases, which contain de-identified individual-level healthcare claims information from employers, health plans, hospitals, and Medicare and Medicaid programs fully compliant with U.S. privacy laws and regulations (e.g., Health Insurance Portability and Accountability Act). The study dataset included 9736 patients aged 18–79 years who received sleeve gastrectomy or Roux-en-Y gastric bypass between 1/1/2010 and 9/30/2015. The outcome of interest was any hospitalization during the 2-year follow-up period after surgery. The exposure variable was set to 1 if the patient received sleeve gastrectomy and 0 if the patient received Roux-en-Y gastric bypass. 
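To make the summary-level exchange concrete, here is a minimal Python sketch of the point- and variance-estimation loops on a simulated three-partner network of the shape used in the simulated-data example above (5000, 2000 and 3000 records). It is our own illustration, not the authors' R program; the data-generating coefficients and the simplified convergence check are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_site(n_k):
    """Simulate one data partner's individual-level data (it never leaves the site)."""
    x = rng.uniform(size=n_k)
    e = rng.binomial(1, 0.4, size=n_k)
    Z = np.column_stack([np.ones(n_k), e, x])            # Z_i = (1, E_i, X_i)
    risk = np.exp(Z @ np.array([-0.6, -0.5, -0.4]))      # true adjusted RR = exp(-0.5)
    y = rng.binomial(1, risk).astype(float)
    return Z, y

sites = [make_site(n_k) for n_k in (5000, 2000, 3000)]   # K = 3 data partners
p = 3
beta = np.zeros(p)                                       # Step 0: starting value beta^(0) = 0

# Point estimation: in each iteration the partners share only S_k and H_k (eqs. 6 and 8).
for _ in range(100):
    S, H = np.zeros(p), np.zeros((p, p))
    for Z, y in sites:
        mu = np.exp(Z @ beta)
        S += Z.T @ (y - mu)                              # S_k^(r)
        H += -(Z * mu[:, None]).T @ Z                    # H_k^(r)
    beta_new = beta - np.linalg.solve(H, S)              # Newton-Raphson update (eq. 4)
    converged = np.max(np.abs(beta_new - beta)) < 1e-8   # simplified convergence check
    beta = beta_new
    if converged:
        break

# Variance estimation: one more round of summary-level sharing (eqs. 9-12, then eq. 3).
H_hat, B_hat = np.zeros((p, p)), np.zeros((p, p))
for Z, y in sites:
    mu = np.exp(Z @ beta)
    H_hat += -(Z * mu[:, None]).T @ Z                    # H_k(beta_hat)
    B_hat += (Z * ((y - mu) ** 2)[:, None]).T @ Z        # B_k(beta_hat)
Hinv = np.linalg.inv(H_hat)
cov = Hinv @ B_hat @ Hinv                                # sandwich variance estimator
print("adjusted RR:", np.exp(beta[1]), "  robust SE of log RR:", np.sqrt(cov[1, 1]))
```

Fitting the pooled data directly with the same estimating equations gives numerically identical estimates and sandwich standard errors, because the site-level sums add up to the pooled sums; that equivalence is exactly what the proposed algorithm exploits.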
We estimated the risk ratio of hospitalization comparing sleeve gastrectomy with Roux-en-Y gastric bypass using the pooled individual-level data analysis and the summary-level information approach, adjusting for the following covariates identified during the 365-day period prior to the surgery: age; sex; Charlson/Elixhauser combined comorbidity score; diagnosis of asthma, atrial fibrillation, atrial flutter, coronary artery disease, deep vein thrombosis, gastroesophageal reflux disease, hypertension, ischemic stroke, myocardial infarction, pulmonary embolism, and sleep apnea; use of anticoagulants, assistive walking device, and home oxygen; unique drug classes dispensed and unique generic medications dispensed. Of the 9736 patients in the study dataset, 7877 (81%) patients underwent the sleeve gastrectomy procedure and 1859 (19%) patients had the Roux-en-Y gastric bypass procedure. The outcome event was not rare in the study, with 1485 (19%) sleeve gastrectomy patients and 608 (33%) Roux-en-Y gastric bypass patients having at least one hospitalization during the two-year follow-up period. We randomly partitioned the dataset into three smaller datasets with 2000, 3000 and 4736 patients to create a "simulated" distributed data network. As the reference, the pooled individual-level data analysis produced \( {\hat{\beta}}_E=-0.4632219 \) with a standard error 0.0422368, and a 95% confidence interval: − 0.5460061, − 0.3804377. These results corresponded to an adjusted risk ratio of \( \exp \left({\hat{\beta}}_E\right)=0.63 \) with a 95% confidence interval: 0.58, 0.68. Based on the starting value β(0) = 0, the proposed summary-level modified Poisson method took seven iterations to converge and produced point estimates and sandwich variance-based standard errors identical to those observed in the corresponding pooled individual-level data analysis (Table 2). The adjusted odds ratio from logistic regression was 0.53. As expected, interpreting the estimated adjusted odds ratio as an estimate of the adjusted risk ratio amplified the protective effect of sleeve gastrectomy compared to Roux-en-Y gastric bypass, resulting in an effect estimate that was further from the null (suggesting a 10% greater relative protective effect) than the modified Poisson regression estimate. Table 2 Point Estimates and Standard Errors Using the Summary-Level Modified Poisson Method and Pooled Individual-Level Data Analysis: Analysis of Real-World Data As expected, we had difficulty fitting a log-binomial regression model within this bariatric surgery dataset. Under the starting value β(0) = 0, the first iterated estimate β(1) could not be calculated because the formula for β(1) under the log-binomial regression model includes 1 − exp(ZiTβ(0)) in the denominator, which takes the value 0 when β(0) = 0. We also considered three non-zero starting values. We first set the starting values for all parameters to 0.05, but the analysis stopped at the second iteration due to matrix singularity. We then let the starting values be the estimates obtained from a logistic regression model fit using the entire bariatric surgery dataset, and the analysis converged with \( {\hat{\beta}}_E=-0.7369683 \). Finally, we specified the starting values as the estimates obtained from the modified Poisson regression fit using the entire bariatric surgery dataset, and the analysis converged with \( {\hat{\beta}}_E=-0.4544135 \). 
These results illustrated the convergence problems of log-binomial regression and the sensitivity of this method to starting values. In comparison, the modified Poisson analysis had no convergence problems and its estimates remained the same as those presented in Table 2 when using these alternative starting values. In this paper, we proposed and demonstrated – in both simulated and real-world data – a method that adapts the modified Poisson approach to directly estimate adjusted risk ratios in multi-center studies where sharing of individual-level data is not always feasible or preferred. Our method produced the same risk ratio estimates and sandwich variance estimates as the corresponding pooled individual-level data analysis without pooling individual-level data across data partners. The required summary-level information does not contain any potentially identifiable individual-level data and therefore offers better privacy protection. Analytic methods like the one we proposed here complement appropriate governance and data use agreements to enable the conduct of multi-center studies, especially when sharing of individual-level data is challenging. In terms of privacy protection, the proposed summary-level modified Poisson method serves as an intermediate approach between meta-analysis of site-specific effect estimates from modified Poisson analyses and modified Poisson analysis using pooled individual-level data. Compared to meta-analysis of site-specific effect estimates, our method requires more granular information, but the shared information is summary-level without detailed individual-level data. Unlike meta-analysis that generally only produces approximate results, our method produces results identical to those obtained from the corresponding pooled individual-level data analysis. Compared to the pooled individual-level analysis, however, our method requires multiple file transfers between the data partners and the analysis center. Although this need for information exchange at each iteration means that our proposed method is more labor-intensive to implement in practice, recent advancements in bioinformatics now allow semi-automated or fully-automated file transfers between data partners and the analysis center [17, 21,22,23,24,25,26,27]. For general users who may not have access to such technical infrastructure, we have developed R code that allows manual implementation of our proposed method. This R code (available in Additional file 2) illustrates the analysis of the simulated data from this study but can be easily modified to accommodate different numbers of covariates or different numbers of participating data partners. We assumed the outcome occurrence between individuals to be independent in our analysis. In some real-world situations, this independence assumption may be violated. In our case, it is possible that individuals who seek care in the same delivery system have correlated outcomes. To account for correlated data, Zou and Donner [28] extended the modified Poisson approach to settings with correlated binary outcomes in single-database studies. Future work will extend the proposed summary-level modified Poisson method to analyze correlated data in multi-center distributed data environments. In conclusion, we proposed a privacy-protecting approach to directly estimate adjusted risk ratios using modified Poisson regression analysis for multi-center studies. 
This approach does not require sharing of individual-level data across data partners but produces results that are identical to those obtained from the corresponding pooled individual-level data analysis. Replication R code for generating and analyzing the simulated data example is available online and can be easily modified by interested readers for their own multi-center analyses. The real-world data created from the IBM® MarketScan® Research Databases are currently not available for public sharing. Prentice RL, Pyke R. Logistic disease incidence models and case-control studies. Biometrika. 1979;66(3):403–11. Hosmer DW, Lemeshow S, Sturdivant RX. Applied logistic regression. 3rd ed. Hoboken: Wiley; 2013. Norton EC, Dowd BE, Maciejewski ML. Odds ratios - current best practice and use. JAMA. 2018;320(1):84–5. Greenland S. Interpretation and choice of effect measures in epidemiologic analyses. Am J Epidemiol. 1987;125(5):761–8. Altman DG, Deeks JJ, Sackett DL. Odds ratios should be avoided when events are common. BMJ. 1998;317(7168):1318. Holcomb WL Jr, Chaiworapongsa T, Luke DA, Burgdorf KD. An odd measure of risk: use and misuse of the odds ratio. Obstet Gynecol. 2001;98(4):685–8. Knol MJ, Le Cessie S, Algra A, Vandenbroucke JP, Groenwold RH. Overestimation of risk ratios by odds ratios in trials and cohort studies: alternatives to logistic regression. CMAJ. 2012;184(8):895–9. Tajeu GS, Sen B, Allison DB, Menachemi N. Misuse of odds ratios in obesity literature: an empirical analysis of published studies. Obesity. 2012;20(8):1726–31. Wacholder S. Binomial regression in glim: estimating risk ratios and risk differences. Am J Epidemiol. 1986;123(1):174–84. Skove T, Deddens J, Petersen MR, Endahl L. Prevalence proportion ratios: estimation and hypothesis testing. Int J Epidemiol. 1998;27(1):91–5. McNutt LA, Wu C, Xue X, Hafner JP. Estimating the relative risk in cohort studies and clinical trials of common outcomes. Am J Epidemiol. 2003;157(10):940–3. Zou G. A modified Poisson regression approach to prospective studies with binary data. Am J Epidemiol. 2004;159(7):702–6. Toh S, Platt R, Steiner JF, Brown JS. Comparative-effectiveness research in distributed health data networks. Clin Pharmacol Ther. 2011;90(6):883–7. Ball R, Robb M, Anderson SA, Dal PG. The FDA's sentinel initiative - a comprehensive approach to medical product surveillance. Clin Pharmacol Ther. 2016;99(3):265–8. Fienberg SE, Fulp WJ, Slavkovic AB, Wrobel TA. "Secure" log-linear and logistic regression analysis of distributed databases. In: Domingo-Ferrer J, Franconi L, editors. Privacy in Statistical Databases. PSD 2006. Lecture notes in computer science, vol 4302. Berlin, Heidelberg: Springer; 2006. Karr AF, Fulp WJ, Vera F, Young SS, Lin X, Reiter JP. Secure, privacy-preserving analysis of distributed databases. Technometrics. 2007;49(3):335–45. Jiang W, Li P, Wang S, Wu Y, Xue M, Ohno-Machado L, et al. WebGLORE: a web service for grid LOgistic REgression. Bioinformatics. 2013;29(24):3238–40. El Emam K, Samet S, Arbuckle L, Tamblyn R, Earle C, Kantarcioglu M. A secure distributed logistic regression protocol for the detection of rare adverse drug events. J Am Med Inform Assoc. 2013;20(3):453–61. R Core Team. R: A Language and Environment for Statistical Computing. Vienna. URL https://www.R-project.org/: R Foundation for Statistical Computing; 2018. Stefanski LA, Boos DD. The calculus of M-estimation. Am Stat. 2002;56(1):29–38. Her QL, Malenfant JM, Malek S, Vilk Y, Young J, Li L, et al. 
A query workflow design to perform automatable distributed regression analysis in large distributed data networks. eGEMs. 2018;6(1):11. Jiang X, Wu Y, Marsolo K, Ohno-Machado L. Development of a web service for analysis in a distributed network. eGEMs. 2014;2(1):22. Wolfson M, Wallace SE, Masca N, Rowe G, Sheehan NA, Ferretti V, et al. DataSHIELD: resolving a conflict in contemporary bioscience--performing a pooled analysis of individual-level data without sharing the data. Int J Epidemiol. 2010;39(5):1372–82. Wu Y, Jiang X, Kim J, Ohno-Machado L. Grid binary LOgistic REgression (GLORE): building shared models without sharing data. J Am Med Inform Assoc. 2012;19(5):758–64. Lu CL, Wang S, Ji Z, Wu Y, Xiong L, Jiang X, et al. WebDISCO: a web service for distributed cox model learning without patient-level data sharing. J Am Med Inform Assoc. 2015;22(6):1212–9. Narasimhan B, Rubin DL, Gross SM, Bendersky M, Lavori PW. Software for distributed computation on medical databases: a demonstration project. J Stat Softw. 2017;77(13):22. Meeker D, Jiang X, Matheny ME, Farcas C, D'Arcy M, Pearlman L, et al. A system to build distributed multivariate models and manage disparate data sharing policies: implementation in the scalable national network for effectiveness research. J Am Med Inform Assoc. 2015;22(6):1187–95. Zou G, Donner A. Extension of the modified Poisson regression model to prospective studies with correlated binary data. Stat Methods Med Res. 2013;22(6):661–70. The authors thank Qoua Her at the Harvard Pilgrim Health Care Institute for his help with the creation of the real-world dataset. The authors also thank Xiaojuan Li and Jenna Wong at the Harvard Pilgrim Health Care Institute for their comments on an earlier draft of this manuscript. Dr. Toh was funded in part by the Patient-Centered Outcomes Research Institute (ME-1403-11305), the National Institute of Biomedical Imaging and Bioengineering (U01 EB023683), and a Harvard Pilgrim Health Care Institute Robert H. Ebert Career Development Award. The funding bodies had no role in the design of the study and collection, analysis, and interpretation of data and in writing the manuscript. Department of Population Medicine, Harvard Medical School and Harvard Pilgrim Health Care Institute, Boston, MA, USA Di Shu, Jessica G. Young & Sengwee Toh Di Shu Jessica G. Young Sengwee Toh The draft was written by DS, JGY and ST. DS conceived the idea, wrote the R program and performed all data analyses. JGY and ST helped supervise the presentation of methodology, and the design and interpretations of data analyses. All authors read and approved the final manuscript. Correspondence to Di Shu. This study, including the secondary data analysis using the IBM® MarketScan® Research Databases that contain de-identified individual-level healthcare claims information, was approved as a non-human subject research project by the Institutional Review Board at Harvard Pilgrim Health Care. Additional file 1. Reports the shared summary-level information in the simulated data example. Provides replication R code for the data generation and analysis of the simulated data example. Shu, D., Young, J.G. & Toh, S. Privacy-protecting estimation of adjusted risk ratios using modified Poisson regression in multi-center studies. BMC Med Res Methodol 19, 228 (2019). https://doi.org/10.1186/s12874-019-0878-6 Accepted: 22 November 2019 Distributed analysis Modified Poisson regression Multi-center studies Risk ratio Data analysis, statistics and modelling
CommonCrawl
central bank calendar 2022 pdf » cute nintendo switch oled case » lagrangian mechanics examples » lagrangian mechanics examples Consider a pendulum of mass \(m\) and length \(l\) whose base is driven horizontally by \(x=a\sin wt\). For example, we try to determine the equations of motion of a particle of mass A Lagrangian system can be modi ed to include external forces by adding them directly to Lagrange's equations. In lecture, we presented the elements of this approach and worked some examples. Using this denition in Eq. By voting up you can indicate which examples are most useful and appropriate. Lagrangian for a Particle Interacting with a Field To describe the interaction of a particle with a field, we postulate a Lagrangian of the form 2 1, 2 LU=mv tr. In the nondimensional coordinates, we know that L 4 and L 5 have analytical solutions from Eq. (15) Equations (15) are Lagrange's equations in Cartesian coordinates. To determine the vertical position of the cylinders and the moving pulley 3 distances are required, which can be the three variables y 1, y 2 and y 3 indicated in the figure. Statements made in a weather forecast. mechanics in terms of a variational principle. Lagrangian Mechanics Example. Eulerian information concerns fields, i.e., properties like velocity, pressure and temperature that vary in time and space. Suppose, further, that and are not independent variables. Example: Find the shortest path between points (x 1,y 1) and . . In physics, Lagrangian mechanics is a formulation of classical mechanics founded on the stationary-action principle (also known as the principle of least action). . Lagrangian System Derivation. Microsoft PowerPoint - 007 Examples Constraints and Lagrange Equations.pptx Author: paso Created Date: 9/13/2021 7:19:23 PM . Example: How to use Euler-Lagrange equation. . Amax = 0y(s)dx ds ds = 0sin2sds = 2 1.5708. This handout1 is not meant to provide a rigorous introduction to lagrangian mechanics presented in undergraduate physics. I'm in the process of working through some mechanics examples that use the Lagrangian to find a solution. Classical Mechanics Numerical Example Discrete Mechanics Taylor Variational Integrator Discrete Hamiltonian Variational Integrators Lagrangian Dynamical System Lagrangian System The Con guration Space is a di erentiable manifold, Q. The lagrangian equation in becomes (13.8.8) ( 2 M + m) = m ( cos 2 sin ) These, then, are two differential equations in the two variables. The second case illustrates the power of the above formalism, in a case which is hard to solve with Newton's laws. Example: the brachistochrone problem Examples. Calculus of Variations & Lagrange Multipliers. . The pages look exactly the same as the paperback pages; the files are essentially pdfs . Lagrangian - Examples Generalized Momenta For a simple, free particle, the kinetic Energy is: \begin{equation} T = \frac{1}{2}m\dot{x}^2 \end{equation} . 2 Lagrangian Mechanics Note: ~q(t) describes small variations around the trajectory ~q(t), i.e. The classical Lagrangian is the dierence between the kinetic and potential energies of the system. ~q(t) + ~q(t) is a 'slightly' . Constrained Lagrangian Dynamics. . 1.4 Example of holonomic constraints: a disk on an inclined plane A cylinder of radius arolls without slipping down a plane inclined at an angle to the horizontal. The scheme is Lagrangian and Hamiltonian mechanics. Lagrangian Mechanics Example: Motion of a Half Atwood Machine. 
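For the driven-base pendulum mentioned above (pivot displaced horizontally as \(x = a\sin\omega t\)), the equation of motion can be derived symbolically. The short SymPy sketch below is an added illustration, not part of the original page; the variable names are ours.

```python
import sympy as sp
import sympy.physics.mechanics as me

t = sp.symbols('t')
m, l, g, a, w = sp.symbols('m l g a omega', positive=True)
theta = me.dynamicsymbols('theta')                      # pendulum angle theta(t)

# Bob position when the pivot is driven horizontally as x_p = a*sin(omega*t)
x = a * sp.sin(w * t) + l * sp.sin(theta)
y = -l * sp.cos(theta)

T = sp.Rational(1, 2) * m * (x.diff(t) ** 2 + y.diff(t) ** 2)   # kinetic energy
V = m * g * y                                                   # potential energy
L = sp.simplify(T - V)

LM = me.LagrangesMethod(L, [theta])
eom = LM.form_lagranges_equations()
# Expected result, up to an overall factor m*l:
#   l*theta'' + g*sin(theta) - a*omega**2*sin(omega*t)*cos(theta) = 0
print(sp.simplify(eom[0]))
```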
(6.3) to x, y, and z) may be combined into the vector statement, mx = rV: (6.8) But rV = F, so we again arrive at Newton's second law, F = ma, now in three dimensions. FINAL LAGRANGIAN EXAMPLES 29.1 Re-examine the sliding blocks using E-L 29.2 Normal modes of coupled identical springs 29.3 Final example: a rotating coordinate system 2 29.1 Re-examine the sliding blocks using E-L A block of mass m slides on a frictionless inclined plane of mass M, which itself rests on a horizontal frictionless surface. takes the form V(x;y;z), so the Lagrangian is L = 1 2 m(_x2 + _y2 + _z2)V(x;y;z): (6.7) It then immediately follows that the three Euler-Lagrange equations (obtained by applying eq. The first thing to make absolutely clear is that the Lagrangian method is a method. This is, however, a simple problem that can easily (and probably more quickly) be solved directly from the Newtonian formalism. Probably the best example for (basic, macroscopic) . For example, we try to determine the equations of motion of a particle of mass . Lagrangian mechanics 2.1 From Newton II to the Lagrangian In the coming sections we will introduce both the notion of a Lagrangian as well as the principle of least action. The Lagrangian is then. . Answer (1 of 4): A2A. Step 2: Set the gradient of equal to the zero vector. No-Nonsense Classical . . Even when it comes to finding equations of motion, you may have to supplement Lagrangians with certain other methods - Lagrange multipliers might be necessary to implement some constraints, s. Let's look at our example and . . A Student's Guide to Lagrangians and Hamiltonians . Ships from and sold by Amazon.com. Lagrangian mechanics 2.1 From Newton II to the Lagrangian In the coming sections we will introduce both the notion of a Lagrangian as well as the principle of least action. This week's homework also presents these steps, so if you've started the homework already, you don't need to read the paragraphs describing each step. The variation of the action is therefore bb aa d S m dt dt dt = r v U, (20) Ch 01 -- Problem 07 -- Classical Mechanics Solutions -- Goldstein Lagrangian mechanics, derived! Oh, and other places. . Figure 1 - Simple pendulum Lagrangian formulation The Lagrangian function is . (6.24) We see that L is cyclic in the angle , hence p = For example, we try to determine the equations of motion of a particle of mass This post is mostly about a tool called Lagrangian Mechanics which lets you solve physical problems like an optimization problem. . The motion of a hockey puck around a frictionless air hockey table (with no holes in it.) Lagrangian information concerns the nature and behavior of fluid parcels. Consider, as an example, the derivation of the conserved quantity for the motion of a point particle in the field generated by an infinite helix: from the symmetry of the Lagrangian it is easy to show what the conserved quantity is (it is one of the first exercises in Landau and Lifshitz; vol 1 Mechanics), while try to do the same in Newtonian . . Lagrangian does not explicitly depend on . . The Lagrangian equations can then be written as simply; \begin{equation} \frac{d p_k}{dt }= \frac{\partial L}{\partial q_k} \end{equation} But what if a particular Lagrangian is missing . Over Newtonian Mechanics 7.1 Lagrange's Equations for Unconstrained Motion Lagrangian Connection to Euler-Lagrange Generalized Coordinates Example 7.1 Generalized Force and Momentum . "A cold air mass is moving in from the North." (Lagrangian) Suppose we have a system with one particle. 
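The fragmentary "(6.7)/(6.8)" equations above appear to come from the standard particle-in-a-potential derivation; restated cleanly as a reconstruction (not a verbatim quote of the source):

$$L=\tfrac{1}{2}m\left(\dot x^2+\dot y^2+\dot z^2\right)-V(x,y,z)\qquad (6.7)$$

$$\frac{d}{dt}\frac{\partial L}{\partial \dot x_i}=\frac{\partial L}{\partial x_i}\ \ \Longrightarrow\ \ m\,\ddot{\mathbf r}=-\nabla V\qquad (6.8)$$

and since \(-\nabla V=\mathbf F\), this is just Newton's second law \(\mathbf F=m\mathbf a\) in three dimensions.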
For Newtonian mechanics, the Lagrangian is chosen to be \(L=T-V\) (4), where \(T\) is kinetic energy, \(\tfrac{1}{2}mv^{2}\), and \(V\) is potential energy, which we wrote as in equations (1b) and (1c). In case you missed it, here. One that brought us quantum mechanics, and thus the digital age. A common theme in all of the books (except the 7th one!) The Kepler problem is one of the most foundational physics problems, perhaps, of all time and it has to do with solving for the motion of two massive bodies (such as planets) orbiting each other under the influence of gravity. Constraints and Friction Forces. Like the Lagrangian formulation, one can use generalized coordinates with the Hamiltonian; however, the Hamiltonian is written in terms of coordinates and their conjugate momenta rather than the coordinates and their time derivatives as with the Lagrangian. In classical mechanics, it is absolutely the same physics as Newton's method. Symmetry and Conservation Laws. It is an example of a general feature of Lagrangian mechanics. But the benefits of using the Lagrangian approach become obvious if we consider more complicated problems. For example, for a system of \(n\) coordinates that involves \(m\) holonomic constraints, there are \(s=n-m\) independent generalized coordinates. In particular we have now rephrased the variational problem as the solution to a differential equation: \(y(x)\) is an extremum of the functional if and only if it satisfies the Euler-Lagrange equation. Physics 5153 Classical Mechanics, Small Oscillations, 1 Introduction: As an example of the use of the Lagrangian, we will examine the problem of small oscillations about a stable equilibrium point. Classical Mechanics Lecture 3 Part 1 -- Introduction. Flammable Maths. 8.01x - Lect 6 - Newton's Laws. Worked examples in classical Lagrangian mechanics. Physics 68 Lagrangian Mechanics (1 of 25) What is Lagrangian Mechanics? The other two schemes are Hamiltonian mechanics. The maximum area is then given by. In Lagrangian mechanics the energy \(E\) is given as \(E=\sum_i \dot{q}_i\,\frac{\partial L}{\partial \dot{q}_i}-L\). Now in the cases where \(L\) has explicit time dependence, \(E\) will not be conserved. LAGRANGIAN AND HAMILTONIAN MECHANICS: SOLUTIONS TO THE EXERCISES. Understanding of the material is enhanced by numerous in-depth examples throughout the book, culminating in non-trivial applications. Its original prescription rested on two principles. However, the most interesting example covered is the Kepler problem using Lagrangian mechanics. Example: Linear Friction Force Using the Modified Lagrangian. Now, this modified Lagrangian only works for linear drag, so you can't include things like quadratic drag or friction due to normal force. Here are the examples of the python api sympy.physics.mechanics.Lagrangian taken from open source projects. 0 Reference Materials, 0.1 Lagrangian Mechanics (mostly, by M. G. Calkin. Newtonian mechanics. If you wish to include these, you'll have to use a dissipation function. Lagrangian Mechanics begins with a proper historical perspective on the Lagrangian method by presenting Fermat's Principle of Least Time (as an. As an example, suppose \(V(x,t)=mgx\), i.e., we have a particle moving in a uniform gravitational field.
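Since the crawl text mentions the python api sympy.physics.mechanics.Lagrangian, here is a hedged, minimal sketch of how that interface is typically used for the simple pendulum (assemble T - V from a Particle, then feed it to LagrangesMethod); the frame and point names N, O, P and the sign conventions are local choices, not anything taken from the quoted sources.

from sympy import symbols, sin, cos
from sympy.physics.mechanics import (dynamicsymbols, ReferenceFrame, Point,
                                     Particle, Lagrangian, LagrangesMethod)

theta = dynamicsymbols('theta')                    # pendulum angle
m, l, g = symbols('m l g', positive=True)

N = ReferenceFrame('N')                            # inertial frame
O = Point('O')                                     # fixed pivot
O.set_vel(N, 0)

P = O.locatenew('P', l*sin(theta)*N.x - l*cos(theta)*N.y)   # bob position
P.set_vel(N, P.pos_from(O).dt(N))

bob = Particle('bob', P, m)
bob.potential_energy = m*g*(P.pos_from(O) & N.y)   # V = -m*g*l*cos(theta), pivot as reference

L = Lagrangian(N, bob)                             # builds T - V
lm = LagrangesMethod(L, [theta])
print(lm.form_lagranges_equations())               # ~ m*l**2*theta'' + m*g*l*sin(theta)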
Let us evaluate the action for the path \(x(t)=\frac{t_2-t}{t_2-t_1}\,x_1+\frac{t-t_1}{t_2-t_1}\,x_2\). examples often concern particles, conceived in an essentially classical way, and how they might interact with one another when they collide. The simulation is written in C++ and uses the QT application framework. The radius of the hemisphere is \(R\) and the particle is located by the polar angle \(\theta\) and the azimuthal angle \(\phi\). Calculus of Variations ft. We begin by defining the generalized variables. The equation on the right-hand side is called the Euler-Lagrange equation for. Lagrangian and Hamiltonian Mechanics, Melvin G. Calkin, 1999: This book contains the exercises from the classical mechanics text Lagrangian and Hamiltonian Mechanics, together with their complete solutions. As an example, the Lagrange function of a pendulum considered in Newtonian mechanics above has the form \(L=\frac{ml^{2}\dot{\varphi}^{2}}{2}+mgl\cos\varphi\) (9), where \(\varphi=q\) and \(\dot{\varphi}=\dot{q}\). But the benefits of using the Lagrangian approach become obvious if we consider more complicated problems. Plug each one into. Lagrangian Mechanics Constraints. where is some function of three variables. Suppose that we have a dynamical system described by two generalized coordinates, \(q_1\) and \(q_2\). Now let's go back and finally solve the problem that I used to motivate the calculus of variations in the first place. The description of motion about a stable equilibrium is one of the most important problems in physics. First that we should try to. Let's get started though. For gravity considered over a larger volume, we might use \(V=-\frac{Gm_1m_2}{r}\). Lagrangians only give you a means of finding the equations of motion, not solving them. A Review of Analytical Mechanics (PDF). Lagrangian & Hamiltonian Mechanics. (19) where the first term is just the Lagrangian of a free particle. Here are some examples: 1. This example will also be used to illustrate how to use Maxima to solve Lagrangian mechanics problems. In other words, find the critical points of. The use of generalized coordinates in Lagrangian mechanics simplifies derivation of the equations of motion for constrained systems. It's probably a good idea to understand just what the heck that means. The first reason is for quantum mechanics. We use the plural (equations), because Lagrange's equations are a set of equations. and hence the Euler-Lagrange equations are proved! Sometimes, when we apply the Euler-Lagrange equation to more than one generalized coordinate, we end up with coupled differential equations, which are two or more equations that depend on each other as functions of time. \(x(s)=\cos s\), \(y(s)=\sin s\), \(0<s<\pi\). In this section two examples are provided in which the above concepts are applied. through each step of the Lagrangian procedure for solving a mechanics problem using a simple example. Rigid Body Dynamics (PDF): Coordinates of a Rigid Body. In this example, we will plot the Lagrange points for the system as a function of \(\pi_2\). However, the collinear Lagrange points do. This will be an equivalent, but much more powerful, formulation of Newtonian mechanics than what can be achieved starting from Newton's second law. Lagrangian mechanics yields the Lagrange equations for mechanics.
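As a quick check of the parametric relations x(s) = cos s, y(s) = sin s quoted above, the short sketch below (mine) verifies with SymPy that the curve has length pi and encloses an area of pi/2, approximately 1.5708, the value cited earlier for A_max.

import sympy as sp

s = sp.symbols('s')
x, y = sp.cos(s), sp.sin(s)

# Arc length of the curve for 0 < s < pi (the fixed perimeter)
length = sp.integrate(sp.sqrt(sp.diff(x, s)**2 + sp.diff(y, s)**2), (s, 0, sp.pi))

# Area between the curve and the x-axis; -dx/ds = sin(s) >= 0 on (0, pi)
area = sp.integrate(y*(-sp.diff(x, s)), (s, 0, sp.pi))

print(length)             # pi
print(area, sp.N(area))   # pi/2  1.5707963...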
As another example, consider a particle moving in the \((x,y)\) plane under the influence of a potential \(U(x,y)=U\bigl(\sqrt{x^{2}+y^{2}}\bigr)\) which depends only on the particle's distance from the origin, \(\rho=\sqrt{x^{2}+y^{2}}\). Lagrangian: named after Joseph Lagrange (1700's), a fundamental quantity in the field of Lagrangian mechanics. Example: Show that this holds for Cartesian coordinates. Since \(\partial U/\partial\dot{q}_n=0\), \(\frac{\partial (T-U)}{\partial q_n}-\frac{d}{dt}\frac{\partial (T-U)}{\partial\dot{q}_n}=0\); defining \(L(q_n,\dot{q}_n)\equiv T-U\), this becomes \(\frac{\partial L}{\partial q_n}-\frac{d}{dt}\frac{\partial L}{\partial\dot{q}_n}=0\) (the "Lagrangian" and the "Euler-Lagrange equations of motion"). Statements made in a weather forecast. Step 3: Consider each solution, which will look something like. It is intended primarily for instructors who are using Lagrangian and Hamiltonian Mechanics in their course, but it. The Lagrangian is: \(L=\frac{mR^{2}}{2}\bigl(\dot{\phi}^{2}\sin^{2}\theta+\dot{\theta}^{2}\bigr)\). The State Space is the corresponding tangent bundle, TQ, with local coordinates \((q,\dot{q})\). Contents: 0.1 Preface. Problem 1: Step-by-Step! For this example we are using the simplest of pendula, i.e. An Introduction to Lagrangian Mechanics begins with a proper historical perspective on the Lagrangian method by presenting Fermat's Principle of Least Time (as an introduction to the Calculus of Variations) as well as the principles of Maupertuis, Jacobi, and d'Alembert that preceded Hamilton's formulation of the Principle of Least Action, from which the Euler-Lagrange equations of motion are derived. It's just a way to solve the same problems more directly. The Lagrangian equation becomes Eq. (13.8.7). The Lagrangian part of the analysis is over; we now have to see if we can do anything with these equations. In Lagrangian mechanics we can use any coordinate system we want, as long as the Lagrangian can be represented in terms of that preferred coordinate system. By working out a simple example, we show that the Lagrangian approach is equivalent to the Newtonian approach in terms of the system's equation of motion. The pendulum's Lagrangian function is \(L(\theta,\dot{\theta})=m\ell^{2}\bigl(\tfrac{1}{2}\dot{\theta}^{2}+\tfrac{g}{\ell}\cos\theta\bigr)\). This is, however, a simple problem that can easily (and probably more quickly) be solved directly from the Newtonian formalism. Lagrangian vs. Newton-Euler Methods: There are typically two ways to derive the equation of motion for an open-chain robot, the Lagrangian method and the Newton-Euler method. Lagrangian Formulation: energy-based method; dynamic equations in closed form; often used for study of dynamic properties and analysis of control methods. Newton-Euler Formulation. If you read some introductory mechanics text like David Morin's Introduction to Classical Mechanics about Euler-Lagrange equations, you get a large amount of simple examples like the "moving plane" (Problem 6.1 in the link above) or the double pendulum of how to apply the Euler-Lagrange equations. Ashmit Dutta (September 2, 2020) Lagrangian Handout, Example 3.1 (2017 China Semi-Finals): A solid cylinder of mass \(m\) and radius \(r\) rests on the inside. Lagrangian mechanics to see how the procedure is applied and that the result obtained is the same. Our final result is this: The curve that maximizes the area A is described by the parametric relations. Here is an example of a pendulum: The only other possible coordinate system to work with is the cyclic coordinate. to analytical mechanics, using intuitive examples to illustrate the underlying mathematics, helping students formulate, solve and interpret problems in mechanics. Classical Mechanics and Relativity: Lecture 9. In this lecture I work through in detail several examples of classical mechanics problems, which I solve using t.
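The central-potential example at the start of this passage can also be checked symbolically. Taking the standard polar-coordinate Lagrangian L = (1/2)m(rho'^2 + rho^2*phi'^2) - U(rho) for this setup, the sketch below (mine) shows that phi is cyclic, so its conjugate momentum p_phi = m*rho^2*phi' is conserved, which is the angular-momentum conservation referred to in the surrounding fragments.

import sympy as sp

t = sp.symbols('t')
m = sp.symbols('m', positive=True)
rho, phi = sp.Function('rho'), sp.Function('phi')
U = sp.Function('U')

# Planar motion in a central potential U(rho)
L = sp.Rational(1, 2)*m*(rho(t).diff(t)**2 + rho(t)**2*phi(t).diff(t)**2) - U(rho(t))

p_phi = sp.diff(L, phi(t).diff(t))   # conjugate momentum: m*rho**2*phi'
print(p_phi)
print(sp.diff(L, phi(t)))            # 0: phi is cyclic, so p_phi (angular momentum) is conserved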
The Lagrangian is divided into a center-of-mass term and a relative motion term. Indeed it has pointed us beyond that as well. The General Dissipation Function. The double pendulum, but with the lower mass attached by a spring instead of a string. Examples: A particle is constrained to move in the x-y plane; the equation of constraint is \(z=0\). (83) and repeated here for reference: \(L_4\): \(x=\tfrac{1}{2}-\pi_2\), \(y=\tfrac{\sqrt{3}}{2}\); \(L_5\): \(x=\tfrac{1}{2}-\pi_2\), \(y=-\tfrac{\sqrt{3}}{2}\). Generalized Momenta. where \(M\) is the total mass, \(\mu\) is the reduced mass, and \(U\) the potential of the radial force. Newtonian mechanics is a consequence of a more general scheme. Here are some examples: 1. Newtonian mechanics. Variational Principles and Lagrangian Mechanics, Physics 3550, Fall 2012. Relevant Sections in Text: Chapters 6 and 7. In Lagrangian mechanics you minimize the total action of a system to find its motion. MIT 2.003SC Engineering Dynamics, Fall 2011. View the complete course: http://ocw.mit.edu/2-003SCF11. Instructor: J. Kim Vandiver. License: Creative Commons BY-NC-. We will prove all this in the coming week. The first example establishes that in a simple case, the Newtonian approach and the Lagrangian formalism agree. Compare our Lagrangian approach to the solution using the Newtonian algorithm in deriving Kepler's laws. Imposing constraints on a system is simply another way of stating that there are forces present in the problem that cannot be specified directly, but are known in terms of their effect on the motion of the system. This program simulates the motion of a simple pendulum whose base is driven horizontally by \(x=a\sin\omega t\). If you think you have discovered a suitable Lagrangian for a problem, be it from quantum mechanics, classical mechanics or relativity, you can easily check whether the Lagrangian you found describes your problem correctly or not by using the Euler-Lagrange equation. The Lagrangian and Eulerian specifications of the kinematics and dynamics of the flow field are related by the material derivative (also called the Lagrangian derivative, convective derivative, substantial derivative, or particle derivative). The Lagrangian, expressed in two-dimensional polar coordinates \((\rho,\phi)\), is \(L=\tfrac{1}{2}m\bigl(\dot{\rho}^{2}+\rho^{2}\dot{\phi}^{2}\bigr)-U(\rho)\). This is, however, a simple problem that can easily (and probably more quickly) be solved directly from the Newtonian formalism. Consider a particle of mass \(m\) sitting on a frictionless rod lying in the x-y plane pointing in. To substitute this into the EL equation we must first evaluate the partial derivative of \(L\) with respect to the relevant coordinate.
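The text refers to a program, described as written in C++ with the Qt framework, that simulates the pendulum whose base is driven horizontally by x = a*sin(omega*t). A minimal Python sketch of the same idea, integrating the equation of motion l*theta'' + g*sin(theta) = a*omega^2*sin(omega*t)*cos(theta) with made-up illustrative parameter values, could look like this.

import numpy as np
from scipy.integrate import solve_ivp

g, l = 9.81, 1.0          # gravity and rod length
a, w = 0.05, 5.0          # drive amplitude and angular frequency (illustrative values only)

def rhs(t, state):
    theta, omega = state
    # l*theta'' + g*sin(theta) = a*w**2*sin(w*t)*cos(theta)
    return [omega, (a*w**2*np.sin(w*t)*np.cos(theta) - g*np.sin(theta))/l]

sol = solve_ivp(rhs, (0.0, 20.0), [0.2, 0.0], max_step=0.01)
print(sol.t[-1], sol.y[0][-1])   # final time and final pendulum angle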
CommonCrawl
How to draw a Markov network graph for two or more variables. For $i \in \{1, 2, 3\}$, let $X_i$ be a random variable for the event that a coin toss comes up heads (which occurs with probability $q$). Supposing that the $X_i$ are independent, define $X_4 = X_1 ⊕ X_2$ and $X_5 = X_2 ⊕ X_3$, where $⊕$ denotes addition in modulo two arithmetic (XOR logical operation). How do I draw a directed graphical model (the graph and conditional probability tables) for these five random variables? How do I draw an undirected graphical model (the graph and respective potentials) for these five variables? Under what conditions on $q$ do we have $X_5 \perp \!\!\! \perp X_3$ and $X_4 \perp \!\!\! \perp X_1$? Is either of these marginal independence assertions implied by the graphs in (1) or (2)? conditional-probability markov-chain The directed graphical model is simple: $X_1 \to X_4 \leftarrow X_2 \to X_5 \leftarrow X_3$. The CPTs you have already described in your question: \begin{align} P(X_1=1)=P(X_2=1)=P(X_3=1)=q \end{align} \begin{align} P(X_4=1|X_1=1\;\&\;X_2=1)&=P(X_4=1|X_1=0\;\&\;X_2=0)\\ &=0\\ P(X_4=1|X_1=1\;\&\;X_2=0)&=P(X_4=1|X_1=0\;\&\;X_2=1)\\ &=1\\ P(X_5=1|X_2=1\;\&\;X_3=1)&=P(X_5=1|X_2=0\;\&\;X_3=0)\\ &=0\\ P(X_5=1|X_2=1\;\&\;X_3=0)&=P(X_5=1|X_2=0\;\&\;X_3=1)\\ &=1 \end{align} When converting this directed network to an undirected Markov network, you must "moralize" the graph, i.e. connect the parents of a common child node, because conditioning on the child node induces a dependency between the parents. So you need to connect $X_1$ to $X_2$ and $X_2$ to $X_3$, like so: I will leave the question about clique potentials to another user as I don't have much experience with undirected Markov networks. When $X_1 \perp \!\!\! \perp X_4$, we have that $P(X_4=1)=P(X_4=1|X_1=1)$ and $P(X_4=1)=P(X_4=1|X_1=0)$. So we calculate those probabilities and solve for $q$: \begin{align} P(X_4=1) &= P(X_1=1 \; \& \; X_2=0) + P(X_1=0 \; \& \; X_2=1)\\ &= 2q(1-q) \end{align} \begin{align} P(X_4=1|X_1=0) &= P(X_2=1)\\ &= q \end{align} \begin{align} P(X_4=1|X_1=1) &= P(X_2=0)\\ &= 1-q \end{align} Giving us $q=\frac{1}{2}$, 1 or 0. These values of $q$ produce the independence $X_1 \perp \!\!\! \perp X_4$ (and by symmetry, also $X_3 \perp \!\!\! \perp X_5$). These marginal independences are not implied by either of the graphs – this is a case of the distribution being "unfaithful" to the directed graph. Lizzie Silver
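The algebra in the answer can be cross-checked by brute force. The sketch below (mine, not part of the original answer) enumerates the joint distribution of (X1, X2) for a given q, builds the joint distribution of (X1, X4) with X4 = X1 XOR X2, and confirms that the marginal independence holds exactly at q = 0, 1/2 and 1.

from itertools import product
from fractions import Fraction

def independent_x1_x4(q):
    """Exact check of X1 independent of X4, where X4 = X1 XOR X2 and X1, X2 ~ Bernoulli(q)."""
    joint = {}
    for x1, x2 in product((0, 1), repeat=2):
        prob = (q if x1 else 1 - q)*(q if x2 else 1 - q)
        joint[(x1, x1 ^ x2)] = joint.get((x1, x1 ^ x2), Fraction(0)) + prob
    px1 = {v: sum(joint.get((v, x4), Fraction(0)) for x4 in (0, 1)) for v in (0, 1)}
    px4 = {v: sum(joint.get((x1, v), Fraction(0)) for x1 in (0, 1)) for v in (0, 1)}
    return all(joint.get((x1, x4), Fraction(0)) == px1[x1]*px4[x4]
               for x1 in (0, 1) for x4 in (0, 1))

for q in (Fraction(0), Fraction(1, 4), Fraction(1, 2), Fraction(3, 4), Fraction(1)):
    print(q, independent_x1_x4(q))   # True only for q = 0, 1/2 and 1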
CommonCrawl
How to make the two important years 1492 and 1969 with a minimum number of 1s. Using only 1s, make 1492 and 1969 with the minimum number of digits. Rules: allowed operations +, -, ×, ÷, ^, (), and ! (factorial). Concatenation of the original digits is allowed, but not (1+1)1=21. Note, the () can also be used as binomial coefficient. Example: $\binom{11}{1+1}+1111^{11} +(11+1)!$ The record to beat is 13 for both! formation-of-numbers ThomasL no square roots? – Bass Jul 13 '19 at 19:24 11 for 1492: $(((1+1+1)!)!+(11+1+1)\times(1+1))\times(1+1)$ $=(6!+26)\times2$ $=746\times2$ $=1492$ and 10 for 1969: $(\frac{((1+1+1)!)!}{1+1+1+1}-1)\times11$ $=(\frac{720}{4}-1)\times11$ $=179\times11$ $=1969$ JMP
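Both formulas in the answer are easy to verify mechanically; this small sketch (mine) evaluates each expression and counts the 1s it uses.

from math import factorial

# 1492 with eleven 1s: (((1+1+1)!)! + (11+1+1)*(1+1)) * (1+1)
expr_1492 = "(factorial(factorial(1+1+1)) + (11+1+1)*(1+1)) * (1+1)"
# 1969 with ten 1s: (((1+1+1)!)! / (1+1+1+1) - 1) * 11
expr_1969 = "(factorial(factorial(1+1+1)) // (1+1+1+1) - 1) * 11"

for expr in (expr_1492, expr_1969):
    print(eval(expr), "using", expr.count("1"), "ones")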
CommonCrawl
Is Seniority a Partial Dynamic Symmetry in the First $\nu g_{9/2}$ Shell? (1710.11207) A.I. Morales, G. Benzoni, H. Watanabe, G. de Angelis, S. Nishimura, L. Coraggio, A. Gargano, N. Itaco, T. Otsuka, Y. Tsunoda, P. Van Isacker, F. Browne, R. Daido, P. Doornenbal, Y. Fang, G. Lorusso, Z. Patel, S. Rice, L. Sinclair, P.-A. Söderström, T. Sumikama, J. Wu, Z.Y. Xu, A. Yagi, R. Yokoyama, H. Baba, R. Avigo, F.L. Bello Garrote, N. Blasi, A. Bracco, A.M. Bruce, F. Camera, S. Ceruti, F.C.L. Crespi, M.-C. Delattre, Zs. Dombradi, A. Gottardo, T. Isobe, I. Kojouharov, N. Kurz, I. Kuti, S. Lalkovski, K. Matsui, B. Melon, D. Mengoni, T. Miyazaki, V. Modamio-Hoybjor, S. Momiyama, D.R. Napoli, M. Niikura, R. Orlandi, Zs. Podolyák, P.H. Regan, H. Sakurai, E. Sahin, D. Sohler, H. Schaffner, R. Taniuchi, J. Taprogge, Zs. Vajta, J.J. Valiente-Dobón, O. Wieland, M. Yalcinkaya May 2, 2018 nucl-ex, nucl-th The low-lying structures of the midshell $\nu g_{9/2}$ Ni isotopes $^{72}$Ni and $^{74}$Ni have been investigated at the RIBF facility in RIKEN within the EURICA collaboration. Previously unobserved low-lying states were accessed for the first time following $\beta$ decay of the mother nuclei $^{72}$Co and $^{74}$Co. As a result, we provide a complete picture in terms of the seniority scheme up to the first $(8^+)$ levels for both nuclei. The experimental results are compared to shell-model calculations in order to define to what extent the seniority quantum number is preserved in the first neutron $g_{9/2}$ shell. We find that the disappearance of the seniority isomerism in the $(8^+_1)$ states can be explained by a lowering of the seniority-four $(6^+)$ levels as predicted years ago. For $^{74}$Ni, the internal de-excitation pattern of the newly observed $(6^+_2)$ state supports a restoration of the normal seniority ordering up to spin $J=4$. This property, unexplained by the shell-model calculations, is in agreement with a dominance of the single-particle spherical regime near $^{78}$Ni.
Hsiung, M. Hutcheson, T. Inagaki, M. Isoe, E. Iwai, T. Kamibayashi, I. Kamiji, N. Kawasaki, E. J. Kim, Y. J. Kim, J. W. Ko, T. K. Komatsubara, A. S. Kurilin, G. H. Lee, H. S. Lee, J. W. Lee, S. K. Lee, G. Y. Lim, C. Lin, J. Ma, Y. Maeda, T. Masuda, T. Matsumura, D. Mcfarland, J. Micallef, K. Miyazaki, K. Morgan, R. Murayama, D. Naito, K. Nakagiri, Y. Nakajima, Y. Nakaya, H. Nanjo, T. Nomura, T. Nomura, Y. Odani, R. Ogata, H. Okuno, T. Ota, Y. D. Ri, M. Sasaki, N. Sasao, K. Sato, T. Sato, S. Seki, T. Shimogawa, T. Shinkawa, S. Shinohara, K. Shiomi, J. S. Son, J. Stevens, S. Su, Y. Sugiyama, S. Suzuki, Y. Tajima, G. Takahashi, Y. Takashima, M. Tecchio, I. Teo, M. Togawa, T. Toyoda, Y. C. Tung, T. Usuki, Y. W. Wah, H. Watanabe, N. Whallon, J. K. Woo, J. Xu, M. Yamaga, S. Yamamoto, T. Yamanaka, H. Yamauchi, Y. Yanagida, H. Yokota, H. Y. Yoshida, H. Yoshimoto Dec. 28, 2016 hep-ex We searched for the $CP$-violating rare decay of neutral kaon, $K_{L} \to \pi^0 \nu \overline{\nu}$, in data from the first 100 hours of physics running in 2013 of the J-PARC KOTO experiment. One candidate event was observed while $0.34\pm0.16$ background events were expected. We set an upper limit of $5.1\times10^{-8}$ for the branching fraction at the 90\% confidence level (C.L.). An upper limit of $3.7\times10^{-8}$ at the 90\% C.L. for the $K_{L} \to \pi^{0} X^{0}$decay was also set for the first time, where $X^{0}$ is an invisible particle with a mass of 135 MeV/$c^{2}$. Three-dimensional electronic structures and the metal-insulator transition in Ruddlesden-Popper iridates (1610.04361) A. Yamasaki, H. Fujiwara, S. Tachibana, D. Iwasaki, Y. Higashino, C. Yoshimi, K. Nakagawa, Y. Nakatani, K. Yamagami, H. Aratani, O. Kirilmaz, M. Sing, R. Claessen, H. Watanabe, T. Shirakawa, S. Yunoki, A. Naitoh, K. Takase, J. Matsuno, H. Takagi, A. Sekiyama, Y. Saitoh Oct. 14, 2016 cond-mat.str-el In this study, we systematically investigate 3D momentum($\hbar k$)-resolved electronic structures of Ruddlesden-Popper-type iridium oxides Sr$_{n+1}$Ir$_n$O$_{3n+1}$ using soft-x-ray (SX) angle-resolved photoemission spectroscopy (ARPES). Our results provide direct evidence of an insulator-to-metal transition that occurs upon increasing the dimensionality of the IrO$_2$-plane structure. This transition occurs when the spin-orbit-coupled $j_{\rm eff}$=1/2 band changes its behavior in the dispersion relation and moves across the Fermi energy. In addition, an emerging band along the $\Gamma$(0,0,0)-R($\pi$,$\pi$,$\pi$) direction is found to play a crucial role in the metallic characteristics of SrIrO$_3$. By scanning the photon energy over 350 eV, we reveal the 3D Fermi surface in SrIrO$_3$ and $k_z$-dependent oscillations of photoelectron intensity in Sr$_3$Ir$_2$O$_7$. In contrast to previously reported results obtained using low-energy photons, folded bands derived from lattice distortions and/or magnetic ordering make significantly weak (but finite) contributions to the $k$-resolved photoemission spectrum. At the first glance, this leads to the ambiguous result that the observed $k$-space topology is consistent with the unfolded Brillouin zone (BZ) picture derived from a non-realistic simple square or cubic Ir lattice. Through careful analysis, we determine that a superposition of the folded and unfolded band structures has been observed in the ARPES spectra obtained using photons in both ultraviolet and SX regions. 
To corroborate the physics deduced using low-energy ARPES studies, we propose to utilize SX-ARPES as a powerful complementary technique, as this method surveys more than one whole BZ and provides a panoramic view of electronic structures. Search for electron antineutrinos associated with gravitational wave events GW150914 and GW151226 using KamLAND (1606.07155) KamLAND Collaboration: A. Gando, Y. Gando, T. Hachiya, A. Hayashi, S. Hayashida, H. Ikeda, K. Inoue, K. Ishidoshiro, Y. Karino, M. Koga, S. Matsuda, T. Mitsui, K. Nakamura, S. Obara, T. Oura, H. Ozaki, I. Shimizu, Y. Shirahata, J. Shirai, A. Suzuki, T. Takai, K. Tamae, Y. Teraoka, K. Ueshima, H. Watanabe, A. Kozolov, Y. Takemoto, S. Yoshida, K. Fushimi, A. Piepke, T. I. Banks, B. E. Berger, B. K. Fujikawa, T. O'Donnell, J. G. Learned, J. Maricic, M. Sakai, L. A. Winslow, E. Krupczak, J. Ouellet, Y. Efremenko, H. J. Karwowski, D. M. Markoff, W. Tornow, J. A. Detwiler, S. Enomoto, M. P. Decowski Oct. 4, 2016 hep-ex, nucl-ex, astro-ph.HE We present a search for low energy antineutrino events coincident with the gravitational wave events GW150914 and GW151226, and the candidate event LVT151012 using KamLAND, a kiloton-scale antineutrino detector. We find no inverse beta-decay neutrino events within $\pm 500$ seconds of either gravitational wave signal. This non-detection is used to constrain the electron antineutrino fluence and the luminosity of the astrophysical sources. Characterization of the Spontaneous Light Emission of the PMTs used in the Double Chooz Experiment (1604.06895) Double Chooz collaboration: Y. Abe, T. Abrahão, H. Almazan, C. Alt, S. Appel, E. Baussan, I. Bekman, M. Bergevin, T.J.C. Bezerra, L. Bezrukov, E. Blucher, T. Brugière, C. Buck, J. Busenitz, A. Cabrera, E. Calvo, L. Camilleri, R. Carr, M. Cerrada, E. Chauveau, P. Chimenti, A.P. Collin, E. Conover, J.M. Conrad, J.I. Crespo-Anadón, K. Crum, A.S. Cucoanes, E. Damon, J.V. Dawson, H. de Kerret, J. Dhooghe, D. Dietrich, Z. Djurcic, J.C. dos Anjos, M. Dracos, A. Etenko, M. Fallot, J. Felde, S.M. Fernandes, V. Fischer, D. Franco, M. Franke, H. Furuta, I. Gil-Botella, L. Giot, M. Göger-Neff, H. Gomez, L.F.G. Gonzalez, L. Goodenough, M.C. Goodman, N. Haag, T. Hara, J. Haser, D. Hellwig, M. Hofmann, G.A. Horton-Smith, A. Hourlier, M. Ishitsuka, S. Jiménez, J. Jochum, C. Jollet, F. Kaether, L.N. Kalousis, Y. Kamyshkov, M. Kaneda, D.M. Kaplan, T. Kawasaki, E. Kemp, D. Kryn, M. Kuze, T. Lachenmaier, C.E. Lane, T. Lasserre, A. Letourneau, D. Lhuillier, H.P. Lima Jr, M. Lindner, J.M. López-Castaño, J.M. LoSecco, B. Lubsandorzhiev, S. Lucht, J. Maeda, C. Mariani, J. Maricic, J. Martino, T. Matsubara, G. Mention, A. Meregaglia, T. Miletic, R. Milincic, A. Minotti, Y. Nagasaka, D. Navas-Nicolás, P. Novella, H. Nunokawa, L. Oberauer, M. Obolensky, A. Onillon, A. Osborn, C. Palomares, I.M. Pepe, S. Perasso, A. Porta, G. Pronost, J. Reichenbacher, B. Reinhold, M. Röhling, R. Roncin, B. Rybolt, Y. Sakamoto, R. Santorelli, A.C. Schilithz, S. Schönert, S. Schoppmann, M.H. Shaevitz, R. Sharankova, D. Shrestha, V. Sibille, V. Sinev, M. Skorokhvatov, E. Smith, M. Soiron, J. Spitz, A. Stahl, I. Stancu, L.F.F. Stokes, M. Strait, F. Suekane, S. Sukhotin, T. Sumiyoshi, Y. Sun, R. Svoboda, K. Terao, A. Tonazzo, H.H. Trinh Thi, G. Valdiviesso, N. Vassilopoulos, C. Veyssiere, M. Vivier, F. von Feilitzsch, S. Wagner, N. Walsh, H. Watanabe, C. Wiebusch, M. Wurm, G. Yang, F. Yermia, V. Zimmer Aug. 
17, 2016 hep-ex, physics.ins-det During the commissioning of the first of the two detectors of the Double Chooz experiment, an unexpected and dominant background caused by the emission of light inside the optical volume has been observed. A specific study of the ensemble of phenomena called "Light Noise" has been carried out in-situ, and in an external laboratory, in order to characterize the signals and to identify the possible processes underlying the effect. Some mechanisms of instrumental noise originating from the PMTs were identified and it has been found that the leading one arises from the light emission localized on the photomultiplier base and produced by the combined effect of heat and high voltage across the transparent epoxy resin covering the electric components. The correlation of the rate and the amplitude of the signal with the temperature has been observed. For the first detector in operation the induced background has been mitigated using online and offline analysis selections based on timing and light pattern of the signals, while a modification of the photomultiplier assembly has been implemented for the second detector in order to blacken the PMT bases. Muon capture on light isotopes in Double Chooz (1512.07562) Double Chooz collaboration: Y. Abe, T. Abrahão, H. Almazan, C. Alt, S. Appel, J.C. Barriere, E. Baussan, I. Bekman, M. Bergevin, T.J.C. Bezerra, L. Bezrukov, E. Blucher, T. Brugière, C. Buck, J. Busenitz, A. Cabrera, L. Camilleri, R. Carr, M. Cerrada, E. Chauveau, P. Chimenti, A.P. Collin, E. Conover, J.M. Conrad, J.I. Crespo-Anadón, K. Crum, A.S. Cucoanes, E. Damon, J.V. Dawson, H. de Kerret, J. Dhooghe, D. Dietrich, Z. Djurcic, J.C. dos Anjos, M. Dracos, A. Etenko, M. Fallot, J. Felde, S.M. Fernandes, V. Fischer, D. Franco, M. Franke, H. Furuta, I. Gil-Botella, L. Giot, M. Göger-Neff, H. Gomez, L.F.G. Gonzalez, L. Goodenough, M.C. Goodman, N. Haag, T. Hara, J. Haser, D. Hellwig, M. Hofmann, G.A. Horton-Smith, A. Hourlier, M. Ishitsuka, J. Jochum, C. Jollet, F. Kaether, L.N. Kalousis, Y. Kamyshkov, M. Kaneda, D.M. Kaplan, T. Kawasaki, E. Kemp, D. Kryn, M. Kuze, T. Lachenmaier, C.E. Lane, T. Lasserre, A. Letourneau, D. Lhuillier, H.P. Lima Jr, M. Lindner, J.M. López-Castaño, J.M. LoSecco, B. Lubsandorzhiev, S. Lucht, J. Maeda, C. Mariani, J. Maricic, J. Martino, T. Matsubara, G. Mention, A. Meregaglia, T. Miletic, R. Milincic, A. Minotti, Y. Nagasaka, D. Navas-Nicolás, P. Novella, L. Oberauer, M. Obolensky, A. Onillon, A. Osborn, C. Palomares, I.M. Pepe, S. Perasso, A. Porta, G. Pronost, J. Reichenbacher, B. Reinhold, M. Röhling, R. Roncin, B. Rybolt, Y. Sakamoto, R. Santorelli, A.C. Schilithz, S. Schönert, S. Schoppmann, M.H. Shaevitz, R. Sharankova, D. Shrestha, V. Sibille, V. Sinev, M. Skorokhvatov, E. Smith, M. Soiron, J. Spitz, A. Stahl, I. Stancu, L.F.F. Stokes, M. Strait, F. Suekane, S. Sukhotin, T. Sumiyoshi, Y. Sun, R. Svoboda, K. Terao, A. Tonazzo, H.H. Trinh Thi, G. Valdiviesso, N. Vassilopoulos, C. Veyssiere, M. Vivier, F. von Feilitzsch, S. Wagner, N. Walsh, H. Watanabe, C. Wiebusch, M. Wurm, G. Yang, F. Yermia, V. Zimmer May 17, 2016 hep-ex, nucl-ex, physics.ins-det Using the Double Chooz detector, designed to measure the neutrino mixing angle $\theta_{13}$, the products of $\mu^-$ capture on $^{12}$C, $^{13}$C, $^{14}$N and $^{16}$O have been measured. 
Over a period of 489.5 days, $2.3\times10^6$ stopping cosmic $\mu^-$ have been collected, of which $1.8\times10^5$ captured on carbon, nitrogen, or oxygen nuclei in the inner detector scintillator or acrylic vessels. The resulting isotopes were tagged using prompt neutron emission (when applicable), the subsequent beta decays, and, in some cases, $\beta$-delayed neutrons. The most precise measurement of the rate of $^{12}\mathrm C(\mu^-,\nu)^{12}\mathrm B$ to date is reported: $6.57^{+0.11}_{-0.21}\times10^{3}\,\mathrm s^{-1}$, or $(17.35^{+0.35}_{-0.59})\%$ of nuclear captures. By tagging excited states emitting gammas, the ground state transition rate to $^{12}$B has been determined to be $5.68^{+0.14}_{-0.23}\times10^3\,\mathrm s^{-1}$. The heretofore unobserved reactions $^{12}\mathrm C(\mu^-,\nu\alpha)^{8}\mathrm{Li}$, $^{13}\mathrm C(\mu^-,\nu\mathrm n\alpha)^{8}\mathrm{Li}$, and $^{13}\mathrm C(\mu^-,\nu\mathrm n)^{12}\mathrm B$ are measured. Further, a population of $\beta$n decays following stopping muons is identified with $5.5\sigma$ significance. Statistics limit our ability to identify these decays definitively. Assuming negligible production of $^{8}$He, the reaction $^{13}\mathrm C(\mu^-,\nu\alpha)^{9}\mathrm{Li}$ is found to be present at the $2.7\sigma$ level. Limits are set on a variety of other processes. Pluto's atmosphere from the 29 June 2015 ground-based stellar occultation at the time of the New Horizons flyby (1601.05672) B. Sicardy, J. Talbot, E. Meza, J. I. B. Camargo, J. Desmars, D. Gault, D. Herald, S. Kerr, H. Pavlov, F. Braga-Ribas, M. Assafin, G. Benedetti-Rossi, A. Dias-Oliveira, A. Ramos-Gomes-Jr., R. Vieira-Martins, D. Berard, P. Kervella, J. Lecacheux, E. Lellouch, W. Beisker, D. Dunham, M. Jelinek, R. Duffard, J. L. Ortiz, A. J. Castro-Tirado, R. Cunniffe, R. Querel, P. A. Yock, A. A. Cole, A. B. Giles, K. M. Hill, J. P. Beaulieu, M. Harnisch, R. Jansen, A. Pennell, S. Todd, W. H. Allen, P. B. Graham, B. Loader, G. McKay, J. Milner, S. Parker, M. A. Barry, J. Bradshaw, J. Broughton, L. Davis, H. Devillepoix, J. Drummond, L. Field, M. Forbes, D. Giles, R. Glassey, R. Groom, D. Hooper, R. Horvat, G. Hudson, R. Idaczyk, D. Jenke, B. Lade, J. Newman, P. Nosworthy, P. Purcell, P. F. Skilton, M. Streamer, M. Unwin, H. Watanabe, G. L. White, D. Watson March 1, 2016 astro-ph.EP We present results from a multi-chord Pluto stellar occultation observed on 29 June 2015 from New Zealand and Australia. This occurred only two weeks before the NASA New Horizons flyby of the Pluto system and serves as a useful comparison between ground-based and space results. We find that Pluto's atmosphere is still expanding, with a significant pressure increase of 5$\pm$2\% since 2013 and a factor of almost three since 1988. This trend rules out, as of today, an atmospheric collapse associated with Pluto's recession from the Sun. A central flash, a rare occurrence, was observed from several sites in New Zealand. The flash shape and amplitude are compatible with a spherical and transparent atmospheric layer of roughly 3~km in thickness whose base lies at about 4~km above Pluto's surface, and where an average thermal gradient of about 5 K~km$^{-1}$ prevails. We discuss the possibility that small departures between the observed and modeled flash are caused by local topographic features (mountains) along Pluto's limb that block the stellar light. 
Finally, using two possible temperature profiles, and extrapolating our pressure profile from our deepest accessible level down to the surface, we obtain a possible range of 11.9-13.7~$\mu$bar for the surface pressure. KamLAND Sensitivity to Neutrinos from Pre-Supernova Stars (1506.01175) K. Asakura, A. Gando, Y. Gando, T. Hachiya, S. Hayashida, H. Ikeda, K. Inoue, K. Ishidoshiro, T. Ishikawa, S. Ishio, M. Koga, S. Matsuda, T. Mitsui, D. Motoki, K. Nakamura, S. Obara, T. Oura, I. Shimizu, Y. Shirahata, J. Shirai, A. Suzuki, H. Tachibana, K. Tamae, K. Ueshima, H. Watanabe, B.D. Xu, A. Kozlov, Y. Takemoto, S. Yoshida, K. Fushimi, A. Piepke, T. I. Banks, B. E. Berger, B.K. Fujikawa, T. O'Donnell, J.G. Learned, J. Maricic, S. Matsuno, M. Sakai, L. A. Winslow, Y. Efremenko, H. J. Karwowski, D. M. Markoff, W. Tornow, J. A. Detwiler, S. Enomoto, M.P. Decowski Jan. 22, 2016 hep-ex, physics.ins-det, astro-ph.HE In the late stages of nuclear burning for massive stars ($M>8~M_{\sun}$), the production of neutrino-antineutrino pairs through various processes becomes the dominant stellar cooling mechanism. As the star evolves, the energy of these neutrinos increases and in the days preceding the supernova a significant fraction of emitted electron anti-neutrinos exceeds the energy threshold for inverse beta decay on free hydrogen. This is the golden channel for liquid scintillator detectors because the coincidence signature allows for significant reductions in background signals. We find that the kiloton-scale liquid scintillator detector KamLAND can detect these pre-supernova neutrinos from a star with a mass of $25~M_{\sun}$ at a distance less than 690~pc with 3$\sigma$ significance before the supernova. This limit is dependent on the neutrino mass ordering and background levels. KamLAND takes data continuously and can provide a supernova alert to the community. Long-lived neutral-kaon flux measurement for the KOTO experiment (1509.03386) T. Masuda, J. K. Ahn, S. Banno, M. Campbell, J. Comfort, Y. T. Duh, T. Hineno, Y. B. Hsiung, T. Inagaki, E. Iwai, N. Kawasaki, E. J. Kim, Y. J. Kim, J. W. Ko, T. K. Komatsubara, A. S. Kurilin, G. H. Lee, J. W. Lee, S. K. Lee, G. Y. Lim, J. Ma, D. MacFarland, Y. Maeda, T. Matsumura, R. Murayama, D. Naito, Y. Nakaya, H. Nanjo, T. Nomura, Y. Odani, H. Okuno, Y. D. Ri, N. Sasao, K. Sato, T. Sato, S. Seki, T. Shimogawa, T. Shinkawa, K. Shiomi, J. S. Son, Y. Sugiyama, S. Suzuki, Y. Tajima, G. Takahashi, Y. Takashima, M. Tecchio, M. Togawa, T. Toyoda, Y. C. Tung, Y. W. Wah, H. Watanabe, J. K. Woo, J. Xu, T. Yamanaka, Y. Yanagida, H. Y. Yoshida, H. Yoshimoto Jan. 7, 2016 hep-ex, physics.ins-det The KOTO ($K^0$ at Tokai) experiment aims to observe the CP-violating rare decay $K_L \rightarrow \pi^0 \nu \bar{\nu}$ by using a long-lived neutral-kaon beam produced by the 30 GeV proton beam at the Japan Proton Accelerator Research Complex. The $K_L$ flux is an essential parameter for the measurement of the branching fraction. Three $K_L$ neutral decay modes, $K_L \rightarrow 3\pi^0$, $K_L \rightarrow 2\pi^0$, and $K_L \rightarrow 2\gamma$ were used to measure the $K_L$ flux in the beam line in the 2013 KOTO engineering run. A Monte Carlo simulation was used to estimate the detector acceptance for these decays. Agreement was found between the simulation model and the experimental data, and the remaining systematic uncertainty was estimated at the 1.4\% level. 
The $K_L$ flux was measured as $(4.183 \pm 0.017_{\mathrm{stat.}} \pm 0.059_{\mathrm{sys.}}) \times 10^7$ $K_L$ per $2\times 10^{14}$ protons on a 66-mm-long Au target. Measurement of $\theta_{13}$ in Double Chooz using neutron captures on hydrogen with novel background rejection techniques (1510.08937) Y. Abe, S. Appel, T. Abrahão, H. Almazan, C. Alt, J.C. dos Anjos, J.C. Barriere, E. Baussan, I. Bekman, M. Bergevin, T.J.C. Bezerra, L. Bezrukov, E. Blucher, T. Brugière, C. Buck, J. Busenitz, A. Cabrera, L. Camilleri, R. Carr, M. Cerrada, E. Chauveau, P. Chimenti, A.P. Collin, J.M. Conrad, J.I. Crespo-Anadón, K. Crum, A.S. Cucoanes, E. Damon, J.V. Dawson, J. Dhooghe, D. Dietrich, Z. Djurcic, M. Dracos, A. Etenko, M. Fallot, F. von Feilitzsch, J. Felde, S.M. Fernandes, V. Fischer, D. Franco, M. Franke, H. Furuta, I. Gil-Botella, L. Giot, M. Göger-Neff, H. Gomez, L.F.G. Gonzalez, L. Goodenough, M.C. Goodman, N. Haag, T. Hara, J. Haser, D. Hellwig, M. Hofmann, G.A. Horton-Smith, A. Hourlier, M. Ishitsuka, J. Jochum, C. Jollet, F. Kaether, L.N. Kalousis, Y. Kamyshkov, M. Kaneda, D.M. Kaplan, T. Kawasaki, E. Kemp, H. de Kerret, D. Kryn, M. Kuze, T. Lachenmaier, C.E. Lane, T. Lasserre, A. Letourneau, D. Lhuillier, H.P. Lima Jr, M. Lindner, J.M. López-Castaño, J.M. LoSecco, B. Lubsandorzhiev, S. Lucht, J. Maeda, C. Mariani, J. Maricic, J. Martino, T. Matsubara, G. Mention, A. Meregaglia, T. Miletic, R. Milincic, A. Minotti, Y. Nagasaka, D. Navas-Nicolás, P. Novella, L. Oberauer, M. Obolensky, A. Onillon, A. Osborn, C. Palomares, I.M. Pepe, S. Perasso, A. Porta, G. Pronost, J. Reichenbacher, B. Reinhold, M. Röhling, R. Roncin, B. Rybolt, Y. Sakamoto, R. Santorelli, A.C. Schilithz, S. Schönert, S. Schoppmann, M.H. Shaevitz, R. Sharankova, D. Shrestha, V. Sibille, V. Sinev, M. Skorokhvatov, E. Smith, M. Soiron, J. Spitz, A. Stahl, I. Stancu, L.F.F. Stokes, M. Strait, F. Suekane, S. Sukhotin, T. Sumiyoshi, Y. Sun, R. Svoboda, K. Terao, A. Tonazzo, H.H. Trinh Thi, G. Valdiviesso, N. Vassilopoulos, C. Veyssiere, M. Vivier, S. Wagner, N. Walsh, H. Watanabe, C. Wiebusch, M. Wurm, G. Yang, F. Yermia, V. Zimmer Dec. 28, 2015 hep-ex, physics.ins-det The Double Chooz collaboration presents a measurement of the neutrino mixing angle $\theta_{13}$ using reactor $\overline{\nu}_{e}$ observed via the inverse beta decay reaction in which the neutron is captured on hydrogen. This measurement is based on 462.72 live days data, approximately twice as much data as in the previous such analysis, collected with a detector positioned at an average distance of 1050m from two reactor cores. Several novel techniques have been developed to achieve significant reductions of the backgrounds and systematic uncertainties. Accidental coincidences, the dominant background in this analysis, are suppressed by more than an order of magnitude with respect to our previous publication by a multi-variate analysis. These improvements demonstrate the capability of precise measurement of reactor $\overline{\nu}_{e}$ without gadolinium loading. Spectral distortions from the $\overline{\nu}_{e}$ reactor flux predictions previously reported with the neutron capture on gadolinium events are confirmed in the independent data sample presented here. A value of $\sin^{2}2\theta_{13} = 0.095^{+0.038}_{-0.039}$(stat+syst) is obtained from a fit to the observed event rate as a function of the reactor power, a method insensitive to the energy spectrum shape. 
A simultaneous fit of the hydrogen capture events and of the gadolinium capture events yields a measurement of $\sin^{2}2\theta_{13} = 0.088\pm0.033$(stat+syst). Nuclear structure of 140Te with N = 88: Structural symmetry and asymmetry in Te isotopes with respect to the double-shell closure Z = 50 and N = 82 (1512.07324) C.-B. Moon, P. Lee, C. S. Lee, A. Odahara, R. Lozeva, A. Yagi, F. Browne, S. Nishimura, P. Doornenbal, G. Lorusso, P.-A. Söderström, T. Sumikama, H. Watanabe, T. Isobe, H. Baba, H. Sakurai, R. Daido, Y. Fang, H. Nishibata, Z. Patel, S. Rice, L. Sinclair, J. Wu, Z. Y. Xu, R. Yokoyama, T. Kubo, N. Inabe, H. Suzuki, N. Fukuda, D. Kameda, H. Takeda, D. S. Ahn, D. Murai, F. L. Bello Garrote, J. M. Daugas, F. Didierjean, E. Ideguchi, T. Ishigaki, H. S. Jung, T. Komatsubara, Y. K. Kwon, S. Morimoto, M. Niikura, I. Nishizuka, K. Tshoo Dec. 23, 2015 nucl-ex We study for the first time the internal structure of 140Te through the beta-delayed gamma-ray spectroscopy of 140Sb. The very neutron-rich 140Sb, Z = 51 and N = 89, ions were produced by the in-flight fission of 238U beam on a 9Be target at 345 MeV per nucleon at the Radioactive Ion Beam Factory, RIKEN. The half-life and spin-parity of 140Sb are reported as 124(30) ms and (4-), respectively. In addition to the excited states of 140Te produced by the beta-decay branch, the beta-delayed one-neutron and two-neutron emission branches were also established. By identifying the first 2+ and 4+ excited states of 140Te, we found that Te isotopes persist their vibrator character with E(4+)/E(2+) = 2. We discuss the distinctive features manifest in this region, such as valence neutron symmetry and asymmetry, revealed in pairs of isotopes with the same neutron holes and particles with respect to N = 82. Search for double-beta decay of 136Xe to excited states of 136Ba with the KamLAND-Zen experiment (1509.03724) KamLAND-Zen Collaboration: K. Asakura, A. Gando, Y. Gando, T. Hachiya, S. Hayashida, H. Ikeda, K. Inoue, K. Ishidoshiro, T. Ishikawa, S. Ishio, M. Koga, S. Matsuda, T. Mitsui, D. Motoki, K. Nakamura, S. Obara, M. Otani, T. Oura, I. Shimizu, Y. Shirahata, J. Shirai, A. Suzuki, H. Tachibana, K. Tamae, K. Ueshima, H. Watanabe, B.D. Xu, H. Yoshida, A. Kozlov, Y. Takemoto, S. Yoshida, K. Fushimi, T.I. Banks, B.E. Berger, B.K. Fujikawa, T. O'Donnell, L.A. Winslow, Y. Efremenko, H.J. Karwowski, D.M. Markoff, W. Tornow, J.A. Detwiler, S. Enomoto, M.P. Decowski Dec. 8, 2015 hep-ex, physics.ins-det A search for double-beta decays of 136Xe to excited states of 136Ba has been performed with the first phase data set of the KamLAND-Zen experiment. The 0+1, 2+1 and 2+2 transitions of 0{\nu}\{beta}\{beta} decay were evaluated in an exposure of 89.5kg-yr of 136Xe, while the same transitions of 2{\nu}\{beta}\{beta} decay were evaluated in an exposure of 61.8kg-yr. No excess over background was found for all decay modes. The lower half-life limits of the 2+1 state transitions of 0{\nu}\{beta}\{beta} and 2{\nu}\{beta}\{beta} decay were improved to T(0{\nu}, 0+ \rightarrow 2+) > 2.6\times10^25 yr and T(2{\nu}, 0+ \rightarrow 2+) > 4.6\times10^23 yr (90% C.L.), respectively. We report on the first experimental lower half-life limits for the transitions to the 0+1 state of 136Xe for 0{\nu}\{beta}\{beta} and 2{\nu}\{beta}\{beta} decay. They are T (0{\nu}, 0+ \rightarrow 0+) > 2.4\times10^25 yr and T(2{\nu}, 0+ \rightarrow 0+) > 8.3\times10^23 yr (90% C.L.). 
The transitions to the 2+2 states are also evaluated for the first time to be T(0{\nu}, 0+ \rightarrow 2+) > 2.6\times10^25 yr and T(2{\nu}, 0+ \rightarrow 2+) > 9.0\times10^23 yr (90% C.L.). These results are compared to recent theoretical predictions. 7Be Solar Neutrino Measurement with KamLAND (1405.6190) A. Gando, Y. Gando, H. Hanakago, H. Ikeda, K. Inoue, K. Ishidoshiro, H. Ishikawa, Y. Kishimoto, M. Koga, R. Matsuda, S. Matsuda, T. Mitsui, D. Motoki, K. Nakajima, K. Nakamura, A. Obata, A. Oki, Y. Oki, M. Otani, I. Shimizu, J. Shirai, A. Suzuki, K. Tamae, K. Ueshima, H. Watanabe, B.D. Xu, S. Yamada, Y. Yamauchi, H. Yoshida, A. Kozlov, Y. Takemoto, S. Yoshida, C. Grant, G. Keefer, D.W. McKee, A. Piepke, T.I. Banks, T. Bloxham, S.J. Freedman, B.K. Fujikawa, K. Han, L. Hsu, K. Ichimura, H. Murayama, T. O'Donnell, H.M. Steiner, L.A. Winslow, D. Dwyer, C. Mauger, R.D. McKeown, C. Zhang, B.E. Berger, C.E. Lane, J. Maricic, T. Miletic, J.G. Learned, M. Sakai, G.A. Horton-Smith, A. Tang, K.E. Downum, K. Tolich, Y. Efremenko, Y. Kamyshkov, O. Perevozchikov, H.J. Karwowski, D.M. Markoff, W. Tornow, J.A. Detwiler, S. Enomoto, K. Heeger, M.P. Decowski Oct. 1, 2015 hep-ex, nucl-ex, astro-ph.SR We report a measurement of the neutrino-electron elastic scattering rate of 862 keV 7Be solar neutrinos based on a 165.4 kton-day exposure of KamLAND. The observed rate is 582 +/- 90 (kton-day)^-1, which corresponds to a 862 keV 7Be solar neutrino flux of (3.26 +/- 0.50) x 10^9 cm^-2s^-1, assuming a pure electron flavor flux. Comparing this flux with the standard solar model prediction and further assuming three flavor mixing, a nu_e survival probability of 0.66 +/- 0.14 is determined from the KamLAND data. Utilizing a global three flavor oscillation analysis, we obtain a total 7Be solar neutrino flux of (5.82 +/- 0.98) x 10^9 cm^-2s^-1, which is consistent with the standard solar model predictions. Study of electron anti-neutrinos associated with gamma-ray bursts using KamLAND (1503.02137) K. Asakura, A. Gando, Y. Gando, T. Hachiya, S. Hayashida, H. Ikeda, K. Inoue, K. Ishidoshiro, T. Ishikawa, S. Ishio, M. Koga, S. Matsuda, T. Mitsui, D. Motoki, K. Nakamura, S. Obara, Y. Oki, T. Oura, I. Shimizu, Y. Shirahata, J. Shirai, A. Suzuki, H. Tachibana, K. Tamae, K. Ueshima, H. Watanabe, B.D. Xu, H. Yoshida, A. Kozlov, Y. Takemoto, S. Yoshida, K. Fushimi, A. Piepke, T. I. Banks, B. E. Berger, T. O'Donnell, B.K. Fujikawa, J. Maricic, J.G. Learned, M. Sakai, L. A. Winslow, Y. Efremenko, H. J. Karwowski, D. M. Markoff, W. Tornow, J. A. Detwiler, S. Enomoto, M.P. Decowski June 15, 2015 hep-ex, astro-ph.HE We search for electron anti-neutrinos ($\overline{\nu}_e$) from long and short-duration gamma-ray bursts~(GRBs) using data taken by the KamLAND detector from August 2002 to June 2013. No statistically significant excess over the background level is found. We place the tightest upper limits on $\overline{\nu}_e$ fluence from GRBs below 7 MeV and place first constraints on the relation between $\overline{\nu}_e$ luminosity and effective temperature. Photon-Veto Counters at the Outer Edge of the Endcap Calorimeter for the KOTO Experiment (1409.8367) T. Matsumura, T. Shinkawa, H. Yokota, E. Iwai, T.K. Komatsubara, J.W. Lee, G.Y. Lim, J.Ma, T. Masuda, H. Nanjo, T. Nomura, Y. Odani, Y.D. Ri, K. Shiomi, Y. Sugiyama, S. Suzuki, M. Togawa, Y. Wah, H. Watanabe, T. Yamanaka May 19, 2015 hep-ex, physics.ins-det The Outer-Edge Veto (OEV) counter subsystem for extra-photon detection from the backgrounds for the? 
$K^0_L\rightarrow\pi^0\nu\bar{\nu}$ decay is located at the outer edge of the endcap CsI calorimeter of the KOTO experiment at J-PARC. The subsystem is composed of 44 counters with different cross-sectional shapes. All counters are made of lead and scintillator plates and read out through wavelength-shifting fibers. In this paper, we discuss the design and performances of the OEV counters under heavy load ($\sim8$ tons/m$^2$) in vacuum. For 1-MeV energy deposit, the average light yield and time resolution are 20.9 photo-electrons and 1.5 ns, respectively. Although no pronounced peak by minimum-ionizing particles is observed in the energy distributions, an energy calibration method with cosmic rays works well in monitoring the gain stability with an accuracy of a few percent. Search for n-nbar oscillation in Super-Kamiokande (1109.4227) Super-Kamiokande collaboration: K. Abe, Y. Hayato, T. Iida, K. Ishihara, J. Kameda, Y. Koshio, A. Minamino, C. Mitsuda, M. Miura, S. Moriyama, M. Nakahata, Y. Obayashi, H. Ogawa, H. Sekiya, M. Shiozawa, Y. Suzuki, A. Takeda, Y. Takeuchi, K. Ueshima, H. Watanabe, I. Higuchi, C. Ishihara, M. Ishitsuka, T. Kajita, K. Kaneyuki, G. Mitsuka, S. Nakayama, H. Nishino, K. Okumura, C. Saji, Y. Takenaga, S. Clark, S. Desai, F. Dufour, A. Herfurth, E. Kearns, S. Likhoded, M. Litos, J.L. Raaf, J.L. Stone, L.R. Sulak, W. Wang, M. Goldhaber, D. Casper, J.P. Cravens, J. Dunmore, J. Griskevich, W.R. Kropp, D.W. Liu, S. Mine, C. Regis, M.B. Smy, H.W. Sobel, M.R. Vagins, K.S. Ganezer, B. Hartfiel, J. Hill, W.E. Keig, J.S. Jang, I.S. Jeoung, J.Y. Kim, I.T. Lim, K. Scholberg, N. Tanimoto, C.W. Walter, R. Wendell, R.W. Ellsworth, S. Tasaka, G. Guillian, J.G. Learned, S. Matsuno, M.D.Messier, A.K. Ichikawa, T. Ishida, T. Ishii, T. Iwashita, T. Kobayashi, T. Nakadaira, K. Nakamura, K. Nishikawa, K. Nitta, Y. Oyama, A.T. Suzuki, M. Hasegawa, H. Maesaka, T. Nakaya, T. Sasaki, H. Sato, H. Tanaka, S. Yamamoto, M. Yokoyama, T.J. Haines, S. Dazeley, S. Hatakeyama, R. Svoboda, G.W. Sullivan, R. Gran, A. Habig, Y. Fukuda, Y. Itow, T. Koike, C.K. Jung, T. Kato, K. Kobayashi, C. McGrew, A. Sarrat, R. Terri, C. Yanagisawa, N. Tamura, M. Ikeda, M. Sakuda, Y. Kuno, M. Yoshida, S.B. Kim, B.S. Yang, T. Ishizuka, H. Okazawa, Y. Choi, H.K. Seo, Y. Gando, T. Hasegawa, K. Inoue, H. Ishii, K. Nishijima, H. Ishino, Y. Watanabe, M. Koshiba, Y. Totsuka, S. Chen, Z. Deng, Y. Liu, D. Kielczewska, H.G.Berns, K.K. Shiraishi, E. Thrane, K. Washburn, R.J. Wilkes April 15, 2015 hep-ex A search for neutron-antineutron ($n-\bar{n}$) oscillation was undertaken in Super-Kamiokande using the 1489 live-day or $2.45 \times 10^{34}$ neutron-year exposure data. This process violates both baryon and baryon minus lepton numbers by an absolute value of two units and is predicted by a large class of hypothetical models where the seesaw mechanism is incorporated to explain the observed tiny neutrino masses and the matter-antimatter asymmetry in the Universe. No evidence for $n-\bar{n}$ oscillation was found, the lower limit of the lifetime for neutrons bound in ${}^{16}$O, in an analysis that included all of the significant sources of experimental uncertainties, was determined to be $1.9 \times 10^{32}$~years at the 90\% confidence level. The corresponding lower limit for the oscillation time of free neutrons was calculated to be $2.7 \times 10^8$~s using a theoretical value of the nuclear suppression factor of $0.517 \times 10^{23}$~s$^{-1}$ and its uncertainty. 
A compact ultra-clean system for deploying radioactive sources inside the KamLAND detector (1407.0413) T.I. Banks, S.J. Freedman, J. Wallig, N. Ybarrolaza, A. Gando, Y. Gando, H. Ikeda, K. Inoue, Y. Kishimoto, M. Koga, T. Mitsui, K. Nakamura, I. Shimizu, J. Shirai, A. Suzuki, Y. Takemoto, K. Tamae, K. Ueshima, H. Watanabe, B.D. Xu, H. Yoshida, S. Yoshida, A. Kozlov, C. Grant, G. Keefer, A. Piepke, T. Bloxham, B.K. Fujikawa, K. Han, K. Ichimura, H. Murayama, T. O'Donnell, H.M. Steiner, L.A. Winslow, D.A. Dwyer, R.D. McKeown, C. Zhang, B.E. Berger, C.E. Lane, J. Maricic, T. Miletic, M. Batygov, J.G. Learned, S. Matsuno, M. Sakai, G.A. Horton-Smith, K.E. Downum, G. Gratta, Y. Efremenko, O. Perevozchikov, H.J. Karwowski, D.M. Markoff, W. Tornow, K.M. Heeger, J.A. Detwiler, S. Enomoto, M.P. Decowski Feb. 12, 2015 hep-ex, nucl-ex, physics.ins-det We describe a compact, ultra-clean device used to deploy radioactive sources along the vertical axis of the KamLAND liquid-scintillator neutrino detector for purposes of calibration. The device worked by paying out and reeling in precise lengths of a hanging, small-gauge wire rope (cable); an assortment of interchangeable radioactive sources could be attached to a weight at the end of the cable. All components exposed to the radiopure liquid scintillator were made of chemically compatible UHV-cleaned materials, primarily stainless steel, in order to avoid contaminating or degrading the scintillator. To prevent radon intrusion, the apparatus was enclosed in a hermetically sealed housing inside a glove box, and both volumes were regularly flushed with purified nitrogen gas. An infrared camera attached to the side of the housing permitted real-time visual monitoring of the cable's motion, and the system was controlled via a graphical user interface. Improved measurements of the neutrino mixing angle $\theta_{13}$ with the Double Chooz detector (1406.7763) Y. Abe, J.C. dos Anjos, J.C. Barriere, E. Baussan, I. Bekman, M. Bergevin, T.J.C. Bezerra, L. Bezrukov, E. Blucher, C. Buck, J. Busenitz, A. Cabrera, E. Caden, L. Camilleri, R. Carr, M. Cerrada, P.-J. Chang, E. Chauveau, P. Chimenti, A.P. Collin, E. Conover, J.M. Conrad, J.I. Crespo-Anadón, K. Crum, A.S. Cucoanes, E. Damon, J.V. Dawson, J. Dhooghe, D. Dietrich, Z. Djurcic, M. Dracos, M. Elnimr, A. Etenko, M. Fallot, F. von Feilitzsch, J. Felde, S.M. Fernandes, V. Fischer, D. Franco, M. Franke, H. Furuta, I. Gil-Botella, L. Giot, M. Göger-Neff, L.F.G. Gonzalez, L. Goodenough, M.C. Goodman, C. Grant, N. Haag, T. Hara, J. Haser, M. Hofmann, G.A. Horton-Smith, A. Hourlier, M. Ishitsuka, J. Jochum, C. Jollet, F. Kaether, L.N. Kalousis, Y. Kamyshkov, D.M. Kaplan, T. Kawasaki, E. Kemp, H. de Kerret, D. Kryn, M. Kuze, T. Lachenmaier, C.E. Lane, T. Lasserre, A. Letourneau, D. Lhuillier, H.P. Lima Jr, M. Lindner, J.M. López-Castaño, J.M. LoSecco, B. Lubsandorzhiev, S. Lucht, J. Maeda, C. Mariani, J. Maricic, J. Martino, T. Matsubara, G. Mention, A. Meregaglia, T. Miletic, R. Milincic, A. Minotti, Y. Nagasaka, Y. Nikitenko, P. Novella, L. Oberauer, M. Obolensky, A. Onillon, A. Osborn, C. Palomares, I.M. Pepe, S. Perasso, P. Pfahler, A. Porta, G. Pronost, J. Reichenbacher, B. Reinhold, M. Röhling, R. Roncin, S. Roth, B. Rybolt, Y. Sakamoto, R. Santorelli, A.C. Schilithz, S. Schönert, S. Schoppmann, M.H. Shaevitz, R. Sharankova, S. Shimojima, D. Shrestha, V. Sibille, V. Sinev, M. Skorokhvatov, E. Smith, J. Spitz, A. Stahl, I. Stancu, L.F.F. Stokes, M. Strait, A. Stüken, F. Suekane, S. Sukhotin, T. Sumiyoshi, Y. 
Sun, R. Svoboda, K. Terao, A. Tonazzo, H.H. Trinh Thi, G. Valdiviesso, N. Vassilopoulos, C. Veyssiere, M. Vivier, S. Wagner, N. Walsh, H. Watanabe, C. Wiebusch, L. Winslow, M. Wurm, G. Yang, F. Yermia, V. Zimmer (Double Chooz Collaboration) Jan. 21, 2015 hep-ex, physics.ins-det The Double Chooz experiment presents improved measurements of the neutrino mixing angle $\theta_{13}$ using the data collected in 467.90 live days from a detector positioned at an average distance of 1050 m from two reactor cores at the Chooz nuclear power plant. Several novel techniques have been developed to achieve significant reductions of the backgrounds and systematic uncertainties with respect to previous publications, whereas the efficiency of the $\bar\nu_{e}$ signal has increased. The value of $\theta_{13}$ is measured to be $\sin^{2}2\theta_{13} = 0.090 ^{+0.032}_{-0.029}$ from a fit to the observed energy spectrum. Deviations from the reactor $\bar\nu_{e}$ prediction observed above a prompt signal energy of 4 MeV and possible explanations are also reported. A consistent value of $\theta_{13}$ is obtained from a fit to the observed rate as a function of the reactor power independently of the spectrum shape and background estimation, demonstrating the robustness of the $\theta_{13}$ measurement despite the observed distortion. Ortho-positronium observation in the Double Chooz Experiment (1407.6913) Y. Abe, J.C. dos Anjos, J.C. Barriere, E. Baussan, I. Bekman, M. Bergevin, T.J.C. Bezerra, L. Bezrukov, E. Blucher, C. Buck, J. Busenitz, A. Cabrera, E. Caden, L. Camilleri, R. Carr, M. Cerrada, P.-J. Chang, E. Chauveau, P. Chimenti, A.P. Collin, E. Conover, J.M. Conrad, J.I. Crespo-Anadon, K. Crum, A.S. Cucoanes, E. Damon, J.V. Dawson, J. Dhooghe, D. Dietrich, Z. Djurcic, M. Dracos, M. Elnimr, A. Etenko, M. Fallot, F. von Feilitzsch, J. Felde, S.M. Fernandes, V. Fischer, D. Franco, M. Franke, H. Furuta, I. Gil-Botella, L. Giot, M. Goger-Neff, L.F.G. Gonzalez, L. Goodenough, M.C. Goodman, C. Grant, N. Haag, T. Hara, J. Haser, M. Hofmann, G.A. Horton-Smith, A. Hourlier, M. Ishitsuka, J. Jochum, C. Jollet, F. Kaether, L.N. Kalousis, Y. Kamyshkov, D.M. Kaplan, T. Kawasaki, E. Kemp, H. de Kerret, D. Kryn, M. Kuze, T. Lachenmaier, C.E. Lane, T. Lasserre, A. Letourneau, D. Lhuillier, H.P. Lima Jr, M. Lindner, J.M. Lopez-Castano, J.M. LoSecco, B. Lubsandorzhiev, S. Lucht, J. Maeda, C. Mariani, J. Maricic, J. Martino, T. Matsubara, G. Mention, A. Meregaglia, T. Miletic, R. Milincic, A. Minotti, Y. Nagasaka, Y. Nikitenko, P. Novella, L. Oberauer, M. Obolensky, A. Onillon, A. Osborn, C. Palomares, I.M. Pepe, S. Perasso, P. Pfahler, A. Porta, G. Pronost, J. Reichenbacher, B. Reinhold, M. Rohling, R. Roncin, S. Roth, B. Rybolt, Y. Sakamoto, R. Santorelli, A.C. Schilithz, S. Schonert, S. Schoppmann, M.H. Shaevitz, R. Sharankova, S. Shimojima, D. Shrestha, V. Sibille, V. Sinev, M. Skorokhvatov, E. Smith, J. Spitz, A. Stahl, I. Stancu, L.F.F. Stokes, M. Strait, A. Stuken, F. Suekane, S. Sukhotin, T. Sumiyoshi, Y. Sun, R. Svoboda, K. Terao, A. Tonazzo, H.H. Trinh Thi, G. Valdiviesso, N. Vassilopoulos, C. Veyssiere, M. Vivier, S. Wagner, N. Walsh, H. Watanabe, C. Wiebusch, L. Winslow, M. Wurm, G. Yang, F. Yermia, V. Zimmer Oct. 7, 2014 hep-ex, physics.ins-det The Double Chooz experiment measures the neutrino mixing angle $\theta_{13}$ by detecting reactor $\bar{\nu}_e$ via inverse beta decay. 
The positron-neutron space and time coincidence allows for a sizable background rejection, nonetheless liquid scintillator detectors would profit from a positron/electron discrimination, if feasible in large detector, to suppress the remaining background. Standard particle identification, based on particle dependent time profile of photon emission in liquid scintillator, can not be used given the identical mass of the two particles. However, the positron annihilation is sometimes delayed by the ortho-positronium (o-Ps) metastable state formation, which induces a pulse shape distortion that could be used for positron identification. In this paper we report on the first observation of positronium formation in a large liquid scintillator detector based on pulse shape analysis of single events. The o-Ps formation fraction and its lifetime were measured, finding the values of 44$\%$ $\pm$ 12$\%$ (sys.) $\pm$ 5$\%$ (stat.) and $3.68$ns $\pm$ 0.17ns (sys.) $\pm$ 0.15ns (stat.) respectively, in agreement with the results obtained with a dedicated positron annihilation lifetime spectroscopy setup. Precision Muon Reconstruction in Double Chooz (1405.6227) Double Chooz collaboration: Y. Abe, J. C. dos Anjos, J. C. Barriere, E. Baussan, I. Bekman, M. Bergevin, T. J. C. Bezerra, L. Bezrukov, E. Blucher, C. Buck, J. Busenitz, A. Cabrera, E. Caden, L. Camilleri, R. Carr, M. Cerrada, P.-J. Chang, E. Chauveau, P. Chimenti, A. P. Collin, E. Conover, J. M. Conrad, J. I. Crespo-Anadón, K. Crum, A. Cucoanes, E. Damon, J. V. Dawson, D. Dietrich, Z. Djurcic, M. Dracos, M. Elnimr, A. Etenko, M. Fallot, F. von Feilitzsch, J. Felde, S. M. Fernandes, V. Fischer, D. Franco, M. Franke, H. Furuta, I. Gil-Botella, L. Giot, M. Göger-Neff, L. F. G. Gonzalez, L. Goodenough, M. C. Goodman, C. Grant, N. Haag, T. Hara, J. Haser, M. Hofmann, G. A. Horton-Smith, A. Hourlier, M. Ishitsuka, J. Jochum, C. Jollet, F. Kaether, L. N. Kalousis, Y. Kamyshkov, D. M. Kaplan, T. Kawasaki, E. Kemp, H. de Kerret, D. Kryn, M. Kuze, T. Lachenmaier, C. E. Lane, T. Lasserre, A. Letourneau, D. Lhuillier, H. P. Lima Jr, M. Lindner, J. M. López-Casta no, J. M. LoSecco, B. Lubsandorzhiev, S. Lucht, J. Maeda, C. Mariani, J. Maricic, J. Martino, T. Matsubara, G. Mention, A. Meregaglia, T. Miletic, R. Milincic, A. Minotti, Y. Nagasaka, Y. Nikitenko, P. Novella, M. Obolensky, L. Oberauer, A. Onillon, A. Osborn, C. Palomares, I. M. Pepe, S. Perasso, P. Pfahler, A. Porta, G. Pronost, J. Reichenbacher, B. Reinhold, M. Röhling, R. Roncin, S. Roth, B. Rybolt, Y. Sakamoto, R. Santorelli, A. C. Schilithz, S. Schönert, S. Schoppmann, M. H. Shaevitz, R. Sharankova, S. Shimojima, V. Sibille, V. Sinev, M. Skorokhvatov, E. Smith, J. Spitz, A. Stahl, I. Stancu, L. F. F. Stokes, M. Strait, A. Stüken, F. Suekane, S. Sukhotin, T. Sumiyoshi, Y. Sun, R. Svoboda, K. Terao, A. Tonazzo, H. H. Trinh Thi, G. Valdiviesso, N. Vassilopoulos, C. Veyssiere, M. Vivier, S. Wagner, H. Watanabe, C. Wiebusch, L. Winslow, M. Wurm, G. Yang, F. Yermia, V. Zimmer We describe a muon track reconstruction algorithm for the reactor anti-neutrino experiment Double Chooz. The Double Chooz detector consists of two optically isolated volumes of liquid scintillator viewed by PMTs, and an Outer Veto above these made of crossed scintillator strips. Muons are reconstructed by their Outer Veto hit positions along with timing information from the other two detector volumes. All muons are fit under the hypothesis that they are through-going and ultrarelativistic. 
If the energy depositions suggest that the muon may have stopped, the reconstruction fits also for this hypothesis and chooses between the two via the relative goodness-of-fit. In the ideal case of a through-going muon intersecting the center of the detector, the resolution is ~40 mm in each transverse dimension. High quality muon reconstruction is an important tool for reducing the impact of the cosmogenic isotope background in Double Chooz. Ly-alpha polarimeter design for CLASP rocket experiment (1407.4577) H. Watanabe, N. Narukage, M. Kubo, R. Ishikawa, T. Bando, R. Kano, S. Tsuneta, K. Kobayashi, K. Ichimoto, J. Trujillo-Bueno July 17, 2014 physics.optics, astro-ph.SR, astro-ph.IM A sounding-rocket program called the Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP) is proposed to be launched in the summer of 2014. CLASP will observe the solar chromosphere in Ly-alpha (121.567 nm), aiming to detect the linear polarization signal produced by scattering processes and the Hanle effect for the first time. The polarimeter of CLASP consists of a rotating half-waveplate, a beam splitter, and a polarization analyzer. Magnesium Fluoride (MgF2) is used for these optical components, because MgF2 exhibits birefringent property and high transparency at ultraviolet wavelength. Background-independent measurement of $\theta_{13}$ in Double Chooz (1401.5981) Y. Abe, J.C. dos Anjos, J.C. Barriere, E. Baussan, I. Bekman, M. Bergevin, T.J.C. Bezerra, L. Bezrukov, E. Blucher, C. Buck, J. Busenitz, A. Cabrera, E. Caden, L. Camilleri, R. Carr, M. Cerrada, P.-J. Chang, E. Chauveau, P. Chimenti, A.P. Collin, E. Conover, J.M. Conrad, J.I. Crespo-Anadón, K. Crum, A. Cucoanes, E. Damon, J.V. Dawson, D. Dietrich, Z. Djurcic, M. Dracos, M. Elnimr, A. Etenko, M. Fallot, F. von Feilitzsch, J. Felde, S.M. Fernandes, V. Fischer, D. Franco, M. Franke, H. Furuta, I. Gil-Botella, L. Giot, M. Göger-Neff, L.F.G. Gonzalez, L. Goodenough, M.C. Goodman, C. Grant, N. Haag, T. Hara, J. Haser, M. Hofmann, G.A. Horton-Smith, A. Hourlier, M. Ishitsuka, J. Jochum, C. Jollet, F. Kaether, L.N. Kalousis, Y. Kamyshkov, D.M. Kaplan, T. Kawasaki, E. Kemp, H. de Kerret, T. Konno, D. Kryn, M. Kuze, T. Lachenmaier, C.E. Lane, T. Lasserre, A. Letourneau, D. Lhuillier, H.P. Lima Jr, M. Lindner, J.M. López-Castaño, J.M. LoSecco, B.K. Lubsandorzhiev, S. Lucht, J. Maeda, C. Mariani, J. Maricic, J. Martino, T. Matsubara, G. Mention, A. Meregaglia, T. Miletic, R. Milincic, A. Minotti, Y. Nagasaka, K. Nakajima, Y. Nikitenko, P. Novella, M. Obolensky, L. Oberauer, A. Onillon, A. Osborn, C. Palomares, I.M. Pepe, S. Perasso, P. Pfahler, A. Porta, G. Pronost, J. Reichenbacher, B. Reinhold, M. Röhling, R. Roncin, S. Roth, B. Rybolt, Y. Sakamoto, R. Santorelli, F. Sato, A.C. Schilithz, S. Schönert, S. Schoppmann, M.H. Shaevitz, R. Sharankova, S. Shimojima, V. Sibille, V. Sinev, M. Skorokhvatov, E. Smith, J. Spitz, A. Stahl, I. Stancu, L.F.F. Stokes, M. Strait, A. Stüken, F. Suekane, S. Sukhotin, T. Sumiyoshi, Y. Sun, R. Svoboda, K. Terao, A. Tonazzo, H.H. Trinh Thi, G. Valdiviesso, N. Vassilopoulos, C. Veyssiere, M. Vivier, S. Wagner, H. Watanabe, C. Wiebusch, L. Winslow, M. Wurm, G. Yang, F. Yermia, V. Zimmer The oscillation results published by the Double Chooz collaboration in 2011 and 2012 rely on background models substantiated by reactor-on data. In this analysis, we present a background-model-independent measurement of the mixing angle $\theta_{13}$ by including 7.53 days of reactor-off data. 
A global fit of the observed neutrino rates for different reactor power conditions is performed, yielding a measurement of both $\theta_{13}$ and the total background rate. The results on the mixing angle are improved significantly by including the reactor-off data in the fit, as it provides a direct measurement of the total background rate. This reactor rate modulation analysis considers antineutrino candidates with neutron captures on both Gd and H, whose combination yields $\sin^2(2\theta_{13})=$ 0.102 $\pm$ 0.028(stat.) $\pm$ 0.033(syst.). The results presented in this study are fully consistent with the ones already published by Double Chooz, achieving a competitive precision. They provide, for the first time, a determination of $\theta_{13}$ that does not depend on a background model. Asymptotic Properties of the Misclassification Errors for Euclidean Distance Discriminant Rule in High-Dimensional Data (1403.0329) H. Watanabe, M. Hyodo, T. Seo, T. Pavlenko March 3, 2014 math.ST, stat.TH Performance accuracy of the Euclidean Distance Discriminant rule (EDDR) is studied in the high-dimensional asymptotic framework which allows the dimensionality to exceed sample size. Under mild assumptions on the traces of the covariance matrix, our new results provide the asymptotic distribution of the conditional misclassification error and the explicit expression for the consistent and asymptotically unbiased estimator of the expected misclassification error. To get these properties, new results on the asymptotic normality of the quadratic forms and traces of the higher power of Wishart matrix, are established. Using our asymptotic results, we further develop two generic methods of determining a cut-off point for EDDR to adjust the misclassification errors. Finally, we numerically justify the high accuracy of our asymptotic findings along with the cut-off determination methods in finite sample applications, inclusive of the large sample and high-dimensional scenarios. Mission design of LiteBIRD (1311.2847) T. Matsumura, Y. Akiba, J. Borrill, Y. Chinone, M. Dobbs, H. Fuke, A. Ghribi, M. Hasegawa, K. Hattori, M. Hattori, M. Hazumi, W. Holzapfel, Y. Inoue, K. Ishidoshiro, H. Ishino, H. Ishitsuka, K. Karatsu, N. Katayama, I. Kawano, A. Kibayashi, Y. Kibe, K. Kimura, N. Kimura, K. Koga, M. Kozu, E. Komatsu, A. Lee, H. Matsuhara, S. Mima, K. Mitsuda, K. Mizukami, H. Morii, T. Morishima, S. Murayama, M. Nagai, R. Nagata, S. Nakamura, M. Naruse, K. Natsume, T. Nishibori, H. Nishino, A. Noda, T. Noguchi, H. Ogawa, S. Oguri, I. Ohta, C. Otani, P. Richards, S. Sakai, N. Sato, Y. Sato, Y. Sekimoto, A. Shimizu, K. Shinozaki, H. Sugita, T. Suzuki, A. Suzuki, O. Tajima, S. Takada, S. Takakura, Y. Takei, T. Tomaru, Y. Uzawa, T. Wada, H. Watanabe, N. Yamasaki, M. Yoshida, T. Yoshida, K. Yotsumoto Nov. 12, 2013 astro-ph.CO, astro-ph.IM LiteBIRD is a next-generation satellite mission to measure the polarization of the cosmic microwave background (CMB) radiation. On large angular scales the B-mode polarization of the CMB carries the imprint of primordial gravitational waves, and its precise measurement would provide a powerful probe of the epoch of inflation. The goal of LiteBIRD is to achieve a measurement of the characterizing tensor to scalar ratio $r$ to an uncertainty of $\delta r=0.001$. In order to achieve this goal we will employ a kilo-pixel superconducting detector array on a cryogenically cooled sub-Kelvin focal plane with an optical system at a temperature of 4~K. 
We are currently considering two detector array options: transition edge sensor (TES) bolometers and microwave kinetic inductance detectors (MKIDs). In this paper we give an overview of LiteBIRD and describe a TES-based polarimeter designed to achieve the target sensitivity of 2~$\mu$K$\cdot$arcmin over the frequency range 50 to 320~GHz.
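As a rough illustration of what the Double Chooz value $\sin^{2}2\theta_{13} = 0.090^{+0.032}_{-0.029}$ quoted in the abstracts above corresponds to in terms of the mixing angle itself, the following short Python sketch (my own back-of-the-envelope conversion, not part of any of the listed abstracts; the propagation of the asymmetric errors is only indicative) inverts the relation:

import math

def theta13_deg(sin2_2theta):
    # invert sin^2(2*theta13) to obtain theta13 in degrees
    return math.degrees(0.5 * math.asin(math.sqrt(sin2_2theta)))

central = theta13_deg(0.090)            # central value from the spectral fit
upper   = theta13_deg(0.090 + 0.032)    # approximate +1 sigma bound
lower   = theta13_deg(0.090 - 0.029)    # approximate -1 sigma bound
print(f"theta13 ~ {central:.1f} deg (+{upper - central:.1f} / -{central - lower:.1f})")

With these inputs the central value comes out near 8.7 degrees, which is just a restatement of the quoted result in a different parametrization.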
Dynamic analysis on an almost periodic predator-prey system with impulsive effects and time delays
Demou Luo and Qiru Wang, School of Mathematics, Sun Yat-sen University, Guangzhou 510275, Guangdong, China
* Corresponding author: [email protected]
Received May 2020, Revised June 2020, Published August 2020
Fund Project: This research was supported by the National Natural Science Foundation of China (No. 11671406)
This article is concerned with a generalized almost periodic predator-prey model with impulsive effects and time delays. By utilizing the comparison theorem and constructing a feasible Lyapunov functional, we obtain sufficient conditions to guarantee the permanence and global asymptotic stability of the system. By applying the Arzelà-Ascoli theorem, we establish the existence and uniqueness of almost periodic positive solutions. A feasible numerical simulation is provided to illustrate the suitability of our main criteria.
Keywords: Almost periodic predator-prey model, impulsive effects and time delays, permanence and global stability, existence and uniqueness of almost periodic positive solutions.
Mathematics Subject Classification: Primary: 34K14, 34K20, 34K45; Secondary: 92D25.
Citation: Demou Luo, Qiru Wang. Dynamic analysis on an almost periodic predator-prey system with impulsive effects and time delays. Discrete & Continuous Dynamical Systems - B, doi: 10.3934/dcdsb.2020238
Figure 1. Numeric simulation of the prey $x(t)$ and the predator $y(t)$ of (42) with the initial conditions $(x(0),y(0))^{T} = (0.6,0.3)^{T}$, $(x(0),y(0))^{T} = (0.2,0.1)^{T}$ and $(x(0),y(0))^{T} = (0.4,0.2)^{T}$
Table 1. The biological parameters of $x$ and $y$
        $a_{i}^{l}$   $a_{i}^{u}$   $b_{i}^{l}$   $b_{i}^{u}$   $c_{i}^{l}$   $c_{i}^{u}$   $m_{i}^{l}$   $M_{i}^{u}$   $P_{i}^{l}$   $\tau_{i}^{l}$   $\tau_{i}^{u}$
$x$     0.25          0.35          0.94          0.96          0.11          0.11          0.2495        0.3736        0.01          0.01             0.01
$y$     0.45          0.55          1.20          3.80          3.00          3.00          0.0189        0.5451        0.02          0.02             0.02
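The system (42) simulated in Figure 1 is not reproduced in this excerpt, so the following Python sketch is only a generic stand-in: it integrates a classical two-species predator-prey model with constant coefficients chosen inside the bounds listed in Table 1, rather than the paper's almost periodic, delayed and impulsive coefficients.

# Generic Lotka-Volterra-type sketch; NOT the actual system (42) of the paper.
def simulate(x0, y0, a1=0.30, b1=0.95, c1=0.11, a2=0.50, b2=2.5, c2=3.0,
             dt=0.001, t_end=50.0):
    """Forward-Euler integration of
         x' = x*(a1 - b1*x - c1*y)    (prey)
         y' = y*(-a2 + c2*x - b2*y)   (predator)
       with constants picked between the lower/upper bounds of Table 1."""
    x, y = x0, y0
    traj = [(0.0, x, y)]
    steps = int(t_end / dt)
    for k in range(1, steps + 1):
        dx = x * (a1 - b1 * x - c1 * y)
        dy = y * (-a2 + c2 * x - b2 * y)
        x, y = x + dt * dx, y + dt * dy
        if k % 1000 == 0:
            traj.append((k * dt, x, y))
    return traj

# The three initial conditions shown in the Figure 1 caption:
for x0, y0 in [(0.6, 0.3), (0.2, 0.1), (0.4, 0.2)]:
    t, x, y = simulate(x0, y0)[-1]
    print(f"start ({x0}, {y0}) -> (x, y) ~ ({x:.3f}, {y:.3f}) at t = {t}")

In this constant-coefficient stand-in all three trajectories settle toward the same positive equilibrium, which is the qualitative behaviour (global attractivity of a positive solution) that the abstract's permanence and stability results are about.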
Q&A: How can 3/1 ≡ 1/(1/3), when the left side features merely integers, but the right side features a repetend?
On one hand, I know that algebraically, $\dfrac{3}1 ≡ \dfrac{1}{\color{red}{1/3}}$. On the other hand, they differ in practice, not least because $\color{red}{1/3}$ contains 3 as the repetend. For example, if a scrap of physical material must have a 3:1 ratio and a length of 3 m, then I shall make the width 1 m. But presuppose a length of 1 m. Then a 3:1 ratio is impossible to accomplish, because it would require a width of $\color{red}{1/3=0.\overline{3}}$. But it's impossible to measure and cut anything physical at a repetend! Doesn't this physical impossibility due to the repetend belie, or undermine, the theoretical "equality" that $\dfrac{3}1 ≡ \dfrac{1}{\color{red}{1/3}}$? How can this physical impossibility due to the repetend be reconciled with equality?
Comment: The existence of a repetend depends on the base.
Answer 1: Your equations express rational numbers, and are correct. Just because in some numbering systems some of these rational numbers can't be represented with finite notation doesn't in any way invalidate the equations.
Answer 2 (Snoopy): "... it's impossible to measure and cut anything physical at a repetend." This is incorrect. Given a segment with $1$ unit length, one can easily construct, "physically", a segment with a length of $1/3$ units. In fact, given any line segment, there are many ways to trisect it. See for instance this article or this YouTube video. See also this Wikipedia article on $\sqrt{2}$; one can even get a line segment with the length of an irrational number! Your confusion essentially boils down to why $0.\overline{9}=1$. Notice that on the right you have the integer $1$, while on the left, you have a repeating decimal which continues infinitely.
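To make the point of the first answer concrete, here is a tiny Python check (my own illustration, not part of the original thread): exact rational arithmetic confirms the identity without ever producing a repeating decimal, which only appears when the value is forced into base-10 notation.

from fractions import Fraction

one_third = Fraction(1, 3)
lhs = Fraction(3, 1)
rhs = Fraction(1, 1) / one_third

print(lhs == rhs)          # True: 3/1 and 1/(1/3) are the same rational number
print(rhs)                 # 3 -- no repetend in the exact rational representation
print(float(one_third))    # 0.3333333333333333 -- the repetend is an artifact of decimal notation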
"Axiom of global choice" In some books on category theory (for example, in J.Adámek, H.Herrlich, E.Strecker "Abstract and concrete categories...") the authors use the idea of "big sets" ("conglomerates" or "collections") which can contain classes (as far as I understand, in the Goedel-Bernays sense) as elements, and they formulate the "generalized axiom of choice", where it is stated that the choice function exists (not only for families or classes of sets, but also) for families of classes (indexed by elements of those "big sets"). This approach allows to prove, in particular, the existence of a skeleton in each category, and some other useful things. This generalization of the axiom of choice is also mentioned In Wikipedia: http://en.wikipedia.org/wiki/Axiom_of_global_choice (as the "strong form of the axiom of global choice"). I wonder if there are any texts with the justification of this trick? The references I found (in particular, those mentioned in Wikipedia) give justification only for usual axiom of choice (for families of sets or for classes of sets, but not for "conglomerates of classes"). So actually I can't understand whether, for example, the existence of a skeleton, is true for all categories (in some interpretation of set theory) or for some special ones... Similarly the other corollaries of this "global axiom of choice" look doubtful for me. I would be grateful if anybody could clarify this. From the comments I see that there is a risk of misunderstanding, so I want to explain that by justification I mean an accurate (rigorous) definition of the new tool together with the analysis of whether it is compatible with the other tools of the theory. As an illustration, in the case of the usual axiom of choice (I mean its "weak form", in terms of Wikipedia), there are many textbooks (I can recommend E.Mendelson "Introduction to mathematical logic" or J.Kelly "General topology", the appendix), where the fundamental objects of the theory (in this case, the classes) are accurately introduced (here, axiomatically) and the necessary constructions (like functions) are rigorously defined in the theory. This makes possible to give rigorous formulation to the axiom of choice (again, to its "weak form") inside the theory, and moreover, this presentation of a new axiom is followed by a thorough investigation of whether it contradicts to the previous axioms of the theory. Only after receiving the answer that no contradictions can appear (in fact, a more strong thing is true: the new axiom is independent from the others, that was the result by P.Cohen) mathematicians can use this new axiom without worrying that something is wrong here. So my question is whether there is something similar for the "strong form of the axiom of choice"? Is it possible that nothing lies behind these words? On the contrary, if there is a justification, where can I read about it? Dear colleagues, from what I learn on this subject in the textbooks which I found, in Wikipedia and here in MO, I deduce that what people call "axiom of global choice" is just the usual axiom of choice (as it is presented in Kelly's book) applied to some special classes of sets arising in consideration of what is called the Grothendieck Universe. It's a puzzle for me 1) why people call this special case "a stronger form of the axiom of choice", and 2) why they don't want to give references, where this construction is accurately introduced. 
With the aim to accelerate the clarification of this question, I have now nominated the Wikipedia article devoted to this topic for deletion: http://en.wikipedia.org/wiki/Wikipedia:Articles_for_deletion/Axiom_of_global_choice. As I wrote there, I don't exclude that the partisans of the idea will rewrite the article in Wikipedia to endow "global choice" with some mathematical sense, but you should agree that in its present form this article and the other mentions of "global choice" available to external observers look indecently vague. I invite all comers to share their opinion here or at the Wikipedia page.
ct.category-theory reference-request set-theory (asked by Sergei Akbarov)
Comments:
– One way of putting it is this: the axiom of choice for sets is equivalent to the statement that every small category has a skeleton. Since the definition of category is first order, you could consider the axiom: Every model of the category axioms has a skeleton. This is then prior to any choice of ambient set theory. – David Roberts Sep 20 '12 at 7:27
– What exactly do you mean by "justification" of an axiom? I'd use that phrase to mean the pre-axiomatic intuitive ideas that lead me to regard the statement as a reasonable axiom to include in my theory. For this purpose, it seems to me that whatever intuition leads you to accept the axiom of choice for sets would probably do the same for classes, conglomerates, and whatever higher entities you include in your theory. – Andreas Blass Sep 20 '12 at 13:25
– I don't know that much about set theory, but I guess to get a workable theory of conglomerates (in ZF+something) we need some large cardinal axiom, and then the axiom of choice is simply the usual axiom of choice, applied to very large sets. And if the most intriguing part is "how can a class (which is not a set) be an element of something else?", then this has little to do with the axiom of choice. – Michael Greinecker Sep 21 '12 at 10:35
– Sergei: The main reason why you're not getting an answer is not necessarily the meaning of "justification" but the meaning of "conglomerate." None of the standard set theories (ZF, NBG, MK) admit those. Some more esoteric theories do (e.g. Ackermann), but since these can be wildly different from each other, you really need to say which one you're using before any kind of serious analysis can be done. If you don't know which one you are using then you can ask a separate question to figure that out first. – François G. Dorais♦ Sep 21 '12 at 12:21
– Indeed, I am also appalled that my civilization still can't answer vague questions about undefined objects. – François G. Dorais♦ Sep 22 '12 at 6:47
Answer (Joel David Hamkins): Here at least is the usual justification for moving from AC for sets to what is normally called the global axiom of choice, which asserts that there is a class well-ordering of the (first-order) universe.
Theorem. The global axiom of choice, when added to the ZFC or GB+AC axioms of set theory, leads to no new theorems about sets. That is, the first-order assertions about sets that are provable in GBC are precisely the same as the theorems of ZFC. Furthermore, every model of ZFC can be extended (by forcing) to a model of GBC, in which the global axiom of choice is true, while adding no new sets (only classes).
In particular, the global axiom of choice is safe in the sense that it will not cause inconsistency, unless the underlying system without the global axiom of choice was already inconsistent.
Proof. Suppose that $M$ is any model of ZFC. Consider the class partial order $\mathbb{P}$ consisting of all well-orderings in $M$ of any set in $M$, ordered by end-extension. As a forcing notion, this partial order is $\kappa$-closed for every $\kappa$ in $M$, since the union of a chain of (end-extending) well-orderings is still a well-order. If $G\subset\mathbb{P}$ is $M$-generic for this partial order, then $G$ is, in effect, a well-ordering of all the sets in $M$. Furthermore, one can prove by the usual forcing technology that the structure $\langle M,{\in},G\rangle$ satisfies $\text{ZFC}(G)$, that is, where the predicate $G$ is allowed to appear in the replacement and other axiom schemes. Essentially, what we've done is add a global well-ordering of the universe generically. And since the forcing was closed, no new sets were added, and so $M[G]$ has the same first-order part as $M$. It follows now that GBC is conservative over ZFC for first-order assertions, since any first-order statement $\sigma$ that is true in all GBC models will be true in $M[G]$ and therefore also in $M$, and so $\sigma$ is true in all ZFC models as well. QED
Comments:
– I am afraid this is too technical for me. Are you saying that it is possible to extend the Gödel-Bernays theory in such a way that the axiom of choice becomes more powerful, so that it can be applied to classes? – Sergei Akbarov Apr 13 '13 at 16:39
– I might add that this is completely standard material when considering Goedel-Bernays set theory, and so if you do find this "completely unfamiliar", then it may be appropriate for you to adopt a less strident tone. – Joel David Hamkins Apr 21 '13 at 11:30
– In particular, the Wikipedia page on global choice en.wikipedia.org/wiki/Axiom_of_global_choice seems to me currently to be completely fine, presenting the standard and familiar facts about it as they are usually understood, including the conservation result that I prove above. Please do not delete that Wikipedia page, and please remove your request for deletion. – Joel David Hamkins Apr 21 '13 at 11:40
– The axiom of choice for classes (i.e. global choice) is known not to be equivalent to the axiom of choice for sets, as one can build a model of Goedel-Bernays set theory that does not satisfy the axiom of choice for classes but does satisfy AC (this is done in a few questions here on MO). For this reason, I would regard it as a mistake to refer to both axioms as the "axiom of choice". Indeed, such a terminology would likely cause students to become very confused about the difference between them. – Joel David Hamkins Apr 24 '13 at 19:17
– Well, the forcing arguments show that global choice (choice for classes) is strictly stronger than the axiom of choice (choice for sets). I don't find that to be banal at all; rather, I find it to be a very enlightening piece of mathematics, explaining something fundamental about the set/class distinction. I would suggest that you take a look at it... – Joel David Hamkins Apr 25 '13 at 20:23
Answer (Dimitri Zaganidis): I think that you should place yourself in ZFC + the existence of strongly inaccessible cardinals.
Then the existence of a strongly inaccessible cardinal provides you a universe as in Borceux's Handbook of Categorical Algebra. Then, what you call sets are elements of the universe, and what you call classes are the subsets of the universe, but they are still sets in the set-theoretic sense, so you can apply choice.
EDIT: clarification. The problem of category theory is that we want the category Set of all sets to actually be a category. Since there is no set which contains all sets, we can't ask a category to have a set of objects, or Set will no longer be a category. That's why in the first place we define the collection of objects to be potentially wider than a set: we ask it to be a class. The point is that you can avoid the difficulty differently, by limiting yourself to a rich enough set of sets, which should contain "everything that you can be interested in". This is the concept of a Grothendieck universe, see http://ncatlab.org/nlab/show/Grothendieck+universe or Borceux's Handbook of Categorical Algebra for a definition. Existence of Grothendieck universes turns out to be equivalent to the existence of strongly inaccessible cardinals (here we are in ZFC), and this existence axiom has been studied in set theory (I'm not a specialist of that at all). So you place yourself in ZFC + existence of strongly inaccessible cardinals, and you take a universe $U$. Call the elements of U the "U-sets" and the subsets of U the "U-classes". Then, define a category $\mathscr C$ (in the universe $U$) to be a triple $(Obj~ \mathscr C, Mor~ \mathscr C, \circ)$ with $Obj~ \mathscr C \subseteq U$ and $\mathscr C(A,B) \in U$ for all $A,B \in Obj~ \mathscr C$, satisfying the usual axioms of a category. So now, your U-classes are indeed sets of ZFC (as subsets of the set U), so you can use the axiom of choice in them, without bothering. I am not sure if it is what you were looking for, but it is what I personally have in mind when I am using the axiom of choice to choose in a collection of objects, in category theory.
Comments:
– Dimitri, I did not understand you. Is this written anywhere? If not, can you contact me to explain what you write? (Or explain this here.) – Sergei Akbarov Apr 8 '13 at 5:18
– I hope my edit made the ideas I tried to express clearer. – Dimitri Zaganidis Apr 13 '13 at 15:39
– Dimitri, is it possible to translate this into the language of the Gödel-Bernays set theory? – Sergei Akbarov Apr 13 '13 at 16:15
– As far as I understand, $U$ must be a set here, is it? If yes, then I don't see any profits. If the "axiom of global choice" is just the usual axiom of choice applied to families of $U$-classes (which are families of sets), then why do people introduce the very term "axiom of global choice", and claim that "the axiom of global choice is a stronger variant of the axiom of choice which applies to proper classes as well as sets" (en.wikipedia.org/wiki/Axiom_of_global_choice). This is evidently not true in the situation you describe. – Sergei Akbarov Apr 20 '13 at 8:05
Answer (Fred Rohrer): Dear Sergei, you might be interested in first reading Bourbaki's Théorie des ensembles (at least chapters I--III) and then have a look at section 0 and the appendix of SGA 4.I. This gives a slightly different approach using Hilbert's almighty symbol $\tau$.
Comments:
– Appendix of SGA? What is it? In which translation is this? English?
– Sergei Akbarov Apr 21 '13 at 10:35
– Dear Sergei, I refer to SGA 4, Exposé I, Appendix ("Univers" by N. Bourbaki). I do not know of any translation of SGA, so you have to go with the French original. – Fred Rohrer Apr 21 '13 at 10:42
– Pedantic remark: Hilbert's symbol was actually $\varepsilon$. en.wikipedia.org/wiki/Epsilon_calculus – François G. Dorais♦ Apr 21 '13 at 18:31
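For readers skimming this thread, the two axioms being contrasted can be written side by side. This summary is my own paraphrase of the answers and comments above, not text from any of them, and the formalizations are only one standard way of stating the axioms:
\[
\textbf{AC (for sets):}\quad \forall x\,\bigl(\emptyset\notin x \rightarrow \exists f\colon x\to\textstyle\bigcup x\ \ \forall y\in x\ \ f(y)\in y\bigr),
\]
\[
\textbf{Global choice (for classes):}\quad \exists\ \text{a class function } F \text{ such that } F(y)\in y \text{ for every nonempty set } y,
\]
the latter being equivalent to the existence of a class well-ordering of the universe $V$. As the answers and comments state, global choice is conservative over ZFC for first-order statements about sets, yet it is not provable from Gödel-Bernays set theory together with AC for sets, so the two axioms are genuinely different in strength.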
Home Journals IJSDP Method for Forecasting Electric Consumption for Household Users in the Conditions of the Republic of Tajikistan Method for Forecasting Electric Consumption for Household Users in the Conditions of the Republic of Tajikistan Aleksandr Ivanovich Sidorov | Saidjon Sheralievich Tavarov* Department of "Life Safety", South-Ural State University (National Research University), 454080, Lenin Avenue, 76, Chelyabinsk, Russia Corresponding Author Email: [email protected] https://doi.org/10.18280/ijsdp.150417 | Citation 15.04_17.pdf In the work, a method is proposed that allows predicting electricity consumption by household consumers in the terrain of the Republic of Tajikistan. To take into account the factor features of the terrain of the cities of the Republic of Tajikistan, a coefficient is proposed that characterizes the terrain conditions depending on the temperatures and the difference in the heights of the cities above the sea level. The adequacy of the proposed method for predicting electricity consumption by household consumers is presented in the form of dependences of annual electricity consumption schedules obtained by calculation and experimental methods. electricity consumption, forecasting method, household consumers After the collapse of the Soviet Union and the independence of the Republic of Tajikistan, the dynamics of the power consumption of industrial enterprises in relation to household consumers began to decrease gradually. This dynamics was influenced by a number of factors: the civil war, which led to a sharp decrease in funds for the reconstruction and modernization of existing electrical equipment, in particular in energy-intensive enterprises such as the Tajik Aluminum Plant, Tajik Textile Mach. Javanese Chemical Plant, and. t.; shutdown of the supply of natural gas and the impossibility of transferring technological equipment from natural gas to electric energy due to increased electricity consumption by domestic consumers that has been grafted onto overloading existing electric networks. The increase in electricity consumption by household consumers in particular is due to the disconnection of heat and hot water supply, heating and natural gas and the transfer to full electrification, the structure of electricity consumption by years is shown in Figure 1. e5wk71rnaajjgvngbb.png Figure 1. Structure of power consumption by groups accepted in RT Group 1 – industrial, non-industrial, agricultural and equivalent consumers; Group 2 – consumers of budgetary sphere, the enterprises of the communal services and electric transport; Group 3 – pumping stations of machine irrigation systems, borehole and reclamation pumping stations; Group 4 – population, localities, and dormitories. According to Figure 1. The dynamics of electricity consumption by household consumers, as mentioned above, is increasing. As you know, the dynamics of electricity consumption is affected by an increase in the population, as well as the number of people living in one apartment [1-17]. In the examined series of discounts, it is shown that the increase in power consumption is affected by the placement of people in rooms in the apartment [8-22]. Given the traditional feature of Central Asian countries, including the Republic of Tajikistan, an average of 4-6 people lives in one apartment. 
Consequently, given the lack of heat and gas supply, hot water supply and heating to maintain optimal room temperatures (taking into account the number of people living in one apartment and the average use of at least two rooms), much more electricity is needed than jelly than in cases where household consumers are provided with other energy sources noted above According to generally accepted opinions [6-10], the ambient temperature during the winter periods in Central Asian countries, including the Republic of Tajikistan, does not fall to critical values. However, this opinion does not fully correspond to reality, this is due to the fact that the terrain of the Republic of Tajikistan is quite complex. The Republic of Tajikistan consists of 93% of the mountains, while altitudes vary from 2,000 to 7,500 m above sea level, such differences cannot affect the critical drops in ambient temperatures on winter days. Critical ambient temperatures during winter days are observed in particular in the North - Eastern part of the Republic of Tajikistan, namely in the Sogd and Gorn-Badakhshan autonomous regions with the centers of the city of Hajent and the city of Khorog. In this case, the following factor should be noted that according to the territorial arrangement, the ambient temperature of Dushanbe in winter does not fall below -5℃ and, for example, in the Gorno-Badakhshan Autonomous Region the ambient temperature in winter can reach -30℃ or more (there is no centralized heat supply), therefore, the time of using the maximum loads during the day in the Gorno-Badakhshan Autonomous Region will be significantly longer than in other regions of the country. Therefore, I take into account the population growth, the tradition of children living with their parents, regardless of the age and the absence of other sources of energy, electricity consumption among household consumers will increase more and more. It should be noted that the Republic of Tajikistan (RT) does not have its own regulatory documents and when determining electrical loads for the construction and reconstruction of urban electric networks, documents are used either developed in Soviet times or the Russian Federation. 2. Formulation of the Problem Setting the same specific load for the household sector according to ref. [21, 22] for the whole RT is not correct. It is well known that if the unit loads are incorrectly set and the power supply systems are designed according to these standards, there will be an increase in energy losses [6-23], and in the future, a reduction in the operating time of elements of power supply circuits. The existing norms of specific loads for typical houses [21, 22], currently used in design, were developed during the Soviet Union and partly in modern Russia, they do not take into account the climatic and meteorological and territorial features of cities, as well as the load of air conditioners and electric water heaters. The introduction of new specific load standards in the RT is complicated by a number of reasons, one of the main of which is the shortage of electricity production in winter. Therefore, a relevant solution is the recommendation of energy consumption standards for cities of the Republic of Tajikistan based on the proposed forecasting method, but taking into account the climatic and meteorological and territorial features of the cities of the Republic of Tajikistan. 
The reason for the shortage of electricity in the winter period is due to the fact that the main source of electricity in the Republic of Tajikistan is hydraulic power plants (HPPs), which account for more than 90% of the total electricity generated in the republic. At present, only one hydroelectric power station in the Republic of Tajikistan has its own reservoir (Nurek hydroelectric power station), while the rest are riverbed and depend on the influx of water due to the melting of glaciers. Thus, in order to improve the quality of electricity and the reliability of electricity supply, it is necessary to develop a method for forecasting electricity consumption taking into account the current norms of specific loads and allowed capacity established by Barki Tajik for household consumers with typical houses, taking into account a number of factors affecting household electricity consumption . These factors include the following: the territorial location of cities and climatic and meteorological conditions [6-22]. These factors affect the duration of the maximum load during the day and month [18-20]. 3. Theoretical Part In order to regulate the operational parameters of urban electric networks by the operational dispatching service, a method for forecasting power consumption for household consumers in the Republic of Tajikistan (RT) is proposed based on equations taking into account the maximum load time coefficient ($\alpha_{\text {maximum load time }}$) obtained for various cities of the Republic of Tajikistan [19, 20] and having a functional relationship: $\alpha_{\text {maximum load time }}=f\left(x_{i}\right)$ (1) $x_{i}=x_{1} ; x_{2} ; x_{3} ; x_{4} ; x_{5}$ (2) where, $x_{1}$ - air temperature; $x_{2}$ - features of the construction of houses; $x_{3}$ –height of cities above sea level; $x_{4}$ - amount of precipitation; $x_{5}$ - wind speed. In view of the foregoing, we propose a method for predicting power consumption and average load for household consumers in the Republic of Tajikistan in the form of a system of equations taking into account factors - the coefficient of maximum load time -$-\alpha_{\text {maximumload time}}$ for various cities of the Republic of Tajikistan [19, 20], which allows taking into account the territorial and climate - meteorological feature of the Republic of Tatarstan. The equations for forecasting power consumption and average power, taking into account the coefficient of time of maximum loads - $-\alpha_{\text {maximumload time}}$ of the cities of the Republic of Tajikistan, are given below: $W_{\alpha_{\text {maximum load time}}+1}=W_{\text {consumption }} \cdot \alpha_{\text {maximum load time }} \cdot\left(1-\alpha_{\text {maximum load time }}\right)$ (3) $P_{\text {average time of maximum loads during the day }}=\frac{W_{\alpha_{\text {maximum load time }}+1}}{t_{\text {maximum load time during the day }}} \cdot \alpha_{\text {maximum load time }}$ (4) where, $W_{\text {consumption}}$ - electricity consumption during the period under review, kW ∙ h; $t_{\text {maximum load time during the day}}$ - time of maximum loads during the day, hours, $\alpha_{\text {maximum load time}}$ - time coefficient of maximum loads. The coefficient that characterizes the terrain conditions ($\alpha_{\text {terrain conditions}}$) is determined by the expression: $\alpha_{\text {maximum load } t .}=\frac{\left(t_{\text {av. mon. am. tem.}}+t_{\text {tem.diff.}}\right)}{\left(t_{\text {av. mon. am. tem.}}+t_{\text {av.mon. add. tem.}}\right)}$ (5) where: t(Ms. M. T.) 
– average monthly ambient temperature, ℃. (The value of the average environmental temperature is obtained according to "Tajikhydromet" for each month during the forecast period taking into account the location of points of elevation above sea level. In this paper, we consider the elevation points above sea level-2123 m for Khorog and 706 m for Dushanbe); t(times. t) – the temperature difference between the point at sea level and the point of location of the consumer above sea level, ℃. (the temperature Difference between the point at sea level and the points of location of the consumer above sea level) for the corresponding considered forecast months. For the consumer above sea level according to "Tajikhydromet" during the forecast period, and for a point at sea level for the same forecast periods according dateandtime.info. The need to take into account the temperature difference at these levels relative to the sea is due to the following. When the altitude increases above sea level for every 100 meters, the average temperature decreases by 0.6℃. Consequently, this leads to an increase in power consumption); t(cf. m. t. V 0) – the average monthly temperature of the environment at the point at sea level, СC. (This value is the initial value for estimating the change in the average monthly temperature at the forecast point. It is selected based on data dateandtime.info); t(cf.).)- average monthly additional temperature, СC. (This average monthly temperature characterizes the degree of constructability of residential buildings, taking into account the thermal insulation ability, and shows the difference between the average monthly temperature of the external and internal temperature of a residential building [18-25]. As noted above, due to the lack of heating in the winter in the cities of the Republic of Tajikistan, for well-known reasons, the temperature inside a residential building is much lower than in houses with heating. Therefore, the average monthly additional temperature in (4) at sub-zero average monthly temperatures when predicting power consumption for household consumers can be taken into account if the factors listed above are taken into account. But at the same time, to choose thermal insulation materials for construction, it is necessary to focus on the fact that in summer the temperature in the Central and southern cities of the Republic of Tajikistan can reach +40-45℃). The obtained equations will allow us to predict, plan and control power consumption [19, 20] without violating the established norms of specific loads. To maintain those operating parameters in the power supply system, it is necessary to develop power consumption standards that do not exceed specific loads [21, 22]. 4. Practical Part Based on the obtained Eq. (3) for the considered cities of the Republic of Tajikistan, taking into account the climatic and meteorological conditions of the terrain and the territoriality of their location, we recommend the following standards for electricity consumption by household consumers. In our opinion, these recommended norms will allow us to solve the previously set tasks, thereby improving the reliability of power supply and all indicators that affect the reliability of power supply, in particular, in 0.4 kV networks. We will present the results of the recommended energy consumption standards in the form of recommended average electricity consumption schedules for 52 household consumers located in the east and central part of Tajikistan (Figure 2 and Figure 3). 
4. Practical Part

Based on the obtained Eq. (3) for the considered cities of the Republic of Tajikistan, taking into account the climatic and meteorological conditions of the terrain and the territoriality of their location, we recommend the following standards for electricity consumption by household consumers. In our opinion, these recommended norms will allow us to solve the previously set tasks, thereby improving the reliability of power supply and all indicators that affect it, in particular in 0.4 kV networks. We present the results of the recommended energy consumption standards in the form of recommended average electricity consumption schedules for 52 household consumers located in the eastern and central parts of Tajikistan (Figure 2 and Figure 3).

Figure 2. Recommended average power consumption schedule for 52 household consumers located in Eastern Tajikistan

To verify the adequacy of the recommended average power consumption schedules (Figure 2 and Figure 3) obtained on the basis of the proposed forecasting method (Eq. (3)), we determine the average loads at peak hours during the considered period. The results of the calculations are presented in Table 1. The values of the average loads at maximum hours given in Table 1 correspond to the values established for household consumers according to [21, 22].

Table 1. Average loads during peak load hours (columns: average power consumption, kWh, eastern cities; average power consumption, kWh, central cities; average load, kW, eastern cities; average load, kW, central cities; the numerical entries are not reproduced here)

Figure 3. Recommended average power consumption schedule for 52 household consumers located in the Central part of Tajikistan

Figure 4. Comparison of calculated and experimental values of average electric load at maximum hours for 16 consumers (eastern cities of Tajikistan)

The maximum power consumption is observed on winter days; therefore, at this time the average load increases. To assess the results obtained (see Table 1), we construct the dependences of the average electric load at maximum hours for 16 household consumers (Figures 4-6). For the constant value, we take the normalized value set in accordance with [21, 22]. The given dependences (see Figures 4-6) once again show that during the day the electrical load of household consumers does not change uniformly. The calculated results make it possible to predict the electrical load correctly; since, as noted above, more than 90% of the generated electricity is produced by hydraulic power plants, the ability to predict power consumption in advance allows this type of plant to generate electricity more efficiently and reliably.

Figure 5. Comparison of calculated and experimental values of the average electric load at maximum hours for 16 consumers (capital of Tajikistan)

Figure 6. Comparison of calculated and experimental values of average electric load at maximum hours for 16 consumers (northern cities of Tajikistan)

5. Discussion of Research Results

The obtained method of forecasting power consumption by household consumers, taking into account all of the factors considered, allows us to solve the tasks set at the outset. The main task of this article was to obtain a method for forecasting power consumption that takes into account the influence of changes in the climatic and meteorological conditions of the cities under consideration, and that allows the operating modes of urban electric networks to be planned and controlled for the proposed average power consumption of household consumers (see Table 1) without changing the established specific norms of electrical loads [21, 22]. Using the derived Eq. (3) and the coefficient of maximum load time, which takes into account the terrain conditions of the Republic of Tajikistan, average values of power consumption by household consumers were obtained and recommended for the first time. The calculated average loads during peak load hours (see Table 1), corresponding to the recommended average values of power consumption by household consumers, do not exceed the permissible values [21, 22].
At the same time, it should be noted that the proposed forecasting method makes it possible to equalize the average load, reducing the underloading mode during the summer hours of maximum loads and thereby optimizing the network operation mode.

1. A method is proposed that allows predicting power consumption by household consumers in the conditions of the Republic of Tajikistan.
2. The dependence of power consumption by household consumers on elevation differences is established. This dependence is presented as a coefficient that characterizes the terrain conditions of the cities of the Republic of Tajikistan.
3. The adequacy of the proposed method for predicting power consumption by household consumers is shown by comparing the results obtained by calculation with those obtained experimentally.

[1] Zakaria, Y., Anup, P. (2018). An optimal load schedule of household appliances with leveled load profile and consumer's preferences. 2018 International Conference on the Domestic Use of Energy (DUE), Cape Town, South Africa, pp. 1-7. https://doi.org/10.23919/DUE.2018.8384382
[2] Zakaria, Y., Pule, K.H. (2018). A binary integer programming model for optimal load scheduling of household appliances with consumer's preferences. 2018 International Conference on the Domestic Use of Energy (DUE), Cape Town, South Africa, pp. 1-8. https://doi.org/10.23919/DUE.2018.8384381
[3] Gheorghe, G., Florina, S. (2015). Processing of smart meters data for peak load estimation of consumers. 9th International Symposium on Advanced Topics in Electrical Engineering (ATEE), Bucharest, Romania, pp. 864-867. https://doi.org/10.1109/ATEE.2015.7133922
[4] Hussein, S., Boonruang, M. (2018). Intelligent algorithm for optimal load management in smart home appliance scheduling in distribution system. 2018 International Electrical Engineering Congress (iEECON), Krabi, Thailand, pp. 1-4. https://doi.org/10.1109/IEECON.2018.8712166
[5] Jangkyum, K., Han, J., Kim, N., Kim, M., Choi, J. (2018). Analysis of power usage at household and proper energy management. 2018 International Conference on Information and Communication Technology Convergence (ICTC), Jeju, South Korea, pp. 450-456. https://doi.org/10.1109/ICTC.2018.8539459
[6] Makokluev, B.I., Kostikov, V. (1994). Modeling of electric loads of electric power systems. Electricity, 10: 6-18.
[7] Makokluev, B.I., Pavlikov, V., Vladimirov, A. (2003). Influence of fluctuations of meteorological factors on power consumption of power units. Powerman, 6: 11-23.
[8] Makokluev, B.I. (2019). Trend of electricity consumption of UES of Russia. Scientific and Technical Journal "Energy of the Unified Network", 5(48): 56-64. https://www.xn-----glcfccctdci4bhow0as6psb.xn--p1ai/publications/103-5-48-2019-g/288-tendentsii-elektropotrebleniya-energosistem-rossii-trends-in-power-consumption-of-russia-s-power-grid-system.
[9] Makokluev, B.I., Polizharov, A.S., Basov, A.A., Alla Yu, E., Loktionov, S.V. (2018). Short-term forecasting of power consumption of power systems. Electric Stations, 4: 24-35.
[10] Makokluev, B.I., Polizharov, A.S., Antonov, A.V., Govorun, M.N., Kolesnikov, A.V., Basov, A.A., Alla Yu, E. (2019). Operational correction of schedules of electric power consumption in the planning cycle of the balancing market. Electric Stations, 5: 36-44.
[11] Repkina, N.G. (2015). Study of factors affecting the accuracy of daily power consumption forecasting. News of higher educational institutions. Electromechanics, 2: 41-43.
http://dx.doi.org/10.17213/0136-3360-2015-2-41-43 [12] Zubakin, V.A., Kovshov, N.M. (2015). Methods and models for analyzing the volatility of electricity consumption taking into account cyclicality and stochasticity. Analysis, Forecast, and Management, 7(15): 6-12. [13] Komornik, S., Kalichets, E. (2008). Requirements for energy consumption forecasting systems. Energo. Market, 3: 5-7. [14] Vorotnitsky, V.E., Morzhin Yu, I. (2018). Digital transformation of energy in Russia -a system task of the fourth industrial revolution. Scientific and Technical Journal "Energy of the Unified Network", 6(42): 12-21. [15] Vorotnitsky, V.E. (2018). The Solution to the problems of the Russian electric power industry should be systematic, qualified and customer-oriented. Powerman, 6: 14-21. [16] Vorotnitsky, V.E. (2019). On digitalization in the economy and electric power industry. Powerman, 12: 6-14. [17] Valeev, G.S., Dzyuba, M.A., Valeev, R.G. (2016). Modeling of daily load schedules of 6-10 kV distribution network sections in cities and localities under conditions of limited initial information. Bulletin Of SUSU. A Series of "Energy", 16(2): 23-29. https://doi.org/10.14529/power160203 [18] Sidorov, A.I., Khanzhina, O.A., Tavarov, S.S. (2019). Ensuring the efficiency of distribution networks C. Dushanbe and Republic of Tajikistan. International Multi-Conference on Industrial Engineering and Modern Technologies (FarEastCon), Vladivostok, Russia, pp. 1-4. https://doi.org/10.1109/FarEastCon.2019.8934377 [19] Sidorov, A.I., Tavarov, S.S. (2019). Normalization of power consumption in the Republic of Tajikistan taking into account the climatic features of the region. Scientific and Technical Journal "Energy of the Unified Network", 3(45): 70-75. [20] Tavarov S.S. (2019). Specific power consumption of the domestic sector taking into account the ambient air temperature and the territorial location of the Republic of Tajikistan. Industrial Power Engineering, 7(7): 19-22. [21] SP 256. 1325800.2016. Electrical installations of residential and public buildings rules of design and installation [Electronic resource]. URL: http://files.stroyinf.ru/Data2/1/4293751/4293751598.htm, accessed on Jul. 11, 2017. [22] RM-2696-01. Temporary instructions for calculating electrical loads of residential buildings. Moscow. Publishing house GUP "NIAC". 2001. 22 p. [23] Vatin, N.I., Gorshkov, A.S., Nemova, D.V. (2013). Energy efficiency of enclosing structures during major repairs. Construction of Unique Buildings and Structures, 3(8): 1-10. [24] GOST 30494-2011. Residential and public buildings. The parameters of the microclimate in the premises. [25] Boguslavsky, L.D. (1985). Reducing energy consumption when working with heating and ventilation systems. Moscow, Stroizdat, 336 pages.
The initial development of a jet caused by fluid, body and free surface interaction with a uniformly accelerated advancing or retreating plate. Part 1. The principal flow
M. T. Gallagher, D. J. Needham, J. Billingham
Journal: Journal of Fluid Mechanics / Volume 841 / 25 April 2018
Print publication: 25 April 2018
The free surface and flow field structure generated by the uniform acceleration (with dimensionless acceleration $\sigma$) of a rigid plate, inclined at an angle $\alpha\in (0,\pi/2)$ to the exterior horizontal, as it advances ($\sigma>0$) or retreats ($\sigma<0$) from an initially stationary and horizontal strip of inviscid incompressible fluid under gravity, are studied in the small-time limit via the method of matched asymptotic expansions. This work generalises the case of a uniformly accelerating plate advancing into a fluid as studied by Needham et al. (Q. J. Mech. Appl. Maths, vol. 61 (4), 2008, pp. 581–614). Particular attention is paid to the innermost asymptotic regions encompassing the initial interaction between the plate and the free surface. We find that the structure of the solution to the governing initial boundary value problem is characterised in terms of the parameters $\alpha$ and $\mu$ (where $\mu=1+\sigma\tan\alpha$), with a bifurcation in structure as $\mu$ changes sign. This bifurcation in structure leads us to question the well-posedness and stability of the governing initial boundary value problem with respect to small perturbations in initial data in the innermost asymptotic regions, the discussion of which will be presented in the companion paper Gallagher et al. (J. Fluid Mech. vol. 841, 2018, pp. 146–166). In particular, when $(\alpha,\mu)\in (0,\pi/2)\times \mathbb{R}^{+}$, the free surface close to the initial contact point remains monotone, and encompasses a swelling jet when $(\alpha,\mu)\in (0,\pi/2)\times [1,\infty)$ or a collapsing jet when $(\alpha,\mu)\in (0,\pi/2)\times (0,1)$. However, when $(\alpha,\mu)\in (0,\pi/2)\times \mathbb{R}^{-}$, the collapsing jet develops a more complex structure, with the free surface close to the initial contact point now developing a finite number of local oscillations, with near resonance type behaviour occurring close to a countable set of critical plate angles $\alpha=\alpha_{n}^{\ast}\in (0,\pi/2)$ ($n=1,2,\ldots$).
The initial development of a jet caused by fluid, body and free surface interaction with a uniformly accelerated advancing or retreating plate. Part 2.
Well-posedness and stability of the principal flow We consider the problem of a rigid plate, inclined at an angle $\unicode[STIX]{x1D6FC}\in (0,\unicode[STIX]{x03C0}/2)$ to the horizontal, accelerating uniformly from rest into, or away from, a semi-infinite strip of inviscid, incompressible fluid under gravity. Following on from Gallagher et al. (J. Fluid Mech., vol. 841, 2018, pp. 109–145) (henceforth referred to as GNB), it is of interest to analyse the well-posedness and stability of the principal flow with respect to perturbations in the initially horizontal free surface close to the plate contact point. In particular we find that the solution to the principal unperturbed problem, denoted by [IBVP] in GNB, is well-posed and stable with respect to perturbations in initial data in the region of interest, localised close to the contact point of the free surface and the plate, when the plate is accelerated with dimensionless acceleration $\unicode[STIX]{x1D70E}\geqslant -\cot \,\unicode[STIX]{x1D6FC}$ , while the solution to [IBVP] is ill-posed with respect to such perturbations in the initial data, when the plate is accelerated with dimensionless acceleration $\unicode[STIX]{x1D70E}<-\cot \,\unicode[STIX]{x1D6FC}$ . The physical source of the ill-posedness of the principal problem [IBVP], when $\unicode[STIX]{x1D70E}<-\cot \,\unicode[STIX]{x1D6FC}$ , is revealed to be due to the leading-order problem in the innermost region localised close to the initial contact point being in the form of a local Rayleigh–Taylor problem. As a consequence of this mechanistic interpretation we anticipate that, when the plate is accelerated with $\unicode[STIX]{x1D70E}<-\cot \,\unicode[STIX]{x1D6FC}$ , the inclusion of weak surface tension effects will restore well-posedness of the problem [IBVP] which will, however, remain temporally unstable. The Life in the Universe Series J. Billingham, E. DeVore, D. Milne, K. O'Sullivan, C. Stoneburner, J. Tarter Journal: International Astronomical Union Colloquium / Volume 162 / 1998 Students, young and old, find the existence of extraterrestrial life one of the most intriguing of all science topics. The theme of searching for life in the universe lends itself naturally to the integration of many scientific disciplines for thematic science education. Based upon the search for extraterrestrial intelligence (SETI), the Life in the Universe (LITU) curriculum project at the SETI Institute developed a series of six teachers guides, with ancillary materials, for use in elementary and middle school classrooms, grades 3 through 9. Lessons address topics such as the formation of planetary systems, the origin and nature of life, the rise of intelligence and culture, spectroscopy, scales of distance and size, communication and the search for extraterrestrial intelligence. Each guide is structured to present a challenge as the students work through the lessons. The six LITU teachers guides may be used individually or as a multi-grade curriculum for a school. Thick drops climbing uphill on an oscillating substrate J. T. Bradshaw, J. Billingham Experiments have shown that a liquid droplet on an inclined plane can be made to move uphill by sufficiently strong, vertical oscillations (Brunet et al., Phys. Rev. Lett., vol. 99, 2007, 144501). In this paper, we study a two-dimensional, inviscid, irrotational model of this flow, with the velocity of the contact lines a function of contact angle. 
We use asymptotic analysis to show that, for forcing of sufficiently small amplitude, the motion of the droplet can be separated into an odd and an even mode, and that the weakly nonlinear interaction between these modes determines whether the droplet climbs up or slides down the plane, consistent with earlier work in the limit of small contact angles (Benilov and Billingham, J. Fluid Mech. vol. 674, 2011, pp. 93–119). In this weakly nonlinear limit, we find that, as the static contact angle approaches $\unicode[STIX]{x03C0}$ (the non-wetting limit), the rise velocity of the droplet (specifically the velocity of the droplet averaged over one period of the motion) becomes a highly oscillatory function of static contact angle due to a high frequency mode that is excited by the forcing. We also solve the full nonlinear moving boundary problem numerically using a boundary integral method. We use this to study the effect of contact angle hysteresis, which we find can increase the rise velocity of the droplet, provided that it is not so large as to completely fix the contact lines. We also study a time-dependent modification of the contact line law in an attempt to reproduce the unsteady contact line dynamics observed in experiments, where the apparent contact angle is not a single-valued function of contact line velocity. After adding lag into the contact line model, we find that the rise velocity of the droplet is significantly affected, and that larger rise velocities are possible. The initial development of a jet caused by fluid, body and free surface interaction. Part 5. Parasitic capillary waves on an initially horizontal surface J. Billingham, D. J. Needham, E. Korsukova, R. J. Munro Journal: Journal of Fluid Mechanics / Volume 836 / 10 February 2018 Published online by Cambridge University Press: 18 December 2017, pp. 850-872 Print publication: 10 February 2018 In Part 3 of this series of papers (Needham et al., Q. J. Mech. Appl. Maths, vol. 61, 2008, pp. 581–614), we studied the free surface flow generated in a horizontal layer of inviscid fluid when a flat, rigid plate, inclined at an external angle $\unicode[STIX]{x1D6FC}$ to the horizontal, is driven into the fluid with a constant, horizontal acceleration. We found that the most interesting behaviour occurs when $\unicode[STIX]{x1D6FC}>\unicode[STIX]{x03C0}/2$ (the plate leaning into the fluid). When $\unicode[STIX]{x03C0}/2<\unicode[STIX]{x1D6FC}\leqslant \unicode[STIX]{x1D6FC}_{c}$ , with $\unicode[STIX]{x1D6FC}_{c}\approx 102.6^{\circ }$ , we were able to find the small-time asymptotic solution structure, and solve the leading-order problem numerically. When $\unicode[STIX]{x1D6FC}=\unicode[STIX]{x1D6FC}_{c}$ , we found numerical evidence that a $120^{\circ }$ corner exists on the free surface, at leading order as $t\rightarrow 0$ . For $\unicode[STIX]{x1D6FC}>\unicode[STIX]{x1D6FC}_{c}$ , we could find no numerical solution of the leading-order problem as $t\rightarrow 0$ , and hypothesised that the solution does not exist for any $t>0$ for these values of $\unicode[STIX]{x1D6FC}$ . At the present time, there is no rigorous proof of this hypothesis. In this paper, we demonstrate that the likely non-existence of a solution for any $t>0$ when $\unicode[STIX]{x1D6FC}>\unicode[STIX]{x1D6FC}_{c}$ can be reconciled with the physics of the system by including the effect of surface tension in the model. 
Specifically, we find that for $\unicode[STIX]{x1D6FC}>\unicode[STIX]{x1D6FC}_{c}$ , the solution exists for $0\leqslant t<t_{c}$ , with $t_{c}\sim Bo^{-1/(3\unicode[STIX]{x1D6FE}-1)}\unicode[STIX]{x1D70F}_{c}$ and $\unicode[STIX]{x1D70F}_{c}=O(1)$ as $Bo^{-1}\rightarrow 0$ , where $Bo$ is the Bond number ( $Bo^{-1}$ is the square of the ratio of the capillary length to the fluid depth) and $\unicode[STIX]{x1D6FE}\equiv 1/(1-\unicode[STIX]{x03C0}/4\unicode[STIX]{x1D6FC})$ . The solution does not exist for $t\geqslant t_{c}$ due to a topological transition driven by a nonlinear capillary wave, i.e. the free surface pinches off (self-intersects) when $t=t_{c}$ . We are also able to compare this asymptotic solution with experimental results which show that, in an experimental case where the contact angle remains approximately constant (the modelling assumption that we make in this paper), the asymptotic solution is in good agreement. In general, the inclusion of surface tension leads to the generation of capillary waves ahead of the wavecrest, which decay as $t$ increases for $\unicode[STIX]{x1D6FC}<\unicode[STIX]{x1D6FC}_{c}$ but dominate the flow and lead to pinch-off for $\unicode[STIX]{x1D6FC}>\unicode[STIX]{x1D6FC}_{c}$ . These capillary waves are an unsteady analogue of the parasitic capillary waves that can be generated ahead of steadily propagating, periodic capillary–gravity waves, e.g. Lin & Rockwell (J. Fluid Mech., vol. 302, 1995, pp. 29–44). A Reaction Diffusion Model for Inter-Species Competition and Intra-Species Cooperation S. M. Rasheed, J. Billingham Journal: Mathematical Modelling of Natural Phenomena / Volume 8 / Issue 3 / 2013 We study a reaction diffusion system that models the dynamics of two species that display inter-species competition and intra-species cooperation. We find that there are between three and six different equilibrium states and a variety of possible travelling wave solutions that can connect them. After examining the travelling waves that are generated in three different ecologically-relevant initial value problems, we construct asymptotic solutions in the limit λ ≪ 1 (fast diffusion, slow reaction for the second species relative to the first). Drops climbing uphill on an oscillating substrate E. S. BENILOV, J. BILLINGHAM Journal: Journal of Fluid Mechanics / Volume 674 / 10 May 2011 Published online by Cambridge University Press: 07 March 2011, pp. 93-119 Print publication: 10 May 2011 Recent experiments by Brunet, Eggers & Deegan (Phys. Rev. Lett., vol. 99, 2007, p. 144501 and Eur. Phys. J., vol. 166, 2009, p. 11) have demonstrated that drops of liquid placed on an inclined plane oscillating vertically are able to climb uphill. In the present paper, we show that a two-dimensional shallow-water model incorporating surface tension and inertia can reproduce qualitatively the main features of these experiments. We find that the motion of the drop is controlled by the interaction of a 'swaying' (odd) mode driven by the in-plane acceleration and a 'spreading' (even) mode driven by the cross-plane acceleration. Both modes need to be present to make the drop climb uphill, and the effect is strongest when they are in phase with each other. The initial development of a jet caused by fluid, body and free-surface interaction. Part 2. An impulsively moved plate D. J. NEEDHAM, J. BILLINGHAM, A. C. 
KING The free-surface deformation and flow field caused by the impulsive horizontal motion of a rigid vertical plate into a horizontal strip of inviscid incompressible fluid, initially at rest, is studied in the small time limit using the method of matched asymptotic expansions. It is found that three different asymptotic regions are necessary to describe the flow. There is a main, O(1) sized, outer region in which the flow is singular at the point where the free surface meets the plate. This leads to an inner region, centred on the point where the free surface initially meets the plate, with size of O(-t log t). To resolve the singularities that arise in this inner region, it is necessary to analyse further the flow in an inner-inner region, with size of O(t), again centred upon the wetting point of the nascent rising jet. The solutions of the boundary value problems in the two largest regions are obtained analytically. The solution of the parameter-free nonlinear boundary value problem that arises in the inner-inner region is obtained numerically. On a model for the motion of a contact line on a smooth solid surface J. BILLINGHAM Journal: European Journal of Applied Mathematics / Volume 17 / Issue 3 / June 2006 Published online by Cambridge University Press: 03 July 2006, pp. 347-382 In this paper we investigate the model for the motion of a contact line over a smooth solid surface developed by Shikhmurzaev, [24]. We show that the formulation is incomplete as it stands, since the mathematical structure of the model indicates that an additional condition is required at the contact line. Recent work by Bedeaux, [4], provides this missing condition, and we examine the consequences of this for the relationship between the contact angle and contact line speed for Stokes flow, using asymptotic methods to investigate the case of small capillary number, and a boundary integral method to find the solution for general capillary number, which allows us to include the effect of viscous bending. We compare the theory with experimental data from a plunging tape experiment with water/glycerol mixtures of varying viscosities [11]. We find that we are able to obtain a reasonable fit using Shikhmurzaev's model, but that it remains unclear whether the linearized surface thermodynamics that underlies the theory provide an adequate description for the motion of a contact line. An asymptotic theory for the propagation of a surface-catalysed flame in a tube F. ADAMSON, J. BILLINGHAM, A. C. KING, K. KENDALL Journal: Journal of Fluid Mechanics / Volume 546 / 10 January 2006 Experiments have shown that when a mixture of fuel and oxygen is passed through a zirconia tube whose inner surface is coated with a catalyst, and then ignited at the end of the tube, a reaction front, or flame, propagates back along the tube towards the fuel inlet. The reaction front is visible as a red hot region moving at a speed of a few millimetres per second. In this paper we study a model of the flow, which takes into account diffusion, advection and chemical reaction at the inner surface of the tube. By assuming that the flame propagates at a constant speed without change of form, we can formulate a steady problem in a frame of reference moving with the reaction front. This is solved using the method of matched asymptotic expansions, assuming that the Reynolds and Damköhler numbers are large. 
We present numerical and, where possible, analytical results, first when the change in fluid density is small (a simplistic but informative limit) and secondly in the variable-density case. The speed of the travelling wave decreases as the critical temperature of the surface reaction increases and as the mass flow rate of fuel increases. We also make a comparison between our results and some preliminary experiments. Surface-tension-driven flow outside a slender wedge with an application to the inviscid coalescence of drops J. BILLINGHAM, A. C. KING Journal: Journal of Fluid Mechanics / Volume 533 / 25 June 2005 Print publication: 25 June 2005 We consider the two-dimensional inviscid flow that occurs when a fluid, initially at rest around a slender wedge-shaped void, is allowed to recoil under the action of surface tension. As noted by Keller & Miksis (1983), a similarity scaling is available, with lengths scaling like $t^{2/3}$. We find that an asymptotic balance is possible when the wedge semi-angle, $\alpha$, is small, in an inner region of $O(\alpha^{4/9})$, a distance of $O(\alpha^{-2/9})$ from the origin, which leads to a simpler boundary value problem at leading order. Although we are able to reformulate the inner problem in terms of a complex potential and reduce it to a single nonlinear integral equation, we are unable to find a solution numerically. This is because, as noted by Vanden-Broeck, Keller & Milewski (2000), and reproduced here numerically using a boundary integral method, the free surface is self-intersecting for $\alpha\,{<}\, \alpha_0\,{\approx}\,2.87^\circ$. Since disconnected solutions are only possible when there is a void inside the initial wedge, we consider the effect of an inviscid low-density fluid inside the wedge. In this case, a solution is available for a slightly smaller range of wedge semi-angles, since the flow of the interior fluid sucks the free surfaces together, with the exterior flow seeing pinch-off at a finite angle. We conclude that for $\alpha\,{<}\,\alpha_0$, we must introduce the effect of viscosity at small times in order to regularize the initial value problem. Since the solution for $t\,{\ll}\,1$ in the presence of viscosity is simply connected (Billingham 2005), the free surface must first pinch off at some finite time and then continue to do so at a sequence of later times. We investigate this using boundary integral solutions of the full inviscid initial value problem, with smooth initial conditions close to those of the original problem. In addition, we show that the inner asymptotic scalings that we developed for the steady problem can also be used in this time-dependent problem. The unsteady inner equations reduce to those for steady unidirectional flow outside a region of constant pressure, and can be solved numerically. We also show how the slender wedge solution can be related to the small-time behaviour of two coalescing drops, and describe the relationship between our solutions and those found by Duchemin, Eggers & Josserand (2003), for which a similar unsteady inner region exists. In each case, the free surface pinches off repeatedly, and no similarity solution exists. 3 - Bessel Functions A. C. King, University of Birmingham, J. Billingham, University of Birmingham, S. R. 
Otto, University of Birmingham Book: Differential Equations Print publication: 08 May 2003, pp 58-92 8 - Existence, Uniqueness, Continuity and Comparison of Solutions of Ordinary Differential Equations Print publication: 08 May 2003, pp 203-216 Appendix 6 - Complex Variables 10 - Group Theoretical Methods Appendix 7 - A Short Introduction to MATLAB 5 - Fourier Series and the Fourier Transform 2 - Legendre Functions 14 - Time-Optimal Control in the Phase Plane Many physical systems that are amenable to mathematical modelling do not exist in isolation from human intervention. A good example is the British economy, for which the Treasury has a complicated mathematical model. The state of the system (the economy) is given by values of the dependent variables (for example, unemployment, foreign exchange rates, growth, consumer spending and inflation), and the government attempts to control the state of the system to a target state (low inflation, high employment, high growth) by varying several control parameters (most notably taxes and government spending). There is also a cost associated with any particular action, which the government tries to minimize (some function of, for example, government borrowing and, one would hope, the environmental cost of any government action or inaction). The optimal control leads to the economy reaching the target state with the smallest possible cost. Another system, for which we have studied a simple mathematical model, consists of two populations of different species coexisting on an isolated island. For the case of two herbivorous species, which we studied in Chapter 9, we saw that one species or the other will eventually die out. If the island is under human management, this may well be undesirable, and we would like to know how to intervene to maintain the island close to a state of equilibrium, which we know, if left uncontrolled, is unstable. We could choose between either continually culling the more successful species, continually introducing animals of the less successful species or some combination of these two methods of control. Each of these actions has a cost associated with it.
CLRS Solutions
22.2 Breadth-first search

Show the $d$ and $\pi$ values that result from running breadth-first search on the directed graph of Figure 22.2(a), using vertex $3$ as the source.

$$ \begin{array}{c|cccccc} \text{vertex} & 1 & 2 & 3 & 4 & 5 & 6 \\ \hline d & \infty & 3 & 0 & 2 & 1 & 1 \\ \pi & \text{NIL} & 4 & \text{NIL} & 5 & 3 & 3 \end{array} $$

Show the $d$ and $\pi$ values that result from running breadth-first search on the undirected graph of Figure 22.3, using vertex $u$ as the source.

$$ \begin{array}{c|cccccccc} \text{vertex} & r & s & t & u & v & w & x & y \\ \hline d & 4 & 3 & 1 & 0 & 5 & 2 & 1 & 1 \\ \pi & s & w & u & \text{NIL} & r & t & u & u \end{array} $$

Show that using a single bit to store each vertex color suffices by arguing that the $\text{BFS}$ procedure would produce the same result if lines 5 and 14 were removed.

The textbook introduces the $\text{GRAY}$ color for the pedagogical purpose of distinguishing between the $\text{GRAY}$ nodes (which are enqueued) and the $\text{BLACK}$ nodes (which are dequeued). Therefore, it suffices to use a single bit to store each vertex color.

What is the running time of $\text{BFS}$ if we represent its input graph by an adjacency matrix and modify the algorithm to handle this form of input?

The time to iterate over all edges becomes $O(V^2)$ instead of $O(E)$. Therefore, the running time is $O(V + V^2) = O(V^2)$.

Argue that in a breadth-first search, the value $u.d$ assigned to a vertex $u$ is independent of the order in which the vertices appear in each adjacency list. Using Figure 22.3 as an example, show that the breadth-first tree computed by $\text{BFS}$ can depend on the ordering within adjacency lists.

First, we will show that the value $d$ assigned to a vertex is independent of the order in which entries appear in adjacency lists. To do this, we rely on Theorem 22.5, which proves the correctness of BFS. In particular, we have $v.d = \delta(s, v)$ at the end of the procedure. Since $\delta(s, v)$ is a property of the underlying graph, no matter which representation of the graph in terms of adjacency lists we choose, this value will not change. Since the $d$ values are equal to this quantity that doesn't change when we mess with the adjacency lists, they too don't change when we mess with the adjacency lists.

Now, to show that $\pi$ does depend on the ordering of the adjacency lists, we will use Figure 22.3 as a guide. First, we note that in the given worked-out procedure, $t$ precedes $x$ in the adjacency list for $w$. Also, in the worked-out procedure, we have $u.\pi = t$. Now, suppose instead that $x$ preceded $t$ in the adjacency list of $w$. Then $x$ would get added to the queue before $t$, which means that it would take $u$ as its child before we have a chance to process the children of $t$. This will mean that $u.\pi = x$ in this different ordering of the adjacency list for $w$.

Give an example of a directed graph $G = (V, E)$, a source vertex $s \in V$, and a set of tree edges $E_\pi \subseteq E$ such that for each vertex $v \in V$, the unique simple path in the graph $(V, E_\pi)$ from $s$ to $v$ is a shortest path in $G$, yet the set of edges $E_\pi$ cannot be produced by running $\text{BFS}$ on $G$, no matter how the vertices are ordered in each adjacency list.

Let $G$ be the graph shown in the first picture, $G_\pi = (V, E_\pi)$ be the graph shown in the second picture, and $s$ be the source vertex.
We can see that $E_\pi$ will never be produced by running BFS on $G$. If $y$ precedes $v$ in $Adj[s]$, we'll dequeue $y$ before $v$, so $u.\pi$ and $x.\pi$ are both $y$; however, this is not the case in $E_\pi$. If $v$ precedes $y$ in $Adj[s]$, we'll dequeue $v$ before $y$, so $u.\pi$ and $x.\pi$ are both $v$, which again isn't true. Nonetheless, the unique simple path in $G_\pi$ from $s$ to any vertex is a shortest path in $G$.

There are two types of professional wrestlers: "babyfaces" ("good guys") and "heels" ("bad guys"). Between any pair of professional wrestlers, there may or may not be a rivalry. Suppose we have $n$ professional wrestlers and we have a list of $r$ pairs of wrestlers for which there are rivalries. Give an $O(n + r)$-time algorithm that determines whether it is possible to designate some of the wrestlers as babyfaces and the remainder as heels such that each rivalry is between a babyface and a heel. If it is possible to perform such a designation, your algorithm should produce it.

This problem is basically just an obfuscated version of two-coloring. We will try to color the vertices of this graph of rivalries with two colors, "babyface" and "heel". Requiring that no two babyfaces and no two heels have a rivalry is the same as saying that the coloring is proper. To two-color, we perform a breadth-first search of each connected component to get the $d$ values for each vertex. Then we give all the vertices with odd $d$ values one color, say "heel", and all the vertices with even $d$ values the other color. We know that no other coloring will succeed where this one fails, since under any other coloring some vertex $v$ would have the same color as $v.\pi$, whereas $v$ and $v.\pi$ must receive different colors because their $d$ values have different parities. Since we know that there is no better coloring, we just need to check each edge to see if this coloring is valid. If every edge works, it is possible to find a designation; if a single edge fails, then it is not possible. Since the BFS took time $O(n + r)$ and the checking took time $O(r)$, the total runtime is $O(n + r)$.

The diameter of a tree $T = (V, E)$ is defined as $\max_{u,v \in V} \delta(u, v)$, that is, the largest of all shortest-path distances in the tree. Give an efficient algorithm to compute the diameter of a tree, and analyze the running time of your algorithm.

Suppose that $a$ and $b$ are the endpoints of the path in the tree which achieves the diameter, and without loss of generality assume that $a$ and $b$ are the unique pair which do so. Let $s$ be any vertex in $T$. We claim that the result of a single $\text{BFS}$ will return either $a$ or $b$ (or both) as the vertex whose distance from $s$ is greatest. To see this, suppose to the contrary that some other vertex $x$ is shown to be furthest from $s$. (Note that $x$ cannot be on the path from $a$ to $b$, otherwise we could extend it.) Then we have

$$d(s, a) < d(s, x)$$

$$d(s, b) < d(s, x).$$

Let $c$ denote the vertex on the path from $a$ to $b$ which minimizes $d(s, c)$. Since the graph is in fact a tree, we must have

$$d(s, a) = d(s, c) + d(c, a)$$

$$d(s, b) = d(s, c) + d(c, b).$$

(If there were another path, we could form a cycle.) Using the triangle inequality and the inequalities and equalities mentioned above, we must have

$$ \begin{aligned} d(a, b) + 2d(s, c) & = d(s, c) + d(c, b) + d(s, c) + d(c, a) \\ & < d(s, x) + d(s, c) + d(c, b). \end{aligned} $$

I claim that $d(x, b) = d(s, x) + d(s, b)$. If not, then by the triangle inequality we must have a strict less-than.
In other words, there is some path from $x$ to $b$ which does not go through $c$. This gives the contradiction, because it implies there is a cycle formed by concatenating these paths. Then we have

$$d(a, b) < d(a, b) + 2d(s, c) < d(x, b).$$

Since it is assumed that $d(a, b)$ is maximal among all pairs, we have a contradiction. Therefore, since trees have $|V| - 1$ edges, we can run $\text{BFS}$ a single time in $O(V)$ to obtain one of the vertices that is an endpoint of the longest simple path contained in the graph. Running $\text{BFS}$ again, this time from that vertex, will show us where the other endpoint is, so we can solve the diameter problem for trees in $O(V)$.

Let $G = (V, E)$ be a connected, undirected graph. Give an $O(V + E)$-time algorithm to compute a path in $G$ that traverses each edge in $E$ exactly once in each direction. Describe how you can find your way out of a maze if you are given a large supply of pennies.

First, the algorithm computes a minimum spanning tree of the graph. Note that this can be done using the procedures of Chapter 23. It can also be done by performing a breadth-first search and restricting to the edges between $v$ and $v.\pi$ for every $v$. To aid in not double-counting edges, fix any ordering $\le$ on the vertices beforehand. Then, we construct the sequence of steps by calling $\text{MAKE-PATH}(s)$, where $s$ is the root used for the $\text{BFS}$.

MAKE-PATH(u)
    for each v ∈ Adj[u] not in the tree such that u ≤ v
        go to v and back to u
    for each v ∈ Adj[u] in the tree but not equal to u.π
        go to v
        perform the path prescribed by MAKE-PATH(v)
    go to u.π
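As a concrete sketch of the walk above (my own illustration, not code from CLRS or from this solutions page), the following Python builds the BFS tree and then emits the vertex sequence produced by MAKE-PATH, visiting every edge once in each direction. The adjacency-dict representation, the function name full_edge_walk, and the small example graph are assumptions made for the illustration; vertex labels are assumed to be comparable so that the "u ≤ v" test makes sense.

    from collections import deque

    def full_edge_walk(adj, s):
        # Walk a connected undirected graph so that every edge is traversed
        # exactly once in each direction, following the BFS-tree scheme above.
        parent = {s: None}                 # parent[v] plays the role of v.pi
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in parent:
                    parent[v] = u
                    q.append(v)

        children = {u: [] for u in parent}
        for v, p in parent.items():
            if p is not None:
                children[p].append(v)

        walk = [s]

        def make_path(u):
            # Non-tree edges incident to u: bounce out to v and straight back.
            # Each such edge is handled once, from its smaller endpoint.
            for v in adj[u]:
                if parent[v] != u and parent[u] != v and u <= v:
                    walk.extend([v, u])
            # Tree edges: descend into each child, then return to the parent.
            for v in children[u]:
                walk.append(v)
                make_path(v)
            if parent[u] is not None:
                walk.append(parent[u])

        make_path(s)
        return walk

    # Tiny example (an assumed graph, for illustration only):
    g = {1: [2, 3], 2: [1, 3], 3: [1, 2, 4], 4: [3]}
    print(full_edge_walk(g, 1))   # -> [1, 2, 3, 2, 1, 3, 4, 3, 1]

In the example output, each of the four edges appears exactly twice, once in each direction, so the walk has 2|E| = 8 steps, consistent with the O(V + E) bound.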
Technical help question: Quantum Design magnet power supplies

I'd like to ask my readers that own Quantum Design PPMS or MPMS instruments for help regarding a technical glitch. My aging PPMS superconducting magnet power supply (the kind QD calls the H-plate version) has developed a problem. For high fields (say above 7 T) the power supply fails to properly put the magnet in persistent mode and throws up an error in the control software. After talking with QD, it seems like options are limited. They no longer service this model of power supply, and therefore one option would be to buy a new one. However, I have a sense that other people have dealt with this issue before, and I would feel dumb buying a new supply if the answer was that this is a known issue involving a $0.30 diode or something. Without a schematic it's difficult to do diagnostics ourselves. Has anyone out there seen this issue and know how to correct it?

Oxide interfaces for fun and profit

The so-called III-V semiconductors, compounds that combine a group III element (Al, Ga, In) and a group V element (N, As, P, Sb), are mainstays of (opto)electronic devices and condensed matter physics. They have never taken over for Si in logic and memory like some thought they might, for a number of materials science and economic reasons. (To paraphrase an old line, "GaAs is the material of the future [for logic] and always will be.") However, they are tremendously useful, in part because they are (now) fortuitously easy to grow - many of the compounds prefer the diamond-like "zinc blende" structure, and it is possible to prepare atomically sharp, flat, abrupt interfaces between materials with quite different semiconducting properties (very different band gaps and energetic alignments relative to each other). Fundamentally, though, the palette is limited - these materials are very conventional semiconductors, without exhibiting other potentially exciting properties or competing phases like ferroelectricity, magnetism, superconductivity, etc. Enter oxides. Various complex oxides can exhibit all of these properties, and that has led to a concerted effort to develop materials growth techniques to create high quality oxide thin films, with an eye toward creating the same kind of atomically sharp heterointerfaces as in III-Vs. A foundational paper is this one by Ohtomo and Hwang, where they used pulsed laser deposition to produce a heterojunction between LaAlO3, an insulating transparent oxide, and SrTiO3, another insulating transparent oxide (though one known to be almost a ferroelectric). Despite the fact that both of those parent constituents are band insulators, the interface between the two was found to play host to a two-dimensional gas of electrons with remarkable properties. The wikipedia article linked above is pretty good, so you should read it if you're interested. When you think about it, this is really remarkable. You take an insulator, and another insulator, and yet the interface between them acts like a metal. Where did the charge carriers come from? (It's complicated - charge transfer from LAO to STO, but the free surface of the LAO and its chemical termination is hugely important.) What is happening right at that interface? (It's complicated. There can be some lattice distortion from the growth process. There can be oxygen vacancies and other kinds of defects. Below about 105 K the STO substrate distorts "ferroelastically", further complicating matters.)
Do the charge carriers live more on one side of the interface than the other, as in III-V interfaces, where the (conduction) band offset between the two layers can act like a potential barrier, and the same charge transfer that spills electrons onto one side leads to a self-consistent electrostatic potential that holds the charge layer right against that interface? (Yes.) Even just looking at the LAO/STO system, there is a ton of exciting work being performed. Directly relevant to the meeting I just attended, Jeremy Levy's group at Pitt has been at the forefront of creating nanoscale electronic structures at the LAO/STO interface and examining their properties. It turns out (one of these fortunate things!) that you can use a conductive atomic force microscope tip to do (reversible) electrochemistry at the free LAO surface, and basically draw conductive structures with nm resolution at the buried LAO/STO interface right below. This is a very powerful technique, and it's enabled the study of the basic science of electronic transport at this interface at the nanoscale. Beyond LAO/STO, over the same period there has been great progress in complex oxide materials growth by groups at a number of universities and at national labs. I will refrain from trying to list them since I don't know them all and don't want to offend with the sin of inadvertent omission. It is now possible to prepare a dizzying array of material types (ferromagnetic insulators like GdTiO3; antiferromagnetic insulators like SmTiO3; Mott insulators like LaTiO3; nickelates; superconducting cuprates; etc.) and complicated multilayers and superlattices of these systems. It's far too early to say where this is all going, but historically the ability to grow new material systems of high quality with excellent precision tends to pay big dividends in the long term, even if they're not the benefits originally envisioned.

The Pittsburgh Quantum Institute: PQI2016 - Quantum Challenges

For the last 2.5 days I've been at the PQI2016: Quantum Challenges symposium. It's been a very fun meeting, bringing together talks spanning physical chemistry, 2d materials, semiconductor and oxide structures, magnetic systems, plasmonics, cold atoms, and quantum information. Since the talks are all going to end up streamable online from the PQI website, I'll highlight just a couple of things that I learned rather than trying to summarize everything.

- If you can make a material such that the dielectric permittivity \( \epsilon \equiv \kappa \epsilon_{0} \) is zero over some frequency range, you end up with a very odd situation. The phase velocity of EM waves at that frequency would go to infinity, and the in-medium wavelength at that frequency would therefore become infinite. Everything in that medium (at that frequency) would be in the near-field of everything else. See here for a paper about what this means for transmission of EM waves through such a region, and here for a review.
- Screening of charge and therefore carrier-carrier electrostatic interactions in 2d materials like transition metal dichalcogenides varies in a complicated way with distance. At short range, screening is pretty effective (logarithmic with distance, basically the result you'd get if you worried about the interaction potential from an infinitely long charged rod), and at longer distances the field lines leak out into empty space, so the potential falls like \(1/\epsilon_{0}r\). This has a big effect on the binding of electrons and holes into excitons in these materials.
- There are a bunch of people working on unconventional transistor designs, including devices based on band-to-band tunneling between band-offset 2d materials.
- In a discussion about growth and shapes of magnetic domains in a particular system, I learned about the Wulff construction, and this great paper by Conyers Herring on why crystals take the shapes that they do.
- After a public talk by Michel Devoret, I think I finally have some sense of the fundamental differences between the Yale group's approach to quantum computing and the John Martinis/Google group's approach. This deserves a longer post later.
- Oxide interfaces continue to show interesting and surprising properties - again, I hope to say more later.
- On a more science-outreach note, I learned about an app called Periscope (basically part of twitter) that allows people to do video broadcasting from their phones. Hat tip to Julia Majors (aka Feynwoman) who pointed this out to me and that it's becoming a platform for a lot of science education work.

I'll update this post later with links to the talks when those become available. Update: Here is the link to all the talk videos, which have been uploaded to youtube.

Sci-fi time, part 2: Really big lasers

I had a whole post written about laser weapons, and then the announcement came out about trying to build laser-launched interstellar probes, so I figured I should revise and talk about that as well. Now that the future is here, and space-faring rockets can land upright on autonomous ships, it's clearly time to look at other formerly science fiction technologies. Last August I wrote a post looking at whether laser pistols really make practical physics sense as weapons. The short answer: Not really, at least not with present power densities. What about laser cannons? The US military has been looking at bigger, high power lasers for things like anti-aircraft and ship defense applications. Given that Navy ships would not have to worry so much about portability and size, and that in principle nuclear-powered ships should have plenty of electrical generating capacity, do big lasers make more sense here? It's not entirely clear. Supposedly the operating costs of the laser systems are less than $1/shot, though that's also not a transparent analysis. Let's look first at the competition. The US Navy has been using the Phalanx gun system for ship defense, a high speed 20mm cannon that can spew out 75 rounds per second, each about 100 g and traveling at around 1100 m/s. That's an effective output power, in kinetic energy alone, of 4.5 MW (!). Even ignoring explosive munitions, each projectile carries 60 kJ of kinetic energy. The laser weapons being tested are typically 150 kW. To transfer the same amount of energy to the target as a single kinetic slug from the Phalanx would require keeping the beam focused on the target (assuming complete absorption) for about 0.4 sec, which is a pretty long time if the target is an inbound antiship missile traveling at supersonic speeds. Clearly, as with hand-held weapons, kinetic projectiles are pretty serious in terms of power and delivered energy on target, and beating that with lasers is not simple. The other big news story recently about big lasers was the announcement by Yuri Milner and Stephen Hawking of the Starshot project, an attempt to launch many extremely small and light probes toward Alpha Centauri using ground-based lasers for propulsion.
One striking feature of the plan is the idea of using a ground-based optical phased array laser system with about 100 GW of power (!) to boost the probes up to about 0.2 c in a few minutes. As far as I can tell, the reason for the very high power and quick boost is to avoid problems with pointing the lasers for long periods of time as the earth rotates and the probes become increasingly distant. Needless to say, pulling this off is an enormous technical challenge. That power would be about equivalent to 50 large city-serving powerplants. I really wonder if it would be easier to drop the power by a factor of 1000, increase the boost time by a factor of 1000, and use a 100 MW nuclear reactor in solar orbit (i.e. at the earth-sun L1 or L2 point) to avoid the earth rotation or earth orbital velocity constraint. That level of reactor power is comparable to what is used in naval ships, and I have a feeling like the pain of working out in space may be easier to overcome than the challenge of building a 100 GW laser array. Still, exciting times that anyone is even entertaining the idea of trying this.

"Joulies": the coffee equivalent of whiskey stones, done right

Once upon a time I wrote a post about whiskey stones, rocks that you cool down and then place into your drink to chill your Scotch without dilution, and why they are rather lousy at controlling your drink's temperature. The short version: Ice is so effective, per mass, at cooling your drink because its melting is a phase transition. Add heat to a mixture of ice and water, and the mixture sits there at zero degrees Celsius, sucking up energy (the "latent heat") as the solid ice is converted into liquid water. Conversely, a rock just gets warmer. Now look at Joulies, designed to keep your hot beverage of choice at about 60 degrees Celsius. Note: I've never used these, so I don't know how well-made they are, but the science behind them is right. They're stainless steel and contain a material that happens to have a melting phase transition right at 60 C and a pretty large latent heat - more on that below. If you put them into coffee that's hotter than this, the coffee will transfer heat to the Joulies until their interior warms up to the transition, and then the temperature of the coffee+Joulies will sit fixed at 60 C as the filling partially melts. Then, if you leave the coffee sitting there and it loses heat to the environment through evaporation, conduction, convection, and radiation, the Joulies will transfer heat back to the coffee as their interior solidifies, again doing their level best to keep the (Joulies+coffee) at 60 C as long as there is a liquid/solid mixture within the Joulies. This is how you regulate the temperature of your beverage. (Note that we can estimate the total latent heat of the filling of the Joulies - you'd want it to be enough that cooling 375 ml of coffee from 100 C to 60 C would not completely melt the filling. At 4.18 J/(g·°C) for the specific heat of water (close enough), the total latent heat of the Joulies filling should be more than 375 g \( \times \) 40 °C \( \times \) 4.18 J/(g·°C) = 62700 J.) Unsurprisingly, the same company offers a version filled with a different material, one that melts a bit below 0 C, for cooling your cold beverages. Basically they function like an ice cube, but with the melting liquid contained within a thin stainless steel shell so that it doesn't dilute your drink.
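For what it's worth, here is a tiny Python sketch of that back-of-the-envelope estimate. The 375 ml cup, the 40 °C temperature drop, and the 4.18 J/(g·°C) specific heat are the same assumed values used above; the roughly 334 J/g latent heat of melting ice is thrown in only for a sense of scale.

    coffee_volume_ml = 375.0          # assume a ~375 ml cup; water density ~1 g/ml
    coffee_mass_g = coffee_volume_ml
    specific_heat_water = 4.18        # J/(g*C)
    delta_T = 100.0 - 60.0            # cool from ~100 C down to the 60 C transition

    heat_to_absorb_J = coffee_mass_g * specific_heat_water * delta_T
    print(f"Heat the filling must be able to absorb: {heat_to_absorb_J:.0f} J")   # ~62700 J

    # For scale: melting ice absorbs roughly 334 J/g, so the same job would take
    # about this much ice (which would, of course, dilute the coffee):
    latent_heat_ice_J_per_g = 334.0
    print(f"Equivalent mass of melting ice: {heat_to_absorb_J / latent_heat_ice_J_per_g:.0f} g")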
Random undergrad anecdote: As a senior in college I was part of an undergrad senior design team in a class where the theme was satellites and spacecraft. We designed a probe to land on Venus, and a big part of our design was a temperature-regulating reservoir of a material with a big latent heat of melting and a melting point at something like 100 C, to keep the interior of the probe comparatively cool for as long as possible. Clearly we should've been early investors in Joulies.

Interacting Quantum Systems Out of Equilibrium - Workshop at Rice

The Rice Center for Quantum Materials will be hosting a workshop, "Interacting Quantum Systems Driven Out of Equilibrium", at Rice University in Houston on May 5-6, 2016. A central challenge of condensed matter and atomic physics today is understanding interacting, quantum many-body systems driven out of thermal equilibrium. Thanks to recent advances in both experimental and theoretical techniques, this is an exciting, active area that is seeing new emergent results. The Rice Center for Quantum Materials is hosting a workshop that will bring together the diverse community of researchers examining the various facets of the nonequilibrium quantum many-body problem. Experimental systems include: quantum materials driven by electronic bias beyond the linear regime; optical pump/probe methods to examine dynamic and steady-state nonequilibrium response; ultracold atoms in response to quench conditions and probed with far-from-equilibrium spectroscopy. Theoretical issues include: coherent many-body dynamics; many-body localization; Floquet states and dynamics in driving potentials; and thermalization/dissipation with driven quantum dynamics. For more details, including a speaker list and draft program, please see our website. Attendance by students/postdocs from traditionally underrepresented groups is encouraged.
An Electrical Properties Analysis of CMOS IC by Narrow-Band High-Power Electromagnetic Wave Park, Jin-Wook;Huh, Chang-Su;Seo, Chang-Su;Lee, Sung-Woo 535 The changes in the electrical characteristics of CMOS ICs due to coupling with a narrow-band electromagnetic wave were analyzed in this study. A magnetron (3 kW, 2.45 GHz) was used as the narrow-band electromagnetic source. The DUT was a CMOS logic IC and the gate output was in the ON state. The malfunction of the ICs was confirmed by monitoring the variation of the gate output voltage. It was observed that malfunction (self-reset) and destruction of the ICs occurred as the electric field increased. To confirm the variation of electrical characteristics of the ICs due to the narrow-band electromagnetic wave, the pin-to-pin resistances (Vcc-GND, Vcc-Input1, Input1-GND) and input capacitance of the ICs were measured. The pin-to-pin resistances and input capacitance of the ICs before exposure to the narrow-band electromagnetic waves were $8.57M{\Omega}$ (Vcc-GND), $14.14M{\Omega}$ (Vcc-Input1), $18.24M{\Omega}$ (Input1-GND), and 5 pF (input capacitance). The ICs exposed to narrow-band electromagnetic waves showed mostly similar values, but some error values were observed, such as $2.5{\Omega}$, $50M{\Omega}$, or 71 pF. This is attributed to the breakdown of the pn junction when latch-up in CMOS occurred. In order to confirm surface damage of the ICs, the epoxy molding compound was removed and then studied with an optical microscope. In general, there was severe deterioration in the PCB trace. It is considered that the current density of the trace increased due to the electromagnetic wave, resulting in the deterioration of the trace. The results of this study can be applied as basic data for the analysis of the effect of narrow-band high-power electromagnetic waves on ICs. Phase Evolution and Electrical Properties of PZT Films by Aerosol-Deposition Method Park, Chun-Kil;Kang, Dong-Kyun;Lee, Seung-Hee;Kong, Young-Min;Jeong, Dae-Yong 541 $Pb(Zr_{0.52}Ti_{0.48})O_3$ (PZT) films with a thickness of $5{\sim}10{\mu}m$ at the morphotropic phase boundary were fabricated by aerosol-deposition (AD), and their phase evolution and electrical properties were investigated. The microstructure of the AD PZT films revealed nanosized grains with a low crystallinity and a dense structure at room temperature. The AD PZT films showed a mixture of tetragonal and rhombohedral phases. The post-annealing temperature was varied to study the phase transition behavior. The crystallinity of the AD PZT films was enhanced by annealing at 450, 550, and $650^{\circ}C$ for 2 h. At $650^{\circ}C$, the tetragonal and rhombohedral phases reacted to form a bridge phase between the two phases. The polarization-electric field hysteresis loops of the AD PZT film annealed at $650^{\circ}C$ exhibited a smaller cohesive field and a lower slim hysteresis than the films annealed at 450 and $550^{\circ}C$. Ferroelastic Domain Wall Motions in Lead Zirconate Titanate Under Compressive Stress Observed by Piezoresponse Force Microscopy Kim, Kwanlae 546 Ferroelectric properties are governed by domain structures and domain wall motions, so it is of significance to understand domain evolution processes under mechanical stress. In the present study, in situ piezoresponse force microscopy (PFM) observation under compressive stress was carried out for a near-morphotropic PZT. 
Both $180^{\circ}$ and non-$180^{\circ}$ domain structures were observed in the PFM images, and their habit planes were identified using electron backscatter diffraction in conjunction with the PFM data. Under externally applied mechanical stress, the needle-like non-$180^{\circ}$ domain patterns broadened via domain wall motion. This was interpreted with a phenomenological approach: the total energy is minimized by domain wall motion rather than by domain nucleation, mainly because of the local gradient energy. Meanwhile, the curved $180^{\circ}$ domain walls showed no motion under the mechanical stress, confirming that $180^{\circ}$ domain walls are not directly influenced by mechanical stress.

Lifetime Assessments on 154 kV Transmission Porcelain Insulators with a Bayesian Approach
Choi, In-Hyuk; Kim, Tae-Kyun; Yoon, Yong-Beum; Yi, Junsin; Kim, Seong Wook (p. 551)
It is extremely important to improve methodologies for the lifetime assessment of porcelain insulators. While there is a considerable body of work on lifetime distributions, most studies assume that aging follows the Weibull distribution. However, the true underlying distribution is unknown, and a wrong assumption can give rise to unrealistic inferences, such as misleading parameter estimates. In this article, we review several distributions that are commonly used in reliability and survival analysis, such as the exponential, Weibull, log-normal, and gamma distributions. Some of their properties, including the characteristics of their failure rates, are presented. We use a Bayesian approach for the model selection and parameter estimation procedures. A well-known measure, the Bayes factor, is used to find the most plausible model among several contending models. Once the model with the highest posterior probability is selected, the posterior mean can be used as an estimate of the unknown parameters. Extensive simulation studies are performed to demonstrate our methodologies.
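As a rough illustration of the kind of model comparison described above (this is not the authors' code), the following sketch fits the four candidate lifetime distributions by maximum likelihood with SciPy and approximates the Bayes factor through the standard BIC approximation; the failure-time data are synthetic placeholders.

```python
# Illustrative sketch: compare candidate lifetime distributions for insulator
# failure times by maximum likelihood and approximate Bayes factors with the
# BIC approximation BF ~ exp(-(BIC_i - BIC_j)/2).  Data are made up.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
lifetimes = rng.weibull(2.2, size=200) * 35.0   # hypothetical failure times (years)

candidates = {
    "exponential": (stats.expon,       {"floc": 0}, 1),
    "weibull":     (stats.weibull_min, {"floc": 0}, 2),
    "log-normal":  (stats.lognorm,     {"floc": 0}, 2),
    "gamma":       (stats.gamma,       {"floc": 0}, 2),
}

n = len(lifetimes)
bic = {}
for name, (dist, fit_kwargs, k) in candidates.items():
    params = dist.fit(lifetimes, **fit_kwargs)        # maximum-likelihood fit
    loglik = np.sum(dist.logpdf(lifetimes, *params))
    bic[name] = k * np.log(n) - 2.0 * loglik          # lower BIC = more plausible

best = min(bic, key=bic.get)
for name, value in sorted(bic.items(), key=lambda kv: kv[1]):
    bf = np.exp(-(bic[best] - value) / 2.0)           # BF of best model vs this one
    print(f"{name:12s}  BIC = {value:8.2f}   BF(best vs this) ~ {bf:8.2f}")
```

A full Bayesian analysis would place priors on the parameters and compute marginal likelihoods and posterior means directly, for example by MCMC; the BIC surrogate is only a convenient first pass.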
Effect of Porcelain/Polymer Interface on the Microstructure, Insulation Characteristics and Electrical Field Distribution of Hybrid Insulators
Cho, Jun-Young; Kim, Woo-Seok; An, Ho-Sung; An, Hee-Sung; Kim, Tae-wan; Lim, Yun-Seog; Bae, Sung-Hwan; Park, Chan (p. 558)
Hybrid insulators, which combine the advantages of porcelain (high mechanical strength and chemical stability) and polymer (light weight and high resistance to pollution) insulators, can be used in place of the individual porcelain and polymer insulators that provide both mechanical support and electrical insulation for overhead power transmission lines. The most significant feature of hybrid insulators is the presence of porcelain/polymer interfaces where the porcelain and polymer are physically bonded; individual porcelain and polymer insulators have no such interfaces. Although the interface is expected to affect the mechanical and electrical properties of the hybrid insulator, systematic studies of the adhesion properties at the porcelain/polymer interface and of the effect of the interface on the insulation characteristics and electric field distribution of the hybrid insulator have not been reported. In this study, we fabricated small hybrid insulator specimens with various types of interfaces and investigated the effect of the porcelain/polymer interface on the microstructure, insulating characteristics, and electric field distribution of the hybrid insulators. It was observed that the porcelain/polymer interface does not have a significant effect on the insulating characteristics and electric field distribution, and that the hybrid insulator can exhibit electrical insulating properties similar to or better than those of the individual porcelain and polymer insulators.

Improvement in Electrical Characteristics of Solution-Processed In-Zn-O Thin-Film Transistors Using a Soft Baking Process
Kim, Han-Sang; Kim, Sung-Jin (p. 566)
A soft baking process was used to enhance the electrical characteristics of solution-processed indium-zinc-oxide (IZO) thin-film transistors (TFTs). We demonstrate a stable soft baking process using a hot plate in air to maintain the electrical stability and improve the electrical performance of IZO TFTs. These oxide transistors exhibited good electrical performance: a field-effect mobility of $7.9cm^2/Vs$, a threshold voltage of 1.4 V, a sub-threshold slope of 0.5 V/dec, and a current on/off ratio of $2.9{\times}10^7$ were measured. To investigate the static response of our solution-processed IZO TFTs, simple resistor-load inverters were fabricated by connecting a load resistor (5 or $10M{\Omega}$). Our IZO TFTs, which were manufactured using the soft baking process at a baking temperature of $120^{\circ}C$, performed well at the operating voltage and are therefore good candidates for use in advanced logic circuits and transparent display backplanes.
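As a hedged aside on how figures of merit like these are commonly extracted (this is not the authors' analysis), the sketch below pulls the saturation-regime mobility, threshold voltage, subthreshold slope, and on/off ratio out of a transfer curve. The channel geometry (W, L), gate-dielectric capacitance Cox, and the synthetic transfer curve are assumed placeholder values, not numbers from the paper.

```python
# Illustrative extraction of TFT figures of merit from a saturation-regime
# transfer curve Id(Vg).  W, L, Cox and the data below are assumptions.
import numpy as np

def tft_figures_of_merit(vg, id_sat, W=1000e-6, L=50e-6, Cox=3.45e-4):
    """vg in V, id_sat in A; W, L in m; Cox in F/m^2 (assumed values)."""
    # Saturation regime: Id = (W/2L)*mu*Cox*(Vg - Vth)^2, so sqrt(Id) is linear in Vg.
    sqrt_id = np.sqrt(id_sat)
    on = vg >= np.median(vg)                        # fit the strongly-on part
    slope, intercept = np.polyfit(vg[on], sqrt_id[on], 1)
    vth = -intercept / slope                        # threshold voltage (x-intercept)
    mu = 2.0 * L * slope**2 / (W * Cox)             # field-effect mobility (m^2/Vs)
    # Subthreshold slope: minimum of dVg / d(log10 Id)
    ss = np.min(np.gradient(vg) / np.gradient(np.log10(id_sat)))
    on_off = id_sat.max() / id_sat.min()
    return mu * 1e4, vth, ss, on_off                # mobility converted to cm^2/Vs

# synthetic, smooth transfer curve just to exercise the function
vg = np.linspace(-5, 20, 200)
x = vg - 1.4
id_model = 1e-12 + 2e-6 * (0.5 * (x + np.sqrt(x**2 + 0.01)))**2
mu, vth, ss, on_off = tft_figures_of_merit(vg, id_model)
print(f"mu = {mu:.1f} cm^2/Vs, Vth = {vth:.2f} V, SS = {ss:.2f} V/dec, on/off = {on_off:.1e}")
```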
Fabrication of Graphene/Silver Nanowire Hybrid Electrodes via Transfer Printing of Graphene
Ha, Bonhee; Jo, Sungjin (p. 572)
A hybrid transparent electrode was fabricated with graphene and silver nanowires (Ag NWs). Three different processes were used to fabricate the hybrid electrode. Measurements of the sheet resistance, transmittance, and surface roughness of the hybrid electrodes were used to identify the optimal fabrication process. The surface roughness of the hybrid electrodes with Ag NWs embedded in a transparent polymer matrix was significantly lower than that of the other hybrid electrodes. A hybrid electrode fabricated by transferring graphene onto Ag NWs after spin-coating the Ag NWs onto the substrate showed the lowest sheet resistance. The transmittance of the hybrid electrodes was comparable to that of Ag NW electrodes.

Synthesis and Characterization of Red Organic Fluorescent of Perylene Bisimide Derivatives
Lee, Seung Min; Jeong, Yeon Tae (p. 577)
The white light of a hybrid LED is obtained by using red and green organic fluorescent layers made of polymethylmethacrylate (PMMA) films, which function as color down-conversion layers on blue light-emitting diodes. In this research, we studied the fluorescence properties of red organic fluorophores, employing perylene bisimide derivatives applicable to hybrid LEDs. Solubility, thermal stability, and luminous efficiency are important characteristics of organic fluorophores for use in hybrid LEDs. The perylene fluorescent compounds (1A and 1B) were prepared by the reaction of 4-bromophenol and 4-iodophenol with N,N'-bis(4-bromo-2,6-diisopropylphenyl)-1,6,7,12-tetrachloroperylene-3,4,9,10-tetracarboxylic diimide (1) in the presence of dimethylformamide (DMF) at $70^{\circ}C$. The synthesized derivatives were characterized by $^1H-NMR$, FT-IR, UV/Vis absorption and PL spectra, and TGA analysis. Compounds 1A and 1B showed absorption and emission maxima at 570 nm and 604 nm, respectively, in the UV/Vis and PL spectra. We also documented favorable solubility and thermal stability characteristics of the perylene fluorophores. The fluorophore with the 4-bromophenol substituent (1A) exhibited particularly good thermal stability and solubility in organic solvents.

Effect of Cathode Materials (MS2, M=Fe, Ni, Co) on Electrochemical Properties of Thermal Batteries
Lee, Jungmin; Im, Chae-Nam; Yoon, Hyun-Ki; Cheong, Hae-Won (p. 583)
Thermal batteries are used as military power sources that require robustness and a long storage life for applications in missiles and torpedoes. $FeS_2$ powder is currently used as the cathode material because of its high specific energy density, environmental non-toxicity, and low cost. $MS_2$ (M = Fe, Ni, Co) cathodes have been explored as novel candidates for thermal batteries in many studies; however, the discharge characteristics (first, second, and third plateaus) of single cells in thermal batteries with the different cathodes have not been elucidated in detail. In this study, we independently analyzed the discharge voltage and calculated the total polarization of single cells with each $MS_2$ cathode. Based on these results, we propose $NiS_2$ as a potential cathode material for thermal batteries.

Heat Energy Diffusion Analysis in the Gas Sensor Body with the Variation of Drain-Source Electrode Distance
Jang, Kyung-Uk (p. 589)
MOSFET-structured gas sensors were manufactured using MWCNTs for application as NOx gas sensors. Because the gas sensors need to be heated to facilitate desorption of the gas molecules, heat dispersion plays a key role in making the molecular desorption uniform. We report the desorption of gas molecules from the sensor at $150^{\circ}C$ for different sensor electrode gaps (30, 60, and $90{\mu}m$). The COMSOL analysis program was used to verify the heat dispersion process. For the thermal analysis, the FET gas sensor structure was modeled in two dimensions using the material property values. To determine the degree of heat dispersion by FEM, the governing equations were expressed as partial differential equations. The thermal analysis revealed that, although a large electrode gap is advantageous for effective gas adsorption, consideration of the heat dispersion gradient indicates that the optimal electrode gap for the sensor is $60{\mu}m$.
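The abstract above relies on a COMSOL finite-element model; as a much cruder, hedged stand-in, the sketch below integrates the 2D heat equation with an explicit finite-difference scheme to show how a heated region between the electrodes spreads heat across the sensor surface. The grid, thermal diffusivity, heater location, and boundary conditions are assumptions for illustration only.

```python
# Rough finite-difference illustration (not the authors' COMSOL model) of
# dT/dt = alpha * (d2T/dx2 + d2T/dy2) with a patch held at 150 C and
# fixed-temperature edges.  All parameters are assumed placeholders.
import numpy as np

nx, ny = 120, 60            # grid points
dx = 1e-6                   # 1 um grid spacing (assumed)
alpha = 1e-7                # thermal diffusivity, m^2/s (assumed)
dt = 0.2 * dx**2 / alpha    # time step below the explicit stability limit dx^2/(4*alpha)

T = np.full((ny, nx), 25.0)              # ambient 25 C
heater = (slice(25, 35), slice(55, 65))  # heated patch near the electrode gap

for _ in range(5000):
    lap = (np.roll(T, 1, 0) + np.roll(T, -1, 0) +
           np.roll(T, 1, 1) + np.roll(T, -1, 1) - 4.0 * T) / dx**2
    T = T + alpha * dt * lap
    T[heater] = 150.0                    # heater held at the operating temperature
    T[0, :] = T[-1, :] = T[:, 0] = T[:, -1] = 25.0   # fixed-temperature edges

# temperature along a line crossing the heated region
profile = T[30, :]
print("max T = %.1f C, T about 30 um from the heater edge = %.1f C"
      % (profile.max(), profile[95]))
```

Varying the distance between the heated region and the sensing electrodes in such a model gives a qualitative feel for the trade-off the abstract describes between a large gap (better adsorption area) and a uniform temperature across the gap.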
Development of Moving and Attaching Diagnosis Device Using IoT
Ka, Chool-Hyun; Lee, Dong-Gyu; Kim, Jin-Sa (p. 596)
The advancement and diversification of urban functions have increased the need to improve the reliability of power supplies. In such diversified urban areas, a power outage paralyzes urban functions and causes social disruption, and a large outage can occur in the event of an accident because distribution lines are installed underground. Therefore, in recent years, for the sake of the environment and safety, the safety diagnosis of electric power facilities has become an important issue. Because thermal information changes rapidly when heat concentrates owing to an abnormal load or deterioration, this study develops a device that can monitor facilities unattended and notify the manager at any time.

Development of Multiple Wireless Communication Controller for Smart Factory Construction
Oh, Jae-Jun; Choi, Seong-Ju; Kim, Jin-Sa (p. 602)
With the recent advent of Industry 4.0, manufacturing has changed considerably. In particular, flexible production of small quantities of many different items requires controlling the controllers and devices of the control system, communicating various production and measurement information, building databases, and managing the condition trends of major parts of the production facilities. In this paper, we developed a multiple wireless communication controller for small-scale smart factory control systems using XBee modules and a microcomputer. The controller is inexpensive, makes it easy to build a 1:N multi-radio communication environment, and can control and monitor the control system. In addition, we tested the multiple wireless communication controllers with a signal processing device and C++ software, constructed the network, control, and database functions for the mechanism module, and confirmed the controller's effectiveness for industrial application.
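As a loose illustration of the data-collection side of such a 1:N wireless controller (not the authors' implementation), the sketch below polls an XBee coordinator over a serial port with pyserial and logs readings to a local SQLite database. The port name, baud rate, and the simple "node_id,value" payload format are assumptions; a real system would parse proper XBee API frames, for example with the digi-xbee library.

```python
# Minimal sketch: read line-oriented payloads from a serial-attached XBee
# coordinator and log them to SQLite.  Port, baud rate, and message format
# are assumed for illustration.
import sqlite3
import time

import serial  # pyserial

PORT = "/dev/ttyUSB0"        # assumed serial port of the XBee coordinator
BAUD = 9600

db = sqlite3.connect("factory_log.db")
db.execute("CREATE TABLE IF NOT EXISTS readings (ts REAL, node TEXT, value REAL)")

with serial.Serial(PORT, BAUD, timeout=1.0) as xbee:
    while True:
        line = xbee.readline().decode("ascii", errors="ignore").strip()
        if not line:
            continue                        # read timed out with no data
        try:
            node, value = line.split(",")   # assumed "node_id,value" payload
            db.execute("INSERT INTO readings VALUES (?, ?, ?)",
                       (time.time(), node, float(value)))
            db.commit()
        except ValueError:
            pass                            # ignore malformed frames
```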